Latent Graphical Model Selection: Efficient Methods for Locally Tree-like Graphs

Ragupathyraj Valluvan, UC Irvine, [email protected]
Animashree Anandkumar, UC Irvine, [email protected]

Abstract

Graphical model selection refers to the problem of estimating the unknown graph structure given observations at the nodes in the model. We consider a challenging instance of this problem when some of the nodes are latent or hidden. We characterize conditions for tractable graph estimation and develop efficient methods with provable guarantees. We consider the class of Ising models Markov on locally tree-like graphs, which are in the regime of correlation decay. We propose an efficient method for graph estimation, and establish its structural consistency when the number of samples n scales as n = Ω(θ_min^(−2δ(η+1)−2) log p), where θ_min is the minimum edge potential, δ is the depth (i.e., distance from a hidden node to the nearest observed nodes), and η is a parameter which depends on the minimum and maximum node and edge potentials in the Ising model. The proposed method is practical to implement and provides flexibility to control the number of latent variables and the cycle lengths in the output graph. We also present necessary conditions for graph estimation by any method and show that our method nearly matches the lower bound on sample requirements.

Keywords: Graphical model selection, latent variables, quartet methods, locally tree-like graphs.

1 Introduction

It is widely recognized that the process of fitting observed data to a statistical model needs to incorporate latent or hidden factors, which are not directly observed. Learning latent variable models involves two main tasks: discovering structural relationships among the observed and hidden variables, and estimating the strength of such relationships. One of the simplest models is the latent class model (LCM), which incorporates a single hidden variable, with the observed variables conditionally independent given the hidden variable. Latent tree models extend this model class to incorporate many hidden variables in a hierarchical fashion. Latent trees have been effective in modeling data in a variety of domains, such as phylogenetics [1]. They are also computationally tractable: once the latent tree model is learned, inference can be carried out efficiently through belief propagation. There has been extensive work on learning latent trees, and some of the recent works, e.g. [2–4], demonstrate efficient learning in high dimensions. However, despite these advantages, the assumption of an underlying tree structure may be too restrictive. For instance, consider the example of topic-word models, where topics (which are hidden) are discovered using information about word co-occurrences. In this case, a latent tree model does not accurately represent the hierarchy of topics and words, since many words are common across different topics. Here, we relax the latent tree assumption to incorporate cycles in the latent graphical model while retaining many advantages of latent tree models, including tractable learning and inference. Relaxing the tree constraint leads to many challenges: in general, learning these models is NP-hard, even when there are no latent variables, and developing tractable methods for such models is itself an area of active research, e.g. [5–7]. We consider structure estimation in latent graphical models Markov on locally tree-like graphs.
These extensions of latent tree models are relevant in many settings: for instance, when there is a small overlap among different hierarchies of variables, the resulting graph has mostly long cycles. There are many questions to be addressed: are there parameter regimes where these models can be learnt consistently and efficiently? If so, are there practical learning algorithms? Are learning guarantees for loopy models comparable to those for latent trees? How does learning depend on various graph attributes such as node degrees, girth of the graph, and so on?

Our Approach: We consider learning Ising models with latent variables Markov on locally tree-like graphs. We assume that the model parameters are in the regime of correlation decay. In this regime, there are no long-range correlations, and the local statistics converge to a tree limit. Hence, we can employ the available latent tree methods to learn "local" subgraphs consistently, as long as they do not contain any cycles. However, merging these estimated local subgraphs (i.e., latent trees) remains a non-trivial challenge: it is not clear whether an efficient approach is possible for matching latent nodes during this process. We therefore employ a different philosophy for building locally tree-like graphs with latent variables. We decouple the process of introducing cycles and latent variables in the output model. We initialize a loopy graph consisting of only the observed variables, and then iteratively add latent variables to local neighborhoods of the graph. We establish correctness of our method under a set of natural conditions. We establish that our method is structurally consistent when the number of samples n scales as n = Ω(θ_min^(−2δ(η+1)−2) log p), where p is the number of observed variables, θ_min is the minimum edge potential, δ is the depth (i.e., graph distance from a hidden node to the nearest observed nodes), and η is a parameter which depends on the minimum and maximum node and edge potentials of the Ising model (η = 1 for homogeneous models). The sample requirement for our method is comparable to the requirement for many popular latent tree methods, e.g. [2–4]. Moreover, when there are no hidden variables (ρ = 1), the sample complexity of our method is strengthened to n = Ω(θ_min^(−2) log p), which matches the sample complexity of existing algorithms for learning fully observed Ising models [5–7]. Thus, we present an efficient method which bridges structure estimation in latent trees with estimation in fully observed loopy graphical models. Finally, we present necessary conditions for graph estimation by any method and show that our method nearly matches the lower bound. Our method has a number of attractive features: it is amenable to parallelization, making it efficient on large datasets; it provides flexibility to control the length of cycles and the number of latent variables in the output model; and it can incorporate penalty scores such as the Bayesian information criterion (BIC) [8] to trade off model complexity and fidelity. Preliminary experiments on the newsgroup dataset suggest that the method can discover intuitive relationships efficiently, and that it compares well with the popular latent Dirichlet allocation (LDA) [9] in terms of topic coherence and perplexity.

Related Work: Learning latent trees has been studied extensively, mainly in the context of phylogenetics. Efficient algorithms with provable guarantees are available (e.g. [2–4]).
Our proposed method for learning loopy models is inspired by the efficient latent tree learning algorithm of [4]. Works on high-dimensional graphical model selection are more recent. They can be mainly classified into two groups: non-convex local approaches [5, 6, 10] and those based on convex optimization [7, 11, 12]. There is general agreement that the success of these methods is related to the presence of correlation decay in the model [13]. This work makes the connection explicit: it relates the extent of correlation decay to the learning efficiency for latent models on large-girth graphs. An analogous study of the effect of correlation decay for learning fully observed models is presented in [5]. This paper is the first work to provide provable guarantees for learning discrete graphical models on loopy graphs with latent variables (the results can also be easily extended to Gaussian models; see the remark following Theorem 1). The work in [12] considers learning latent Gaussian graphical models using a convex relaxation method, by exploiting a sparse-plus-low-rank decomposition of the Gaussian precision matrix. However, that method cannot be easily extended to discrete models. Moreover, the "incoherence" conditions required for the success of convex methods are hard to interpret and verify in general. In contrast, our conditions for success are transparent and based on the presence of correlation decay in the model.

2 System Model

Ising Models: A graphical model is a family of multivariate distributions Markov with respect to a fixed undirected graph [14]. Each node i ∈ W in the graph is associated with a random variable X_i taking values in a set 𝒳. The set of edges E captures the set of conditional independence relations among the random variables. We say that a set of random variables X_W := {X_i, i ∈ W} with probability mass function (pmf) P is Markov on the graph G if

P(x_i | x_N(i)) = P(x_i | x_{W∖i})   (1)

holds for all nodes i ∈ W, where N(i) denotes the neighbors of node i in graph G. The Hammersley–Clifford theorem [14] states that under the positivity condition, given by P(x_W) > 0 for all x_W ∈ 𝒳^|W|, a distribution P satisfies the Markov property according to a graph G if and only if it factorizes according to the cliques of G. A special case of graphical models is the class of Ising models, where each node carries a binary variable over {−1, +1} and there are only pairwise interactions in the model. In this case, the joint distribution factorizes as

P(x_W) = exp( Σ_{(i,j)∈E} θ_{i,j} x_i x_j + Σ_{i∈W} φ_i x_i − A(θ) ),   (2)

where θ := {θ_{i,j}} are the edge potentials, φ := {φ_i} are the node potentials, and A(θ) is the log-partition function, which serves to normalize the probability distribution. We consider latent graphical models in which a subset of nodes is latent or hidden. Let H ⊂ W denote the hidden nodes and V ⊂ W denote the observed nodes. Our goal is to discover the presence of hidden variables X_H and learn the unknown graph structure G(W), given n i.i.d. samples from the observed variables X_V. Let p := |V| denote the number of observed nodes and m := |W| denote the total number of nodes.
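As a minimal illustration of (2), the following sketch (all names are illustrative) evaluates the unnormalised log-density of an Ising model; computing A(θ) exactly is intractable in general and is omitted here:

```python
import numpy as np

# Minimal sketch: unnormalised log-density of the Ising model in (2).
# `edges` lists (i, j) pairs, `theta` maps edges to edge potentials
# theta_ij, and `phi` holds node potentials.
def ising_log_density_unnorm(x, edges, theta, phi):
    """x is a {-1, +1} vector; returns the log of (2) up to -A(theta)."""
    pairwise = sum(theta[(i, j)] * x[i] * x[j] for (i, j) in edges)
    unary = float(np.dot(phi, x))
    return pairwise + unary

# Example: a 3-cycle with homogeneous potentials.
edges = [(0, 1), (1, 2), (0, 2)]
theta = {e: 0.2 for e in edges}
phi = np.zeros(3)
print(ising_log_density_unnorm(np.array([1, -1, 1]), edges, theta, phi))
```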
Tractable Models for Learning: In general, structure estimation of graphical models is NP-hard. We now characterize a tractable class of models for which we can provide guarantees on graph estimation.

Girth-Constrained Graph Families: We consider the family of graphs with a bound on the girth, which is the length of the shortest cycle in the graph. Let G_Girth(m; g) denote the ensemble of graphs on m nodes with girth at least g. There are many graph constructions which lead to such a bound on the girth; for example, the bipartite Ramanujan graph [15] and random Cayley graphs [16] have large girth. The theoretical guarantees for our learning algorithm depend on the girth of the graph; however, our experiments reveal that our method is able to construct models with short cycles as well.

Regime of Correlation Decay: This work establishes tractable learning when the graphical model converges locally to a tree limit. A sufficient condition for the existence of such limits is the regime of correlation decay, which refers to the property that there are no long-range correlations in the model [5]. In this regime, the marginal distribution at a node is asymptotically independent of the configuration of a growing boundary. For the class of Ising models in (2), the regime of correlation decay can be explicitly characterized in terms of the maximum edge potential θ_max of the model and the maximum node degree Δ_max. Define α := Δ_max tanh θ_max. When α < 1, the model is in the regime of correlation decay, and we provide learning guarantees in this regime.

3 Method, Guarantees and Necessary Conditions

Background on Learning Latent Trees: Most latent tree learning methods are distance based, meaning they rely on the existence of an additive tree metric between any two nodes in the tree model. For an Ising model (and more generally, any discrete model), the "information" distance between any two nodes i and j in a tree T is defined as

d(i, j; T) := −log |det(P_{i,j})|,   (3)

where P_{i,j} denotes the joint probability distribution between nodes i and j. On a tree model T, it can be established that {d(i, j)} is additive along any path in T. Learning latent trees can thus be reformulated as learning the tree structure T given the end-to-end (estimated) distances d̂ := {d̂(i, j) : i, j ∈ V} between the observed nodes V. Various methods with performance guarantees have been proposed, e.g. [2–4]. They are usually based on local tests such as quartet tests, involving groups of four nodes. In [4], the so-called CLGrouping method is proposed, which organically grows the tree structure by adding latent nodes to local neighborhoods. In the initial step, the method constructs the minimum spanning tree MST(V; d) over the observed nodes V using the distances d. The method then iteratively visits local neighborhoods of MST(V; d) and adds latent nodes by conducting local distance tests. Since a tree structure is maintained in every iteration of the algorithm, we can parsimoniously add hidden variables by selecting neighborhoods which locally maximize scores such as the Bayesian information criterion (BIC) [8]. This method also allows for fast implementation by parallelizing latent tree reconstruction across different neighborhoods; see [17] for details.

Proposed Algorithm: We now propose a method for learning loopy latent graphical models. As in the case of latent tree methods, our method is based on estimated information distances

d̂ⁿ(i, j; G) := −log |det(P̂ⁿ_{i,j})|, ∀ i, j ∈ V,   (4)

where P̂ⁿ_{i,j} denotes the empirical probability distribution at nodes i and j computed using n i.i.d. samples; a sketch of this estimator follows. The presence of correlation decay in the Ising model implies that d̂ⁿ(i, j; G) is approximately a tree metric when nodes i and j are "close" on graph G (compared to the girth g of the graph).
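A sketch of the empirical information distance in (4) for ±1-valued samples, together with a numerical check of the correlation-decay condition α = Δ_max tanh θ_max < 1 (the numbers are illustrative):

```python
import numpy as np

# Sketch of (4): for +/-1 samples X of shape (n, p), estimate the 2x2 joint
# pmf of the pair (i, j) and take -log|det(.)| as the information distance.
def information_distance(X, i, j):
    P = np.zeros((2, 2))
    for a, xa in enumerate((-1, 1)):
        for b, xb in enumerate((-1, 1)):
            P[a, b] = np.mean((X[:, i] == xa) & (X[:, j] == xb))
    return -np.log(abs(np.linalg.det(P)))

# Correlation-decay check alpha = Delta_max * tanh(theta_max) < 1:
Delta_max, theta_max = 4, 0.2
alpha = Delta_max * np.tanh(theta_max)
print(alpha, alpha < 1)  # ~0.79, True: this model is in the decay regime
```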
Thus, intuitively, local neighborhoods of G can be constructed through latent tree methods. However, the challenge is in merging these local estimates together to get a global estimate Ĝ: the presence of latent nodes in the local estimates makes merging challenging. Moreover, such a merging-based method cannot easily incorporate global penalties for the number of latent variables added to the output model, which is relevant for obtaining parsimonious representations on real datasets. We overcome the above challenges as follows: our proposed method decouples the process of adding cycles and latent nodes to the output model. It initializes a loopy graph Ĝ_0 and then iteratively adds latent variables to local neighborhoods. Given a parameter r > 0, for every node i ∈ V, consider the set of nodes B_r(i; d̂ⁿ) := {j : d̂ⁿ(i, j) < r}. The initial graph estimate Ĝ_0 is obtained by taking the union of local minimum spanning trees:

Ĝ_0 ← ∪_{i∈V} MST(B_r(i; d̂ⁿ); d̂ⁿ).   (5)

The method then adds latent variables by considering only local neighborhoods in Ĝ_0 and running a latent tree reconstruction routine. By visiting all the neighborhoods, a graph estimate Ĝ is obtained; a sketch of the initialisation step appears below, and implementation details about the algorithm are available in [17]. We subsequently establish the correctness of the proposed method under a set of natural conditions. We require that the parameter r, which determines the set B_r(i; d̂) for each node i, be chosen as a function of the depth δ (i.e., the distance from a hidden node to its closest observed nodes) and the girth g of the graph. In practice, the parameter r provides flexibility in tuning the length of cycles added to the graph estimate. When r is large enough, we obtain a latent tree, while for small r, the graph estimate can contain many short cycles (and potentially many components). In the experiments, we evaluate the performance of our method for different values of r. For more details, see Section 4.
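A sketch of the initialisation step (5), assuming a matrix D of estimated distances d̂ⁿ and using SciPy's minimum spanning tree on each r-ball (names are illustrative):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Sketch of (5): for each node, build a local MST over its r-ball under the
# estimated distances D (p x p, symmetric, zero diagonal), then take the
# union of all local MST edges as the initial graph estimate G_hat_0.
def initial_graph(D, r):
    p = D.shape[0]
    edges = set()
    for i in range(p):
        ball = np.where(D[i] < r)[0]               # B_r(i; d_hat)
        sub = D[np.ix_(ball, ball)]
        mst = minimum_spanning_tree(sub).tocoo()   # local MST on the ball
        for a, b in zip(mst.row, mst.col):
            u, v = int(ball[a]), int(ball[b])
            edges.add((min(u, v), max(u, v)))
    return edges  # edge set of G_hat_0
```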
3.1 Conditions for Recovery

We present a set of natural conditions on the graph structure and model parameters under which our proposed method succeeds in structure estimation.

(A1) Minimum Degree of Latent Nodes: We require that all latent nodes have degree at least three, which is a natural assumption for identifiability of hidden variables. Otherwise, the latent nodes can be marginalized to obtain an equivalent representation of the observed statistics.

(A2) Bounded Potentials: The edge potentials θ := {θ_{i,j}} of the Ising model are bounded, with

θ_min ≤ |θ_{i,j}| ≤ θ_max, ∀ (i, j) ∈ G.   (6)

We similarly assume bounded node potentials.

(A3) Correlation Decay: As described in Section 2, we assume correlation decay in the Ising model. We require

α := Δ_max tanh θ_max < 1,  α^(g/2) θ_min^(−δ(η+1)−2) = o(1),   (7)

where Δ_max is the maximum node degree, g is the girth, and θ_min, θ_max are the minimum and maximum (absolute) edge potentials in the model.

(A4) Distance Bounds: We now define certain quantities which depend on the edge potential bounds. Given an Ising model P with edge potentials θ = {θ_{i,j}} and node potentials φ = {φ_i}, consider its attractive counterpart P̄ with edge potentials θ̄ := {|θ_{i,j}|} and node potentials φ̄ := {|φ_i|}. Let φ′_max := max_{i∈V} atanh(Ē(X_i)), where Ē is the expectation with respect to the distribution P̄. Let P(X_{1,2}; {θ, φ_1, φ_2}) denote an Ising model on two nodes {1, 2} with edge potential θ and node potentials {φ_1, φ_2}. Our learning guarantees depend on d_min and d_max, defined as

d_min := −log|det P(X_{1,2}; {θ_max, φ′_max, φ′_max})|,  d_max := −log|det P(X_{1,2}; {θ_min, 0, 0})|,  η := d_max / d_min.

(A5) Girth vs. Depth: The depth δ characterizes how close the latent nodes are to the observed nodes on graph G: for each hidden node h ∈ H, find a set of four observed nodes which form the shortest quartet with h as one of the middle nodes, and consider the largest graph distance in that quartet. The depth δ is the worst-case such distance over all hidden nodes. We require the following tradeoff between the girth g and the depth δ:

g/4 − δ(η + 1) = Ω(1).   (8)

Further, the parameter r in our algorithm is chosen such that

r > δ(η + 1) d_max + ε, for some ε > 0, and (g/4) d_min − r = Ω(1).   (9)

(A1) is a natural assumption on the minimum degree of the hidden nodes for identifiability. (A2) assumes bounds on the edge potentials. It is natural that the sample requirement of any graph estimation algorithm depends on the "weakest" edge, characterized by the minimum edge potential θ_min. Further, the maximum edge potential θ_max characterizes the presence or absence of long-range correlations in the model, and this is made exact in (A3). Intuitively, there is a tradeoff between the maximum degree Δ_max and the maximum edge potential θ_max of the model. Moreover, (A3) prescribes that the extent of correlation decay be strong enough (i.e., a small α and a large enough girth g) compared to the weakest edge in the model. Similar conditions have been imposed before for graphical model selection in the regime of correlation decay when there are no hidden variables [5]. (A4) defines certain distance bounds. Intuitively, d_min and d_max are bounds on the information distances given by the local tree approximation of the loopy model. Note that e^(−d_max) = Θ(θ_min) and e^(−d_min) = O(θ_max). (A5) provides the tradeoff between the girth g and the depth δ. Intuitively, the depth needs to be smaller than the girth to avoid encountering cycles during the process of graph reconstruction. Recall that the parameter r in our algorithm determines the neighborhood over which local MSTs are built in the first step. It is chosen to be roughly larger than the depth δ in order for all the hidden nodes to be discovered. The upper bound on r ensures that the distortion from an additive metric is not too large. The parameters for the latent tree learning routines (such as confidence intervals for quartet tests) are chosen appropriately depending on d_min and d_max; see [17] for details.

3.2 Guarantees

We now provide the main result of this paper, namely that the proposed method correctly estimates the graph structure of a loopy latent graphical model in high dimensions. Recall that δ is the depth (distance from a hidden node to its closest observed nodes), θ_min is the minimum (absolute) edge potential, and η = d_max/d_min is the ratio of distance bounds.

Theorem 1 (Structural Consistency and Sample Requirements): Under (A1)–(A5), the probability that the proposed method is structurally consistent tends to one when the number of samples scales as

n = Ω( θ_min^(−2δ(η+1)−2) log p ).   (10)

Thus, for learning Ising models on locally tree-like graphs, the sample complexity depends both on the minimum edge potential θ_min and on the depth δ. Our method is efficient in high dimensions since the sample requirement is only logarithmic in the number of nodes p.

Dependence on Maximum Degree: For correlation decay (A3) to hold, we require θ_min ≤ θ_max = O(1/Δ_max). This implies that the sample complexity is at least n = Ω(Δ_max^2 log p).
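For concreteness, the following worked specialisations of (10) are illustrative; we assume η = 1 for homogeneous models and read the fully observed case as δ = 0:

```latex
% Specialising (10), n = \Omega(\theta_{\min}^{-2\delta(\eta+1)-2}\log p):
% homogeneous models (\eta = 1):
n = \Omega\!\left(\theta_{\min}^{-4\delta-2}\,\log p\right)
% fully observed models (taking \delta = 0):
n = \Omega\!\left(\theta_{\min}^{-2}\,\log p\right)
```

The second line recovers the fully observed rate discussed next.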
Comparison with Fully Observed Models: In the special case when all the nodes are observed¹ (ρ = 1), we strengthen the results for our method and establish that the sample complexity is n = Ω(θ_min^(−2) log p). This matches the best known sample complexity for learning fully observed Ising models [5, 6].

¹In the trivial case, when all the nodes are observed and the graph is locally tree-like, our method reduces to thresholding of information distances at each node and building local MSTs. The threshold can be chosen as r = d_max + ε, for some ε > 0.

Comparison with Learning Latent Trees: Our method is an extension of latent tree methods to learning locally tree-like graphs. The sample complexity of our method matches the sample requirements for learning general latent tree models [2–4]. Thus, we establish that learning locally tree-like graphs is akin to learning latent trees in the regime of correlation decay.

Extensions: We strengthen the above results to provide non-asymptotic sample complexity bounds and also consider general discrete models; see [17] for details. The above results can also be easily extended to Gaussian models, using the notion of walk-summability in place of correlation decay (see [18]) and the negative logarithm of the correlation coefficient as the additive tree metric (see [4]).

Dependence on Fraction of Observed Nodes: In the special case when a fraction ρ of the nodes are uniformly selected as observed nodes, we can provide probabilistic bounds on the depth δ in the resulting latent model; see [17] for details. For η = 1 (homogeneous models) and regular graphs with Δ_min = Δ_max = Δ, the sample complexity simplifies to n = Ω(Δ^2 ρ^(−2) (log p)^3). Thus, we can characterize an explicit dependence on the fraction of observed nodes ρ.

3.3 Necessary Conditions for Graph Estimation

We have so far provided sufficient conditions for recovering locally tree-like graphs in latent Ising models. We now provide necessary conditions on the number of samples required by any algorithm to reconstruct the graph. Let Ĝ_n : (𝒳^|V|)^n → 𝒢_m denote any deterministic graph estimator using n i.i.d. samples from the observed node set V, where 𝒢_m is the set of all possible graphs on m nodes. We first define the notion of the graph edit distance.

Definition 1 (Edit Distance): Let G, Ĝ be two graphs² with adjacency matrices A_G, A_Ĝ, and let V be the set of labeled vertices in both graphs (with identical labels). Then the edit distance between G and Ĝ is defined as

dist(Ĝ, G; V) := min_π ||A_Ĝ − π(A_G)||_1,

where π is any permutation on the unlabeled nodes that keeps the labeled nodes fixed. In other words, the edit distance is the minimum number of entries that differ between A_Ĝ and any permutation of A_G over the unlabeled nodes. In our context, the labeled nodes correspond to the observed nodes V while the unlabeled nodes correspond to the latent nodes H; a brute-force sketch follows.

²We consider inexact graph matching, where the unlabeled nodes can be unmatched. This is done by adding the required number of isolated unlabeled nodes to the other graph and considering the modified adjacency matrices [19].
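A brute-force sketch of Definition 1, feasible only for a handful of latent nodes (all names are illustrative):

```python
import numpy as np
from itertools import permutations

# Brute-force edit distance: minimise ||A_hat - pi(A)||_1 over permutations
# pi of the unlabeled (latent) nodes, keeping the labeled (observed) nodes
# fixed. Exponential in the number of latent nodes; for illustration only.
def edit_distance(A_hat, A, labeled):
    m = A.shape[0]
    unlabeled = [v for v in range(m) if v not in labeled]
    best = np.inf
    for perm in permutations(unlabeled):
        order = list(range(m))
        for src, dst in zip(unlabeled, perm):
            order[src] = dst
        P = A[np.ix_(order, order)]        # permuted adjacency pi(A_G)
        best = min(best, int(np.abs(A_hat - P).sum()))
    return best

# Example: 3 observed nodes {0, 1, 2} and one latent node 3.
A = np.array([[0,1,0,1],[1,0,0,1],[0,0,0,1],[1,1,1,0]])
print(edit_distance(A, A, labeled={0, 1, 2}))  # 0: identical graphs
```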
We now provide necessary conditions for graph reconstruction up to a given edit distance.

Theorem 2 (Necessary Condition for Graph Estimation): For any deterministic estimator Ĝ_m : 𝒳^(ρmn) → 𝒢_m based on n i.i.d. samples, where ρ ∈ [0, 1] is the fraction of observed nodes and m is the total number of nodes, of an Ising model Markov on graph G_m ∈ G_Girth(m; g, δ_min, Δ_max) on m nodes with girth g, minimum degree δ_min and maximum degree Δ_max, for all ε > 0, we have

P[dist(Ĝ_m, G_m; V) > εm] ≥ 1 − (2^(nρm) m^((2ε+1)m) 3^m) / ( m^(0.5εδ_min m) (m − gΔ_max^g)^(0.5δ_min m) )   (11)

under any sampling process used to choose the observed nodes.

Proof: The proof is based on counting arguments. See [17] for details.

Lower Bound on Sample Requirements: The above result states that roughly

n = Ω( δ_min ρ^(−1) log p )   (12)

samples are required for structural consistency under any estimation method. Thus, when ρ = Θ(1) (a constant fraction of observed nodes), a polylogarithmic number of samples is necessary (n = Ω(poly log p)), while when ρ = O(m^(−β)) for some β > 0 (i.e., a vanishing fraction of observed nodes), a polynomial number of samples is necessary for reconstruction (n = Ω(poly(p))).

Comparison with Sample Complexity of Proposed Method: For Ising models, under uniform sampling of observed nodes, we established that the sample complexity of the proposed method scales as n = Ω(Δ^2 ρ^(−2) (log p)^3) for regular graphs with degree Δ. Thus, we nearly match the lower bound on sample complexity in (12).

4 Experiments

We employ latent graphical models for topic modeling. Each hidden variable in the model can be thought of as representing a topic, and topics and words in a document are drawn jointly from the graphical model. We conduct some preliminary experiments on the 20 newsgroup dataset with 16,242 binary samples of 100 selected keywords. Each binary sample indicates the appearance of the given words in a posting; these samples are divided into two equal groups for learning and testing purposes. We compare the performance with the popular latent Dirichlet allocation (LDA) model [9]. We evaluate performance in terms of perplexity and topic coherence. In addition, we also study the tradeoff between model complexity and data fitting through the Bayesian information criterion (BIC) [8].

Methods: We consider a regularized variant of the proposed method for latent graphical model selection. Here, in every iteration, the decision to add hidden variables to a local neighborhood is based on the improvement of the overall BIC score. This allows us to trade off model complexity and data fitting. Note that our proposed method only deals with structure estimation, and we use expectation maximization (EM) for parameter estimation. We compare the proposed method with the LDA model³. Our method is implemented in MATLAB. We used the modules for loopy belief propagation, made available with the UGM⁴ package. The LDA models are learnt using the lda⁵ package.

³Typically, LDA models the counts of different words in documents. Here, since we have binary data, we consider a binary LDA model where the observed variables are binary.
⁴These codes are available at http://www.di.ens.fr/~mschmidt/Software/UGM.html
⁵http://chasen.org/~daiti-m/dist/lda/

Performance Evaluation: We evaluate performance based on the test perplexity [20], given by

Perp-LL := exp[ −(1/(np)) Σ_{k=1}^{n} log P(x_test(k)) ],   (13)

where n is the number of test samples and p is the number of observed variables (i.e., words). Thus the perplexity is monotonically decreasing in the test likelihood, and a lower perplexity indicates better generalization performance. On the lines of (13), we also define

Perp-BIC := exp[ −(1/(np)) BIC(x_test) ],  BIC(x_test) := Σ_{k=1}^{n} log P(x_test(k)) − 0.5 (df) log n,   (14)
where df is the degrees of freedom in the model. For a graphical model, we set df_GM := m + |E|, where m is the total number of variables (both observed and hidden) and |E| is the number of edges in the model. For the LDA model, we set df_LDA := p(m − p) − 1, where p is the number of observed variables (i.e., words) and m − p is the number of hidden variables (i.e., topics). This is because an LDA model is parameterized by a p × (m − p) topic probability matrix and an (m − p)-length Dirichlet prior. Thus, the BIC perplexity in (14) is monotonically decreasing in the BIC score, and a lower BIC perplexity indicates a better tradeoff between model complexity and data fitting. However, the likelihood and BIC score in (13) and (14) are not tractable for exact evaluation in general graphical models, since they involve the partition function. We employ loopy belief propagation (LBP) to evaluate them; note that it is exact on a tree model and approximate for loopy models. In addition, we also evaluate topic coherence, frequently considered in topic modeling. It is based on the average pointwise mutual information (PMI) score

PMI := (1/(45|H|)) Σ_{h∈H} Σ_{i,j∈A(h), i<j} PMI(X_i; X_j),  PMI(X_i; X_j) := log [ P(X_i = 1, X_j = 1) / (P(X_i = 1) P(X_j = 1)) ],   (15)

where the set A(h) represents the "top-10" words associated with topic h ∈ H. The number of such word pairs for each topic, C(10, 2) = 45, is used for normalization. In [21], it is found that PMI scores are a good measure of human-evaluated topic coherence when computed using an external corpus. We compute PMI scores based on the NYT articles bag-of-words dataset [22].

Table 1: Comparison of the proposed method under different thresholds r with LDA under different numbers of topics (i.e., numbers of hidden variables) on the 20 newsgroup data. For the definitions of perplexity based on test likelihood and BIC scores, and of PMI, see (13), (14), and (15).

Method    r    Hidden  Edges  PMI     Perp-LL  Perp-BIC
Proposed  7    32      183    0.4313  1.1498   1.1518
Proposed  9    24      129    0.6037  1.1543   1.1560
Proposed  11   26      125    0.4585  1.1555   1.1571
Proposed  13   24      123    0.4289  1.1560   1.1576
LDA       NA   10      NA     0.2921  1.1480   1.1544
LDA       NA   20      NA     0.1919  1.1348   1.1474
LDA       NA   30      NA     0.1653  1.1421   1.1612
LDA       NA   40      NA     0.1470  1.1494   1.1752
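A small sketch of the evaluation scores (13)–(15), assuming the per-sample test log-likelihoods have already been computed (in practice via loopy BP); all names are illustrative:

```python
import numpy as np

# Sketch of (13)-(15). `loglik` holds log P(x_test(k)) for the n test
# samples, p is the number of observed words, df the degrees of freedom.
def perp_ll(loglik, p):
    n = len(loglik)
    return np.exp(-np.sum(loglik) / (n * p))

def perp_bic(loglik, p, df):
    n = len(loglik)
    bic = np.sum(loglik) - 0.5 * df * np.log(n)
    return np.exp(-bic / (n * p))

def pmi_pair(p_joint, p_i, p_j):
    # PMI(X_i; X_j) = log P(X_i=1, X_j=1) / (P(X_i=1) P(X_j=1))
    return np.log(p_joint / (p_i * p_j))

# Toy usage with made-up numbers:
ll = -1.15 * 100 * np.ones(200)          # 200 test docs, 100 words
print(perp_ll(ll, 100), perp_bic(ll, 100, df=223))
```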
Experimental Results: We learn the graph structures under different thresholds r ∈ {7, 9, 11, 13}, which control the length of cycles. At r = 13 we obtain a latent tree, and for all other values we obtain loopy models. The first long cycle appears at r = 9, and at r = 7 we find a combination of short and long cycles. We find that models with cycles are more effective in discovering intuitive relationships. For instance, in the latent tree (r = 13), the link between "computer" and "software" is missing due to the tree constraint, but it is discovered when r ≤ 9. Moreover, we see that common words across different topics tend to connect the local subgraphs, and thus loopy models are better at discovering such relationships. The graph structures from the experiments are available in [17]. In Table 1, we present results under our method and under LDA modeling. For the LDA model, we vary the number of hidden variables (i.e., topics) over {10, 20, 30, 40}. In contrast, our method is designed to optimize the number of hidden variables, and does not need this input. We note that our method is competitive in terms of both perplexity and topic coherence.

We find that topic coherence (i.e., PMI) for our method is optimal at r = 9, where the graph has a single long cycle and a few short cycles. The above experiments confirm the effectiveness of our approach for discovering hidden topics, and are in line with the theoretical guarantees established earlier in the paper. Our analysis reveals that a large class of loopy graphical models with latent variables can be learnt efficiently.

Acknowledgement

This work is supported by NSF Award CCF-1219234, AFOSR Award FA9550-10-1-0310, ARO Award W911NF-12-1-0404, the setup funds at UCI, and ONR Award N00014-08-1-1015.

References

[1] R. Durbin, S. R. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge Univ. Press, 1999.
[2] P. L. Erdős, L. A. Székely, M. A. Steel, and T. J. Warnow. A few logs suffice to build (almost) all trees: Part I. Random Structures and Algorithms, 14:153–184, 1999.
[3] E. Mossel. Distorted metrics on trees and phylogenetic forests. IEEE/ACM Transactions on Computational Biology and Bioinformatics, pages 108–116, 2007.
[4] M. J. Choi, V. Y. F. Tan, A. Anandkumar, and A. Willsky. Learning latent tree graphical models. J. of Machine Learning Research, 12:1771–1812, May 2011.
[5] A. Anandkumar, V. Y. F. Tan, F. Huang, and A. S. Willsky. High-dimensional structure learning of Ising models: local separation criterion. The Annals of Statistics, 40(3):1346–1375, 2012.
[6] A. Jalali, C. Johnson, and P. Ravikumar. On learning discrete graphical models using greedy methods. In Proc. of NIPS, 2011.
[7] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using l1-regularized logistic regression. Annals of Statistics, 2008.
[8] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6(2):461–464, 1978.
[9] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. J. of Machine Learning Research, 3:993–1022, 2003.
[10] G. Bresler, E. Mossel, and A. Sly. Reconstruction of Markov random fields from samples: some observations and algorithms. In Intl. Workshop on Approximation, Randomization and Combinatorial Optimization (APPROX), pages 343–356. Springer, 2008.
[11] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3):1436–1462, 2006.
[12] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky. Latent variable graphical model selection via convex optimization. ArXiv preprint, 2010.
[13] J. Bento and A. Montanari. Which graphical models are difficult to learn? In Proc. of Neural Information Processing Systems (NIPS), Vancouver, Canada, Dec. 2009.
[14] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
[15] F. R. K. Chung. Spectral Graph Theory. Amer. Mathematical Society, 1997.
[16] A. Gamburd, S. Hoory, M. Shahshahani, A. Shalev, and B. Virag. On the girth of random Cayley graphs. Random Structures & Algorithms, 35(1):100–117, 2009.
[17] A. Anandkumar and R. Valluvan. Learning loopy graphical models with latent variables: efficient methods and guarantees. Under revision from the Annals of Statistics. Available on ArXiv:1203.3887, Jan. 2012.
[18] A. Anandkumar, V. Y. F. Tan, F. Huang, and A. S. Willsky. High-dimensional Gaussian graphical model selection: walk-summability and local separation criterion. Accepted to J. Machine Learning Research, ArXiv:1107.1270, June 2012.
[19] H. Bunke and G. Allermann. Inexact graph matching for structural pattern recognition. Pattern Recognition Letters, 1(4):245–253, 1983.
[20] D. Newman, E. V. Bonilla, and W. Buntine. Improving topic coherence with regularized topic models. In Proc. of NIPS, 2011.
[21] D. Newman, S. Karimi, and L. Cavedon. External evaluation of topic models. In Proceedings of the Australasian Document Computing Symposium (ADCS 2009), page 8, Sydney, Australia, December 2009.
[22] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
Sparse Approximate Manifolds for Differential Geometric MCMC

Ben Calderhead*
CoMPLEX, University College London
London, WC1E 6BT, UK
[email protected]

Mátyás A. Sustik
Department of Computer Sciences
University of Texas at Austin
Austin, TX 78712, USA
[email protected]

Abstract

One of the enduring challenges in Markov chain Monte Carlo methodology is the development of proposal mechanisms that make moves distant from the current point, are accepted with high probability, and incur low computational cost. The recent introduction of locally adaptive MCMC methods based on the natural underlying Riemannian geometry of such models goes some way to alleviating these problems for certain classes of models for which the metric tensor is analytically tractable; however, computational efficiency is not assured, due to the necessity of potentially high-dimensional matrix operations at each iteration. In this paper we first investigate a sampling-based approach for approximating the metric tensor and suggest a valid MCMC algorithm that extends the applicability of Riemannian manifold MCMC methods to statistical models that do not admit an analytically computable metric tensor. Second, we show how the approximation scheme we consider naturally motivates the use of ℓ1 regularisation to improve estimates and obtain a sparse approximate inverse of the metric, which enables stable and sparse approximations of the local geometry to be made. We demonstrate the application of this algorithm for inferring the parameters of a realistic system of ordinary differential equations using a biologically motivated robust Student-t error model, for which the Expected Fisher Information is analytically intractable.

1 Introduction

The use of Markov chain Monte Carlo methods can be extremely challenging in many modern day applications. This difficulty arises from the more frequent use of complex and nonlinear statistical models that induce strong correlation structures in their often high-dimensional parameter spaces. The exact structure of the target distribution is generally not known in advance, and the local correlation structure between different parameters may vary across the space, particularly as the chain moves from the transient phase, exploring areas of negligible probability mass, to the stationary phase, exploring higher density regions [1]. Constructing a Markov chain that adapts to the target distribution while still drawing samples from the correct stationary distribution is challenging, although much research over the last 15 years has resulted in a variety of approaches and theoretical results. Adaptive MCMC, for example, allows for global adaptation based on the partial or full history of a chain; this breaks its Markov property, although it has been shown that, subject to some technical conditions [2, 3], the resulting chain will still converge to the desired stationary distribution. Most recently, advances in Riemannian manifold MCMC allow locally changing, position-specific proposals to be made based on the underlying geometry of the target distribution [1]. This directly takes into account the changing sensitivities of the model for different parameter values and enables very efficient inference over a number of popular statistical models.

*http://www.2020science.net/people/ben-calderhead
This methodology is useful for inference over large numbers of strongly covarying parameters; however, it is still not suitable for all statistical models, since in its current form it is only applicable to models that admit an analytic expression for the metric tensor. In practice, there are many commonly used models for which the Expected Fisher Information is not analytically tractable, such as when a robust Student-t error model is employed to construct the likelihood. In this paper we propose the use of a locally adaptive MCMC algorithm that approximates the local Riemannian geometry at each point in the target space. This extends the applicability of Riemannian manifold MCMC to a much wider class of statistical models than at present. In particular, we do so by estimating the covariance structure of the tangent vectors at a point on the Riemannian manifold induced by the statistical model. Considering this geometric problem as one of inverse covariance estimation naturally leads us to the use of an ℓ1 regularised maximum likelihood estimator. This approximate inverse approach allows the required geometry to be estimated with few samples, enabling good proposals for the Markov chain while inducing a natural sparsity in the inverse metric tensor that reduces the associated computational cost. We first give a brief characterisation of current adaptive approaches to MCMC, making a distinction between locally and globally adaptive methods, since these two approaches have very different requirements in terms of proving convergence to the stationary distribution. We then discuss the use of geometry in MCMC and the interpretation of such methods as being locally adaptive, before giving the necessary background on Riemannian geometry and on MCMC algorithms defined on induced Riemannian manifolds. We focus on the manifold MALA sampler, which is derived from a Langevin diffusion process that takes into account local non-Euclidean geometry, and we discuss simplifications that may be made for computational efficiency. Finally, we present a valid MCMC algorithm that estimates the Riemannian geometry at each iteration based on covariance estimates of random vectors tangent to the manifold at the chain's current point. We demonstrate the use of ℓ1 regularisation to calculate sparse approximate inverses of the metric tensor, and we investigate the sampling properties of the algorithm on an extremely challenging statistical model for which the Expected Fisher Information is analytically intractable.

2 Background

We wish to sample from some arbitrary target density π(x) defined on a continuous state space 𝒳^D, which may be high-dimensional. We may define a Markov chain that converges to the correct stationary distribution in the usual manner by proposing a new position x* from the current position x_n via some fixed proposal distribution q(x*|x_n); we accept the new move, setting x_{n+1} = x*, with probability

α(x*|x_n) = min( [π(x*) q(x_n|x*)] / [π(x_n) q(x*|x_n)], 1 ),

and set x_{n+1} = x_n otherwise. In a Bayesian context, we will often have a posterior distribution as our target, π(x) = p(θ|y), where y is the data and θ are the parameters of a statistical model. The choice of proposal distribution is the critical factor in determining how efficiently the Markov chain can explore the space, and whether new moves will be accepted with high probability and be sufficiently far from the current point to keep autocorrelation of the samples to a minimum.
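As a point of reference, the following minimal sketch implements one such accept/reject step, assuming a user-supplied unnormalised log-density log_target and a symmetric random-walk proposal (for which the q-ratio cancels):

```python
import numpy as np

# Minimal Metropolis-Hastings step with a symmetric Gaussian random-walk
# proposal; with a symmetric q the acceptance ratio reduces to a ratio of
# target densities. `log_target` is an assumed user-supplied callable.
def mh_step(x, log_target, step=0.5, rng=None):
    rng = rng or np.random.default_rng()
    x_star = x + step * rng.standard_normal(x.shape)
    log_alpha = log_target(x_star) - log_target(x)
    return x_star if np.log(rng.uniform()) < log_alpha else x

# Usage: sample a standard Gaussian target.
x = np.zeros(2)
for _ in range(1000):
    x = mh_step(x, lambda v: -0.5 * v @ v)
```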
There is a lot of flexibility in the choice of proposal distribution, in that it may depend on the current point in a deterministic manner. We note that adaptive MCMC approaches attempt to change their proposal mechanism throughout the running of the algorithm, and for the purpose of proving convergence to the stationary distribution it is useful to categorise them as follows: locally adaptive MCMC methods make proposals based only on the current position of the chain, whereas globally adaptive MCMC methods use previously collected samples in the chain's history to generate a new proposal mechanism. This is an important distinction, since globally adaptive methods lose their Markov property, and convergence to the stationary distribution must be proven in an alternative manner. It has been shown that such chains may still be usefully employed as long as they satisfy some technical conditions, namely diminishing adaptation and bounded convergence [2]. In practice these algorithms represent a step towards MCMC as a "black box" method and may be very useful for sampling from target distributions for which no derivative or higher order geometric information is available; however, there are simple examples of standard adaptive MCMC methods requiring hundreds of thousands of iterations in higher dimensions before adapting to a suitable proposal distribution [3]. In addition, if more information about the target density is available, then there seems little point in trying to guess the geometric structure when it may be calculated directly. In this paper we focus on locally adaptive methods that employ proposals constructed deterministically from information at the current position of the Markov chain.

2.1 Locally Adaptive MCMC

Many geometric-based MCMC methods may be categorised as locally adaptive. When the derivative of the target density is available, MCMC methods such as the Metropolis-adjusted Langevin algorithm (MALA) [4] allow local adaptation based on the geometry at the current point; unlike globally adaptive MCMC, they retain their Markovian property and therefore converge to the correct stationary distribution using a standard Metropolis–Hastings step, without the need to satisfy further technical conditions. In general, we can define position-specific proposal densities based on deterministic functions that depend only on the current point. This idea has previously been employed to develop approaches for sampling multimodal distributions, whereby large initial jumps followed by deterministic optimisation functions were used to create mode-jumping proposal mechanisms [5]. In some instances, the use of first order geometric information may drastically speed up convergence to the stationary distribution; in other cases, however, such algorithms exhibit very slow convergence, due to the gradients not being isotropic in magnitude [6]. In practice, gradients may vary greatly in different directions, and the rate of exploration of the target density may in addition depend on the problem-specific choice of parameterisation [1]. Methods using the standard gradient implicitly assume that the slope in each direction is approximately constant over a small distance, when in fact these gradients may change rapidly over short distances. Incorporating higher order geometry often helps, although at an increased computational cost. A number of Hessian-based MCMC methods have been proposed as a solution [7].
While such approaches have been shown to work very well for selected problems, there are a number of problems with this use of geometry: ad hoc methods are often necessary to deal with the fact that the Hessian might not be everywhere positive-definite, and second derivatives can be challenging and costly to compute. We can also exploit higher order information through the use of Riemannian geometry. Using a metric tensor instead of a Hessian matrix gives us useful properties such as invariance to reparameterisation of our statistical model, and positive-definiteness is also assured. Riemannian geometry has been useful in a variety of other machine learning and statistical contexts [8]; however, the limiting factor is usually analytic or computational tractability.

3 Differential Geometric MCMC

During the 1940s, Jeffreys and Rao demonstrated that the Expected Fisher Information has the same properties as a metric tensor and indeed induces a natural Riemannian structure for a statistical model [11, 10], providing a fascinating link between statistics and differential geometry. Much work has been done since then elucidating the relationship between statistics and Riemannian geometry, in particular examining geometric concepts such as distance, curvature and geodesics on statistical manifolds, within a field that has become known as Information Geometry [6]. We first provide an overview of Riemannian geometry and of MCMC algorithms defined on Riemannian manifolds. We then describe a sampling scheme that allows the local geometry to be estimated at each iteration for statistical models that do not admit an analytically tractable metric tensor.

3.1 Riemannian Geometry

Informally, a manifold is an n-dimensional space that is locally Euclidean: it is locally equivalent to R^n via some smooth transformation. At each point θ ∈ R^n on a Riemannian manifold M there exists a tangent space, which we denote as T_θM. We can think of this as a linear approximation to the Riemannian manifold at the point θ; it is a standard vector space whose origin is the current point on the manifold and whose vectors are tangent to the manifold at this point. The vector space T_θM is spanned by the differential operators [∂/∂θ_1, ..., ∂/∂θ_n], which act on functions defining paths on the underlying manifold [9]. In the context of MCMC we can consider the target density as the log-likelihood of a statistical model given some data, such that at a particular point θ, the derivatives of the log-likelihood are tangent to the manifold, and these are just the score vectors at θ, ∇_θL = [∂L/∂θ_1, ..., ∂L/∂θ_n]. A Riemannian manifold arises when we equip a differentiable manifold with an inner product on the tangent space at each point, which we can use to measure distances and angles between vectors. This inner product is defined in terms of a metric tensor, G_θ, which defines a basis on each tangent space T_θM. The tangent space is therefore a linear approximation of the manifold at a given point, and it has the same dimensionality. A natural inner product for this vector space is given by the covariance of the basis score vectors, since the covariance function satisfies the same properties as a metric tensor, namely symmetry, bilinearity and positive-definiteness [9]. This inner product then turns out to be equivalent to the Expected Fisher Information, following from the fact that the expectation of the score is zero, with the [i, j]th component of the tensor given by

G_{i,j} = Cov( ∂L/∂θ_i, ∂L/∂θ_j ) = E_{p(x|θ)}[ (∂L/∂θ_i)(∂L/∂θ_j) ] = −E_{p(x|θ)}[ ∂²L/(∂θ_i ∂θ_j) ]   (1)
Each tangent vector t_1 ∈ T_θM at a point θ ∈ M on the manifold has a length ||t_1|| ∈ R+, whose square is given by the inner product, such that ||t_1||²_{G_θ} = ⟨t_1, t_1⟩_θ = t_1ᵀ G_θ t_1. This squared distance is known as the first fundamental form in Riemannian geometry [9]; it is invariant to reparameterisations of the coordinates and, importantly for MCMC, it provides a local measure of distance that takes into account the local second order sensitivity of the statistical model. We note that when the metric tensor is constant for all values of θ, the Riemannian manifold is equivalent to a vector space with a constant inner product; further, if the metric tensor is an identity matrix, then the manifold simply becomes a Euclidean space.
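Equation (1) suggests a simple sampling-based approximation of the metric, anticipating the scheme of Section 4: draw pseudo-datasets from the model, compute the score vector for each, and take their empirical covariance. The sketch below assumes model-specific callables sample_data and score (illustrative names), with a Gaussian example whose true Fisher Information is known:

```python
import numpy as np

# Monte Carlo sketch of (1): the empirical covariance of s sampled score
# vectors approximates the Expected Fisher Information G(theta). Unlike a
# Hessian estimate, this is always positive semi-definite.
def estimate_metric(theta, sample_data, score, s=500, rng=None):
    rng = rng or np.random.default_rng()
    scores = np.array([score(theta, sample_data(theta, rng))
                       for _ in range(s)])
    return np.cov(scores, rowvar=False)

# Example: X ~ N(theta, I) with 20 observations, so the true G = 20 * I.
sample = lambda th, rng: th + rng.standard_normal((20, th.size))
score = lambda th, X: (X - th).sum(axis=0)
G_hat = estimate_metric(np.zeros(3), sample, score)  # approx 20 * I_3
```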
Of course, even if the curvature of the manifold is not constant, this simplified proposal mechanism still defines a correct MCMC method which converges to the target measure, as we accept or reject moves using a Metropolis-Hastings ratio. This is equivalent to a position-specific pre-conditioned MALA proposal, where the pre-conditioning is dependent on the current parameter values,
$$\theta^{n+1} = \theta^n + \frac{\epsilon^2}{2} G^{-1}(\theta^n) \nabla_\theta L(\theta^n) + \epsilon \sqrt{G^{-1}(\theta^n)} \, z^n \qquad (4)$$
For a manifold whose metric tensor is globally constant, this reduces further to a pre-conditioned MALA proposal, where the pre-conditioning is effectively independent of the current parameter values. In this context, such pre-conditioning no longer needs to be chosen arbitrarily; rather, it may be informed by the geometry of the distribution we are exploring. We point out that any approximations of the metric tensor would be best employed in the simplified mMALA scheme, defining the covariance of the proposal distribution, or as a flat approximation to a manifold. In the case of full mMALA, or even Hamiltonian Monte Carlo defined on a Riemannian manifold [1], Christoffel symbols are also used, incorporating the derivatives of the metric tensor as it changes across the surface of the manifold; in many cases the extra expense of computing or estimating such higher order information is not sufficiently supported by the increase in sampling efficiency [1], and for this reason we do not consider such methods further. In the next section we consider the representation of the metric tensor as the covariance of the tangent vectors at each point. We consider a method of estimating this such that convergence is guaranteed by extending the state-space and introducing auxiliary variables that are conditioned on the current point, and we demonstrate its potential within a Riemannian geometric context.
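As a minimal illustration of the simplified mMALA transition of (eq. 4), the following numpy sketch implements one step, assuming user-supplied log_post, grad_log_post and metric functions; it is a sketch under these assumptions rather than an optimised implementation.

```python
import numpy as np

def simplified_mmala_step(theta, log_post, grad_log_post, metric, eps, rng):
    """One transition of simplified mMALA (eq. 4): a position-specific
    preconditioned Langevin proposal corrected by Metropolis-Hastings.
    The metric is treated as locally constant, so no derivatives of G
    are required."""

    def proposal_mean(t):
        # mu(t, eps) = t + (eps^2 / 2) G^{-1}(t) grad log p(t)
        return t + 0.5 * eps ** 2 * np.linalg.solve(metric(t), grad_log_post(t))

    def log_q(x, t):
        # log N(x | mu(t, eps), eps^2 G^{-1}(t)), dropping shared constants
        G = metric(t)
        d = x - proposal_mean(t)
        return 0.5 * np.linalg.slogdet(G)[1] - d @ G @ d / (2.0 * eps ** 2)

    G_inv = np.linalg.inv(metric(theta))
    prop = proposal_mean(theta) + eps * np.linalg.cholesky(G_inv) @ \
        rng.standard_normal(theta.shape)

    log_alpha = (log_post(prop) + log_q(theta, prop)
                 - log_post(theta) - log_q(prop, theta))
    return prop if np.log(rng.uniform()) < log_alpha else theta
```

Note that only the metric and its inverse appear here; no third order derivatives are needed, which is precisely what makes the simplified scheme attractive when the metric must be estimated.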
4 Approximate Geometry for MCMC Proposals

We first derive an acceptance ratio on an extended state-space that enables convergence to the stationary distribution, before describing the implications for developing new differential geometric MCMC methods. Following [13, 14] we can employ the oft-used trick of defining an extended state space X × D. We may of course choose D to be of any size; however, in our particular case we shall choose D to be R^{m×s}, where m is the dimension of the data and s is the number of samples; the reasons for this shall become clear. We therefore sample from this extended state space, whose joint distribution follows as π̃ = π(x) π̃(d|x). Given the current states [x_n, d_n], we may propose a new state q(x*|x_n, d_n) and the MCMC algorithm will satisfy detailed balance, and hence converge to the stationary distribution, if we accept joint proposals with Metropolis-Hastings probability ratio
$$\alpha(x^*, d^* | x_n, d_n) = \min\left\{1, \frac{\tilde{\pi}(x^*, d^*)}{\tilde{\pi}(x_n, d_n)} \frac{q(x_n | x^*, d^*)}{q(x^* | x_n, d_n)} \frac{\tilde{\pi}(d_n | x_n)}{\tilde{\pi}(d^* | x^*)}\right\} = \min\left\{1, \frac{\pi(x^*) \, \tilde{\pi}(d^* | x^*)}{\pi(x_n) \, \tilde{\pi}(d_n | x_n)} \frac{q(x_n | x^*, d^*)}{q(x^* | x_n, d_n)} \frac{\tilde{\pi}(d_n | x_n)}{\tilde{\pi}(d^* | x^*)}\right\} = \min\left\{1, \frac{\pi(x^*) \, q(x_n | x^*, d^*)}{\pi(x_n) \, q(x^* | x_n, d_n)}\right\} \qquad (5)$$
This is a reversible transition on π̃(x, d), from which we can sample to obtain π(x) as the marginal distribution. The key point here is that we may define our proposal distribution q(x*|x_n, d_n) in almost any deterministic manner we wish. In particular, choosing π̃(d|x) to be the same distribution as the likelihood of our statistical model, the s samples from the extended state space D may be thought of as pseudo-data, from which we can deterministically calculate an estimate of the Expected Fisher Information to use as the covariance of a proposal distribution. Specifically, each sampled pseudo-data set can be used deterministically to give a sample of ∂L/∂θ given the current θ, all of which may then be used deterministically to obtain an approximation of the covariance of tangent vectors at the current point. This approximation, unlike the Hessian, will always be positive definite, and gives us an approximation of the metric tensor defining the local geometry. Further, we may use additional deterministic procedures, given x_n and d_n, to construct better proposals; we consider a sparsity inducing approach in the next section.

5 Stability and Sparsity via ℓ1 Regularisation

We have two motivations for using an ℓ1 regularisation approach for computing the inverse of the metric tensor: firstly, since the metric is equivalent to the covariance of tangent vectors, we may obtain more stable estimates of the inverse metric tensor using smaller numbers of samples, and secondly, it induces a natural sparsity in the inverse metric, which may be exploited to decrease the computational cost associated with repeated Cholesky factorisations and matrix-vector multiplications. We adopted the graphical lasso [15, 16], in which the maximum likelihood solution results in the matrix optimisation problem
$$\operatorname*{argmin}_{A \succ 0} \left\{ -\log \det(A) + \mathrm{tr}(AG) + \rho \sum_{i \neq j} |A_{ij}| \right\} \qquad (6)$$
where G is an empirical covariance matrix and ρ is a regularisation parameter. This convex optimisation problem aims to find A, the regularised maximum likelihood estimate for the inverse of the covariance matrix. Importantly, the optimisation algorithm we employ is deterministic given our tangent vectors, and therefore does not affect the validity of our MCMC algorithm; indeed, we note that we may use any deterministic sparse matrix inverse estimation approach within this MCMC algorithm. The use of ℓ1 regularisation promotes sparsity [23]; larger values of the regularisation parameter ρ result in a solution that is more sparse, while as ρ approaches zero the solution converges to the inverse of G (assuming it exists). It is also worth noting that ℓ1 regularisation helps to recover a sparse structure in a high dimensional setting where the number of samples is less than the number of parameters [17]. In order to achieve sufficiently fast computation we carefully implemented the graphical lasso algorithm tailored to this problem. We used no penalisation for the diagonal and a uniform regularisation parameter value for the off-diagonal elements. The motivation for not penalising the diagonal is that it has been shown in the covariance estimation setting that the true inverse is approached as the number of samples is increased [18], and the structure is learned more accurately [19]. The simple regularisation structure allowed code simplification and reduction in memory use. We refactored the graphical lasso algorithm of [15] and implemented it directly in FORTRAN, which we then call from MATLAB, making sure to minimise matrix copying due to MATLAB processing. This code is available as a software package, GLASSOFAST [20].
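As a minimal sketch of how the pseudo-data metric estimate of Section 4 and the sparse inverse of this section fit together, the following uses scikit-learn's GraphicalLasso as a stand-in for GLASSOFAST; sample_pseudodata and score are hypothetical placeholders for model-specific code, and penalisation details (such as the treatment of the diagonal) may differ from the implementation described above.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def approximate_inverse_metric(theta, sample_pseudodata, score, s, rho, rng):
    """Estimate the metric as the empirical covariance of score vectors over
    s pseudo-data draws at the current theta, then fit a sparse inverse
    metric by l1-regularised maximum likelihood, as in eq. (6)."""
    scores = np.stack([score(theta, sample_pseudodata(theta, rng))
                       for _ in range(s)])       # s x D matrix of tangent vectors
    G_hat = np.cov(scores, rowvar=False)         # dense estimate of the metric
    sparse_inv = GraphicalLasso(alpha=rho).fit(scores).precision_
    return G_hat, sparse_inv
```

Both outputs are deterministic functions of the auxiliary pseudo-data, so using either in the proposal covariance leaves the extended-state-space acceptance ratio of (eq. 5) valid.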
In the current context, the use of this approach allows us to obtain sparse approximations to the inverse metric tensor, which may then be used in an MCMC proposal. Indeed, even if we have access to an analytic metric tensor we need not use the full inverse for our proposals; we could still obtain an approximate sparse representation, which may be beneficial computationally. The metric tensor varies smoothly across a Riemannian manifold, so, theoretically, if we are calculating the inverses of two metric tensors that are close to each other, they may be numerically similar enough for the solution of one to speed up convergence of the solution for the other, although in the simulations in this paper we found no benefit in doing so, i.e. the metric tensor varied too much as the MCMC sampler took large steps across the manifold.

6 Simulation Study

We consider a challenging class of statistical models that severely tests the sampling capability of MCMC methods; in particular, two examples based on nonlinear differential equations using a biologically motivated robust Student-t likelihood, which renders the metric tensor analytically intractable. We examine the efficiency of our MCMC method with approximate metric on a well studied toy example, the Fitzhugh-Nagumo model, before examining a realistic, nonlinear and highly challenging example describing enzymatic circadian control in the plant Arabidopsis thaliana [22].

Figure 1: (a) exact full inverse; (b) approximate sparse inverse. In this comparison we plotted the exact and the sparse approximate inverses of a typical metric tensor G; we note that only subsets of parameters are typically strongly correlated in the statistical models we consider here and that the sparse approximation still captures the main correlation structure present. Here the dimension is p = 25, and the regularisation parameter ρ is 0.05 × ||G||∞.

Table 1: Summary of results for the Fitzhugh-Nagumo model with 10 runs of each parameter sampling scheme and 5000 posterior samples.

Sampling Method   Time (s)   Mean ESS (a, b, c)     Total Time / (Min mean ESS)   Relative Speed
Metropolis        14.5       139, 18.2, 23.4        0.80                          x1.1
MALA              24.9       119.3, 28.7, 52.3      0.87                          x1.0
mMALA Simp.       35.9       283.4, 136.6, 173.7    0.26                          x3.4

6.1 Nonlinear Ordinary Differential Equations

Statistical modelling using systems of nonlinear ordinary differential equations plays a vital role in unravelling the structure and behaviour of biological processes at a molecular level. The well-used Gaussian error model, however, is often inappropriate, particularly in molecular biology where limited measurements may not be repeated under exactly the same conditions and are susceptible to bias and systematic errors. The use of a Student-t distribution as a likelihood may help the robustness of the model with respect to possible outliers in the data. This presents a problem for standard manifold MCMC algorithms as it makes the metric tensor analytically intractable. We consider first the Fitzhugh-Nagumo model [1]. This synthetic dataset consisted of 200 time points simulated from the model between t = [0, 20] with parameters [a, b, c] = [0.2, 0.2, 3], to which Gaussian distributed noise was added with variance σ² = 0.25. We employed a Student-t likelihood with scaling parameter v = 3, and compared M-H and MALA (both employing scaled isotropic covariances), and simplified mMALA with approximate metric. The stepsize for each was automatically adjusted during the burn-in phase to obtain the theoretically optimal acceptance rate.
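As a sketch of this experimental setup, the synthetic data may be generated as follows; the FitzHugh-Nagumo parameterisation and the initial state [-1, 1] below are assumptions on our part, chosen to match the common form of the model, with the robust Student-t error model described above.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.stats import t as student_t

def fitzhugh_nagumo(state, time, a, b, c):
    # dV/dt = c (V - V^3/3 + R);  dR/dt = -(V - a + b R) / c
    V, R = state
    return [c * (V - V ** 3 / 3.0 + R), -(V - a + b * R) / c]

ts = np.linspace(0.0, 20.0, 200)                      # 200 points on t = [0, 20]
true_params = (0.2, 0.2, 3.0)                         # [a, b, c]
x0 = [-1.0, 1.0]                                      # assumed initial state
traj = odeint(fitzhugh_nagumo, x0, ts, args=true_params)
rng = np.random.default_rng(0)
data = traj + rng.normal(scale=0.5, size=traj.shape)  # sigma^2 = 0.25

def log_likelihood(params, data, ts, v=3.0, scale=0.5):
    # Robust Student-t error model with v degrees of freedom; its Expected
    # Fisher Information is analytically intractable for this model
    pred = odeint(fitzhugh_nagumo, x0, ts, args=tuple(params))
    return student_t.logpdf(data - pred, df=v, scale=scale).sum()
```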
Table 1 shows the results, including time-normalised effective sample size (ESS) as a measure of sampling efficiency excluding burn-in [1]. The approximate manifold sampler offers a modest improvement on the other two samplers; despite taking longer to run because of the computational cost of estimating the metric, the samples it draws exhibit lower autocorrelation, and as such the approximate manifold sampler offers the highest time-normalised ESS.

The toy Fitzhugh-Nagumo model is, however, rather simple, and despite being a popular example is rather unlike many realistic models used nowadays in the molecular modelling community. As such we consider another larger model that describes the enzymatic control of the circadian networks in Arabidopsis thaliana [21]. This is an extremely challenging, highly nonlinear model. We consider inferring the 6 rate parameters that control production and decay of proteins in the nucleus and cytoplasm (see [22] for the equations and full details of the model), again employing a Student-t likelihood for which the Expected Fisher Information is analytically intractable. We used parameter values from [22] to simulate observations for each of the six species at 48 time points representing 48 hours in the model. Student-t distributed noise was then added to obtain the data for inference.

We first investigated the effect that the tangent vector sample size for covariance estimation has on the sampling efficiency of simplified mMALA. The results in Table 2 show that there is a threshold above which a more accurate estimate of the metric tensor does not result in additional sampling advantage. The threshold for this particular example model is around 30 pseudodata samples.

Table 2: Comparison of pseudodata sample size on the quality of metric tensor estimation, and hence on sampling efficiency, using the circadian network example model, with 10 runs and 10,000 posterior samples.

Number of Samples   Time (s)   Min Mean ESS   Total Time / (Min mean ESS)   Relative Speed
10                  155.6      85.1           1.90                          x1.0
20                  163.2      171.9          0.95                          x2.0
30                  168.9      209.1          0.81                          x2.35
40                  175.2      208.3          0.84                          x2.26

Table 3 shows the time-normalised statistical efficiency for each of the sampling methods; this time we also compare an Adaptive MCMC algorithm [2] with M-H, MALA, and simplified mMALA with approximate geometry. Both the M-H and MALA algorithms fail to explore the target distribution and have severe difficulties with the extreme scalings and nonlinear correlation structure present in the manifold. The Adaptive MCMC method works reasonably well after taking 2000 samples to learn the covariance structure, although its performance is still poorer than the simplified mMALA scheme, which converges almost immediately with no adaptation time required; the approximation mMALA makes of the local geometry allows it to adequately deal with the different scalings and correlations that occur in different parts of the space.

Table 3: Summary of results for the circadian network model with 10 runs of each parameter sampling scheme and 10,000 posterior samples.

Sampling Method   Time (s)   Min Mean ESS   Total Time / (Min mean ESS)   Relative Speed
Metropolis        37.1       6.0            6.2                           x4.4
MALA              101.3      3.7            27.4                          x1.0
Adaptive MCMC     110.4      46.7           2.34                          x11.7
mMALA Simp.       168.9      209.1          0.81                          x33.8

7 Conclusions

The use of Riemannian geometry can be very useful for enabling efficient sampling from arbitrary probability densities.
The metric tensor may be used for creating position-specific proposal mechanisms that allow MCMC methods to automatically adapt to the local correlation structure induced by the sensitivities of the parameters of a statistical model. The metric tensor may conveniently be defined as the Expected Fisher Information; however, this quantity is often either difficult or impossible to compute analytically. We have presented a sampling scheme that approximates the Expected Fisher Information by estimating the covariance structure of the tangent vectors at each point on the manifold. Considering this problem as one of inverse covariance estimation naturally led us to the use of ℓ1 regularisation to improve the estimation procedure. This had the added benefit of inducing sparsity into the metric tensor, which may offer computational advantages when proposing MCMC moves across the manifold. For future work it will be exciting to investigate the potential impact of approximate, sparse metric tensors for high dimensional problems.

Ben Calderhead gratefully acknowledges his Research Fellowship through the 2020 Science programme, funded by EPSRC grant number EP/I017909/1 and supported by Microsoft Research.

References

[1] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods (with discussion). Journal of the Royal Statistical Society: Series B, 73:123-214, 2011.
[2] H. Haario, E. Saksman and J. Tamminen. An adaptive Metropolis algorithm. Bernoulli, 7(2):223-242, 2001.
[3] G. Roberts and J. Rosenthal. Examples of adaptive MCMC. Journal of Computational and Graphical Statistics, 18(2), 2009.
[4] G. Roberts and O. Stramer. Langevin diffusions and Metropolis-Hastings algorithms. Methodol. Comput. Appl. Probab., 4:337-358, 2003.
[5] H. Tjelmeland and B. Hegstad. Mode jumping proposals in MCMC. Scandinavian Journal of Statistics, 28(1), 2001.
[6] S. Amari and H. Nagaoka. Methods of Information Geometry. Oxford University Press, 2000.
[7] Y. Qi and T. Minka. Hessian-based Markov chain Monte Carlo algorithms. 1st Cape Cod Workshop on Monte Carlo Methods, 2002.
[8] A. Honkela, T. Raiko, M. Kuusela, M. Tornio and J. Karhunen. Approximate Riemannian conjugate gradient learning for fixed-form variational Bayes. JMLR, 11:3235-3268, 2010.
[9] M. K. Murray and J. W. Rice. Differential Geometry and Statistics. Chapman and Hall, 1993.
[10] C. R. Rao. Information and accuracy attainable in the estimation of statistical parameters. Bull. Calc. Math. Soc., 37:81-91, 1945.
[11] H. Jeffreys. Theory of Probability, 1st ed. The Clarendon Press, Oxford, 1939.
[12] J. Kent. Time reversible diffusions. Adv. Appl. Probab., 10:819-835, 1978.
[13] J. Besag, P. Green, D. Higdon and K. Mengersen. Bayesian computation and stochastic systems. Statistical Science, 10(1):3-41, 1995.
[14] A. Doucet, P. Jacob and A. Johansen. Discussion of Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B, 73:162, 2011.
[15] J. Friedman, T. Hastie and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
[16] O. Banerjee, L. El Ghaoui and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. JMLR, 9(6), 2008.
[17] P. Ravikumar, M. J. Wainwright, G. Raskutti and B. Yu. Model selection in Gaussian graphical models: high-dimensional consistency of l1-regularized MLE. NIPS 21, 2008.
[18] A. J. Rothman, P. J. Bickel, E. Levina and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494-515, 2008.
[19] J. Duchi, S. Gould and D. Koller. Projected subgradient methods for learning sparse Gaussians. Conference on Uncertainty in Artificial Intelligence, 2008.
[20] M. A. Sustik and B. Calderhead. GLASSOFAST: an efficient GLASSO implementation. Technical Report TR-12-29, Computer Science Department, University of Texas at Austin, 2012.
[21] J. C. W. Locke, A. Millar and M. Turner. Modelling genetic networks with noisy and varied experimental data: the circadian clock in Arabidopsis thaliana. J. Theor. Biol., 234:383-393, 2005.
[22] B. Calderhead and M. Girolami. Statistical analysis of nonlinear dynamical systems using differential geometric sampling methods. Journal of the Royal Society Interface Focus, 1(6), 2011.
[23] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58:267-288, 1996.
Learning to Discover Social Circles in Ego Networks

Jure Leskovec, Stanford, USA ([email protected])
Julian McAuley, Stanford, USA ([email protected])

Abstract

Our personal social networks are big and cluttered, and currently there is no good way to organize them. Social networking sites allow users to manually categorize their friends into social circles (e.g. 'circles' on Google+, and 'lists' on Facebook and Twitter); however, they are laborious to construct and must be updated whenever a user's network grows. We define a novel machine learning task of identifying users' social circles. We pose the problem as a node clustering problem on a user's ego-network, a network of connections between her friends. We develop a model for detecting circles that combines network structure as well as user profile information. For each circle we learn its members and the circle-specific user profile similarity metric. Modeling node membership to multiple circles allows us to detect overlapping as well as hierarchically nested circles. Experiments show that our model accurately identifies circles on a diverse set of data from Facebook, Google+, and Twitter, for all of which we obtain hand-labeled ground-truth.

1 Introduction

Online social networks allow users to follow streams of posts generated by hundreds of their friends and acquaintances. Users' friends generate overwhelming volumes of information, and to cope with the 'information overload' they need to organize their personal social networks. One of the main mechanisms for users of social networking sites to organize their networks and the content generated by them is to categorize their friends into what we refer to as social circles. Practically all major social networks provide such functionality, for example, 'circles' on Google+, and 'lists' on Facebook and Twitter. Once a user creates her circles, they can be used for content filtering (e.g. to filter status updates posted by distant acquaintances), for privacy (e.g. to hide personal information from coworkers), and for sharing groups of users that others may wish to follow.

Currently, users in Facebook, Google+ and Twitter identify their circles either manually, or in a naive fashion by identifying friends sharing a common attribute. Neither approach is particularly satisfactory: the former is time consuming and does not update automatically as a user adds more friends, while the latter fails to capture individual aspects of users' communities, and may function poorly when profile information is missing or withheld.

In this paper we study the problem of automatically discovering users' social circles. In particular, given a single user with her personal social network, our goal is to identify her circles, each of which is a subset of her friends. Circles are user-specific, as each user organizes her personal network of friends independently of all other users to whom she is not connected. This means that we can formulate the problem of circle detection as a clustering problem on her ego-network, the network of friendships between her friends. In Figure 1 we are given a single user u and we form a network between her friends v_i. We refer to the user u as the ego and to the nodes v_i as alters. The task then is to identify the circles to which each alter v_i belongs, as in Figure 1. In other words, the goal is to find nested as well as overlapping communities/clusters in u's ego-network. Generally, there are two useful sources of data that help with this task.
The first is the set of edges of the ego-network. We expect that circles are formed by densely-connected sets of alters [20].

Figure 1: An ego-network with labeled circles. This network shows typical behavior that we observe in our data: approximately 25% of our ground-truth circles (from Facebook) are contained completely within another circle, 50% overlap with another circle, and 25% of the circles have no members in common with any other circle. The goal is to discover these circles given only the network between the ego's friends.

We aim to discover circle memberships and to find common properties around which circles form. However, different circles overlap heavily, i.e., alters belong to multiple circles simultaneously [1, 21, 28, 29], and many circles are hierarchically nested in larger ones (Figure 1). Thus it is important to model an alter's memberships to multiple circles. Secondly, we expect that each circle is not only densely connected but that its members also share common properties or traits [18, 28]. Thus we need to explicitly model the different dimensions of user profiles along which each circle emerges.

We model circle affiliations as latent variables, and similarity between alters as a function of common profile information. We propose an unsupervised method to learn which dimensions of profile similarity lead to densely linked circles. Our model has two innovations: first, in contrast to mixed-membership models [2] we predict hard assignment of a node to multiple circles, which proves critical for good performance. Second, by proposing a parameterized definition of profile similarity, we learn the dimensions of similarity along which links emerge. This extends the notion of homophily [12] by allowing different circles to form along different social dimensions, an idea related to the concept of Blau spaces [16]. We achieve this by allowing each circle to have a different definition of profile similarity, so that one circle might form around friends from the same school, and another around friends from the same location. We learn the model by simultaneously choosing node circle memberships and profile similarity functions so as to best explain the observed data.

We introduce a dataset of 1,143 ego-networks from Facebook, Google+, and Twitter, for which we obtain hand-labeled ground-truth from 5,636 different circles (available at http://snap.stanford.edu/data/). Experimental results show that by simultaneously considering social network structure as well as user profile information, our method performs significantly better than natural alternatives and the current state-of-the-art. Besides being more accurate, our method also allows us to generate automatic explanations of why certain nodes belong to common communities. Our method is completely unsupervised, and is able to automatically determine both the number of circles as well as the circles themselves.

Further Related Work. Topic-modeling techniques have been used to uncover 'mixed-memberships' of nodes to multiple groups [2], and extensions allow entities to be attributed with text information [3, 5, 11, 13, 26]. Classical algorithms tend to identify communities based on node features [9] or graph structure [1, 21], but rarely use both in concert. Our work is related to [30] in the sense that it performs clustering on social-network data, and [23], which models memberships to multiple communities. Finally, there are works that model network data similar to ours [6, 17], though the underlying models do not form communities.
As we shall see, our problem has unique characteristics that require a new model. An extended version of our article appears in [15].

2 A Generative Model for Friendships in Social Circles

We desire a model of circle formation with the following properties: (1) Nodes within circles should have common properties, or 'aspects'. (2) Different circles should be formed by different aspects, e.g. one circle might be formed by family members, and another by students who attended the same university. (3) Circles should be allowed to overlap, and 'stronger' circles should be allowed to form within 'weaker' ones, e.g. a circle of friends from the same degree program may form within a circle of friends from the same university, as in Figure 1. (4) We would like to leverage both profile information and network structure in order to identify the circles. Ideally we would like to be able to pinpoint which aspects of a profile caused a circle to form, so that the model is interpretable by the user.

The input to our model is an ego-network G = (V, E), along with 'profiles' for each user v ∈ V. The 'center' node u of the ego-network (the 'ego') is not included in G; rather, G consists only of u's friends (the 'alters'). We define the ego-network in this way precisely because creators of circles do not themselves appear in their own circles. For each ego-network, our goal is to predict a set of circles C = {C_1, ..., C_K}, C_k ⊆ V, and associated parameter vectors θ_k that encode how each circle emerged. We encode 'user profiles' into pairwise features φ(x, y) that in some way capture what properties the users x and y have in common. We first describe our model, which can be applied using arbitrary feature vectors φ(x, y), and in Section 5 we describe several ways to construct feature vectors φ(x, y) that are suited to our particular application.

We describe a model of social circles that treats circle memberships as latent variables. Nodes within a common circle are given an opportunity to form an edge, which naturally leads to hierarchical and overlapping circles. We will then devise an unsupervised algorithm to jointly optimize the latent variables and the profile similarity parameters so as to best explain the observed network data. Our model of social circles is defined as follows. Given an ego-network G and a set of K circles C = {C_1, ..., C_K}, we model the probability that a pair of nodes (x, y) ∈ V × V form an edge as
$$p\big((x,y) \in E\big) \propto \exp\Big\{ \underbrace{\sum_{C_k \ni \{x,y\}} \langle \phi(x,y), \theta_k \rangle}_{\text{circles containing both nodes}} - \underbrace{\sum_{C_k \not\ni \{x,y\}} \alpha_k \langle \phi(x,y), \theta_k \rangle}_{\text{all other circles}} \Big\}. \qquad (1)$$
For each circle C_k, θ_k is the profile similarity parameter that we will learn. The idea is that ⟨φ(x, y), θ_k⟩ is high if both nodes belong to C_k, and low if either of them does not (α_k trades off these two effects). Since the feature vector φ(x, y) encodes the similarity between the profiles of two users x and y, the parameter vector θ_k encodes what dimensions of profile similarity caused the circle to form, so that nodes within a circle C_k should 'look similar' according to θ_k. Considering that edges e = (x, y) are generated independently, we can write the probability of G as
$$P_\Theta(G; C) = \prod_{e \in E} p(e \in E) \times \prod_{e \notin E} p(e \notin E), \qquad (2)$$
where Θ = {(θ_k, α_k)}_{k=1...K} is our set of model parameters. Defining the shorthand notation
$$d_k(e) = \delta(e \in C_k) - \alpha_k \, \delta(e \notin C_k), \qquad \Phi(e) = \sum_{C_k \in C} d_k(e) \, \langle \phi(e), \theta_k \rangle$$
allows us to write the log-likelihood of G:
$$l_\Theta(G; C) = \sum_{e \in E} \Phi(e) - \sum_{e \in V \times V} \log\big(1 + e^{\Phi(e)}\big). \qquad (3)$$
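A direct, if inefficient, sketch of evaluating (eq. 3) follows, assuming phi returns the pairwise feature vectors described in Section 5 and that edge direction is handled by the convention used to populate E.

```python
import numpy as np

def log_likelihood(E, nodes, circles, phi, thetas, alphas):
    """Log-likelihood l_Theta(G; C) of eq. (3). E is a set of edges,
    circles a list of node sets, phi(x, y) the pairwise feature vector,
    and thetas/alphas the per-circle parameters."""
    ll = 0.0
    for x in nodes:
        for y in nodes:
            if x == y:
                continue
            # Phi(e) = sum_k d_k(e) <phi(e), theta_k>
            Phi = 0.0
            for C, theta, alpha in zip(circles, thetas, alphas):
                d = 1.0 if (x in C and y in C) else -alpha
                Phi += d * np.dot(phi(x, y), theta)
            # add Phi(e) only for observed edges; the log-partition term
            # log(1 + e^Phi) is accumulated over every pair
            ll += Phi * ((x, y) in E) - np.logaddexp(0.0, Phi)
    return ll
```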
Next, we describe how to optimize node circle memberships C as well as the parameters of the user profile similarity functions Θ = {(θ_k, α_k)} (k = 1, ..., K) given a graph G and user profiles.

3 Unsupervised Learning of Model Parameters

Treating circles C as latent variables, we aim to find Θ̂ = {θ̂, α̂} so as to maximize the regularized log-likelihood of (eq. 3), i.e.,
$$\hat{\Theta}, \hat{C} = \operatorname*{argmax}_{\Theta, C} \; l_\Theta(G; C) - \lambda \, \Omega(\theta). \qquad (4)$$
We solve this problem using coordinate ascent on Θ and C [14]:
$$C^t = \operatorname*{argmax}_{C} \; l_{\Theta^t}(G; C), \qquad (5)$$
$$\Theta^{t+1} = \operatorname*{argmax}_{\Theta} \; l_\Theta(G; C^t) - \lambda \, \Omega(\theta). \qquad (6)$$
Noting that (eq. 3) is concave in Θ, we optimize (eq. 6) through gradient ascent, where partial derivatives are given by
$$\frac{\partial l}{\partial \theta_k} = \sum_{e \in E} d_k(e) \, \phi(e) - \sum_{e \in V \times V} \frac{e^{\Phi(e)}}{1 + e^{\Phi(e)}} \, d_k(e) \, \phi(e) - \lambda \frac{\partial \Omega}{\partial \theta_k},$$
$$\frac{\partial l}{\partial \alpha_k} = -\sum_{e \in E} \delta(e \notin C_k) \, \langle \phi(e), \theta_k \rangle + \sum_{e \in V \times V} \frac{e^{\Phi(e)}}{1 + e^{\Phi(e)}} \, \delta(e \notin C_k) \, \langle \phi(e), \theta_k \rangle.$$
For fixed C \ C_k, we note that solving argmax_{C_k} l_Θ(G; C) can be expressed as pseudo-boolean optimization in a pairwise graphical model [4], i.e., it can be written as
$$C_k = \operatorname*{argmax}_{C} \sum_{(x,y) \in V \times V} E_{(x,y)}\big(\delta(x \in C), \delta(y \in C)\big). \qquad (7)$$
In words, we want edges with high weight (under θ_k) to appear in C_k, and edges with low weight to appear outside of C_k. Defining $o_k(e) = \sum_{C_j \in C \setminus C_k} d_j(e) \langle \phi(e), \theta_j \rangle$, the energy E_e of (eq. 7) is
$$E_e(0,0) = E_e(0,1) = E_e(1,0) = \begin{cases} o_k(e) - \alpha_k \langle \phi(e), \theta_k \rangle - \log\!\big(1 + e^{o_k(e) - \alpha_k \langle \phi(e), \theta_k \rangle}\big), & e \in E, \\ -\log\!\big(1 + e^{o_k(e) - \alpha_k \langle \phi(e), \theta_k \rangle}\big), & e \notin E, \end{cases}$$
$$E_e(1,1) = \begin{cases} o_k(e) + \langle \phi(e), \theta_k \rangle - \log\!\big(1 + e^{o_k(e) + \langle \phi(e), \theta_k \rangle}\big), & e \in E, \\ -\log\!\big(1 + e^{o_k(e) + \langle \phi(e), \theta_k \rangle}\big), & e \notin E. \end{cases}$$
By expressing the problem in this form we can draw upon existing work on pseudo-boolean optimization. We use the publicly-available 'QPBO' software described in [22], which is able to accurately approximate problems of the form shown in (eq. 7). We solve (eq. 7) for each C_k in a random order. The two optimization steps of (eq. 5) and (eq. 6) are repeated until convergence, i.e., until C^{t+1} = C^t. We regularize (eq. 4) using the ℓ1 norm, i.e., $\Omega(\theta) = \sum_{k=1}^{K} \sum_{i=1}^{|\theta_k|} |\theta_{ki}|$, which leads to sparse (and readily interpretable) parameters.

Since ego-networks are naturally relatively small, our algorithm can readily handle problems at the scale required. In the case of Facebook, the average ego-network has around 190 nodes [24], while the largest network we encountered has 4,964 nodes. Note that since the method is unsupervised, inference is performed independently for each ego-network. This means that our method could be run on the full Facebook graph (for example), as circles are independently detected for each user, and the ego-networks typically contain only hundreds of nodes.

Hyperparameter estimation. To choose the optimal number of circles, we choose K so as to minimize an approximation to the Bayesian Information Criterion (BIC) [2, 8, 25],
$$\hat{K} = \operatorname*{argmin}_{K} \; \mathrm{BIC}(K; \Theta_K), \qquad (8)$$
where Θ_K is the set of parameters predicted for a particular number of communities K, and
$$\mathrm{BIC}(K; \Theta_K) \simeq -2 \, l_{\Theta_K}(G; C) + |\Theta_K| \log |E|. \qquad (9)$$
The regularization parameter λ ∈ {0, 1, 10, 100} was determined using leave-one-out cross validation, though in our experience it did not significantly impact performance.
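To make the gradient-ascent step of (eq. 6) concrete, here is a minimal sketch of the gradient with respect to θ_k under the ℓ1 regulariser above; the pseudo-boolean update for C is omitted, and phi and the parameter lists follow the notation of Section 2.

```python
import numpy as np

def grad_theta_k(E, pairs, k, circles, phi, thetas, alphas, lam):
    """Gradient of the log-likelihood (eq. 3) with respect to theta_k,
    plus the l1 subgradient, for the gradient ascent step of eq. (6).
    pairs enumerates V x V with x != y."""
    g = np.zeros_like(thetas[k])
    for (x, y) in pairs:
        f = phi(x, y)
        Phi = sum((1.0 if (x in C and y in C) else -a) * np.dot(f, th)
                  for C, th, a in zip(circles, thetas, alphas))
        d_k = 1.0 if (x in circles[k] and y in circles[k]) else -alphas[k]
        p = 1.0 / (1.0 + np.exp(-Phi))       # e^Phi / (1 + e^Phi)
        # edge term contributes only when (x, y) is observed
        g += d_k * f * (((x, y) in E) - p)
    return g - lam * np.sign(thetas[k])      # subgradient of the l1 penalty
```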
4 Dataset Description

Our goal is to evaluate our unsupervised method on ground-truth data. We expended significant time, effort, and resources to obtain high quality hand-labeled data (available at http://snap.stanford.edu/data/). We were able to obtain ego-networks and ground-truth from three major social networking sites: Facebook, Google+, and Twitter. From Facebook we obtained profile and network data from 10 ego-networks, consisting of 193 circles and 4,039 users. To do so we developed our own Facebook application and conducted a survey of ten users, who were asked to manually identify all the circles to which their friends belonged. On average, users identified 19 circles in their ego-networks, with an average circle size of 22 friends. Examples of such circles include students of common universities, sports teams, relatives, etc.

[Figure 2 shows example profile trees for two users, Alan Turing and Dilly Knox, with leaf paths such as work -> position -> Cryptanalyst and education -> name -> Cambridge, together with the resulting difference vectors σ_{x,y} and σ'_{x,y}.]

Figure 2: Feature construction. Profiles are tree-structured, and we construct features by comparing paths in those trees. Examples of trees for two users x (blue) and y (pink) are shown at left. Two schemes for constructing feature vectors from these profiles are shown at right: (1) (top right) we construct binary indicators measuring the difference between leaves in the two trees, e.g. work -> position -> Cryptanalyst appears in both trees; (2) (bottom right) we sum over the leaf nodes in the first scheme, maintaining the fact that the two users worked at the same institution, but discarding the identity of that institution.

For the other two datasets we obtained publicly accessible data. From Google+ we obtained data from 133 ego-networks, consisting of 479 circles and 106,674 users. The 133 ego-networks represent all 133 Google+ users who had shared at least two circles, and whose network information was publicly accessible at the time of our crawl. The Google+ circles are quite different to those from Facebook, in the sense that their creators have chosen to release them publicly, and because Google+ is a directed network (note that our model can very naturally be applied both to directed and undirected networks). For example, one circle contains candidates from the 2012 republican primary, who presumably do not follow their followers, nor each other. Finally, from Twitter we obtained data from 1,000 ego-networks, consisting of 4,869 circles (or 'lists' [10, 19, 27, 31]) and 81,362 users. The ego-networks we obtained range in size from 10 to 4,964 nodes.

Taken together our data contains 1,143 different ego-networks, 5,541 circles, and 192,075 users. The size differences between these datasets simply reflect the availability of data from each of the three sources. Our Facebook data is fully labeled, in the sense that we obtain every circle that a user considers to be a cohesive community, whereas our Google+ and Twitter data is only partially labeled, in the sense that we only have access to public circles. We design our evaluation procedure in Section 6 so that partial labels cause no issues.
5 Constructing Features from User Profiles

Profile information in all of our datasets can be represented as a tree where each level encodes increasingly specific information (Figure 2, left). From Google+ we collect data from six categories (gender, last name, job titles, institutions, universities, and places lived). From Facebook we collect data from 26 categories, including hometowns, birthdays, colleagues, political affiliations, etc. For Twitter, many choices exist as proxies for user profiles; we simply collect data from two categories, namely the set of hashtags and mentions used by each user during two weeks' worth of tweets. 'Categories' correspond to parents of leaf nodes in a profile tree, as shown in Figure 2.

We first describe a difference vector to encode the relationship between two profiles. A non-technical description is given in Figure 2. Suppose that users v ∈ V each have an associated profile tree T_v, and that l ∈ T_v is a leaf in that tree. We define the difference vector σ_{x,y} between two users x and y as a binary indicator encoding the profile aspects where users x and y differ (Figure 2, top right):
$$\sigma_{x,y}[l] = \delta\big((l \in T_x) \neq (l \in T_y)\big). \qquad (10)$$
Note that feature descriptors are defined per ego-network: while many thousands of high schools (for example) exist among all Facebook users, only a small number appear among any particular user's friends. Although the above difference vector has the advantage that it encodes profile information at a fine granularity, it has the disadvantage that it is high-dimensional (up to 4,122 dimensions in the data we considered). One way to address this is to form difference vectors based on the parents of leaf nodes: this way, we encode what profile categories two users have in common, but disregard specific values (Figure 2, bottom right). For example, we encode how many hashtags two users tweeted in common, but discard which hashtags they tweeted:
$$\sigma'_{x,y}[p] = \sum_{l \in \mathrm{children}(p)} \sigma_{x,y}[l]. \qquad (11)$$
This scheme has the advantage that it requires a constant number of dimensions, regardless of the size of the ego-network (26 for Facebook, 6 for Google+, 2 for Twitter, as described above).

Based on the difference vectors σ_{x,y} (and σ'_{x,y}) we now describe how to construct edge features φ(x, y). The first property we wish to model is that members of circles should have common relationships with each other:
$$\phi_1(x, y) = (1; -\sigma_{x,y}). \qquad (12)$$
The second property we wish to model is that members of circles should have common relationships to the ego of the ego-network. In this case, we consider the profile tree T_u of the ego user u. We then define our features in terms of that user:
$$\phi_2(x, y) = (1; -|\sigma_{x,u} - \sigma_{y,u}|) \qquad (13)$$
(|σ_{x,u} − σ_{y,u}| is taken elementwise). These two parameterizations allow us to assess which mechanism better captures users' subjective definition of a circle. In both cases, we include a constant feature ('1'), which controls the probability that edges form within circles, or equivalently measures the extent to which circles are made up of friends. Importantly, this allows us to predict memberships even for users who have no profile information, simply due to their patterns of connectivity. Similarly, for the 'compressed' difference vector σ'_{x,y}, we define
$$\phi'_1(x, y) = (1; -\sigma'_{x,y}), \qquad \phi'_2(x, y) = (1; -|\sigma'_{x,u} - \sigma'_{y,u}|). \qquad (14)$$
To summarize, we have identified four ways of representing the compatibility between different aspects of profiles for two users: two ways of constructing a difference vector (σ_{x,y} vs. σ'_{x,y}) and two ways of capturing the compatibility of a pair of profiles (φ(x, y) vs. φ'(x, y)). A sketch of this feature construction follows.
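The sketch below illustrates (eq. 10)-(eq. 12), treating each profile as a set of leaf paths; the tuple representation of paths is our own choice for illustration.

```python
def difference_vector(Tx, Ty, leaves):
    """Binary difference vector sigma_{x,y} of eq. (10). Profiles are sets
    of leaf paths, e.g. ('education', 'name', 'Cambridge'); `leaves` fixes
    the coordinate order for this ego-network."""
    return [1 if (l in Tx) != (l in Ty) else 0 for l in leaves]

def compressed_vector(Tx, Ty, leaves):
    """Compressed vector sigma'_{x,y} of eq. (11): sum the indicators of
    eq. (10) within each category (the parent of a leaf)."""
    totals = {}
    for bit, l in zip(difference_vector(Tx, Ty, leaves), leaves):
        totals[l[:-1]] = totals.get(l[:-1], 0) + bit
    return totals

def phi_1(Tx, Ty, leaves):
    """Friend-to-friend features of eq. (12): a constant entry followed by
    the negated difference vector, so similar profiles score highly under
    a positive parameter vector theta_k."""
    return [1] + [-b for b in difference_vector(Tx, Ty, leaves)]
```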
6 Experiments

Although our method is unsupervised, we can evaluate it on ground-truth data by examining the maximum-likelihood assignments of the latent circles C = {C_1, ..., C_K} after convergence. Our goal is that for a properly regularized model, the latent variables will align closely with the human-labeled ground-truth circles C̄ = {C̄_1, ..., C̄_K̄}.

Evaluation metrics. To measure the alignment between a predicted circle C and a ground-truth circle C̄, we compute the Balanced Error Rate (BER) between the two circles [7],
$$\mathrm{BER}(C, \bar{C}) = \frac{1}{2}\left( \frac{|C \setminus \bar{C}|}{|C|} + \frac{|\bar{C} \setminus C|}{|\bar{C}|} \right).$$
This measure assigns equal importance to false positives and false negatives, so that trivial or random predictions incur an error of 0.5 on average. Such a measure is preferable to the 0/1 loss (for example), which assigns extremely low error to trivial predictions. We also report the F1 score, which we find produces qualitatively similar results.

Aligning predicted and ground-truth circles. Since we do not know the correspondence between circles in C and C̄, we compute the optimal match via linear assignment by maximizing
$$\max_{f: C \to \bar{C}} \frac{1}{|f|} \sum_{C \in \mathrm{dom}(f)} \big(1 - \mathrm{BER}(C, f(C))\big), \qquad (15)$$
where f is a (partial) correspondence between C and C̄. That is, if the number of predicted circles |C| is less than the number of ground-truth circles |C̄|, then every circle C ∈ C must have a match C̄ ∈ C̄, but if |C| > |C̄|, we do not incur a penalty for additional predictions that could have been circles but were not included in the ground-truth. We use established techniques to estimate the number of circles, so that none of the baselines suffers a disadvantage by mispredicting K̂ = |C̄|, nor can any method predict the 'trivial' solution of returning the powerset of all users. We note that removing the bijectivity requirement (i.e., forcing all circles to be aligned by allowing multiple predicted circles to match a single ground-truth circle or vice versa) led to qualitatively similar results.
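A sketch of this evaluation using scipy's linear assignment solver; for simplicity it aligns circles one-to-one on the 1 - BER score matrix, which coincides with (eq. 15) when |C| ≤ |C̄| and is a close surrogate otherwise.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ber(C, C_true):
    """Balanced Error Rate between a predicted and a ground-truth circle."""
    C, C_true = set(C), set(C_true)
    return 0.5 * (len(C - C_true) / max(len(C), 1)
                  + len(C_true - C) / max(len(C_true), 1))

def alignment_score(predicted, ground_truth):
    """Align predicted to ground-truth circles by linear assignment on the
    1 - BER score matrix, then average over the matched pairs (eq. 15)."""
    S = np.array([[1.0 - ber(C, Cb) for Cb in ground_truth]
                  for C in predicted])
    rows, cols = linear_sum_assignment(-S)   # Hungarian algorithm, maximising
    return S[rows, cols].mean()
```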
First we experimented with Mixed Membership Stochastic Block Models [2], which consider only network information, and variants that also consider text attributes [5, 6, 13]. For each node, mixedmembership models predict a stochastic vector encoding partial circle memberships, which we threshold to generate ?hard? assignments. We also considered Block-LDA [3], where we generate ?documents? by treating aspects of user profiles as words in a bag-of-words model. Secondly, we experimented with classical clustering algorithms, such as K-means and Hierarchical Clustering [9], that form clusters based only on node profiles, but ignore the network. Conversely we considered Link Clustering [1] and Clique Percolation [21], which use network information, but ignore profiles. We also considered the Low-Rank Embedding approach of [30], where node attributes and edge information are projected into a feature space where classical clustering techniques can be applied. Finally we considered Multi-Assignment Clustering [23], which is promising in that it predicts hard assignments to multiple clusters, though it does so without using the network. Of the eight baselines highlighted above we report the three whose overall performance was the best, namely Block-LDA [3] (which slightly outperformed mixed membership stochastic block models [2]), Low-Rank Embedding [30], and Multi-Assignment Clustering [23]. Performance on Facebook, Google+, and Twitter Data. Figure 3 shows results on our Facebook, Google+, and Twitter data. Circles were aligned as described in (eq. 15), with the number of circles ? determined as described in Section 3. For non-probabilistic baselines, we chose K ? so as to K maximize the modularity, as described in [20]. In terms of absolute performance our best model ?1 achieves BER scores of 0.84 on Facebook, 0.72 on Google+ and 0.70 on Twitter (F1 scores are 0.59, 0.38, and 0.34, respectively). The lower F1 scores on Google+ and Twitter are explained by the fact that many circles have not been maintained since they were initially created: we achieve high recall (we recover the friends in each circle), but at low precision (we recover additional friends who appeared after the circle was created). Comparing our method to baselines we notice that we outperform all baselines on all datasets by a statistically significant margin. Compared to the nearest competitors, our best performing features ?1 improve on the BER by 43% on Facebook, 26% on Google+, and 16% on Twitter (improvements in terms of the F1 score are similar). Regarding the performance of the baseline methods, we note that good performance seems to depend critically on predicting hard memberships to multiple circles, using a combination of node and edge information; none of the baselines exhibit precisely this combination, a shortcoming our model addresses. Both of the features we propose (friend-to-friend features ?1 and friend-to-user features ?2 ) perform similarly, revealing that both schemes ultimately encode similar information, which is not surprising, 7 studied the same degree speak the same languages feature index for ?i1 Americans weight ?4,i 1 feature index for ?1i weight ?2,i weight ?1,i feature index for ?1i 1 Germans who went to school in 1997 1 1 studied the same degree feature index for ?i1 1 same level of education feature index for ?i1 college educated people working at a particular institute feature index for ?1i feature index for ?1i weight ?4,i living in S.F. 
since users and their friends have similar profiles.

[Figures 4 and 5 appear here. Figure 5's panels are labelled by the circle properties recovered, including 'studied the same degree', 'speak the same languages', 'Americans', 'Germans who went to school in 1997', 'same level of education', 'college educated people working at a particular institute', 'living in S.F. or Stanford', 'people with PhDs', and 'worked for the same employer at the same time'; each panel plots weight θ_{k,i} against feature index for φ1.]

Figure 4: Three detected circles on a small ego-network from Facebook, compared to three ground-truth circles (BER ≈ 0.81). Blue nodes: true positives. Grey: true negatives. Red: false positives. Yellow: false negatives. Our method correctly identifies the largest circle (left), a sub-circle contained within it (center), and a third circle that significantly overlaps with it (right).

Figure 5: Parameter vectors of four communities for a particular Facebook user. The top four plots show 'complete' features φ1, while the bottom four plots show 'compressed' features φ'1 (in both cases, BER ≈ 0.78). For example, the former features encode the fact that members of a particular community tend to speak German, while the latter features encode the fact that they speak the same language. (Personally identifiable annotations have been suppressed.)

Using the 'compressed' features φ'1 and φ'2 does not significantly impact performance, which is promising since they have far lower dimension than the full features; what this reveals is that it is sufficient to model categories of attributes that users have in common (e.g. same school, same town), rather than the attribute values themselves.

We found that all algorithms perform significantly better on Facebook than on Google+ or Twitter. There are a few explanations: firstly, our Facebook data is complete, in the sense that survey participants manually labeled every circle in their ego-networks, whereas in other datasets we only observe publicly-visible circles, which may not be up-to-date. Secondly, the 26 profile categories available from Facebook are more informative than the 6 categories from Google+, or the tweet-based profiles we build from Twitter. A more basic difference lies in the nature of the networks themselves: edges in Facebook encode mutual ties, whereas edges in Google+ and Twitter encode follower relationships, which changes the role that circles serve [27]. The latter two points explain why algorithms that use either edge or profile information in isolation are unlikely to perform well on this data.

Qualitative analysis. Finally we examine the output of our model in greater detail. Figure 4 shows results of our method on an example ego-network from Facebook. Different colors indicate true and false positives and negatives. Our method is correctly able to identify overlapping circles as well as sub-circles (circles within circles). Figure 5 shows parameter vectors learned for four circles for a particular Facebook user. Positive weights indicate properties that users in a particular circle have in common. Notice how the model naturally learns the social dimensions that lead to a social circle. Moreover, the first parameter, which corresponds to the constant feature '1', has the highest weight; this reveals that membership to the same community provides the strongest signal that edges will form, while profile data provides a weaker (but still relevant) signal.

Acknowledgements. This research has been supported in part by NSF IIS-1016909, CNS-1010921, IIS-1159679, DARPA XDATA, DARPA GRAPHS, Albert Yu & Mary Bechmann Foundation, Boeing, Allyes, Samsung, Intel, Alfred P. Sloan Fellowship and the Microsoft Faculty Fellowship.

References

[1] Y.-Y. Ahn, J. Bagrow, and S. Lehmann. Link communities reveal multiscale complexity in networks. Nature, 2010.
[1] Y.-Y. Ahn, J. Bagrow, and S. Lehmann. Link communities reveal multiscale complexity in networks. Nature, 2010.
[2] E. Airoldi, D. Blei, S. Fienberg, and E. Xing. Mixed membership stochastic blockmodels. JMLR, 2008.
[3] R. Balasubramanyan and W. Cohen. Block-LDA: Jointly modeling entity-annotated text and entity-entity links. In SDM, 2011.
[4] E. Boros and P. Hammer. Pseudo-boolean optimization. Discrete Applied Mathematics, 2002.
[5] J. Chang and D. Blei. Relational topic models for document networks. In AIStats, 2009.
[6] J. Chang, J. Boyd-Graber, and D. Blei. Connections between the lines: augmenting social networks with text. In KDD, 2009.
[7] Y. Chen and C. Lin. Combining SVMs with various feature selection strategies. Springer, 2006.
[8] M. Handcock, A. Raftery, and J. Tantrum. Model-based clustering for social networks. Journal of the Royal Statistical Society Series A, 2007.
[9] S. Johnson. Hierarchical clustering schemes. Psychometrika, 1967.
[10] D. Kim, Y. Jo, I.-C. Moon, and A. Oh. Analysis of twitter lists as a potential source for discovering latent characteristics of users. In CHI, 2010.
[11] P. Krivitsky, M. Handcock, A. Raftery, and P. Hoff. Representing degree distributions, clustering, and homophily in social networks with latent cluster random effects models. Social Networks, 2009.
[12] P. Lazarsfeld and R. Merton. Friendship as a social process: A substantive and methodological analysis. In Freedom and Control in Modern Society. 1954.
[13] Y. Liu, A. Niculescu-Mizil, and W. Gryc. Topic-link LDA: joint models of topic and author community. In ICML, 2009.
[14] D. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[15] J. McAuley and J. Leskovec. Discovering social circles in ego networks. arXiv:1210.8182, 2012.
[16] M. McPherson. An ecology of affiliation. American Sociological Review, 1983.
[17] A. Menon and C. Elkan. Link prediction via matrix factorization. In ECML/PKDD, 2011.
[18] A. Mislove, B. Viswanath, K. Gummadi, and P. Druschel. You are who you know: Inferring user profiles in online social networks. In WSDM, 2010.
[19] P. Nasirifard and C. Hayes. Tadvise: A twitter assistant based on twitter lists. In SocInfo, 2011.
[20] M. Newman. Modularity and community structure in networks. PNAS, 2006.
[21] G. Palla, I. Derenyi, I. Farkas, and T. Vicsek. Uncovering the overlapping community structure of complex networks in nature and society. Nature, 2005.
[22] C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary MRFs via extended roof duality. In CVPR, 2007.
[23] A. Streich, M. Frank, D. Basin, and J. Buhmann. Multi-assignment clustering for boolean data. JMLR, 2012.
[24] J. Ugander, B. Karrer, L. Backstrom, and C. Marlow. The anatomy of the Facebook social graph. preprint, 2011.
[25] C. Volinsky and A. Raftery. Bayesian information criterion for censored survival models. Biometrics, 2000.
[26] D. Vu, A. Asuncion, D. Hunter, and P. Smyth. Dynamic egocentric models for citation networks. In ICML, 2011.
[27] S. Wu, J. Hofman, W. Mason, and D. Watts. Who says what to whom on twitter. In WWW, 2011.
[28] J. Yang and J. Leskovec. Community-affiliation graph model for overlapping community detection. In ICDM, 2012.
[29] J. Yang and J. Leskovec. Defining and evaluating network communities based on ground-truth. In ICDM, 2012.
[30] T. Yoshida. Toward finding hidden communities based on user profiles. In ICDM Workshops, 2010.
[31] J. Zhao. Examining the evolution of networks based on lists in twitter. In IMSAA, 2011.
Perceptron Learning of SAT

Matthew B. Blaschko, Center for Visual Computing, Ecole Centrale Paris, [email protected]
Alex Flint, Department of Engineering Science, University of Oxford, [email protected]

Abstract

Boolean satisfiability (SAT) as a canonical NP-complete decision problem is one of the most important problems in computer science. In practice, real-world SAT sentences are drawn from a distribution that may result in efficient algorithms for their solution. Such SAT instances are likely to have shared characteristics and substructures. This work approaches the exploration of a family of SAT solvers as a learning problem. In particular, we relate polynomial time solvability of a SAT subset to a notion of margin between sentences mapped by a feature function into a Hilbert space. Provided this mapping is based on polynomial time computable statistics of a sentence, we show that the existence of a margin between these data points implies the existence of a polynomial time solver for that SAT subset based on the Davis-Putnam-Logemann-Loveland algorithm. Furthermore, we show that a simple perceptron-style learning rule will find an optimal SAT solver with a bounded number of training updates. We derive a linear time computable set of features and show analytically that margins exist for important polynomial special cases of SAT. Empirical results show an order of magnitude improvement over a state-of-the-art SAT solver on a hardware verification task.

1 Introduction

SAT was originally shown to be a canonical NP-complete problem in Cook's seminal work [5]. SAT is of practical interest for solving a number of critical problems in applications such as theorem proving [8], model checking [2], planning [19], and bioinformatics [22]. That it is NP-complete indicates that an efficient learning procedure is unlikely to exist to solve arbitrary instances of SAT. Nevertheless, SAT instances resulting from real world applications are likely to have shared characteristics and substructures. We may view them as being drawn from a distribution over SAT instances, and for key problems this distribution may be benign in that a learning algorithm can enable quick determination of SAT. In this work, we explore the application of a perceptron-inspired learning algorithm applied to branching heuristics in the Davis-Putnam-Logemann-Loveland algorithm [8, 7]. The Davis-Putnam-Logemann-Loveland (DPLL) algorithm formulates SAT as a search problem, resulting in a valuation of variables that satisfies the sentence, or a tree resolution refutation proof indicating that the sentence is not satisfiable. The branching rule in this depth-first search procedure is a key determinant of the efficiency of the algorithm, and numerous heuristics have been proposed in the SAT literature [15, 16, 26, 18, 13]. Inspired by the recent framing of learning as search optimization [6], we explore here the application of a perceptron-inspired learning rule to application-specific samples of the SAT problem. Efficient learning of SAT has profound implications for algorithm development across computer science, as a vast number of important problems are polynomial time reducible to SAT. A number of authors have considered learning branching rules for SAT solvers. Ruml applied reinforcement learning to find valuations of satisfiable sentences [25]. An approach that has performed well in SAT competitions in recent years is based on selecting a heuristic from a fixed set and applying it on a per-sentence basis [27, 17].
The relationship between learnability and NP-completeness has long been considered in the literature, e.g. [20]. Closely related to our approach is the learning as search optimization framework [6]. That approach makes perceptron-style updates to a heuristic function in A* search, but to our knowledge has not been applied to SAT, and requires a level of supervision that is not available in a typical SAT setting. A similar approach to learning heuristics for search was explored in [12].

2 Theorem Proving as a Search Problem

The SAT problem [5] is to determine whether a sentence Φ in propositional logic is satisfiable. First we introduce some notation. A binary variable q takes on one of two possible values, {0, 1}. A literal p is a proposition of the form q (a "positive literal") or ¬q (a "negative literal"). A clause ω_k is a disjunction of n_k literals, p_1 ∨ p_2 ∨ · · · ∨ p_{n_k}. A unit clause contains exactly one literal. A sentence Φ in conjunctive normal form (CNF) [15] is a conjunction of m clauses, ω_1 ∧ ω_2 ∧ · · · ∧ ω_m. A valuation B for Φ assigns to each variable in Φ a value b_i ∈ {0, 1}. A variable is free under B if B does not assign it a value. A sentence Φ is satisfiable iff there exists a valuation under which Φ is true. CNF is considered a canonical representation for automated reasoning systems. All sentences in propositional logic can be transformed to CNF [15].

2.1 The Davis–Putnam–Logemann–Loveland algorithm

Davis et al. [7] proposed a simple procedure for recognising satisfiable CNF sentences on N variables. Their algorithm is essentially a depth first search over all possible 2^N valuations over the input sentence, with specialized criteria to prune the search and transformation rules to simplify the sentence. We summarise the DPLL procedure below.

  if Φ contains only unit clauses and no contradictions then return YES end if
  if Φ contains an empty clause then return NO end if
  for all unit clauses ω ∈ Φ do Φ := UnitPropagate(Φ, ω) end for
  for all literals p such that ¬p ∉ Φ do remove all clauses containing p from Φ end for
  p := PickBranch(Φ)
  return DPLL(Φ ∧ p) ∨ DPLL(Φ ∧ ¬p)

UnitPropagate simplifies Φ under the assumption that the unit clause ω holds. PickBranch applies a heuristic to choose a literal in Φ. Many modern SAT algorithms contain the DPLL procedure at their core [15, 16, 26, 18, 13], including top performers at recent SAT competitions [21]. Much recent work has focussed on choosing heuristics for the selection of branching literals, since good heuristics have been empirically shown to reduce processing time by several orders of magnitude [28, 16, 13]. In this paper we learn heuristics by optimizing over a family of the form

  argmax_p f(x, p)

where x is a node in the search tree, p is a candidate literal, and f is a priority function mapping possible branches to real numbers. The state x will contain at least a CNF sentence and possibly pointers to ancestor nodes or statistics of the local search region. Given this relaxed notion of the search state, we are unaware of any branching heuristics in the literature that cannot be expressed in this form. We explicitly describe several in section 4.
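To make the search structure concrete, the following is a minimal Python sketch of the DPLL procedure with a pluggable PickBranch heuristic. The clause encoding (frozensets of signed integers, where a negative integer denotes a negated variable) and all function names are our own illustrative choices, not part of the paper; the pure-literal elimination step is omitted for brevity.

# Minimal DPLL sketch. A sentence is a list of clauses; each clause is a
# frozenset of nonzero ints (positive int = positive literal, -v = negation).

def unit_propagate(clauses, lit):
    """Assume `lit` is true: drop satisfied clauses, shrink the rest."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied, drop it
        out.append(c - {-lit})            # falsified literal removed
    return out

def dpll(clauses, pick_branch):
    if not clauses:
        return True                       # all clauses satisfied
    if any(len(c) == 0 for c in clauses):
        return False                      # empty clause: contradiction
    for c in clauses:
        if len(c) == 1:                   # unit clause: forced assignment
            (lit,) = c
            return dpll(unit_propagate(clauses, lit), pick_branch)
    lit = pick_branch(clauses)            # heuristic branching decision
    return (dpll(unit_propagate(clauses, lit), pick_branch)
            or dpll(unit_propagate(clauses, -lit), pick_branch))

# Example: a trivial "first literal" heuristic; the learned f(x, p) would
# replace this choice.
print(dpll([frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})],
           lambda cs: next(iter(next(iter(cs))))))   # -> False (unsatisfiable)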
We use xj to denote a node that is visited in the application of the DPLL algorithm, and ?i (xj ) to denote the feature map associated with instantiating literal pi . Using reproducing kernel Hilbert space notation, our decision function at xj takes the form argmaxhf, ?i (xj )iH . (1) i We would like to learn f such that the expected search time is reduced. We define yij to be +1 if the instantiation of pi at xj leads to the shortest possible proof, and ?1 otherwise. Our learning procedure therefore will ideally learn a setting of f that only instantiates literals for which yij is +1. We define a margin in a standard way: max ? s.t. hf, ?i (xj )iH ? hf, ?k (xl )iH ? ? 3.1 ?{(i, j)|yij = +1}, {(k, l)|ykl = ?1} (2) Restriction to Satisfiable Sentences If we had access to all yij , the application of any binary learning algorithm to the problem of learning SAT would be straightforward. Unfortunately, the identity of yij is only known in the worst case after an exhaustive enumeration of all 2N variable assignments. We do note, however, that the DPLL algorithm is a depth?first search over literal valuations. Furthermore, for satisfiable sentences the length of the shortest proof is bounded by the number of variables. Consequently, in this case, all nodes visited on a branch of the search tree that resolved to unsatisfiable have yij = ?1 and the nodes on the branch leading to satisfiable have yij = +1. We may run the DPLL algorithm with a current setting of f and if the sentence is satisfiable, update f using the inferred yij . This learning framework is capable of computing in polynomial time valuations of satisfiable sentences in the following sense. Theorem 1 ? a polynomial time computable ? with ? > 0 ?? ? belongs to a subset of satisfiable sentences for which there exists a polynomial time algorithm to find a valid valuation. Proof Necessity is shown by noting that the argmax in each step of the DPLL algorithm is computable in time polynomial in the sentence length by computing ? for all literals, and that there exists a setting of f such that there will be at most a number of steps equal to the number of variables. Sufficiency is shown by noting that we may run the polynomial algorithm to find a valid valuation and use that valuation to construct a feature space with ? ? 0 in polynomial time. Concretely, choose a canonical ordering of literals indexed by i and let ?i (xj ) be a scalar. Set ?i (xj ) = +i if literal pi is instantiated in the solution found by the polynomial algorithm, ?1 otherwise. When f = 1, ? = 2. 2 Corollary 1 ? polynomial time computable feature space with ? > 0 for SAT ?? P = N P Proof If P = N P there is a polynomial time solution to SAT, meaning that there is a polynomial time solution to finding valuations satisfiable sentences. For satisfiable sentences, this indicates that there is a non-negative margin. For unsatisfiable sentences, either a proof exists with length less than the number of variables, or we may terminate the DPLL procedure after N + 1 steps and return unstatisfiable. 2 While Theorem 1 is positive for finding variable settings that satisfy sentences, unsatisfiable sentences remain problematic when we are unsure that there exists ? > 0 or if we have an incorrect setting of f . We are unaware of an efficient method to determine all yij for visited nodes in proofs of unsatisfiable sentences. However, we expect that similar substructures will exist in satisfiable and unsatisfiable sentences resulting from the same application. 
Early iterations of our learning algorithm will mistakenly explore branches of the search tree for satisfiable sentences, and these branches will share important characteristics with inefficient branches of proofs of unsatisfiability. Consequently, proofs of unsatisfiability may additionally benefit from a learning procedure applied only to satisfiable sentences. In the case that we analytically know that γ > 0 and we have a correct setting of f, we may use the termination procedure in Corollary 1.

Figure 1: Generation of training samples from the search tree. Nodes labeled in red result in backtracking and therefore have negative label, while those coloured blue lie on the path to a proof of satisfiability.

Figure 2: Geometry of the feature space. Positive and negative nodes are separated by a margin of γ. Given the current estimate of f, a threshold, T, is selected as described in Section 3.2. The positive nodes with a score less than T are averaged, as are negative nodes with a score greater than T. The resulting means lie within the respective convex hulls of the positive and negative sets, ensuring that the geometric conditions of the proof of Theorem 2 are fulfilled.

3.2 Davis-Putnam-Logemann-Loveland Stochastic Gradient

We use a modified perceptron-style update based on the learning as search optimization framework proposed in [6]. In contrast to that work, we do not have a notion of "good" and "bad" nodes at each search step. Instead, we must run the DPLL algorithm to completion with a fixed model, f_t. We know that nodes on a path to a valuation that satisfies the sentence have positive labels, and those nodes that require backtracking have negative labels (Figure 1). If the sentence is satisfiable, we may compute a DPLL stochastic gradient, ∇_DPLL, and update f. We define two sets of nodes, S_+ and S_−, such that all nodes in S_+ have positive label and lower score than all nodes in S_− (Figure 2). In this work, we have used the sufficient condition of defining these sets by setting a score threshold, T, such that f_t(φ_i(x_j)) < T ∀(i, j) ∈ S_+, f_t(φ_i(x_j)) > T ∀(i, j) ∈ S_−, and |S_+| + |S_−| is maximized. The DPLL stochastic gradient update is defined as follows:

  ∇_DPLL = Σ_{(i,j)∈S_−} φ_i(x_j) / |S_−|  −  Σ_{(k,l)∈S_+} φ_k(x_l) / |S_+|,    f_{t+1} = f_t − η ∇_DPLL,    (3)

where η is a learning rate. While poor settings of f_0 may result in a very long proof before learning can occur, we show in Section 4 that we can initialize f_0 to emulate the behavior of current state-of-the-art SAT solvers. Subsequent updates improve performance over the baseline.
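A minimal sketch of the update in Equation (3) follows, assuming each node visited in a completed run on a satisfiable sentence has already been featurized and labelled as in Section 3.1. The array layout and helper names are illustrative, not from the paper.

import numpy as np

def dpll_gradient_update(f, feats, labels, eta=1.0):
    """One perceptron-style update (Eq. 3). feats: (n, d) numpy array of
    phi_i(x_j) for the n visited nodes; labels: numpy array of +1 / -1."""
    scores = feats @ f
    # Choose threshold T keeping the most nodes such that every kept positive
    # scores below T and every kept negative scores above T.
    best_T, best_kept = None, -1
    for T in np.unique(scores):
        kept = (np.sum((labels > 0) & (scores < T))
                + np.sum((labels < 0) & (scores > T)))
        if kept > best_kept:
            best_T, best_kept = T, kept
    S_pos = feats[(labels > 0) & (scores < best_T)]
    S_neg = feats[(labels < 0) & (scores > best_T)]
    if len(S_pos) == 0 or len(S_neg) == 0:
        return f                          # nothing to correct on this sentence
    grad = S_neg.mean(axis=0) - S_pos.mean(axis=0)
    return f - eta * grad                 # raises scores of positive nodes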
We define R to be a positive real value such that

  ‖φ_i(x_j) − φ_k(x_l)‖ ≤ R   ∀ i, j, k, l.

Theorem 2. For any training sequence that is separable by a margin of size γ with ‖f‖ = 1, using the update rule in Equation (3) with η = 1, the number of errors (updates) made during training on satisfiable sentences is bounded above by R²/γ².

Proof. Let f_1(φ(x)) = 0 ∀φ(x). Considering the kth update,

  ‖f_{k+1}‖² = ‖f_k − ∇_DPLL‖² = ‖f_k‖² − 2⟨f_k, ∇_DPLL⟩ + ‖∇_DPLL‖² ≤ ‖f_k‖² + 0 + R².    (4)

We note that ⟨f_k, ∇_DPLL⟩ ≥ 0 for any selection of training examples such that the average of the negative examples scores higher than the average of the positive examples generated by running a DPLL search. It is possible that some negative examples with lower scores than some positive nodes will be visited during the depth first search of the DPLL algorithm, but we are guaranteed that at least one of them will have a higher score. Similarly, some positive examples may have higher scores than the highest scoring negative example. In both cases, we may simply discard such instances from the training algorithm (as described in Section 3.2), guaranteeing the desired inequality. By induction, ‖f_{k+1}‖² ≤ kR². Let u be an element of H that obtains a margin of γ on the training set. We next obtain a lower bound on ⟨u, f_{k+1}⟩ = ⟨u, f_k⟩ − ⟨u, ∇_DPLL⟩ ≥ ⟨u, f_k⟩ + γ. That −⟨u, ∇_DPLL⟩ ≥ γ follows from the fact that the means of the positive and negative training examples lie in the convex hull of the positive and negative sets, respectively, and that u achieves a margin of γ. By induction, ⟨u, f_{k+1}⟩ ≥ kγ. Putting the two results together gives √k R ≥ ‖f_{k+1}‖ ≥ ⟨u, f_{k+1}⟩ ≥ kγ, which, after some algebra, yields k ≤ (R/γ)². □

The proof of this theorem closely mirrors those of the mistake bounds in [24, 6]. We note also that an extension to approximate large-margin updates is straightforward to implement, resulting in an alternate mistake bound (c.f. [6, Theorem 4]). For simplicity we consider only the perceptron-style updates of Equation (3) in the sequel.

4 Feature Space

In this section we describe our feature space. Recall that each node x_j consists of a CNF sentence Φ together with a valuation for zero or more variables. Our feature function φ(x, p) maps a node x and a candidate branching literal p to a real vector. Many heuristics involve counting occurrences of literals and variables. For notational convenience let C(p) be the number of occurrences of p in Φ and let C_k(p) be the number of occurrences of p among clauses of size k. Figure 3 summarizes our feature space.

Feature              Dimensions   Description
is-positive          1            1 if p is positive, 0 otherwise
lit-unit-clauses     1            C_1(p), occurrences of the literal in unit clauses
var-unit-clauses     1            C_1(q), occurrences of the variable in unit clauses
lit-counts           3            C_i(p) for i = 2, 3, 4, occurrences in small clauses
var-counts           3            C_i(q) for i = 2, 3, 4, as above, by variable
bohm-max             3            max(C_i(p), C_i(¬p)), i = 2, 3, 4
bohm-min             3            min(C_i(p), C_i(¬p)), i = 2, 3, 4
lit-total            1            C(p), total occurrences by literal
neg-lit-total        1            C(¬p), total occurrences of the negated literal
var-total            1            C(q), total occurrences by variable
lit-smallest         1            C_m(p), where m is the size of the smallest unsatisfied clause
neg-lit-smallest     1            C_m(¬p), as above, for the negated literal
jw                   1            J(p), Jeroslow–Wang cue, see main text
jw-neg               1            J(¬p), Jeroslow–Wang cue, see main text
activity             1            minisat activity measure
time-since-active    1            t − T(p), time since last activity (see main text)
has-been-active      1            1 if p has ever appeared in a conflict clause; 0 otherwise

Figure 3: Summary of our feature space. Features are computed as a function of a sentence Φ and a literal p. q implicitly refers to the variable within p.

4.1 Relationship to previous branching heuristics

Many branching heuristics have been proposed in the SAT literature [28, 13, 18, 26]. Our features were selected from the most successful of these, and our system is hence able to emulate many other systems for particular priority functions f.

Literal counting. Silva [26] suggested two simple heuristics based directly on literal counts. The first was to always branch on the literal that maximizes C(p), and the second was to maximize C(p) + C(¬p). Our features "lit-total" and "neg-lit-total" capture these cues.
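The count-based entries of Figure 3 are simple to compute. Below is an illustrative sketch for C(p), C_k(p), and the Jeroslow–Wang cue J(p), reusing the signed-integer clause encoding of the earlier DPLL sketch (our own convention, not the paper's).

from collections import Counter

def count_features(clauses, p):
    """Count-based cues for literal p over a CNF sentence (cf. Figure 3)."""
    C = Counter()          # C[k] = occurrences of p among clauses of size k
    total, jw = 0, 0.0
    for clause in clauses:
        if p in clause:
            total += 1
            C[len(clause)] += 1
            jw += 2.0 ** (-len(clause))   # Jeroslow-Wang vote, Eq. (6)
    return {
        "lit-unit-clauses": C[1],
        "lit-counts": [C[2], C[3], C[4]],
        "lit-total": total,
        "jw": jw,
    }

clauses = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({2})]
print(count_features(clauses, 2))
# {'lit-unit-clauses': 1, 'lit-counts': [2, 0, 0], 'lit-total': 3, 'jw': 1.0}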
MOM. Freeman [13] proposed a heuristic that identifies the size of the smallest unsatisfied clause, m = min |ω|, ω ∈ Φ, and then identifies the literal appearing most frequently amongst clauses of size m. This is the motivation for our features "lit-smallest" and "neg-lit-smallest".

BOHM. Bohm [3] proposed a heuristic that selects the literal maximizing

  α max(C_k(p, x_j), C_k(¬p, x_j)) + β min(C_k(p, x_j), C_k(¬p, x_j)),    (5)

with k = 2, or in the case of a tie, with k = 3 (and so on until all ties are broken). In practice we found that ties are almost always broken by considering just k ≤ 4; hence we include "bohm-max" and "bohm-min" in our feature space.

Jeroslow–Wang. Jeroslow and Wang [18] proposed a voting scheme in which clauses vote for their components with weight 2^{−k}, where k is the length of the clause. The total vote for a literal p is

  J(p) = Σ_ω 2^{−|ω|},    (6)

where the sum is over clauses ω that contain p. The Jeroslow–Wang rule chooses branches that maximize J(p). Three variants were studied by Hooker [16]. Our features "jw" and "jw-neg" are sufficient to span the original rule as well as the variants.

Dynamic activity measures. Many modern SAT solvers use boolean constraint propagation (BCP) to speed up the search process [23]. One component of BCP generates new clauses as a result of conflicts encountered during the search. Several modern SAT solvers use the time since a variable was last added to a conflict clause to measure the "activity" of that variable. Empirically, resolving variables that have most recently appeared in conflict clauses results in an efficient search [14]. We include several activity-related cues in our feature vector, which we compute as follows. Each decision is given a sequential time index t. After each decision we update the most-recent-activity table T(p) := t for each p added to a conflict clause during that iteration. We include the difference between the current iteration and the last iteration at which a variable was active in the feature "time-since-active". We also include the boolean feature "has-been-active" to indicate whether a variable has ever been active. The feature "activity" is a related cue used by minisat [10].

5 Polynomial special cases

In this section we discuss special cases of SAT for which polynomial-time algorithms are known. For each we show that a margin exists in our feature space.

5.1 Horn

A Horn clause [4] is a disjunction containing at most one positive literal, ¬q_1 ∨ · · · ∨ ¬q_{k−1} ∨ q_k. A sentence Φ is a Horn formula iff it is a conjunction of Horn clauses. There are polynomial time algorithms for deciding satisfiability of Horn formulae [4, 9]. One simple algorithm based on unit propagation [4] operates as follows. If there are no unit clauses in Φ then Φ is trivially satisfiable by setting all variables to false. Otherwise, let {p} be a unit clause in Φ. Delete any clause from Φ that contains p and remove ¬p wherever it appears. Repeat until either a trivial contradiction q ∧ ¬q is produced (in which case Φ is unsatisfiable) or until no further simplification is possible (in which case Φ is satisfiable) [4]. A sketch of this procedure appears below.
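The following is a minimal Python sketch of the Horn satisfiability procedure just described, reusing the signed-integer clause encoding from the earlier sketches (an illustrative convention, not the paper's).

def horn_sat(clauses):
    """Decide satisfiability of a Horn formula by unit propagation.
    Returns True iff the formula is satisfiable."""
    clauses = [set(c) for c in clauses]
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return True                  # no unit clauses: all-false satisfies
        (p,) = unit
        new = []
        for c in clauses:
            if p in c:
                continue                 # clause satisfied by p, delete it
            c = c - {-p}                 # remove the falsified literal
            if not c:
                return False             # empty clause: contradiction q and not-q
            new.append(c)
        clauses = new

# (q1) and (not q1 or q2) and (not q2) -- unsatisfiable
print(horn_sat([{1}, {-1, 2}, {-2}]))    # -> False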
Theorem 3. There is a margin for Horn clauses in our feature space.

Proof. We will show that there is a margin for Horn clauses in our feature space by showing that for a particular priority function f_0, our algorithm will emulate the unit propagation algorithm above. Let f_0 be zero everywhere except for the following elements:¹ "is-positive" = −ε, "lit-unit-clauses" = 1. Let H be the decision heuristic corresponding to f_0. Consider a node x and let Φ be the input sentence Φ_0 simplified according to the (perhaps partial) valuation at x. If Φ contains no unit clauses then clearly ⟨φ(x, p), f_0⟩ will be maximized for a negative literal p = ¬q. If Φ does contain unit clauses then for literals p which appear in unit clauses we have ⟨φ(x, p), f_0⟩ ≥ 1 − ε, while for all other literals we have ⟨φ(x, p), f_0⟩ < 1 − ε. Therefore H will select a unit literal if Φ contains one. For satisfiable Φ, this exactly emulates the unit propagation algorithm, and since that algorithm never backtracks [4], our algorithm makes no mistakes. For unsatisfiable Φ our algorithm will behave as follows. First note that every sentence encountered contains at least one unit clause, since, if not, that sentence would be trivially satisfiable by setting all variables to false, and this would contradict the assumption that Φ is unsatisfiable. So at each node x, the algorithm will first branch on some unit clause p, then later will backtrack to x and branch on ¬p. But since p appears in a unit clause at x this will immediately generate a contradiction and no further nodes will be expanded along that path. Therefore the algorithm expands no more than 2N nodes, where N is the length of Φ. □

¹ For concreteness let ε = 1/(K + 1), where K is the length of the input sentence Φ.

(a) Performance for planar graph colouring. (b) Performance for hardware verification.
Figure 4: Results for our algorithm applied to (a) planar graph colouring; (b) hardware verification. Both figures show the mistake rate as a function of the training iteration. In figure (a) we report the mistake rate on the current training example since no training example is ever repeated, while in figure (b) it is computed on a separate validation set (see figure 5). The red line shows the performance of minisat on the validation set (which does not change over time).

5.2 2-CNF

A 2-CNF sentence is a CNF sentence in which every clause contains exactly two literals. In this section we show that a function exists in our feature space for recognising satisfiable 2-CNF sentences in polynomial time. A simple polynomial-time solution to 2-CNF proposed by Even et al. [11] operates as follows. If there are no unit clauses in Φ then pick any literal and add it to Φ. Otherwise, let {p} be a unit clause in Φ and apply unit propagation to p as described in the previous section. If a contradiction is generated then backtrack to the last branch and negate the literal added there. If there is no such branch, then Φ is unsatisfiable. Even et al. showed that this algorithm never backtracks over more than one branch, and therefore completes in polynomial time.

Theorem 4. Under our feature space, H contains a priority function that recognizes 2-SAT sentences in polynomial time.

Proof. By construction. Let f_0 be a weight vector with all elements set to zero except for the element corresponding to the "appears-in-unit-clause" feature, which is set to 1. When using this weight vector, our algorithm will branch on a unit literal whenever one is present. This exactly emulates the behaviour of the algorithm due to Even et al. described above, and hence completes in polynomial time for all 2-SAT sentences. □

6 Empirical Results

Planar Graph Colouring: We applied our algorithm to the problem of planar graph colouring, for which polynomial time algorithms are known [1]. Working in this domain allowed us to generate an unlimited number of problems with a consistent but non-trivial structure on which to validate our algorithm.
By allowing up to four colours we also ensured that all instances were satisfiable [1]. We generated instances as follows. Starting with an empty L × L grid we sampled K cells at random and labelled them 1 . . . K. We then repeatedly picked a labelled cell with at least one unlabelled neighbour and copied its label to its neighbour, until all cells were labelled. Next we formed a K × K adjacency matrix A with A_ij = 1 iff there is a pair of adjacent cells with labels i and j. Finally we generated a SAT sentence over 4K variables (each variable corresponds to a particular colouring of a particular vertex), with clauses expressing the constraints that each vertex must be assigned one and only one colour and that no pair of adjacent vertices may be assigned the same colour. In our experiments we used K = 8, L = 5 and a learning rate of 0.1. We ran 40 training iterations of our algorithm. No training instance was repeated.

Training               Validation
Problem    Clauses     Problem    Clauses
ferry11    26106       ferry10    20792
ferry11u   25500       ferry10u   20260
ferry9     16210       ferry8     12312
ferry9u    15748       ferry8u    11916
ferry12u   31516       ferry12    32200

Figure 5: Instances in training and validation sets.

The number of mistakes (branching decisions that were later reversed by back-tracking) made at each iteration is shown in figure 4(a).
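As an aside, the CNF encoding of the colouring constraints described above is easy to reproduce. The sketch below follows the construction in the text, one variable per vertex/colour pair; the variable-indexing convention is our own.

from itertools import combinations

def colouring_cnf(adj, n_colours=4):
    """Encode graph colouring as CNF over K * n_colours variables.
    adj: K x K 0/1 adjacency matrix. The variable id for (vertex v, colour c)
    is v * n_colours + c + 1 (DIMACS-style positive ints)."""
    K = len(adj)
    var = lambda v, c: v * n_colours + c + 1
    clauses = []
    for v in range(K):
        # at least one colour per vertex
        clauses.append(frozenset(var(v, c) for c in range(n_colours)))
        # at most one colour per vertex
        for c1, c2 in combinations(range(n_colours), 2):
            clauses.append(frozenset({-var(v, c1), -var(v, c2)}))
    # adjacent vertices get different colours
    for u in range(K):
        for v in range(u + 1, K):
            if adj[u][v]:
                for c in range(n_colours):
                    clauses.append(frozenset({-var(u, c), -var(v, c)}))
    return clauses

# Two adjacent vertices: 2 + 2*6 + 4 = 18 clauses over 8 variables.
print(len(colouring_cnf([[0, 1], [1, 0]])))   # -> 18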
However, it is of interest to consider learning in the absense of a positive margin, and learning may be accelerated by making updates based on unsatisfiable sentences. One potential approach would be to consider a stochastic finite difference approximation to the risk gradient by running the DPLL algorithm a second time with a perturbed f . Additionally, we may consider updates to f during a run of the DPLL algorithm when the algorithm backtracks from a branch of the search tree for which we can prove that all yij = ?1. This, however, requires care in ensuring that the implicit empirical risk minimization is not biased. In this work, we have shown that a perceptron-style algorithm is capable of learning all polynomial solvable SAT subsets in bounded time. This has important implications for learning real-world SAT applications such as theorem proving, model checking, planning, hardware verification, and bioinformatics. We have shown empirically that our theoretical results hold, and that state-of-theart computation time can be achieved with our learning rule on a real-world hardware verification problem. As SAT is a canonical NP-complete problem, we expect that the efficient solution of important subsets of SAT may have much broader implications for the solution of many real-world problems. Acknowledgements: This work is partially funded by the European Research Council under the European Community?s Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement number 259112, and by the Royal Academy of Engineering under the Newton Fellowship Alumni Scheme. 8 References [1] K. Appel, W. Haken, and J. Koch. Every planar map is four colorable. Illinois J. Math, 21(3):491 ? 567, 1977. [2] A. Biere, A. Cimatti, E. M. Clarke, and Y. Zhu. Symbolic model checking without BDDs. In International Conference on Tools and Algorithms for Construction and Analysis of Systems, pages 193?207, 1999. [3] M. Buro and H. K. Buning. Report on a sat competition. 1992. [4] C.-L. Chang and R. C.-T. Lee. Symbolic Logic and Mechanical Theorem Proving. Academic Press, Inc., Orlando, FL, USA, 1st edition, 1997. [5] S. A. Cook. The complexity of theorem-proving procedures. In Proceedings of the third annual ACM symposium on Theory of computing, STOC ?71, pages 151?158, New York, NY, USA, 1971. ACM. [6] H. Daum?e, III and D. Marcu. Learning as search optimization: approximate large margin methods for structured prediction. In International Conference on Machine learning, pages 169?176, 2005. [7] M. Davis, G. Logemann, and D. Loveland. A machine program for theorem-proving. Commun. ACM, 5:394?397, July 1962. [8] M. Davis and H. Putnam. A computing procedure for quantification theory. J. ACM, 7:201?215, 1960. [9] W. F. Dowling and J. H. Gallier. Linear-time algorithms for testing the satisfiability of propositional horn formulae. The Journal of Logic Programming, 1(3):267 ? 284, 1984. [10] N. E?en and N. S?orensson. An extensible sat-solver. In Theory and Applications of Satisfiability Testing, pages 333?336. 2004. [11] S. Even, A. Itai, and A. Shamir. On the complexity of time table and multi-commodity flow problems. In Symposium on Foundations of Computer Science, pages 184?193, 1975. [12] M. Fink. Online learning of search heuristics. Journal of Machine Learning Research - Proceedings Track, 2:114?122, 2007. [13] J. W. Freeman. Improvements to propositional satisfiability search algorithms. PhD thesis, University of Pennsylvania, 1995. [14] E. Goldberg and Y. Novikov. Berkmin: A fast and robust sat-solver. 
[14] E. Goldberg and Y. Novikov. BerkMin: A fast and robust SAT-solver. In Design, Automation and Test in Europe Conference and Exhibition, 2002. Proceedings, pages 142–149, 2002.
[15] J. Harrison. Handbook of Practical Logic and Automated Reasoning. Cambridge University Press, 2009.
[16] J. N. Hooker and V. Vinay. Branching rules for satisfiability. Journal of Automated Reasoning, 15:359–383, 1995.
[17] F. Hutter, D. Babic, H. H. Hoos, and A. J. Hu. Boosting verification by automatic tuning of decision procedures. In Proceedings of the Formal Methods in Computer Aided Design, pages 27–34, 2007.
[18] R. G. Jeroslow and J. Wang. Solving propositional satisfiability problems. Annals of Mathematics and Artificial Intelligence, 1:167–187, 1990.
[19] H. A. Kautz. Deconstructing planning as satisfiability. In Proceedings of the Twenty-first National Conference on Artificial Intelligence (AAAI-06), 2006.
[20] M. Kearns, M. Li, L. Pitt, and L. Valiant. On the learnability of boolean formulae. In Proceedings of the nineteenth annual ACM symposium on Theory of computing, pages 285–295, 1987.
[21] D. Le Berre and O. Roussel. SAT competition 2009. http://www.satcompetition.org/2009/.
[22] I. Lynce and J. Marques-Silva. Efficient haplotype inference with boolean satisfiability. In Proceedings of the 21st National Conference on Artificial Intelligence - Volume 1, pages 104–109. AAAI Press, 2006.
[23] M. Moskewicz, C. Madigan, Y. Zhao, L. Zhang, and S. Malik. Chaff: engineering an efficient SAT solver. In Design Automation Conference, 2001. Proceedings, pages 530–535, 2001.
[24] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–408, 1958.
[25] W. Ruml. Adaptive Tree Search. PhD thesis, Harvard University, 2002.
[26] J. P. M. Silva. The impact of branching heuristics in propositional satisfiability algorithms. In Proceedings of the 9th Portuguese Conference on Artificial Intelligence: Progress in Artificial Intelligence, EPIA '99, pages 62–74, London, UK, 1999. Springer-Verlag.
[27] L. Xu, F. Hutter, H. H. Hoos, and K. Leyton-Brown. SATzilla: portfolio-based algorithm selection for SAT. J. Artif. Int. Res., 32:565–606, June 2008.
[28] R. Zabih. A rearrangement search strategy for determining propositional satisfiability. In Proceedings of the National Conference on Artificial Intelligence, pages 155–160, 1988.
Truncation-free Stochastic Variational Inference for Bayesian Nonparametric Models

Chong Wang*, Machine Learning Department, Carnegie Mellon University, [email protected]
David M. Blei, Computer Science Department, Princeton University, [email protected]

Abstract

We present a truncation-free stochastic variational inference algorithm for Bayesian nonparametric models. While traditional variational inference algorithms require truncations for the model or the variational distribution, our method adapts model complexity on the fly. We studied our method with Dirichlet process mixture models and hierarchical Dirichlet process topic models on two large data sets. Our method performs better than previous stochastic variational inference algorithms.

1 Introduction

Bayesian nonparametric (BNP) models [1] have emerged as an important tool for building probability models with flexible latent structure and complexity. BNP models use posterior inference to adapt the model complexity to the data. For example, as more data are observed, Dirichlet process (DP) mixture models [2] can create new mixture components and hierarchical Dirichlet process (HDP) topic models [3] can create new topics. In general, posterior inference in BNP models is intractable and we must approximate the posterior. The most widely-used approaches are Markov chain Monte Carlo (MCMC) [4] and variational inference [5]. For BNP models, the advantage of MCMC is that it directly operates in the unbounded latent space; whether to increase model complexity (such as adding a new mixture component) naturally folds into the sampling steps [6, 3]. However MCMC does not easily scale: it requires storing many configurations of hidden variables, each one on the order of the number of data points. For scalable MCMC one typically needs parallel hardware, and even then the computational complexity scales linearly with the data, which is not fast enough for massive data [7, 8, 9]. The alternative is variational inference, which finds the member of a simplified family of distributions to approximate the true posterior [5, 10]. This is generally faster than MCMC, and recent innovations let us use stochastic optimization to approximate posteriors with massive and streaming data [11, 12, 13]. Unlike MCMC, however, variational inference algorithms for BNP models do not operate in an unbounded latent space. Rather, they truncate the model or the variational distribution to a maximum model complexity [13, 14, 15, 16, 17, 18].¹ This is particularly limiting in the stochastic approach, where we might hope for a Bayesian nonparametric posterior seamlessly adapting its model complexity to an endless stream of data. In this paper, we develop a truncation-free stochastic variational inference algorithm for BNP models. This lets us more easily apply Bayesian nonparametric data analysis to massive and streaming data. In particular, we present a new general inference algorithm, locally collapsed variational inference. When applied to BNP models, it does not require truncations and gives a principled mechanism for increasing model complexity on the fly.

* Work was done when the author was with Princeton University.
¹ In [17, 18], split-merge techniques were used to grow/shrink truncations. However, split-merge operations are model-specific and difficult to design. It is also unknown how to apply these to the stochastic variational inference setting we consider.
We demonstrate our algorithm on DP mixture models and HDP topic models with two large data sets, showing improved performance over truncated algorithms.

2 Truncation-free stochastic variational inference for BNP models
Although our goal is to develop an efficient stochastic variational inference algorithm for BNP models, it is more succinct to describe our algorithm for a wider class of hierarchical Bayesian models [19]. We will show how we apply our algorithm to BNP models in Section 2.3. We consider the general class of hierarchical Bayesian models shown in Figure 1. Let the global hidden variables be β with prior p(β | α) (α is the hyperparameter), and let the local variables for each data sample be z_i (hidden) and x_i (observed) for i = 1, ..., n. The joint distribution of all variables (hidden and observed) factorizes as

  p(β, z_{1:n}, x_{1:n} | α) = p(β | α) ∏_{i=1}^n p(x_i, z_i | β) = p(β | α) ∏_{i=1}^n p(x_i | z_i, β) p(z_i | β).   (1)

The idea behind the nomenclature is that the local variables are conditionally independent of each other given the global variables. For convenience, we assume the global variables β are continuous and the local variables z_i are discrete. (This assumption is not necessary.) A large range of models can be represented using this form, e.g., mixture models [20, 21], mixed-membership models [3, 22], latent factor models [23, 24] and tree-based hierarchical models [25].

As an example, consider a DP mixture model for document clustering. Each document is modeled as a bag of words drawn from a distribution over the vocabulary. The mixture components are the distributions over the vocabulary θ, and the mixture proportions π are represented with a stick-breaking process [26]. The global variables β ≜ (θ, π) contain the proportions and components, and the local variables z_i are the mixture assignments for each document x_i. The generative process is:
1. Draw mixture component θ_k and stick π_k for k = 1, 2, ...,
     θ_k ~ Dirichlet(η),  π_k = π̄_k ∏_{ℓ=1}^{k−1} (1 − π̄_ℓ),  π̄_k ~ Beta(1, a).
2. For each document x_i,
   (a) Draw mixture assignment z_i ~ Mult(π).
   (b) For each word x_ij, draw the word x_ij ~ Mult(θ_{z_i}).

We now return to the general model in Eq. 1. In inference, we are interested in the posterior of the hidden variables β and z_{1:n} given the observed data x_{1:n}, i.e., p(β, z_{1:n} | x_{1:n}, α). For many models, this posterior is intractable. We will approximate it using mean-field variational inference.

2.1 Variational inference
In variational inference we try to find a distribution in a simple family that is close to the true posterior. We describe the mean-field approach, the simplest variational inference algorithm [5]. It assumes the fully factorized family of distributions over the hidden variables,

  q(β, z_{1:n}) = q(β) ∏_{i=1}^n q(z_i).   (2)

We call q(β) the global variational distribution and q(z_i) the local variational distributions. We want to minimize the KL divergence between this variational distribution and the true posterior. Under the standard variational theory [5], this is equivalent to maximizing a lower bound on the log marginal likelihood of the observed data x_{1:n}. We obtain this bound with Jensen's inequality,

  log p(x_{1:n} | α) = log ∫ Σ_{z_{1:n}} p(x_{1:n}, z_{1:n}, β | α) dβ
                     ≥ E_q[log p(β) − log q(β) + Σ_{i=1}^n (log p(x_i, z_i | β) − log q(z_i))] ≜ L(q).   (3)

[Figure 1: Graphical model for hierarchical Bayesian models with global hidden variables β, and local hidden and observed variables z_i and x_i, i = 1, ..., n. Hyperparameter α is fixed, not a random variable.]

[Figure 2: Results on assigning document d = {w_1, 0, ..., 0} to q(θ_1) (case A: q(θ_1) = Dirichlet(0.1, 1, ..., 1); case B: q(θ_1) = Dirichlet(1, 1, ..., 1)) or to q(θ_2) = Dirichlet(0.1, 0.1, ..., 0.1). The y axis is the log-odds of q(z = 1) to q(z = 2); if it is larger than 0, the document is more likely to be assigned to component 1. The x axis is w_1, the frequency of word 1 in the new document. The mean-field approach underestimates the uncertainty around θ_2, assigning d incorrectly in case B. The locally collapsed approach does it correctly in both cases.]

Algorithm 1: Mean-field variational inference.
1: Initialize q(β).
2: for iter = 1 to M do
3:   for i = 1 to n do
4:     Set local variational distribution q(z_i) ∝ exp{E_{q(β)}[log p(x_i, z_i | β)]}.
5:   end for
6:   Set global variational distribution q(β) ∝ exp{E_{q(z_{1:n})}[log p(x_{1:n}, z_{1:n}, β)]}.
7: end for
8: return q(β).

Algorithm 2: Locally collapsed variational inference.
1: Initialize q(β).
2: for iter = 1 to M do
3:   for i = 1 to n do
4:     Set local distribution q(z_i) ∝ E_{q(β)}[p(x_i, z_i | β)].
5:     Sample from q(z_i) to obtain its empirical distribution q̂(z_i).
6:   end for
7:   Set global variational distribution q(β) ∝ exp{E_{q̂(z_{1:n})}[log p(x_{1:n}, z_{1:n}, β)]}.
8: end for
9: return q(β).

Maximizing L(q) w.r.t. q(β, z_{1:n}) defined in Eq. 2 (with the optimality conditions given in [27]) gives

  q(β) ∝ exp{E_{q(z_{1:n})}[log p(x_{1:n}, z_{1:n}, β | α)]}   (4)
  q(z_i) ∝ exp{E_{q(β)}[log p(x_i, z_i | β)]}.   (5)

Typically these equations are used in a coordinate ascent algorithm, iteratively optimizing each factor while holding the others fixed (see Algorithm 1). The factorization into global and local variables ensures that the local updates only depend on the global factors, which facilitates speed-ups like parallel [28] and stochastic variational inference [11, 12, 13, 29].

In BNP models, however, the value of z_i is potentially unbounded (e.g., the mixture assignment in a DP mixture). Thus we need to truncate the variational distribution [13, 14]. Truncation is necessary in variational inference because of the mathematical structure of BNP models. Moreover, it is difficult to grow the truncation in mean-field variational inference even in an ad-hoc way, because mean-field inference tends to underestimate the posterior variance [30, 31]. In contrast, the mathematical structure of Gibbs sampling, and the fact that it gets the variance right in the conditional distribution, allows Gibbs samplers for BNP models to effectively explore the unbounded latent space [6].

2.2 Locally collapsed variational inference
We now describe locally collapsed variational inference, which mitigates the problem of underestimating posterior variance in mean-field variational inference. Further, when applied to BNP models, it is truncation-free: it gives a good mechanism for increasing the truncation on the fly. Algorithm 2 outlines the approach. The difference between traditional mean-field variational inference and our algorithm lies in the update of the local distribution q(z_i). In our algorithm, it is

  q(z_i) ∝ E_{q(β)}[p(x_i, z_i | β)],   (6)

as opposed to the mean-field update in Eq. 5. Because we collapse out the global variational distribution q(β) locally, we call this method locally collapsed variational inference. Note that the two algorithms are similar when q(β) has low variance. However, when the uncertainty modeled in q(β) is high, these two approaches lead to different approximations of the posterior.
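To make the contrast between Eq. 5 and Eq. 6 concrete, the sketch below (our illustration, not code from the paper) reproduces the toy comparison of Figure 2, whose setup is described in detail below, for Dirichlet components: the mean-field score uses E_q[log θ_w] (digamma terms), while the locally collapsed score is the Dirichlet-multinomial predictive. The function names and the grid of w_1 values are our own choices.

    import numpy as np
    from scipy.special import digamma, gammaln

    def mf_score(lam, c):
        # Eq. 5 for one Dirichlet component: sum_w c_w * E_q[log theta_w]
        return c @ (digamma(lam) - digamma(lam.sum()))

    def lc_score(lam, c):
        # Eq. 6 for one Dirichlet component: log E_q[prod_w theta_w^{c_w}]
        return (gammaln(lam.sum()) - gammaln(lam.sum() + c.sum())
                + (gammaln(lam + c) - gammaln(lam)).sum())

    lam2 = 0.1 * np.ones(10)                     # q(theta_2) = Dirichlet(0.1, ..., 0.1)
    for case, lam1 in (("A", np.r_[0.1, np.ones(9)]), ("B", np.ones(10))):
        for w1 in (1, 5, 10):
            c = np.zeros(10); c[0] = w1          # document d = {w1, 0, ..., 0}
            print(case, w1,
                  "mean-field log-odds: %+.1f" % (mf_score(lam1, c) - mf_score(lam2, c)),
                  "locally collapsed log-odds: %+.1f" % (lc_score(lam1, c) - lc_score(lam2, c)))

Running this shows the pattern in Figure 2: in case B the mean-field log-odds stay positive as w_1 grows (the wrong assignment), while the locally collapsed log-odds turn negative.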
In our implementation, we use a collapsed Gibbs sampler to sample from Eq. 6. This is a local Gibbs sampling step and thus is very fast. Further, this is where our algorithm does not require truncation, because Gibbs samplers for BNP models can operate in an unbounded space [6, 3]. Now we update q(β). Suppose we have a set of samples from q(z_i) with which to construct its empirical distribution q̂(z_i). Plugging this into Eq. 3 gives the solution for q(β),

  q(β) ∝ exp{E_{q̂(z_{1:n})}[log p(x_{1:n}, z_{1:n}, β | α)]},   (7)

which has the same form as Eq. 4 for the mean-field approach. This finishes Algorithm 2.

To give an intuitive comparison of locally collapsed (Algorithm 2) and mean-field (Algorithm 1) variational inference, we consider a toy document clustering problem with vocabulary size V = 10. We use a two-component Bayesian mixture model with fixed and equal prior proportions π_1 = π_2 = 1/2. Suppose that at some stage component 1 has some document assignments while component 2 has none yet, and we have obtained the (approximate) posteriors over the two component parameters θ_1 and θ_2 as q(θ_1) and q(θ_2). For θ_1, we consider two cases: A) q(θ_1) = Dirichlet(0.1, 1, ..., 1); B) q(θ_1) = Dirichlet(1, 1, ..., 1). For θ_2, we only consider q(θ_2) = Dirichlet(0.1, 0.1, ..., 0.1). In both cases, q(θ_1) has relatively low variance while q(θ_2) has high variance. The difference is that the q(θ_1) in case A puts lower probability on word 1 than that in case B. Now we have a new document d = {w_1, 0, ..., 0}, where word 1 is the only word and its frequency is w_1. In both cases, document d is more likely to be assigned to component 2 when w_1 becomes larger. Figure 2 shows the difference between mean-field and locally collapsed variational inference. In case A, the mean-field approach does it correctly, since word 1 already has a very low probability under θ_1. But in case B, it ignores the uncertainty around θ_2, resulting in incorrect clustering. Our approach does it correctly in both cases.

What justifies this approach? Alas, as for some other adaptations of variational inference, we do not yet have an airtight justification [32, 33, 34]. We are not optimizing q(z_i), and so the corresponding lower bound must be looser than the optimized lower bound from the mean-field approach if the issue of local modes is excluded. However, our experiments show that we find a better predictive distribution than mean-field inference. One possible explanation is outlined in S.1 (section 1 of the supplement), where we show that our algorithm can be understood as an approximate expectation propagation (EP) algorithm [35].

Related algorithms. Our algorithm is closely related to collapsed variational inference (CVI) [15, 16, 36, 32, 33]. CVI applies variational inference to the marginalized model, integrating out the global hidden variable β. This gives better estimates of the posterior variance. In CVI, however, the optimization for each local variable z_i depends on all other local variables, which makes it difficult to apply at large scale. Our algorithm is akin to applying CVI to the intermediate model that treats q(β) as a prior and considers a single data point x_i with its hidden structure z_i. This lets us develop stochastic algorithms that can be fit to massive data sets (as we show below).
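A small sketch (ours, with stand-in numbers) of the mechanics just described: form the empirical distribution q̂(z_i) from S draws of q(z_i) (Step 5 of Algorithm 2), then apply the conjugate update of Eq. 7, which for Dirichlet components reduces to adding expected counts to the prior. The probabilities in q_z below are placeholders for the Eq. 6 values.

    import numpy as np
    rng = np.random.default_rng(0)

    def empirical_qhat(q_zi, S, K):
        # Step 5 of Algorithm 2: empirical distribution from S draws of q(z_i)
        draws = rng.choice(K, size=S, p=q_zi)
        return np.bincount(draws, minlength=K) / S

    K, V, n = 3, 10, 100
    counts = rng.poisson(1.0, size=(n, V)).astype(float)   # stand-in word counts
    q_z = rng.dirichlet(np.ones(K), size=n)                # stand-in for the Eq. 6 probabilities
    qhat = np.stack([empirical_qhat(q, S=20, K=K) for q in q_z])
    # Eq. 7 for Dirichlet(eta) components: lambda_k = eta + sum_i qhat(z_i = k) * counts_i
    lam = 0.5 + qhat.T @ counts                            # (K, V) Dirichlet parameters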
Our algorithm is also related to the recently proposed hybrid approach of using Gibbs sampling inside stochastic variational inference to take advantage of the sparsity of text documents in topic modeling [37]. Their approach still uses the mean-field update as in Eq. 5, where all local hidden topic variables (for a document) are grouped together and the optimal q(z_i) is approximated by a Gibbs sampler. With some adaptations, their fast sparse update idea can be used inside our algorithm.

Stochastic locally collapsed variational inference. We now extend our algorithm to stochastic variational inference, allowing us to fit approximate posteriors to massive data sets. To do this, we assume the model in Figure 1 is in the exponential family and satisfies conditional conjugacy [11, 13, 29]: the global distribution p(β | α) is the conjugate prior for the local distribution p(x_i, z_i | β),

  p(β | α) = h(β) exp{α^T t(β) − a(α)},   (8)
  p(x_i, z_i | β) = h(x_i, z_i) exp{β^T t(x_i, z_i) − a(β)},   (9)

where we overload the notation for base measures h(·), sufficient statistics t(·), and log normalizers a(·). (These will often be different for the two families.) Due to the conjugacy, the term t(β) has the form t(β) = [β; −a(β)]. Also assume the global variational distribution q(β | λ) is in the same family as the prior. Given these conditions, the batch update for q(β | λ) in Eq. 7 is

  λ = α + Σ_{i=1}^n E_{q̂(z_i)}[t̄(x_i, z_i)],   (10)

where the term t̄(x_i, z_i) is defined as t̄(x_i, z_i) ≜ [t(x_i, z_i); 1]. The analysis in [12, 13, 29] shows that, given the conditional conjugacy assumption, the batch update of the parameter λ in Eq. 10 can be easily turned into a stochastic update using the natural gradient [38]. Suppose our parameter is λ_t at step t. Given a random observation x_t, we sample from q(z_t | x_t, λ_t) to obtain the empirical distribution q̂(z_t). With an appropriate learning rate ρ_t, we have

  λ_{t+1} ← λ_t + ρ_t (−λ_t + α + n E_{q̂(z_t)}[t̄(x_t, z_t)]).   (11)

This corresponds to a stochastic update using the noisy natural gradient to optimize the lower bound in Eq. 3 [39]. (We note that the natural gradient is an approximation, since our q(z_i) in Eq. 6 is suboptimal for the lower bound in Eq. 3.)

Mini-batch. A common strategy in stochastic variational inference [12, 13] is to use a small batch of samples at each update. Suppose we have a batch of size S and the set of samples x_t, t ∈ S. Using our formulation, q(z_t, t ∈ S) becomes

  q(z_{t, t∈S}) ∝ E_{q(β | λ_t)}[∏_{t∈S} p(x_t, z_t | β)].

We choose not to factorize z_{t, t∈S}, since factorization would potentially lead to the label-switching problem when new components are instantiated for BNP models [7].

2.3 Truncation-free stochastic variational inference for BNP models
We have described locally collapsed variational inference in a general setting. Our main interests in this paper are BNP models, and we now show how this approach leads to truncation-free variational algorithms. We describe the approach for a DP mixture model [21], whose full description was given at the beginning of Section 2.1. (See S.2 for the details on HDP topic models [3].)

The global variational distribution. The variational distribution for the global hidden variables, mixture components θ and stick proportions π̄, is

  q(θ, π̄ | λ, u, v) = ∏_k q(θ_k | λ_k) q(π̄_k | u_k, v_k),

where λ_k is a Dirichlet parameter and (u_k, v_k) is a Beta parameter. The sufficient statistics t(x_i, z_i) defined in Eq. 9 can be summarized as

  t(x_i, z_i)_{θ_kw} = 1[z_i = k] Σ_j 1[x_ij = w];  t(x_i, z_i)_{u_k} = 1[z_i = k];  t(x_i, z_i)_{v_k} = Σ_{j=k+1}^∞ 1[z_i = j],

where 1[·] is the indicator function. Suppose that at time t we have obtained the empirical distribution q̂(z_i) for observation x_i. We use Eq. 11 to update the Dirichlet parameter λ and the Beta parameters (u, v),

  λ_kw ← λ_kw + ρ_t (−λ_kw + η + n q̂(z_i = k) Σ_j 1[x_ij = w]),
  u_k ← u_k + ρ_t (−u_k + 1 + n q̂(z_i = k)),
  v_k ← v_k + ρ_t (−v_k + a + n Σ_{ℓ=k+1} q̂(z_i = ℓ)).

Although we have an unbounded number of mixture components, we do not need to represent them explicitly. Suppose we have T components that are associated with some data. The updates above imply q(θ_k | λ_k) = Dirichlet(η) and q(π̄_k) = Beta(1, a), i.e., their prior distributions, when k > T. Similar to a Gibbs sampler [6], the model is "truncated" automatically. (We re-order the sticks according to their sizes [15].)

The local empirical distribution q̂(z_i). Since the mixture assignment z_i is the only local hidden variable, we obtain its analytical form using Eq. 6,

  q(z_i = k) ∝ ∫ p(x_i | θ_k) p(z_i = k | π) q(θ_k | λ_k) q(π̄) dθ_k dπ̄
             = [Γ(Σ_w λ_kw) / Γ(Σ_w λ_kw + |x_i|)] [∏_w Γ(λ_kw + Σ_j 1[x_ij = w]) / Γ(λ_kw)] × [u_k / (u_k + v_k)] ∏_{ℓ=1}^{k−1} v_ℓ / (u_ℓ + v_ℓ),

where |x_i| is the document length and Γ(·) is the Gamma function. (For mini-batches, we do not have an analytical form, but we can sample from it.) The probability of creating a new component is

  q(z_i > T) ∝ [Γ(ηV) / Γ(ηV + |x_i|)] [∏_w Γ(η + Σ_j 1[x_ij = w]) / Γ(η)^V] ∏_{k=1}^T v_k / (u_k + v_k).

We sample from q(z_i) to obtain its empirical distribution q̂(z_i). If z_i > T, we create a new component.
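Putting these pieces together, here is a compact sketch (ours; it uses a hard single-sample q̂, dense count vectors, and variable names of our own choosing) of one truncation-free stochastic step for the DP mixture: score the T instantiated components and one potential new component, sample an assignment, grow the arrays if the new component is chosen, and apply the updates above.

    import numpy as np
    from scipy.special import gammaln

    def dm_log_pred(lam, c):
        # log E_q[prod_w theta_w^{c_w}] under q(theta) = Dirichlet(lam):
        # the Dirichlet-multinomial predictive appearing in q(z_i) above
        return (gammaln(lam.sum()) - gammaln(lam.sum() + c.sum())
                + (gammaln(lam + c) - gammaln(lam)).sum())

    def dp_step(c, lam, u, v, eta, a, n, rho, rng):
        # One truncation-free stochastic update for a single document with counts c.
        T, V = lam.shape
        stick = np.log(u / (u + v))                       # log[u_k / (u_k + v_k)]
        rest = np.log(v / (u + v))
        cum = np.concatenate(([0.0], np.cumsum(rest)))    # sum_{l<k} log[v_l / (u_l + v_l)]
        logq = np.array([dm_log_pred(lam[k], c) + stick[k] + cum[k] for k in range(T)])
        logq = np.append(logq, dm_log_pred(eta * np.ones(V), c) + cum[T])  # q(z_i > T)
        q = np.exp(logq - logq.max()); q /= q.sum()
        z = rng.choice(T + 1, p=q)                        # one sample gives the empirical qhat
        if z == T:                                        # grow: instantiate a new component
            lam = np.vstack([lam, eta * np.ones(V)])
            u, v, T = np.append(u, 1.0), np.append(v, a), T + 1
        qhat = np.zeros(T); qhat[z] = 1.0
        tail = qhat[::-1].cumsum()[::-1] - qhat           # sum_{l>k} qhat_l
        # Eq. 11 specialized to the DP mixture (the three updates above):
        lam = (1 - rho) * lam + rho * (eta + n * qhat[:, None] * c[None, :])
        u = (1 - rho) * u + rho * (1.0 + n * qhat)
        v = (1 - rho) * v + rho * (a + n * tail)
        return lam, u, v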
Discussion. Why is "locally collapsed" enough? This is analogous to the collapsed Gibbs sampling algorithm in DP mixture models [6], where the exploration of a new mixture component is initialized by one single sample. Locally collapsed variational inference is powerful enough to trigger this. In the toy example above, the role of the distribution q(θ_2) = Dirichlet(0.1, ..., 0.1) is similar to that of the potential new component we want to maintain in Gibbs sampling for DP mixture models.

Note the difference between this approach and those found in [17, 18], which use mean-field methods that can grow or shrink the truncation using split-merge moves. Those approaches are model-specific and difficult to design. Further, they do not transfer to the stochastic setting. In contrast, the approach presented here grows the truncation as a natural consequence of the inference algorithm and is easily adapted to stochastic inference.

3 Experiments
We evaluate our methods on DP mixtures and HDP topic models, comparing them to truncation-based stochastic mean-field variational inference. We focus on stochastic methods and large data sets.

Datasets. We analyzed two large document collections. The Nature data contains about 350K documents from the journal Nature from the years 1869 to 2008, with about 58M tokens and a vocabulary size of 4,253. The New York Times dataset contains about 1.8M documents from the years 1987 to 2007, with about 461M tokens and a vocabulary size of 8,000. Standard stop words and those words that appear fewer than 10 times or in more than 20 percent of the documents are removed, and the final vocabulary is chosen by TF-IDF. We set aside a test set of 10K documents from each corpus on which to evaluate predictive power; these test sets were not used for training.

Evaluation Metric. We evaluate the different algorithms using held-out per-word likelihood,

  likelihood_pw ≜ log p(D_test | D_train) / Σ_{x_i ∈ D_test} |x_i|.

Higher likelihood is better. Since exactly computing the held-out likelihood is intractable, we use approximations; see S.3 for the details. There is some question as to the meaningfulness of held-out likelihood as a metric for comparing different models [40].
Held-out likelihood metrics are nonetheless suited to measuring how well an inference algorithm accomplishes the specific optimization task defined by a model.

Experimental Settings. For DP mixtures, we set the component Dirichlet parameter η = 0.5 and the concentration parameter of the DP a = 1. For HDP topic models, we set the topic Dirichlet parameter η = 0.01, and the first-level and second-level concentration parameters of the DP a = b = 1, as in [13]. (See S.2 for the full description of HDP topic models.) For stochastic mean-field variational inference, we set the truncation level at 300 for both DP and HDP. We ran all algorithms for 10 hours and took the model at the final stage as the output, without assessing convergence. We vary the mini-batch size S ∈ {1, 2, 5, 10, 50, 100, 500}. (We do not intend to compare DP and HDP; we want to show our algorithm works on both.) For the stochastic mean-field approach, we set the learning rate according to [13], ρ_t = (τ_0 + t)^{−κ} with κ = 0.6 and τ_0 = 1. We start our new method with 0 components, before seeing any data. We cannot use the learning rate schedule of [13], since it gives very large weights to the first several components, effectively leaving no room for creating new components on the fly. Instead, we set the learning rate ρ_t = S/n_t for n_t < n, where n_t is the size of the corpus that the algorithm has seen up to time t. After we see all the documents, n_t = n. For both stochastic mean-field and our algorithm, we set the lower bound on the learning rate to S/n. We found that this works well in practice. It mimics the usual trick for running a Gibbs sampler: one uses sequential prediction for initialization, and after all data points have been initialized, one runs the full Gibbs sampler [41]. We remove components with fewer than 1 document for DP, and topics with fewer than 1 word for HDP topic models, each time we process 20K documents.

[Figure 3: Results on DP mixtures. (a) Held-out likelihood comparison on both corpora (Nature and New York Times). Our approach is more robust to batch sizes and gives better predictive performance. (b) The inferred number of mixtures on New York Times. (Nature is similar.) The left of panel (b) shows the number of mixture components inferred after 10 hours; our method tends to give more mixtures, and small batch sizes for the stochastic mean-field approach do not really work, resulting in a very small number of mixtures. The right of panel (b) shows how the different methods infer the number of mixtures over time: the stochastic mean-field approach shrinks it while our approach grows it.]

Results. Figure 3 shows the results for DP mixture models. Figure 3(a) shows the held-out likelihood comparison on both datasets. Our approach is more robust to batch sizes and usually gives better predictive performance. Small batch sizes for the stochastic mean-field approach do not work well. Figure 3(b) shows the inferred number of mixtures on New York Times. (Nature is similar.) Our method tends to give more mixtures than the stochastic mean-field approach. The stochastic mean-field approach shrinks the preset truncation; our approach does not need a truncation and grows the number of mixtures when the data require it.

Figure 4 shows the results for HDP topic models. Figure 4(a) shows the held-out likelihood comparison on both datasets. Similar to DP mixtures, our approach is more robust to batch sizes and gives better predictive performance most of the time, and small batch sizes for the stochastic mean-field approach do not work well. Figure 4(b) shows the inferred number of topics on Nature. (New York Times is similar.) This is also similar to DP: our method tends to give more topics than the stochastic mean-field approach, which shrinks the preset truncation while our approach grows the number of topics when the data require it.

One possible explanation for why our method gives better results than the truncation-based stochastic mean-field approach is as follows. The truncation-based approach relies more on the random initialization placed on the parameters within the preset truncation. If the random initialization is not used well, performance degrades. This also explains why smaller batch sizes in stochastic mean-field tend to work much worse: the first few samples might dominate the effect of the random initialization, leaving no room for later samples. Our approach mitigates this problem by allowing new components/topics to be created as the data require. If we compare DP and HDP, the best result of DP is better than that of HDP, but this comparison is not meaningful. Besides the different settings of hyperparameters, computing the held-out likelihood for DP is tractable, but intractable for HDP.
For HDP we used importance sampling to approximate it (see S.3 for details). [42] shows that importance sampling usually gives the correct ranking of different topic models but significantly underestimates the probability.

[Figure 4: Results on HDP topic models. (a) Held-out likelihood comparison on both corpora (Nature and New York Times). Our approach is more robust to batch sizes and gives better predictive performance most of the time. (b) The inferred number of topics on Nature. (New York Times is similar.) The left of panel (b) shows the number of topics inferred after 10 hours; our method tends to give more topics, and small batch sizes for the stochastic mean-field approach do not really work, resulting in a very small number of topics. The right of panel (b) shows how the different methods infer the number of topics over time: similar to DP, the stochastic mean-field approach shrinks it while our approach grows it.]

4 Conclusion and future work
In this paper, we have developed truncation-free stochastic variational inference algorithms for Bayesian nonparametric (BNP) models and applied them to two large datasets. Extensions to other BNP models, such as the Pitman-Yor process [43], the Indian buffet process (IBP) [23, 24] and the nested Chinese restaurant process [18, 25], are straightforward using their stick-breaking constructions. Exploring how this algorithm behaves in the true streaming setting where the program never stops, a "never-ending learning machine" [44], is an interesting future direction.

Acknowledgements. Chong Wang was supported by Google PhD and Siebel Scholar Fellowships. David M. Blei is supported by ONR N00014-11-1-0651, NSF CAREER 0745520, AFOSR FA9550-09-1-0668, the Alfred P. Sloan foundation, and a grant from Google.

References
[1] Hjort, N., C. Holmes, P. Mueller, et al. Bayesian Nonparametrics: Principles and Practice. Cambridge University Press, Cambridge, UK, 2010.
[2] Antoniak, C. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2(6):1152-1174, 1974.
[3] Teh, Y., M. Jordan, M. Beal, et al. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2007.
[4] Andrieu, C., N. de Freitas, A. Doucet, et al. An introduction to MCMC for machine learning. Machine Learning, 50:5-43, 2003.
[5] Jordan, M., Z. Ghahramani, T. Jaakkola, et al. Introduction to variational methods for graphical models. Machine Learning, 37:183-233, 1999.
[6] Neal, R. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249-265, 2000.
[7] Newman, D., A. Asuncion, P. Smyth, et al. Distributed algorithms for topic models. Journal of Machine Learning Research, 10:1801-1828, 2009.
[8] Smola, A., S. Narayanamurthy. An architecture for parallel topic models. Proc. VLDB Endow., 3(1-2):703-710, 2010.
[9] Ahmed, A., M. Aly, J. Gonzalez, et al. Scalable inference in latent variable models. In International Conference on Web Search and Data Mining (WSDM). 2012.
[10] Wainwright, M., M. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[11] Hoffman, M., D. M. Blei, C. Wang, et al. Stochastic variational inference. ArXiv e-prints, 2012.
[12] Hoffman, M., D. Blei, F. Bach. Online inference for latent Dirichlet allocation. In Advances in Neural Information Processing Systems (NIPS). 2010.
[13] Wang, C., J. Paisley, D. M. Blei. Online variational inference for the hierarchical Dirichlet process. In International Conference on Artificial Intelligence and Statistics (AISTATS). 2011.
[14] Blei, D., M. Jordan. Variational inference for Dirichlet process mixtures. Journal of Bayesian Analysis, 1(1):121-144, 2005.
[15] Kurihara, K., M. Welling, Y. Teh. Collapsed variational Dirichlet process mixture models. In International Joint Conferences on Artificial Intelligence (IJCAI). 2007.
[16] Teh, Y., K. Kurihara, M. Welling. Collapsed variational inference for HDP. In Advances in Neural Information Processing Systems (NIPS). 2007.
[17] Kurihara, K., M. Welling, N. Vlassis. Accelerated variational Dirichlet process mixtures. In Advances in Neural Information Processing Systems (NIPS). 2007.
[18] Wang, C., D. Blei. Variational inference for the nested Chinese restaurant process. In Advances in Neural Information Processing Systems (NIPS). 2009.
[19] Gelman, A., J. Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge Univ. Press, 2007.
[20] McLachlan, G., D. Peel. Finite Mixture Models. Wiley-Interscience, 2000.
[21] Escobar, M., M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577-588, 1995.
[22] Blei, D., A. Ng, M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[23] Griffiths, T., Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems (NIPS). 2006.
[24] Teh, Y., D. Gorur, Z. Ghahramani. Stick-breaking construction for the Indian buffet process. In International Conference on Artificial Intelligence and Statistics (AISTATS). 2007.
[25] Blei, D., T. Griffiths, M. Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 57(2):1-30, 2010.
[26] Sethuraman, J. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
[27] Bishop, C. Pattern Recognition and Machine Learning. Springer New York, 2006.
[28] Zhai, K., J. Boyd-Graber, N. Asadi, et al. Mr. LDA: A flexible large scale topic modeling package using variational inference in MapReduce. In International World Wide Web Conference (WWW). 2012.
[29] Sato, M. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649-1681, 2001.
[30] Opper, M., O. Winther. From Naive Mean Field Theory to the TAP Equations, pages 1-19. MIT Press, 2001.
[31] MacKay, D. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
[32] Asuncion, A., M. Welling, P. Smyth, et al. On smoothing and inference for topic models. In Uncertainty in Artificial Intelligence (UAI). 2009.
[33] Sato, I., H. Nakagawa. Rethinking collapsed variational Bayes inference for LDA. In International Conference on Machine Learning (ICML). 2012.
[34] Sato, I., K. Kurihara, H. Nakagawa. Practical collapsed variational Bayes inference for hierarchical Dirichlet process. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 105-113. ACM, New York, NY, USA, 2012.
[35] Minka, T. Divergence measures and message passing. Tech. Rep. TR-2005-173, Microsoft Research, 2005.
[36] Teh, Y., D. Newman, M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems (NIPS). 2006.
[37] Mimno, D., M. Hoffman, D. Blei. Sparse stochastic inference for latent Dirichlet allocation. In International Conference on Machine Learning (ICML). 2012.
[38] Amari, S. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
[39] Robbins, H., S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400-407, 1951.
[40] Chang, J., J. Boyd-Graber, C. Wang, et al. Reading tea leaves: How humans interpret topic models. In Advances in Neural Information Processing Systems (NIPS). 2009.
[41] Griffiths, T., M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences (PNAS), 2004.
[42] Wallach, H., I. Murray, R. Salakhutdinov, et al. Evaluation methods for topic models. In International Conference on Machine Learning (ICML). 2009.
[43] Pitman, J., M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. The Annals of Probability, 25(2):855-900, 1997.
[44] Carlson, A., J. Betteridge, B. Kisiel, et al. Toward an architecture for never-ending language learning. In AAAI Conference on Artificial Intelligence (AAAI). 2010.
Fast Bayesian Inference for Non-Conjugate Gaussian Process Regression
Mohammad Emtiyaz Khan, Shakir Mohamed, and Kevin P. Murphy
Department of Computer Science, University of British Columbia

Abstract
We present a new variational inference algorithm for Gaussian process regression with non-conjugate likelihood functions, with application to a wide array of problems including binary and multi-class classification, and ordinal regression. Our method constructs a concave lower bound that is optimized using an efficient fixed-point updating algorithm. We show that the new algorithm has highly competitive computational complexity, matching that of alternative approximate inference methods. We also prove that the use of concave variational bounds provides stable and guaranteed convergence, a property not available to other approaches. We show empirically, for both binary and multi-class classification, that our new algorithm converges much faster than existing variational methods, and without any degradation in performance.

1 Introduction
Gaussian processes (GPs) are a popular non-parametric prior for function estimation. For real-valued outputs, we can combine the GP prior with a Gaussian likelihood and perform exact posterior inference in closed form. However, in other cases, such as classification, the likelihood is no longer conjugate to the GP prior, and exact inference is no longer tractable. Various approaches are available to deal with this intractability. One approach is Markov chain Monte Carlo (MCMC) techniques [1, 11, 22, 9]. Although this can be accurate, it is often quite slow, and assessing convergence is challenging. There is therefore great interest in deterministic approximate inference methods. One recent approach is the Integrated Nested Laplace Approximation (INLA) [21], which uses numerical integration to approximate the marginal likelihood. Unfortunately, this method is limited to six or fewer hyperparameters, and is thus not suitable for models with a large number of hyperparameters. Expectation propagation (EP) [17] is a popular alternative; it approximates the posterior distribution by maintaining expectations and iterating until these expectations are consistent for all variables. Although this is fast and accurate for the case of binary classification [15, 18], there are difficulties extending EP to many other cases, such as multi-class classification and parameter learning [24, 13]. In addition, EP is known to have convergence issues and can be numerically unstable.

In this paper, we use a variational approach, where we compute a lower bound on the log marginal likelihood using Jensen's inequality. Unlike EP, this approach does not suffer from numerical issues and convergence problems, and can easily handle multi-class and other likelihoods. This is an active area of research and many solutions have been proposed; see, for example, [23, 6, 5, 19, 14]. Unfortunately, most of these methods are slow, since they attempt to solve for the posterior covariance matrix, which has size O(N^2), where N is the number of data points. In [19], a reparameterization was proposed that only requires computing O(N) variational parameters. Unfortunately, this method relies on a non-concave lower bound. In this paper, we propose a new lower bound that is concave, and derive an efficient iterative algorithm for its maximization. Since the original objective is unimodal, we reach the same global optimum as the other methods, but we do so much faster.

  p(z | X, θ) = N(z | μ, Σ)   (1)
  p(y | z) = ∏_{n=1}^N p(y_n | z_n)   (2)

  Type        | Distribution      | p(y|z)
  Binary      | Bernoulli logit   | p(y = 1 | z) = σ(z)
  Categorical | Multinomial logit | p(y = k | z) = e^{z_k − lse(z)}
  Ordinal     | Cumulative logit  | p(y ≤ k | z) = σ(φ_k − z)
  Count       | Poisson           | p(y = k | z) = e^{−e^z} e^{kz} / k!

[Table 1: Gaussian process regression (top left, Eqs. 1-2) and its graphical model (right: inputs X, latent functions z_1, ..., z_N, outputs y_1, ..., y_N), along with example likelihoods for the outputs (bottom left). Here, σ(z) = 1/(1 + e^{−z}), lse(·) is the log-sum-exp function, k indexes over discrete output values, and the φ_k are real numbers such that φ_1 < φ_2 < ... < φ_K for K ordered categories.]

2 Gaussian Process Regression
Gaussian process (GP) regression is a powerful method for non-parametric regression that has gained a great deal of attention as a flexible and accurate modeling approach. Consider N data points with the n'th observation denoted by y_n, with corresponding features x_n. A Gaussian process model uses a non-linear latent function z(x) to obtain the distribution of the observation y using an appropriate likelihood [15, 18]. For example, when y is binary, a Bernoulli logit/probit likelihood is appropriate. Similarly, for count observations, a Poisson distribution can be used. A Gaussian process [20] specifies a distribution over z(x), and is a stochastic process characterized by a mean function μ(x) and a covariance function Σ(x, x'), which are specified using a kernel function that depends on the observed features x. Assuming a GP prior over z(x) implies that a random variable is associated with every input x, such that given all inputs X = [x_1, x_2, ..., x_N], the joint distribution over z = [z(x_1), z(x_2), ..., z(x_N)] is Gaussian. The GP prior is shown in Eq. 1. Here, μ is a vector with μ(x_i) as its i'th element, Σ is a matrix with Σ(x_i, x_j) as its (i, j)'th entry, and θ are the hyperparameters of the mean and covariance functions. We assume throughout a zero mean function and a squared-exponential covariance function (also known as a radial-basis function or Gaussian kernel) defined as

  Σ(x_i, x_j) = σ^2 exp[−(x_i − x_j)^T (x_i − x_j) / (2s)].

The set of hyperparameters is θ = (s, σ). We also define Ω = Σ^{−1}. Given the GP prior, the observations are modeled using the likelihood shown in Eq. 2. The exact form of the distribution p(y_n | z_n) depends on the type of observations, and different choices instantiate many existing models for GP regression [15, 18, 10, 14]. We consider frequently encountered data such as binary, ordinal, categorical and count observations, and describe their likelihoods in Table 1. For the case of categorical observations, the latent function z is a vector whose k'th element is the latent function for the k'th category. A graphical model for Gaussian process regression is also shown in Table 1.

Given these models, there are three tasks to be performed: posterior inference, prediction at test inputs, and model selection. In all cases, the likelihoods we consider are not conjugate to the Gaussian prior distribution, and as a result the posterior distribution is intractable. Similarly, the integrations required in computing the predictive distribution and the marginal likelihood are intractable. To deal with this intractability we make use of variational methods.
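As a quick illustration of the model in Eqs. 1-2 (our sketch; the inputs and hyperparameter values are arbitrary), the following draws a latent function from the GP prior with the squared-exponential kernel and then draws binary labels through the Bernoulli logit likelihood.

    import numpy as np

    def se_kernel(X, s=1.0, sigma=1.0):
        # Sigma(x_i, x_j) = sigma^2 exp[-(x_i - x_j)^T (x_i - x_j) / (2 s)]
        d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
        return sigma**2 * np.exp(-d2 / (2 * s))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))                        # arbitrary inputs
    Sigma = se_kernel(X) + 1e-8 * np.eye(50)            # small jitter for numerical stability
    z = rng.multivariate_normal(np.zeros(50), Sigma)    # Eq. 1 with a zero mean function
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-z)))       # Eq. 2, Bernoulli logit likelihood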
3 Variational Lower Bound on the Log Marginal Likelihood
Inference and model selection are always problematic in Gaussian process regression with non-conjugate likelihoods, because the marginal likelihood contains an intractable integral. In this section, we derive a tractable variational lower bound on the marginal likelihood. We show that the lower bound takes a well-known form and can be maximized using concave optimization. Throughout the section, we assume scalar z_n, with the extension to the vector case being straightforward.

We begin with the intractable log marginal likelihood L(θ) in Eq. 3 and introduce a variational posterior distribution q(z|γ). We use a Gaussian posterior with mean m and covariance V; the full set of variational parameters is thus γ = {m, V}. As log is a concave function, we obtain a lower bound L_J(θ, γ) using Jensen's inequality, given in Eq. 4. The first integral is simply the (negative) Kullback-Leibler (KL) divergence from the variational Gaussian posterior q(z|m, V) to the GP prior p(z|μ, Σ), as shown in Eq. 5; it has a closed-form expression that we substitute to get the first term (inside square brackets) in Eq. 6, with Ω = Σ^{−1}. The second integral can be expressed in terms of expectations with respect to the marginals q(z_n | m_n, V_nn), as shown in the second term of Eq. 5. Here m_n is the n'th element of m and V_nn is the n'th diagonal element of V, the two variables collectively denoted by γ_n. The lower bound L_J is still intractable, since the expectation of log p(y_n | z_n) is not available in closed form for the distributions listed in Table 1. To derive a tractable lower bound, we make use of local variational bounds (LVBs) f_b, defined such that E[log p(y_n | z_n)] ≥ f_b(y_n, m_n, V_nn), giving us Eq. 6.

  L(θ) = log ∫_z p(z|θ) p(y|z) dz = log ∫_z q(z|γ) [p(z|θ) p(y|z) / q(z|γ)] dz   (3)
       ≥ L_J(θ, γ) := −∫_z q(z|γ) log [q(z|γ) / p(z|θ)] dz + ∫_z q(z|γ) log p(y|z) dz   (4)
       = −D_KL[q(z|γ) || p(z|θ)] + Σ_{n=1}^N E_{q(z_n|γ_n)}[log p(y_n | z_n)]   (5)
       ≥ L_J(θ, γ) := ½[log |VΩ| − tr(VΩ) − (m − μ)^T Ω (m − μ) + N] + Σ_{n=1}^N f_b(y_n, m_n, V_nn).   (6)

We discuss the choice of LVBs in the next section, but first discuss the well-known form that the lower bound of Eq. 6 takes. Given V, the objective as a function of m is a nonlinear least-squares function. Similarly, the function with respect to V is similar to the graphical lasso [8] or covariance selection problem [7], but differs in that the argument is a covariance matrix instead of a precision matrix [8]. These two objective functions are coupled through the non-linear term f_b(·). Usually this term arises from the prior distribution and may be non-smooth, as in the graphical lasso. In our case, this term arises from the likelihood, and it is smooth and concave, as we discuss in the next section. It is straightforward to show that the variational lower bound is strictly concave with respect to γ if f_b is jointly concave with respect to m_n and V_nn. Strict concavity of the terms other than f_b is well known, since both the least-squares and covariance selection problems are concave. Similar concavity results have been discussed by Braun and McAuliffe [5] for the discrete choice model, and more recently by Challis and Barber [6] for the Bayesian linear model, who consider concavity with respect to the Cholesky factor of V. We consider concavity with respect to V instead of its Cholesky factor, which allows us to exploit the special structure of V, as explained in Section 5.

4 Concave Local Variational Bounds
In this section, we describe concave LVBs for various likelihoods. For simplicity, we suppress the dependence on n and consider the log-likelihood of a scalar observation y given a predictor z distributed according to q(z|γ) = N(z|m, v) with γ = {m, v}. We describe the LVBs for the likelihoods given in Table 1, with z being a scalar for count, binary, and ordinal data, but a vector of length K for categorical data, K being the number of classes. When V is a matrix, we denote its diagonal by v. For the Poisson distribution, the expectation is available in closed form and we do not need any bounding: E[log p(y|γ)] = ym − exp(m + v/2) − log y!. This function is jointly concave with respect to m and v, since the exponential is a convex function.
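For the Poisson case just described, the bound of Eq. 6 can therefore be evaluated exactly. The sketch below (ours) computes it; for the other likelihoods in Table 1, the last term would be replaced by the LVB f_b described next.

    import numpy as np
    from scipy.special import gammaln

    def lower_bound_poisson(m, V, mu, Sigma, y):
        # L_J of Eq. 6 with the exact Poisson term
        # E[log p(y_n|z_n)] = y_n m_n - exp(m_n + v_n/2) - log y_n!
        Om = np.linalg.inv(Sigma)
        _, logdet = np.linalg.slogdet(V @ Om)
        kl_term = 0.5 * (logdet - np.trace(V @ Om)
                         - (m - mu) @ Om @ (m - mu) + len(y))
        v = np.diag(V)
        lik_term = np.sum(y * m - np.exp(m + v / 2) - gammaln(y + 1))
        return kl_term + lik_term

Note that kl_term vanishes when q equals the prior (m = μ, V = Σ), as it should, since it is the negative KL divergence.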
We describe the LVBs for the likelihoods given in Table 1 with z being a scalar for count, binary, and ordinal data, but a vector of length K for categorical data, K being the number of classes. When V is a matrix, we denote its diagonal by v. For the Poison distribution, the expectation is available in closed form and we do not need any bounding: E[log p(y|?)] = ym ? exp(m + v/2) ? log y!. This function is jointly concave with respect to m and v since the exponential is a convex function. 3 For binary data, we use the piecewise linear/quadratic bounds proposed by [16], which is a bound on the logistic-log-partition (LLP) function log(1 + exp(x)) and can be used to obtain a bound over the sigmoid function ?(x). The final bound can be expressed as sum of R pieces: E(log p(y|?)) = PR fb (y, m, v) = ym ? r=1 fbr (m, v) where fbr is the expectation of r?th quadratic piece. The function fbr is jointly concave with respect to m, v and their gradients are available in closed-form. An important property of the piecewise bound is that its maximum error is bounded and can be driven to zero by increasing the number of pieces. This means that the lower bound in Eq. 6 can be made arbitrarily tight by increasing the number of pieces. For this reason, this bound always performs better than other existing bounds, such as Jaakola?s bound [12], given that the number of pieces is chosen appropriately. Finally, the cumulative logit likeilhood for ordinal observations depends on ?(x) and its expectation can be bounded using piecewise bounds in a similar way. For the multinomial logit distribution, we can use the bounds proposed by [3] and [4], both leading to concave LVBs. The first bound takes the form fb (y, m, V) = yT m ? lse(m + v/2) with y represented using a 1-of-K encoding. This function is jointly concave with respect to m and v, which can be shown by noting the fact that the log-sum-exp function is convex. The second bound is the product of sigmoids bound proposed by [4] which bounds the likelihood with product of sigmoids (see Eq. 3 in [4]), with each sigmoid bounded using Jaakkola?s bound [12]. We can also use piecewise linear/quadratic bound to bound each sigmoid. Alternatively, we can use the recently proposed stick-breaking likelihood of [14] which uses piecewise bounds as well. Finally, note that the original log-likelihood may not be concave itself, but if it is such that LJ has a unique solution, then designing a concave variational lower bound will allow us to use concave optimization to efficiently maximize the lower bound. 5 Existing Algorithms for Variational Inference In this section, we assume that for each output yn there is a corresponding scalar latent function zn . All our results can be easily extended to the case of multi-class outputs where the latent function is a vector. In variational inference, we find the approximate Gaussian posterior distribution with mean m and covariance V that maximizes Eq. 6. The simplest approach is to use gradient-based methods for optimization, but this can be problematic since the number of variational parameters is quadratic in N due to the covariance matrix V. The authors of [19] speculate that this may perhaps be the reason behind limited use of Gaussian variational approximations. We now show that the problem is simpler than it appears to be, and in fact the number of parameters can be reduced to O(N ) from O(N 2 ). First, we write the gradients with respect to m and v in Eq. 
First, we write the gradients with respect to m and V in Eqs. 7 and 8 and equate them to zero, using g_n^m := ∂f_b(y_n, m_n, v_n)/∂m_n and g_n^v := ∂f_b(y_n, m_n, v_n)/∂v_n. Here g^m and g^v are the vectors of these gradients, and diag(g^v) is the matrix with g^v as its diagonal.

  −Ω(m − μ) + g^m = 0   (7)
  ½(V^{−1} − Ω) + diag(g^v) = 0   (8)

At the solution, we see that V is completely specified if g^v is known. This property can be exploited to reduce the number of variational parameters. Opper and Archambeau [19] (and [18]) propose a reparameterization that reduces the number of parameters to O(N). From the fixed-point equations, we note that at the solution m and V will have the following form,

  V = (Σ^{−1} + diag(λ))^{−1},   (9)
  m = μ + Σα,   (10)

where α and λ are real vectors with λ_d > 0 for all d. At the maximum (but not everywhere), α and λ will be equal to g^m and g^v respectively. Therefore, instead of solving the fixed-point equations to obtain m and V, we can reparameterize the lower bound in terms of α and λ. Substituting Eqs. 9 and 10 into Eq. 6 and simplifying using the matrix inversion and determinant lemmas, we get the following new objective function (for a detailed derivation, see [18]),

  ½[−log(|B_λ| |diag(λ)|) + tr(B_λ^{−1} Ω) − α^T Σ α] + Σ_{n=1}^N f_b(y_n, m_n, V_nn),   (11)

with B_λ = diag(λ)^{−1} + Σ. Since the mapping between {α, λ} and {m, V} is one-to-one, we can recover the latter given the former. The one-to-one relationship also implies that the new objective function has a unique maximum. The new lower bound involves vectors of size N, reducing the number of variational parameters to O(N).

The problem with this reparameterization is that the new lower bound is no longer concave, even though it has a unique maximum. To see this, consider the 1-D case. We collect all the terms involving V from Eq. 6, except the LVB term, to define the function f(V) = [log(V Σ^{−1}) − V Σ^{−1}]/2. We substitute the reparameterization V = (Σ^{−1} + λ)^{−1} to get a new function f(λ) = [−log(1 + Σλ) − (1 + Σλ)^{−1}]/2. The second derivative of this function is f''(λ) = ½[Σ/(1 + Σλ)]^2 (Σλ − 1). Clearly, this derivative is negative for λ < 1/Σ and non-negative otherwise, making the function neither concave nor convex. The objective function is still unimodal, and the maximum of (11) is equal to the maximum of (6). With the reparameterization, however, we lose concavity, and therefore the algorithm may have slow convergence. Our experimental results (Section 7) confirm the slow convergence.

6 Fast Convergent Variational Inference using Coordinate Ascent
We now derive an algorithm that reduces the number of variational parameters to 2N while maintaining concavity. Our algorithm uses simple scalar fixed-point updates to obtain the diagonal elements of V. The complete algorithm is shown in Algorithm 1. To derive it, we first note that the fixed-point equation Eq. 8 has an attractive property: at the solution, the off-diagonal elements of V^{−1} are the same as the off-diagonal elements of Ω, i.e., if we denote K := V^{−1}, then K_ij = Ω_ij for i ≠ j. We need only find the diagonal elements of K to get the full V. This is difficult, however, since the gradient g^v depends on v. We take the approach of optimizing each diagonal element K_ii while fixing all the others (and fixing m as well). We partition V as shown on the left side of Eq. 12, indexing the last row by 2 and the rest of the rows by 1. We consider a similar partitioning of K and Ω. Our goal is to compute v_22 and k_22 given all other elements of K. The matrices K and V are related through blockwise inversion, as shown below.
6 Fast Convergent Variational Inference using Coordinate Ascent

We now derive an algorithm that reduces the number of variational parameters to $2N$ while maintaining concavity. Our algorithm uses simple scalar fixed-point updates to obtain the diagonal elements of $V$. The complete algorithm is shown in Algorithm 1.

To derive the algorithm, we first note that the fixed-point equation Eq. 8 has an attractive property: at the solution, the off-diagonal elements of $V^{-1}$ are the same as the off-diagonal elements of $\Sigma^{-1}$, i.e., if we denote $K := V^{-1}$ and $\Omega := \Sigma^{-1}$, then $K_{ij} = \Omega_{ij}$ for $i \ne j$. We need only find the diagonal elements of $K$ to get the full $V$. This is difficult, however, since the gradient $g^v$ depends on $v$. We take the approach of optimizing each diagonal element $K_{ii}$ while fixing all others (and fixing $m$ as well). We partition $V$ as shown on the left side of Eq. 12, indexing the last row by 2 and the rest of the rows by 1. We consider a similar partitioning of $K$ and $\Omega$. Our goal is to compute $v_{22}$ and $k_{22}$ given all other elements of $K$. The matrices $K$ and $V$ are related through blockwise inversion, as shown below:

$$\begin{pmatrix} V_{11} & v_{12} \\ v_{12}^T & v_{22} \end{pmatrix} = \begin{pmatrix} K_{11}^{-1} + \dfrac{K_{11}^{-1}k_{12}k_{12}^T K_{11}^{-1}}{k_{22} - k_{12}^T K_{11}^{-1} k_{12}} & -\dfrac{K_{11}^{-1}k_{12}}{k_{22} - k_{12}^T K_{11}^{-1} k_{12}} \\[1ex] -\dfrac{k_{12}^T K_{11}^{-1}}{k_{22} - k_{12}^T K_{11}^{-1} k_{12}} & \dfrac{1}{k_{22} - k_{12}^T K_{11}^{-1} k_{12}} \end{pmatrix} \qquad (12)$$

From the bottom-right corner, we have the first relation below, which we simplify further:

$$v_{22} = 1/(k_{22} - k_{12}^T K_{11}^{-1} k_{12}) \;\Longrightarrow\; k_{22} = \tilde k_{22} + 1/v_{22}, \qquad (13)$$

where we define $\tilde k_{22} := k_{12}^T K_{11}^{-1} k_{12}$. We also know from the fixed-point equation Eq. 8 that the optimal $v_{22}$ and $k_{22}$ satisfy Eq. 14 at the solution, where $g_{22}^v$ is the gradient of $\tilde f$ with respect to $v_{22}$. Substituting the value of $k_{22}$ from Eq. 13 into Eq. 14 gives Eq. 15. It is easy to check (by taking the derivative) that the value $v_{22}$ that satisfies this fixed point can be found by maximizing the function defined in Eq. 16:

$$0 = k_{22} - \Omega_{22} + 2g_{22}^v \qquad (14)$$
$$0 = \tilde k_{22} + 1/v_{22} - \Omega_{22} + 2g_{22}^v \qquad (15)$$
$$f(v) = \log(v) - (\Omega_{22} - \tilde k_{22})\,v + 2\tilde f(y_2, m_2, v) \qquad (16)$$

The function $f(v)$ is strictly concave and can be optimized by iterating the following update: $v_{22} \leftarrow 1/(\Omega_{22} - \tilde k_{22} - 2g_{22}^v)$. We refer to this as a "fixed-point iteration". Since all elements of $K$ except $k_{22}$ are fixed, $\tilde k_{22}$ can be computed beforehand and need not be evaluated at every fixed-point iteration. In fact, we do not need to compute it explicitly, since we can obtain its value using Eq. 13: $\tilde k_{22} = k_{22} - 1/v_{22}$, and we do this before starting the fixed-point iterations. The complexity of these iterations depends on the number of gradient evaluations $g_{22}^v$, which is usually constant and very low.

After convergence of the fixed-point iterations, we update $V$ using Eq. 12. It turns out that this is a rank-one update, the complexity of which is $O(N^2)$. To show these updates, let us denote the new values obtained after the fixed-point iterations by $k_{22}^{new}$ and $v_{22}^{new}$, and the old values by $k_{22}^{old}$ and $v_{22}^{old}$. We use the top-right corner of Eq. 12 to get the first equality in Eq. 17, and Eq. 13 to get the second. Similarly, we use the top-left corner of Eq. 12 to get the first equality in Eq. 18, and Eqs. 13 and 17 to get the second:

$$K_{11}^{-1}k_{12} = -(k_{22}^{old} - \tilde k_{22})\,v_{12}^{old} = -v_{12}^{old}/v_{22}^{old} \qquad (17)$$
$$K_{11}^{-1} = V_{11}^{old} - \frac{K_{11}^{-1}k_{12}k_{12}^T K_{11}^{-1}}{k_{22}^{old} - \tilde k_{22}} = V_{11}^{old} - v_{12}^{old}(v_{12}^{old})^T / v_{22}^{old} \qquad (18)$$

Note that neither $K_{11}^{-1}$ nor $k_{12}$ changes after the fixed-point iterations. We use this fact to obtain $V^{new}$: we use Eq. 12 to write updates for $V^{new}$, and use Eqs. 17, 18, and 13 to simplify:

$$v_{12}^{new} = -\frac{K_{11}^{-1}k_{12}}{k_{22}^{new} - \tilde k_{22}} = \frac{v_{22}^{new}}{v_{22}^{old}}\, v_{12}^{old} \qquad (19)$$
$$V_{11}^{new} = K_{11}^{-1} + \frac{K_{11}^{-1}k_{12}k_{12}^T K_{11}^{-1}}{k_{22}^{new} - \tilde k_{22}} = V_{11}^{old} + \frac{v_{22}^{new} - v_{22}^{old}}{(v_{22}^{old})^2}\, v_{12}^{old}(v_{12}^{old})^T \qquad (20)$$

After updating $V$, we update $m$ by optimizing the following non-linear least-squares problem:

$$\max_m\; -\tfrac{1}{2}(m - \mu)^T \Omega\,(m - \mu) + \sum_{n=1}^{N} \tilde f(y_n, m_n, V_{nn}) \qquad (21)$$

We use Newton's method, the cost of which is $O(N^3)$.

6.1 Computational complexity

The final procedure is shown in Algorithm 1. The main advantage of our algorithm is its fast convergence, as we show in the results section. The overall computational complexity is $O(N^3 + \sum_n I_n^{fp})$. The first term is due to the $O(N^2)$ update of $V$ for all $n$ and also due to the optimization of $m$. The second term is for the $I_n^{fp}$ fixed-point iterations, the total cost of which is linear in $N$ due to the summation. In all our experiments, $I_n^{fp}$ is usually 3 to 5, adding very little cost.
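The identities in Eqs. 13 and 17-20 can be checked numerically. The following sketch (our own, with an arbitrary random positive-definite $K$) perturbs a single diagonal entry of $K$ and confirms that the rank-one formulas reproduce the direct inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)            # a positive-definite precision matrix
V = np.linalg.inv(K)

K11, k12, k22 = K[:-1, :-1], K[:-1, -1], K[-1, -1]
V11, v12, v22 = V[:-1, :-1], V[:-1, -1], V[-1, -1]

k22_tilde = k12 @ np.linalg.solve(K11, k12)
assert np.isclose(k22, k22_tilde + 1.0 / v22)                   # Eq. 13

# Change k22 (as a fixed-point update would) and form the new inverse two ways.
k22_new = k22 + 0.8
v22_new = 1.0 / (k22_new - k22_tilde)
v12_new = v12 * v22_new / v22                                   # Eq. 19
V11_new = V11 + (v22_new - v22) / v22**2 * np.outer(v12, v12)   # Eq. 20

K_new = K.copy(); K_new[-1, -1] = k22_new
V_direct = np.linalg.inv(K_new)
assert np.allclose(V_direct[:-1, :-1], V11_new)
assert np.allclose(V_direct[:-1, -1], v12_new)
assert np.isclose(V_direct[-1, -1], v22_new)
print("rank-one update matches direct inversion")
```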
6.2 Proof of convergence

Proposition 2.7.1 in [2] states that a coordinate-ascent algorithm converges if the maximization with respect to each coordinate is uniquely attained. This is indeed the case for us, since each fixed-point iteration solves a concave problem of the form given by Eq. 16. Similarly, the optimization with respect to $m$ is also strictly concave. Hence, convergence of our algorithm is assured.

6.3 Proof that V will always be positive definite

Let us assume that we start with a positive-definite $K$; for example, we can initialize it with $\Omega$. Now consider the update of $v_{22}$ and $k_{22}$. Note that $v_{22}^{new}$ will be positive, since it is the maximum of Eq. 16, which involves the log term. Using this and Eq. 13, we get $k_{22}^{new} > k_{12}^T K_{11}^{-1} k_{12}$. Hence, the Schur complement $k_{22}^{new} - k_{12}^T K_{11}^{-1} k_{12} > 0$. Using this and the fact that $K_{11}$ is positive definite, it follows that $K^{new}$ will also be positive definite, and hence so will $V^{new}$.

Algorithm 1: Fast convergent coordinate-ascent algorithm
1. Initialize $K \leftarrow \Omega$, $V \leftarrow \Omega^{-1}$, $m \leftarrow \mu$, where $\Omega := \Sigma^{-1}$.
2. Alternate between updating the diagonal of $V$ and then $m$ until convergence, as follows:
   (a) Update the $i$-th diagonal of $V$ for all $i = 1, \dots, N$:
       i. Rearrange $V$ and $\Omega$ so that the $i$-th column is the last one.
       ii. $\tilde k_{22} \leftarrow k_{22} - 1/v_{22}$.
       iii. Store the old value $v_{22}^{old} \leftarrow v_{22}$.
       iv. Run fixed-point iterations for a few steps: $v_{22} \leftarrow 1/(\Omega_{22} - \tilde k_{22} - 2g_{22}^v)$.
       v. Update $V$:
          A. $V_{11} \leftarrow V_{11} + (v_{22} - v_{22}^{old})\, v_{12}v_{12}^T/(v_{22}^{old})^2$.
          B. $v_{12} \leftarrow v_{22}\, v_{12}/v_{22}^{old}$.
       vi. Update $k_{22} \leftarrow \tilde k_{22} + 1/v_{22}$.
   (b) Update $m$ by maximizing the least-squares problem of Eq. 21.
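To make Algorithm 1 concrete, here is a minimal sketch of the coordinate-ascent procedure for the Poisson likelihood, where the LVB is exact and its gradients $g^m = y - \exp(m + v/2)$ and $g^v = -\tfrac{1}{2}\exp(m + v/2)$ are closed form; for logit likelihoods one would substitute the piecewise-bound gradients. Function and variable names are ours, and the sketch omits convergence checks.

```python
import numpy as np

def fit_poisson_gp(y, Sigma, mu, n_sweeps=20, n_fp=5):
    """Sketch of Algorithm 1 for a Poisson likelihood (closed-form LVB gradients)."""
    N = len(y)
    Omega = np.linalg.inv(Sigma)      # Omega := Sigma^{-1}
    K = Omega.copy()                  # K := V^{-1}; only its diagonal changes
    V = Sigma.copy()
    m = mu.astype(float).copy()
    for _ in range(n_sweeps):
        for i in range(N):            # update the i-th diagonal of V
            v_old = V[i, i]
            k_tilde = K[i, i] - 1.0 / v_old                  # Eq. 13
            v = v_old
            for _ in range(n_fp):                            # fixed-point iterations
                gv = -0.5 * np.exp(m[i] + 0.5 * v)
                v = 1.0 / (Omega[i, i] - k_tilde - 2.0 * gv)
            v12 = V[:, i].copy(); v12[i] = 0.0               # off-diagonal column of V
            V += (v - v_old) / v_old**2 * np.outer(v12, v12) # Eq. 20
            V[:, i] = v12 * (v / v_old)                      # Eq. 19
            V[i, :] = V[:, i]
            V[i, i] = v
            K[i, i] = k_tilde + 1.0 / v                      # Eq. 13 again
        for _ in range(5):            # m-step (Eq. 21) by Newton's method
            rate = np.exp(m + 0.5 * np.diag(V))
            grad = -Omega @ (m - mu) + (y - rate)
            hess = -Omega - np.diag(rate)
            m = m - np.linalg.solve(hess, grad)
    return m, V

# Toy usage: squared-exponential prior on 30 inputs (illustrative values).
rng = np.random.default_rng(1)
x = np.linspace(0, 5, 30)
Sigma = np.exp(-0.5 * (x[:, None] - x[None, :])**2) + 1e-6 * np.eye(30)
mu = np.zeros(30)
y = rng.poisson(np.exp(np.sin(x))).astype(float)
m_hat, V_hat = fit_poisson_gp(y, Sigma, mu)
```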
7 Results

We now show that the proposed algorithm leads to a significant gain in the speed of Gaussian process regression. The software to reproduce the results of this section is available online at http://www.cs.ubc.ca/emtiyaz/software/codeNIPS2012.html.

We evaluate the performance of our fast variational inference algorithm against existing inference methods for binary and multi-class classification. For binary classification, we use the UCI ionosphere data set (351 data examples with 34 features). For multi-class classification, we use the UCI forensic glass data set (214 data examples, each with a 6-category output and a feature vector of length 8). In both cases, we use 80% of the data set for training and the rest for testing. We consider GP classification using the Bernoulli-logit likelihood, for which we use the piecewise bound of [16] with 20 pieces.

We compare our algorithm with the approach of Opper and Archambeau [19] (Eq. 11). For the latter, we use the L-BFGS method for optimization. We also compared to the naive method of optimizing with respect to the full $m$ and $V$, e.g., the method of [5], but do not present these results since those algorithms have very slow convergence.

We examine the computational cost of each method in terms of the number of floating-point operations (flops) for four hyperparameter settings $\theta = \{\log(s), \log(\sigma)\}$. This comparison is shown in Figure 1(a). The y-axis shows the (negative of the) value of the lower bound, and the x-axis shows the number of flops. We draw markers at iterations 1, 2, 4, 50, and in steps of 50 from then on. In all cases, due to non-concavity, the optimization of the Opper and Archambeau reparameterization (black curve with squares) converges slowly, passing through flat regions of the objective and requiring a large number of computations to reach convergence. The proposed algorithm (blue curve with circles) converges consistently faster than the existing method. For this data set, our algorithm always converged in 5 iterations.

We also compare the total cost to convergence, where we count the total number of flops until the successive increase in the objective function is below $10^{-3}$. In the tables below, each entry is a different setting of $\{\log(s), \log(\sigma)\}$: rows correspond to values of $\log(s)$, columns to values of $\log(\sigma)$, and the units M, G, T denote Mega-, Giga-, and Tera-flops. The proposed algorithm takes a much smaller number of operations than the existing algorithm.

Opper and Archambeau:
              log(sigma)=-1   log(sigma)=1   log(sigma)=3
  log(s)=-1   20G             212G           6T
  log(s)=1    101G            24T            24T
  log(s)=3    38G             1T             24T

Proposed algorithm:
              log(sigma)=-1   log(sigma)=1   log(sigma)=3
  log(s)=-1   6M              7M             7M
  log(s)=1    26M             20M            22M
  log(s)=3    47M             81M            75M

We also applied our method to two more data sets from [18], namely 'sonar' and 'usps-3vs5', and observed similar behavior.

Next, we apply our algorithm to the problem of multi-class classification, following [14], using the stick-breaking likelihood, and compare to inference using the approach of Opper and Archambeau [19] (Eq. 11). We show results comparing the lower bound against the number of flops in Figure 1(b), for four hyperparameter settings $\{\log(s), \log(\sigma)\}$. We show markers at iterations 1, 2, 10, 100, and every 100th iteration thereafter. The results follow those discussed for binary classification: both methods reach the same lower-bound value, but the existing approach converges much more slowly, while our algorithm always converged within 20 iterations.

[Figure 1: Convergence results for (a) binary classification on the ionosphere data set and (b) multi-class classification on the glass data set. Each panel plots the negative of the lower bound (Neg-LogLik) against the number of mega-flops for one hyperparameter setting $\{\log(s), \log(\sigma)\}$, shown at the top of the panel; settings include (-1.0, -1.0), (-1.0, 2.5), (1.0, 1.0), (2.5, 2.5), and (3.5, 3.5). The proposed algorithm always converges faster than the other method, in fact in fewer than 5 iterations.]

8 Discussion

In this paper we have presented a new variational inference algorithm for non-conjugate GP regression. We derived a concave variational lower bound to the log marginal likelihood and used concavity to develop an efficient optimization algorithm. We demonstrated the efficacy of the new algorithm on both binary and multi-class GP classification, showing a significant improvement in convergence.

Our proposed algorithm is related to many existing methods for GP regression. For example, the objective function that we consider is exactly the KL-minimization method discussed in [18], for which a gradient-based optimization was used. Our algorithm uses an efficient approach in which we update the marginals of the posterior and then perform a rank-one update of the covariance matrix. Our results show that this leads to fast convergence. Our algorithm also takes a similar form to the popular EP algorithm [17], e.g., see Algorithm 3.5 in [20]. Both EP and our algorithm update the posterior marginals, followed by a rank-one update of the covariance.
Therefore, the computational complexity of our approach is similar to that of EP. The advantage of our approach is that, unlike EP, it does not suffer from numerical issues (for example, negative variances) and is guaranteed to converge.

The derivation of our algorithm is based on the observation that the posterior covariance has a special structure, and it does not directly use the concavity of the lower bound. An alternative derivation based on Fenchel duality exists and shows that the fixed-point iterations compute dual variables which are related to the gradients of $\tilde f$. We skip this derivation since it is tedious, and present the more intuitive derivation instead. The alternative derivation will be made available in an online appendix.

Acknowledgements

We thank the reviewers for their valuable suggestions. SM is supported by the Canadian Institute for Advanced Research (CIFAR).

References

[1] J. Albert and S. Chib. Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422):669-679, 1993.
[2] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, second edition, 1999.
[3] D. Blei and J. Lafferty. Correlated topic models. In Advances in Neural Information Processing Systems, 2006.
[4] G. Bouchard. Efficient bounds for the softmax and applications to approximate inference in hybrid models. In NIPS 2007 Workshop on Approximate Inference in Hybrid Models, 2007.
[5] M. Braun and J. McAuliffe. Variational inference for large-scale models of discrete choice. Journal of the American Statistical Association, 105(489):324-335, 2010.
[6] E. Challis and D. Barber. Concave Gaussian variational approximations for inference in large-scale Bayesian linear models. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 6, page 7, 2011.
[7] A. Dempster. Covariance selection. Biometrics, 28(1), 1972.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432, 2008.
[9] S. Frühwirth-Schnatter and R. Frühwirth. Data augmentation and MCMC for binary and multinomial logit models. Statistical Modelling and Regression Structures, pages 111-132, 2010.
[10] M. Girolami and S. Rogers. Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Computation, 18(8):1790-1817, 2006.
[11] C. Holmes and L. Held. Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis, 1(1):145-168, 2006.
[12] T. Jaakkola and M. Jordan. A variational approach to Bayesian logistic regression problems and their extensions. In AI and Statistics, 1996.
[13] P. Jylänki, J. Vanhatalo, and A. Vehtari. Robust Gaussian process regression with a Student-t likelihood. Journal of Machine Learning Research, 12:3227-3257, 2011.
[14] M. Khan, S. Mohamed, B. Marlin, and K. Murphy. A stick-breaking likelihood for categorical data analysis with latent Gaussian models. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2012.
[15] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679-1704, 2005.
[16] B. Marlin, M. Khan, and K. Murphy. Piecewise bounds for estimating Bernoulli-logistic latent Gaussian models. In International Conference on Machine Learning, 2011.
[17] T. Minka. Expectation propagation for approximate Bayesian inference. In UAI, 2001.
[18] H. Nickisch and C. E. Rasmussen.
Approximations for binary Gaussian process classification. Journal of Machine Learning Research, 9(10), 2008.
[19] M. Opper and C. Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786-792, 2009.
[20] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[21] H. Rue, S. Martino, and N. Chopin. Approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations. Journal of the Royal Statistical Society, Series B, 71:319-392, 2009.
[22] S. L. Scott. Data augmentation, frequentist estimation, and the Bayesian analysis of multinomial logit models. Statistical Papers, 52(1):87-109, 2011.
[23] M. Seeger. Bayesian inference and optimal design in the sparse linear model. Journal of Machine Learning Research, 9:759-813, 2008.
[24] M. Seeger and H. Nickisch. Fast convergent algorithms for expectation propagation approximate Bayesian inference. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2011.
A Nonparametric Conjugate Prior Distribution for the Maximizing Argument of a Noisy Function

Pedro A. Ortega (Max Planck Institute for Intelligent Systems; Max Planck Institute for Biological Cybernetics) [email protected]
Jordi Grau-Moya (Max Planck Institute for Intelligent Systems; Max Planck Institute for Biological Cybernetics) [email protected]
Tim Genewein (Max Planck Institute for Intelligent Systems; Max Planck Institute for Biological Cybernetics) [email protected]
David Balduzzi (Max Planck Institute for Intelligent Systems) [email protected]
Daniel A. Braun (Max Planck Institute for Intelligent Systems; Max Planck Institute for Biological Cybernetics) [email protected]

Abstract

We propose a novel Bayesian approach to solving stochastic optimization problems that involve finding extrema of noisy, nonlinear functions. Previous work has focused on representing possible functions explicitly, which leads to a two-step procedure of first doing inference over the function space and then finding the extrema of these functions. Here we skip the representation step and directly model the distribution over extrema. To this end, we devise a nonparametric conjugate prior based on a kernel regressor. The resulting posterior distribution directly captures the uncertainty over the maximum of the unknown function. Given $t$ observations of the function, the posterior can be evaluated efficiently in time $O(t^2)$ up to a multiplicative constant. Finally, we show how to apply our model to optimize a noisy, non-convex, high-dimensional objective function.

1 Introduction

Historically, the fields of statistical inference and stochastic optimization have often developed their own specific methods and approaches. Recently, however, there has been growing interest in applying inference-based methods to optimization problems and vice versa [1-4]. Here we consider stochastic optimization problems where we observe noise-contaminated values from an unknown nonlinear function and want to find the input that maximizes the expected value of this function.

The problem statement is as follows. Let $\mathcal{X}$ be a metric space. Consider a stochastic function $f: \mathcal{X} \rightsquigarrow \mathbb{R}$ mapping a test point $x \in \mathcal{X}$ to real values $y \in \mathbb{R}$, characterized by the conditional pdf $P(y|x)$. Consider the mean function
$$\bar f(x) := E[y|x] = \int y\, P(y|x)\, dy. \qquad (1)$$
The goal consists in modeling the optimal test point
$$x^* := \arg\max_x \{\bar f(x)\}. \qquad (2)$$

[Figure 1: a) Given an estimate $h$ of the mean function $\bar f$ (left), a simple probability density function over the location of the maximum $x^*$ is obtained using the transformation $P(x^*) \propto \exp\{\alpha h(x^*)\}$, where $\alpha > 0$ plays the role of the precision (right). b) Illustration of the Gramian matrix for different test locations. Locations that are close to each other produce large off-diagonal entries.]

Classic approaches to solving this problem are often based on stochastic approximation methods [5]. Within the context of statistical inference, Bayesian optimization methods have been developed where a prior distribution over the space of functions is assumed and uncertainty is tracked during the entire optimization process [6, 7]. In particular, non-parametric Bayesian approaches such as Gaussian processes have been applied to derivative-free optimization [8, 9], also within the context of the continuum-armed bandit problem [10]. Typically, these Bayesian approaches aim to explicitly represent the unknown objective function of (1) by entertaining a posterior distribution over the space of objective functions. In contrast, we aim to directly model the distribution of the maximum of (2) conditioned on observations.
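To make the problem statement concrete, the following toy sketch (the function and noise level are our own choices) approximates the mean function (1) by averaging repeated noisy draws and then maximizes the estimate as in (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy stochastic function: y | x ~ N(fbar(x), 0.5^2), with mean fbar below.
fbar = lambda x: np.cos(2 * x) * np.exp(-0.1 * x)
sample_f = lambda x: fbar(x) + 0.5 * rng.standard_normal(np.shape(x))

# Estimate fbar on a grid by averaging repeated noisy draws (Eq. 1),
# then take the maximizing input (Eq. 2).
grid = np.linspace(0, 3, 301)
estimates = np.mean([sample_f(grid) for _ in range(200)], axis=0)
x_star = grid[np.argmax(estimates)]
print("estimated maximizer:", x_star)
```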
2 Brief Description

Our model is intuitively straightforward and easy to implement (implementations can be downloaded from http://www.adaptiveagents.org/argmaxprior). Let $h(x): \mathcal{X} \to \mathbb{R}$ be an estimate of the mean $\bar f(x)$ constructed from data $D_t := \{(x_i, y_i)\}_{i=1}^t$ (Figure 1a, left). This estimate can easily be converted into a posterior pdf over the location of the maximum by first multiplying it with a precision parameter $\alpha > 0$ and then taking the normalized exponential (Figure 1a, right):
$$P(x^*|D_t) \propto \exp\{\alpha \cdot h(x^*)\}.$$
In this transformation, the precision parameter $\alpha$ controls the certainty we have over our estimate of the maximizing argument: $\alpha \to 0$ expresses almost no certainty, while $\alpha \to \infty$ expresses certainty. The rationale for the precision is: the more distinct inputs we test, the higher the precision; testing the same (or similar) inputs only provides local information and therefore should not increase our knowledge about the global maximum. A simple and effective way of implementing this idea is given by
$$P(x^*|D_t) \propto \exp\Biggl\{\alpha \cdot \underbrace{\Biggl(\beta + t\,\frac{\sum_i K(x_i, x_i)}{\sum_i \sum_j K(x_i, x_j)}\Biggr)}_{\text{effective \# of locations}} \cdot \underbrace{\frac{\sum_i K(x_i, x^*)\, y_i + K_0(x^*)\, y_0(x^*)}{\sum_i K(x_i, x^*) + K_0(x^*)}}_{\text{estimate of } \bar f(x^*)}\Biggr\}, \qquad (3)$$
where $\alpha$, $\beta$, $K$, $K_0$, and $y_0$ are parameters of the estimator: $\alpha > 0$ is the precision we gain for each new distinct observation; $\beta > 0$ is the number of prior points; $K: \mathcal{X} \times \mathcal{X} \to \mathbb{R}^+$ is a symmetric kernel function; $K_0: \mathcal{X} \to \mathbb{R}^+$ is a prior precision function; and $y_0: \mathcal{X} \to \mathbb{R}$ is a prior estimate of $\bar f$. In (3), the mean function $\bar f$ is estimated with a kernel regressor [11] that combines the function observations with a prior estimate of the function, and the total effective number of locations is calculated as the sum of the prior locations $\beta$ and the number of distinct locations in the data $D_t$. The latter is estimated by multiplying the number of data points $t$ with the coefficient
$$\frac{\sum_i K(x_i, x_i)}{\sum_i \sum_j K(x_i, x_j)} \in (0, 1], \qquad (4)$$
i.e., the ratio between the trace of the Gramian matrix $(K(x_i, x_j))_{i,j}$ and the sum of its entries. Inputs that are very close to each other will have overlapping kernels, resulting in large off-diagonal entries of the Gramian matrix and hence decreasing the number of distinct locations (Figure 1b). For example, if we have $t$ observations from $n \le t$ locations, and each location has $t/n$ observations, then the coefficient (4) is equal to $n/t$ and hence the number of distinct locations is exactly $n$, as expected.

[Figure 2: Illustration of the posterior distribution over the maximizing argument for 10, 100, and 1000 observations drawn from a function with varying noise (panels: "Noisy Function", "10 Data Points", "100 Data Points", "1000 Data Points"). The top-left panel illustrates the function and the variance bounds (one standard deviation). The observations in the center region close to $x = 1.5$ are very noisy. It can be seen that the prior gets progressively washed out with more observations.]

Figure 2 illustrates the behavior of the posterior distribution. The expression for the posterior can be calculated up to a constant factor in $O(t)$ time. The computation of the normalizing constant is in general intractable. Therefore, our proposed posterior can easily be combined with Markov chain Monte Carlo (MCMC) methods to implement stochastic optimizers, as will be illustrated in Section 4.
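For reference, here is a minimal sketch (ours, not the authors' released implementation linked above) that evaluates the unnormalized posterior (3) on a one-dimensional grid; the default parameter values are illustrative assumptions.

```python
import numpy as np

def argmax_posterior(grid, X, y, alpha=1.0, beta=1.0, ell=0.2,
                     K0=lambda x: 1.0, y0=lambda x: np.zeros_like(x)):
    """Posterior over x* from Eq. (3), evaluated on a grid (Gaussian kernel, width ell)."""
    K = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    Kxx = K(X, X)
    t_eff = len(X) * np.trace(Kxx) / Kxx.sum()      # effective # of locations, Eq. (4)
    Kg = K(X, grid)                                 # kernel between data and grid
    h = (Kg.T @ y + K0(grid) * y0(grid)) / (Kg.sum(axis=0) + K0(grid))  # kernel regressor
    logp = alpha * (beta + t_eff) * h
    p = np.exp(logp - logp.max())                   # subtract max for numerical stability
    return p / p.sum()

X = np.array([0.5, 1.0, 1.4, 2.2]); y = np.array([0.1, 0.9, 1.2, -0.3])
grid = np.linspace(0, 3, 300)
post = argmax_posterior(grid, X, y)
print("posterior mode:", grid[np.argmax(post)])
```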
3 Derivation

3.1 Function-Based, Indirect Model

Our first task is to derive an indirect Bayesian model for the optimal test point that builds its estimate via the underlying function space. Let $\mathcal{G}$ be the set of hypotheses, and assume that each hypothesis $g \in \mathcal{G}$ corresponds to a stochastic mapping $g: \mathcal{X} \rightsquigarrow \mathbb{R}$. Let $P(g)$ be the prior over $\mathcal{G}$ (for the sake of simplicity, we neglect issues of measurability of $\mathcal{G}$), and let the likelihood be $P(\{y_t\}|g, \{x_t\}) = \prod_t P(y_t|g, x_t)$. Then, the posterior of $g$ is given by
$$P(g|\{y_t\}, \{x_t\}) = \frac{P(g)\, P(\{y_t\}|g, \{x_t\})}{P(\{y_t\}|\{x_t\})} = \frac{P(g) \prod_t P(y_t|g, x_t)}{P(\{y_t\}|\{x_t\})}. \qquad (5)$$
For each $x^* \in \mathcal{X}$, let $\mathcal{G}(x^*) \subseteq \mathcal{G}$ be the subset of functions such that for all $g \in \mathcal{G}(x^*)$, $x^* = \arg\max_x \{\bar g(x)\}$ (we assume that the mean function $\bar g$ is bounded and that it has a unique maximizing test point). Then, the posterior over the optimal test point $x^*$ is given by
$$P(x^*|\{y_t\}, \{x_t\}) = \int_{\mathcal{G}(x^*)} P(g|\{y_t\}, \{x_t\})\, dg. \qquad (6)$$
This model has two important drawbacks: (a) it relies on modeling the entire function space $\mathcal{G}$, which is potentially much more complex than necessary; (b) it requires calculating the integral (6), which is intractable for virtually all real-world problems.

3.2 Domain-Based, Direct Model

We want to arrive at a Bayesian model that bypasses the integration step suggested by (6) and directly models the location of the optimal test point $x^*$. The following theorem explains how this direct model relates to the previous model.

Theorem 1. The Bayesian model for the optimal test point $x^*$ is given by
$$P(x^*) = \int_{\mathcal{G}(x^*)} P(g)\, dg \qquad \text{(prior)}$$
$$P(y_t|x^*, x_t, D_{t-1}) = \frac{\int_{\mathcal{G}(x^*)} P(g)\, P(y_t|g, x_t) \prod_{k=1}^{t-1} P(y_k|g, x_k)\, dg}{\int_{\mathcal{G}(x^*)} P(g) \prod_{k=1}^{t-1} P(y_k|g, x_k)\, dg} \qquad \text{(likelihood)}$$
where $D_t := \{(x_k, y_k)\}_{k=1}^t$ is the set of past tests.

Proof. Using Bayes' rule, the posterior distribution $P(x^*|\{y_t\}, \{x_t\})$ can be rewritten as
$$\frac{P(x^*) \prod_t P(y_t|x^*, x_t, D_{t-1})}{P(\{y_t\}|\{x_t\})}. \qquad (7)$$
Since this posterior is equal to (6), one concludes (using (5)) that
$$P(x^*) \prod_t P(y_t|x^*, x_t, D_{t-1}) = \int_{\mathcal{G}(x^*)} P(g) \prod_t P(y_t|g, x_t)\, dg.$$
Note that this expression corresponds to the joint $P(x^*, \{y_t\}|\{x_t\})$. The prior $P(x^*)$ is obtained by setting $t = 0$. The likelihood is obtained as the fraction
$$P(y_t|x^*, x_t, D_{t-1}) = \frac{P(x^*, \{y_k\}_{k=1}^{t}\,|\,\{x_k\}_{k=1}^{t})}{P(x^*, \{y_k\}_{k=1}^{t-1}\,|\,\{x_k\}_{k=1}^{t-1})},$$
where it should be noted that the denominator $P(x^*, \{y_k\}_{k=1}^{t-1}|\{x_k\}_{k=1}^{t-1})$ does not change if we add the condition $x_t$.

From Theorem 1 it is seen that although the likelihood model $P(y_t|g, x_t)$ for the indirect model is i.i.d. at each test point, the likelihood model $P(y_t|x^*, x_t, D_{t-1})$ for the direct model depends on the past tests $D_{t-1}$; that is, it is adaptive. More critically, though, the likelihood function's internal structure for the direct model corresponds to an integration over function space as well, thus inheriting all the difficulties of the indirect model.
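Theorem 1 can be sanity-checked by brute force when the function class is finite. The following sketch (a toy construction of ours) enumerates a small discrete $\mathcal{G}$, computes the posterior over $g$ via (5), and integrates it over the equivalence classes $\mathcal{G}(x^*)$ as in (6); by Theorem 1, the same posterior would be obtained from the direct model's prior and adaptive likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 5)                     # test points
# A small finite function class G: shifted sinusoids on the grid.
G = [np.sin(2 * np.pi * (grid + s)) for s in np.linspace(0, 0.9, 10)]
prior = np.full(len(G), 1.0 / len(G))
noise = 0.3

# Observe a few noisy values from the "true" function G[3].
xs_idx = rng.integers(0, len(grid), size=8)
ys = np.array([G[3][i] + noise * rng.standard_normal() for i in xs_idx])

# Posterior over g (Eq. 5), then marginalize onto argmax classes G(x*) (Eq. 6).
loglik = np.array([sum(-0.5 * ((y - g[i]) / noise)**2 for i, y in zip(xs_idx, ys))
                   for g in G])
post_g = prior * np.exp(loglik - loglik.max())
post_g /= post_g.sum()
post_xstar = np.zeros(len(grid))
for g, p in zip(G, post_g):
    post_xstar[np.argmax(g)] += p               # integrate over G(x*)
print(post_xstar)
```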
3.3 Abstract Properties of the Likelihood Function

There is a way to bypass modeling the function space explicitly if we make a few additional assumptions. We assume that for any $g \in \mathcal{G}(x^*)$, the mean function $\bar g$ is continuous and has a unique maximum. Then, the crucial insight consists in realizing that the value of the mean function $\bar g$ inside a sufficiently small neighborhood of $x^*$ is larger than its value outside of it (see Figure 3a). For any $\epsilon > 0$ and any $z \in \mathcal{X}$, let $B_\epsilon(z)$ denote the open $\epsilon$-ball centered on $z$. The functions in $\mathcal{G}$ fulfill the following properties:

a. Continuous: every function $g \in \mathcal{G}$ is such that its mean $\bar g$ is continuous and bounded.
b. Maximum: for any $x^* \in \mathcal{X}$, the functions $g \in \mathcal{G}(x^*)$ are such that for all $\epsilon > 0$ and all $z \notin B_\epsilon(x^*)$, $\bar g(x^*) > \bar g(z)$.

Furthermore, we impose a symmetry condition on the likelihood function. Let $x_1^*$ and $x_2^*$ be in $\mathcal{X}$, and consider their associated equivalence classes $\mathcal{G}(x_1^*)$ and $\mathcal{G}(x_2^*)$. There is no reason for them to be very different: in fact, they should be virtually indistinguishable outside of the neighborhoods of $x_1^*$ and $x_2^*$. It is only inside the neighborhood of $x_1^*$ that $\mathcal{G}(x_1^*)$ becomes distinguishable from the other equivalence classes, because the functions in $\mathcal{G}(x_1^*)$ systematically predict higher values than the rest. This assumption is illustrated in Figure 3b.

[Figure 3: Illustration of assumptions. a) Three functions from $\mathcal{G}(x^*)$; they all have their maximum located at $x^* \in \mathcal{X}$. b) Schematic representation of the likelihood function of $x^* \in \mathcal{X}$ conditioned on a few observations; the curve corresponds to the mean and the shaded area to the confidence bounds. The density inside the neighborhood is unique to the hypothesis $x^*$, while the density outside is shared amongst all the hypotheses. c) The log-likelihood ratio of the hypotheses $x_1^*$ and $x_2^*$ as a function of the test point $x$; the kernel used in the plot is Gaussian.]

In fact, taking the log-likelihood ratio of two competing hypotheses,
$$\log \frac{P(y_t|x_1^*, x_t, D_{t-1})}{P(y_t|x_2^*, x_t, D_{t-1})},$$
for a given test location $x_t$ should give a value equal to zero unless $x_t$ is inside the vicinity of $x_1^*$ or $x_2^*$ (see Figure 3c). In other words, the amount of evidence a hypothesis gets when the test point is outside of its neighborhood is essentially zero (i.e., it is the same as the amount of evidence that most of the other hypotheses get).

3.4 Likelihood and Conjugate Prior

Following our previous discussion, we propose the following likelihood model. Given the previous data $D_{t-1}$ and a test point $x_t \in \mathcal{X}$, the likelihood of the observation $y_t$ is
$$P(y_t|x^*, x_t, D_{t-1}) = \frac{\rho(y_t|x_t, D_{t-1})}{Z(x_t, D_{t-1})} \exp\bigl\{\lambda_t \cdot h_t(x^*) - \lambda_{t-1} \cdot h_{t-1}(x^*)\bigr\}, \qquad (8)$$
where $Z(x_t, D_{t-1})$ is a normalizing constant; $\rho(y_t|x_t, D_{t-1})$ is a posterior probability over $y_t$ given $x_t$ and the data $D_{t-1}$; $\lambda_t$ is a precision measuring the knowledge we have about the whole function; and $h_t$ is an estimate of the mean function $\bar f$. We have chosen the precision $\lambda_t$ as
$$\lambda_t := \alpha \cdot \Biggl(\beta + t\, \frac{\sum_i K(x_i, x_i)}{\sum_i \sum_j K(x_i, x_j)}\Biggr),$$
where $\alpha > 0$ is a scaling parameter, $\beta > 0$ is a parameter representing the number of prior locations tested, and $K: \mathcal{X} \times \mathcal{X} \to \mathbb{R}^+$ is a symmetric kernel function (we refer the reader to the kernel regression literature for an analysis of the choice of kernel functions). For the estimate $h_t$, we have chosen a Nadaraya-Watson kernel regressor [11]:
$$h_t(x^*) := \frac{\sum_{i=1}^{t} K(x_i, x^*)\, y_i + K_0(x^*)\, y_0(x^*)}{\sum_{i=1}^{t} K(x_i, x^*) + K_0(x^*)}.$$
In the last expression, $y_0$ corresponds to a prior estimate of $\bar f$ with prior precision $K_0$. Inspecting (8), we see that the likelihood model favours positive changes to the estimated mean function from new, unseen test locations. The pdf $\rho(y_t|x_t, D_{t-1})$ does not need to be explicitly defined, as it will later drop out when computing the posterior. The only formal requirement is that it should be independent of the hypothesis $x^*$. We propose the conjugate prior
$$P(x^*) = \frac{1}{Z_0} \exp\{\lambda_0 \cdot h_0(x^*)\} = \frac{1}{Z_0} \exp\{\alpha\beta \cdot y_0(x^*)\}. \qquad (9)$$
The only formal requirement is that it should be independent of the hypothesis x? . We propose the conjugate prior P (x? ) = 4 1 1 exp{?0 ? g0 (x? )} = exp{? ? y0 (x? )}. Z0 Z0 We refer the reader to the kernel regression literature for an analysis of the choice of kernel functions. 5 (9) The conjugate prior just encodes a prior estimate of the mean function. In a practical optimization application, it serves the purpose of guiding the exploration of the domain, as locations x? with high prior value y0 (x? ) are more likely to contain the maximizing argument. Given a set of data points Dt , the prior (9) and the likelihood (8) lead to a posterior given by Qt P (x? ) k=1 P (yk |x? , xk , Dk?1 ) P (x? |Dt ) = R Qt ? ? ? k=1 P (yk |x , xk , Dk?1 ) dx X P (x ) Pt ?1 Qt ? ? Z(xk , Dk?1 )?1 exp k=1 ?k ? hk (x ) ? ?k?1 ? hk?1 (x ) Z0 = R Pt ?1 Qtk=1 ?1 dx? ?k ? hk (x? ) ? ?k?1 ? hk?1 (x? ) Z0 k=1 Z(xk , Dk?1 ) X exp  k=1 ? exp ?t ? ht (x )  = R . (10) exp ?t ? ht (x? ) dx? X Thus, the particular choice of the likelihood function guarantees an analytically compact posterior expression. In general, the normalizing constant in (10) is intractable, which is why the expression is only practical for relative comparisons of test locations. Substituting the precision ?t and the mean function estimate ht yields P    P  K(xi , x? )yi + K0 (x? )y0 (x? ) i K(xi , xi ) P (x? |Dt ) ? exp ? ? ? + t ? P P ? i P . ? ? i j K(xi , xj ) i K(xi , x ) + K0 (x ) 4 Experimental Results 4.1 Parameters. We have investigated the influence of the parameters on the resulting posterior probability distribution. We have used the Gaussian kernel o n 1 (11) K(x, x? ) = exp ? 2 (x ? x? )2 . 2? In this figure, 7 data points are shown, which were drawn as y ? N (f (x), 0.3), where the mean function is f (x) = cos(2x + 23 ?) + sin(6x + 32 ?). (12) The prior precision K0 and the prior estimate of the mean function y0 were chosen as K0 (x) = 1 and y0 (x) = ? 1 (x ? ?0 )2 , 2?02 (13) where the latter corresponds to the logarithm of a Gaussian with mean ?0 = 1.5 and variance ?02 = 5. This prior favours the region close to ?. Figure 4 shows how the choice of the precision scale ? and the kernel width ? affect the shape of the posterior probability density. Here, it is seen that a larger kernel width ? increases the region of influence of a particular data point, and hence produce smoother posterior densities. The precision scale parameter ? controls the precision per distinct data point: higher values for ? lead to sharper updates of the posterior distribution. 4.2 Application to Optimization. The main motivation behind our proposed model is its application to the optimization of noisy functions. Because of the noise, choosing new test locations requires carefully balancing explorative and exploitative tests?a problem well known in the multiarmed bandits literature. To overcome this, one can apply the Bayesian control rule/Thompson sampling [12, 13]: the next test location is chosen by sampling it from the posterior. We have carried out two experiments, described in the following. 6 a) b) c) Figure 4: Effect of the change of parameters on the posterior density over the location of the maximizing test point. Panel (a) shows the 7 data points drawn from the noisy function (solid curve). Panel (b) shows the effect of increasing the width of the kernel (here, Gaussian). The solid and dotted curves correspond to ? = 0.01 and ? = 0.1 respectively. 
[Figure 4: Effect of the change of parameters on the posterior density over the location of the maximizing test point. Panel (a) shows the 7 data points drawn from the noisy function (solid curve). Panel (b) shows the effect of increasing the width of the (here, Gaussian) kernel; the solid and dotted curves correspond to $\sigma = 0.01$ and $\sigma = 0.1$, respectively. Panel (c) shows the effect of diminishing the precision on the posterior; the solid and shaded curves correspond to $\alpha = 0.2$ and $\alpha = 0.1$, respectively.]

[Figure 5: Observation values obtained by sampling from the posterior over the maximizing argument (left panel) and according to GP-UCB (right panel); y-axis: $y_{obs}$, x-axis: number of samples. The solid blue curve corresponds to the time-averaged function value, averaged over ten runs. The gray area corresponds to the error bounds (one standard deviation), and the dashed red curve shows the time-average of a single run.]

Comparison to Gaussian Process UCB. We have used the model to optimize the same function (12) as in our preliminary tests, but with higher additive noise equal to one. This is done by sampling the next test point $x_t$ directly from the posterior density over the optimum location, $P(x^*|D_t)$, and then using the resulting pair $(x_t, y_t)$ to recursively update the model. Essentially, this procedure corresponds to the Bayesian control rule / Thompson sampling. We compared our method against a Gaussian process optimization method using an upper-confidence-bound (UCB) criterion [10]. The parameters for GP-UCB were set to the following values: observation noise $\sigma_n = 0.3$ and length scale $\ell = 0.3$. For the constant that trades off exploration and exploitation, we followed Theorem 1 in [10], which states $\beta_t = 2\log(|D|\, t^2 \pi^2 / 6\delta)$, with $\delta = 0.5$. We implemented our proposed method with a Gaussian kernel as in (11) with width $\sigma^2 = 0.05$. The prior sufficient statistics are exactly as in (13). The precision parameter was set to $\alpha = 0.3$.

Simulation results over ten independent runs are summarized in Figure 5. We show the time-averaged observation values $y$ of the noisy function evaluated at test locations sampled from the posterior. Qualitatively, both methods show very similar convergence (on average); however, our method converges faster and with a slightly higher variance.

High-Dimensional Problem. To test our proposed method on a challenging problem, we have designed a non-convex, high-dimensional noisy function with multiple local optima. This Noisy Ripples function is defined as
$$f(x) = -\tfrac{1}{1000}\|x - \mu\|^2 + \cos\bigl(\tfrac{3}{2}\pi \|x - \mu\|\bigr),$$
where $\mu \in \mathcal{X}$ is the location of the global maximum, and where observations have additive Gaussian noise with zero mean and variance 0.1. The advantage of this function is that it generalizes well to any number of dimensions of the domain. Figure 6a illustrates the function for the 2-dimensional input domain. This function is difficult to optimize because it requires averaging the noisy observations and smoothing the ridged landscape in order to detect the underlying quadratic form.

We optimized the 50-dimensional version of this function using a Metropolis-Hastings scheme to sample the next test locations from the posterior over the maximizing argument. The Markov chain was started at $[20, 20, \dots, 20]^T$, executing 120 isotropic Gaussian steps of variance 0.07 before the point was used as an actual test location. For the arg-max prior, we used a Gaussian kernel with lengthscale $l = 2$, precision factor $\alpha = 1.5$, prior precision $K_0(x^*) = 1$, and prior mean estimate $y_0(x^*) = -\tfrac{2}{1000}\|x + 5\|^2$. The goal $\mu$ was located at the origin.
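A sketch of this high-dimensional experiment is given below (our reconstruction; the ripple frequency and the prior-mean constant follow our reading of the garbled source, and the kernel matrix is recomputed at every call purely for clarity; in practice one would cache it).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
mu_true = np.zeros(d)                                     # global maximum at the origin

def noisy_ripples(x):
    r = np.linalg.norm(x - mu_true)
    return -r**2 / 1000.0 + np.cos(1.5 * np.pi * r) + np.sqrt(0.1) * rng.standard_normal()

ell2, alpha, beta = 2.0**2, 1.5, 1.0                      # lengthscale 2, precision 1.5
y0 = lambda x: -(2.0 / 1000.0) * np.linalg.norm(x + 5.0)**2

def log_post(xs, X, Y):
    """Unnormalized log posterior over the maximizer, Eq. (10)."""
    if len(X) == 0:
        return alpha * beta * y0(xs)
    k = np.exp(-0.5 * np.sum((X - xs)**2, axis=1) / ell2)
    Kxx = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :])**2, axis=2) / ell2)
    t_eff = len(X) * np.trace(Kxx) / Kxx.sum()
    h = (k @ Y + y0(xs)) / (k.sum() + 1.0)                # K0 = 1
    return alpha * (beta + t_eff) * h

X, Y = np.empty((0, d)), np.empty(0)
x = np.full(d, 20.0)                                      # chain start
for t in range(150):
    lp = log_post(x, X, Y)
    for _ in range(120):                                  # MH burn: 120 steps, var 0.07
        prop = x + np.sqrt(0.07) * rng.standard_normal(d)
        lp_prop = log_post(prop, X, Y)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
    X = np.vstack([X, x]); Y = np.append(Y, noisy_ripples(x))
print("final distance to optimum:", np.linalg.norm(x))
```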
[Figure 6: a) The Noisy Ripples objective function in 2 dimensions. b) The time-averaged value and the regret obtained by the optimization algorithm on a 50-dimensional version of the Noisy Ripples function (x-axis: samples).]

The result of one run is presented in Figure 6b. It can be seen that the optimizer manages to quickly (within roughly 100 samples) reach near-optimal performance, overcoming the difficulties associated with the high dimensionality of the input space and the numerous local optima. Crucial for this success was the choice of a kernel that is wide enough to accurately estimate the mean function. The authors are not aware of any method capable of solving a problem of similar characteristics.

5 Conclusions

Our goal was to design a probabilistic model over the maximizing argument that is algorithmically efficient and statistically robust even for large, high-dimensional noisy functions. To this end, we have derived a Bayesian model that directly captures the uncertainty over the maximizing argument, thereby bypassing having to model the underlying function space, a much harder problem. Our proposed model is computationally very efficient when compared to Gaussian-process-based models (which have cubic time complexity) or models based on upper confidence bounds (which require finding the input maximizing the bound, a generally intractable operation). In our model, evaluating the posterior up to a constant factor scales quadratically with the size of the data.

In practice, we have found that one of the main difficulties associated with our proposed method is the choice of the parameters. As in any kernel-based estimation method, choosing the appropriate kernel bandwidth can significantly change the estimate and affect the performance of optimizers that rely on the model. There is no clear rule on how to choose a good bandwidth. In future research, it will be interesting to investigate the theoretical properties of the proposed nonparametric model, such as the convergence speed of the estimator and its relation to the extensive literature on active learning and bandits.

References

[1] E. Brochu, V. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. Technical Report TR-2009-023, University of British Columbia, Department of Computer Science, 2009.
[2] K. Rawlik, M. Toussaint, and S. Vijayakumar. Approximate inference and stochastic optimal control. arXiv:1009.3958, 2010.
[3] A. Shapiro. Probabilistic Constrained Optimization: Methodology and Applications, chapter Statistical Inference of Stochastic Optimization Problems, pages 282-304. Kluwer Academic Publishers, 2000.
[4] H. J. Kappen, V. Gómez, and M. Opper. Optimal control as a graphical model inference problem. Machine Learning, 87(2):159-182, 2012.
[5] H. J. Kushner and G. G. Yin. Stochastic Approximation Algorithms and Applications. Springer-Verlag, 1997.
[6] J. Mockus. Application of Bayesian approach to numerical methods of global and stochastic optimization. Journal of Global Optimization, 4(4):347-365, 1994.
[7] D. Lizotte. Practical Bayesian Optimization. PhD thesis, University of Alberta, 2008.
[8] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455-492, 1998.
[9] M. A. Osborne, R. Garnett, and S. J. Roberts. Gaussian processes for global optimization.
In 3rd International Conference on Learning and Intelligent Optimization (LION 3), 2009.
[10] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: no regret and experimental design. In International Conference on Machine Learning, 2010.
[11] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, second edition, 2009.
[12] P. A. Ortega and D. A. Braun. A minimum relative entropy principle for learning and acting. Journal of Artificial Intelligence Research, 38:475-511, 2010.
[13] B. C. May and D. S. Leslie. Simulation studies in optimistic Bayesian sampling in contextual-bandit problems. Technical Report 11:02, Statistics Group, Department of Mathematics, University of Bristol, 2011.
Sparse Prediction with the k-Support Norm

Andreas Argyriou (École Centrale Paris) [email protected]
Rina Foygel (Department of Statistics, Stanford University) [email protected]
Nathan Srebro (Toyota Technological Institute at Chicago) [email protected]

Abstract

We derive a novel norm that corresponds to the tightest convex relaxation of sparsity combined with an $\ell_2$ penalty. We show that this new k-support norm provides a tighter relaxation than the elastic net and can thus be advantageous in sparse prediction problems. We also bound the looseness of the elastic net, thus shedding new light on it and providing justification for its use.

1 Introduction

Regularizing with the $\ell_1$ norm, when we expect a sparse solution to a regression problem, is often justified by $\|w\|_1$ being the "convex envelope" of $\|w\|_0$ (the number of non-zero coordinates of a vector $w \in \mathbb{R}^d$). That is, $\|w\|_1$ is the tightest convex lower bound on $\|w\|_0$. But we must be careful with this statement: for sparse vectors with large entries, $\|w\|_0$ can be small while $\|w\|_1$ is large. In order to discuss convex lower bounds on $\|w\|_0$, we must impose some scale constraint. A more accurate statement is that $\|w\|_1 \le \|w\|_\infty \|w\|_0$, and so, when the magnitudes of the entries of $w$ are bounded by 1, then $\|w\|_1 \le \|w\|_0$, and indeed it is the largest such convex lower bound. Viewed as a convex outer relaxation,
$$S_k^{(\infty)} := \{w \mid \|w\|_0 \le k,\ \|w\|_\infty \le 1\} \subseteq \{w \mid \|w\|_1 \le k\}.$$
Intersecting the right-hand side with the $\ell_\infty$ unit ball, we get the tightest convex outer bound (convex hull) of $S_k^{(\infty)}$:
$$\{w \mid \|w\|_1 \le k,\ \|w\|_\infty \le 1\} = \mathrm{conv}(S_k^{(\infty)}).$$
However, in our view, this relationship between $\|w\|_1$ and $\|w\|_0$ yields disappointing learning guarantees and does not appropriately capture the success of the $\ell_1$ norm as a surrogate for sparsity. In particular, the sample complexity (which we define as the number of observations needed in order to ensure expected prediction error no more than $\epsilon$ worse than that of the best $k$-sparse predictor, for an arbitrary constant $\epsilon$; that is, we suppress the dependence on $\epsilon$ and focus on the dependence on the sparsity $k$ and dimensionality $d$) of learning a linear predictor with $k$ non-zero entries by empirical risk minimization inside this class (an NP-hard optimization problem) scales as $O(k \log d)$, but relaxing to the constraint $\|w\|_1 \le k$ yields a sample complexity which scales as $O(k^2 \log d)$, because the sample complexity of $\ell_1$-regularized learning scales quadratically with the $\ell_1$ norm [11, 20].

Perhaps a better reason for the $\ell_1$ norm being a good surrogate for sparsity is that not only do we expect the magnitude of each entry of $w$ to be bounded, but we further expect $\|w\|_2$ to be small. In a regression setting, with a vector of features $x$, this can be justified when $E[(x^\top w)^2]$ is bounded (a reasonable assumption) and the features are not too correlated; see, e.g., [15]. More broadly, especially in the presence of correlations, we might require this as a modeling assumption to aid in robustness and generalization. In any case, by Cauchy-Schwarz applied to the non-zero entries, we have $\|w\|_1 \le \sqrt{\|w\|_0}\, \|w\|_2$, and so, if we are interested in predictors with bounded $\ell_2$ norm, we can motivate the $\ell_1$ norm through the following relaxation of sparsity, where the scale is now set by the $\ell_2$ norm:
$$\{w \mid \|w\|_0 \le k,\ \|w\|_2 \le B\} \subseteq \{w \mid \|w\|_1 \le B\sqrt{k}\}.$$
The sample complexity when using this relaxation now scales as $O(k \log d)$ (more precisely, $O(B^2 k \log d)$, where the dependence on $B^2$ is to be expected; note that if the individual features are bounded, the sample complexity when using only $\|w\|_2 \le B$, without a sparsity or $\ell_1$ constraint, scales as $O(B^2 d)$, so even after identifying the correct support we still need a sample complexity that scales with $B^2$).

Sparse + $\ell_2$ constraint. Our starting point is then that of combining sparsity and $\ell_2$ regularization, and learning a sparse predictor with small $\ell_2$ norm. We are thus interested in classes of the form
$$S_k^{(2)} := \{w \mid \|w\|_0 \le k,\ \|w\|_2 \le 1\}.$$
As discussed above, the class $\{\|w\|_1 \le \sqrt{k}\}$ (corresponding to the standard Lasso) provides a convex relaxation of $S_k^{(2)}$. But clearly we can get a tighter relaxation by keeping the $\ell_2$ constraint:
$$\mathrm{conv}(S_k^{(2)}) \subseteq \{w \mid \|w\|_1 \le \sqrt{k},\ \|w\|_2 \le 1\} \subsetneq \{w \mid \|w\|_1 \le \sqrt{k}\}. \qquad (1)$$
Constraining (or equivalently, penalizing) both the $\ell_1$ and $\ell_2$ norms, as in (1), is known as the "elastic net" [5, 21] and has indeed been advocated as a better alternative to the Lasso. In this paper, we ask whether the elastic net is the tightest convex relaxation to sparsity plus $\ell_2$ (that is, to $S_k^{(2)}$), or whether a tighter, and better, convex relaxation is possible.

A new norm. We consider the convex hull (tightest convex outer bound) of $S_k^{(2)}$,
$$C_k := \mathrm{conv}(S_k^{(2)}) = \mathrm{conv}\{w \mid \|w\|_0 \le k,\ \|w\|_2 \le 1\}. \qquad (2)$$
We study the gauge function associated with this convex set, that is, the norm whose unit ball is given by (2), which we call the k-support norm. We show that, for $k > 1$, this is indeed a tighter convex relaxation than the elastic net (that is, both inclusions in (1) are in fact strict), and it is therefore a better convex constraint than the elastic net when seeking a sparse, low-$\ell_2$-norm linear predictor. We thus advocate using it as a replacement for the elastic net. However, we also show that the gap between the elastic net and the k-support norm is at most a factor of $\sqrt{2}$, corresponding to a factor-of-two difference in the sample complexity. Thus, our work can also be interpreted as justifying the use of the elastic net, viewing it as a fairly good approximation to the tightest possible convex relaxation of sparsity intersected with an $\ell_2$ constraint. Still, even a factor of two should not necessarily be ignored and, as we show in our experiments, using the tighter k-support norm can indeed be beneficial.

To better understand the k-support norm, we show in Section 2 that it can also be described as the group lasso with overlaps norm [10] corresponding to all $\binom{d}{k}$ subsets of $k$ features. Despite the exponential number of groups in this description, we show that the k-support norm can be calculated efficiently in time $O(d \log d)$, and that its dual is given simply by the $\ell_2$ norm of the $k$ largest entries. We also provide efficient first-order optimization algorithms for learning with the k-support norm.

Related Work. In many learning problems of interest, the Lasso has been observed to shrink too many of the variables of $w$ to zero. In particular, in many applications, when a group of variables is highly correlated, the Lasso may prefer a sparse solution, but we might gain more predictive accuracy by including all the correlated variables in our model. These drawbacks have recently motivated the use of various other regularization methods, such as the elastic net [21], which penalizes the regression coefficients $w$ with a combination of $\ell_1$ and $\ell_2$ norms:
$$\min\Bigl\{\tfrac{1}{2}\|Xw - y\|^2 + \lambda_1 \|w\|_1 + \lambda_2 \|w\|_2^2 \;:\; w \in \mathbb{R}^d\Bigr\}, \qquad (3)$$
where, for a sample of size $n$, $y \in \mathbb{R}^n$ is the vector of response values, and $X \in \mathbb{R}^{n \times d}$ is a matrix with column $j$ containing the values of feature $j$.
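For readers who want to experiment, the elastic net (3) is available in standard packages. The sketch below uses scikit-learn, whose ElasticNet minimizes a $\frac{1}{2n}$-scaled version of (3), so the mapping to $\lambda_1, \lambda_2$ (shown in the comments) holds up to that rescaling; the data here are synthetic and illustrative.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# sklearn's ElasticNet minimizes
#   (1/(2n))||y - Xw||^2 + alpha*l1_ratio*||w||_1 + 0.5*alpha*(1 - l1_ratio)*||w||_2^2,
# so in the notation of Eq. (3): lambda1 = n*alpha*l1_ratio and
# lambda2 = 0.5*n*alpha*(1 - l1_ratio).
rng = np.random.default_rng(0)
n, d, k = 50, 200, 5
X = rng.standard_normal((n, d))
w_true = np.zeros(d); w_true[:k] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(n)

model = ElasticNet(alpha=0.05, l1_ratio=0.7).fit(X, y)
print("nonzeros:", np.sum(model.coef_ != 0))
```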
The elastic net can be viewed as a trade-off between $\ell_1$ regularization (the Lasso) and $\ell_2$ regularization (Ridge regression [9]), depending on the relative values of $\lambda_1$ and $\lambda_2$. In particular, when $\lambda_2 = 0$, (3) is equivalent to the Lasso. This method, and the other methods discussed below, have been observed to significantly outperform Lasso in many real applications.

The pairwise elastic net (PEN) [13] is a penalty function that accounts for similarity among features:
$$\|w\|_{PEN}^R = \|w\|_2^2 + \|w\|_1^2 - |w|^\top R\, |w|,$$
where $R \in [0,1]^{p \times p}$ is a matrix with $R_{jk}$ measuring similarity between features $X_j$ and $X_k$. The trace Lasso [6] is a second method proposed to handle correlations within $X$, defined by
$$\|w\|_{\text{trace}}^X = \|X \operatorname{diag}(w)\|_*,$$
where $\|\cdot\|_*$ denotes the matrix trace norm (the sum of the singular values) and promotes a low-rank solution. If the features are orthogonal, then both the PEN and the trace Lasso are equivalent to the Lasso. If the features are all identical, then both penalties are equivalent to Ridge regression (penalizing $\|w\|_2$). Another existing penalty is OSCAR [3], given by
$$\|w\|_{OSCAR}^c = \|w\|_1 + c \sum_{j < k} \max\{|w_j|, |w_k|\}.$$
Like the elastic net, each one of these three methods also "prefers" averaging similar features over selecting a single feature.

2 The k-Support Norm

One argument for the elastic net has been the flexibility of tuning the cardinality $k$ of the regression vector $w$. Thus, when groups of correlated variables are present, a larger $k$ may be learned, which corresponds to a higher $\lambda_2$ in (3). A more natural way to obtain such an effect of tuning the cardinality is to consider the convex hull of cardinality-$k$ vectors,
$$C_k = \operatorname{conv}(S_k^{(2)}) = \operatorname{conv}\{w \in \mathbb{R}^d : \|w\|_0 \le k,\ \|w\|_2 \le 1\}.$$
Clearly the sets $C_k$ are nested, and $C_1$ and $C_d$ are the unit balls for the $\ell_1$ and $\ell_2$ norms, respectively. Consequently we define the k-support norm as the norm whose unit ball equals $C_k$ (the gauge function associated with the $C_k$ ball). [Footnote 3: The gauge function $\gamma_{C_k} : \mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ is defined as $\gamma_{C_k}(x) = \inf\{\lambda \in \mathbb{R}_+ : x \in \lambda C_k\}$.] An equivalent definition is the following variational formula:

Definition 2.1. Let $k \in \{1, \dots, d\}$. The k-support norm $\|\cdot\|_k^{sp}$ is defined, for every $w \in \mathbb{R}^d$, as
$$\|w\|_k^{sp} := \min\left\{\sum_{I \in \mathcal{G}_k} \|v_I\|_2 \;:\; \operatorname{supp}(v_I) \subseteq I,\ \sum_{I \in \mathcal{G}_k} v_I = w\right\},$$
where $\mathcal{G}_k$ denotes the set of all subsets of $\{1, \dots, d\}$ of cardinality at most $k$.

The equivalence is immediate by rewriting $v_I = \mu_I z_I$ in the above definition, where $\mu_I \ge 0$, $z_I \in C_k$ for all $I \in \mathcal{G}_k$, and $\sum_{I \in \mathcal{G}_k} \mu_I = 1$. In addition, this immediately implies that $\|\cdot\|_k^{sp}$ is indeed a norm. In fact, the k-support norm is equivalent to the norm used by the group lasso with overlaps [10], when the set of overlapping groups is chosen to be $\mathcal{G}_k$ (however, the group lasso has traditionally been used for applications with some specific known group structure, unlike the case considered here). Although the variational Definition 2.1 is not amenable to computation because of the exponential growth of the set of groups $\mathcal{G}_k$, the k-support norm is computationally very tractable, with an $O(d \log d)$ algorithm described in Section 2.2.

As already mentioned, $\|\cdot\|_1^{sp} = \|\cdot\|_1$ and $\|\cdot\|_d^{sp} = \|\cdot\|_2$. The unit ball of this new norm in $\mathbb{R}^3$ for $k = 2$ is depicted in Figure 1. We immediately notice several differences between this unit ball and the elastic net unit ball. For example, at points with cardinality $k$ and $\ell_2$ norm equal to 1, the k-support norm is not differentiable, but unlike the $\ell_1$ or elastic-net norm, it is differentiable at points with cardinality less than $k$. Thus, the k-support norm is less "biased" towards sparse vectors than the elastic net and the $\ell_1$ norm.
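For concreteness, here is a small numpy sketch of the three related penalties above; the function names are ours, and each formula follows the definitions given in the text.

```python
import numpy as np

def pen_penalty(w, R):
    # pairwise elastic net: ||w||_2^2 + ||w||_1^2 - |w|^T R |w|
    a = np.abs(w)
    return w @ w + a.sum() ** 2 - a @ R @ a

def trace_lasso_penalty(w, X):
    # trace norm (sum of singular values) of X diag(w);
    # X * w scales column j of X by w_j, i.e. X @ diag(w)
    return np.linalg.svd(X * w, compute_uv=False).sum()

def oscar_penalty(w, c):
    # ||w||_1 + c * sum over pairs j < k of max(|w_j|, |w_k|)
    a = np.abs(w)
    pair_max = np.maximum.outer(a, a)
    return a.sum() + c * pair_max[np.triu_indices(len(a), k=1)].sum()
```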
[Figure 1: Unit ball of the 2-support norm (left) and of the elastic net (right) on $\mathbb{R}^3$.]

2.1 The Dual Norm

It is interesting and useful to compute the dual of the k-support norm. For $w \in \mathbb{R}^d$, denote by $|w|$ the vector of absolute values, and by $w_i^\downarrow$ the $i$-th largest element of $w$ [2]. We have
$$\|u\|_k^{sp*} = \max\left\{\langle w, u \rangle : \|w\|_k^{sp} \le 1\right\} = \max\left\{\Big(\sum_{i \in I} u_i^2\Big)^{\frac12} : I \in \mathcal{G}_k\right\} = \Big(\sum_{i=1}^{k} (|u|_i^\downarrow)^2\Big)^{\frac12} =: \|u\|_{(k)}^{(2)}.$$
This is the $\ell_2$ norm of the largest $k$ entries in $u$, and is known as the 2-$k$ symmetric gauge norm [2]. Not surprisingly, this dual norm interpolates between the $\ell_2$ norm (when $k = d$ and all entries are taken) and the $\ell_\infty$ norm (when $k = 1$ and only the largest entry is taken). This parallels the interpolation of the k-support norm between the $\ell_1$ and $\ell_2$ norms.

2.2 Computation of the Norm

In this section, we derive an alternative formula for the k-support norm, which leads to computation of the value of the norm in $O(d \log d)$ steps.

Proposition 2.1. For every $w \in \mathbb{R}^d$,
$$\|w\|_k^{sp} = \left(\sum_{i=1}^{k-r-1} (|w|_i^\downarrow)^2 + \frac{1}{r+1}\Big(\sum_{i=k-r}^{d} |w|_i^\downarrow\Big)^2\right)^{\frac12},$$
where, letting $|w|_0^\downarrow$ denote $+\infty$, $r$ is the unique integer in $\{0, \dots, k-1\}$ satisfying
$$|w|_{k-r-1}^\downarrow > \frac{1}{r+1}\sum_{i=k-r}^{d} |w|_i^\downarrow \ge |w|_{k-r}^\downarrow. \quad (4)$$

This result shows that $\|\cdot\|_k^{sp}$ trades off between the $\ell_1$ and $\ell_2$ norms in a way that favors sparse vectors but allows for cardinality larger than $k$. It combines the uniform shrinkage of an $\ell_2$ penalty for the largest components with the sparse shrinkage of an $\ell_1$ penalty for the smallest components.

Proof of Proposition 2.1. We will use the inequality $\langle w, u \rangle \le \langle w^\downarrow, u^\downarrow \rangle$ [7]. We have
$$\frac12 (\|w\|_k^{sp})^2 = \max\left\{\langle u, w \rangle - \frac12 (\|u\|_{(k)}^{(2)})^2 : u \in \mathbb{R}^d\right\} = \max\left\{\sum_{i=1}^{d} \alpha_i |w|_i^\downarrow - \frac12 \sum_{i=1}^{k} \alpha_i^2 : \alpha_1 \ge \dots \ge \alpha_d \ge 0\right\}$$
$$= \max\left\{\sum_{i=1}^{k-1} \alpha_i |w|_i^\downarrow + \alpha_k \sum_{i=k}^{d} |w|_i^\downarrow - \frac12 \sum_{i=1}^{k} \alpha_i^2 : \alpha_1 \ge \dots \ge \alpha_k \ge 0\right\}.$$
Let $A_r := \sum_{i=k-r}^{d} |w|_i^\downarrow$ for $r \in \{0, \dots, k-1\}$. If $A_0 < |w|_{k-1}^\downarrow$ then the solution $\alpha$ is given by $\alpha_i = |w|_i^\downarrow$ for $i = 1, \dots, k-1$ and $\alpha_i = A_0$ for $i = k, \dots, d$. If $A_0 \ge |w|_{k-1}^\downarrow$ then the optimal $\alpha_k, \alpha_{k-1}$ lie between $|w|_{k-1}^\downarrow$ and $A_0$, and have to be equal. So, the maximization becomes
$$\max\left\{\sum_{i=1}^{k-2} \alpha_i |w|_i^\downarrow - \frac12 \sum_{i=1}^{k-2} \alpha_i^2 + A_1 \alpha_{k-1} - \alpha_{k-1}^2 : \alpha_1 \ge \dots \ge \alpha_{k-1} \ge 0\right\}.$$
If $A_0 \ge |w|_{k-1}^\downarrow$ and $|w|_{k-2}^\downarrow > \frac{A_1}{2}$ then the solution is $\alpha_i = |w|_i^\downarrow$ for $i = 1, \dots, k-2$ and $\alpha_i = \frac{A_1}{2}$ for $i = k-1, \dots, d$. Otherwise we proceed as before and continue this process. At stage $r$ the process terminates if $A_0 \ge |w|_{k-1}^\downarrow, \dots, \frac{A_{r-1}}{r} \ge |w|_{k-r}^\downarrow$ and $\frac{A_r}{r+1} < |w|_{k-r-1}^\downarrow$, and all but the last two inequalities are redundant. Hence the condition can be rewritten as (4). One optimal solution is $\alpha_i = |w|_i^\downarrow$ for $i = 1, \dots, k-r-1$ and $\alpha_i = \frac{A_r}{r+1}$ for $i = k-r, \dots, d$. This proves the claim.
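The following sketch evaluates the k-support norm via Proposition 2.1, and its dual via the 2-$k$ symmetric gauge formula. It uses a simple linear scan over $r$ rather than the $O(d \log d)$ search mentioned above; that simplification, and the function names, are ours.

```python
import numpy as np

def k_support_norm(w, k):
    # Proposition 2.1: find the unique r in {0, ..., k-1} satisfying (4)
    z = np.sort(np.abs(w))[::-1]            # |w| in decreasing order
    tail = np.cumsum(z[::-1])[::-1]         # tail[i] = z[i] + ... + z[-1]
    for r in range(k):
        A_r = tail[k - r - 1]               # sum of the d - (k-r) + 1 smallest terms
        mean = A_r / (r + 1)
        left = np.inf if k - r - 1 == 0 else z[k - r - 2]   # |w|_0 = +inf convention
        if left > mean >= z[k - r - 1]:     # condition (4)
            return np.sqrt(np.sum(z[: k - r - 1] ** 2) + A_r ** 2 / (r + 1))
    raise RuntimeError("condition (4) should hold for exactly one r")

def k_support_dual_norm(u, k):
    # dual norm: l2 norm of the k largest entries of |u|
    return np.sqrt(np.sum(np.sort(np.abs(u))[::-1][:k] ** 2))
```

As a quick check, `k_support_norm(w, 1)` recovers the $\ell_1$ norm and `k_support_norm(w, len(w))` the $\ell_2$ norm, matching the interpolation property noted above.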
2.3 Learning with the k-support norm

We thus propose using learning rules with k-support norm regularization. These are appropriate when we would like to learn a sparse predictor that also has low $\ell_2$ norm, and are especially relevant when features might be correlated (that is, in almost all learning tasks) but the correlation structure is not known in advance. E.g., for squared error regression problems we have:
$$\min\left\{\frac12 \|Xw - y\|^2 + \frac{\lambda}{2} (\|w\|_k^{sp})^2 : w \in \mathbb{R}^d\right\} \quad (5)$$
with $\lambda > 0$ a regularization parameter and $k \in \{1, \dots, d\}$ also a parameter to be tuned. As typical in regularization-based methods, both $\lambda$ and $k$ can be selected by cross validation [8]. Despite the relationship to $S_k^{(2)}$, the parameter $k$ does not necessarily correspond to the sparsity of the actual minimizer of (5), and should be chosen via cross-validation rather than set to the desired sparsity.

3 Relation to the Elastic Net

Recall that the elastic net with penalty parameters $\lambda_1$ and $\lambda_2$ selects a vector of coefficients given by
$$\arg\min\left\{\frac12 \|Xw - y\|^2 + \lambda_1 \|w\|_1 + \lambda_2 \|w\|_2^2\right\}. \quad (6)$$
For ease of comparison with the k-support norm, we first show that the set of optimal solutions for the elastic net, when the parameters are varied, is the same as for the norm
$$\|w\|_k^{el} := \max\left\{\|w\|_2,\ \|w\|_1 / \sqrt{k}\right\},$$
when $k \in [1, d]$, corresponding to the unit ball in (1) (note that $k$ is not necessarily an integer). To see this, let $\hat{w}$ be a solution to (6), and let $k := (\|\hat{w}\|_1 / \|\hat{w}\|_2)^2 \in [1, d]$. Now for any $w \ne \hat{w}$, if $\|w\|_k^{el} \le \|\hat{w}\|_k^{el}$, then $\|w\|_p \le \|\hat{w}\|_p$ for $p = 1, 2$. Since $\hat{w}$ is a solution to (6), therefore $\|Xw - y\|_2^2 \ge \|X\hat{w} - y\|_2^2$. This proves that, for some constraint parameter $B$,
$$\hat{w} = \arg\min\left\{\frac12 \|Xw - y\|_2^2 : \|w\|_k^{el} \le B\right\}.$$
Like the k-support norm, the elastic net interpolates between the $\ell_1$ and $\ell_2$ norms. In fact, when $k$ is an integer, any $k$-sparse unit vector $w \in \mathbb{R}^d$ must lie in the unit ball of $\|\cdot\|_k^{el}$. Since the k-support norm gives the convex hull of all $k$-sparse unit vectors, this immediately implies
$$\|w\|_k^{el} \le \|w\|_k^{sp} \quad \forall\, w \in \mathbb{R}^d.$$
The two norms are not equal, however. The difference between the two is illustrated in Figure 1, where we see that the k-support norm is more "rounded". To see an example where the two norms are not equal, we set $d = 1 + k^2$ for some large $k$, and let $w = (k^{1.5}, 1, 1, \dots, 1)^\top \in \mathbb{R}^d$. Then
$$\|w\|_k^{el} = \max\left\{\sqrt{k^3 + k^2},\ \frac{k^{1.5} + k^2}{\sqrt{k}}\right\} = k^{1.5}\left(1 + \frac{1}{\sqrt{k}}\right).$$
Taking $u = \left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2k}}, \frac{1}{\sqrt{2k}}, \dots, \frac{1}{\sqrt{2k}}\right)^\top$, we have $\|u\|_{(k)}^{(2)} < 1$, and recalling this norm is dual to the k-support norm:
$$\|w\|_k^{sp} \ge \langle w, u \rangle = \frac{k^{1.5}}{\sqrt{2}} + \frac{k^2}{\sqrt{2k}} = \sqrt{2} \cdot k^{1.5}.$$
In this example, we see that the two norms can differ by as much as a factor of $\sqrt{2}$. We now show that this is actually the most by which they can differ.

Proposition 3.1. $\|\cdot\|_k^{el} \le \|\cdot\|_k^{sp} < \sqrt{2}\, \|\cdot\|_k^{el}$.

Proof. We show that these bounds hold in the duals of the two norms. First, since $\|\cdot\|_k^{el}$ is a maximum over the $\ell_1$ and $\ell_2$ norms, its dual is given by
$$\|u\|_k^{el*} := \inf\left\{\|a\|_2 + \sqrt{k}\, \|u - a\|_\infty : a \in \mathbb{R}^d\right\}.$$
Now take any $u \in \mathbb{R}^d$. First we show $\|u\|_{(k)}^{(2)} \le \|u\|_k^{el*}$. Without loss of generality, we take $u_1 \ge \dots \ge u_d \ge 0$. For any $a \in \mathbb{R}^d$,
$$\|u\|_{(k)}^{(2)} = \|u_{1:k}\|_2 \le \|a_{1:k}\|_2 + \|u_{1:k} - a_{1:k}\|_2 \le \|a\|_2 + \sqrt{k}\, \|u - a\|_\infty.$$
Finally, we show that $\|u\|_k^{el*} < \sqrt{2}\, \|u\|_{(k)}^{(2)}$. Let $a = (u_1 - u_{k+1}, \dots, u_k - u_{k+1}, 0, \dots, 0)^\top$. Then
$$\|a\|_2 + \sqrt{k}\, \|u - a\|_\infty = \sqrt{\sum_{i=1}^{k} (u_i - u_{k+1})^2} + \sqrt{k\, u_{k+1}^2} \le \sqrt{\sum_{i=1}^{k} (u_i^2 - u_{k+1}^2)} + \sqrt{k\, u_{k+1}^2} \le \sqrt{2}\left(\sum_{i=1}^{k} (u_i^2 - u_{k+1}^2) + k\, u_{k+1}^2\right)^{\frac12} = \sqrt{2}\, \|u\|_{(k)}^{(2)}.$$
Furthermore, this yields a strict inequality, because if $u_1 > u_{k+1}$, the next-to-last inequality is strict, while if $u_1 = \dots = u_{k+1}$, then the last inequality is strict.
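A quick numerical check of the gap construction above; the constants follow the example in the text, while the particular choice $k = 100$ is arbitrary:

```python
import numpy as np

k = 100                                   # arbitrary large k; d = 1 + k^2
w = np.concatenate([[k ** 1.5], np.ones(k * k)])
el = max(np.linalg.norm(w), np.abs(w).sum() / np.sqrt(k))    # ||w||^el_k
u = np.concatenate([[1 / np.sqrt(2)], np.full(k * k, 1 / np.sqrt(2 * k))])
dual_u = np.sqrt(np.sort(u ** 2)[::-1][:k].sum())            # ||u||^(2)_(k), must be < 1
lower_bound = w @ u                       # <w,u> <= ||w||^sp_k since dual_u < 1
print(dual_u < 1, lower_bound / el)       # ratio tends (slowly) to sqrt(2) as k grows
```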
4 Optimization

Solving the optimization problem (5) efficiently can be done with a first-order proximal algorithm. Proximal methods (see [1, 4, 14, 18, 19] and references therein) are used to solve composite problems of the form $\min\{f(x) + \omega(x) : x \in \mathbb{R}^d\}$, where the loss function $f(x)$ and the regularizer $\omega(x)$ are convex functions, and $f$ is smooth with an $L$-Lipschitz gradient. These methods require fast computation of the gradient $\nabla f$ and the proximity operator
$$\operatorname{prox}_\omega(x) := \arg\min\left\{\frac12 \|u - x\|^2 + \omega(u) : u \in \mathbb{R}^d\right\}.$$
To obtain a proximal method for k-support regularization, it suffices to compute the proximity map of $g = \frac{1}{2\beta}(\|\cdot\|_k^{sp})^2$, for any $\beta > 0$ (in particular, for problem (5), $\beta$ corresponds to $\frac{L}{\lambda}$). This computation can be done in $O(d(k + \log d))$ steps with Algorithm 1.

Algorithm 1 Computation of the proximity operator.
  Input: $v \in \mathbb{R}^d$
  Output: $q = \operatorname{prox}_{\frac{1}{2\beta}(\|\cdot\|_k^{sp})^2}(v)$
  Find $r \in \{0, \dots, k-1\}$, $\ell \in \{k, \dots, d\}$ such that
    $\frac{1}{\beta+1} z_{k-r-1} > \frac{T_{r,\ell}}{\ell - k + (\beta+1)r + \beta + 1} \ge \frac{1}{\beta+1} z_{k-r}$   (7)
    $z_\ell > \frac{T_{r,\ell}}{\ell - k + (\beta+1)r + \beta + 1} \ge z_{\ell+1}$   (8)
  where $z := |v|^\downarrow$, $z_0 := +\infty$, $z_{d+1} := -\infty$, $T_{r,\ell} := \sum_{i=k-r}^{\ell} z_i$
  $q_i \leftarrow \frac{\beta}{\beta+1} z_i$ if $i = 1, \dots, k-r-1$; $\quad q_i \leftarrow z_i - \frac{T_{r,\ell}}{\ell - k + (\beta+1)r + \beta + 1}$ if $i = k-r, \dots, \ell$; $\quad q_i \leftarrow 0$ if $i = \ell+1, \dots, d$
  Reorder and change signs of $q$ to conform with $v$

[Figure 2: Solutions learned for the synthetic data. Left to right: k-support, Lasso and elastic net.]

Proof of correctness of Algorithm 1. Since the k-support norm is sign- and permutation-invariant, $\operatorname{prox}_g(v)$ has the same ordering and signs as $v$. Hence, without loss of generality, we may assume that $v_1 \ge \dots \ge v_d \ge 0$ and require that $q_1 \ge \dots \ge q_d \ge 0$, which follows from inequality (7) and the fact that $z$ is ordered. Now, $q = \operatorname{prox}_g(v)$ is equivalent to $\beta z - \beta q = \beta v - \beta q \in \partial\big(\frac12 (\|\cdot\|_k^{sp})^2\big)(q)$. It suffices to show that, for $w = q$, $\beta z - \beta q$ is an optimal $\alpha$ in the proof of Proposition 2.1. Indeed, writing $D := \ell - k + (\beta+1)r + \beta + 1$, $A_r$ corresponds to
$$\sum_{i=k-r}^{\ell} q_i = \sum_{i=k-r}^{\ell} \left(z_i - \frac{T_{r,\ell}}{D}\right) = T_{r,\ell} - \frac{(\ell - k + r + 1)\, T_{r,\ell}}{D} = \frac{\beta (r+1)\, T_{r,\ell}}{D},$$
and (4) is equivalent to condition (7). For $i \le k-r-1$, we have $\beta z_i - \beta q_i = q_i$. For $k-r \le i \le \ell$, we have $\beta z_i - \beta q_i = \frac{1}{r+1} A_r$. For $i \ge \ell+1$, since $q_i = 0$, we only need $\beta z_i - \beta q_i \le \frac{1}{r+1} A_r$, which is true by (8).

We can now apply a standard accelerated proximal method, such as FISTA [1], to (5), at each iteration using the gradient of the loss and performing a prox step using Algorithm 1. The FISTA guarantee ensures us that, with appropriate step sizes, after $T$ such iterations, we have:
$$\frac12 \|Xw_T - y\|^2 + \frac{\lambda}{2} (\|w_T\|_k^{sp})^2 \le \frac12 \|Xw^* - y\|^2 + \frac{\lambda}{2} (\|w^*\|_k^{sp})^2 + \frac{2L \|w^* - w_1\|^2}{(T+1)^2}.$$
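Below is a sketch of Algorithm 1 in numpy. The brute-force search over $(r, \ell)$ is $O(dk)$ rather than the $O(d(k + \log d))$ search described above; that simplification, and the function name, are our own choices.

```python
import numpy as np

def prox_ksupport_sq(v, k, beta):
    # q = prox of g = (1/(2*beta)) * (k-support norm)^2, evaluated at v
    d = len(v)
    order = np.argsort(-np.abs(v))
    z = np.abs(v)[order]
    zpad = np.concatenate(([np.inf], z, [-np.inf]))   # z_0 = +inf, z_{d+1} = -inf
    cums = np.concatenate(([0.0], np.cumsum(z)))      # cums[i] = z_1 + ... + z_i
    for r in range(k):
        for l in range(k, d + 1):
            T = cums[l] - cums[k - r - 1]             # T_{r,l}
            D = l - k + (beta + 1) * r + beta + 1
            val = T / D
            ok7 = zpad[k - r - 1] / (beta + 1) > val >= zpad[k - r] / (beta + 1)
            ok8 = zpad[l] > val >= zpad[l + 1]
            if ok7 and ok8:
                q = np.zeros(d)
                q[: k - r - 1] = beta / (beta + 1) * z[: k - r - 1]
                q[k - r - 1 : l] = z[k - r - 1 : l] - val
                out = np.zeros(d)
                out[order] = q                        # undo the sort
                return np.sign(v) * out               # restore signs
    raise RuntimeError("conditions (7)-(8) should hold for some (r, l)")
```

For problem (5), one would call this inside a FISTA loop with $\beta = L/\lambda$, applying it to the gradient step $v = w - \nabla f(w)/L$.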
5 Empirical Comparisons

Our theoretical analysis indicates that the k-support norm and the elastic net differ by at most a factor of $\sqrt{2}$, corresponding to at most a factor of two difference in their sample complexities and generalization guarantees. We thus do not expect huge differences between their actual performances, but would still like to see whether the tighter relaxation of the k-support norm does yield some gains.

Synthetic Data. For the first simulation we follow [21, Sec. 5, example 4]. In this experimental protocol, the target (oracle) vector equals $w^* = (3, \dots, 3, 0, \dots, 0)$, with 15 entries equal to 3 followed by 25 zeros, and $y = (w^*)^\top x + N(0, 1)$. The input data $X$ were generated from a normal distribution such that components $1, \dots, 5$ have the same random mean $Z_1 \sim N(0,1)$, components $6, \dots, 10$ have mean $Z_2 \sim N(0,1)$ and components $11, \dots, 15$ have mean $Z_3 \sim N(0,1)$. A total of 50 data sets were created in this way, each containing 50 training points, 50 validation points and 350 test points. The goal is to achieve good prediction performance on the test data. We compared the k-support norm with Lasso and the elastic net. We considered the range $k = \{1, \dots, d\}$ for k-support norm regularization, $\lambda = 10^i$ with $i \in \{-15, \dots, 5\}$ for the regularization parameter of Lasso and k-support regularization, and the same range for the $\lambda_1, \lambda_2$ of the elastic net. For each method, the optimal set of parameters was selected based on mean squared error on the validation set. The error reported in Table 1 is the mean squared error with respect to the oracle $w^*$, namely $MSE = (\hat{w} - w^*)^\top V (\hat{w} - w^*)$, where $V$ is the population covariance matrix of $X_{test}$. To further illustrate the effect of the k-support norm, in Figure 2 we show the coefficients learned by each method, in absolute value. For each image, one row corresponds to the $w$ learned for one of the 50 data sets. Whereas all three methods distinguish the 15 relevant variables, the elastic net result varies less within these variables.

Table 1: Mean squared errors and classification accuracy for the synthetic data (median over 50 repetitions), SA heart data (median over 50 replications) and for the "20 newsgroups" data set. (SE = standard error)

Method      | Synthetic MSE (SE) | Heart MSE (SE) | Heart Accuracy (SE) | Newsgroups MSE | Newsgroups Accuracy
Lasso       | 0.2685 (0.02)      | 0.18 (0.005)   | 66.41 (0.53)        | 0.70           | 73.02
Elastic net | 0.2274 (0.02)      | 0.18 (0.005)   | 66.41 (0.53)        | 0.70           | 73.02
k-support   | 0.2143 (0.02)      | 0.18 (0.005)   | 66.41 (0.53)        | 0.69           | 73.40

South African Heart Data. This is a classification task which has been used in [8]. There are 9 variables and 462 examples, and the response is presence/absence of coronary heart disease. We normalized the data so that each predictor variable has zero mean and unit variance. We then split the data 50 times randomly into training, validation, and test sets of sizes 400, 30, and 32, respectively. For each method, parameters were selected using the validation data. In Table 1, we report the MSE and accuracy of each method on the test data. We observe that all three methods have identical performance.

20 Newsgroups. This is a binary classification version of 20 newsgroups created in [12] which can be found in the LIBSVM data repository. [Footnote 4: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/] The positive class consists of the 10 groups with names of form sci.*, comp.*, or misc.forsale and the negative class consists of the other 10 groups. To reduce the number of features, we removed the words which appear in fewer than 3 documents. We randomly split the data into a training, a validation and a test set of sizes 14000, 1000 and 4996, respectively. We report MSE and accuracy on the test data in Table 1. We found that k-support regularization gave improved prediction accuracy over both other methods. [Footnote 5: Regarding other sparse prediction methods, we did not manage to compare with OSCAR, due to memory limitations, or to PEN or trace Lasso, which do not have code available online.]
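For reference, here is a sketch of the synthetic protocol described above. Where the text only says the inputs are normal, unit variances are assumed, and the helper name is ours.

```python
import numpy as np

def make_dataset(n, rng):
    # target w* = (3,...,3, 0,...,0): 15 threes, 25 zeros (d = 40)
    d = 40
    w_star = np.concatenate([np.full(15, 3.0), np.zeros(25)])
    x = rng.standard_normal((n, d))
    Z = rng.standard_normal((n, 3))           # shared random means Z1, Z2, Z3
    for g in range(3):                        # feature groups 1-5, 6-10, 11-15
        x[:, 5 * g : 5 * (g + 1)] += Z[:, [g]]
    y = x @ w_star + rng.standard_normal(n)   # unit noise variance assumed
    return x, y, w_star

rng = np.random.default_rng(0)
x_tr, y_tr, w_star = make_dataset(50, rng)    # 50 train / 50 validation / 350 test
```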
6 Summary

We introduced the k-support norm as the tightest convex relaxation of sparsity plus $\ell_2$ regularization, and showed that it is tighter than the elastic net by exactly a factor of $\sqrt{2}$. In our view, this sheds light on the elastic net as a close approximation to this tightest possible convex relaxation, and motivates using the k-support norm when a tighter relaxation is sought. This is also demonstrated in our empirical results.

We note that the k-support norm has better prediction properties, but not necessarily better sparsity-inducing properties, as evident from its more rounded unit ball. It is well understood that there is often a tradeoff between sparsity and good prediction, and that even if the population optimal predictor is sparse, a denser predictor often yields better predictive performance [3, 10, 21]. For example, in the presence of correlated features, it is often beneficial to include several highly correlated features rather than a single representative feature. This is exactly the behavior encouraged by $\ell_2$ norm regularization, and the elastic net is already known to yield less sparse (but more predictive) solutions. The k-support norm goes a step further in this direction, often yielding solutions that are even less sparse (but more predictive) compared to the elastic net. Nevertheless, it is interesting to consider whether compressed sensing results, where $\ell_1$ regularization is of course central, can be refined by using the k-support norm, which might be able to handle more correlation structure within the set of features.

Acknowledgements

The construction showing that the gap between the elastic net and the k-support norm can be as large as $\sqrt{2}$ is due to joint work with Ohad Shamir. Rina Foygel was supported by NSF grant DMS-1203762.

References

[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal of Imaging Sciences, 2(1):183-202, 2009.
[2] R. Bhatia. Matrix Analysis. Graduate Texts in Mathematics. Springer, 1997.
[3] H.D. Bondell and B.J. Reich. Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Biometrics, 64(1):115-123, 2008.
[4] P.L. Combettes and V.R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling and Simulation, 4(4):1168-1200, 2006.
[5] C. De Mol, E. De Vito, and L. Rosasco. Elastic-net regularization in learning theory. Journal of Complexity, 25(2):201-230, 2009.
[6] E. Grave, G. R. Obozinski, and F. Bach. Trace lasso: a trace norm regularization for correlated designs. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, 2011.
[7] G. H. Hardy, J. E. Littlewood, and G. Pólya. Inequalities. Cambridge University Press, 1934.
[8] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer Series in Statistics, 2001.
[9] A.E. Hoerl and R.W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, pages 55-67, 1970.
[10] L. Jacob, G. Obozinski, and J.P. Vert. Group Lasso with overlap and graph Lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433-440. ACM, 2009.
[11] S.M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Advances in Neural Information Processing Systems, volume 22, 2008.
[12] S. S. Keerthi and D. DeCoste. A modified finite Newton method for fast solution of large scale linear SVMs. Journal of Machine Learning Research, 6:341-361, 2005.
[13] A. Lorbert, D. Eis, V. Kostina, D.M. Blei, and P.J. Ramadge. Exploiting covariate similarity in sparse regression via the pairwise elastic net. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010.
[14] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE, 2007.
[15] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low-noise and fast rates. In Advances in Neural Information Processing Systems 23, 2010.
[16] T. Suzuki and R. Tomioka. SpicyMKL: a fast algorithm for multiple kernel learning with thousands of kernels. Machine Learning, pages 1-32, 2011.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 58(1):267-288, 1996.
[18] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Preprint, 2008.
[19] P. Tseng. Approximation accuracy, gradient methods, and error bound for structured convex optimization. Mathematical Programming, 125(2):263-295, 2010.
[20] T. Zhang. Covering number bounds of certain regularized linear function classes. The Journal of Machine Learning Research, 2:527-550, 2002.
[21] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301-320, 2005.
A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation

Aaron J. Defazio, NICTA/Australian National University, Canberra, ACT, Australia, [email protected]
Tiberio S. Caetano, NICTA/ANU/University of Sydney, Canberra and Sydney, Australia, [email protected]

Abstract

A key problem in statistics and machine learning is the determination of network structure from data. We consider the case where the structure of the graph to be reconstructed is known to be scale-free. We show that in such cases it is natural to formulate structured sparsity inducing priors using submodular functions, and we use their Lovász extension to obtain a convex relaxation. For tractable classes such as Gaussian graphical models, this leads to a convex optimization problem that can be efficiently solved. We show that our method results in an improvement in the accuracy of reconstructed networks for synthetic data. We also show how our prior encourages scale-free reconstructions on a bioinformatics dataset.

Introduction

Structure learning for graphical models is a problem that arises in many contexts. In applied statistics, undirected graphical models can be used as a tool for understanding the underlying conditional independence relations between variables in a dataset. For example, in bioinformatics Gaussian graphical models are fitted to data resulting from micro-array experiments, where the fitted graph can be interpreted as a gene expression network [9]. In the context of Gaussian models, the structure learning problem is known as covariance selection [8]. The most common approach is the application of sparsity inducing regularization to the maximum likelihood objective. There is a significant body of literature, more than 30 papers by our count, on various methods of optimizing the L1 regularized covariance selection objective alone (see the recent review by Scheinberg and Ma [17]).

Recent research has seen the development of structured sparsity, where more complex prior knowledge about a sparsity pattern can be encoded. Examples include group sparsity [22], where parameters are linked so that they are regularized in groups. More complex sparsity patterns, such as region shape constraints in the case of pixels in an image [13], or hierarchical constraints [12], have also been explored.

In this paper, we study the problem of recovering the structure of a Gaussian graphical model under the assumption that the graph recovered should be scale-free. Many real-world networks are known a priori to be scale-free and therefore enforcing that knowledge through a prior seems a natural idea. Recent work has offered an approach to deal with this problem which results in a non-convex formulation [14]. Here we present a convex formulation. We show that scale-free networks can be induced by enforcing submodular priors on the network's degree distribution, and then using their convex envelope (the Lovász extension) as a convex relaxation [2]. The resulting relaxed prior has an interesting non-differentiable structure, which poses challenges to optimization. We outline a few options for solving the optimisation problem via proximal operators [3], in particular an efficient dual decomposition method. Experiments on both synthetic data produced by scale-free network models and a real bioinformatics dataset suggest that the convex relaxation is not weak: we can infer scale-free networks with similar or superior accuracy than in [14].
1 Combinatorial Objective

Consider an undirected graph with edge set $E$ and node set $V$, where $n$ is the number of nodes. We denote the degree of node $v$ as $d_E(v)$, and the complete graph with $n$ nodes as $K_n$. We are concerned with placing priors on the degree distributions of graphs such as $(V, E)$. By degree distribution, we mean the bag of degrees $\{d_E(v) \mid v \in V\}$.

A natural prior on degree distributions can be formed from the family of exponential random graphs [21]. Exponential random graph (ERG) models assign a probability to each $n$-node graph using an exponential family model. The probability of each graph depends on a small set of sufficient statistics; in our case we only consider the degree statistics. An ERG distribution with degree parametrization takes the form:
$$p(G = (V, E); h) = \frac{1}{Z(h)} \exp\left[-\sum_{v \in V} h(d_E(v))\right]. \quad (1)$$
The degree weighting function $h : \mathbb{Z}_+ \to \mathbb{R}$ encodes the preference for each particular degree. The function $Z$ is chosen so that the distribution is correctly normalized over $n$-node graphs. A number of choices for $h$ are reasonable; a geometric series $h(i) \propto 1 - \gamma^i$ with $\gamma \in (0, 1)$ has been proposed by Snijders et al. [20] and has been widely adopted. However, for encouraging scale-free graphs we require a more rapidly increasing sequence. It is instructive to observe that, under the strong assumption that each node's degree is independent of the rest, $h$ grows logarithmically. To see this, take a scale-free model with scale $\gamma$; the joint distribution takes the form:
$$p(G = (V, E); \epsilon, \gamma) = \frac{1}{Z(\epsilon, \gamma)} \prod_{v \in V} (d_E(v) + \epsilon)^{-\gamma},$$
where $\epsilon > 0$ is added to prevent infinite weights. Putting this into ERG form gives the weight sequence $h(i) = \gamma \log(i + \epsilon)$. We will consider this and other functions $h$ in Section 4.

We intend to perform maximum a posteriori (MAP) estimation of a graph structure using such a distribution as a prior, so the object of our attention is the negative log-posterior, which we denote $F$:
$$F(E) = \sum_{v \in V} h(d_E(v)) + \text{const}. \quad (2)$$
So far we have defined a function on edge sets only; however, in practice we want to optimize over a weighted graph, which is intractable when using discontinuous functions such as $F$. We now consider the properties of $h$ that lead to a convex relaxation of $F$.

2 Submodularity

A set function $F : 2^E \to \mathbb{R}$ on $E$ is a non-decreasing submodular function if for all $A \subseteq B \subseteq E$ and $x \in E \setminus B$ the following conditions hold:
$$F(A \cup \{x\}) - F(A) \ge F(B \cup \{x\}) - F(B) \quad \text{(submodularity)}$$
$$F(A) \le F(B) \quad \text{(non-decreasing)}$$
The first condition can be interpreted as a diminishing returns condition: adding $x$ to a set $A$ increases $F(A)$ by more than adding it to a larger set $B$, if $B$ contains $A$. We now consider a set of conditions that can be placed on $h$ so that $F$ is submodular.

Proposition 1. Denote $h$ as tractable if $h$ is non-decreasing, concave and $h(0) = 0$. For tractable $h$, $F$ is a non-decreasing submodular function.

Proof. First note that the degree function is a set cardinality function, and hence modular. A concave transformation of a modular function is submodular [1], and the sum of submodular functions is submodular.

The concavity restriction we impose on $h$ is the key ingredient that allows us to use submodularity to enforce a prior for scale-free networks; any prior favouring long-tailed degree distributions must place a lower weight on new edges joining highly connected nodes than on those joining other nodes. As far as we are aware, this is a novel way of mathematically modelling the "preferential attachment" rule [4] that gives rise to scale-free networks: through non-decreasing submodular functions on the degree distribution.
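As a small illustration of (2), the sketch below scores a graph under a log-based weight sequence. The value of epsilon is a hypothetical choice (the paper selects such parameters by grid search later), and the shift making $h(0) = 0$ matches the tractability condition of Proposition 1.

```python
import numpy as np
import networkx as nx

def degree_objective(G, h):
    # combinatorial objective (2): sum of the degree weighting function over
    # all node degrees (the additive constant is dropped)
    return sum(h(d) for _, d in G.degree())

eps = 0.01                                       # hypothetical epsilon
def h_log(i):
    # non-decreasing, concave, and shifted so that h(0) = 0
    return np.log(i + eps) - np.log(eps)

G = nx.barabasi_albert_graph(60, 2, seed=0)
print(degree_objective(G, h_log))
```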
Let $X$ denote a symmetric matrix of edge weights. A natural convex relaxation of $F$ would be the convex envelope of $F(\operatorname{Supp}(X))$ under some restricted domain. For tractable $h$, we have by construction that $F$ satisfies the conditions of Proposition 1 in [2], so that the convex envelope of $F(\operatorname{Supp}(X))$ on the $L_\infty$ ball is precisely the Lovász extension evaluated on $|X|$. The Lovász extension for our function is easy to determine as it is a sum of "functions of cardinality", which are considered in [2]. Below is the result from [2] adapted to our problem.

Proposition 2. Let $X_{i,(j)}$ be the weight of the $j$th edge connected to $i$, under a decreasing ordering by absolute value (i.e., $|X_{i,(0)}| \ge |X_{i,(1)}| \ge \dots$), where the notation $(i)$ maps from sorted order to the natural ordering, with the diagonal not included. Then the convex envelope of $F$ for tractable $h$ over the $L_\infty$ norm unit ball is:
$$\Omega(X) = \sum_{i=1}^{n} \sum_{k=0}^{n-2} \left(h(k+1) - h(k)\right) |X_{i,(k)}|.$$
This function is piecewise linear and convex.

The form of $\Omega$ is quite intuitive. It behaves like an $L_1$ norm with an additional weight on each edge that depends on how the edge ranks with respect to the other edges of its neighbouring nodes.

3 Optimization

We are interested in using $\Omega$ as a prior, for optimizations of the form
$$\operatorname{minimize}_X\ f(X) = g(X) + \lambda\, \Omega(X),$$
for convex functions $g$ and prior strength parameters $\lambda \in \mathbb{R}_+$, over symmetric $X$. We will focus on the simplest structure learning problem that occurs in graphical model training, that of Gaussian models, in which case we have
$$g(X) = \langle X, C \rangle - \log \det X,$$
where $C$ is the observed covariance matrix of our data. The support of $X$ will then be the set of edges in the undirected graphical model together with the node precisions. This function is a rescaling of the maximum likelihood objective. In order for the resulting $X$ to define a normalizable distribution, $X$ must be restricted to the cone of positive definite matrices. This is not a problem in practice, as $g(X)$ is infinite on the boundary of the PSD cone, and hence the constraint can be handled by restricting optimization steps to the interior of the cone. In fact $X$ can be shown to be in a strictly smaller cone, $X \succeq aI$, for $a$ derivable from $C$ [15]. This restricted domain is useful as $g(X)$ has Lipschitz continuous gradients over $X \succeq aI$ but not over all positive definite matrices [18].

There are a number of possible algorithms that can be applied for optimizing a convex non-differentiable objective such as $f$. Bach [2] suggests two approaches to optimizing functions involving submodular relaxation priors: a subgradient approach and a proximal approach.

Subgradient methods are the simplest class of methods for optimizing non-smooth convex functions. They provide a good baseline for comparison with other methods. For our objective, a subgradient is simple to evaluate at any point, due to the piecewise continuous nature of $\Omega(X)$. Unfortunately, (primal) subgradient methods for our problem will not return sparse solutions except in the limit of convergence. They will instead give intermediate values that oscillate around their limiting values.

An alternative is the use of proximal methods [2]. Proximal methods exhibit superior convergence in comparison to subgradient methods, and produce sparse solutions. Proximal methods rely on solving a simpler optimization problem, known as the proximal operator, at each iteration:
$$\arg\min_X \left\{\lambda\, \Omega(X) + \frac12 \|X - Z\|_2^2\right\},$$
where $Z$ is a variable that varies at each iteration.
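A direct sketch of the relaxed penalty in Proposition 2; the row loop and function name are ours.

```python
import numpy as np

def omega(X, h):
    # for each node, sort the absolute off-diagonal weights in decreasing order
    # and pay the marginal degree cost h(k+1) - h(k) for the k-th largest edge
    n = X.shape[0]
    diffs = np.array([h(k + 1) - h(k) for k in range(n - 1)])
    total = 0.0
    for i in range(n):
        row = np.abs(np.delete(X[i], i))     # exclude the diagonal
        total += diffs @ np.sort(row)[::-1]
    return total
```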
For many problems of interest, the proximal operator can be evaluated using a closed form solution. For non-decreasing submodular relaxations, the proximal operator can be evaluated by solving a submodular minimization on a related (not necessarily non-decreasing) submodular function [2]. Bach [2] considers several example problems where the proximal operator can be evaluated using fast graph-cut methods. For the class of functions we consider, graph-cut methods are not applicable. Generic submodular minimization algorithms could be as slow as $O(n^{12})$ for an $n$-vertex graph, which is clearly impractical [11]. We will instead propose a dual decomposition method for solving this proximal operator problem in Section 3.2.

For solving our optimisation problem, instead of using the standard proximal method (sometimes known as ISTA), which involves a gradient step followed by the proximal operator, we propose to use the alternating direction method of multipliers (ADMM), which has shown good results when applied to the standard L1 regularized covariance selection problem [18]. Next we show how to apply ADMM to our problem.

3.1 Alternating direction method of multipliers

The alternating direction method of multipliers (ADMM, Boyd et al. [6]) is one approach to optimizing our objective that has a number of advantages over the basic proximal method. Let $U$ be the matrix of dual variables for the decoupled problem:
$$\operatorname{minimize}_X\ g(X) + \lambda\, \Omega(Y), \quad \text{s.t. } X = Y.$$
Following the presentation of the algorithm in Boyd et al. [6], given the values $Y^{(l)}$ and $U^{(l)}$ from iteration $l$, with $U^{(0)} = 0_n$ and $Y^{(0)} = I_n$, the ADMM updates for iteration $l+1$ are:
$$X^{(l+1)} = \arg\min_X \left[\langle X, C \rangle - \log \det X + \frac{\rho}{2} \|X - Y^{(l)} + U^{(l)}\|_2^2\right]$$
$$Y^{(l+1)} = \arg\min_Y \left[\lambda\, \Omega(Y) + \frac{\rho}{2} \|X^{(l+1)} - Y + U^{(l)}\|_2^2\right]$$
$$U^{(l+1)} = U^{(l)} + X^{(l+1)} - Y^{(l+1)},$$
where $\rho > 0$ is a fixed step-size parameter (we used $\rho = 0.5$). The advantage of this form is that both the $X$ and $Y$ updates are a proximal operation. It turns out that the proximal operator for $g$ (i.e., the $X^{(l+1)}$ update) actually has a simple solution [18] that can be computed by taking an eigenvalue decomposition $Q^\top \Lambda Q = \rho (Y - U) - C$, where $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, and updating the eigenvalues using the formula
$$\lambda_i' := \frac{\lambda_i + \sqrt{\lambda_i^2 + 4\rho}}{2\rho}$$
to give $X = Q^\top \Lambda' Q$. The stopping criterion we used was $\|X^{(l+1)} - Y^{(l+1)}\| < \epsilon$ and $\|Y^{(l+1)} - Y^{(l)}\| < \epsilon$. In practice the ADMM method is one of the fastest methods for L1 regularized covariance selection. Scheinberg et al. [18] show that convergence is guaranteed if additional cone restrictions are placed on the minimization with respect to $X$, and small enough step sizes are used.

For our degree prior regularizer, the difficulty is in computing the proximal operator for $\Omega$, as the rest of the algorithm is identical to that presented in Boyd et al. [6]. We now show how we solve the problem of computing the proximal operator for $\Omega$.
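The eigenvalue-based X-update can be written in a few lines; this sketch assumes numpy's eigh convention $A = Q \Lambda Q^\top$, which matches the update above up to transposition.

```python
import numpy as np

def admm_x_update(C, Y, U, rho):
    # eigendecompose rho*(Y - U) - C, then rescale the eigenvalues
    lam, Q = np.linalg.eigh(rho * (Y - U) - C)
    lam_new = (lam + np.sqrt(lam ** 2 + 4 * rho)) / (2 * rho)
    return (Q * lam_new) @ Q.T        # Q diag(lam') Q^T
```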
3.2 Proximal operator using dual decomposition

Here we describe the optimisation algorithm that we effectively use for computing the proximal operator. The regularizer $\Omega$ has a quite complicated structure due to the interplay between the terms involving the two end points for each edge. We can decouple these terms using the dual decomposition technique, by writing the proximal operation for a given $Z = Y - U$ as:
$$\operatorname{minimize}_X\ \frac{\lambda}{\rho} \sum_{i=1}^{n} \sum_{k=0}^{n-2} \left(h(k+1) - h(k)\right) X_{i,(k)} + \frac12 \|X - Z\|_2^2 \quad \text{s.t. } X = X^\top.$$
The only difference so far is that we have made the symmetry constraint explicit. Taking the dual gives a formulation where the upper and lower triangles are treated as separate variables. The dual variable matrix $V$ corresponds to the Lagrange multipliers of the symmetry constraint, which for notational convenience we store in an anti-symmetric matrix. The dual decomposition method is given in Algorithm 1.

Algorithm 1 Dual decomposition main
  input: matrix $Z$, constants $\lambda$, $\rho$
  input: step-size $0 < \alpha < 1$
  initialize: $X = Z$
  initialize: $V = 0_n$
  repeat
    for $l = 0$ until $n - 1$ do
      $X_{l\cdot}$ = solveSubproblem($Z_{l\cdot}$, $V_{l\cdot}$)  # Algorithm 2
    end for
    $V = V + \alpha (X - X^\top)$
  until $\|X - X^\top\| < 10^{-6}$
  $X = \frac12 (X + X^\top)$  # symmetrize
  round: any $|X_{ij}| < 10^{-15}$ to 0
  return $X$

We use the notation $X_{i\cdot}$ to denote the $i$th row of $X$. Since this is a dual method, the primal variables $X$ are not feasible (i.e., symmetric) until convergence. Essentially we have decomposed the original problem, so that now we only need to solve the proximal operation for each node in isolation, namely the subproblems:
$$\forall i.\quad X_{i\cdot}^{(l+1)} = \arg\min_x \left\{\frac{\lambda}{\rho} \sum_{k} \left(h(k+1) - h(k)\right) x_{(k)} + \frac12 \big\|x - Z_{i\cdot}^{(l)} + V_{i\cdot}^{(l)}\big\|_2^2\right\}. \quad (3)$$
Note that the dual variable has been integrated into the quadratic term by completing the square. As the diagonal elements of $X$ are not included in the sort ordering, they will be minimized by $X_{ii} = Z_{ii}$, for all $i$.

Each subproblem is strongly convex, as they consist of convex terms plus a positive quadratic term. This implies that the dual problem is differentiable (as the subdifferential contains only one subgradient), hence the $V$ update is actually gradient ascent. Since a fixed step size is used, and the dual is Lipschitz continuous, for sufficiently small step-size convergence is guaranteed. In practice we used $\alpha = 0.9$ for all our tests.

This dual decomposition subproblem can also be interpreted as just a step within the ADMM framework. If applied in a standard way, only one dual variable update would be performed before another expensive eigenvalue decomposition step. Since each iteration of the dual decomposition is much faster than the eigenvalue decomposition, it makes more sense to treat it as a separate problem as we propose here. It also ensures that the eigenvalue decomposition is only performed on symmetric matrices.

Each subproblem in our decomposition is still a non-trivial problem. They do have a closed form solution, involving a sort and several passes over the node's edges, as described in Algorithm 2.

Proposition 3. Algorithm 2 solves the subproblem in equation (3).

Proof: See Appendix 1 in the supplementary material. The main subtlety is the grouping together of elements induced at the non-differentiable points. If multiple edges connected to the same node have the same absolute value, their subdifferential becomes the same, and they behave as a single point whose weight is the average. To handle this grouping, we use a disjoint-set data structure, where each $x_j$ is either in a singleton set, or grouped in a set with other elements whose absolute value is the same.

Algorithm 2 Dual decomposition subproblem (solveSubproblem)
  input: vectors $z$, $v$
  initialize: disjoint-set data structure with set membership function $\theta$
  $w = z - v$  # w gives the sort order
  $u = 0_n$
  build: sorted-to-original position function $\pi$ under descending absolute value order of $w$, excluding the diagonal
  for $k = 0$ until $n - 1$ do
    $j = \pi(k)$
    $u_j = |w_j| - \frac{\lambda}{\rho}\left(h(k+1) - h(k)\right)$
    $\theta(j).\text{value} = u_j$
    $r = k$
    while $r > 1$ and $\theta(\pi(r)).\text{value} \ge \theta(\pi(r-1)).\text{value}$ do
      join: the sets containing $\pi(r)$ and $\pi(r-1)$
      $\theta(\pi(r)).\text{value} = \frac{1}{|\theta(\pi(r))|} \sum_{i \in \theta(\pi(r))} u_i$
      set: $r$ to the first element of $\theta(\pi(r))$ by the sort ordering
    end while
  end for
  for $i = 1$ to $N$ do
    $x_i = \theta(i).\text{value}$
    if $x_i < 0$ then $x_i = 0$  # negative values imply shrinkage to 0
    if $w_i < 0$ then $x_i = -x_i$  # correct the orthant
  end for
  return $x$
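Algorithm 2 is equivalent to a pool-adjacent-violators pass, which the sketch below uses in place of an explicit disjoint-set structure; this reformulation, and the assumption that z_row and v_row hold only the off-diagonal entries of one row (with h_diffs[k] = h(k+1) - h(k) and the diagonal handled separately), are ours.

```python
import numpy as np

def solve_subproblem(z_row, v_row, h_diffs, lam_over_rho):
    # prox of the sorted, weighted l1 penalty in (3) for one row
    w = z_row - v_row
    order = np.argsort(-np.abs(w))                  # pi: sorted -> original index
    u = np.abs(w)[order] - lam_over_rho * h_diffs[: len(w)]
    vals, sizes = [], []
    for ui in u:                                    # pool adjacent violators:
        vals.append(ui)                             # merge groups whose averages
        sizes.append(1)                             # violate the decreasing order
        while len(vals) > 1 and vals[-1] >= vals[-2]:
            v2, s2 = vals.pop(), sizes.pop()
            vals[-1] = (vals[-1] * sizes[-1] + v2 * s2) / (sizes[-1] + s2)
            sizes[-1] += s2
    x_sorted = np.maximum(np.repeat(vals, sizes), 0.0)   # shrink negatives to 0
    x = np.empty_like(w)
    x[order] = x_sorted                             # undo the sort
    return np.sign(w) * x                           # correct the orthant
```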
4 Alternative degree priors

Under the restrictions on $h$ detailed in Proposition 1, several other choices seem reasonable. The scale-free prior can be smoothed somewhat by the addition of a linear term, giving
$$h_{\epsilon,\beta}(i) = \log(i + \epsilon) + \beta i,$$
where $\beta$ controls the strength of the smoothing. A slower diminishing choice would be a square-root function such as
$$h_\beta(i) = (i + 1)^{\frac12} - 1 + \beta i.$$
This requires the linear term in order to correspond to a normalizable prior. Ideally we would choose $h$ so that the expected degree distribution under the ERG model matches the particular form we wish to encourage. Finding such an $h$ for a particular graph size and degree distribution amounts to maximum likelihood parameter learning, which for ERG models is a hard learning problem. The most common approach is to use sampling based inference. Approaches based on Markov chain Monte Carlo techniques have been applied widely to ERG models [19] and are therefore applicable to our model.

5 Related Work

The covariance selection problem has recently been addressed by Liu and Ihler [14] using reweighted L1 regularization. They minimize the following objective:
$$f(X) = \langle X, C \rangle - \log \det X + \alpha \sum_{v \in V} \log\left(\|X_{\setminus v}\| + \epsilon\right) + \beta \sum_{v} |X_{vv}|.$$
The regularizer is split into an off-diagonal term, which is designed to encourage sparsity in the edge parameters, and a more traditional diagonal term. Essentially they use $\|X_{\setminus v}\|$ as the continuous counterpart of node $v$'s degree. The biggest difficulty with this objective is the log term, which makes $f$ highly non-convex. This can be contrasted to our approach, where we start with essentially the same combinatorial prior, but we use an alternative, convex relaxation. The reweighted L1 [7] aspect refers to the method of optimization applied. A double loop method is used, in the same class as EM methods and difference-of-convex programming, where each L1 inner problem gives a monotonically improving lower bound on the true solution.

[Figure 1: ROC curves (false positives vs. true positives) for the BA model (left) and the fixed degree distribution model (right), comparing L1, reweighted L1, submodular log and submodular root methods.]

[Figure 2: Reconstruction of a gene association network using L1 (left), submodular relaxation (middle), and reweighted L1 (right) methods.]

6 Experiments

Reconstruction of synthetic networks. We performed a comparison against the reweighted L1 method of Liu and Ihler [14], and a standard L1 regularized method, both implemented using ADMM for optimization. Although Liu and Ihler [14] use the glasso [10] method for the inner loop, ADMM will give identical results, and is usually faster [18]. Graphs with 60 nodes were generated using both the Barabasi-Albert model [4] and a predefined degree distribution model sampled using the method from Bayati et al. [5], implemented in the NetworkX software package. Both methods generate scale-free graphs; the BA model exhibits a scale parameter of 3.0, whereas we fixed the scale parameter at 2.0 for the other model.
To define a valid Gaussian model, edge weights of $X_{ij} = -0.2$ were assigned, and the node weights were set at $X_{ii} = 0.5 - \sum_{j \ne i} X_{ij}$ so as to make the resulting precision matrix diagonally dominant. The resulting Gaussian graphical model was sampled 500 times. The covariance matrix of these samples was formed, then normalized to have diagonal uniformly 1.0. We tested with the two $h$ sequences described in Section 4. The parameters for the degree weight sequences were chosen by grid search on random instances separate from those we tested on.

The resulting ROC curves for the Hamming reconstruction loss are shown in Figure 1. Results were averaged over 30 randomly generated graphs for each figure. We can see from the plots that our method with the square-root weighting presents results superior to those from Liu and Ihler [14] for these datasets. This is encouraging, particularly since our formulation is convex while the one from Liu and Ihler [14] isn't. Interestingly, the log-based weights give very similar but not identical results to the reweighting scheme, which also uses a log term. The only case where it gives inferior reconstructions is when it is forced to give a sparser reconstruction than the original graph.
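A sketch of this synthetic pipeline (graph, precision matrix, samples, normalized covariance); the seeds and helper names are our own choices.

```python
import numpy as np
import networkx as nx

def synthetic_precision(n=60, m=2, seed=0):
    # BA graph, edge weights -0.2, diagonally dominant node weights
    G = nx.barabasi_albert_graph(n, m, seed=seed)
    X = np.zeros((n, n))
    for i, j in G.edges():
        X[i, j] = X[j, i] = -0.2
    np.fill_diagonal(X, 0.5 - X.sum(axis=1))   # X_ii = 0.5 - sum_{j != i} X_ij
    return X

P = synthetic_precision()
S = np.linalg.inv(P)                           # model covariance
rng = np.random.default_rng(0)
data = rng.multivariate_normal(np.zeros(len(P)), S, size=500)
C = np.cov(data, rowvar=False)
d = np.sqrt(np.diag(C))
C = C / np.outer(d, d)                         # unit diagonal, as in the text
```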
It is clear from this and similar tests that we perIteration formed that the subgradient descent method converges too slowly to be of practical applicability for this prob- Figure 3: Comparison of proximal operators lem. Subgradient methods can be a good choice when only a low accuracy solution is required; for convergence of ADMM the error in the proximal operator needs to be smaller than what can be obtained by the subgradient method. The MNP method also converges slowly for this problem, however it achieves a low but usable accuracy quickly enough that it could be used in practice. The dual decomposition method achieves a much better rate of convergence, converging quickly enough to be of use even for strong accuracy requirements. 0 Distance from solution -1 -2 -3 -4 -5 -6 -7 -8 The time for individual iterations of each of the methods was 0.65ms for subgradient descent, 0.82ms for dual decomposition and 15ms for the MNP method. The speed difference is small between a subgradient iteration and a dual decomposition iteration as both are dominated by the cost of a sort operation. The cost of a MNP iteration is dominated by two least squares solves, whose running time in the worst case is proportional to the square of the current iteration number. Overall, it is clear that our dual decomposition method is significantly more efficient. Runtime comparison: submodular relaxation against other approaches. The running time of the three methods we tested is highly dependent on implementation details, so the following speed comparison should be taken as a rough guide. For a sparse reconstruction of a BA model graph with 100 vertices and 200 edges, the average running time per 10?4 error reconstruction over 10 random graphs was 16 seconds for the reweighted L1 method and 5.0 seconds for the submodular relaxation method. This accuracy level was chosen so that the active edge set for both methods had stabilized between iterations. For comparison, the standard L1 method was significantly faster, taking only 0.72 seconds on average. Conclusion We have presented a new prior for graph reconstruction, which enforces the recovery of scale-free networks. This prior falls within the growing class of structured sparsity methods. Unlike previous approaches to regularizing the degree distribution, our proposed prior is convex, making training tractable and convergence predictable. Our method can be directly applied in contexts where sparse covariance selection is currently used, where it may improve the reconstruction quality. Acknowledgements NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program. 8 References [1] Francis Bach. Convex analysis and optimization with submodular functions: a tutorial. Technical report, INRIA, 2010. [2] Francis Bach. Structured sparsity-inducing norms through submodular functions. NIPS, 2010. [3] Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with sparsityinducing penalties. Foundations and Trends in Machine Learning, 2012. [4] Albert-Laszlo Barabasi and Reka Albert. Emergence of scaling in random networks. Science, 286:509? 512, 1999. [5] Moshen Bayati, Jeong Han Kim, and Amin Saberi. A sequential algorithm for generating random graphs. Algorithmica, 58, 2009. [6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. 
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3, 2011.
[7] Emmanuel J. Candes, Michael B. Wakin, and Stephen P. Boyd. Enhancing sparsity by reweighted l1 minimization. Journal of Fourier Analysis and Applications, 2008.
[8] A. P. Dempster. Covariance selection. Biometrics, 28:157-175, 1972.
[9] Adrian Dobra, Chris Hans, Beatrix Jones, Joseph R. Nevins, and Mike West. Sparse graphical models for exploring gene expression data. Journal of Multivariate Analysis, 2004.
[10] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 2007.
[11] Satoru Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[12] Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski, and Francis Bach. Proximal methods for sparse hierarchical dictionary learning. ICML, 2010.
[13] Rodolphe Jenatton, Guillaume Obozinski, and Francis Bach. Structured sparse principal component analysis. AISTATS, 2010.
[14] Qiang Liu and Alexander Ihler. Learning scale free networks by reweighted l1 regularization. AISTATS, 2011.
[15] Zhaosong Lu. Smooth optimization approach for sparse covariance selection. SIAM J. Optim., 2009.
[16] Zachary M. Saul and Vladimir Filkov. Exploring biological network structure using exponential random graph models. Bioinformatics, 2007.
[17] Katya Scheinberg and Shiqian Ma. Optimization for Machine Learning, chapter 17: optimization methods for sparse inverse covariance selection. MIT Press, 2011.
[18] Katya Scheinberg, Shiqian Ma, and Donald Goldfarb. Sparse inverse covariance selection via alternating linearization methods. In NIPS, 2010.
[19] T. Snijders. Markov chain Monte Carlo estimation of exponential random graph models. Journal of Social Structure, 2002.
[20] Tom A.B. Snijders, Philippa E. Pattison, and Mark S. Handcock. New specifications for exponential random graph models. Technical report, University of Washington, 2004.
[21] Alan Terry. Exponential random graphs. Master's thesis, University of York, 2005.
[22] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, 2007.
3,909
4,539
A Geometric take on Metric Learning

Søren Hauberg (MPI for Intelligent Systems, Tübingen, Germany; [email protected]), Oren Freifeld (Brown University, Providence, US; [email protected]), Michael J. Black (MPI for Intelligent Systems, Tübingen, Germany; [email protected])

Abstract

Multi-metric learning techniques learn local metric tensors in different parts of a feature space. With such an approach, even simple classifiers can be competitive with the state of the art because the distance measure locally adapts to the structure of the data. The learned distance measure is, however, non-metric, which has prevented multi-metric learning from generalizing to tasks such as dimensionality reduction and regression in a principled way. We prove that, with appropriate changes, multi-metric learning corresponds to learning the structure of a Riemannian manifold. We then show that this structure gives us a principled way to perform dimensionality reduction and regression according to the learned metrics. Algorithmically, we provide the first practical algorithm for computing geodesics according to the learned metrics, as well as algorithms for computing exponential and logarithmic maps on the Riemannian manifold. Together, these tools let many Euclidean algorithms take advantage of multi-metric learning. We illustrate the approach on regression and dimensionality reduction tasks that involve predicting measurements of the human body from shape data.

1 Learning and Computing Distances

Statistics relies on measuring distances. When the Euclidean metric is insufficient, as is the case in many real problems, standard methods break down. This is a key motivation behind metric learning, which strives to learn good distance measures from data. In the simplest scenarios a single metric tensor is learned, but in recent years several methods have proposed learning multiple metric tensors, such that different distance measures are applied in different parts of the feature space. This has proven to be a very powerful approach for classification tasks [1, 2], but the approach has not generalized to other tasks. Here we consider the generalization of Principal Component Analysis (PCA) and linear regression; see Fig. 1 for an illustration of our approach.

The main problem with generalizing multi-metric learning is that it is based on assumptions that make the feature space both non-smooth and non-metric. Specifically, it is often assumed that straight lines form geodesic curves and that the metric tensor stays constant along these lines. These assumptions are made because it is believed that computing the actual geodesics is intractable, requiring a discretization of the entire feature space [3]. We solve these problems by smoothing the transitions between different metric tensors, which ensures a metric space where geodesics can be computed.

In this paper, we consider the scenario where the metric tensor at a given point in feature space is defined as the weighted average of a set of learned metric tensors. In this model, we prove that the feature space becomes a chart for a Riemannian manifold. This ensures a metric feature space, i.e.
$$\mathrm{dist}(x, y) = 0 \Leftrightarrow x = y, \qquad \mathrm{dist}(x, y) = \mathrm{dist}(y, x) \ \text{(symmetry)}, \qquad \mathrm{dist}(x, z) \le \mathrm{dist}(x, y) + \mathrm{dist}(y, z) \ \text{(triangle inequality)}. \qquad (1)$$
To compute statistics according to the learned metric, we need to be able to compute distances, which implies that we need to compute geodesics.
[Figure 1: Illustration of Principal Geodesic Analysis. (a) Local metrics and geodesics: geodesics are computed between the mean and each data point. (b) Tangent space representation: data is mapped to the Euclidean tangent space and the first principal component is computed. (c) First principal geodesic: the principal component is mapped back to the feature space.]

Based on the observation that geodesics are smooth curves in Riemannian spaces, we derive an algorithm for computing geodesics that only requires a discretization of the geodesic rather than the entire feature space. Furthermore, we show how to compute the exponential and logarithmic maps of the manifold. With this we can map any point back and forth between a Euclidean tangent space and the manifold. This gives us a general strategy for incorporating the learned metric tensors in many Euclidean algorithms: map the data to the tangent of the manifold, perform the Euclidean analysis and map the results back to the manifold.

Before deriving the algorithms (Sec. 3) we set the scene by an analysis of the shortcomings of current state-of-the-art methods (Sec. 2), which motivates our final model. The model is general and can be used for many problems. Here we illustrate it with several challenging problems in 3D body shape modeling and analysis (Sec. 4). All proofs can be found in the supplementary material along with algorithmic details and further experimental results.

2 Background and Related Work

Single-metric learning learns a metric tensor, M, such that distances are measured as
$$\mathrm{dist}^2(x_i, x_j) = \|x_i - x_j\|_M^2 \equiv (x_i - x_j)^T M (x_i - x_j), \qquad (2)$$
where M is a symmetric and positive definite D × D matrix. Classic approaches for finding such a metric tensor include PCA, where the metric is given by the inverse covariance matrix of the training data; and linear discriminant analysis (LDA), where the metric tensor is $M = S_W^{-1} S_B S_W^{-1}$, with $S_W$ and $S_B$ being the within-class scatter and the between-class scatter, respectively [9].

A more recent approach tries to learn a metric tensor from triplets of data points $(x_i, x_j, x_k)$, where the metric should obey the constraint that $\mathrm{dist}(x_i, x_j) < \mathrm{dist}(x_i, x_k)$. Here the constraints are often chosen such that $x_i$ and $x_j$ belong to the same class, while $x_i$ and $x_k$ do not. Various relaxed versions of this idea have been suggested such that the metric can be learned by solving a semi-definite or a quadratic program [1, 2, 4–8]. Among the most popular approaches is the Large Margin Nearest Neighbor (LMNN) classifier [5], which finds a linear transformation that satisfies local distance constraints, making the approach suitable for multi-modal classes.

For many problems, a single global metric tensor is not enough, which motivates learning several local metric tensors. The classic work by Hastie and Tibshirani [9] advocates locally learning metric tensors according to LDA and using these as part of a kNN classifier. In a somewhat similar fashion, Weinberger and Saul [5] cluster the training data and learn a separate metric tensor for each cluster using LMNN. A more extreme point of view was taken by Frome et al. [1, 2], who learn a diagonal metric tensor for every point in the training set, such that distance rankings are preserved. Similarly, Malisiewicz and Efros [6] find a diagonal metric tensor for each training point such that the distance to a subset of the training data from the same class is kept small.
Once a set of metric tensors $\{M_1, \ldots, M_R\}$ has been learned, the distance $\mathrm{dist}(a, b)$ is measured according to (2) where "the nearest" metric tensor is used, i.e.
$$M(x) = \sum_{r=1}^{R} \frac{\tilde w_r(x)}{\sum_j \tilde w_j(x)} M_r, \quad \text{where} \quad \tilde w_r(x) = \begin{cases} 1 & \|x - x_r\|^2_{M_r} \le \|x - x_j\|^2_{M_j},\ \forall j, \\ 0 & \text{otherwise}, \end{cases} \qquad (3)$$
where x is either a or b depending on the algorithm. Note that this gives a non-metric distance function, as it is not symmetric. To derive this equation, it is necessary to assume that 1) geodesics form straight lines, and 2) the metric tensor stays constant along these lines [3]. Both assumptions are problematic, which we illustrate with a simple example in Fig. 2a–c.

[Figure 2: (a)–(b) An illustrative example where straight lines do not form geodesics and where the metric tensor does not stay constant along lines; the panels show the assumed and the actual geodesics, together with the locations of the metric tensors and the test points. The background color is proportional to the trace of the metric tensor, such that light grey corresponds to regions where paths are short (M₁) and dark grey to regions where they are long (M₂). (c) The suggested geometric model along with the Riemannian geodesics; the colour scale is the same as in (a) and (b). (d) An illustration of the exponential and logarithmic maps.]

Assume we are given two metric tensors M₁ = 2I and M₂ = I positioned at x₁ = (2, 2)ᵀ and x₂ = (4, 4)ᵀ respectively. This gives rise to two regions in feature space, in which x₁ is nearest in the first and x₂ is nearest in the second, according to (3). This is illustrated in Fig. 2a. In the same figure, we also show the assumed straight-line geodesics between selected points in space. As can be seen, two of the lines go through both regions, such that the assumption of constant metric tensors along the line is violated. Hence, it would seem natural to measure the length of the line by adding the lengths of the line segments which pass through the different regions of feature space. This was suggested by Ramanan and Baker [3], who also proposed a polynomial-time algorithm for measuring these line lengths. This gives a symmetric distance function.

Properly computing line lengths according to the local metrics is, however, not enough to ensure that the distance function is metric. As can be seen in Fig. 2a, the straight line does not form a geodesic, as a shorter path can be found by circumventing the region with the "expensive" metric tensor M₁, as illustrated in Fig. 2b. This issue makes it trivial to construct cases where the triangle inequality is violated, which again makes the line length measure non-metric.

In summary, if we want a metric feature space, we can neither assume that geodesics are straight lines nor that the metric tensor stays constant along such lines. In practice, good results have been reported using (3) [1, 3, 5], so it seems obvious to ask: is metricity required? For kNN classifiers this does not appear to be the case, with many successes based on dissimilarities rather than distances [10]. We, however, want to generalize PCA and linear regression, which both seek to minimize the reconstruction error of points projected onto a subspace. As the notion of projection is hard to define sensibly in non-metric spaces, we consider metricity essential.
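To see the asymmetry of (3) concretely, the following minimal sketch (our own illustration in Python/NumPy, not code from the paper; the function name is hypothetical) evaluates the nearest-metric distance for the two tensors of the example above:

```python
import numpy as np

def dist2_nearest_metric(a, b, centers, metrics):
    """Squared distance from (2)-(3): select the metric tensor M_r whose
    center x_r is nearest to `a` under its own norm, then measure a-b
    with it. Swapping a and b can select a different M_r."""
    d2 = [(a - x) @ M @ (a - x) for x, M in zip(centers, metrics)]
    M = metrics[int(np.argmin(d2))]
    return (a - b) @ M @ (a - b)

# The two tensors of the running example: M1 = 2I at (2,2), M2 = I at (4,4).
centers = [np.array([2.0, 2.0]), np.array([4.0, 4.0])]
metrics = [2 * np.eye(2), np.eye(2)]
a, b = centers
print(dist2_nearest_metric(a, b, centers, metrics))  # uses M1 -> 16.0
print(dist2_nearest_metric(b, a, centers, metrics))  # uses M2 ->  8.0
```

The two printed values differ, which is exactly the failure of symmetry discussed above.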
In order to build a model with a metric feature space, we change the weights in (3) to be smooth functions. This imposes a well-behaved geometric structure on the feature space, which we take advantage of in order to perform statistical analysis according to the learned metrics. However, first we review the basics of Riemannian geometry, as this provides the theoretical foundation of our work.

2.1 Geodesics and Riemannian Geometry

We start by defining Riemannian manifolds, which intuitively are smoothly curved spaces equipped with an inner product. Formally, they are smooth manifolds endowed with a Riemannian metric [11]:

Definition. A Riemannian metric M on a manifold $\mathcal{M}$ is a smoothly varying inner product $\langle a, b \rangle_x = a^T M(x)\, b$ in the tangent space $T_x\mathcal{M}$ of each point $x \in \mathcal{M}$.

Often Riemannian manifolds are represented by a chart, i.e. a parameter space for the curved surface. An example chart is the spherical coordinate system often used to represent spheres. While such charts are often flat spaces, the curvature of the manifold arises from the smooth changes in the metric.

On a Riemannian manifold $\mathcal{M}$, the length of a smooth curve $c : [0, 1] \to \mathcal{M}$ is defined as the integral of the norm of the tangent vector (interpreted as speed) along the curve:
$$\mathrm{Length}(c) = \int_0^1 \|c'(\lambda)\|_{M(c(\lambda))}\, d\lambda = \int_0^1 \sqrt{c'(\lambda)^T M(c(\lambda))\, c'(\lambda)}\, d\lambda, \qquad (4)$$
where $c'$ denotes the derivative of c and $M(c(\lambda))$ is the metric tensor at $c(\lambda)$. A geodesic curve is then a length-minimizing curve connecting two given points x and y, i.e.
$$c_{\mathrm{geo}} = \arg\min_c \mathrm{Length}(c) \quad \text{with} \quad c(0) = x \ \text{and} \ c(1) = y. \qquad (5)$$
The distance between x and y is defined as the length of the geodesic.

Given a tangent vector $v \in T_x\mathcal{M}$, there exists a unique geodesic $c_v(t)$ with initial velocity v at x. The Riemannian exponential map, $\mathrm{Exp}_x$, maps v to a point on the manifold along the geodesic $c_v$ at t = 1. This mapping preserves distances such that $\mathrm{dist}(c_v(0), c_v(1)) = \|v\|$. The inverse of the exponential map is the Riemannian logarithmic map, denoted $\mathrm{Log}_x$. Informally, the exponential and logarithmic maps move points back and forth between the manifold and the tangent space while preserving distances (see Fig. 2d for an illustration). This provides a general strategy for generalizing many Euclidean techniques to Riemannian domains: data points are mapped to the tangent space, where ordinary Euclidean techniques are applied, and the results are mapped back to the manifold.

3 A Metric Feature Space

With the preliminaries settled we define the new model. Let $C = \mathbb{R}^D$ denote the feature space. We endow C with a metric tensor in every point x, which we define akin to (3),
$$M(x) = \sum_{r=1}^{R} w_r(x)\, M_r, \quad \text{where} \quad w_r(x) = \frac{\tilde w_r(x)}{\sum_{j=1}^{R} \tilde w_j(x)}, \qquad (6)$$
with $\tilde w_r > 0$. The only difference from (3) is that we shall not restrict ourselves to binary weight functions $\tilde w_r$. We assume the metric tensors $M_r$ have already been learned; Sec. 4 contains examples where they have been learned using LMNN [5] and LDA [9]. From the definition of a Riemannian metric, we trivially have the following result:

Lemma 1. The space $C = \mathbb{R}^D$ endowed with the metric tensor from (6) is a chart of a Riemannian manifold iff the weights $w_r(x)$ change smoothly with x.

Hence, by only considering smooth weight functions $\tilde w_r$, we get a well-studied geometric structure on the feature space, which ensures us that it is metric. To illustrate the implications we return to the example in Fig. 2. We change the weight functions from binary to squared exponentials, which gives the feature space shown in Fig. 2c. As can be seen, the metric tensor now changes smoothly, which also makes the geodesics smooth curves (a property we will use when computing the geodesics).
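As a sketch of the construction (our own Python/NumPy illustration; the function name and the value of the bandwidth ρ are assumptions), the smoothly weighted tensor of (6) with the squared-exponential weights used in Theorem 3 below can be written as:

```python
import numpy as np

def metric_tensor(x, centers, metrics, rho=1.0):
    """M(x) from (6) with squared-exponential weights
    w_r(x) proportional to exp(-rho/2 * ||x - x_r||^2_{M_r}).
    The weights vary smoothly with x, so by Lemma 1 this defines
    a chart of a Riemannian manifold."""
    w = np.array([np.exp(-0.5 * rho * (x - c) @ M @ (x - c))
                  for c, M in zip(centers, metrics)])
    w = w / w.sum()                       # normalize as in (6)
    return sum(wr * M for wr, M in zip(w, metrics))

# Reusing the two tensors of Fig. 2: at a center the local tensor
# dominates, and between the centers M(x) interpolates smoothly.
centers = [np.array([2.0, 2.0]), np.array([4.0, 4.0])]
metrics = [2 * np.eye(2), np.eye(2)]
print(metric_tensor(np.array([2.0, 2.0]), centers, metrics))  # close to 2I
print(metric_tensor(np.array([3.0, 3.0]), centers, metrics))  # a smooth mix
```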
It is worth noting that Ramanan and Baker [3] also consider the idea of smoothly averaging the metric tensor. They, however, only evaluate the metric tensor at the test point of their classifier and then assume straight-line geodesics with a constant metric tensor. Such assumptions violate the premise of a smoothly changing metric tensor and, again, the distance measure becomes non-metric.

Lemma 1 shows that metric learning can be viewed as manifold learning. The main difference between our approach and techniques such as Isomap [12] is that, while Isomap learns an embedding of the data points, we learn the actual manifold structure. This gives us the benefit that we can compute geodesics as well as the exponential and logarithmic maps. These provide us with mappings back and forth between the manifold and the Euclidean representation of the data, which preserve distances as well as possible. The availability of such mappings is in stark contrast to e.g. Isomap. In the next section we derive a system of ordinary differential equations (ODEs) that geodesics in C have to satisfy, which provides us with algorithms for computing geodesics as well as exponential and logarithmic maps. With these we can generalize many Euclidean techniques.

3.1 Computing Geodesics, Maps and Statistics

At minima of (4) we know that the Euler–Lagrange equation must hold [11], i.e.
$$\frac{\partial L}{\partial c} = \frac{d}{d\lambda}\frac{\partial L}{\partial c'}, \quad \text{where} \quad L(\lambda, c, c') = c'(\lambda)^T M(c(\lambda))\, c'(\lambda). \qquad (7)$$
As we have an explicit expression for the metric tensor, we can compute (7) in closed form:

Theorem 2. Geodesic curves in C satisfy the following system of second-order ODEs:
$$M(c(\lambda))\, c''(\lambda) = -\frac{1}{2}\left(\frac{\partial\, \mathrm{vec}[M(c(\lambda))]}{\partial c(\lambda)}\right)^T \left(c'(\lambda) \otimes c'(\lambda)\right), \qquad (8)$$
where ⊗ denotes the Kronecker product and vec[·] stacks the columns of a matrix into a vector [13].

Proof. See supplementary material.

This result holds for any smooth weight functions $\tilde w_r$. We, however, still need to compute $\partial\,\mathrm{vec}[M]/\partial c$, which depends on the specific choice of $\tilde w_r$. Any smooth weighting scheme is applicable, but we restrict ourselves to the obvious smooth generalization of (3) and use squared exponentials. From this assumption, we get the following result:

Theorem 3. For $\tilde w_r(x) = \exp\!\left(-\frac{\rho}{2}\|x - x_r\|^2_{M_r}\right)$, the derivative of the metric tensor from (6) is
$$\frac{\partial\, \mathrm{vec}[M(c)]}{\partial c} = \frac{\rho}{\left(\sum_{j=1}^{R}\tilde w_j\right)^{2}} \sum_{r=1}^{R} \tilde w_r\, \mathrm{vec}[M_r] \sum_{j=1}^{R} \tilde w_j \left((c - x_j)^T M_j - (c - x_r)^T M_r\right). \qquad (9)$$

Proof. See supplementary material.

Computing Geodesics. Any geodesic curve must be a solution to (8). Hence, to compute a geodesic between x and y, we can solve (8) subject to the constraints
$$c(0) = x \quad \text{and} \quad c(1) = y. \qquad (10)$$
This is a boundary value problem, which has a smooth solution. This allows us to solve the problem numerically using a standard three-stage Lobatto IIIa formula, which provides a fourth-order accurate, C¹-continuous solution [14]. Ramanan and Baker [3] discuss the possibility of computing geodesics, but arrive at the conclusion that this is intractable, based on the assumption that it requires discretizing the entire feature space. Our solution avoids discretizing the feature space by discretizing the geodesic curve instead. As this is always one-dimensional, the approach remains tractable in high-dimensional feature spaces.
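As a sketch of the boundary value computation (our own Python illustration, not the authors' implementation: we use SciPy's `solve_bvp`, a collocation solver with residual control in the spirit of the Lobatto IIIa formula of [14], and a finite-difference stand-in for the closed-form derivative (9)):

```python
import numpy as np
from scipy.integrate import solve_bvp

def geodesic(x, y, metric, D, n_mesh=20, eps=1e-5):
    """Solve the geodesic ODE (8) with boundary conditions (10):
    c(0) = x, c(1) = y. `metric` maps a point to the D-by-D tensor M(c)."""
    def dvecM(c):                      # (D*D, D) Jacobian of vec[M(c)],
        J = np.empty((D * D, D))       # by central finite differences
        for j in range(D):
            e = np.zeros(D); e[j] = eps
            J[:, j] = (metric(c + e) - metric(c - e)).ravel() / (2 * eps)
        return J

    def rhs(t, Y):                     # Y stacks (c, c') per mesh node
        out = np.empty_like(Y)
        for k in range(Y.shape[1]):
            c, v = Y[:D, k], Y[D:, k]
            a = -0.5 * np.linalg.solve(metric(c),
                                       dvecM(c).T @ np.kron(v, v))
            out[:D, k], out[D:, k] = v, a
        return out

    def bc(Ya, Yb):                    # endpoint residuals for (10)
        return np.concatenate([Ya[:D] - x, Yb[:D] - y])

    t = np.linspace(0.0, 1.0, n_mesh)
    Y0 = np.zeros((2 * D, n_mesh))     # straight-line initial guess
    Y0[:D] = x[:, None] + t * (y - x)[:, None]
    Y0[D:] = (y - x)[:, None]
    return solve_bvp(rhs, bc, t, Y0)
```

The returned solution object carries the discretized curve and its derivative, which is all that the logarithmic map below needs.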
Computing Logarithmic Maps. Once a geodesic c is found, it follows from the definition of the logarithmic map, $\mathrm{Log}_x(y)$, that it can be computed as
$$v = \mathrm{Log}_x(y) = \frac{c'(0)}{\|c'(0)\|}\, \mathrm{Length}(c). \qquad (11)$$
In practice, we solve (8) by rewriting it as a system of first-order ODEs, such that we compute both c and c′ simultaneously (see supplementary material for details).

Computing Exponential Maps. Given a starting point x on the manifold and a vector v in the tangent space, the exponential map, $\mathrm{Exp}_x(v)$, finds the unique geodesic starting at x with initial velocity v. As the geodesic must fulfill (8), we can compute the exponential map by solving this system of ODEs with the initial conditions
$$c(0) = x \quad \text{and} \quad c'(0) = v. \qquad (12)$$
This initial value problem has a unique solution, which we find numerically using a standard Runge–Kutta scheme [15].

3.1.1 Generalizing PCA and Regression

At this stage, we know that the feature space is Riemannian and we know how to compute geodesics and exponential and logarithmic maps. We now seek to generalize PCA and linear regression, which becomes straightforward since solutions are available in Riemannian spaces [16, 17]. These generalizations can be summarized as mapping the data to the tangent space at the mean, performing standard Euclidean analysis in the tangent space and mapping the results back.

The first step is to compute the mean value on the manifold, which is defined as the point that minimizes the sum of squared distances to the data points. Pennec [18] provides an efficient gradient descent approach for computing this point, which we also summarize in the supplementary material. The empirical covariance of a set of points is defined as the ordinary Euclidean covariance in the tangent space at the mean value [18]. With this in mind, it is not surprising that the principal components of a dataset have been generalized as the geodesics starting at the mean with initial velocity corresponding to the eigenvectors of the covariance [16],
$$\gamma_{v_d}(t) = \mathrm{Exp}_\mu(t\, v_d), \qquad (13)$$
where $v_d$ denotes the d-th eigenvector of the covariance. This approach is called Principal Geodesic Analysis (PGA), and the geodesic curve $\gamma_{v_d}$ is called the principal geodesic. An illustration of the approach can be seen in Fig. 1 and more algorithmic details are in the supplementary material.

Linear regression has been generalized in a similar way [17] by performing regression in the tangent space of the mean and mapping the resulting line back to the manifold using the exponential map. The idea of working in the tangent space is both efficient and convenient, but comes with an element of approximation, as the logarithmic map is only guaranteed to preserve distances to the origin of the tangent space and not between all pairs of data points. Practical experience, however, indicates that this is a good tradeoff; see [19] for a more in-depth discussion of when the approximation is suitable.
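Putting the pieces together, PGA itself is short once Exp and Log are available. The following sketch is ours, with hypothetical names: `log_map(mu, x)` and `exp_map(mu, v)` are assumed to wrap the geodesic computations above, and the mean `mu` is assumed to have been computed with the gradient descent of [18]:

```python
import numpy as np

def pga(data, mu, log_map, exp_map, n_components):
    """Principal Geodesic Analysis as in (13): map data to the tangent
    space at the mean with Log, run Euclidean PCA there, and return the
    principal geodesics t -> Exp_mu(t * v_d)."""
    V = np.stack([log_map(mu, x) for x in data])   # tangent vectors
    C = V.T @ V / len(data)                        # tangent covariance
    eigval, eigvec = np.linalg.eigh(C)             # ascending eigenvalues
    vs = eigvec[:, ::-1][:, :n_components].T       # leading directions
    return [lambda t, v=v: exp_map(mu, t * v) for v in vs]
```

Each returned callable is one principal geodesic, which can be sampled at any t to visualize the mode of variation, as done in the experiments below.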
4 Experiments

To illustrate the framework¹ we consider an example in human body analysis, and then we analyze the scalability of the approach. But first, to build intuition, Fig. 3a shows synthetically generated data samples from two classes. We sample random points $x_r$ and learn a local LDA metric [9] by considering all data points within a radius; this locally pushes the two classes apart. We combine the local metrics using (6), and Fig. 3b shows the data in the tangent space of the resulting manifold. As can be seen, the two classes are now globally further apart, which shows the effect of local metrics.

4.1 Human Body Shape

We consider a regression example concerning human body shape analysis. We study 986 female body laser scans from the CAESAR [20] data set; each shape is represented using the leading 35 principal components of the data, learned using a SCAPE-like model [21, 22]. Each shape is associated with anthropometric measurements such as body height, shoe size, etc. We show results for shoulder-to-wrist distance and shoulder breadth, but results for more measurements are in the supplementary material. To predict the measurements from shape coefficients, we learn local metrics and perform linear regression according to these. As a further experiment, we use PGA to reduce the dimensionality of the shape coefficients according to the local metrics, and measure the quality of the reduction by performing linear regression to predict the measurements. As a baseline we use the corresponding Euclidean techniques.

To learn the local metrics we do the following. First we whiten the data such that the variance captured by PGA will only be due to the change of metric; this allows easy visualization of the impact of the learned metrics. We then cluster the body shapes into equal-sized clusters according to the measurement and learn an LMNN metric for each cluster [5], which we associate with the mean of each class. These push the clusters apart, which introduces variance along the directions where the measurement changes. From this we construct a Riemannian manifold according to (6), compute the mean value on the manifold, map the data to the tangent space at the mean and perform linear regression in the tangent space.

¹ Our software implementation for computing geodesics and performing manifold statistics is available at http://ps.is.tue.mpg.de/project/Smooth Metric Learning

[Figure 3: Left panels: Synthetic data. (a) Samples from two classes along with illustratively sampled metric tensors from (6). (b) The data represented in the tangent space of a manifold constructed from local LDA metrics learned at random positions. Right panels: Real data. (c) Average error of linearly predicted body measurements (mm) as a function of dimensionality, for the Euclidean and Riemannian models. (d) Running time (sec) of the geodesic computation as a function of dimensionality.]

[Figure 4: Left: body shape data in the first two principal components according to the Euclidean metric; point color indicates cluster membership. Center: as on the left, but according to the Riemannian model (tangent space PCA, i.e. PGA). Right: regression error for shoulder breadth and shoulder-to-wrist distance as a function of the dimensionality of the shape space; again the Euclidean metric and the Riemannian metric are compared.]

As a first visualization we plot the data expressed in the leading two dimensions of PGA in Fig. 4; as can be seen, the learned metrics provide principal geodesics which are more strongly related with the measurements than the Euclidean model.
In order to predict the measurements from the body shape, we perform linear regression, both directly in the shape space according to the Euclidean metric and in the tangent space of the manifold corresponding to the learned metrics (using the logarithmic map from (11)). We measure the prediction error using leave-one-out cross-validation. To further illustrate the power of the PGA model, we repeat this experiment for different dimensionalities of the data. The results are plotted in Fig. 4, showing that regression according to the learned metrics outperforms the Euclidean model.

To verify that the learned metrics improve accuracy, we average the prediction errors over all millimeter measurements. The result in Fig. 3c shows that much can be gained in lower dimensions by using the local metrics. To provide visual insight into the behavior of the learned metrics, we uniformly sample body shapes along the first principal geodesic (in the range ±7 times the standard deviation) according to the different metrics. The results are available as a movie in the supplementary material, but are also shown in Fig. 5. As can be seen, the learned metrics pick up intuitive relationships between body shape and the measurements; e.g., shoulder-to-wrist distance is related to overall body size, while shoulder breadth is related to body weight.

[Figure 5: Shapes corresponding to the mean (center) and ±7 times the standard deviations along the principal geodesics (left and right), for shoulder-to-wrist distance and shoulder breadth. Movies are available in the supplementary material.]

4.2 Scalability

The human body data set is small enough (986 samples in 35 dimensions) that computing a geodesic only takes a few seconds. To show that the current unoptimized Matlab implementation can handle somewhat larger datasets, we briefly consider a dimensionality reduction task on the classic MNIST handwritten digit data set. We use the preprocessed data available with [3], where the original 28×28 gray-scale images were deskewed and projected onto their leading 164 Euclidean principal components (which capture 95% of the variance in the original data).

We learn one diagonal LMNN metric per class, which we associate with the mean of the class. From this we construct a Riemannian manifold from (6), compute the mean value on the manifold and compute geodesics between the mean and each data point; this is the computationally expensive part of performing PGA. Fig. 3d plots the average running time (sec) for the computation of geodesics as a function of the dimensionality of the training data. A geodesic can be computed in 100 dimensions in approximately 5 sec., whereas in 150 dimensions it takes about 30 sec. In this experiment, we train a PGA model on 60,000 data points, and test a nearest neighbor classifier in the tangent space as we decrease the dimensionality of the model. Compared to a Euclidean model, this gives a modest improvement in classification accuracy of 2.3 percent, when averaged across different dimensionalities. Plots of the results can be found in the supplementary material.

5 Discussion

This work shows that multi-metric learning techniques are indeed applicable outside the realm of kNN classifiers. The idea of defining the metric tensor at any given point as the weighted average of a finite set of learned metrics is quite natural from a modeling point of view, which is also validated by the Riemannian structure of the resulting space.
This opens both a theoretical and a practical toolbox for analyzing and developing algorithms that use local metric tensors. Specifically, we show how to use local metric tensors for both regression and dimensionality reduction tasks.

Others have attempted to solve non-classification problems using local metrics, but we feel that our approach is the first to have a solid theoretical backing. For example, Hastie and Tibshirani [9] use local LDA metrics for dimensionality reduction by averaging the local metrics and using the resulting metric as part of a Euclidean PCA, which essentially is a linear approach. Another approach was suggested by Hong et al. [23], who simply compute the principal components according to each metric separately, such that one low-dimensional model is learned per metric.

The suggested approach is, however, not difficulty-free in its current implementation. Currently, we are using off-the-shelf numerical solvers for computing geodesics, which can be computationally demanding. While we managed to analyze medium-sized datasets, we believe that the run time can be drastically improved by developing specialized numerical solvers. In the experiments, we learned local metrics using techniques specialized for classification tasks, as this is all the current literature provides. We expect improvements by learning the metrics specifically for regression and dimensionality reduction, but doing so is currently an open problem.

Acknowledgments: Søren Hauberg is supported in part by the Villum Foundation, and Oren Freifeld is supported in part by NIH-NINDS EUREKA (R01-NS066311).

References
[1] Andrea Frome, Yoram Singer, and Jitendra Malik. Image retrieval and classification using local distance functions. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19 (NIPS), pages 417–424, Cambridge, MA, 2007. MIT Press.
[2] Andrea Frome, Fei Sha, Yoram Singer, and Jitendra Malik. Learning globally-consistent local distance functions for shape-based image retrieval and classification. In International Conference on Computer Vision (ICCV), pages 1–8, 2007.
[3] Deva Ramanan and Simon Baker. Local distance functions: A taxonomy, new algorithms, and an evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(4):794–806, 2011.
[4] Shai Shalev-Shwartz, Yoram Singer, and Andrew Y. Ng. Online and batch learning of pseudo-metrics. In Proceedings of the Twenty-First International Conference on Machine Learning, ICML '04, pages 94–101. ACM, 2004.
[5] Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10:207–244, 2009.
[6] Tomasz Malisiewicz and Alexei A. Efros. Recognition by association via learning per-exemplar distances. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2008.
[7] Yiming Ying and Peng Li. Distance metric learning with eigenvalue optimization. The Journal of Machine Learning Research, 13:1–26, 2012.
[8] Matthew Schultz and Thorsten Joachims. Learning a distance metric from relative comparisons. In Advances in Neural Information Processing Systems 16 (NIPS), 2004.
[9] Trevor Hastie and Robert Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):607–616, June 1996.
[10] Elzbieta Pekalska, Pavel Paclik, and Robert P. W. Duin. A generalized kernel approach to dissimilarity-based classification. Journal of Machine Learning Research, 2:175–211, 2002.
[11] Manfredo Perdigão do Carmo. Riemannian Geometry. Birkhäuser Boston, January 1992.
[12] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[13] Jan R. Magnus and Heinz Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley & Sons, 2007.
[14] Jacek Kierzenka and Lawrence F. Shampine. A BVP solver based on residual control and the Matlab PSE. ACM Transactions on Mathematical Software, 27(3):299–316, 2001.
[15] John R. Dormand and P. J. Prince. A family of embedded Runge–Kutta formulae. Journal of Computational and Applied Mathematics, 6:19–26, 1980.
[16] P. Thomas Fletcher, Conglin Lu, Stephen M. Pizer, and Sarang Joshi. Principal Geodesic Analysis for the study of nonlinear statistics of shape. IEEE Transactions on Medical Imaging, 23(8):995–1005, 2004.
[17] Peter E. Jupp and John T. Kent. Fitting smooth paths to spherical data. Applied Statistics, 36(1):34–46, 1987.
[18] Xavier Pennec. Probabilities and statistics on Riemannian manifolds: Basic tools for geometric measurements. In Proceedings of Nonlinear Signal and Image Processing, pages 194–198, 1999.
[19] Stefan Sommer, François Lauze, Søren Hauberg, and Mads Nielsen. Manifold valued statistics, exact principal geodesic analysis and the effect of linear approximations. In European Conference on Computer Vision (ECCV), pages 43–56, 2010.
[20] Kathleen M. Robinette, Hein Daanen, and Eric Paquet. The CAESAR project: a 3-D surface anthropometry survey. In 3-D Digital Imaging and Modeling, pages 380–386, 1999.
[21] Dragomir Anguelov, Praveen Srinivasan, Daphne Koller, Sebastian Thrun, Jim Rodgers, and James Davis. SCAPE: shape completion and animation of people. ACM Transactions on Graphics, 24(3):408–416, 2005.
[22] Oren Freifeld and Michael J. Black. Lie bodies: A manifold representation of 3D human shape. In A. Fitzgibbon et al., editors, European Conference on Computer Vision (ECCV), Part I, LNCS 7572, pages 1–14. Springer-Verlag, October 2012.
[23] Yi Hong, Quannan Li, Jiayan Jiang, and Zhuowen Tu. Learning a mixture of sparse distance metrics for classification and dimensionality reduction. In International Conference on Computer Vision (ICCV), pages 906–913, 2011.
3,910
454
Gradient Descent: Second-Order Momentum and Saturating Error

Barak Pearlmutter
Department of Psychology, P.O. Box 11A Yale Station, New Haven, CT 06520-7447
[email protected]

Abstract

Batch gradient descent, Δw(t) = −η dE/dw(t), converges to a minimum of quadratic form with a time constant no better than ¼ λmax/λmin, where λmin and λmax are the minimum and maximum eigenvalues of the Hessian matrix of E with respect to w. It was recently shown that adding a momentum term, Δw(t) = −η dE/dw(t) + α Δw(t−1), improves this to approximately ¼ √(λmax/λmin), although only in the batch case. Here we show that second-order momentum, Δw(t) = −η dE/dw(t) + α Δw(t−1) + β Δw(t−2), can lower this no further. We then regard gradient descent with momentum as a dynamic system and explore a nonquadratic error surface, showing that saturation of the error accounts for a variety of effects observed in simulations and justifies some popular heuristics.

1 INTRODUCTION

Gradient descent is the bread-and-butter optimization technique in neural networks. Some people build special-purpose hardware to accelerate gradient descent optimization of backpropagation networks. Understanding the dynamics of gradient descent on such surfaces is therefore of great practical value. Here we briefly review the known results on the convergence of batch gradient descent; show that second-order momentum does not give any speedup; simulate a real network and observe some effects not predicted by theory; and account for these effects by analyzing gradient descent with momentum on a saturating error surface.

1.1 SIMPLE GRADIENT DESCENT

First, let us review the bounds on the convergence rate of simple gradient descent without momentum to a minimum of quadratic form [11, 1]. Let w* be the minimum of E, the error, H = d²E/dw²(w*), and λᵢ, vᵢ be the eigenvalues and eigenvectors of H. The weight change equation
$$\Delta w = -\eta \frac{dE}{dw} \qquad (1)$$
(where Δf(t) ≡ f(t+1) − f(t)) is limited by
$$0 < \eta < 2/\lambda_{\max}. \qquad (2)$$
We can substitute η = 2/λmax into the weight change equation to obtain convergence that tightly bounds any achievable in practice, getting a time constant of convergence of −1/log(1 − 2s) = (2s)⁻¹ + O(1), or
$$E - E^* \gtrsim \exp(-4st), \qquad (3)$$
where we use s = λmin/λmax for the inverse eigenvalue spread of H, and ≳ is read "asymptotically converges to zero more slowly than."

1.2 FIRST-ORDER MOMENTUM

Sometimes a momentum term is used, the weight update (1) being modified to incorporate a momentum term α < 1 [5, equation 16],
$$\Delta w(t) = -\eta \frac{dE}{dw}(t) + \alpha\, \Delta w(t-1). \qquad (4)$$
The Momentum LMS algorithm, MLMS, has been analyzed by Shynk and Roy [6], who have shown that the momentum term cannot speed convergence in the online, or stochastic gradient, case. In the batch case, which we consider here, Tugay and Tanik [9] have shown that momentum is stable when
$$\alpha < 1 \quad \text{and} \quad 0 < \eta < 2(\alpha + 1)/\lambda_{\max}, \qquad (5)$$
which speeds convergence to
$$E - E^* \gtrsim \exp\!\left(-\left(4\sqrt{s} + O(s)\right) t\right) \qquad (6)$$
by
$$\alpha^* = \frac{2 - 4\sqrt{s(1-s)}}{(1-2s)^2} - 1 = 1 - 4\sqrt{s} + O(s), \qquad \eta^* = 2(\alpha^* + 1)/\lambda_{\max}. \qquad (7)$$

2 SECOND-ORDER MOMENTUM

The time constant of asymptotic convergence can be changed from O(λmax/λmin) to O(√(λmax/λmin)) by going from a first-order system, (1), to a second-order system, (4). Making a physical analogy, the first-order system corresponds to a circuit with a resistor, and the second-order system adds a capacitor to make an RC oscillator.
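Before asking about third-order terms, the bounds (1)–(7) are easy to check numerically. The sketch below is our own Python/NumPy illustration, not code from the paper; it runs batch descent on a decoupled quadratic, and the momentum tuning used is Polyak's classical heavy-ball setting, which agrees with (7) to leading order in √s:

```python
import numpy as np

def descend(lams, eta, alpha=0.0, beta=0.0, T=400):
    """Iterate dw(t) = -eta*dE/dw + alpha*dw(t-1) + beta*dw(t-2),
    i.e. eqs. (1)/(4)/(8), on the decoupled quadratic
    E = 0.5 * sum_i lam_i * w_i^2, starting from w = 1."""
    w = np.ones_like(lams)
    dw1 = np.zeros_like(w)   # dw(t-1)
    dw2 = np.zeros_like(w)   # dw(t-2)
    for _ in range(T):
        dw = -eta * lams * w + alpha * dw1 + beta * dw2
        w, dw1, dw2 = w + dw, dw, dw1
    return 0.5 * np.sum(lams * w ** 2)          # E - E*

lams = np.array([1.0, 0.01])                    # lam_max = 1, s = 0.01
r = np.sqrt(lams.min() / lams.max())            # sqrt(s)
alpha = ((1 - r) / (1 + r)) ** 2                # = 1 - 4*sqrt(s) + O(s),
eta = 4 / (1 + r) ** 2 / lams.max()             # matching (7) to leading order
print(descend(lams, eta=2 / (lams.max() + lams.min())))  # plain descent
print(descend(lams, eta=eta, alpha=alpha))      # momentum: far smaller
# (the momentum result may underflow to 0.0, which is the point)
```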
[Figure 1: Second-order momentum converges if ηλmax is less than the value plotted as "eta," shown as a function of α and β. The region of convergence is bounded by four smooth surfaces: three planes and one hyperbola. One of the planes is parallel to the η axis, even though the sampling of the plotting program makes it appear slightly sloped. Another is at η = 0 and thus hidden. The peak is at 4.]

One might ask whether further gains can be had by going to a third-order system,
$$\Delta w(t) = -\eta \frac{dE}{dw} + \alpha\, \Delta w(t-1) + \beta\, \Delta w(t-2). \qquad (8)$$
For convergence, all the eigenvalues of the matrix
$$M_i = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -\beta & \beta - \alpha & 1 - \eta\lambda_i + \alpha \end{pmatrix}$$
in
$$\left(c_i(t-1)\ \ c_i(t)\ \ c_i(t+1)\right)^T = M_i \left(c_i(t-2)\ \ c_i(t-1)\ \ c_i(t)\right)^T$$
must have absolute value less than or equal to 1, which occurs precisely when
$$-1 \le \beta \le 1, \qquad \eta\lambda_i/2 - (1-\beta) < \alpha \le \beta\eta\lambda_i/2 + (1-\beta), \qquad 0 < \eta \le 4(\beta+1)/\lambda_i.$$
For β ≤ 0 this is most restrictive for λmax, but for β > 0, λmin also comes into play. Taking the limit as λmin → 0, this gives convergence conditions for gradient descent with second-order momentum of
$$\text{when } \alpha \le 3\beta + 1: \quad 0 < \eta \le \frac{2}{\lambda_{\max}}(1 + \alpha - \beta), \qquad \text{when } \alpha \ge 3\beta + 1: \quad 0 < \eta \le \frac{4(\beta + 1)}{\lambda_{\max}}, \qquad (9)$$
a region shown in Figure 1. Fastest convergence for λmin within this region lies along the ridge α = 3β + 1, η = 2(1 + α − β)/λmax. Unfortunately, although convergence is slightly faster than with first-order momentum, the relative advantage tends to zero as s → 0, giving negligible speedup when λmax ≫ λmin.
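The convergence region is also easy to probe numerically: for given (η, α, β, λᵢ), stability is just the spectral radius of the companion matrix Mᵢ above. A small sketch (ours, in Python/NumPy, not from the paper) that traces one slice of Figure 1:

```python
import numpy as np

def stable(eta, alpha, beta, lam, tol=1e-9):
    """True iff all eigenvalues of the companion matrix M_i have
    modulus <= 1, i.e. second-order momentum converges for eigenvalue lam."""
    M = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-beta, beta - alpha, 1.0 - eta * lam + alpha]])
    return np.abs(np.linalg.eigvals(M)).max() <= 1.0 + tol

# Trace one slice of Figure 1: the largest stable eta for lam = lam_max = 1.
alpha, beta = 0.4, 0.1
etas = np.linspace(0.01, 4.0, 1000)
print(max(e for e in etas if stable(e, alpha, beta, 1.0)))
# Since alpha <= 3*beta + 1 here, the boundary should sit near
# 2*(1 + alpha - beta) = 2.6, as in (9).
```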
4 GRADIENT DESCENT WITH SATURATING ERROR The analysis of the sections above may be objected to on the grounds that it assumes the minimum to have quadratic form and then performs an analysis in the neighborhood of that minimum, which is equivalent to analyzing a linear unit. Surely our nonlinear backpropagation networks are richer than that. Gradient Descent: Second Order Momentum and Saturating Error 0.1195 ! 0.690~~11~---------__---:l 0.611:1 O . 680L......~~-----.J'---'-~~------.l~~~-..J.~~~.......... 350 400 450 epoch 500 Figure 2: Error plotted as a function of time for two settings of the learning parameters, both determined empirically: the one that minimized the error the most, and the one with a = 0 that minimized the error the most. There exists a less aggressive setting of the parameters that converges nearly as fast as the quickly converging curve but does not oscillate. A clue that this might be the case was shown in figure 3. The region where the system converges to the minimum is of the expected shape, but rather than simply diverging outside of this region, as would a linear system, more complex phenomena are observed, in particular a sloping region . Acting on the hypothesis that this region is caused by Amax being maximal at the minimum, and gradually decreasing away from it (it must decrease to zero in the limit, since the hidden units saturate and the squared error is thus bounded) we decided to perform a dynamic systems analysis of the convergence of gradient descent on a one dimensional nonquadratic error surface. We chose 1 E=l-l +w 2 (11) which is shown in figure 4, as this results in a bounded E. Letting f( ) = W W _ E'( ) _ w(l - 2T} + 2w 2 + w 4 ) T} W (1 + w 2 )2 (12) be our transfer function, a local analysis at the minimum gives Amax = E"(O) = 2 which limits convergence to T} < 1. Since the gradient towards the minimum is always less than predicted by a second-order series at the minimum, such T} are in fact globally convergent. As T} passes 1 the fixedpoint bifurcates into the limit cycle w = ?j.,;ry- 1, = (13) which remains stable until T} --+- 16/9 1.77777 ... , at which point the single symmetric binary limit cycle splits into two asymmetric limit cycles, each still of period two. These in turn remain stable until T} --+- 2.0732261475-, at which point repeated period doubling to chaos occurs. This progression is shown in figure 7. 891 892 Pearlmutter "i 0.40 !! ~ 0.30 'c 5 0.20 .!l -; 0.10 O.OOC=:==""""",_--=~ 0.0 0.2 Q 0.4 0.6 0.8 (mom.... lum) ' .0 Figure 3: (Left) the error at epoch 550 as a function of the learning regime. Shading is based on the height, but most of the vertical scale is devoted to nonconvergent networks in order to show the mysterious non convergent sloping region. The minimum, corresponding to the most darkly shaded point, is on the plateau of convergence at the location predicted by the theory. (Center) the region in which the network is convergent, as measured by a strictly monotonically decreasing error. Learning parameter settings for which the error was strictly decreasing have a low value while those for which it was not have a high one. The lip at 7] 0 has a value of 0, given where the error did not change. The rim at a = 1 corresponds to damped oscillation caused by 7] > 4aA/(1 - a)2. (Right) contour plot of the convergent plateau shows that the regions of equal error have linear boundaries in the nonoscillatory region in the center, as predicted by theory. = As usual in a bifurcation, w rises sharply as 7] passes 1. 
But recall that figure 3, with the smooth sloping region, plotted the error E rather than the weights. The analogous graph here is shown in figure 6, where we see the same qualitative feature of a smooth gradual rise, which first begins to jitter as the limit cycle becomes asymmetric, and then becomes more and more jagged as the period doubles its way to chaos. From figure 7 it is clear that for higher η the peak error of the attractor will continue to rise gently until it saturates.

Next, we add momentum to the system. This simple one-dimensional system duplicates the phenomena we found earlier, as can be seen by comparing figure 3 with figure 5. We see that momentum delays the bifurcation of the fixed point attractor at the minimum by the amount predicted by (5), namely until η approaches 1 + α. At this point the fixed point bifurcates into a symmetric limit cycle of period 2 at

w = ±√(√(η/(1 + α)) − 1),    (14)

a formula of which (13) is a special case. This limit cycle is stable for

η < (16/9)(1 + α),    (15)

but as η reaches this limit, which happens at the same time that w reaches ±1/√3 (the inflection point of E, where E = 1/4), the limit cycle becomes unstable. However, for α near 1 the cycle breaks down more quickly in practice, as it becomes haloed by more complex attractors which make it progressively less likely that a sequence of iterations will actually converge to the limit cycle in question. Both boundaries of this strip, η = 1 + α and η = (16/9)(1 + α), are visible in figure 5, particularly since in the region between them E obeys

E = 1 − √((1 + α)/η).    (16)

Figure 4: A one-dimensional tulip-shaped nonlinear error surface E = 1 − (1 + w²)⁻¹.

Figure 5: E after 50 iterations from a starting point of 0.05, as a function of η and α.

Figure 6: E as a function of η with α = 0. When convergent, the final value is shown; otherwise E after 100 iterations from a starting point of w = 1.0. This is a more detailed graph of a slice of figure 5 at α = 0.

Figure 7: The attractor as a function of η is shown, with the progression from a single attractor at the minimum of E to a limit cycle of period two, which bifurcates and then doubles to chaos; α = 0 (left) and α = 0.8 (right). For the numerical-simulation portions of the graphs, iterations 100 through 150 from a starting point of w = 1 or w = 0.05 are shown.

The bifurcation and subsequent transition to chaos with momentum is shown for α = 0.8 in figure 7. This α is high enough that the limit cycle fails to be reached by the iteration procedure long before it actually becomes unstable. Note that this diagram was made with w started near the minimum. If it had been started far from it, the system would usually not reach the attractor at w = 0 but instead enter a halo attractor. This accounts for the policy of backpropagation experts, who gradually raise momentum as the optimization proceeds.
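The same experiment with first-order momentum added, again as a hedged sketch with parameter values of our own choosing: it checks that the period-two cycle appears past η = 1 + α and matches the amplitude predicted by (14).

import numpy as np

def saturating_momentum_orbit(eta, alpha, w0=0.05, burn=5000, keep=4):
    """Iterate w(t+1) = w(t) - eta*E'(w(t)) + alpha*(w(t) - w(t-1)) for
    E = 1 - 1/(1 + w**2), and return a few post-transient iterates."""
    dE = lambda w: 2 * w / (1 + w**2) ** 2
    w_prev, w = w0, w0
    orbit = []
    for t in range(burn + keep):
        w, w_prev = w - eta * dE(w) + alpha * (w - w_prev), w
        if t >= burn:
            orbit.append(w)
    return orbit

eta, alpha = 1.5, 0.2                    # between 1+alpha and (16/9)(1+alpha)
print(saturating_momentum_orbit(eta, alpha))
print(np.sqrt(np.sqrt(eta / (1 + alpha)) - 1))   # predicted |w| from (14)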
5 CONCLUSIONS

The convergence bounds derived assume that the learning parameters are set optimally. Finding these optimal values in practice is beyond the scope of this paper, but some techniques for achieving nearly optimal learning rates are available [4, 10, 8, 7, 3]. Adjusting the momentum feels easier to practitioners than adjusting the learning rate, as too high a value leads to small oscillations rather than divergence, and techniques from control theory can be applied to the problem [2]. However, because error surfaces in practice saturate, techniques for adjusting the learning parameters automatically as learning proceeds cannot be derived under the quadratic-minimum assumption, but must take into account the bifurcation and limit cycle and the sloping region of the error, or they may mistake this regime of stable error for convergence, leading to premature termination.

References

[1] S. Thomas Alexander. Adaptive Signal Processing. Springer-Verlag, 1986.
[2] H. S. Dabis and T. J. Moir. Least mean squares as a control system. International Journal of Control, 54(2):321-335, 1991.
[3] Yan Fang and Terrence J. Sejnowski. Faster learning for dynamic recurrent backpropagation. Neural Computation, 2(3):270-273, 1990.
[4] Robert A. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Networks, 1(4):295-307, 1988.
[5] David E. Rumelhart, Geoffrey E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart, J. L. McClelland, and the PDP research group, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press, 1986.
[6] J. J. Shynk and S. Roy. The LMS algorithm with momentum updating. In Proceedings of the IEEE International Symposium on Circuits and Systems, pages 2651-2654, June 6-9 1988.
[7] F. M. Silva and L. B. Almeida. Acceleration techniques for the backpropagation algorithm. In L. B. Almeida and C. J. Wellekens, editors, Proceedings of the 1990 EURASIP Workshop on Neural Networks. Springer-Verlag, February 1990. (Lecture Notes in Computer Science series).
[8] Tom Tollenaere. SuperSAB: Fast adaptive back propagation with good scaling properties. Neural Networks, 3(5):561-573, 1990.
[9] Mehmet Ali Tugay and Yalçin Tanik. Properties of the momentum LMS algorithm. Signal Processing, 18(2):117-127, October 1989.
[10] T. P. Vogl, J. K. Mangis, A. K. Zigler, W. T. Zink, and D. L. Alkon. Accelerating the convergence of the back-propagation method. Biological Cybernetics, 59:257-263, September 1988.
[11] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson Jr. Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proceedings of the IEEE, 64:1151-1162, 1976.
Sketch-Based Linear Value Function Approximation Marc G. Bellemare University of Alberta Joel Veness University of Alberta Michael Bowling University of Alberta [email protected] [email protected] [email protected] Abstract Hashing is a common method to reduce large, potentially infinite feature vectors to a fixed-size table. In reinforcement learning, hashing is often used in conjunction with tile coding to represent states in continuous spaces. Hashing is also a promising approach to value function approximation in large discrete domains such as Go and Hearts, where feature vectors can be constructed by exhaustively combining a set of atomic features. Unfortunately, the typical use of hashing in value function approximation results in biased value estimates due to the possibility of collisions. Recent work in data stream summaries has led to the development of the tug-of-war sketch, an unbiased estimator for approximating inner products. Our work investigates the application of this new data structure to linear value function approximation. Although in the reinforcement learning setting the use of the tug-of-war sketch leads to biased value estimates, we show that this bias can be orders of magnitude less than that of standard hashing. We provide empirical results on two RL benchmark domains and fifty-five Atari 2600 games to highlight the superior learning performance obtained when using tug-of-war hashing. 1 Introduction Recent value-based reinforcement learning applications have shown the benefit of exhaustively generating features, both in discrete and continuous state domains. In discrete domains, exhaustive feature generation combines atomic features into logical predicates. In the game of Go, Silver et al. [19] showed that good features could be generated by enumerating all stone patterns up to a certain size. Sturtevant and White [21] similarly obtained promising reinforcement learning results using a feature generation method that enumerated all 2, 3 and 4-wise combinations of a set of 60 atomic features. In continuous-state RL domains, tile coding [23] is a canonical example of exhaustive feature generation; tile coding has been successfully applied to benchmark domains [22], to learn to play keepaway soccer [20], in multiagent robot learning [4], to train bipedal robots to walk [18, 24] and to learn mixed strategies in the game of Goofspiel [3]. Exhaustive feature generation, however, can result in feature vectors that are too large to be represented in memory, especially when applied to continuous spaces. Although such feature vectors are too large to be represented explicitly, in many domains of interest they are also sparse. For example, most stone patterns are absent from any particular Go position. Given a fixed memory budget, the standard approach is to hash features into a fixed-size table, with collisions implicitly handled by the learning algorithm; all but one of the applications discussed above use some form of hashing. With respect to its typical use for linear value function approximation, hashing lacks theoretical guarantees. In order to improve on the basic hashing idea, we turn to sketches: state-of-the-art methods for approximately storing large vectors [6]. Our goal is to show that one such sketch, the tug-of-war sketch [7], is particularly well-suited for linear value function approximation. Our work is related to recent developments on the use of random projections in reinforcement learning [11] and least-squares regression [16, 10]. 
Hashing, however, possesses a computational advantage over traditional random projections: each feature is hashed exactly once. In comparison, even sparse random projection methods [1, 14] carry a per-feature cost that increases with the size of the reduced space. Tug-of-war hashing seeks to reconcile the computational efficiency that makes hashing a practical method for linear value function approximation on large feature spaces, while preserving the theoretical appeal of random projection methods. A natural concern when using hashing in RL is that hash collisions irremediably degrade learning. In this paper we argue that tug-of-war hashing addresses this concern by providing us with a low-error approximation of large feature vectors at a fraction of the memory cost. To quote Sutton and Barto [23], "Hashing frees us from the curse of dimensionality in the sense that memory requirements need not be exponential in the number of dimensions, but need merely match the real demands of the task."

2 Background

We consider the reinforcement learning framework of Sutton and Barto [23]. An MDP M is a tuple ⟨S, A, P, R, γ⟩, where S is the set of states, A is the set of actions, P : S × A × S → [0, 1] is the transition probability function, R : S × A → R is the reward function and γ ∈ [0, 1] is the discount factor. At time step t the agent observes state s_t ∈ S, selects an action a_t ∈ A and receives a reward r_t := R(s_t, a_t). The agent then observes the new state s_{t+1} distributed according to P(·|s_t, a_t). From state s_t, the agent's goal is to maximize the expected discounted sum of future rewards E[Σ_{i=0}^∞ γ^i R(s_{t+i}, a_{t+i})]. A typical approach is to learn state-action values Q^π(s, a), where the stationary policy π : S × A → [0, 1] represents the agent's behaviour. Q^π(s, a) is recursively defined as:

Q^π(s, a) := R(s, a) + γ E_{s′∼P(·|s,a)} [ Σ_{a′∈A} π(a′|s′) Q^π(s′, a′) ]    (1)

A special case of this equation is the optimal value function Q*(s, a) := R(s, a) + γ E_{s′}[max_{a′} Q*(s′, a′)]. The optimal value function corresponds to the value under an optimal policy π*. For a fixed π, the SARSA(λ) algorithm [23] learns Q^π from sample transitions (s_t, a_t, r_t, s_{t+1}, a_{t+1}). In domains where S is large (or infinite), learning Q^π exactly is impractical and one must rely on value function approximation. A common value function approximation scheme in reinforcement learning is linear approximation. Given φ : S × A → R^n mapping state-action pairs to feature vectors, we represent Q^π with the linear approximation Q_t(s, a) := θ_t · φ(s, a), where θ_t ∈ R^n is a weight vector. The gradient descent SARSA(λ) update is defined as:

δ_t ← r_t + γ θ_t · φ(s_{t+1}, a_{t+1}) − θ_t · φ(s_t, a_t)
e_t ← γλ e_{t−1} + φ(s_t, a_t)
θ_{t+1} ← θ_t + α δ_t e_t,    (2)

where α ∈ [0, 1] is a step-size parameter and λ ∈ [0, 1] controls the degree to which changes in the value function are propagated back in time. Throughout the rest of this paper Q^π(s, a) refers to the exact value function computed from Equation 1 and we use Q_t(s, a) to refer to the linear approximation θ_t · φ(s, a); "gradient descent SARSA(λ) with linear approximation" is always implied when referring to SARSA(λ). We call φ(s, a) the full feature vector and Q_t(s, a) the full-vector value function. Asymptotically, SARSA(λ) is guaranteed to find the best solution within the span of φ(s, a), up to a multiplicative constant that depends on λ [25].
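To make update (2) concrete, here is a minimal sketch in Python/NumPy; the function name and the toy one-hot features are our own, standing in for whatever φ the domain provides.

import numpy as np

def sarsa_lambda_step(theta, e, phi_t, phi_tp1, r, gamma, lam, alpha):
    """One gradient-descent SARSA(lambda) update with linear features,
    following equation (2): TD error, eligibility trace, weight update."""
    delta = r + gamma * theta.dot(phi_tp1) - theta.dot(phi_t)
    e = gamma * lam * e + phi_t
    theta = theta + alpha * delta * e
    return theta, e

n = 8
theta, e = np.zeros(n), np.zeros(n)
phi_t, phi_tp1 = np.eye(n)[0], np.eye(n)[1]      # toy one-hot features
theta, e = sarsa_lambda_step(theta, e, phi_t, phi_tp1, r=1.0,
                             gamma=0.99, lam=0.9, alpha=0.1)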
If we let Φ ∈ R^{|S||A|×n} denote the matrix of full feature vectors φ(s, a), and let ω : S × A → [0, 1] denote the steady-state distribution over state-action pairs induced by π and P then, under mild assumptions, we can guarantee the existence and uniqueness of ω. We denote by ⟨·, ·⟩_ω the inner product induced by ω, i.e. ⟨x, y⟩_ω := x^T D y, where x, y ∈ R^{|S||A|} and D ∈ R^{|S||A|×|S||A|} is a diagonal matrix with entries ω(s, a). The norm ∥·∥_ω is defined as √⟨·, ·⟩_ω. We assume the following: 1) S and A are finite, 2) the Markov chain induced by π and P is irreducible and aperiodic, and 3) Φ has full rank. The following theorem bounds the error of SARSA(λ):

Theorem 1 (Restated from Tsitsiklis and Van Roy [25]). Let M = ⟨S, A, P, R, γ⟩ be an MDP and π : S × A → [0, 1] be a policy. Denote by Φ ∈ R^{|S||A|×n} the matrix of full feature vectors and by ω the stationary distribution on (S, A) induced by π and P. Under assumptions 1-3), SARSA(λ) converges to a unique θ* ∈ R^n with probability one and

∥Φθ* − Q^π∥_ω ≤ (1 − γλ)/(1 − γ) ∥ΠQ^π − Q^π∥_ω,

where Q^π ∈ R^{|S||A|} is a vector representing the exact solution to Equation 1 and Π := Φ(Φ^T D Φ)^{−1} Φ^T D is the projection operator.

Because Π is the projection operator for ω, for any θ we have ∥ΠQ^π − Q^π∥_ω ≤ ∥Φθ − Q^π∥_ω; Theorem 1 thus implies that SARSA(1) converges to θ* = arg min_θ ∥Φθ − Q^π∥_ω.

2.1 Hashing in Reinforcement Learning

As discussed previously, it is often impractical to store the full weight vector θ_t in memory. A typical example of this is tile coding on continuous-state domains [22], which generates a number of features exponential in the dimensionality of the state space. In such cases, hashing can effectively be used to approximate Q^π(s, a) using a fixed memory budget. Let h be a hash function h : {1, …, n} → {1, …, m}, mapping full feature vector indices into hash table indices, where m ≪ n is the hash table size. We define standard hashing features as the feature map φ̂(s, a) whose ith component is defined as:

φ̂_i(s, a) := Σ_{j=1}^n I[h(j)=i] φ_j(s, a),    (3)

where φ_j(s, a) denotes the jth component of φ(s, a) and I[x] denotes the indicator function. We assume that our hash function h is drawn from a universal family: for any i, j ∈ {1, …, n}, i ≠ j, Pr(h(i) = h(j)) ≤ 1/m. We define the standard hashing value function Q̂_t(s, a) := θ̂_t · φ̂(s, a), where θ̂_t ∈ R^m is a weight vector, and φ̂(s, a) is the hashed vector. Because of hashing collisions, the standard hashing value function is a biased estimator of Q_t(s, a), i.e., in general E_h[Q̂_t(s, a)] ≠ Q_t(s, a). For example, consider the extreme case where m = 1: all features share the same weight. We return to the issue of the bias introduced by standard hashing in Section 4.1.

2.2 Tug-of-War Hashing

The tug-of-war sketch, also known as the Fast-AGMS, was recently introduced as a powerful method for approximating inner products of large vectors [7]. The name "sketch" refers to the data structure's function as a summary of a stream of data. In the canonical sketch setting, we summarize a count vector θ ∈ R^n using a sketch vector θ̃ ∈ R^m. At each time step a vector φ_t ∈ R^n is received. The purpose of the sketch vector is to approximate the count vector θ_t := Σ_{i=0}^{t−1} φ_i. Given two hash functions, h and ξ : {1, …, n} → {−1, 1}, φ_t is mapped to a vector φ̃_t whose ith component is

φ̃_{t,i} := Σ_{j=1}^n I[h(j)=i] φ_{t,j} ξ(j).    (4)

The tug-of-war sketch vector is then updated as θ̃_{t+1} ← θ̃_t + φ̃_t.
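A minimal sketch of the primitive in equations (3)-(4): here fully random lookup tables stand in for the universal and four-wise independent hash families (a stronger assumption than the text requires), and the final line illustrates the inner-product estimate, which is unbiased on average over hash draws.

import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 64
h = rng.integers(0, m, size=n)        # stand-in for a universal hash h
xi = rng.choice([-1, 1], size=n)      # stand-in for the sign hash xi

def tug_of_war(phi):
    """Project phi in R^n down to R^m as in equation (4)."""
    sketch = np.zeros(m)
    np.add.at(sketch, h, phi * xi)
    return sketch

x, y = rng.normal(size=n), rng.normal(size=n)
print(x.dot(y), tug_of_war(x).dot(tug_of_war(y)))   # close on average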
In addition to h being drawn from a universal family of hash functions, ξ is drawn from a four-wise independent family of hash functions: for all sets of four unique indices {i1, i2, i3, i4}, Pr_ξ(ξ(i1) = k1, ξ(i2) = k2, ξ(i3) = k3, ξ(i4) = k4) = 1/16 with k1, …, k4 ∈ {−1, 1}. For an arbitrary θ′ ∈ R^n and its corresponding tug-of-war vector θ̃′ ∈ R^m, E_{h,ξ}[θ̃_t · θ̃′] = θ_t · θ′: the tug-of-war sketch produces unbiased estimates of inner products [7]. This unbiasedness property can be derived as follows. First let θ̃_t = Σ_{i=0}^{t−1} φ̃_i. Then θ̃_t · θ̃′ = Σ_{i=0}^{t−1} φ̃_i · θ̃′ and

E_{h,ξ}[φ̃_i · θ̃′] = E_{h,ξ}[ Σ_{j1=1}^n Σ_{j2=1}^n I[h(j1)=h(j2)] φ_{i,j1} θ′_{j2} ξ(j1) ξ(j2) ],

where E_ξ[ξ(j1) ξ(j2)] = 1 if j1 = j2 and 0 otherwise (by four-wise independence). The result follows by noting that I[h(j1)=h(j2)] is independent from ξ(j1)ξ(j2) given j1, j2.¹

¹ While it may seem odd to randomly select your hash function, this can equivalently be thought of as sampling an indexing assignment for the MDP's features. While a particular hash function may be well- (or poorly-) suited for a particular MDP, it is hard to imagine how this could be known a priori. By considering a randomly selected hash function (or random permutation of the features), we are simulating the uncertainty of using a particular hash function on a never before encountered MDP.

3 Tug-of-War with Linear Value Function Approximation

We now extend the tug-of-war sketch to the reinforcement learning setting by defining the tug-of-war hashing features as φ̃ : S × A → R^m with φ̃_i(s, a) := Σ_{j=1}^n I[h(j)=i] φ_j(s, a) ξ(j). The SARSA(λ) update becomes:

δ̃_t ← r_t + γ θ̃_t · φ̃(s_{t+1}, a_{t+1}) − θ̃_t · φ̃(s_t, a_t)
ẽ_t ← γλ ẽ_{t−1} + φ̃(s_t, a_t)
θ̃_{t+1} ← θ̃_t + α δ̃_t ẽ_t.    (5)

We also define the tug-of-war value function Q̃_t(s, a) := θ̃_t · φ̃(s, a) with θ̃_t ∈ R^m and refer to φ̃(s, a) as the tug-of-war vector. A code sketch of this update appears below, after the discussion of Lemma 1.

3.1 Value Function Approximation with Tug-of-War Hashing

Intuitively, one might hope that the unbiasedness of the tug-of-war sketch for approximating inner products carries over to the case of linear value function approximation. Unfortunately, this is not the case. However, it is still possible to bound the error of the tug-of-war value function learned with SARSA(1) in terms of the full-vector value function. Our bound relies on interpreting tug-of-war hashing as a special kind of Johnson-Lindenstrauss transform [8]. We define an ∞-universal family of hash functions H such that for any set of indices i1, i2, …, il, Pr(h(i1) = k1, …, h(il) = kl) ≤ 1/|C|^l, where C ⊂ N and h ∈ H : {1, …, n} → C.

Lemma 1 (Dasgupta et al. [8], Theorem 2). Let h : {1, …, n} → {1, …, m} and ξ : {1, …, n} → {−1, 1} be two independent hash functions chosen uniformly at random from ∞-universal families, and let H ∈ {0, ±1}^{m×n} be a matrix with entries H_ij = I[h(j)=i] ξ(j). Let ε < 1, δ < 1/10, m = (12/ε²) log(1/δ) and c = (16/ε) log(1/δ) log(m/δ). For any given vector x ∈ R^n such that ∥x∥_∞ ≤ 1/√c, with probability 1 − 3δ, H satisfies the following property:

(1 − ε) ∥x∥₂² ≤ ∥Hx∥₂² ≤ (1 + ε) ∥x∥₂².

Lemma 1 states that, under certain conditions on the input vector x, tug-of-war hashing approximately preserves the norm of x. When δ and ε are constant, the requirement on ∥x∥_∞ can be waived by applying Lemma 1 to the normalized vector u = x/(∥x∥₂ √c). A clear discussion on hashing as a Johnson-Lindenstrauss transform can be found in the work of Kane and Nelson [13], who also improve Lemma 1 and extend it to the case where the family of hash functions is k-universal rather than ∞-universal.
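Returning to the hashed features and update (5), here is a sketch of a sparse implementation for the SARSA(0) special case (λ = 0, so the trace equals φ̃_t). The code is our own; `active` lists the indices of the nonzero binary features, and random lookup tables again stand in for the hash functions.

import numpy as np

rng = np.random.default_rng(1)
n, m = 1_000_000, 5_000
h = rng.integers(0, m, size=n)         # stand-ins for the two hash functions
xi = rng.choice([-1, 1], size=n)

def hashed_features(active):
    """Tug-of-war features for a sparse binary phi(s,a), given by its
    active indices: phi_tilde_i = sum_j I[h(j)=i] * phi_j * xi(j)."""
    phi = np.zeros(m)
    np.add.at(phi, h[active], xi[active].astype(float))
    return phi

def sarsa0_step(theta, active_t, active_tp1, r, gamma, alpha):
    """SARSA(0) special case of update (5)."""
    phi_t, phi_tp1 = hashed_features(active_t), hashed_features(active_tp1)
    delta = r + gamma * theta.dot(phi_tp1) - theta.dot(phi_t)
    return theta + alpha * delta * phi_t

theta = np.zeros(m)
theta = sarsa0_step(theta, rng.integers(0, n, 50), rng.integers(0, n, 50),
                    r=1.0, gamma=0.999, alpha=0.5)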
Lemma 2 (Based on Maillard and Munos [16], Proposition 1). Let x_1, …, x_K and y be vectors in R^n. Let H ∈ {0, ±1}^{m×n}, ε, δ and m be defined as in Lemma 1. With probability at least 1 − 6Kδ, for all k ∈ {1, …, K},

x_k · y − ε ∥x_k∥₂ ∥y∥₂ ≤ Hx_k · Hy ≤ x_k · y + ε ∥x_k∥₂ ∥y∥₂.

Proof (Sketch). The proof follows the steps of Maillard and Munos [16]. Given two unit vectors u, v ∈ R^n, we can relate (Hu) · (Hv) to ∥Hu + Hv∥₂² and ∥Hu − Hv∥₂² using the parallelogram law. We then apply Lemma 1 to bound both sides of each squared norm and substitute x_k for u and y for v to bound Hx_k · Hy. Applying the union bound yields the desired statement.

We are now in a position to bound the asymptotic error of SARSA(1) with tug-of-war hashing. Given hash functions h and ξ defined as per Lemma 1, we denote by H ∈ R^{m×n} the matrix whose entries are H_ij := I[h(j)=i] ξ(j), such that φ̃(s, a) = Hφ(s, a). We also denote by Φ̃ := ΦH^T the matrix of tug-of-war vectors. We again assume that 1) S and A are finite, that 2) π and P induce an irreducible, aperiodic Markov chain and that 3) Φ has full rank. For simplicity of argument, we also assume that 4) Φ̃ := ΦH^T has full rank; when Φ̃ is rank-deficient, SARSA(1) converges to a set of solutions θ̃ satisfying the bound of Theorem 2, rather than to a unique θ̃*.

Theorem 2. Let M = ⟨S, A, P, R, γ⟩ be an MDP and π : S × A → [0, 1] be a policy. Let Φ ∈ R^{|S||A|×n} be the matrix of full feature vectors and Φ̃ ∈ R^{|S||A|×m} be the matrix of tug-of-war vectors. Denote by ω the stationary distribution on (S, A) induced by π and P. Let ε < 1, δ < 1, δ′ = δ/(6|S||A|) and m ≥ (12/ε²) log(1/δ′). Under assumptions 1-4), gradient-descent SARSA(1) with tug-of-war hashing converges to a unique θ̃* ∈ R^m and with probability at least 1 − δ,

∥Φ̃θ̃* − Q^π∥_ω ≤ ∥Φθ* − Q^π∥_ω + ε ∥θ*∥₂ sup_{s∈S,a∈A} ∥φ(s, a)∥₂,

where Q^π is the exact solution to Equation 1 and θ* = arg min_θ ∥Φθ − Q^π∥_ω.

Proof. First note that Theorem 1 implies the convergence of SARSA(1) with tug-of-war hashing to a unique solution, which we denote θ̃*. We first apply Lemma 2 to the set {φ(s, a) : (s, a) ∈ S × A} and θ*; note that we can safely assume |S||A| > 1, and therefore δ′ < 1/10. By our choice of m, for all (s, a) ∈ S × A, |Hφ(s, a) · Hθ* − φ(s, a) · θ*| ≤ ε ∥φ(s, a)∥₂ ∥θ*∥₂ with probability at least 1 − 6|S||A|δ′ = 1 − δ. As previously noted, SARSA(1) converges to θ̃* = arg min_θ̃ ∥Φ̃θ̃ − Q^π∥_ω; compared to θ̃*, the solution θ̃_H := Hθ* is thus an equal or worse approximation to Q^π. It follows that

∥Φ̃θ̃* − Q^π∥_ω ≤ ∥Φ̃θ̃_H − Q^π∥_ω
              ≤ ∥Φ̃θ̃_H − Φθ*∥_ω + ∥Φθ* − Q^π∥_ω
              = √( Σ_{s,a} ω(s, a) (Hφ(s, a) · Hθ* − φ(s, a) · θ*)² ) + ∥Φθ* − Q^π∥_ω
              ≤ √( Σ_{s,a} ω(s, a) ε² ∥φ(s, a)∥₂² ∥θ*∥₂² ) + ∥Φθ* − Q^π∥_ω    (Lemma 2)
              ≤ ε ∥θ*∥₂ sup_{s∈S,a∈A} ∥φ(s, a)∥₂ + ∥Φθ* − Q^π∥_ω,

as desired.

Our proof of Theorem 2 critically requires the use of λ = 1. A natural next step would be to attempt to drop this restriction on λ. It also seems likely that the finite-sample analysis of LSTD with random projections [11] can be extended to cover the case of tug-of-war hashing. Theorem 2 suggests that, under the right conditions, the tug-of-war value function is a good approximation to the full-vector value function. A natural question now arises: does tug-of-war hashing lead to improved linear value function approximation compared with standard hashing? More importantly, does tug-of-war hashing result in better learned policies? These are the questions we investigate empirically in the next section.
4 Experimental Study

In the sketch setting, the appeal of tug-of-war hashing over standard hashing lies in its unbiasedness. We therefore begin with an empirical study of the magnitude of the bias when applying different hashing methods in a value function approximation setting.

4.1 Value Function Bias

We used standard hashing, tug-of-war hashing, and no hashing to learn a value function over a short trajectory in the Mountain Car domain [22]. Our evaluation uses a standard implementation available online [15].

Figure 1: Bias and Mean Squared Error of value estimates using standard and tug-of-war hashing in 1,000 learning steps of Mountain Car. Note the log scale of the y axis.

We generated a 1,000-step trajectory using an ε-greedy policy [23]. For this fixed trajectory we updated a full feature weight vector θ_t using SARSA(0) with γ = 1.0 and α = 0.01. We focus on SARSA(0) as it is commonly used in practice for its ease of implementation and its faster update speed in sparse settings. Parallel to the full-vector update we also updated both a tug-of-war weight vector θ̃_t and a standard hashing weight vector θ̂_t, with the same values of γ and α. Both methods use a hash table size of m = 100 and the same randomly selected hash function. This hash function is defined as ((ax + b) mod p) mod m, where p is a large prime and a, b < p are random integers [5]. At every step we compute the difference in value between the hashed value functions Q̂_t(s_t, a_t) and Q̃_t(s_t, a_t), and the full-vector value function Q_t(s_t, a_t). We repeated this experiment using 1 million hash functions selected uniformly at random. Figure 1 shows, for each time step, estimates of the magnitude of the biases E[Q̂_t(s_t, a_t)] − Q_t(s_t, a_t) and E[Q̃_t(s_t, a_t)] − Q_t(s_t, a_t), as well as estimates of the mean squared errors E[(Q̂_t(s_t, a_t) − Q_t(s_t, a_t))²] and E[(Q̃_t(s_t, a_t) − Q_t(s_t, a_t))²], using the different hash functions. To provide a sense of scale, the estimate of the value of the final state when using no hashing is approximately −4; note that the y-axis uses a logarithmic scale.

The tug-of-war value function has a small, almost negligible bias. In comparison, the bias of standard hashing is orders of magnitude larger, almost as large as the value it is trying to estimate. The mean square error estimates show a similar trend. Furthermore, the same experiment on the Acrobot domain [22] yielded qualitatively similar results. Our results confirm the insights provided in Section 2: the tug-of-war value function can be significantly less biased than the standard hashing value function.
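To see the bias mechanism in isolation, the following sketch (our own code; the vectors are synthetic, not Mountain Car features) draws many hash functions from the ((ax + b) mod p) mod m family and compares the average hashed inner products under standard and tug-of-war hashing with the true value.

import numpy as np

rng = np.random.default_rng(2)
p = 2_147_483_647                       # a large prime
n, m, trials = 500, 20, 5_000
x, y = rng.normal(size=n), rng.normal(size=n)
std_est, tow_est = [], []
for _ in range(trials):
    a, b = rng.integers(1, p), rng.integers(0, p)
    h = ((a * np.arange(n) + b) % p) % m
    xi = rng.choice([-1, 1], size=n)    # stand-in for the sign hash
    xs = np.bincount(h, weights=x, minlength=m)
    ys = np.bincount(h, weights=y, minlength=m)
    std_est.append(xs.dot(ys))
    tow_est.append(np.bincount(h, weights=x * xi, minlength=m)
                   .dot(np.bincount(h, weights=y * xi, minlength=m)))
# Standard hashing carries the collision cross-terms (about (1/m) times
# sum_{i != j} x_i y_j); the tug-of-war average stays near the true value.
print(x.dot(y), np.mean(std_est), np.mean(tow_est))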
4.2 Reinforcement Learning Performance

Having smaller bias and mean square error in the Q-value estimates does not necessarily imply improved agent performance. In reinforcement learning, actions are selected based on relative Q-values, so a consistent bias may be harmless. In this section we evaluate the performance (cumulative reward per episode) of learning agents using both tug-of-war and standard hashing.

4.2.1 Tile Coding

We first studied the performance of agents using each of the two hashing methods in conjunction with tile coding. Our study is based on Mountain Car and Acrobot, two standard RL benchmark domains. For both domains we used the standard environment dynamics [22]; we used the fixed starting-state version of Mountain Car to reduce the variance in our results. We compared the two hashing methods using ε-greedy policies and the SARSA(λ) algorithm. For each domain and each hashing method we performed a parameter sweep over the learning rate α and selected the best value which did not cause the value estimates to diverge. The Acrobot state was represented using 48 6×6×6×6 tilings and the Mountain Car state, 10 9×9 tilings. Other parameters were set to γ = 1.0, λ = 0.9, ε = 0.0; the learning rate was further divided by the number of tilings.

Figure 2: Performance of standard hashing and tug-of-war hashing in two benchmark domains. The performance of the random agent is provided as reference.

We experimented with hash table sizes m ∈ [20, 1000] for Mountain Car and m ∈ [100, 2000] for Acrobot. Each experiment consisted of 100 trials, sampling a new hash function for each trial. Each trial consisted of 10,000 episodes, and episodes were restricted to 5,000 steps. At the end of each trial, we disabled learning by setting α = 0 and evaluated the agent on an additional 500 episodes. Figure 2 shows the performance of standard hashing and tug-of-war hashing as a function of the hash table size. The conclusion is clear: when the hashed vector is small relative to the full vector, tug-of-war hashing performs better than standard hashing. This is especially true in Acrobot, where the number of features (over 62,000) necessarily results in harmful collisions.

4.2.2 Atari

We next evaluated tug-of-war hashing and standard hashing on a suite of Atari 2600 games. The Atari domain was proposed as a game-independent platform for AI research by Naddaf [17]. Atari games pose a variety of challenges for learning agents. The learning agent's observation space is the game screen: 160×210 pixels, each taking on one of 128 colors. In the game-independent setting, agents are tuned using a small number of training games and subsequently evaluated over a large number of games for which no game-specific tuning takes place. The game-independent setting forces us to use features that are common to all games, for example, by encoding the presence of color patterns in game screens; such an encoding is a form of exhaustive feature generation. Different learning methods have been evaluated on the Atari 2600 platform [9, 26, 12]. We based our evaluation on prior work on a suite of Atari 2600 games [2], to which we refer the reader for full details on handling Atari 2600 games as RL domains. We performed parameter sweeps over five training games, and tested our algorithms on fifty testing games. We used models of contingency awareness to locate the player avatar [2]. From a given game, we generate feature sets by exhaustively enumerating all single-color patterns of size 1×1 (single pixels), 2×2, and 3×3. The presence of each different pattern within a 4×5 tile is encoded as a binary feature. We also encode the relative presence of patterns with respect to the player avatar location. This procedure gives rise to 569,856,000 different features, of which 5,000 to 15,000 are active at a given time step.
We trained ε-greedy SARSA(0) agents using both standard hashing and tug-of-war hashing with hash tables of size m = 1,000, 5,000 and 20,000. We chose the step-size α using a parameter sweep over the training games: we selected the best-performing α which never resulted in divergence in the value function. For standard hashing, α = 0.01, 0.05, 0.2 for m = 1,000, 5,000 and 20,000, respectively. For tug-of-war hashing, α = 0.5 across table sizes. We set γ = 0.999 and ε = 0.05. Each experiment was repeated over ten trials lasting 10,000 episodes each; we limited episodes to 18,000 frames to avoid issues with non-terminating policies.
As increasingly more complex reinforcement learning problems arise and strain against the boundaries of practicality, so the need for fast and reliable approximation methods grows. If standard hashing frees us from the curse of dimensionality, then tug-of-war hashing goes a step further by ensuring, when the demands of the task exceed available resources, a robust and principled shift from the exact solution to its approximation. Acknowledgements ? We would like to thank Bernardo Avila Pires, Martha White, Yasin Abbasi-Yadkori and Csaba Szepesv?ari for the help they provided with the theoretical aspects of this paper, as well as Adam White and Rich Sutton for insightful discussions on hashing and tile coding. This research was supported by the Alberta Innovates Technology Futures and the Alberta Innovates Centre for Machine Learning at the University of Alberta. Invaluable computational resources were provided by Compute/Calcul Canada. 8 References [1] Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671?687, 2003. [2] Marc G. Bellemare, Joel Veness, and Michael Bowling. Investigating contingency awareness using Atari 2600 games. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012. [3] Michael Bowling and Manuela Veloso. Scalable learning in stochastic games. In AAAI Workshop on Game Theoretic and Decision Theoretic Agents, 2002. [4] Michael Bowling and Manuela Veloso. Simultaneous adversarial multi-robot learning. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, pages 699?704, 2003. [5] J. Lawrence Carter and Mark N. Wegman. Universal classes of hash functions. Journal of Computer and System Sciences, 18(2):143?154, 1979. [6] Graham Cormode. Sketch techniques for massive data. In Graham Cormode, Minos Garofalakis, Peter Haas, and Chris Jermaine, editors, Synopses for Massive Data: Samples, Histograms, Wavelets and Sketches, Foundations and Trends in Databases. NOW publishers, 2011. [7] Graham Cormode and Minos Garofalakis. Sketching streams through the net: Distributed approximate query tracking. In Proceedings of the 31st International Conference on Very Large Data Bases, pages 13?24, 2005. [8] Anirban Dasgupta, Ravi Kumar, and Tam?as Sarl?os. A sparse Johnson-Lindenstrauss transform. In Proceedings of the 42nd ACM Symposium on Theory of Computing, pages 341?350, 2010. [9] Carlos Diuk, A. Andre Cohen, and Michael L. Littman. An object-oriented representation for efficient reinforcement learning. In Proceedings of the Twenty-Fifth International Conference on Machine Learning, pages 240?247, 2008. [10] Mahdi Milani Fard, Yuri Grinberg, Joelle Pineau, and Doina Precup. Compressed least-squares regression on sparse spaces. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence. AAAI, 2012. [11] Mohammad Ghavamzadeh, Alessandro Lazaric, Oldaric-Ambrym Maillard, and R?emi Munos. LSTD with random projections. In Advances in Neural Information Processing Systems 23, pages 721?729, 2010. [12] Matthew Hausknecht, Piyush Khandelwal, Risto Miikkulainen, and Peter Stone. HyperNEAT-GGP: A HyperNEAT-based Atari general game player. In Genetic and Evolutionary Computation Conference (GECCO), 2012. [13] Daniel M. Kane and Jelani Nelson. A derandomized sparse Johnson-Lindenstrauss transform. arXiv preprint arXiv:1006.3585, 2010. [14] Ping Li, Trevor J. Hastie, and Kenneth W. Church. 
Very sparse random projections. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 287?296, 2006. [15] The Reinforcement Learning Library, 2010. http://library.rl-community.org. [16] Oldaric-Ambrym Maillard and R?emi Munos. Compressed least squares regression. In Advances in Neural Information Processing Systems 22, pages 1213?1221, 2009. [17] Yavar Naddaf. Game-independent AI agents for playing Atari 2600 console games. PhD thesis, University of Alberta, 2010. [18] E. Schuitema, D.G.E. Hobbelen, P.P. Jonker, M. Wisse, and J.G.D. Karssen. Using a controller based on reinforcement learning for a passive dynamic walking robot. In Proceedings of the Fifth IEEE-RAS International Conference on Humanoid Robots, pages 232?237, 2005. [19] David Silver, Richard S. Sutton, and Martin M?uller. Reinforcement learning of local shape in the game of Go. In 20th International Joint Conference on Artificial Intelligence, pages 1053?1058, 2007. [20] Peter Stone, Richard S. Sutton, and Gregory Kuhlmann. Reinforcement learning for RoboCup soccer keepaway. Adaptive Behavior, 13(3):165, 2005. [21] Nathan Sturtevant and Adam White. Feature construction for reinforcement learning in Hearts. Computers and Games, pages 122?134, 2006. [22] Richard S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 1038?1044, 1996. [23] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. [24] Russ Tedrake, Teresa Weirui Zhang, and H. Sebastian Seung. Stochastic policy gradient reinforcement learning on a simple 3D biped. In Proceedings of Intelligent Robots and Systems 2004, volume 3, pages 2849?2854, 2004. [25] John N. Tsitsiklis and Benjamin Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674?690, 1997. [26] Samuel Wintermute. Using imagery to simplify perceptual abstraction in reinforcement learning agents. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010. 9
4540 |@word h:3 mild:1 innovates:2 version:1 trial:5 norm:3 seems:1 nd:1 risto:1 hu:1 seek:1 diuk:1 recursively:1 carry:2 score:16 daniel:1 tuned:1 denoting:1 genetic:1 outperforms:1 comparing:1 must:1 john:1 j1:9 confirming:1 shape:1 drop:1 update:4 hash:33 stationary:3 greedy:3 selected:6 intelligence:5 xk:4 ith:3 short:1 cormode:3 coarse:1 location:1 org:1 zhang:1 five:4 constructed:1 symposium:1 waived:1 combine:1 inter:7 ra:1 expected:1 behavior:1 multi:1 yasin:1 discounted:1 alberta:7 curse:2 es0:2 considering:1 becomes:1 begin:1 provided:5 lowest:1 what:1 mountain:8 atari:13 kind:1 hyperneat:2 developed:1 csaba:1 impractical:2 suite:2 guarantee:2 safely:1 temporal:1 every:1 bernardo:1 friendly:1 stateaction:1 exactly:2 rm:6 k2:10 control:2 unit:1 superiority:1 before:1 negligible:2 local:1 sutton:7 encoding:2 approximately:3 might:1 chose:1 studied:1 kane:2 suggests:1 ease:1 limited:1 statistically:1 practical:1 unique:5 testing:1 atomic:3 union:1 practice:2 x3:1 procedure:1 empirical:3 universal:7 thought:1 significantly:1 projection:9 fard:1 confidence:1 induce:1 refers:2 operator:2 applying:3 bellemare:2 restriction:1 projector:1 map:1 eighteenth:1 go:5 starting:1 restated:1 welch:1 simplicity:1 estimator:2 insight:1 importantly:1 harmless:1 updated:3 pt:3 play:1 imagine:1 ualberta:3 exact:4 avatar:2 massive:2 us:2 construction:1 trend:2 roy:2 satisfying:1 particularly:1 walking:1 database:2 observed:1 preprint:1 hv:1 episode:7 highest:1 observes:2 principled:1 alessandro:1 environment:1 mozer:1 benjamin:1 reward:5 littman:1 seung:1 exhaustively:3 dynamic:3 ghavamzadeh:1 trained:1 terminating:1 efficiency:1 joint:2 represented:3 train:1 fast:2 artificial:5 query:1 exhaustive:4 sarl:1 whose:3 encoded:1 larger:1 otherwise:1 compressed:2 transform:4 final:1 online:1 advantage:1 net:1 took:1 product:5 milani:1 j2:9 combining:1 poorly:1 convergence:1 requirement:2 produce:1 generating:1 silver:2 converges:5 adam:2 object:1 help:1 piyush:1 andrew:1 pose:2 odd:1 qt:11 received:1 c:3 implies:2 aperiodic:2 subsequently:1 stochastic:2 maxa0:1 behaviour:1 hx:1 generalization:1 proposition:1 minos:2 sarsa:19 enumerated:1 k3:1 mapping:2 lawrence:1 matthew:1 achieves:1 purpose:1 uniqueness:1 quote:1 hasselmo:1 successfully:1 hope:1 uller:1 mit:1 always:1 i3:2 rather:2 avoid:1 barto:3 conjunction:2 encode:1 derived:1 focus:1 ax:1 consistently:1 rank:4 indicates:2 adversarial:1 sigkdd:1 sense:2 abstraction:1 a0:2 selects:1 i1:4 pixel:2 arg:3 issue:2 priori:1 development:2 art:1 special:2 platform:2 equal:1 once:1 never:2 having:1 veness:3 sampling:2 represents:1 future:2 intelligent:1 richard:4 simplify:1 irreducible:2 randomly:3 oriented:1 preserve:1 divergence:2 resulted:1 attempt:1 interest:1 possibility:1 investigate:1 mining:1 joel:2 evaluation:2 derandomized:1 bipedal:1 extreme:1 yielding:1 chain:2 tuple:1 hausknecht:1 harmful:1 walk:1 desired:2 theoretical:3 cover:1 assignment:1 cost:3 entry:3 predicate:1 successful:1 johnson:5 too:2 gregory:1 referring:1 st:22 unbiasedness:3 international:6 michael:7 sketching:1 precup:1 thesis:1 aaai:5 imagery:1 squared:4 again:1 reflect:1 abbasi:1 tile:8 worse:4 tam:1 return:1 li:1 coding:8 explicitly:1 doina:1 depends:1 stream:3 multiplicative:1 performed:6 sup:2 tug:59 carlos:1 parallel:1 square:5 il:2 robocup:1 variance:1 who:1 yield:1 accurately:1 critically:1 rus:1 trajectory:3 simultaneous:1 ping:1 touretzky:1 andre:1 trevor:1 sebastian:1 sixth:2 against:1 kuhlmann:1 proof:4 propagated:1 logical:1 color:3 car:8 dimensionality:3 maillard:4 
knowledge:1 back:1 hashing:81 higher:2 improved:2 synopsis:1 evaluated:4 furthermore:2 achlioptas:1 sketch:21 receives:1 o:1 lack:1 pineau:1 disabled:1 mdp:6 grows:1 name:1 normalized:4 unbiased:3 consisted:2 true:1 i2:3 white:4 x5:1 bowling:5 game:44 kyk2:2 self:1 noted:1 steady:1 soccer:2 samuel:1 trying:1 stone:4 theoretic:2 mohammad:1 performs:2 invaluable:1 interpreting:1 passive:1 wise:3 recently:1 ari:1 common:3 superior:1 console:1 rl:6 empirically:1 cohen:1 volume:2 million:1 discussed:2 extend:2 refer:3 ai:2 tuning:1 automatic:1 similarly:2 centre:1 biped:1 robot:6 hashed:4 base:1 recent:3 showed:3 hxk:2 prime:1 store:1 certain:3 binary:2 yuri:1 yi:1 joelle:1 preserving:1 additional:1 determine:1 maximize:1 full:17 match:1 faster:1 veloso:2 divided:1 ensuring:1 scalable:1 basic:1 regression:3 controller:1 arxiv:2 histogram:1 represent:2 achieved:1 c1:1 background:1 addition:1 szepesv:1 interval:1 publisher:1 biased:4 fifty:4 rest:1 posse:1 induced:5 deficient:1 mod:2 seem:1 call:1 integer:1 garofalakis:2 noting:1 presence:3 exceed:1 variety:1 independence:1 hastie:1 reduce:2 inner:5 idea:1 enumerating:2 absent:1 shift:1 t0:4 six:1 war:59 handled:1 jonker:1 peter:3 cause:1 action:5 collision:5 clear:2 discount:1 referential:1 ten:1 induces:1 carter:1 khu:2 reduced:1 generate:1 http:1 canonical:2 lazaric:1 per:5 naddaf:2 discrete:3 dasgupta:2 four:3 drawn:3 k4:2 ravi:1 kenneth:1 asymptotically:1 merely:1 fraction:5 sum:1 powerful:1 uncertainty:1 fourth:1 place:1 throughout:1 family:6 almost:2 reader:1 parallelogram:1 decision:1 dy:1 investigates:1 graham:3 bound:8 guaranteed:1 encountered:1 yielded:1 i4:2 your:1 x2:1 hy:2 avila:1 grinberg:1 generates:1 nathan:1 speed:1 argument:1 span:1 min:6 aspect:1 performing:1 kumar:1 emi:2 yavar:1 martin:1 according:1 combination:2 anirban:1 smaller:2 across:3 increasingly:1 lasting:1 intuitively:1 restricted:1 pr:3 indexing:1 heart:2 equation:4 resource:2 previously:2 turn:1 count:2 end:1 tiling:3 available:2 apply:2 simulating:1 yadkori:1 coin:1 existence:1 substitute:1 denotes:2 practicality:1 k1:3 especially:2 approximating:3 implied:1 sweep:3 question:2 strategy:1 rt:4 traditional:1 diagonal:1 exhibit:1 gradient:4 evolutionary:1 thank:1 mapped:1 gecco:1 degrade:1 nelson:2 haas:1 argue:1 chris:1 index:4 mini:1 providing:1 equivalently:1 unfortunately:2 potentially:1 relate:1 hij:2 statement:1 ggp:1 rise:1 implementation:2 policy:9 twenty:4 observation:1 markov:2 benchmark:4 finite:3 descent:3 wegman:1 defining:1 extended:1 strain:1 locate:1 rn:8 frame:1 arbitrary:1 community:1 canada:1 introduced:2 david:2 pair:2 cast:1 kl:1 teresa:1 learned:2 pires:1 address:1 pattern:6 dimitris:1 summarize:1 challenge:2 max:2 memory:7 reliable:1 natural:3 rely:1 eh:4 force:1 indicator:1 representing:1 scheme:1 improve:2 jelani:1 technology:1 imply:1 library:2 axis:2 church:1 keepaway:2 prior:1 sg:4 acknowledgement:1 calcul:1 discovery:1 asymptotic:1 law:1 relative:3 multiagent:1 highlight:1 sturtevant:2 mixed:1 generation:5 permutation:1 foundation:1 contingency:2 awareness:2 agent:21 degree:1 humanoid:1 consistent:1 s0:1 editor:2 storing:1 share:1 playing:1 summary:2 supported:1 last:1 free:2 tsitsiklis:2 bias:14 side:1 ambrym:2 taking:1 munos:4 fifth:2 sparse:8 benefit:2 distributed:2 van:2 dimension:1 curve:1 transition:2 lindenstrauss:5 cumulative:1 boundary:1 rich:1 commonly:1 reinforcement:24 qualitatively:1 adaptive:1 miikkulainen:1 transaction:1 approximate:3 implicitly:1 confirm:2 active:1 investigating:1 manuela:2 continuous:5 tailed:1 table:15 
promising:2 learn:4 robust:1 ca:3 goofspiel:1 necessarily:2 complex:1 marc:2 domain:17 did:1 significance:1 reconcile:1 arise:1 repeated:2 x1:2 screen:2 jermaine:1 position:2 exponential:2 lie:1 kxk2:2 perceptual:1 mahdi:1 learns:1 wavelet:1 theorem:10 xt:1 specific:1 insightful:1 maxi:1 appeal:2 experimented:1 concern:2 workshop:1 effectively:1 qvalues:1 phd:1 magnitude:4 acrobot:7 budget:2 demand:2 suited:2 rg:5 led:1 logarithmic:1 likely:1 kxk:4 tracking:1 tedrake:1 lstd:2 corresponds:1 satisfies:1 relies:1 extracted:1 acm:2 goal:4 change:1 hard:1 martha:1 infinite:2 typical:4 uniformly:2 lemma:9 total:1 experimental:1 player:3 zg:3 select:1 mark:1 arises:1 evaluate:1 tested:1 handling:1
Causal discovery with scale-mixture model for spatiotemporal variance dependencies

Zhitang Chen*, Kun Zhang†, and Laiwan Chan*
* Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong
{ztchen,lwchan}@cse.cuhk.edu.hk
† Max Planck Institute for Intelligent Systems, Tübingen, Germany
[email protected]

Abstract

In conventional causal discovery, structural equation models (SEM) are directly applied to the observed variables, meaning that the causal effect can be represented as a function of the direct causes themselves. However, in many real world problems, there are significant dependencies in the variances or energies, which indicates that causality may possibly take place at the level of variances or energies. In this paper, we propose a probabilistic causal scale-mixture model with spatiotemporal variance dependencies to represent a specific type of generating mechanism of the observations. In particular, the causal mechanism including contemporaneous and temporal causal relations in variances or energies is represented by a Structural Vector AutoRegressive model (SVAR). We prove the identifiability of this model under the non-Gaussian assumption on the innovation processes. We also propose algorithms to estimate the involved parameters and discover the contemporaneous causal structure. Experiments on synthetic and real world data are conducted to show the applicability of the proposed model and algorithms.

1 Introduction

Causal discovery aims to discover the underlying generating mechanism of the observed data, and consequently, the causal relations allow us to predict the effects of interventions on the system [15, 19]. For example, if we know the causal structure of a stock market, we are able to predict the reactions of other stocks against the sudden collapse of one share price in the market. A traditional way to infer the causal structure is by controlled experiments. However, controlled experiments are in general expensive, time consuming, technically infeasible and/or ethically prohibited. Thus, causal discovery from non-experimental data is of great importance and has drawn considerable attention in the past decades [15, 19, 16, 17, 12, 22, 2]. Probabilistic models such as Bayesian Networks (BNs) and Linear Non-Gaussian Acyclic Models (LiNGAM) have been proposed and applied to many real world problems [18, 13, 14, 21]. Conventional models such as LiNGAM assume that the causal relations are of a linear form, i.e., if the observed variable x is the cause of another observed variable y, we model the causal relation as y = αx + e, where α is a constant coefficient and e is the additive noise independent of x. However, in many types of natural signals or time series such as MEG/EEG data [23] and financial data [20], a common form of nonlinear dependency, as seen from the correlation in variances or energies, is found [5]. This observation indicates that causality may take place at the level of variances or energies instead of the observed variables themselves. Generally speaking, traditional methods cannot detect this type of causal relations. Another restriction of conventional causal models is that these models assume constant variances of the observations; this assumption is unrealistic for those data with strong heteroscedasticity [1].
In this paper, we propose a new probabilistic model called Causal Scale-Mixture model with SpatioTemporal Variance Dependencies (CSM-STVD) incorporating the spatial and temporal variance or energy dependencies among the observed data. The main feature of the new model is that we model the spatiotemporal variance dependencies based on the Structural Vector AutoRegressive (SVAR) model, in particular the Non-Gaussian SVAR [11]. The contributions of this study are two-fold. First, we provide an alternative way to model the causal relations among the observations, i.e., causality in variances or energies. In this model, causality takes place at the level of variances or energies, i.e., the variance or energy of one observed series at time instant t0 is influenced by the variances or energies of other variables at time instants t ≤ t0 and its past values at time instants t < t0. Thus, both contemporaneous and temporal causal relations in variances are considered. Secondly, we prove the identifiability of this model and, more specifically, we show that non-Gaussianity makes the model fully identifiable. Furthermore, we propose a method which directly estimates such causal structures without explicitly estimating the variances.

2 Related work

To model the variance or energy dependencies of the observations, a classic method is to use a scale-mixture model [5, 23, 9, 8]. Mathematically, we can represent a signal as s_i = u_i σ_i, where u_i is a signal with zero mean and constant variance, and σ_i is a positive factor which is independent of u_i and modulates the variance or energy of s_i [5]. For the multivariate case, we have

s = u ∘ σ,    (1)

where ∘ means element-wise multiplication. In the basic scale-mixture model, u and σ are statistically independent and the components u_i are spatiotemporally independent, i.e., u_{i,t1} ⊥⊥ u_{j,t2}, ∀t1, t2. However, σ_i, the standard deviations of the observations, are dependent across i. The observation x, in many situations, is assumed to be a linear mixture of the source s, i.e., x = As, where A is a mixing matrix. In [5], Hirayama and Hyvärinen proposed a two-stage model. The first stage is a classic ICA model [3, 10], where the observation x is a linear mixture of the hidden source s, i.e., x = As. On the second stage, the variance dependencies are modeled by applying a linear Non-Gaussian (LiN) SEM to the log-energies of the sources:

y_i = Σ_j h_ij y_j + h_i0 + r_i,    i = 1, 2, …, d,

where y_i = log φ(s_i) are the log-energies of sources s_i and the nonlinear function φ is any appropriate measure of energy; r_i are non-Gaussian distributed and independent of y_j. To make the problem tractable, they assumed that u_i are binary, i.e., u_i ∈ {−1, 1} and uniformly distributed. The parameters of this two-stage model, including A and h_ij, are estimated by maximum likelihood without approximation, thanks to the uniform binary distribution assumption on u. However, this assumption is restrictive and thus may not fit real world observations well. Furthermore, they assumed that σ_i are spatially dependent but temporally white. However, many time series, such as financial time series and brain signals, show strong heteroscedasticity and temporal variance dependencies. Taking temporal variance dependencies into account would improve the quality of the estimated underlying structure of the observed data.
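A toy generator for this two-stage construction, as a hedged sketch (the parameter values are our own and the intercepts h_i0 are dropped for brevity): it draws non-Gaussian disturbances, solves the linear SEM for the log-energies, and forms the scale mixture (1) with binary u as assumed in [5].

import numpy as np

rng = np.random.default_rng(3)
d, T = 3, 2_000
H = np.array([[0.0, 0.0, 0.0],            # hypothetical acyclic h_ij
              [0.8, 0.0, 0.0],            # (y1 -> y2 -> y3)
              [0.0, 0.5, 0.0]])
r = rng.laplace(size=(T, d))              # non-Gaussian disturbances
y = r @ np.linalg.inv(np.eye(d) - H).T    # solve y = H y + r at each t
sigma = np.exp(y)                         # log-energies back to scales
u = rng.choice([-1.0, 1.0], size=(T, d))  # binary u, uniformly distributed
s = u * sigma                             # the scale mixture, equation (1)
A = rng.normal(size=(d, d))
x = s @ A.T                               # observed linear mixture x = A s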
The first stage also performs linear separation; they proposed a blind source separation algorithm exploiting the autocorrelations and time-varying variances of the sources. In the second stage, the $s_i(t)$ are modeled by autoregressive processes with $L$ lags (AR($L$)) driven by innovations $e_i(t)$. The innovation processes $e_i(t)$ are mutually uncorrelated and temporally white. However, the $e_i(t)$ are not necessarily independent. They modeled $e_i(t)$ as follows:
$$e_i(t) = \sigma_{it} z_i(t), \quad \text{where } z_i(t) \sim N(0, 1). \qquad (2)$$
Two different methods are used to model the dependencies among the variances of the innovations. The first method is causal-in-variance GARCH (CausalVar-GARCH): specifically, the $\sigma_{it}^2$ are modeled by a multivariate GARCH model. The advantage of this model is that we are able to estimate the temporal causal structure in variances. However, it provides no information about the contemporaneous causal relations among the sources, if such causal relations indeed exist. The second method to model the variance dependencies is to apply a factor model to the log-energies $\log \sigma_{it}^2$ of the sources. The disadvantage of this method is that we cannot model the causal relations among the sources, which are more interesting to us. In many real-world observations, there are causal influences in variances among the observed variables. For instance, there are significant mutual influences among the volatilities of the observed stock prices. We are more interested in investigating the underlying causal structure among the variances of the observed data. Consequently, in this paper, we consider the situation where the correlation in the variances of the observed data is of interest. That is, the first stage of [5, 23] is not needed, and we focus on the second stage, i.e., modeling the spatiotemporal variance dependencies and the causal mechanism among the observations. In the following sections, we propose our probabilistic model based on SVAR to describe the spatiotemporal variance dependencies among the observations. Our model is, as shown in later sections, closely related to the models introduced in [5, 23], but has significant advantages: (1) both contemporaneous and temporal causal relations can be modeled; (2) the model is fully identifiable under certain assumptions.

3 Causal scale-mixture model with spatiotemporal variance dependencies

We propose the causal scale-mixture model with spatiotemporal variance dependencies as follows. Let $z(t)$ be the $m \times 1$ observed vector with components $z_i(t)$, which are assumed to be generated according to the scale-mixture model:
$$z_i(t) = u_i(t)\sigma_i(t). \qquad (3)$$
Here we assume that the $u_i(t)$ are temporally independent processes, i.e., $u_i(t_1) \perp\!\!\!\perp u_j(t_2)$ for all $t_1 \neq t_2$; but unlike the basic scale-mixture model, here the $u_i(t)$ may be contemporaneously dependent, i.e., $u_i(t) \not\perp\!\!\!\perp u_j(t)$ for $i \neq j$. $\sigma(t)$ is spatially and temporally independent of $u(t)$. Using vector notation,
$$z_t = u_t \odot \sigma_t. \qquad (4)$$
Here $\sigma_{it} > 0$ are related to the variances or energies of the observations $z_t$ and are assumed to be spatiotemporally dependent. As in [5, 23], let $y_t = \log \sigma_t$. In this paper, we model the spatiotemporal variance dependencies by a Structural Vector AutoRegressive model (SVAR), i.e.,
$$y_t = A_0 y_t + \sum_{\tau=1}^{L} B_\tau y_{t-\tau} + \varepsilon_t, \qquad (5)$$
where $A_0$ contains the contemporaneous causal strengths among the variances of the observations, i.e., if $[A_0]_{ij} \neq 0$, we say that $y_{it}$ is contemporaneously affected by $y_{jt}$; $B_\tau$ contains the temporal (time-lag) causal relations, i.e., if $[B_\tau]_{ij} \neq 0$, we say that $y_{i,t}$ is affected by $y_{j,t-\tau}$.
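The generative process (3)-(5) can be simulated directly; the following sketch uses assumed toy parameters ($m = 3$, $L = 1$, a fixed lower-triangular $A_0$) purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, L, T = 3, 1, 6000  # assumed sizes: 3 series, SVAR lag 1

# illustrative (assumed) parameters; A0 strictly lower triangular => acyclic
A0 = np.array([[0.0, 0.0, 0.0],
               [0.3, 0.0, 0.0],
               [0.2, 0.4, 0.0]])
B1 = 0.7 * np.eye(m)

# super-Gaussian innovations eps_t, as required for identifiability
eps = np.sign(rng.standard_normal((T, m))) * rng.standard_normal((T, m)) ** 2

# y_t = A0 y_t + B1 y_{t-1} + eps_t  <=>  y_t = (I - A0)^{-1}(B1 y_{t-1} + eps_t)
Ainv = np.linalg.inv(np.eye(m) - A0)
y = np.zeros((T, m))
for t in range(1, T):
    y[t] = Ainv @ (B1 @ y[t - 1] + eps[t])

sigma = np.exp(y)                   # y_t = log sigma_t, eq. (5)
u = rng.standard_normal((T, m))     # u_t may be contemporaneously dependent
z = u * sigma                       # observed series, eqs. (3)-(4)
```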
Here, $\varepsilon_t$ are i.i.d. mutually independent innovations. Let $x_t = \log|z_t|$ (in this model, we assume that the $u_i(t)$ never take the value zero) and $\eta_t = \log|u_t|$. Taking the log of the absolute values of both sides of equation (4), we obtain the following model:
$$x_t = y_t + \eta_t, \qquad y_t = A_0 y_t + \sum_{\tau=1}^{L} B_\tau y_{t-\tau} + \varepsilon_t. \qquad (6)$$
We make the following assumptions on the model:

A1 Both $\eta_t$ and $\varepsilon_t$ are temporally white with zero means. The components of $\eta_t$ are not necessarily independent, and we assume that the covariance matrix of $\eta_t$ is $\Sigma_\eta$. The components of $\varepsilon_t$ are independent and $\Sigma_\varepsilon = I$.¹

A2 The contemporaneous causal structure is acyclic, i.e., by simultaneous row and column permutations, $A_0$ can be permuted to a strictly lower triangular matrix. $B_L$ is of full rank.

A3 The innovations $\varepsilon_t$ are non-Gaussian, and $\eta_t$ is either Gaussian or non-Gaussian.

¹ Note that $\Sigma_\varepsilon = I$ is assumed just for convenience. $A_0$ and $B_\tau$ can also be correctly estimated if $\Sigma_\varepsilon$ is a general diagonal covariance matrix. The explanation of why the scaling indeterminacy can be eliminated is the same as for LiNGAM, given in [16].

Inspired by the identifiability results for the non-Gaussian state-space model in [24], we show that our model is identifiable. Note that our new model and the state-space model proposed in [24] are two different models; interestingly, by a simple re-parameterization we can prove the following Lemma 3.1 and Theorem 3.1 following [24].

Lemma 3.1 Given the log-transformed observation $x_t = \log|z_t|$ generated by Equations (6), if assumptions A1-A2 hold, then by solving simple linear equations involving the autocovariances of $x_t$, the covariance $\Sigma_\eta$ and the products $AB_\tau$ can be uniquely determined, where $A = (I - A_0)^{-1}$; furthermore, $A$ and $B_\tau$ can be identified up to rotation. That is, suppose that two models with parameters $(A, \{B_\tau\}_{\tau=1}^{L}, \Sigma_\eta)$ and $(\tilde A, \{\tilde B_\tau\}_{\tau=1}^{L}, \tilde\Sigma_\eta)$ generate the same observation $x_t$; then we have $\tilde\Sigma_\eta = \Sigma_\eta$, $\tilde A = AU$, and $\tilde B_\tau = U^T B_\tau$, where $U$ is an orthogonal matrix.

Non-Gaussianity of the innovations $\varepsilon_t$ makes the model fully identifiable, as seen in the following theorem.

Theorem 3.1 Given the log-transformed observation $x_t = \log|z_t|$ generated by Equations (6) and given $L$, if assumptions A1-A3 hold, then the model is identifiable. In other words, suppose that two models with parameters $(A, \{B_\tau\}_{\tau=1}^{L}, \Sigma_\eta)$ and $(\tilde A, \{\tilde B_\tau\}_{\tau=1}^{L}, \tilde\Sigma_\eta)$ generate the same observation $x_t$; then these two models are identical, i.e., we have $\tilde\Sigma_\eta = \Sigma_\eta$, $\tilde A = A$, $\tilde B_\tau = B_\tau$, and $\tilde y_t = y_t$.

4 Parameter learning and causal discovery

In this section, we propose an effective algorithm to estimate the contemporaneous causal structure matrix $A_0$ and the temporal causal structure matrices $B_\tau$, $\tau = 1, \ldots, L$ (see (6)).

4.1 Estimation of $AB_\tau$

We have shown that $AB_\tau$ can be uniquely determined, where $A = (I - A_0)^{-1}$. The proof of Lemma 3.1 also suggests a way to estimate $AB_\tau$, as given below; readers can refer to the appendix for the detailed mathematical derivation. Although we are aware that this method might not be statistically efficient, we adopt it for its great computational efficiency. Given the log-transformed observations $x_t = \log|z_t|$, denote by $R_x(k)$ the autocovariance function of $x_t$ at lag $k$, i.e., $R_x(k) = E(x_t x_{t+k}^T)$. Based on the model assumptions A1 and A2, we have the following linear equations in the autocovariances of $x_t$:
$$\underbrace{\begin{bmatrix} R_x(L) & R_x(L-1) & \cdots & R_x(1) \\ R_x(L+1) & R_x(L) & \cdots & R_x(2) \\ \vdots & \vdots & \ddots & \vdots \\ R_x(2L-1) & R_x(2L-2) & \cdots & R_x(L) \end{bmatrix}}_{H} \begin{bmatrix} C_1^T \\ C_2^T \\ \vdots \\ C_L^T \end{bmatrix} = \begin{bmatrix} R_x(L+1) \\ R_x(L+2) \\ \vdots \\ R_x(2L) \end{bmatrix}, \qquad (7)$$
where $C_\tau = AB_\tau$ ($\tau = 1, \ldots, L$). As shown in the proof of Lemma 3.1, $H$ is invertible. We can therefore easily estimate $AB_\tau$ by solving the linear Equations (7).

4.2 Estimation of $A_0$

The estimates of $AB_\tau$ ($\tau = 1, \ldots, L$) still contain the mixing information of the causal structures $A_0$ and $B_\tau$. In order to further obtain the contemporaneous and temporal causal relations, we need to estimate both $A_0$ and $B_\tau$ ($\tau = 1, \ldots, L$). Here, we show that the estimation of $A_0$ can be reduced to solving a Linear Non-Gaussian Acyclic Model with latent confounders. Substituting $y_t = x_t - \eta_t$ into Equations (6), we have
$$x_t - \eta_t = \sum_{\tau=1}^{L} AB_\tau (x_{t-\tau} - \eta_{t-\tau}) + A\varepsilon_t. \qquad (8)$$
Since $AB_\tau$ can be uniquely determined according to Lemma 3.1, or more specifically Equations (7), we can easily obtain the residues $\xi_t = x_t - \sum_{\tau=1}^{L} AB_\tau x_{t-\tau}$; then we have:
$$\xi_t = A\varepsilon_t + \eta_t - \sum_{\tau=1}^{L} AB_\tau \eta_{t-\tau}. \qquad (9)$$
This is exactly a Linear Non-Gaussian Acyclic Model with latent confounders, and the estimation of $A$ is a very challenging problem [6, 2]. To make the problem tractable, we place two further assumptions on the model:

A4 If the components of $\eta_t$ are not independent, we assume that $\eta_t$ follows a factor model: $\eta_t = Df_t$, where the components of $f_t$ are spatially and temporally independent Gaussian factors and $D$ is the factor loading matrix (not necessarily square).

A5 The components of $\varepsilon_t$ are simultaneously super-Gaussian or sub-Gaussian.

Replacing $\eta_t$ with $Df_t$, we have:
$$\xi_t = A\varepsilon_t + \underbrace{Df_t - \sum_{\tau=1}^{L} AB_\tau Df_{t-\tau}}_{\text{confounding effects}}. \qquad (10)$$
To identify the matrix $A$, which contains the contemporaneous causal information of the observed variables, we treat $f_t$ and $f_{t-\tau}$ as latent confounders; the interpretation of assumption A4 is that we can treat the independent factors $f_t$ as external factors outside the system. The Gaussian assumption on $f_t$ can be interpreted hierarchically as a consequence of the central limit theorem, because these factors represent the ensemble effects of numerous influences from the whole environment. On the contrary, the disturbances $\varepsilon_{it}$ are local factors that describe the intrinsic behaviors of the observed variables [4]. Since they are local, they are not regarded as ensembles of a large number of factors; in this case, the disturbances $\varepsilon_{it}$ are assumed to be non-Gaussian. The LiNGAM-GC model [2] takes latent confounders into consideration. In that model, the confounders are assumed to follow a Gaussian distribution, which was again interpreted as a consequence of the central limit theorem. It mainly focuses on the following cause-effect pair:
$$x = e_1 + \gamma_1 c, \qquad y = \alpha x + e_2 + \gamma_2 c, \qquad (11)$$
where $e_1$ and $e_2$ are non-Gaussian and mutually independent, and $c$ is the latent Gaussian confounder, independent of the disturbances $e_1$ and $e_2$. To tackle the causal discovery problem of LiNGAM-GC, it was first shown that if $x$ and $y$ are standardized to unit absolute kurtosis then $|\alpha| < 1$, based on the assumption that $e_1$ and $e_2$ are simultaneously super-Gaussian or sub-Gaussian. Note that assumption A5 is a natural extension of this assumption; it holds in many practical problems, especially for financial data. After the standardization, the following cumulant-based measure $\tilde R_{xy}$ was proposed [2]:
$$\tilde R_{xy} = (C_{xy} + C_{yx})(C_{xy} - C_{yx}), \quad \text{where } C_{xy} = \hat E\{x^3 y\} - 3\hat E\{xy\}\hat E\{x^2\}, \;\; C_{yx} = \hat E\{x y^3\} - 3\hat E\{xy\}\hat E\{y^2\}, \qquad (12)$$
and $\hat E$ denotes the sample average.
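Since $H$ in (7) is built from sample autocovariances and is invertible, the estimates $\hat C_\tau = \widehat{AB_\tau}$ reduce to one linear solve. Below is a sketch of this estimator; the function and variable names are ours, not from any released code, and no statistical efficiency is claimed.

```python
import numpy as np

def estimate_AB(x, L):
    """Estimate C_tau = A B_tau, tau = 1..L, from the linear equations (7).

    x: (T, m) array of log-absolute observations x_t = log|z_t|.
    Returns a list [C_1, ..., C_L] of (m, m) matrices.
    """
    T, m = x.shape
    x = x - x.mean(axis=0)

    def R(k):  # sample autocovariance R_x(k) = E[x_t x_{t+k}^T]
        return x[:T - k].T @ x[k:] / (T - k)

    # block-Toeplitz system: H [C_1^T; ...; C_L^T] = [R(L+1); ...; R(2L)]
    H = np.block([[R(L + i - j) for j in range(1, L + 1)]
                  for i in range(1, L + 1)])
    rhs = np.vstack([R(L + i) for i in range(1, L + 1)])
    CT = np.linalg.solve(H, rhs)  # stacked C_tau^T blocks
    return [CT[m * t:m * (t + 1)].T for t in range(L)]
```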
It was shown that the causal direction can be identified simply by examining the sign of $\tilde R_{xy}$: if $\tilde R_{xy} > 0$, $x \to y$ is concluded; otherwise, if $\tilde R_{xy} < 0$, $y \to x$ is concluded. Once the causal direction has been identified, the estimation of the causal strength is straightforward. The work can be extended to multivariate causal network discovery following the DirectLiNGAM framework [17]. Here we adopt LiNGAM-GC-UK, the algorithm proposed in [2], to find the contemporaneous causal structure matrix $A_0$. Once $A_0$ has been estimated, $B_\tau$ can be easily obtained by $\hat B_\tau = (I - \hat A_0)\hat C_\tau$, where $\hat A_0$ and $\hat C_\tau$ are the estimates of $A_0$ and $AB_\tau$, respectively. The procedure for learning the model is summarized in the following algorithm.

Algorithm 1 Causal discovery with scale-mixture model for spatiotemporal variance dependencies
1: Given the observations $z_t$, compute $x_t = \log|z_t|$.
2: Subtract the mean $\bar x_t$ from $x_t$, i.e., $x_t \leftarrow x_t - \bar x_t$.
3: Choose an appropriate lag $L$ for the SVAR and then estimate $AB_\tau$, where $A = (I - A_0)^{-1}$ and $\tau = 1, \ldots, L$, using Equations (7).
4: Obtain the residues $\xi_t = x_t - \sum_{\tau=1}^{L} AB_\tau x_{t-\tau}$.
5: Apply the LiNGAM-GC algorithms to $\xi_t$ to obtain the estimates of $A_0$ and $B_\tau$ ($\tau = 1, \ldots, L$) and the corresponding contemporaneous and temporal causal orderings.

5 Experiments

We conduct experiments using synthetic data and real-world data to investigate the effectiveness of our proposed model and algorithms.

5.1 Synthetic data

We generate the observations according to the following model: $z_t = r \odot u_t \odot \sigma_t$, where $r$ is an $m \times 1$ scale vector whose elements are randomly selected from the interval $[1.0, 6.0]$; $u_t > 0$ and $\eta_t = \log u_t$ follows a factor model $\eta_t = Df_t$, where $D$ is $m \times m$, the elements of $D$ are randomly selected from $[0.2, 0.4]$, and the $f_{it}$ are i.i.d. with $f_{it} \sim N(0, 0.5)$. Denoting $y_t = \log \sigma_t$, we model the spatiotemporal variance dependencies of the observations by an SVAR(1): $y_t = A_0 y_t + B_1 y_{t-1} + \varepsilon_t$, where $A_0$ is an $m \times m$ strictly lower triangular matrix whose elements are randomly selected from $[0.1, 0.2]$ or $[-0.2, -0.1]$; $B_1$ is an $m \times m$ matrix whose diagonal elements $[B_1]_{ii}$ are randomly selected from $[0.7, 0.8]$, 80% of the off-diagonal elements $[B_1]_{i \neq j}$ are zero, and the remaining 20% are randomly selected from $[-0.1, 0.1]$; the $\varepsilon_{it}$ are i.i.d. super-Gaussian, generated by $\varepsilon_{it} = \mathrm{sign}(n_{it})|n_{it}|^2$ with $n_{it} \sim N(0, 1)$ and normalized to unit variance. The generated observations are permuted to a random order. The task of this experiment is to investigate the performance of our algorithms in estimating the coefficient matrix $(I - A_0)^{-1}B_1$ and the contemporaneous causal ordering induced by $A_0$. We estimate the matrix $(I - A_0)^{-1}B_1$ using Lemma 3.1, or specifically Equations (7). We use different algorithms: LiNGAM-GC-UK proposed in [2], C-M proposed in [7], and LiNGAM [16], to estimate the contemporaneous causal structure. We investigate the performances of the different algorithms in the scenarios of $m = 4$ with sample sizes from 500 to 4000 and $m = 8$ with sample sizes from 1000 to 10000. For each scenario, we conduct 100 independent random trials and discard those trials where the SVAR processes are not stable. We calculate the accuracies of LiNGAM-GC-UK, C-M, and LiNGAM in finding (1) the whole causal ordering and (2) the exogenous variable (root) of the causal network. We also calculate the sum squared error Err of the estimated causal strength matrix of each algorithm with respect to the true one.
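Step 5 of Algorithm 1 rests on the pairwise test of eq. (12). Below is a sketch of that test, with the unit-absolute-kurtosis standardization described above; this is our own illustrative implementation, not the authors' released LiNGAM-GC-UK code.

```python
import numpy as np

def lingam_gc_direction(x, y):
    """Pairwise causal direction via the cumulant measure of eq. (12).

    Standardizes x and y to unit absolute kurtosis, then returns
    'x->y' if R_xy > 0 and 'y->x' otherwise, following [2].
    """
    x = x - x.mean()
    y = y - y.mean()
    # fourth cumulants (excess kurtosis, unnormalized) of the centered data
    kx = abs(np.mean(x ** 4) - 3 * np.mean(x ** 2) ** 2)
    ky = abs(np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2)
    x = x / kx ** 0.25  # scaling by |cum4|^(1/4) gives unit absolute kurtosis
    y = y / ky ** 0.25

    Cxy = np.mean(x ** 3 * y) - 3 * np.mean(x * y) * np.mean(x ** 2)
    Cyx = np.mean(x * y ** 3) - 3 * np.mean(x * y) * np.mean(y ** 2)
    R = (Cxy + Cyx) * (Cxy - Cyx)
    return 'x->y' if R > 0 else 'y->x'
```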
The average SNR, defined as $\mathrm{SNR} = 10 \log \frac{\sum_i \mathrm{Var}(\varepsilon_{it})}{\sum_i \mathrm{Var}(f_{it})}$, is about 13.85 dB. The experimental results are shown in Figure 1 and Table 1. Figure 1 shows the plots of the estimated entries of $(I - A_0)^{-1}B_1$ versus the true ones when the dimension of the observations is $m = 8$. From Figure 1, we can see that the matrix $(I - A_0)^{-1}B_1$ is estimated well even when the sample size is only 1000. This confirms the correctness of our theoretical analysis of the proposed model. From Table 1, we can see that when the dimension of the observations is small ($m = 4$), all algorithms have acceptable performance. The performance of LiNGAM is the best when the sample size is small; this is because C-M and LiNGAM-GC-UK are cumulant-based methods, which need a sufficiently large sample size. When the dimension of the observations increases to $m = 8$, the performances of C-M and LiNGAM degrade dramatically, while LiNGAM-GC-UK still successfully finds the exogenous variable (root), and even the whole contemporaneous causal ordering among the variances of the observations, if the sample size is sufficiently large. This is mainly due to the fact that when the dimension increases, the confounding effects of $Df_t - (I - A_0)^{-1}B_1 Df_{t-1}$ become more problematic, so that the performances of C-M and LiNGAM are strongly affected by the confounding effect. Table 1 also shows the estimation accuracies of the compared methods. Among them, LiNGAM-GC-UK significantly outperforms the other methods given a sufficiently large sample size.

Figure 1: Estimated entries of the causal strength matrix $(I - A_0)^{-1}B_1$ vs the true ones ($m = 8$), for sample sizes 1000 to 10000.
Figure 2: Contemporaneous causal network of the selected stock indices.

Table 1: Accuracy of finding the causal ordering (each cell: C-M / LiNGAM / LiNGAM-GC-UK)

m = 4
sample size   whole causal ordering    first variable found       Err
500           37% / 70% / 28%          61% / 85% / 60%            0.1101 / 0.0326 / 0.0938
1000          47% / 75% / 25%          25% / 92% / 72%            0.0865 / 0.024 / 0.0444
2000          74% / 86% / 81%          82% / 90% / 92%            0.0679 / 0.02 / 0.0199
3000          67% / 78% / 90%          79% / 88% / 96%            0.0716 / 0.0201 / 0.0126
4000          63% / 83% / 90%          81% / 92% / 94%            0.0669 / 0.0193 / 0.0109

m = 8
sample size   whole causal ordering    first variable found       Err
1000          0% / 23.08% / 8.79%      20.88% / 75.82% / 65.93%   0.8516 / 0.2318 / 0.3017
2000          1.14% / 26.14% / 25%     25% / 70.45% / 75%         0.7866 / 0.2082 / 0.1396
4000          0% / 31.87% / 58.24%     19.78% / 82.41% / 86.81%   0.7537 / 0.1916 / 0.0634
6000          0% / 25.29% / 83.91%     25.29% / 75.86% / 96.55%   0.7638 / 0.1843 / 0.0341
8000          2.20% / 30.77% / 80.22%  17.58% / 79.12% / 91.21%   0.7735 / 0.1824 / 0.029
10000         0% / 23.53% / 91.76%     12.94% / 68.24% / 97.64%   0.7794 / 0.194 / 0.0199

In order to investigate the robustness of our method against the Gaussian assumption on the external factors $f_t$, we conduct the following experiment. The experimental setting is the same as above, but here the external factors $f_t$ are non-Gaussian; more specifically, $f_{it} = \mathrm{sign}(n_{it})|n_{it}|^p$, where $n_{it} \sim N(0, 0.5)$. When $p > 1$ the factor is super-Gaussian, and when $p < 1$ it is sub-Gaussian. We investigate the performances of LiNGAM-GC-UK, LiNGAM, and C-M in finding the whole causal ordering in the scenarios $p \in \{0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6\}$ with a sample size of 6000.
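The non-Gaussian factors used in this robustness experiment are easy to reproduce; the sketch below checks that $f = \mathrm{sign}(n)|n|^p$ has negative excess kurtosis (sub-Gaussian) for $p < 1$ and positive excess kurtosis (super-Gaussian) for $p > 1$. The sample size here is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

def power_factor(p, size):
    """f = sign(n)|n|^p with n ~ N(0, 0.5), the Section 5.1 construction."""
    n = rng.normal(0.0, np.sqrt(0.5), size)
    return np.sign(n) * np.abs(n) ** p

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

for p in (0.4, 1.0, 1.6):
    print(p, round(excess_kurtosis(power_factor(p, 200000)), 3))
# prints a negative value, ~0, and a positive value, respectively
```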
The results in Figure 3 show that LiNGAM-GC-UK achieves satisfactory results compared to LiNGAM and C-M. This suggests that although LiNGAM-GC is developed under the assumption that the latent confounders are Gaussian distributed, it is still robust in scenarios where the latent confounders are mildly non-Gaussian with mild causal strengths.

Figure 3: Robustness against Gaussianity of $f_t$ (accuracy of finding the whole causal ordering, and the kurtosis of $f_t$, as functions of $p$).

5.2 Real-world data

In this section, we use our new model to discover the causal relations among five major world stock indices: (1) Dow Jones Industrial Average (DJI), (2) FTSE 100 (FTSE), (3) Nasdaq-100 (NDX), (4) CAC 40 (FCHI), and (5) DAX (GDAXI), where DJI and NDX are stock indices in the US, and FTSE, FCHI, and GDAXI are indices in Europe. Note that because of the time difference, we believe that the causal relations among these stock indices are mainly acyclic, as assumed in this paper. We collect the adjusted close prices of these selected indices from May 2nd, 2006 to April 12th, 2012, and use linear interpolation to estimate the prices on those dates when the data are not available. We apply our proposed model with an SVAR(1) to model the spatiotemporal variance dependencies of the data. For the contemporaneous causal structure discovery, we use LiNGAM-GC-UK, C-M, LiNGAM², and Direct-LiNGAM³ to estimate the causal ordering. The causal orderings discovered by the different algorithms are shown in Table 2.

Table 2: Contemporaneous causal ordering of the selected stock indices
LiNGAM-GC-UK:   {2} → {4} → {5} → {1} → {3}
C-M:            {1} → {2} → {4} → {5}, {1} → {3}
LiNGAM:         {2} → {5} → {3} → {1}, {2} → {4}
Direct-LiNGAM:  {3} → {1} → {5} → {4} → {2}

From Table 2, we see that in the causal orderings discovered by LiNGAM-GC-UK and LiNGAM, the stock indices in the US, i.e., DJI and NDX, are contemporaneously affected by the indices in Europe. Note that each stock index is given in local time. Because of the time difference between Europe and America and the efficient market hypothesis (the market is quick to absorb new information and adjust stock prices accordingly), the contemporaneous causal relations should run from Europe to America, if they exist. This is consistent with the results produced by our method and LiNGAM. Another interesting finding is that in the graphs obtained by LiNGAM-GC-UK and LiNGAM, FTSE is the root, which is consistent with the fact that London is the financial centre of Europe and FTSE is regarded as Europe's most important index. However, in the results of C-M and Direct-LiNGAM, we have the opposite direction, i.e., the stock indices in the US are contemporaneously the cause of the indices in Europe, which is difficult to interpret. The contemporaneous causal network of the stock indices is shown in Figure 2. Further interpretation of the discovered causal strengths requires expert knowledge.

6 Conclusion

In this paper, we investigate the causal discovery problem where causality takes place at the level of variances or energies instead of the observed variables themselves. We propose a causal scale-mixture model with spatiotemporal variance dependencies to describe this type of causal mechanism. We show that the model is fully identifiable under the non-Gaussian assumption on the innovations.
In addition, we propose algorithms to estimate the parameters, especially the contemporaneous causal structure, of this model. Experimental results on synthetic data verify the practical usefulness of our model and the effectiveness of our algorithms. Results on real-world data further suggest that our new model can possibly explain the underlying interaction mechanism of major world stock markets.

Acknowledgments

The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China.

² LiNGAM converges to several local optima; we only show one of the discovered causal orderings here. The code is available at http://www.cs.helsinki.fi/group/neuroinf/lingam/
³ http://www.ar.sanken.osaka-u.ac.jp/~inazumi/dlingam.html

References

[1] T. Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3):307-327, 1986.
[2] Z. Chen and L. Chan. Causal discovery for linear non-Gaussian acyclic models in the presence of latent Gaussian confounders. In Proceedings of the 10th International Conference on Latent Variable Analysis and Signal Separation, pages 17-24. Springer-Verlag, 2012.
[3] P. Comon. Independent component analysis, a new concept? Signal Processing, 36(3):287-314, 1994.
[4] R. Henao and O. Winther. Sparse linear identifiable multivariate modeling. Journal of Machine Learning Research, 12:863-905, 2011.
[5] J. Hirayama and A. Hyvärinen. Structural equations and divisive normalization for energy-dependent component analysis. Advances in Neural Information Processing Systems (NIPS 2011), 24, 2012.
[6] P. O. Hoyer, S. Shimizu, A. J. Kerminen, and M. Palviainen. Estimation of causal effects using linear non-Gaussian causal models with hidden variables. International Journal of Approximate Reasoning, 49(2):362-378, 2008.
[7] A. Hyvärinen. Pairwise measures of causal direction in linear non-Gaussian acyclic models. In JMLR Workshop and Conference Proceedings (Proc. 2nd Asian Conference on Machine Learning), ACML 2010, volume 13, pages 1-16, 2010.
[8] A. Hyvärinen, P. O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527-1558, 2001.
[9] A. Hyvärinen and J. Hurri. Blind separation of sources that have spatiotemporal variance dependencies. Signal Processing, 84(2):247-254, 2004.
[10] A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4-5):411-430, 2000.
[11] A. Hyvärinen, K. Zhang, S. Shimizu, and P. O. Hoyer. Estimation of a structural vector autoregression model using non-Gaussianity. Journal of Machine Learning Research, 11:1709-1731, 2010.
[12] D. Janzing, J. Mooij, K. Zhang, J. Lemeire, J. Zscheischler, P. Daniušis, B. Steudel, and B. Schölkopf. Information-geometric approach to inferring causal directions. Artificial Intelligence, 2012.
[13] Y. Kawahara, S. Shimizu, and T. Washio. Analyzing relationships among ARMA processes based on non-Gaussianity of external influences. Neurocomputing, 2011.
[14] A. Moneta, D. Entner, P. O. Hoyer, and A. Coad. Causal inference by independent component analysis with applications to micro- and macroeconomic data. Jena Economic Research Papers, 2010:031, 2010.
[15] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[16] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003-2030, 2006.
[17] S. Shimizu, T. Inazumi, Y. Sogawa, A. Hyvärinen, Y. Kawahara, T. Washio, P. O. Hoyer, and K. Bollen. DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model. Journal of Machine Learning Research, 12:1225-1248, 2011.
[18] Y. Sogawa, S. Shimizu, T. Shimamura, A. Hyvärinen, T. Washio, and S. Imoto. Estimating exogenous variables in data with more variables than observations. Neural Networks, 2011.
[19] P. Spirtes, C. N. Glymour, and R. Scheines. Causation, Prediction, and Search. The MIT Press, 2000.
[20] K. Zhang and L. Chan. Efficient factor GARCH models and factor-DCC models. Quantitative Finance, 9(1):71-91, 2009.
[21] K. Zhang and L. W. Chan. Extensions of ICA for causality discovery in the Hong Kong stock market. In Proc. of the 13th International Conference on Neural Information Processing, Part III, pages 400-409. Springer-Verlag, 2006.
[22] K. Zhang and A. Hyvärinen. On the identifiability of the post-nonlinear causal model. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 647-655, 2009.
[23] K. Zhang and A. Hyvärinen. Source separation and higher-order causal analysis of MEG and EEG. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, pages 709-716, 2010.
[24] K. Zhang and A. Hyvärinen. A general linear non-Gaussian state-space model: Identifiability, identification, and applications. In Proceedings of the Asian Conference on Machine Learning, JMLR W&CP, pages 113-128, 2011.
Tight Bounds on Profile Redundancy and Distinguishability

Jayadev Acharya ECE, UCSD [email protected]
Hirakendu Das Yahoo! [email protected]
Alon Orlitsky ECE & CSE, UCSD [email protected]

Abstract

The minimax KL-divergence of any distribution from all distributions in a collection $\mathcal{P}$ has several practical implications. In compression, it is called redundancy and represents the least additional number of bits over the entropy needed to encode the output of any distribution in $\mathcal{P}$. In online estimation and learning, it is the lowest expected log-loss regret when guessing a sequence of random values generated by a distribution in $\mathcal{P}$. In hypothesis testing, it upper bounds the largest number of distinguishable distributions in $\mathcal{P}$. Motivated by problems ranging from population estimation to text classification and speech recognition, several machine-learning and information-theory researchers have recently considered label-invariant observations and properties induced by i.i.d. distributions. A sufficient statistic for all these properties is the data's profile, the multiset of the number of times each data element appears. Improving on a sequence of previous works, we show that the redundancy of the collection of distributions induced over profiles by length-$n$ i.i.d. sequences is between $0.3\,n^{1/3}$ and $n^{1/3}\log^2 n$, in particular establishing its exact growth power.

1 Introduction

Information theory, machine learning, and statistics are closely related disciplines. One of their main intersection areas is the confluence of universal compression, online learning, and hypothesis testing. We consider two concepts in this overlap: the minimax KL divergence, a fundamental measure of (among other things) how difficult distributions are to compress, predict, and classify; and profiles, a relatively new approach for compression, classification, and property testing over large alphabets. Improving on several previous results, we determine the exact growth power of the KL-divergence minimax of profiles of i.i.d. distributions over any alphabet.

1.1 Minimax KL divergence

As is well known in information theory, the expected number of bits required to compress data $X$ generated according to a known distribution $P$ is the distribution's entropy, $H(P) = E_P \log \frac{1}{P(X)}$, and is achieved by encoding $X$ using roughly $\log \frac{1}{P(X)}$ bits. However, in many applications $P$ is unknown, except that it belongs to a known collection $\mathcal{P}$ of distributions, for example the collection of all i.i.d., or all Markov, distributions. This uncertainty typically raises the number of bits above the entropy and is studied in universal compression [9, 13]. Any encoding corresponds to some distribution $Q$ over the encoded symbols. Hence the increase in the expected number of bits used to encode the output of $P$ is $E_P \log \frac{1}{Q(X)} - H(P) = D(P\|Q)$, the KL divergence between $P$ and $Q$. Typically one is interested in the highest increase for any distribution $P \in \mathcal{P}$, and finds the encoding that minimizes it. The resulting quantity, called the (expected) redundancy of $\mathcal{P}$, e.g., [8, Chap. 13], is therefore the KL minimax
$$R(\mathcal{P}) \stackrel{\text{def}}{=} \min_{Q} \max_{P \in \mathcal{P}} D(P\|Q).$$
The same quantity arises in online learning, e.g., [5, Ch. 9], where the probabilities of random elements $X_1, \ldots, X_n$ are sequentially estimated. One of the most popular measures of the performance of an estimator $Q$ is the per-symbol log loss $\frac{1}{n}\sum_{i=1}^{n} \log \frac{1}{Q(X_i \mid X^{i-1})}$. As in compression, for an underlying distribution $P \in \mathcal{P}$, the expected log loss is $E_P \log \frac{1}{Q(X)}$, and the log-loss regret is $E_P \log \frac{1}{Q(X)} - H(P) = D(P\|Q)$. The maximal expected regret over all distributions in $\mathcal{P}$, minimized over all estimators $Q$, is again the KL minimax, namely redundancy.
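For a finite collection $\mathcal{P}$, the minimax $R(\mathcal{P})$ can be computed numerically: by the classical redundancy-capacity theorem (background knowledge, not a result of this paper), $R(\mathcal{P})$ equals the capacity of the channel whose rows are the distributions in $\mathcal{P}$, so the Blahut-Arimoto iteration applies. A minimal sketch:

```python
import numpy as np

def redundancy_bits(P, iters=3000):
    """Numerically compute R(P) = min_Q max_i D(P_i || Q) in bits for a
    finite collection P (rows of the array), via Blahut-Arimoto."""
    def kl(p, q):  # D(p || q) in bits
        m = p > 0
        return float(np.sum(p[m] * np.log2(p[m] / q[m])))

    k = P.shape[0]
    r = np.full(k, 1.0 / k)      # prior (weighting) over the collection
    for _ in range(iters):
        q = r @ P                # induced mixture plays the role of Q
        r *= np.exp2([kl(P[i], q) for i in range(k)])
        r /= r.sum()
    q = r @ P
    return max(kl(P[i], q) for i in range(k))

# e.g., two biased coins {B(0.3), B(0.7)}: R = 1 - h(0.3) ~ 0.1187 bits
print(redundancy_bits(np.array([[0.3, 0.7], [0.7, 0.3]])))
```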
P, the expected log loss is EP log 1/Q(X), and the log-loss regret is EP log 1/Q(X) ? H(P ) = D(P ||Q). The maximal expected regret for any distribution in P, minimized over all estimators Q is again the KL minimax, namely, redundancy. 1 In statistics, redundancy arises in multiple hypothesis testing. Consider the largest number of distributions that can be distinguished from their observations. For example, the largest number of topics distinguishable based on text of a given length. Let P be a collection of distributions over a support set X . As in [18], a sub-collection S ? P of the distributions is -distinguishable if there is a mapping f : X ? S such that if X is generated by a distribution S ? S, then P (f (X) 6= S) ? . Let M (P, ) be the largest number of -distinguishable distributions in P, and let h() be the binary entropy function. In Section 4 we show that for all P, (1 ? ) log M (P, ) ? R(P) + h(), (1) and in many cases, like the one considered here, the inequality is close to equality. Redundancy has many other connections to data compression [27, 28], the minimum-description-length principle [3, 16, 17], sequential prediction [21], and gambling [20]. Because of the fundamental nature of R(P), and since tight bounds on it often reveal the structure of P, the value of R(P) has been studied extensively in all three communities, e.g., the above references as well as [29, 37] and a related minimax in [6]. 1.2 Redundancy of i.i.d. distributions The most extensively studied collections are independently, identically distributed (i.i.d.). For example, for the collection Ikn of length-n i.i.d. distributions over alphabets of size k, a string of works [7, 10, 11, 28, 33, 35, 36] determined the redundancy up to a diminishing additive term, R(Ikn ) = k?1 log n + Ck + o(1), 2 (2) where the constant Ck was determined exactly in terms of k. For compression this shows that the extra number of bits per symbol required to encode an i.i.d. sequence when the underlying distribution is unknown diminishes to zero as (k ? 1) log n/(2n). For online learning this shows that these distributions can be learned (or approximated) and that this approximation can be done at the above rate. In hypothesis testing this shows that there are roughly n(k?1)/2 distinguishable i.i.d. distributions of alphabet size k and length n. Unfortunately, while R(Ikn ) increases logarithmically in the sequence length n, it grows linearly in the alphabet size k. For sufficiently large k, this value even exceeds n itself, showing that general distributions over large alphabets cannot be compressed or learned at a uniform rate over all alphabet sizes, and as the alphabet size increases, progressively larger lengths are needed to achieve a given redundancy, learning rate, or test error. 1.3 Patterns Partly motivated by redundancy?s fast increase with the alphabet size, a new approach was recently proposed to address compression, estimation, classification, and property testing over large alphabets. The pattern [25] of a sequence represents the relative order in which its symbols appear. For example, the pattern of abracadabra is 12314151231. A natural method to compress a sequence over a large alphabet is to compress its pattern as well as the dictionary that maps the order to the original symbols. For example, for abracadabra, 1 ? a, 2 ? b, 3 ? r, 4 ? c, 5 ? d. It can be shown [15, 26] that for all i.i.d. 
distributions, over any alphabet, even infinitely large, as the sequence length increases, essentially all the entropy lies in the pattern, and practically none is in the dictionary. Hence [25] focused on the redundancy of compressing patterns. They showed, e.g., in Subsection 1.5, that although, as in (2), i.i.d. sequences over large alphabets have arbitrarily high per-symbol redundancy, and although as above patterns contain essentially all the information of long sequences, the per-symbol redundancy of patterns diminishes to zero at a uniform rate independent of the alphabet size. In online learning, patterns correspond to estimating the probabilities of each observed symbol, and of all unseen ones combined. For example, after observing the sequence dad, with pattern 121, we estimate the probabilities of 1, 2, and 3. The probability we assign to 1 is that of d, the probability we assign to 2 is that of a, and the probability we assign to 3 is the probability of all remaining letters combined. The aforementioned results imply that while distributions over large alphabets cannot be learned with uniformly diminishing per-symbol log loss, if we would like to estimate the probability of each seen element, but combine together the probabilities of all unseen ones, then the per-symbol log loss diminishes to zero uniformly, regardless of the alphabet size.

1.4 Profiles

Improving on existing pattern-redundancy bounds seems easier to accomplish via profiles. Since we consider i.i.d. distributions, the order of the elements in a pattern does not affect its probability; for example, for every distribution $P$, $P(112) = P(121)$. It is easy to see that the probability of a pattern is determined by the fingerprint [4] or profile [25] of the pattern, the multiset of the numbers of appearances of the symbols in the pattern. For example, the profile of the pattern 121 is $\{1, 2\}$, and all patterns with this profile, namely 112, 121, and 122, have the same probability under any distribution $P$. Similarly, the profile of 1213 is $\{1, 1, 2\}$, and all patterns with this profile, 1123, 1213, 1231, 1223, 1232, and 1233, have the same probability under any distribution. Since all patterns of a given profile have the same probability, the ratio between the actual and estimated probability of a profile is the same as this ratio for each of its patterns. Hence pattern redundancy is the same as profile redundancy [25]. Therefore from now on we consider only profile redundancy, and begin by defining it more formally. The multiplicity $\mu(a)$ of a symbol $a$ in a sequence is the number of times it appears. The profile $\varphi(x)$ of a sequence $x$ is the multiset of multiplicities of all symbols appearing in it [24, 25]. For example, the sequence ababcde has multiplicities $\mu(a) = \mu(b) = 2$, $\mu(c) = \mu(d) = \mu(e) = 1$, and profile $\{1, 1, 1, 2, 2\}$. The prevalence $\varphi_\mu$ of a multiplicity $\mu$ is the number of elements with multiplicity $\mu$. Let $\Phi^n$ denote the collection of all profiles of length-$n$ sequences. For example, for sequences of length one there is a single element appearing once, hence $\Phi^1 = \{\{1\}\}$; for length two, either one element appears twice or each of two elements appears once, hence $\Phi^2 = \{\{2\}, \{1, 1\}\}$; similarly $\Phi^3 = \{\{3\}, \{2, 1\}, \{1, 1, 1\}\}$, etc. We consider the distributions induced on $\Phi^n$ by all discrete i.i.d. distributions over any alphabet. The probability that an i.i.d. distribution $P$ generates an $n$-element sequence $x$ is $P(x) \stackrel{\text{def}}{=} \prod_{i=1}^{n} P(x_i)$.
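Patterns and profiles are straightforward to compute; the following sketch reproduces the abracadabra and ababcde examples above.

```python
from collections import Counter

def pattern(seq):
    """Relative order of first appearance: pattern('abracadabra')
    gives 1 2 3 1 4 1 5 1 2 3 1, matching the example above."""
    order = {}
    return [order.setdefault(s, len(order) + 1) for s in seq]

def profile(seq):
    """Multiset (here: sorted list) of symbol multiplicities."""
    return sorted(Counter(seq).values())

print(''.join(map(str, pattern("abracadabra"))))  # 12314151231
print(profile("abracadabra"))                     # [1, 1, 2, 2, 5]
print(profile("ababcde"))                         # [1, 1, 1, 2, 2]
```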
The probability of a profile $\varphi \in \Phi^n$ is the sum of the probabilities of all sequences with this profile, $P(\varphi) \stackrel{\text{def}}{=} \sum_{x : \varphi(x) = \varphi} P(x)$. For example, if $P$ is $B(2/3)$ over h and t, then for $n = 3$, $P(\{3\}) = P(\text{hhh}) + P(\text{ttt}) = 1/3$, $P(\{2, 1\}) = P(\text{hht}) + P(\text{hth}) + P(\text{thh}) + P(\text{tth}) + P(\text{tht}) + P(\text{htt}) = 2/3$, and $P(\{1, 1, 1\}) = 0$, as this $P$ is binary, hence at most two symbols can appear. On the other hand, if $P$ is a roll of a fair die, then $P(\{3\}) = 1/36$, $P(\{2, 1\}) = 5/12$, and $P(\{1, 1, 1\}) = 5/9$. We let $I_\Phi^n = \{P(\varphi) : P \text{ is a discrete i.i.d. distribution}\}$ be the collection of all distributions on $\Phi^n$ induced by any discrete i.i.d. distribution over any alphabet, possibly even infinite. It is easy to see that any relabeling of the elements in an i.i.d. distribution leaves the profile distribution unchanged, for example, if instead of h and t above we have a distribution over 0's and 1's. Furthermore, profiles are sufficient statistics for every label-invariant property. While many theoretical properties of profiles are known, even calculating the profile probabilities for a given distribution and a profile seems hard [23, 38] in general. Profile redundancy arises in at least two other machine-learning applications: closeness testing and classification. In closeness testing [4], we try to determine whether two sequences are generated by the same or different distributions. In classification, we try to assign a test sequence to one of two training sequences. Joint profiles and quantities related to profile redundancy are used to construct competitive closeness tests and classifiers that perform almost as well as the best possible [1, 2]. Profiles also arise in statistics, in estimating symmetric or label-invariant properties of i.i.d. distributions ([34] and references therein), for example the support size, entropy, moments, or number of heavy hitters. All these properties depend only on the multiset of probability values in the distribution; for example, the entropy of the distribution $p(\text{heads}) = .6$, $p(\text{tails}) = .4$ depends only on the probability multiset $\{.6, .4\}$. For all these properties, profiles are a sufficient statistic.

1.5 Previous Results

As patterns and profiles have the same redundancy, we describe the results for profiles. Instead of the expected redundancy $R(I_\Phi^n)$, which reflects the increase in the expected number of bits, [25] bounded the more stringent but closely related worst-case redundancy $\hat R(I_\Phi^n)$, reflecting the increase in the worst-case number of bits, namely over all sequences. Using bounds [19] on the partition function, they showed that
$$\Theta(n^{1/3}) \le \hat R(I_\Phi^n) \le \pi\sqrt{\tfrac{2}{3}}\, n^{1/2}.$$
These bounds do not involve the alphabet size, hence show that unlike the sequences themselves, patterns (whose redundancy equals that of profiles), though containing essentially all the information of the sequence, can be compressed and learned with redundancy and log-loss diminishing as $n^{-1/2}$, uniformly over all alphabet sizes. Note however that by contrast to i.i.d. distributions, where the redundancy (2) was determined up to a diminishing additive constant, here not even the power was known. Consequently, several papers considered improvements of these bounds, mostly for expected redundancy, the minimax KL divergence. Since expected redundancy is at most the worst-case redundancy, the upper bound applies also to expected redundancy.
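The small profile probabilities above can be checked by brute-force enumeration over all length-$n$ sequences; this is exponential in $n$ and only meant to verify the $B(2/3)$ and fair-die examples.

```python
from collections import Counter
from itertools import product
from math import prod

def profile_prob(p, n, target):
    """P(profile = target) under i.i.d. p, by enumerating all len(p)^n
    sequences; only sensible for tiny n."""
    target = sorted(target)
    return sum(
        prod(p[s] for s in x)
        for x in product(range(len(p)), repeat=n)
        if sorted(Counter(x).values()) == target
    )

p = [2/3, 1/3]                        # B(2/3) over {h, t}
print(profile_prob(p, 3, [3]))        # 1/3
print(profile_prob(p, 3, [2, 1]))     # 2/3
print(profile_prob(p, 3, [1, 1, 1]))  # 0 (only two symbols exist)

die = [1/6] * 6
print(profile_prob(die, 3, [3]))      # 1/36
```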
Subsequently, [31] described a partial proof outline that could potentially show the following tighter upper bound on expected redundancy, and [14] proved the following lower bound, strengthening one in [32]:
$$1.84\left(\frac{n}{\log n}\right)^{1/3} \le R(I_\Phi^n) \le n^{0.4}. \qquad (3)$$

1.6 New results

In Theorem 15 we use error-correcting codes to exhibit a larger class of distinguishable distributions in $I_\Phi^n$ than was known before, thereby removing the $\log n$ factor from the lower bound in (3). In Theorem 11 we demonstrate a small number of distributions such that every distribution in $I_\Phi^n$ is within a small KL divergence from one of them, thereby reducing the upper bound to have the same power as the lower bound. Combining these results, we obtain
$$0.3\, n^{1/3} \le (1 - \epsilon)\log M(I_\Phi^n, \epsilon) \le R(I_\Phi^n) \le n^{1/3}\log^2 n. \qquad (4)$$
These results close the power gap between the upper and lower bounds that existed in the literature. They show that when a pattern is compressed, or a sequence is estimated (with all unseen elements combined into "new"), the per-symbol redundancy and log-loss decrease to 0 uniformly over all distributions faster than $\log^2 n / n^{2/3}$, a rate that is optimal up to a $\log^2 n$ factor. They also show that for length-$n$ profiles, the redundancy $R(I_\Phi^n)$ is essentially the logarithm $\log M(I_\Phi^n, \epsilon)$ of the number of distinguishable distributions.

1.7 Outline

In the next section we describe properties of Poisson sampling and redundancy that will be used later in the paper. In Section 3 we establish the upper bound and in Section 4 the lower bound. Most of the proofs are provided in the Appendix.

2 Preliminaries

We describe some techniques and results used in the proofs.

2.1 Poisson sampling

When a distribution is sampled i.i.d. exactly $n$ times, the multiplicities are dependent, complicating the analysis of many properties. A standard approach [22] to overcome the dependence is to sample the distribution a random $\mathrm{poi}(n)$ number of times, the Poisson distribution with parameter $n$, resulting in sequences of random length close to $n$. We let
$$\mathrm{poi}(\lambda, \mu) \stackrel{\text{def}}{=} e^{-\lambda}\lambda^{\mu}/\mu!$$
denote the probability that a $\mathrm{poi}(\lambda)$ random variable attains the value $\mu$. The following basic properties of Poisson sampling help simplify the analysis and relate it to fixed-length sampling.

Lemma 1. If a discrete i.i.d. distribution is sampled $\mathrm{poi}(n)$ times, then: (1) the numbers of appearances of different symbols are independent; (2) a symbol with probability $p$ appears $\mathrm{poi}(np)$ times; (3) for any fixed $n_0$, conditioned on the length $\mathrm{poi}(n) \ge n_0$, the first $n_0$ elements are distributed identically to sampling $P$ exactly $n_0$ times.

We now express profile probabilities and redundancy under Poisson sampling. As we saw, the probability of a profile is determined by the multiset of probability values alone, and the symbol labels are irrelevant. For convenience, we assume that the distribution is over the positive integers, and we replace the distribution parameters $\{p_i\}$ by the Poisson parameters $\{np_i\}$. For a distribution $P = \{p_1, p_2, \ldots\}$, let $\lambda_i \stackrel{\text{def}}{=} np_i$ and $\Lambda = \{\lambda_1, \lambda_2, \ldots\}$. The profile generated by this distribution is a multiset $\varphi = \{\mu_1, \mu_2, \ldots\}$, where each $\mu_i$ is generated independently according to $\mathrm{poi}(\lambda_i)$. The probability that $\Lambda$ generates $\varphi$ is [1, 25]
$$\Lambda(\varphi) = \frac{1}{\prod_{\mu=0}^{\infty} \varphi_\mu!} \sum_{\sigma} \prod_{i} \mathrm{poi}(\lambda_{\sigma(i)}, \mu_i), \qquad (5)$$
where the summation is over all permutations $\sigma$ of the support set. For example, for $\Lambda = \{\lambda_1, \lambda_2, \lambda_3\}$, the profile $\varphi = \{2, 2, 3\}$ can be generated by specifying which element appears three times. This is reflected by the $\varphi_2!$
in the denominator, and each of the repeated terms in the numerator is counted only once. Similar to $I_\Phi^n$, we use $I_\Phi^{\mathrm{poi}(n)}$ to denote the class of distributions induced on $\Phi^* = \Phi^0 \cup \Phi^1 \cup \Phi^2 \cup \ldots$ when sequences of length $\mathrm{poi}(n)$ are generated i.i.d. It is easy to see that a distribution in $I_\Phi^{\mathrm{poi}(n)}$ corresponds to a collection of $\lambda_i$'s summing to $n$. The redundancy $R(I_\Phi^{\mathrm{poi}(n)})$ and $\epsilon$-distinguishability $M(I_\Phi^{\mathrm{poi}(n)}, \epsilon)$ are defined as before. The following lemma shows that bounding $M(I_\Phi^{\mathrm{poi}(n)}, \epsilon)$ and $R(I_\Phi^{\mathrm{poi}(n)})$ is sufficient to bound $R(I_\Phi^n)$.

Lemma 2. For any fixed $\epsilon > 0$,
$$(1 - o(1))\, R\big(I_\Phi^{\,n - \sqrt{n \log n}}\big) \le R\big(I_\Phi^{\mathrm{poi}(n)}\big) \quad \text{and} \quad M\big(I_\Phi^{\mathrm{poi}(n)}, \epsilon\big) \le M\big(I_\Phi^{\,n + \sqrt{n \log n}}, 2\epsilon\big).$$

Proof sketch. It is easy to show that $R(I_\Phi^n)$ and $M(I_\Phi^n, \epsilon)$ are non-decreasing in $n$. Combining this with the fact that the probability that $\mathrm{poi}(n)$ is less than $n - \sqrt{n \log n}$ or greater than $n + \sqrt{n \log n}$ goes to 0 yields the bounds. □

Finally, the next lemma, proved in the Appendix, provides a simple formula for cross expectations of Poisson distributions.

Lemma 3. For any $\lambda_0, \lambda_1, \lambda_2 > 0$,
$$E_{\mu \sim \mathrm{poi}(\lambda_1)}\left[\frac{\mathrm{poi}(\lambda_2, \mu)}{\mathrm{poi}(\lambda_0, \mu)}\right] = \exp\left(\frac{(\lambda_1 - \lambda_0)(\lambda_2 - \lambda_0)}{\lambda_0}\right).$$

2.2 Redundancy

We state some basic properties of redundancy. For a distribution $P$ over $A$ and a function $f : A \to B$, let $f(P)$ be the distribution over $B$ that assigns to $b \in B$ the probability $P(f^{-1}(b))$. Similarly, for a collection $\mathcal{P}$ of distributions over $A$, let $f(\mathcal{P}) = \{f(P) : P \in \mathcal{P}\}$. The convexity of KL divergence shows that $D(f(P)\|f(Q)) \le D(P\|Q)$, and can be used to show:

Lemma 4 (Function Redundancy). $R(f(\mathcal{P})) \le R(\mathcal{P})$.

For a collection $\mathcal{P}$ of distributions over $A \times B$, let $\mathcal{P}_A$ and $\mathcal{P}_B$ be the collections of marginal distributions over $A$ and $B$, respectively. In general, $R(\mathcal{P})$ can be larger or smaller than $R(\mathcal{P}_A) + R(\mathcal{P}_B)$. However, when $\mathcal{P}$ consists of product distributions, namely $P(a, b) = P_A(a) \cdot P_B(b)$, the redundancy of the product is at most the sum of the marginal redundancies. The proof is given in the Appendix.

Lemma 5 (Redundancy of products). If $\mathcal{P}$ is a collection of product distributions over $A \times B$, then $R(\mathcal{P}) \le R(\mathcal{P}_A) + R(\mathcal{P}_B)$.

For a prefix-free code $C : A \to \{0, 1\}^*$, let $E_P[|C|]$ be the expected length of $C$ under distribution $P$. Redundancy is the extra number of bits above the entropy needed to encode the output of any distribution in $\mathcal{P}$. Hence:

Lemma 6. For every prefix-free code $C$, $R(\mathcal{P}) \le \max_{P \in \mathcal{P}} E_P[|C|]$.

Lemma 7 (Redundancy of unions). If $\mathcal{P}_1, \ldots, \mathcal{P}_T$ are distribution collections, then
$$R\Big(\bigcup_{1 \le i \le T} \mathcal{P}_i\Big) \le \max_{1 \le i \le T} R(\mathcal{P}_i) + \log T.$$

3 Upper bound

A distribution $\Lambda \in I_\Phi^{\mathrm{poi}(n)}$ is a multiset of $\lambda$'s adding to $n$. For any such distribution, let
$$\Lambda_{\mathrm{low}} \stackrel{\text{def}}{=} \{\lambda \in \Lambda : \lambda \le n^{1/3}\}, \quad \Lambda_{\mathrm{med}} \stackrel{\text{def}}{=} \{\lambda \in \Lambda : n^{1/3} < \lambda \le n^{2/3}\}, \quad \Lambda_{\mathrm{high}} \stackrel{\text{def}}{=} \{\lambda \in \Lambda : \lambda > n^{2/3}\},$$
and let $\varphi_{\mathrm{low}}, \varphi_{\mathrm{med}}, \varphi_{\mathrm{high}}$ denote the corresponding profiles each subset generates; then $\varphi = \varphi_{\mathrm{low}} \cup \varphi_{\mathrm{med}} \cup \varphi_{\mathrm{high}}$. Let $I_{\Phi_{\mathrm{low}}} = \{\Lambda_{\mathrm{low}} : \Lambda \in I_\Phi^{\mathrm{poi}(n)}\}$ be the collection of all $\Lambda_{\mathrm{low}}$; note that $n$ is implicit here and in the rest of the paper. A distribution in $I_{\Phi_{\mathrm{low}}}$ is a multiset of $\lambda$'s, each at most $n^{1/3}$, summing either to $n$ or to at most $n - n^{1/3}$. $I_{\Phi_{\mathrm{med}}}$ and $I_{\Phi_{\mathrm{high}}}$ are defined similarly. $\varphi$ is determined by the triple $(\varphi_{\mathrm{low}}, \varphi_{\mathrm{med}}, \varphi_{\mathrm{high}})$, and under Poisson sampling, $\varphi_{\mathrm{low}}$, $\varphi_{\mathrm{med}}$, and $\varphi_{\mathrm{high}}$ are independent. Hence by Lemmas 4 and 5,
$$R\big(I_\Phi^{\mathrm{poi}(n)}\big) \le R\big(I_{(\varphi_{\mathrm{low}}, \varphi_{\mathrm{med}}, \varphi_{\mathrm{high}})}\big) \le R(I_{\Phi_{\mathrm{low}}}) + R(I_{\Phi_{\mathrm{med}}}) + R(I_{\Phi_{\mathrm{high}}}).$$
In Subsection 3.1 we show that $R(I_{\Phi_{\mathrm{low}}}) < 4n^{1/3}\log n$ and $R(I_{\Phi_{\mathrm{high}}}) < 4n^{1/3}\log n$. In Subsection 3.2 we show that $R(I_{\Phi_{\mathrm{med}}}) < \frac{1}{2}n^{1/3}\log^2 n$. In the next two subsections we elaborate on this overview and sketch some proof details.
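Lemma 3 is easy to verify numerically; the following sketch compares a Monte Carlo estimate of the left-hand side with the closed form (the particular $\lambda$ values are arbitrary).

```python
import numpy as np
from math import exp, factorial

def poi(lam, mu):
    """poi(lambda, mu) = e^{-lambda} lambda^mu / mu!"""
    return exp(-lam) * lam ** mu / factorial(mu)

l0, l1, l2 = 2.0, 3.0, 4.0
rng = np.random.default_rng(0)
mu = rng.poisson(l1, 500000)
lhs = np.mean([poi(l2, m) / poi(l0, m) for m in mu])
rhs = exp((l1 - l0) * (l2 - l0) / l0)  # = e^1 for these values
print(lhs, rhs)                        # both close to 2.718
```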
3.1 Bounds on $R(I_{\Phi_{\mathrm{low}}})$ and $R(I_{\Phi_{\mathrm{high}}})$

Elias codes [12] are prefix-free codes that encode a positive integer $n$ using at most $\log n + \log(\log n + 1) + 1$ bits. We use Elias codes to design explicit coding schemes for distributions in $I_{\Phi_{\mathrm{low}}}$ and $I_{\Phi_{\mathrm{high}}}$, and prove the following result.

Lemma 8. $R(I_{\Phi_{\mathrm{low}}}) < 4n^{1/3}\log n$, and $R(I_{\Phi_{\mathrm{high}}}) < 2n^{1/3}\log n$.

Proof. Any distribution $\Lambda_{\mathrm{high}} \in I_{\Phi_{\mathrm{high}}}$ consists of $\lambda$'s that are $> n^{2/3}$ and add to at most $n$. Hence $|\Lambda_{\mathrm{high}}| < n^{1/3}$, and so is the number of multiplicities in $\varphi_{\mathrm{high}}$. Each multiplicity is a $\mathrm{poi}(\lambda)$ random variable and is encoded separately using an Elias code. For example, the profile $\{100, 100, 200, 250, 500\}$ is encoded by coding the sequence $100, 100, 200, 250, 500$, all using the Elias scheme. For $\lambda > 10$, the expected number of bits needed to encode a $\mathrm{poi}(\lambda)$ random variable using Elias codes can be shown to be at most $2\log\lambda$. The expected code length is therefore at most $n^{1/3} \cdot 2\log n$. Applying Lemma 6 gives $R(I_{\Phi_{\mathrm{high}}}) < 2n^{1/3}\log n$.

A distribution $\Lambda_{\mathrm{low}} \in I_{\Phi_{\mathrm{low}}}$ consists of $\lambda$'s that are at most $n^{1/3}$ and sum to at most $n$. We encode the distinct multiplicities along with their prevalences, using two integers for each distinct multiplicity. For example, $\varphi = \{1, 1, 1, 1, 1, 2, 2, 2, 5\}$ is coded as $1, 5, 2, 3, 5, 1$. Using Poisson tail bounds, we bound the largest multiplicity in $\varphi_{\mathrm{low}}$, and use arguments similar to those for $I_{\Phi_{\mathrm{high}}}$ to obtain $R(I_{\Phi_{\mathrm{low}}}) < 4n^{1/3}\log n$. □
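The proof of Lemma 8 encodes Poisson multiplicities with Elias codes; below is a minimal sketch of the Elias gamma/delta construction and its length behavior (our own illustrative implementation).

```python
def elias_gamma(n):
    """Elias gamma code: unary prefix giving the bit-length of n, then
    the binary form of n; uses 2*floor(log2 n) + 1 bits."""
    assert n >= 1
    b = bin(n)[2:]
    return '0' * (len(b) - 1) + b

def elias_delta(n):
    """Elias delta code: gamma-code the bit-length of n, then append the
    remaining bits of n; total length is log n + O(log log n), matching
    (up to rounding) the log n + log(log n + 1) + 1 bound quoted above."""
    assert n >= 1
    b = bin(n)[2:]
    return elias_gamma(len(b)) + b[1:]

for n in (1, 2, 100, 500):
    cw = elias_delta(n)
    print(n, cw, len(cw))
```

Being prefix-free, the codes can be concatenated, which is exactly how the profile $\{100, 100, 200, 250, 500\}$ is encoded above.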
For any choice of $\Delta$, let $\lambda^*_j \stackrel{def}{=} n^{1/3} + \sum_{i=1}^{j-1} \Delta_i$ be the left end point of the interval $I_j$ for $j = 1, 2, \ldots, B$. We upper bound $R(\mathcal{I}_{\bar m})$ of any particular class $\bar m = (m_1, m_2, \ldots, m_B)$ in the following result.

Lemma 9. For all choices of $\Delta = (\Delta_1, \ldots, \Delta_B)$, and all classes $\mathcal{I}_{\bar m}$ such that $\bar m = (m_1, \ldots, m_B) \in \mathcal{T}(\Delta)$,
$$R(\mathcal{I}_{\bar m}) \le \sum_{j=1}^{B} m_j \frac{\Delta_j^2}{\lambda^*_j}.$$

Proof Sketch. For any choice of $\Delta$ and $\bar m = (m_1, \ldots, m_B) \in \mathcal{T}(\Delta)$, we show a distribution $\Lambda^* \in \mathcal{I}_{\bar m}$ such that for all $\Lambda \in \mathcal{I}_{\bar m}$, $D(\Lambda\|\Lambda^*) \le \sum_{j=1}^{B} m_j \frac{\Delta_j^2}{\lambda^*_j}$. Recall that for $\Lambda \in \mathcal{I}_{\bar m}$, $\Lambda_j$ is the set of elements of $\Lambda$ in $I_j$. Let $\varphi_j$ be the profile generated by $\Lambda_j$. Then $\varphi_{med} = \varphi_1 \cup \ldots \cup \varphi_B$. The distribution $\Lambda^*$ is chosen to be of the form $\{\lambda^{*\,m_1}_1, \lambda^{*\,m_2}_2, \ldots, \lambda^{*\,m_B}_B\}$, i.e., $\lambda^*_j$ repeated $m_j$ times, so that within each interval $\Lambda^*$ is uniform. The result follows from Lemma 3, and the details are in the Appendix. □

We now prove that $R(\mathcal{I}_{\varphi_{med}}) < \frac{1}{2} n^{1/3}\log^2 n$. By Lemma 7 it suffices to bound $R(\mathcal{I}_{\bar m})$. From Lemma 9 it follows that the choice of $\Delta$ determines the bound on $R(\mathcal{I}_{\bar m})$. A solution to the following optimization problem yields a bound:
$$\min_{\Delta}\ \max_{\bar m}\ \sum_{j=1}^{B} m_j \frac{\Delta_j^2}{\lambda^*_j}, \quad \text{subject to} \quad \sum_{j=1}^{B} m_j \lambda^*_j \le n.$$
Instead of minimizing over all partitions, we choose the endpoints of the intervals as a geometric series as a bound for the expression. The left end point of $I_j$ is $\lambda^*_j$, so $\lambda^*_1 = n^{1/3}$, and we let $\lambda^*_{j+1} = \lambda^*_j(1+c)$. The constant $c$ is chosen to ensure that $\lambda^*_1(1+c)^B = (1+c)^{n^{1/3}} n^{1/3} = n^{2/3}$, the right end point of $I_B$. This yields $c < 2\log(1+c) = \frac{2\log(n^{1/3})}{n^{1/3}}$. Now $\Delta_j = \lambda^*_{j+1} - \lambda^*_j = c\lambda^*_j$, so $\frac{\Delta_j^2}{\lambda^*_j} = c^2\lambda^*_j$. This translates the objective function to the constraint, and is in fact the optimal choice of intervals for the optimization problem (details omitted). Using this, for any $\bar m = (m_1, \ldots, m_B) \in \mathcal{T}(\Delta)$,
$$\sum_{j=1}^{B} m_j \frac{\Delta_j^2}{\lambda^*_j} = c^2 \sum_{j=1}^{B} m_j \lambda^*_j \le c^2 n < \left(\frac{2\log(n^{1/3})}{n^{1/3}}\right)^2 n = \frac{4}{9}\, n^{1/3}\log^2 n.$$
This, along with Lemma 7, gives the following corollary for sufficiently large $n$.

Corollary 10. For large $n$, $R(\mathcal{I}_{\varphi_{med}}) < \frac{1}{2}\, n^{1/3}\log^2 n$.

Combining Lemma 8 with this result yields,

Theorem 11. For sufficiently large $n$, $R(\mathcal{I}_\Phi^n) \le n^{1/3}\log^2 n$.

4 Lower bound

We use error-correcting codes to construct a collection of $2^{0.3 n^{1/3}}$ distinguishable distributions, improving by a logarithmic factor the bound in [14, 31].

The convexity of KL-divergence can be used to show

Lemma 12. Let $P$ and $Q$ be distributions on $A$. Suppose $A_1 \subseteq A$ is such that $P(A_1) \ge 1 - \epsilon > 1/2$ and $Q(A_1) \le \delta < 1/2$. Then
$$D(P\|Q) \ge (1-\epsilon)\log\frac{1-\epsilon}{\delta} - h(\epsilon).$$

We use this result to show that $(1-\epsilon)\log M(\mathcal{P}, \epsilon) - h(\epsilon) \le R(\mathcal{P})$. Recall that for $\mathcal{P}$ over $A$, $M \stackrel{def}{=} M(\mathcal{P}, \epsilon)$ is the largest number of $\epsilon$-distinguishable distributions in $\mathcal{P}$. Let $P_1, P_2, \ldots, P_M$ in $\mathcal{P}$ and $A_1, A_2, \ldots, A_M$ be a partition of $A$ such that $P_j(A_j) \ge 1 - \epsilon$. Let $Q_0$ be the distribution such that $R(\mathcal{P}) = \sup_{P\in\mathcal{P}} D(P\|Q_0)$. Since $\sum_{j=1}^{M} Q_0(A_j) = 1$, $Q_0(A_m) < \frac{1}{M}$ for some $m \in \{1, \ldots, M\}$. Also, $P_m(A_m) \ge 1 - \epsilon$. Plugging $P = P_m$, $Q = Q_0$, $A_1 = A_m$, and $\delta = 1/M$ into Lemma 12,
$$R(\mathcal{P}) \ge D(P_m\|Q_0) \ge (1-\epsilon)\log\big(M(\mathcal{P}, \epsilon)\big) - h(\epsilon). \tag{1}$$

We now describe the class of distinguishable distributions. Fix $C > 0$. Let $\lambda^*_i \stackrel{def}{=} C i^2$, $K \stackrel{def}{=} \lfloor (3n/C)^{1/3} \rfloor$, and $S \stackrel{def}{=} \{\lambda^*_i : 1 \le i \le K\}$. $K$ is chosen so that the sum of the elements in $S$ is at most $n$. Let $x = x_1 x_2 \ldots x_K$ be a binary string and
$$\Lambda_x \stackrel{def}{=} \{\lambda^*_i : x_i = 1\} \cup \Big\{n - \sum_i \lambda^*_i x_i\Big\}.$$
The distribution contains $\lambda^*_i$ whenever $x_i = 1$, and the last element ensures that the elements add up to $n$. A binary code of length $k$ and minimum distance $d_{min}$ is a collection of $k$-length binary strings such that the Hamming distance between any two strings is at least $d_{min}$.
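The construction of $\Lambda_x$ is concrete enough to code up; the sketch below (ours, not the paper's) builds the multiset for an arbitrary $K$-bit string, where in the actual lower bound the string would be a codeword of the code from Lemma 13.

```python
def make_lambda_x(x, C, n):
    """Build Lambda_x = {C*i^2 : x_i = 1} plus a final element that tops the
    total up to n, as in the lower-bound construction."""
    chosen = [C * (i + 1) ** 2 for i in range(len(x)) if x[i] == 1]
    remainder = n - sum(chosen)
    assert remainder >= 0, "K = floor((3n/C)^{1/3}) guarantees the support sums to <= n"
    return chosen + [remainder]

n, C = 10**6, 60
K = int((3 * n / C) ** (1 / 3))           # K = floor((3n/C)^{1/3})
x = [1, 0] * (K // 2) + [1] * (K % 2)     # an arbitrary K-bit string, for illustration
Lam = make_lambda_x(x, C, n)
print(K, sum(Lam))                         # the multiset sums to exactly n
```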
The size of the code is the number of elements (codewords) in it. The following shows the existence of codes with a specified minimum distance and size.

Lemma 13 ([30]). Let $\frac{1}{2} > \delta > 0$. There exists a code with $d_{min} \ge \delta k$ and size $\ge 2^{k(1 - h(\delta) - o(1))}$.

Let $\mathcal{C}$ be a code satisfying Lemma 13 for $k = K$ and let $\mathcal{L} = \{\Lambda_c : c \in \mathcal{C}\}$ be the set of distributions generated by using the strings in $\mathcal{C}$. The following result shows that distributions in $\mathcal{L}$ are distinguishable and is proved in the Appendix.

Lemma 14. The set $\mathcal{L}$ is $2e^{-C/4}$-distinguishable.

Plugging $\delta = 5 \times 10^{-5}$ and $C = 60$ into Lemma 13 and Equation (1) yields,

Theorem 15. For sufficiently large $n$, $0.3 \cdot n^{1/3} \le R(\mathcal{I}_\Phi^n)$.

Acknowledgments

The authors thank Ashkan Jafarpour and Ananda Theertha Suresh for many helpful discussions.

References

[1] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, and S. Pan. Competitive closeness testing. J. of Machine Learning Research - Proceedings Track, 19:47–68, 2011.
[2] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, S. Pan, and A. T. Suresh. Competitive classification and closeness testing. Journal of Machine Learning Research - Proceedings Track, 23:22.1–22.18, 2012.
[3] A. R. Barron, J. Rissanen, and B. Yu. The minimum description length principle in coding and modeling. IEEE Transactions on Information Theory, 44(6):2743–2760, 1998.
[4] T. Batu, L. Fortnow, R. Rubinfeld, W. D. Smith, and P. White. Testing that distributions are close. In Annual Symposium on Foundations of Computer Science, page 259, 2000.
[5] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[6] K. Chaudhuri and A. McGregor. Finding metric structure in information theoretic clustering. In Conference on Learning Theory, pages 391–402, 2008.
[7] T. Cover. Universal portfolios. Mathematical Finance, 1(1):1–29, January 1991.
[8] T. Cover and J. Thomas. Elements of Information Theory, 2nd Ed. Wiley Interscience, 2006.
[9] L. Davisson. Universal noiseless coding. IEEE Transactions on Information Theory, 19(6):783–795, November 1973.
[10] L. D. Davisson, R. J. McEliece, M. B. Pursley, and M. S. Wallace. Efficient universal noiseless source codes. IEEE Transactions on Information Theory, 27(3):269–279, 1981.
[11] M. Drmota and W. Szpankowski. Precise minimax redundancy and regret. IEEE Transactions on Information Theory, 50(11):2686–2707, 2004.
[12] P. Elias. Universal codeword sets and representations of the integers. IEEE Transactions on Information Theory, 21(2):194–203, March 1975.
[13] B. M. Fitingof. Optimal coding in the case of unknown and changing message statistics. Probl. Inform. Transm., 2(2):1–7, 1966.
[14] A. Garivier. A lower-bound for the maximin redundancy in pattern coding. Entropy, 11(4):634–642, 2009.
[15] G. M. Gemelos and T. Weissman. On the entropy rate of pattern processes. IEEE Transactions on Information Theory, 52(9):3994–4007, 2006.
[16] P. Grünwald. A tutorial introduction to the minimum description length principle. CoRR, math.ST/0406077, 2004.
[17] P. Grünwald, J. S. Jones, J. de Winter, and E. Smith. Safe learning: bridging the gap between bayes, mdl and statistical learning theory via empirical convexity. J. of Machine Learning Research - Proceedings Track, 19:397–420, 2011.
[18] P. D. Grünwald. The Minimum Description Length Principle. The MIT Press, 2007.
[19] G. Hardy and S. Ramanujan. Asymptotic formulae in combinatory analysis. Proceedings of London Mathematics Society, 17(2):75–115, 1918.
[20] J. Kelly. A new interpretation of information rate.
IEEE Transactions on Information Theory, 2(3):185–189, 1956.
[21] N. Merhav and M. Feder. Universal prediction. IEEE Transactions on Information Theory, 44(6):2124–2147, October 1998.
[22] M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.
[23] A. Orlitsky, S. Pan, Sajama, N. Santhanam, and K. Viswanathan. Pattern maximum likelihood: computation and experiments. In preparation, 2012.
[24] A. Orlitsky, N. Santhanam, K. Viswanathan, and J. Zhang. On modeling profiles instead of values. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, 2004.
[25] A. Orlitsky, N. Santhanam, and J. Zhang. Universal compression of memoryless sources over unknown alphabets. IEEE Transactions on Information Theory, 50(7):1469–1481, July 2004.
[26] A. Orlitsky, N. P. Santhanam, K. Viswanathan, and J. Zhang. Limit results on pattern entropy. IEEE Transactions on Information Theory, 52(7):2954–2964, 2006.
[27] J. Rissanen. Universal coding, information, prediction, and estimation. IEEE Transactions on Information Theory, 30(4):629–636, July 1984.
[28] J. Rissanen. Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42(1):40–47, January 1996.
[29] J. Rissanen, T. P. Speed, and B. Yu. Density estimation by stochastic complexity. IEEE Transactions on Information Theory, 38(2):315–323, 1992.
[30] R. M. Roth. Introduction to Coding Theory. Cambridge University Press, 2006.
[31] G. Shamir. A new upper bound on the redundancy of unknown alphabets. In Proceedings of The 38th Annual Conference on Information Sciences and Systems, Princeton, New Jersey, 2004.
[32] G. Shamir. Universal lossless compression with unknown alphabets: the average case. IEEE Transactions on Information Theory, 52(11):4915–4944, November 2006.
[33] W. Szpankowski. On asymptotics of certain recurrences arising in universal coding. Problems of Information Transmission, 34(2):142–146, 1998.
[34] P. Valiant. Testing symmetric properties of distributions. PhD thesis, Cambridge, MA, USA, 2008. AAI0821026.
[35] F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens. The context-tree weighting method: basic properties. IEEE Transactions on Information Theory, 41(3):653–664, 1995.
[36] Q. Xie and A. Barron. Asymptotic minimax regret for data compression, gambling and prediction. IEEE Transactions on Information Theory, 46(2):431–445, March 2000.
[37] B. Yu and T. P. Speed. A rate of convergence result for a universal d-semifaithful code. IEEE Transactions on Information Theory, 39(3):813–820, 1993.
[38] J. Zhang. Universal Compression and Probability Estimation with Unknown Alphabets. PhD thesis, UCSD, 2005.
Optimal Regularized Dual Averaging Methods for Stochastic Optimization

Xi Chen
Machine Learning Department
Carnegie Mellon University
[email protected]

Qihang Lin, Javier Peña
Tepper School of Business
Carnegie Mellon University
{qihangl,jfp}@andrew.cmu.edu

Abstract

This paper considers a wide spectrum of regularized stochastic optimization problems where both the loss function and regularizer can be non-smooth. We develop a novel algorithm based on the regularized dual averaging (RDA) method, that can simultaneously achieve the optimal convergence rates for both convex and strongly convex loss. In particular, for strongly convex loss, it achieves the optimal rate of $O(\frac{1}{N} + \frac{1}{N^2})$ for $N$ iterations, which improves the rate $O(\frac{\log N}{N})$ for previous regularized dual averaging algorithms. In addition, our method constructs the final solution directly from the proximal mapping instead of averaging of all previous iterates. For widely used sparsity-inducing regularizers (e.g., the $\ell_1$-norm), it has the advantage of encouraging sparser solutions. We further develop a multi-stage extension using the proposed algorithm as a subroutine, which achieves the uniformly-optimal rate $O(\frac{1}{N} + \exp\{-N\})$ for strongly convex loss.

1 Introduction

Many risk minimization problems in machine learning can be formulated into a regularized stochastic optimization problem of the following form:
$$\min_{x \in \mathcal{X}} \{\phi(x) := f(x) + h(x)\}. \tag{1}$$
Here, the set of feasible solutions $\mathcal{X}$ is a convex set in $\mathbb{R}^n$, which is endowed with a norm $\|\cdot\|$ and the dual norm $\|\cdot\|_*$. The regularizer $h(x)$ is assumed to be convex, but could be non-differentiable. Popular examples of $h(x)$ include the $\ell_1$-norm and related sparsity-inducing regularizers. The loss function $f(x)$ takes the form $f(x) := \mathbb{E}_\xi(F(x, \xi)) = \int F(x, \xi)\,dP(\xi)$, where $\xi$ is a random vector with the distribution $P$. In typical regression or classification tasks, $\xi$ is the input and response (or class label) pair. We assume that for every random vector $\xi$, $F(x, \xi)$ is a convex and continuous function in $x \in \mathcal{X}$. Therefore, $f(x)$ is also convex. Furthermore, we assume that there exist constants $L \ge 0$, $M \ge 0$ and $\tilde\mu \ge 0$ such that
$$\frac{\tilde\mu}{2}\|x - y\|^2 \le f(y) - f(x) - \langle y - x, f'(x)\rangle \le \frac{L}{2}\|x - y\|^2 + M\|x - y\|, \quad \forall x, y \in \mathcal{X}, \tag{2}$$
where $f'(x) \in \partial f(x)$, the subdifferential of $f$. We note that this assumption allows us to adopt a wide class of loss functions. For example, if $f(x)$ is smooth and its gradient $f'(x) = \nabla f(x)$ is Lipschitz continuous, we have $L > 0$ and $M = 0$ (e.g., squared or logistic loss). If $f(x)$ is non-smooth but Lipschitz continuous, we have $L = 0$ and $M > 0$ (e.g., hinge loss). If $\tilde\mu > 0$, $f(x)$ is strongly convex and $\tilde\mu$ is the so-called strong convexity parameter.

In general, the optimization problem in Eq.(1) is challenging since the integration in $f(x)$ is computationally intractable for high-dimensional $P$. In many learning problems, we do not even know the underlying distribution $P$ but can only generate i.i.d. samples $\xi$ from $P$. A traditional approach is to consider the empirical loss minimization problem where the expectation in $f(x)$ is replaced by its empirical average on a set of training samples $\{\xi_1, \ldots, \xi_m\}$: $f_{emp}(x) := \frac{1}{m}\sum_{i=1}^{m} F(x, \xi_i)$. However, for modern data-intensive applications, minimization of empirical loss with an off-line optimization solver could suffer from very poor scalability.
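Purely as an illustration of the setup in Eqs.(1) and (2), here is a minimal sketch of a stochastic first-order oracle (our own example, not from the paper): for least squares, $F(x, \xi) = \frac{1}{2}(a^T x - b)^2$ with $\xi = (a, b)$, and an unbiased stochastic gradient is immediate.

```python
import numpy as np

# Example instance of Eq.(1): F(x, xi) = 0.5*(a^T x - b)^2 with xi = (a, b).
# For this F, the upper bound in Eq.(2) holds with M = 0 and L the largest
# eigenvalue of E[a a^T]; mu_tilde is its smallest eigenvalue (both equal 1
# when a ~ N(0, I)).
rng = np.random.default_rng(0)

def sample_xi(x_star):
    a = rng.standard_normal(x_star.shape[0])
    b = a @ x_star + rng.standard_normal()
    return a, b

def stoch_grad(x, xi):
    # G(x, xi) in the paper's notation: E_xi[G(x, xi)] = f'(x).
    a, b = xi
    return (a @ x - b) * a
```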
In the past few years, many stochastic (sub)gradient methods [6, 5, 8, 12, 14, 10, 9, 11, 7, 18] have been developed to directly solve the stochastic optimization problem in Eq.(1), which enjoy low per-iteration complexity and the capability of scaling up to very large data sets. In particular, at the $t$-th iteration with the current iterate $x_t$, these methods randomly draw a sample $\xi_t$ from $P$; then compute the so-called "stochastic subgradient" $G(x_t, \xi_t) \in \partial_x F(x_t, \xi_t)$, where $\partial_x F(x_t, \xi_t)$ denotes the subdifferential of $F(x, \xi_t)$ with respect to $x$ at $x_t$; and update $x_t$ using $G(x_t, \xi_t)$. These algorithms fall into the class of stochastic approximation methods. Recently, Xiao [21] proposed the regularized dual averaging (RDA) method and its accelerated version (AC-RDA) based on Nesterov's primal-dual method [17]. Instead of only utilizing a single stochastic subgradient $G(x_t, \xi_t)$ of the current iteration, it updates the parameter vector using the average of all past stochastic subgradients $\{G(x_i, \xi_i)\}_{i=1}^{t}$ and hence leads to improved empirical performances.

In this paper, we propose a novel regularized dual averaging method, called optimal RDA or ORDA, which achieves the optimal expected convergence rate of $\mathbb{E}[\phi(\hat x) - \phi(x^*)]$, where $\hat x$ is the solution from ORDA and $x^*$ is the optimal solution of Eq.(1). As compared to previous dual averaging methods, it has three main advantages:

1. For strongly convex $f(x)$, ORDA improves the convergence rate of stochastic dual averaging methods, $O\left(\frac{\sigma^2\log N}{\tilde\mu N}\right) \approx O\left(\frac{\log N}{\tilde\mu N}\right)$ [17, 21], to an optimal rate $O\left(\frac{\sigma^2 + M^2}{\tilde\mu N} + \frac{L}{N^2}\right) \approx O\left(\frac{1}{\tilde\mu N}\right)$, where $\sigma^2$ is the variance of the stochastic subgradient, $N$ is the number of iterations, and the parameters $\tilde\mu$, $M$ and $L$ of $f(x)$ are defined in Eq.(2).

2. ORDA is a self-adaptive and optimal algorithm for solving both convex and strongly convex $f(x)$, with the strong convexity parameter $\tilde\mu$ as an input. When $\tilde\mu = 0$, ORDA reduces to a variant of AC-RDA in [21] with the optimal rate for solving convex $f(x)$. Furthermore, our analysis allows $f(x)$ to be non-smooth while AC-RDA requires the smoothness of $f(x)$. For strongly convex $f(x)$ with $\tilde\mu > 0$, our algorithm achieves the optimal rate of $O\left(\frac{1}{\tilde\mu N}\right)$, while AC-RDA does not utilize the advantage of strong convexity.

3. Existing RDA methods [21] and many other stochastic gradient methods (e.g., [14, 10]) can only show the convergence rate for the averaged iterates: $\bar x_N = \sum_{t=1}^{N} \varrho_t x_t / \sum_{t=1}^{N} \varrho_t$, where the $\{\varrho_t\}$ are nonnegative weights. However, in general, the averaged iterates $\bar x_N$ cannot keep the structure that the regularizer tends to enforce (e.g., sparsity, low rank, etc.). For example, when $h(x)$ is a sparsity-inducing regularizer ($\ell_1$-norm), although $x_t$ computed from the proximal mapping will be sparse as $t$ goes large, the averaged solution could be non-sparse. In contrast, our method directly generates the final solution from the proximal mapping, which leads to sparser solutions.

In addition to the rate of convergence, we also provide high probability bounds on the error of objective values. Utilizing a technical lemma from [3], we could show the same high probability bound as in RDA [21] but under a weaker assumption. Furthermore, using ORDA as a subroutine, we develop the multi-stage ORDA, which obtains the convergence rate of $O\left(\frac{\sigma^2 + M^2}{\tilde\mu N} + \exp\{-\sqrt{\tilde\mu/L}\,N\}\right)$ for strongly convex $f(x)$. Recall that ORDA has the rate $O\left(\frac{\sigma^2 + M^2}{\tilde\mu N} + \frac{L}{N^2}\right)$ for strongly convex $f(x)$.
The rate of multi-stage ORDA improves the second term in the rate of ORDA from $O\left(\frac{L}{N^2}\right)$ to $O\left(\exp\{-\sqrt{\tilde\mu/L}\,N\}\right)$ and achieves the so-called "uniformly-optimal" rate [15]. Although the improvement is on the non-dominating term, multi-stage ORDA is an optimal algorithm for both stochastic and deterministic optimization. In particular, for deterministic strongly convex and smooth $f(x)$ ($M = 0$), one can use the same algorithm but only replace the stochastic subgradient $G(x, \xi)$ by the deterministic gradient $\nabla f(x)$. Then the variance of the stochastic subgradient $\sigma^2 = 0$. Now the term $\frac{\sigma^2 + M^2}{\tilde\mu N}$ in the rate equals 0 and multi-stage ORDA becomes an optimal deterministic solver with the exponential rate $O\left(\exp\{-\sqrt{\tilde\mu/L}\,N\}\right)$. This is the reason why such a rate is "uniformly-optimal", i.e., optimal with respect to both stochastic and deterministic optimization.

Algorithm 1 Optimal Regularized Dual Averaging Method: ORDA($x_0$, $N$, $\gamma$, $c$)
Input Parameters: Starting point $x_0 \in \mathcal{X}$, the number of iterations $N$, constants $\gamma \ge L$ and $c \ge 0$.
Parameters for $f(x)$: Constants $L$, $M$ and $\tilde\mu$ for $f(x)$ in Eq.(2); set $\mu = \tilde\mu/\rho$.
Initialization: Set $\theta_t = \frac{2}{t+2}$; $\nu_t = \frac{2}{t+1}$; $\gamma_t = c(t+1)^{3/2} + \gamma\mu$; $z_0 = x_0$.
Iterate for $t = 0, 1, 2, \ldots, N$:
1. $y_t = \frac{(1-\theta_t)(\mu + \theta_t^2\gamma_t)}{\theta_t^2\gamma_t + (1-\theta_t^2)\mu}\, x_t + \frac{(1-\theta_t)\theta_t\mu + \theta_t^3\gamma_t}{\theta_t^2\gamma_t + (1-\theta_t^2)\mu}\, z_t$
2. Sample $\xi_t$ from the distribution $P(\xi)$ and compute the stochastic subgradient $G(y_t, \xi_t)$.
3. $g_t = \theta_t\nu_t\left(\sum_{i=0}^{t} \frac{G(y_i, \xi_i)}{\nu_i}\right)$
4. $z_{t+1} = \arg\min_{x\in\mathcal{X}}\left\{\langle x, g_t\rangle + h(x) + \theta_t\nu_t\sum_{i=0}^{t}\frac{\mu V(x, y_i)}{\nu_i} + \theta_t\nu_t\gamma_{t+1}V(x, x_0)\right\}$
5. $x_{t+1} = \arg\min_{x\in\mathcal{X}}\left\{\langle x, G(y_t, \xi_t)\rangle + h(x) + \left(\frac{\gamma}{\theta_t^2} + \frac{\mu}{\theta_t}\right)V(x, y_t)\right\}$
Output: $x_{N+1}$
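Algorithm 1 is concrete enough to implement directly. The following is a minimal runnable sketch of ours (not the authors' code), specializing to the Euclidean prox-function $\omega(x) = \frac{1}{2}\|x\|_2^2$ (so $V(x, y) = \frac{1}{2}\|x - y\|_2^2$ and $\rho = 1$) and to $h(x) = \lambda\|x\|_1$ on $\mathcal{X} = \mathbb{R}^n$, for which both argmin steps reduce to soft-thresholding; it uses the identity $\theta_t\nu_t\sum_{i=0}^t \nu_i^{-1} = 1$ noted below to keep only two running sums.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def orda(x0, N, gamma, c, L, M, mu_tilde, subgrad, lmbda):
    """Sketch of Algorithm 1 with V(x,y) = 0.5||x-y||^2 (rho = 1) and
    h(x) = lmbda*||x||_1.  `subgrad(y)` draws a fresh sample xi and returns
    G(y, xi).  Require c > 0 when mu_tilde = 0 (Theorem 1's condition)."""
    mu = mu_tilde                    # mu = mu_tilde / rho with rho = 1
    x, z = x0.copy(), x0.copy()
    Sg = np.zeros_like(x0)           # running sum of G(y_i, xi_i) / nu_i
    Sy = np.zeros_like(x0)           # running sum of y_i / nu_i
    for t in range(N + 1):
        theta, nu = 2.0 / (t + 2), 2.0 / (t + 1)
        gam_t  = c * (t + 1) ** 1.5 + gamma * mu
        gam_t1 = c * (t + 2) ** 1.5 + gamma * mu
        denom = theta**2 * gam_t + (1 - theta**2) * mu
        y = ((1 - theta) * (mu + theta**2 * gam_t) * x
             + ((1 - theta) * theta * mu + theta**3 * gam_t) * z) / denom
        G = subgrad(y)
        Sg += G / nu
        Sy += y / nu
        # Step 4: the quadratic coefficient is mu + theta*nu*gamma_{t+1},
        # since theta*nu*sum_{i<=t} 1/nu_i = 1.
        A = mu + theta * nu * gam_t1
        center = theta * nu * (mu * Sy + gam_t1 * x0)
        z = soft_threshold((center - theta * nu * Sg) / A, lmbda / A)
        # Step 5:
        eta = gamma / theta**2 + mu / theta
        x = soft_threshold(y - G / eta, lmbda / eta)
    return x
```

Per Corollaries 1 and 2 below, one would call this with $c > 0$ and $\gamma = L$ when $\tilde\mu = 0$, and with $c = 0$, $\gamma = L$ when $\tilde\mu > 0$; for a constrained $\mathcal{X}$ or a non-Euclidean Bregman divergence the two argmin steps would need problem-specific solvers.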
2 Preliminary and Notations

In the framework of first-order stochastic optimization, the only available information about $f(x)$ is the stochastic subgradient. Formally speaking, a stochastic subgradient of $f(x)$ at $x$, $G(x, \xi)$, is a vector-valued function such that $\mathbb{E}_\xi G(x, \xi) = f'(x) \in \partial f(x)$. Following the existing literature, a standard assumption on $G(x, \xi)$ is made throughout the paper: there exists a constant $\sigma$ such that
$$\forall x \in \mathcal{X}, \quad \mathbb{E}_\xi\big(\|G(x, \xi) - f'(x)\|_*^2\big) \le \sigma^2. \tag{3}$$

A key updating step in dual averaging methods, the proximal mapping, utilizes the Bregman divergence. Let $\omega(x) : \mathcal{X} \to \mathbb{R}$ be a strongly convex and differentiable function; the Bregman divergence associated with $\omega(x)$ is defined as:
$$V(x, y) := \omega(x) - \omega(y) - \langle \nabla\omega(y), x - y\rangle. \tag{4}$$
One typical and simple example is $\omega(x) = \frac{1}{2}\|x\|_2^2$ together with $V(x, y) = \frac{1}{2}\|x - y\|_2^2$. We can always scale $\omega(x)$ so that $V(x, y) \ge \frac{1}{2}\|x - y\|^2$ for all $x, y \in \mathcal{X}$; we refer to [21] for more examples. Following the assumption in [10], we assume that $V(x, y)$ grows quadratically with the parameter $\rho > 1$, i.e., $V(x, y) \le \frac{\rho}{2}\|x - y\|^2$ with $\rho > 1$ for all $x, y \in \mathcal{X}$. In fact, we could simply choose $\omega(x)$ with a $\rho$-Lipschitz continuous gradient so that the quadratic growth assumption will be automatically satisfied.

3 Optimal Regularized Dual Averaging Method

In dual averaging methods [17, 21], the key proximal mapping step utilizes the average of all past stochastic subgradients to update the parameter vector. In particular, it takes the form
$$z_{t+1} = \arg\min_{x\in\mathcal{X}}\Big\{\langle \bar g_t, x\rangle + h(x) + \frac{\gamma_t}{t}V(x, x_0)\Big\},$$
where $\gamma_t$ is the step-size and $\bar g_t = \frac{1}{t+1}\sum_{i=0}^{t} G(z_i, \xi_i)$. For strongly convex $f(x)$, the current dual averaging methods achieve a rate of $O\left(\frac{\sigma^2\log N}{\tilde\mu N}\right)$, which is suboptimal. In this section, we propose a new dual averaging algorithm which adapts to both strongly and non-strongly convex $f(x)$ via the strong convexity parameter $\tilde\mu$ and achieves optimal rates in both cases. In addition, for previous dual averaging methods, to guarantee the convergence, the final solution takes the form $\hat x = \frac{1}{N+1}\sum_{t=0}^{N} z_t$ and hence is not sparse in nature for sparsity-inducing regularizers. Instead of taking the average, we introduce another proximal mapping and generate the final solution directly from the second proximal mapping. This strategy will provide us sparser solutions in practice. It is worthy to note that in RDA, $z_N$ has been proved to achieve the desirable sparsity pattern (i.e., the manifold identification property) [13]. However, according to [13], the convergence of $\phi(z_N)$ to the optimal $\phi(x^*)$ is established only under a more restrictive assumption, that $x^*$ is a strong local minimizer of $\phi$ relative to the optimal manifold, and the convergence rate is quite slow. Without this assumption, the convergence of $\phi(z_N)$ is still unknown.

The proposed optimal RDA (ORDA) method is presented in Algorithm 1. To simplify our notations, we define the parameter $\mu = \tilde\mu/\rho$, which scales the strong convexity parameter $\tilde\mu$ by $\frac{1}{\rho}$, where $\rho$ is the quadratic growth constant. In general, the constant $\gamma$ which defines the step-size parameter $\gamma_t$ is set to $L$. However, we allow $\gamma$ to be an arbitrary constant greater than or equal to $L$ to facilitate the introduction of the multi-stage ORDA in a later section. The parameter $c$ is set to achieve the optimal rates for both convex and strongly convex loss. When $\mu > 0$ (or equivalently, $\tilde\mu > 0$), $c$ is set to 0 so that $\gamma_t \equiv \gamma\mu$ with $\gamma = L$; while for $\mu = 0$, $c = \frac{\sqrt{\rho}\,(\sigma + M)}{\sqrt{2V(x^*, x_0)}}$. Since $x^*$ is unknown in practice, one might replace $V(x^*, x_0)$ in $c$ by a tuning parameter.

Here, we make a few more explanations of Algorithm 1. In Step 1, the intermediate point $y_t$ is a convex combination of $x_t$ and $z_t$, and when $\mu = 0$, $y_t = (1 - \theta_t)x_t + \theta_t z_t$. The choice of the combination weights is inspired by [10]. Second, with our choice of $\theta_t$ and $\nu_t$, it is easy to prove that $\sum_{i=0}^{t}\frac{1}{\nu_i} = \frac{1}{\theta_t\nu_t}$. Therefore, $g_t$ in Step 3 is a convex combination of $\{G(y_i, \xi_i)\}_{i=0}^{t}$. As compared to RDA, which uses the average of past subgradients, $g_t$ in ORDA is a weighted average of all past stochastic subgradients and the subgradient from a later iteration has a larger weight (i.e., $G(y_i, \xi_i)$ has the weight $\frac{2(i+1)}{(t+1)(t+2)}$). In practice, instead of storing all past stochastic subgradients, $g_t$ could be simply updated based on $g_{t-1}$: $g_t = \theta_t\nu_t\left(\frac{g_{t-1}}{\theta_{t-1}\nu_{t-1}} + \frac{G(y_t, \xi_t)}{\nu_t}\right)$. We also note that since the error in the stochastic subgradient $G(y_t, \xi_t)$ will affect the sparsity of $x_{t+1}$ via the second proximal mapping, to obtain stable sparsity recovery performances, it would be better to construct the stochastic subgradient with a small batch of samples [21, 1]. This could help to reduce the noise of the stochastic subgradient.
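A mini-batch stochastic subgradient is a one-liner; the sketch below (our illustration, with made-up names) averages $b$ i.i.d. subgradients, which divides the variance bound $\sigma^2$ in Eq.(3) by $b$ and thereby stabilizes the sparsity pattern of $x_{t+1}$.

```python
import numpy as np

def minibatch_subgrad(y, sample_xi, grad_F, batch_size=50):
    # Average of b independent stochastic subgradients at y; still unbiased
    # for f'(y), with variance sigma^2 / b.
    return np.mean([grad_F(y, sample_xi()) for _ in range(batch_size)], axis=0)
```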
3.1 Convergence Rate

We present the convergence rate for ORDA. We start by presenting a general theorem without plugging in the values of the parameters. To simplify our notations, we define $\delta_t := G(y_t, \xi_t) - f'(y_t)$.

Theorem 1. For ORDA, if we require $c > 0$ when $\tilde\mu = 0$, then for any $t \ge 0$:
$$\phi(x_{t+1}) - \phi(x^*) \le \theta_t\nu_t\gamma_{t+1}V(x^*, x_0) + \rho\,\theta_t\nu_t\sum_{i=0}^{t}\frac{(\|\delta_i\|_* + M)^2}{\nu_i\left(\theta_i\gamma_i + \frac{\mu}{\theta_i} + \gamma - L\theta_i\right)} + \theta_t\nu_t\sum_{i=0}^{t}\frac{\langle x^* - \hat z_i, \delta_i\rangle}{\nu_i}, \tag{5}$$
where $\hat z_t = \frac{\theta_t\mu}{\mu + \theta_t^2\gamma_t}\,y_t + \frac{(1-\theta_t)\mu + \theta_t^2\gamma_t}{\mu + \theta_t^2\gamma_t}\,z_t$ is a convex combination of $y_t$ and $z_t$; and $\hat z_t = z_t$ when $\mu = 0$. Taking the expectation on both sides of Eq.(5):
$$\mathbb{E}\,\phi(x_{t+1}) - \phi(x^*) \le \theta_t\nu_t\gamma_{t+1}V(x^*, x_0) + \rho\,(\sigma^2 + M^2)\,\theta_t\nu_t\sum_{i=0}^{t}\frac{1}{\nu_i\left(\theta_i\gamma_i + \frac{\mu}{\theta_i} + \gamma - L\theta_i\right)}. \tag{6}$$

The proof of Theorem 1 is given in the Appendix. In the next two corollaries, we establish the rates of convergence in expectation for ORDA by choosing different values for $c$ based on $\tilde\mu$.

Corollary 1. For convex $f(x)$ with $\tilde\mu = 0$, by setting $c = \frac{\sqrt{\rho}\,(\sigma + M)}{\sqrt{2V(x^*, x_0)}}$ and $\gamma = L$, we obtain:
$$\mathbb{E}\,\phi(x_{N+1}) - \phi(x^*) \le \frac{4\rho L V(x^*, x_0)}{N^2} + \frac{8(\sigma + M)\sqrt{\rho}\,\sqrt{V(x^*, x_0)}}{\sqrt{N}}. \tag{7}$$

Based on Eq.(6), the proof of Corollary 1 is straightforward, with the details in the Appendix. Since $x^*$ is unknown in practice, one could set $c$ by replacing $V(x^*, x_0)$ in $c$ with any value $D^* \ge V(x^*, x_0)$. By doing so, Eq.(7) remains valid after replacing all $V(x^*, x_0)$ by $D^*$. For convex $f(x)$ with $\tilde\mu = 0$, the rate in Eq.(7) has achieved the uniformly-optimal rate according to [15].
D) and utilizing?a technical lemma from [3], we obtain a   ln(1/?)D? ? much tighter high probability bound with (N, ?) = O for both convex and strongly N convex f (x). The details are presented in Appendix. 4 Multi-stage ORDA for Stochastic Strongly Convex Optimization As we show in Section 3.1, for convex f (x), ORDA achieves the uniformly-optimal rate. However, for strongly convex f (x), although the dominating term of the convergence rate in Eq.(8) is optimal, the overall rate is not uniformly-optimal. Inspired by the multi-stage stochastic approximation methods [7, 9, 11], we propose the multi-stage extension of ORDA in Algorithm 2 for stochastic strongly convex optimization. For each stage 1 ? k ? K, we run ORDA in Algorithm 1 as a sub-routine for Nk iterations with the parameter ?t = c(t + 1)3/2 + ? ? with c = 0 and ? = ?k + L. Roughly speaking, we set Nk = 2Nk?1 and ?k = 4?k?1 . In other words, we double the number of iterations for the next stage but reduce the step-size. The multi-stage ORDA has achieved uniformly-optimal convergence rate as shown in Theorem 2 with the proof in Appendix. The proof technique follows the one in [11]. Due this specialized proof technique, instead of showing E(?(xN )) ? ?(x? ) ? (N ) as in ORDA, we show the number of iterations N () to achieve the -accurate solution: E(?(xN () )) ? ?(x? ) ? . But the two convergence rates are equivalent. 5 Algorithm 2 Multi-stage ORDA for Stochastic Strongly Convex Optimization Initialization: x0 ? X , a constant V0 ? ?(x0 ) ? ?(x? ) and the number of stages K. Iterate for k = 1, 2, . . . , K: n q o k+9 (? 2 +M 2 ) 1. Set Nk = max 4 ??L , 2 ??V 0 q 3/2 2k?1 ?(? 2 +M 2 ) 2. Set ?k = Nk ? V0 3. Generate x ek by calling the sub-routine ORDA(e xk?1 , Nk , ? = ?k + L, c = 0) Output: x eK  Theorem 2 If we run multi-stage ORDA for K stages with K = log2 V0 for any given , we have E(?(e xK )) ? ?(x? ) ?  and the total number of iterations is upper bounded by: s   K X ?L V0 1024? (? 2 + M 2 ) log2 + . (9) N= Nk ? 4 ?  ? k=1 5 Related Works In the last few years, a number of stochastic gradient methods [6, 5, 8, 12, 14, 21, 10, 11, 7, 4, 3] have been developed to solve Eq.(1), especially for a sparsity-inducing h(x). In Table 1, we compare the proposed ORDA and its multi-stage extension with some widely used stochastic gradient methods using the following metrics. For the ease of comparison, we assume f (x) is smooth with M = 0. 1. The convergence rate for solving (non-strongly) convex f (x) and whether this rate has achieved the uniformly-optimal (Uni-opt) rate. 2. The convergence rate for solving convex f (x) and whether (1) the dominating   strongly term of rate is optimal, i.e., O ?2 ? eN and (2) the overall rate is uniformly-optimal. 3. Whether the final solution x b, on which the results of convergence are built, is generated from the weighted average of previous iterates (Avg) or from the proximal mapping (Prox). For sparsity-inducing regularizers, the solution directly from the proximal mapping is often sparser than the averaged solution. 4. Whether an algorithm allows to use a general Bregman divergence in proximal mapping or it only allows the Euclidean distance V (x, y) = 21 kx ? yk22 . In Table 1, the algorithms in the first 7 rows are stochastic approximation algorithms where only the current stochastic gradient is used at each iteration. The last 4 rows are dual averaging methods where all past subgradients are used. 
Some algorithms in Table 1 make a more restrictive assumption on the stochastic gradient: ?G > 0, EkG(x, ?)k2? ? G2 , ?x ? X . It is easy to verify that this assumption implies our basic assumption in Eq.(3) by Jensen?s inequality. As we can see from Table 1, the proposed ORDA possesses all good properties except that the convergence rate for strongly convex f (x) is not uniformly-optimal. Multi-stage ORDA further improves this rate to be uniformly-optimal. In particular, SAGE [8] achieves a nearly optimal rate since the parameter D in the convergence rate is chosen such that E kxt ? x? k22 ? D for all t ? 0 and it could be much larger than V ? V (x? , x0 ). In addition, SAGE requires the boundedness of the domain X , the smoothness of f (x), and only allows the Euclidean distance in proximal mapping. As compared to AC-SA [10] and multi-stage AC-SA [11], our methods do not require the final averaging step; and as shown in our experiments, ORDA has better empirical performances due to the usage of all past stochastic subgradients. Furthermore, we improve the rates of RDA and extend AC-RDA to an optimal algorithm for both convex and strongly convex f (x). Another highly relevant work is [9]. Juditsky et al. [9] proposed multi-stage algorithms to achieve the optimal strongly convex rate based on non-accelerated dual averaging methods. However, the algorithms in [9] assume that ?(x) is a Lipschitz continuous function, i.e., the subgradient of ?(x) is bounded. Therefore, when the domain X is unbounded, the algorithms in [9] cannot be directly applied. 6 FOBOS [6] COMID [5] SAGE [8] AC-SA [10] Convex f (x) Rate Uni-opt  ?  V G ? O NO  ?N  V G O ?N NO   ? ?? D LD O + N 2 NEARLY   ?N ?? V O + LV YES N2 N M-AC-SA [11] NA NA Epoch-GD [7] NA  ?  RDA [21] O G?NV  ? AC-RDA [21] O ??NV +  ? ORDA O ??NV + NA M-ORDA NA NO LV N2  YES LV N2  YES NA Strongly Convex f (x) Rate Opt  2  G log N O NO  2?eN  G log N O NO   2?eN ? LD O ?eN + N 2 YES   2 ? LV O ?eN + N 2 YES   q 2 ? e N } YES O ?e?N + exp{? L  2 O ?eGN YES  2  G log N O NO ? eN Uni-opt Final x b Bregman NO Prox NO NO Prox YES NO Prox NO NO Avg YES YES Avg YES NO Avg NO NO Avg YES Avg YES Prox YES Prox YES NA NA NA  2  ? LV O ?eN + N 2 YES NO   q 2 ? e O ?e?N + exp{? L N } YES YES Table 1: Summary for different stochastic gradient algorithms. V is short for V (x? , x0 ); AC for ?accelerated?; M for ?multi-stage? and NA stands for either ?not applicable? or ?no analysis of the rate?. 2 Recently, the paper [18] develops another stochastic gradient method which achieves the rate O( ?eGN ) for strongly convex f (x). However, for non-smooth f (x), it requires the averaging of the last a few iterates and this rate is not uniformly-optimal. 6 Simulated Experiments In this section, we conduct simulated experiments to demonstrate the performance of ORDA and its multi-stage extension (M ORDA). We compare our ORDA and M ORDA (only for strongly convex loss) with several state-of-the-art stochastic gradient methods, including RDA and AC-RDA [21], AC-SA [10], FOBOS [6] and SAGE [8]. For a fair comparison, we compare all different methods using solutions which have expected convergence guarantees. For all algorithms, we tune the parameter related to step-size (e.g., c in ORDA for convex loss) within an appropriate range and choose the one that leads to the minimum objective value. In this experiment, we solve a sparse linear regression problem: minx?Rn f (x)+h(x) where f (x) = ? 1 T 2 2 2 Ea,b ((a x ? b) ) + 2 kxk2 and h(x) = ?kxk1 . 
The input vector a is generated from N (0, In?n ) T ? and the response b = a x + , where x?i = 1 for 1 ? i ? n/2 and 0 otherwise and the noise  ? N (0, 1). When ? = 0, th problem is the well known Lasso [19] and when ? > 0, it is known as Elastic-net [22]. The regularization parameter ? is tuned so that a deterministic solver on all the samples can correctly recover the underlying sparsity pattern. We set n = 100 and create a large pool of samples for generating stochastic gradients and evaluating objective values. The number of iterations N is set to 500. Since we focus on stochastic optimization instead of online learning, we could randomly draw samples from an underlying distribution. So we construct the stochastic gradient using the mini-batch strategy [2, 1] with the batch size 50. We run each algorithm for 100 times and report the mean of the objective value and the F1-score for sparsity recovery performance. Pp Pp precision?recall where precision = F1-score is defined as 2 precision+recall xi =1,x? =1} / xi =1} and i=1 1{b i=1 1{b i Pp Pp ? ? recall = i=1 1{bxi =1,xi =1} / i=1 1{xi =1} . The higher the F1-score is, the better the recovery ability of the sparsity pattern. The standard deviations for both objective value and the F1-score in 100 runs are very small and thus omitted here due to space limitations. We first set ? = 0 to test algorithms for (non-strongly) convex f (x). The result is presented in Table 2 (the first two columns). We also plot the decrease of the objective values for the first 200 iterations in Figure 1. From Table 2, ORDA performs the best in both objective value and recovery ability of sparsity pattern. For those optimal algorithms (e.g., AC-RDA, AC-SA, SAGE, ORDA), they achieve lower final objective values and the rates of the decrease are also faster. We note that for dual averaging methods, the solution generated from the (first) proximal mapping (e.g., zt in 7 Table 2: Comparisons in objective value and F1-score. 28 31 RDA AC?RDA AC?SA FOBOS SAGE ORDA 27 26 25 24 23 29 28 27 26 22 25 21 24 20 50 100 150 200 RDA AC?RDA AC?SA FOBOS SAGE ORDA 30 Objective ?=1 Obj F1 21.57 0.67 21.12 0.67 21.01 0.67 21.19 0.84 21.09 0.73 20.97 0.87 20.98 0.88 Objective ?=0 Obj F1 RDA 20.87 0.67 AC-RDA 20.67 0.67 AC-SA 20.66 0.67 FOBOS 20.98 0.83 SAGE 20.65 0.82 ORDA 20.56 0.92 M ORDA N.A. N.A. 23 50 100 150 200 Iteration Iteration Figure 1: Obj for Lasso. Figure 2: Obj for Elastic-Net. ORDA) has almost perfect sparsity recovery performance. However, since here is no convergence guarantee for that solution, we do not report results here. Objective Then we set ? = 1 to test algorithms for solving strongly convex f (x). The results are presented 30 in Table 2 (the last two columns) and Figure ORDA M_ORDA 2 and 3. As we can see from Table 2, ORDA 28 and M ORDA perform the best. Although M ORDA achieves the theoretical uniformly26 optimal convergence rate, the empirical per24 formance of M ORDA is almost identical to that of ORDA. This observation is consistent 22 with our theoretical analysis since the improvement of the convergence rate only appears on 20 the non-dominating term. In addition, ORDA, 100 200 300 400 500 Iteration M ORDA, AC-SA and SAGE with the convergence rate O( ?e1N ) achieve lower objective valFigure 3: ORDA v.s. M ORDA. ues as compared to other algorithms with the N ) . For better visualization, we do rate O( log ? eN not include the comparison between M ORA and ORDA in Figure 2. Instead, we present the comparison separately in Figure 3. 
From Figure 3, the final objective values of both algorithms are very close. An interesting observation is that, for M ORDA, each time when a new stage starts, it leads to a sharp increase in the objective value following by a quick drop. 7 Conclusions and Future Works In this paper, we propose a new dual averaging method which achieves the optimal rates for solving stochastic regularized problems with both convex and strongly convex loss functions. We further propose a multi-stage extension to achieve the uniformly-optimal convergence rate for strongly convex loss. Although we study stochastic optimization problems in this paper, our algorithms can be easily converted into online optimization approaches, where a sequence of decisions {xt }N t=1 are generated according to Algorithm 1 or 2. We often measure the quality of an online learning algorithm via the PN so-called regret, defined as RN (x? ) = t=1 (F (xt , ?t ) + h(xt )) ? (F (x? , ?t ) + h(x? )) . Given the expected convergence rate in Corollary 1 and 2, the expected regret can be easily derived. For PN PN 1 ? example, for strongly convex f (x): ERN (x? ) ? t=1 (E(?(xt )) ? ?(x )) ? t=1 O( t ) = O(ln N ). However, it would be a challenging future work to derive the regret bound for ORDA instead of the expected regret. It would also be interesting to develop the parallel extensions of ORDA (e.g., combining the distributed mini-batch strategy in [21] with ORDA) and apply them to some large-scale real problems. 8 References [1] A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better mini-batch algorithms via accelerated gradient methods. In Advances in Neural Information Processing Systems (NIPS), 2011. [2] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Technical report, Microsoft Research, 2011. [3] J. Duchi, P. L. Bartlett, and M. Wainwright. Randomized smoothing for stochastic optimization. arXiv:1103.4296v1, 2011. [4] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT), 2010. [5] J. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. In Conference on Learning Theory (COLT), 2010. [6] J. Duchi and Y. Singer. Efficient online and batch learning using forward-backward splitting. Journal of Machine Learning Research, 10:2873?2898, 2009. [7] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Conference on Learning Theory (COLT), 2011. [8] C. Hu, J. T. Kwok, and W. Pan. Accelerated gradient methods for stochastic optimization and online learning. In Advances in Neural Information Processing Systems (NIPS), 2009. [9] A. Juditsky and Y. Nesterov. Primal-dual subgradient methods for minimizing uniformly convex functions. August 2010. [10] G. Lan and S. Ghadimi. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, part i: a generic algorithmic framework. Technical report, University of Florida, 2010. [11] G. Lan and S. Ghadimi. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, part ii: shrinking procedures and optimal algorithms. Technical report, University of Florida, 2010. [12] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. Journal of Machine Learning Research, 10:777?801, 2009. [13] S. Lee and S. J. Wright. 
Manifold identification of dual averaging methods for regularized stochastic online learning. In International Conference on Machine Learning (ICML), 2011.
[14] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[15] A. Nemirovski and D. Yudin. Problem complexity and method efficiency in optimization. John Wiley, New York, 1983.
[16] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2003.
[17] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120:221–259, 2009.
[18] A. Rakhlin, O. Shamir, and K. Sridharan. To average or not to average? Making stochastic gradient descent optimal for strongly convex problems. In International Conference on Machine Learning (ICML), 2012.
[19] R. Tibshirani. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B, 58:267–288, 1996.
[20] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. SIAM Journal on Optimization (submitted), 2008.
[21] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543–2596, 2010.
[22] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. J. R. Statist. Soc. B, 67(2):301–320, 2005.
Learning the Architecture of Sum-Product Networks Using Clustering on Variables

Aaron Dennis
Department of Computer Science
Brigham Young University
Provo, UT 84602
[email protected]

Dan Ventura
Department of Computer Science
Brigham Young University
Provo, UT 84602
[email protected]

Abstract

The sum-product network (SPN) is a recently-proposed deep model consisting of a network of sum and product nodes, and has been shown to be competitive with state-of-the-art deep models on certain difficult tasks such as image completion. Designing an SPN network architecture that is suitable for the task at hand is an open question. We propose an algorithm for learning the SPN architecture from data. The idea is to cluster variables (as opposed to data instances) in order to identify variable subsets that strongly interact with one another. Nodes in the SPN network are then allocated towards explaining these interactions. Experimental evidence shows that learning the SPN architecture significantly improves its performance compared to using a previously-proposed static architecture.

1 Introduction

The number of parameters in a textbook probabilistic graphical model (PGM) is an exponential function of the number of parents of the nodes in the graph. Latent variables can often be introduced such that the number of parents is reduced while still allowing the probability distribution to be represented. Figure 1 shows an example of modeling the relationship between symptoms of a set of diseases. The PGM at the left has no latent variables and the PGM at the right has an appropriately added "disease" variable. The model is able to be simplified because the symptoms are statistically independent of one another given the disease. The middle PGM shows a model in which the latent variable is introduced to no simplifying effect, demonstrating the need to be intelligent about what latent variables are added and how they are added.

[Figure 1: Introducing a latent variable. The PGM in (a) has no latent variables. The PGM in (b) has a latent variable introduced to no beneficial effect. The PGM in (c) has a latent variable that simplifies the model.]

Deep models can be interpreted as PGMs that introduce multiple layers of latent variables over a layer of observed variables [1]. The architecture of these latent variables (the size of the layers, the number of variables, the connections between variables) can dramatically affect the performance of these models. Selecting a reasonable architecture is often done by hand. This paper proposes an algorithm that automatically learns a deep architecture from data for a sum-product network (SPN), a recently-proposed deep model that takes advantage of the simplifying effect of latent variables [2].

[Figure 2: A simple SPN over two binary variables A and B. The leaf node $\lambda_{\bar a}$ takes value 1 if A = 0 and 0 otherwise, while leaf node $\lambda_a$ takes value 1 if A = 1 and 0 otherwise. If the value of A is not known then both leaf nodes take value 1. Leaf nodes $\lambda_{\bar b}$ and $\lambda_b$ behave similarly. Weights on the edges connecting sum nodes with their children are not shown. The short-dashed edge causes the SPN to be incomplete. The long-dashed edge causes the SPN to be inconsistent.]

[Figure 3: The Poon architecture with m = 1 sum nodes per region. Three product nodes are introduced because the 2×3-pixel image patch can be split vertically and horizontally in three different ways. In general the Poon architecture has number-of-splits times m² product nodes per region.]
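The parameter savings illustrated in Figure 1 are easy to quantify; the following small computation is our own illustration, not from the paper.

```python
# Parameter counting for the latent-variable example in Figure 1.
# A full joint over k binary symptoms has 2^k - 1 free parameters; a model
# with a binary disease variable D and symptoms conditionally independent
# given D needs only 1 + 2k (one for P(D=1), two per symptom for P(Si=1|D)).
for k in [5, 10, 20]:
    print(k, 2**k - 1, 1 + 2 * k)   # the gap grows exponentially with k
```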
In general the Poon architecture has number-of-splits times m2 product nodes per region. ate architecture for a traditional deep model can be challenging [3, 4], but the nature of SPNs lend themselves to a remarkably simple, fast, and effective architecture-learning algorithm. In proposing SPNs, Poon & Domingos introduce a general scheme for building an initial SPN architecture; the experiments they run all use one particular instantiation of this scheme to build an initial ?fixed? architecture that is suitable for image data. We will refer to this architecture as the Poon architecture. Training is done by learning the parameters of an initial SPN; after training is complete, parts of the SPN may be pruned to produce a final SPN architecture. In this way both the weights and architecture are learned from data. We take this a step further by also learning the initial SPN architecture from data. Our algorithm works by finding subsets of variables (and sets of subsets of variables) that are highly dependent and then effectively combining these together under a set of latent variables. This encourages the latent variables to act as mediators between the variables, capturing and representing the dependencies between them. Our experiments show that learning the initial SPN architecture in this way improves its performance. 2 Sum-Product Networks Sum-product networks are rooted, directed acyclic graphs (DAGs) of sum, product, and leaf nodes. Edges connecting sum nodes to their children are weighted using non-negative weights. The value of a sum node is computed as the dot product of its weights with the values of it child nodes. The value of a product node is computed by multiplying the values of its child nodes. A simple SPN is shown in Figure 2. Leaf node values are determined by the input to the SPN. Each input variable has an associated set of leaf nodes, one for each value the variable can take. For example, a binary variable would have two associated leaf nodes. The leaf nodes act as indicator functions, taking the value 1 when the variable takes on the value that the leaf node is responsible for and 0 otherwise. An SPN can be constructed such that it is a representation of some probability distribution, with the value of its root node and certain partial derivatives with respect to the root node having probabilistic meaning. In particular, all marginal probabilities and many conditional probabilities can be computed [5]. Consequently an SPN can perform exact inference and does so efficiently when the size of the SPN is polynomial in the number of variables. 2 If an SPN does represent a probability distribution then we call it a valid SPN; of course, not all SPNs are valid, nor do they all facilitate efficient, exact inference. However, Poon & Domingos proved that if the architecture of an SPN follows two simple rules then it will be valid. (Note that this relationship does not go both ways; an SPN may be valid and violate one or both of these rules.) This, along with showing that SPNs can represent a broader class of distributions than other models that allow for efficient and exact inference are the key contributions made by Poon & Domingos. To understand these rules it will help to know what the ?scope of an SPN node? means. The scope of an SPN node n is a subset of the input variables. This subset can be determined by looking at the leaf nodes of the subgraph rooted at n. 
All input variables that have one or more of their associated leaf nodes in this subgraph are included in the scope of the node. We will denote the scope of n as scope(n). The first rule is that all children of a sum node must have the same scope. Such an SPN is called complete. The second rule is that for every pair of children, (ci , cj ), of a product node, there must not be contradictory leaf nodes in the subgraphs rooted at ci and cj . For example, if the leaf node corresponding to the variable X taking on value x is in the subgraph rooted at ci , then the leaf nodes corresponding to the variable X taking on any other value may not appear in the subgraph rooted at cj . An SPN following this rule is called consistent. The SPN in Figure 2 violates completeness (due to the short-dashed arrow) and it violates consistency (due to the long-dashed arrow). An SPN may also be decomposable, which is a property similar to, but somewhat more restrictive than consistency. A decomposable SPN is one in which the scopes of the children of each product node are disjoint. All of the architectures described in this paper are decomposable. Very deep SPNs can be built using these rules as a guide. The number of layers in an SPN can be on the order of tens of layers, whereas the typical deep model has three to five layers. Recently it was shown that deep SPNs can compute some functions using exponentially fewer resources than shallow SPNs would need [6]. The Poon architecture is suited for modeling probability distributions over images, or other domains with local dependencies among variables. It is constructed as follows. For every possible axisaligned rectangular region in the image, the Poon architecture includes a set of m sum nodes, all of whose scope is the set of variables associated with the pixels in that region. Each of these (nonsingle-pixel) regions are conceptually split vertically and horizontally in all possible ways to form pairs of rectangular subregions. For each pair of subregions, and for every possible pairing of sum nodes (one taken from each subregion), a product node is introduced and made the parent of the pair of sum nodes. The product node is also added as a child to all of the top region?s sum nodes. Figure 3 shows a fragment of a Poon architecture SPN modeling a 2 ? 3 image patch. 3 Cluster Architecture As mentioned earlier, care needs to be taken when introducing latent variables into a model. Since the effect of a latent variable is to help explain the interactions between its child variables [7], it makes little sense to add a latent variable as the parent of two statistically independent variables. In the example in Figure 4, variables W and X strongly interact and variables Y and Z do as well. But the relationship between all other pairs of variables is weak. The PGM in (a), therefore, allows latent variable A to take account of the interaction between W and X. On the other hand, variable A does little in the PGM in (b) since W and Y are nearly independent. A similar argument can be made about variable B. Consequently, variable C in the PGM in (a) can be used to explain the weak interactions between variables, whereas in the PGM in (b), variable C essentially has the task of explaining the interaction between all the variables. C C A W B X Y (a) Z W A B X Y Z (b) Figure 4: Latent variables explain the interaction between child variables, causing the children to be independent given the latent variable parent. 
If variable pairs (W, X) and (Y, Z) strongly interact and other variable pairs do not, then the PGM in (a) is a more suitable model than the PGM in (b). 3 In the probabilistic interpretation of an SPN, sum nodes are associated with latent variables. (The evaluation of a sum node is equivalent to summing out its associated latent variable.) Each latent variable helps the SPN explain interactions between variables in the scope of the sum nodes. Just as in the example, then, we would like to place sum nodes over sets of variables with strong interactions. The Poon architecture takes this principle into account. Images exhibit strong interactions between pixels in local spatial neighborhoods. Taking advantage of this prior knowledge, the Poon architecture chooses to place sum nodes over local spatial neighborhoods that are rectangular in shape. There are a few potential problems with this approach, however. One is that the Poon architecture includes many rectangular regions that are long and skinny. This means that the pixels at each end of these regions are grouped together even though they probably have only weak interactions. Some grouping of weakly-interacting pixels is inevitable, but the Poon architecture probably does this more than is needed. Another problem is that the Poon architecture has no way of explaining strongly-interacting, non-rectangular local spatial regions. This is a major problem because such regions are very common in images. Additionally, if the data does not exhibit strong spatially-local interactions then the Poon architecture could perform poorly. Our proposed architecture (we will call it the cluster architecture) avoids these problems. Large regions containing non-interacting pixels are avoided. Sum nodes can be placed over spatially-local, non-rectangular regions; we are not restricted to rectangular regions, but can explain arbitrarilyshaped blob-like regions. In fact, the regions found by the cluster architecture are not required to exhibit spatial locality. This makes our architecture suitable for modeling data that does not exhibit strong spatially-local interactions between variables. 3.1 Building a Cluster Architecture As was described earlier, a sum node s in an SPN has the task of explaining the interactions between all the variables in its scope. Let scope(s) = {V1 , ? ? ? , Vn }. If n is large, then this task will likely be very difficult. SPNs have a mechanism for making it easier, however. Essentially, s delegates part of its responsibilities to another set of sum nodes. This is done by first S forming a partition of scope(s), where {S1 , ? ? ? , Sk } is a partition of scope(s) if and only if i Si = scope(s) and ?i, j(Si ? Sj = ?). Then, for each subset Si in the partition, an additional sum node si is introduced into the SPN and is given the task of explaining the interactions between all the variables in Si . The original sum node s is then given a new child product node p and the product node becomes the parent of each sum node si . In this example the node s is analogous to the variable C in Figure 4 and the nodes si are analogous to the variables A and B. So this partitioning process allows s to focus on explaining the interactions between the nodes si and frees it from needing to explain everything about the interactions between the variables {V1 , ? ? ? , Vn }. And, of course, the partitioning process can be repeated recursively, with any of the nodes si taking the place of s. 
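For concreteness, here is a toy sketch of these structures and of the delegation step just described. All class and function names are our own illustrative choices, not the authors' code; weights and structure are arbitrary.

# Toy SPN nodes with the usual evaluation rules, the scope of a node,
# and the delegation step: a sum node s hands each block of a scope
# partition to a fresh sum node through a shared product node.
class Leaf:
    def __init__(self, var, value):
        self.var, self.value, self.children = var, value, []
    def eval(self, evidence):
        v = evidence.get(self.var)          # indicator leaf
        return 1.0 if v is None or v == self.value else 0.0

class Sum:
    def __init__(self, children=None, weights=None):
        self.children = children or []
        self.weights = weights or []
    def eval(self, evidence):
        return sum(w * c.eval(evidence)
                   for w, c in zip(self.weights, self.children))

class Product:
    def __init__(self, children):
        self.children = children
    def eval(self, evidence):
        out = 1.0
        for c in self.children:
            out *= c.eval(evidence)
        return out

def scope(node):
    # Variables with an associated leaf in the subgraph rooted at node.
    if isinstance(node, Leaf):
        return {node.var}
    return set().union(*(scope(c) for c in node.children))

def delegate(s, partition, weight=1.0):
    # s gains a new product child; the product gets one fresh sum node
    # per partition block. Recursing on the returned sum nodes repeats
    # the partitioning step described above.
    subs = [Sum() for _ in partition]
    s.children.append(Product(subs))
    s.weights.append(weight)
    return subs

# Example: partition {A, B} into {A} and {B}, then attach indicator leaves.
root = Sum()
sa, sb = delegate(root, [{"A"}, {"B"}])
sa.children, sa.weights = [Leaf("A", 0), Leaf("A", 1)], [0.6, 0.4]
sb.children, sb.weights = [Leaf("B", 0), Leaf("B", 1)], [0.3, 0.7]
print(root.eval({"A": 1}), scope(root))   # 0.4 {'A', 'B'}; B is summed out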
This is the main idea behind the algorithm for building a cluster architecture (see Algorithm 1 and Algorithm 2). However, due to the architectural flexibility of an SPN, discussing this algorithm in terms of sum and product nodes quickly becomes tedious and confusing. The following definition will help in this regard. Definition 1. A region graph is a rooted DAG consisting of region nodes and partition nodes. The root node is a region node. Partition nodes are restricted to being the children of region nodes and vice versa. Region and partition nodes have scopes just like nodes in an SPN. The scope of a node n in a region graph is denoted scope(n). Region nodes can be thought of as playing the role of sum nodes (explaining interactions among variables) and partition nodes can be thought of as playing the role of product nodes (delegating responsibilities). Using the definition of the region graph may not appear to have made things any simpler, but its benefits will become more clear when discussing the conversion of region graphs to SPNs (see Figure 5). At a high level the algorithm for building a cluster architecture is simple: build a region graph (Algorithm 1 and Algorithm 2), then convert it to an SPN (Algorithm 3). These steps are described below. 4 R1 Algorithm 1 BuildRegionGraph 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: Input: training data D C 0 ? Cluster(D, 1) for k = 2 to ? do C ? Cluster(D, k) r ? Quality(C)/Quality(C 0 ) if r < 1 + ? then break else C0 ? C G ? CreateRegionGraph() n ? AddRegionNodeTo(G) for i = 1 to k do ExpandRegionGraph(G, n, Ci ) x P1 x + + x x x P2 x x x R1 R2 R2 R3 R4 (a) R5 x + + ... x R3 x + + + ... x x + + R4 ... x x + R5 ... x (b) Figure 5: Subfigure (a) shows a region graph fragment consisting of region nodes R1 , R2 , R3 , R4 , and R5 . R1 has two parition nodes (the smaller, filled-in nodes). Subfigure (b) shows the region graph converted to an SPN. In the SPN each region is allotted two sum nodes. The product nodes in R1 are surrounded by two rectangles labeled P1 and P2 ; they correspond to the partition nodes in the region graph. Algorithm 1 builds a region graph using training data to guide the construction. In lines 2 through 9 the algorithm clusters the training instances into k clusters C = {C1 , ? ? ? , Ck }. Our implementation uses the scikit-learn [8] implementation of k-means to cluster the data instances, but any clustering method could be used. The value for k is chosen automatically; larger values of k are tried until increasing the value does not substantially improve a cluster-quality score. The remainder of the algorithm creates a single-node region graph G and then adds nodes and edges to G using k calls to Algorithm 2 (ExpandRegionGraph). To encourage the expansion of G in different ways, a different subset of the training data, Ci , is passed to ExpandRegionGraph on each call. At a high level, Algorithm 2 partitions scopes into sub-scopes recursively, adding region and partition nodes to G along the way. The initial call to ExpandRegionGraph partitions the scope of the root region node. A corresponding partition node is added as a child of the root node. Two sub-region nodes (whose scopes form the partition) are then added as children to the partition node. Algorithm 2 is then called recursively with each of these sub-region nodes as arguments (unless the scope of the sub-region node is too small). In line 3 of Algorithm 2 the PartitionScope function in our implementation uses the k-means algorithm in an unusual way. 
Instead of partitioning the instances of the training dataset D into k instance-clusters, it partitions variables into k variable-clusters as follows. D is encoded as a matrix, each row being a data instance and each column corresponding to a variable. Then k-means is run on DT , causing it to partition the variables into k clusters. Actually, the PartitionScope function is only supposed to partition the variables in scope(n), not all the variables (note its input parameter). So before calling k-means we build a new matrix Dn by removing columns from D, keeping only those columns that correspond to variables in scope(n). Then k-means is run on DnT and the resulting variable partition is returned. The k-means algorithm serves the purpose of detecting subsets of variables that strongly interact with one another. Other methods (including other clustering algorithms) could be used in its place. After the scope Sn of a node n has been partitioned into S1 and S2 , Algorithm 2 (lines 4 through 11) looks for region nodes in G whose scope is similar to S1 or S2 ; if region node r with scope Sr is such a node, then S1 and S2 are adjusted so that S1 = Sr and {S1 , S2 } is still a partition of Sn . Lines 12 through 18 expand the region graph based on the partition of Sn . If node n does not already have a child partition node representing the partition {S1 , S2 } then one is created (p in line 15); p is then connected to child region nodes n1 and n2 , whose scopes are S1 and S2 , respectively. Note that n1 and n2 may be newly-created region nodes or they may be nodes that were created during a previous call to Algorithm 2. We recursively call ExpandRegionGraph only on newly-created nodes; the recursive call is also not made if the node is a leaf node (|Si | = 1) since partitioning a leaf node is not helpful (see lines 19 through 22). 5 Algorithm 2 ExpandRegionGraph Algorithm 3 BuildSPN Input: region graph G, sums per region m Output: SPN S R ? RegionNodesIn(G) for all r ? R do if IsRootNode(r) then N ? AddSumNodesToSPN(S, 1) else N ? AddSumNodesToSPN(S, m) P ? ChildPartitionNodesOf(r) for all p ? P do C ? ChildrenOf(p) O ? AddProductNodesToSPN(S, m|C| ) for all n ? N do AddChildrenToSumNode(n, O) Q ? empty list for all c ? C do //We assume the sum nodes associated //with c have already been created. U ? SumNodesAssociatedWith(c) AppendToList(Q, U ) ConnectProductsToSums(O, Q) return S 1: Input: region graph G, 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: region node n in G, training data D Sn ? scope(n) {S1 , S2 } ? PartitionScope(Sn , D) S ? ScopesOfAllRegionNodesIn(G) for all Sr ? S s.t. Sr ? Sn do p1 ? |S1 ? Sr |/|S1 ? Sr | p2 ? |S2 ? Sr |/|S2 ? Sr | if max{p1 , p2 } > threshold then S1 ? Sr S2 ? Sn \ Sr break n1 ? GetOrCreateRegionNode(G, S1 ) n2 ? GetOrCreateRegionNode(G, S2 ) if PartitionDoesNotExist(G, n, n1 , n2 ) then p ? NewPartitionNode() AddChildToRegionNode(n, p) AddChildToPartitionNode(p, n1 ) AddChildToPartitionNode(p, n2 ) if S1 ? / S ? |S1 | > 1 then ExpandRegionGraph(G, n1 ) if S2 ? / S ? |S2 | > 1 then ExpandRegionGraph(G, n2 ) After the k calls to Algorithm 2 have been made, the resulting region graph must be converted to an SPN. Figure 5 shows a small subgraph from a region graph and its conversion into an SPN; this example demonstrates the basic pattern that can be applied to all region nodes in G in order to generate an SPN. A more precise description of this conversion is given in Algorithm 3. 
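Two of the subroutines that the pseudocode leans on can be sketched directly; the k-means-on-the-transpose trick in particular is a one-liner with scikit-learn. The stopping rule for k (negative k-means inertia as the quality score) and all names below are our assumptions about unspecified details, not the authors' implementation.

# Sketches of helpers used by Algorithms 1-3.
import numpy as np
from itertools import product as cartesian
from sklearn.cluster import KMeans

def choose_k(D, eps=0.05, k_max=20):
    # Grow k until clustering quality stops improving substantially.
    prev = KMeans(n_clusters=1, n_init=10).fit(D).inertia_
    best_k = 1
    for k in range(2, k_max + 1):
        cur = KMeans(n_clusters=k, n_init=10).fit(D).inertia_
        if cur / prev > 1.0 - eps:       # improvement ratio below threshold
            break
        prev, best_k = cur, k
    return best_k

def partition_scope(D, scope_vars, k=2):
    # Cluster *variables* rather than instances: keep only in-scope
    # columns, transpose, and run k-means on the columns.
    cols = sorted(scope_vars)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(D[:, cols].T)
    return [{c for c, l in zip(cols, labels) if l == j} for j in range(k)]

def expand_partition(child_sums, make_product):
    # Region-graph-to-SPN wiring for one partition node: one product
    # node per combination of child-region sum nodes (m^|C| in total).
    return [make_product(list(combo)) for combo in cartesian(*child_sums)]

D = np.random.rand(200, 8)
print(choose_k(D), partition_scope(D, set(range(8))))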
In this algorithm the assumption is made (noted in the comments) that certain sum nodes are inserted before others. This assumption can be guaranteed if the algorithm performs a postorder traversal of the region nodes in G in the outermost loop. Also note that the ConnectProductsToSums method connects product nodes of the current region with sum nodes from its subregions; the children of a product node consist of a single node drawn from each subregion, and there is a product node for every possible combination of such sum nodes. 4 Experiments and Results Poon showed that SPNs can outperform deep belief networks (DBNs), deep Boltzman machines (DBMs), principle component analysis (PCA), and a nearest- neighbors algorithm (NN) on a difficult image completion task. The task is the following: given the right/top half of an image, paint in the left/bottom half of it. The completion results of these models were compared qualitatively by inspection and quantitatively using mean squared error (MSE). SPNs produced the best results; our experiments show that the cluster architecture significantly improves SPN performance. We matched the experimental set-up reported in [2] in order to isolate the effect of changing the initial SPN architecture and to make their reported results directly comparable to several of our results. They add 20 sum nodes for each non-unit and non-root region. The root region has one sum node and the unit regions have four sum nodes, each of which function as a Gaussian over pixel values. The Gaussians means are calculated using the training data for each pixel, with one Gaussian covering each quartile of the pixel-values histogram. Each training image is normalized such that its mean pixel value is zero with a standard deviation of one. Hard expectation maximization (EM) is used to train the SPNs; mini-batches of 50 training instances are used to calculate each weight update. All sum node weights are initialized to zero; weight values are decreased after each training epoch using an L0 prior; add-one smoothing on sum node weights is used during network evaluation. 6 Table 1: Results of experiments on the Olivetti, Caltech 101 Faces, artificial, and shuffled-Olivetti datasets comparing the Poon and cluster architectures. Negative log-likelihood (LLH) of the training set and test set is reported along with the MSE for the image completion results (both left-half and bottom-half completion results). Dataset Olivetti Caltech Faces Artificial Shuffled Measurement Train LLH Test LLH MSE (left) MSE (bottom) Train LLH Test LLH MSE (left) MSE (bottom) Train LLH Test LLH MSE (left) MSE (bottom) Train LLH Test LLH MSE (left) MSE (bottom) Poon 318 ? 1 863 ? 9 996 ? 42 963 ? 42 289 ? 4 674 ? 15 1968 ? 89 1925 ? 82 195 ? 0 266 ? 4 842 ? 51 877 ? 85 793 ? 3 1193 ? 3 811 ? 11 817 ? 17 Cluster 433 ? 17 715 ? 31 814 ? 35 820 ? 38 379 ? 8 557 ? 11 1746 ? 87 1561 ? 44 169 ? 0 223 ? 6 558 ? 27 561 ? 29 442 ? 14 703 ? 14 402 ? 16 403 ? 17 Figure 6: A cluster-architecture SPN completed the images in the left column and a Poon-architecture SPN completed the images in the right column. All images shown are left-half completions. The top row is the best results as measured by MSE and the bottom row is the worst results. Note the smooth edges in the cluster completions and the jagged edges in the Poon completions. 
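The per-pixel leaf setup used in these experiments can be sketched as follows; reading "one Gaussian covering each quartile" as the mean of the pixel's values within each quartile is our interpretation, not a detail the paper spells out.

# Sketch of the leaf setup: normalize each image to zero mean and unit
# standard deviation, then give every pixel four Gaussian means, one per
# quartile of that pixel's value histogram (our interpretation).
import numpy as np

def unit_region_means(images):          # images: (n_images, n_pixels)
    mu = images.mean(axis=1, keepdims=True)
    sd = images.std(axis=1, keepdims=True)
    X = (images - mu) / sd
    Xs = np.sort(X, axis=0)             # per-pixel sorted values
    quartiles = np.array_split(Xs, 4, axis=0)
    return np.stack([q.mean(axis=0) for q in quartiles])   # (4, n_pixels)

print(unit_region_means(np.random.randn(64, 25)).shape)    # (4, 25)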
We test the cluster and Poon architectures by learning on the Olivetti dataset [9], the faces from the Caltech-101 dataset [10], an artificial dataset that we generated, and the shuffled-Olivetti dataset, which the Olivetti dataset with the pixels randomly shuffled (all images are shuffled in the same way). The Caltech-101 faces were preprocessed as described by Poon & Domingos. The cluster architecture is compared to the Poon architectures using the negative log-likelihood (LLH) of the training and test sets as well as the MSE of the image completion results for the left half and bottom half of the images. We train ten cluster architecture SPNs and ten Poon architecture SPNs. Average results across the ten SPNs along with the standard deviation are given for each measurement. On the Olivetti and Caltech-101 Faces datasets the Poon architecture resulted in better training set LLH, but the cluster architecture generalized better, getting a better test set LLH (see Table 1). The cluster architecture was also clearly better at the image completion tasks as measured by MSE. The difference between the two architectures is most pronounced on the artificial dataset. The images in this dataset are created by pasting randomly-shaded circle- and diamond-shaped image patches on top of one another (see Figure 6), ensuring that various pixel patches are statistically independent. The cluster architecture outperforms the Poon architecture across all measures on this dataset (see Table 1); this is due to its ability to focus resources on non-rectangular regions. To demonstrate that the cluster architecture does not rely on the presence of spatially-local, strong interactions between the variables, we repeated the Olivetti experiment with the pixels in the images having been shuffled. In this experiment (see Table 1) the cluster architecture was, as expected, relatively unaffected by the pixel shuffling. The LLH measures remained basically unchanged from the Olivetti to the Olivetti-shuffled datasets. (The MSE results did not stay the same because the image completions happened over different subsets of the pixels.) On the other hand, the performance of the Poon architecture dropped considerably due to the fact that it was no longer able to take advantage of strong correlations between neighboring pixels. Figure 7 visually demonstrates the difference between the rectangular-regions Poon architecture and the arbitrarily-shaped-regions cluster architecture. Artifacts of the different region shapes can be seen in subfigure (a), where some regions are shaded lighter or darker, revealing region boundaries. Subfigure (b) compares the best of both architectures, showing image completion results on which both architectures did well, qualitatively speaking. Note how the Poon architecture produces results that look ?blocky?, whereas the cluster architecture produces results that are smoother-looking. 7 (a) (b) Figure 7: The completion results in subfigure (a) highlight the difference between the rectangularshaped regions of the Poon architecture (top image) and the blob-like regions of the cluster architecture (bottom image), artifacts of which can be seen in the completions. Subfigure (b) shows ground truth images, cluster-architecture SPN completions, and Poon-architecture SPN completions in the left, middle, and right columns respectively. Left-half completions are in the top row and bottom-half completions are in the bottom row. 
Table 2: Test set LLH values for the Olivetti, Olivetti45, and Olivetti4590 datasets for different values of k. For each dataset the best LLH value is marked in bold. Dataset / k Olivetti Olivetti45 Olivetti4590 1 650 523 579 2 653 495 576 3 671 508 550 4 685 529 554 5 711 541 577 6 716 528 595 7 717 544 608 8 741 532 592 Algorithm 1 expands a region graph k times (lines 12 and 13). The value of k can significantly affect test set LLH, as shown in Table 2. A value that is too low leads to an insufficiently powerful model and a value that is too high leads to a model that overfits the training data and generalizes poorly. A singly-expanded model (k = 1) is optimal for the Olivetti dataset. This may be due in part to the Olivetti dataset having only one distinct class of images (faces in a particular pose). Datasets with more image classes may benefit from additional expansions. To experiment with this hypothesis we create two new datasets: Olivetti45 and Olivetti4590. Olivetti45 is created by augmenting the Olivetti dataset with Olivetti images that are rotated by ?45 degrees. Olivetti4590 is built similarly but with rotations by ?45 degrees and by ?90 degrees. The Olivetti45 dataset, then, has two distinct classes of images: rotated and non-rotated. Similarly, Olivetti4590 has three distinct image classes. Table 2 shows that, as expected, the optimal value of k for the Olivetti45 and Olivetti4590 datasets is two and three, respectively. Note that the Olivetti test set LLH with k = 1 in Table 2 is better than the test set LLH reported in Table 1. This shows that the algorithm for automatically selecting k in Algorithm 1 is not optimal. Another option is to use a hold-out set to select k, although this method may not not be appropriate for small datasets. 5 Conclusion The algorithm for learning a cluster architecture is simple, fast, and effective. It allows the SPN to focus its resources on explaining the interactions between arbitrary subsets of input variables. And, being driven by data, the algorithm guides the allocation of SPN resources such that it is able to model the data more efficiently. Future work includes experimenting with alternative clustering algorithms, experimenting with methods for selecting the value of k, and experimenting with variations of Algorithm 2 such as generalizing it to handle partitions of size greater than two. 8 References [1] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527?1554, July 2006. [2] Hoifung Poon and Pedro Domingos. Sum-product networks: A new deep architecture. In Proceedings of the Twenty-Seventh Annual Conference on Uncertainty in Artificial Intelligence (UAI-11), pages 337?346, Corvallis, Oregon, 2011. AUAI Press. [3] Ryan Prescott Adams, Hanna M. Wallach, and Zoubin Ghahramani. Learning the structure of deep sparse graphical models. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010. [4] Nevin L. Zhang. Hierarchical latent class models for cluster analysis. Journal of Machine Learning Research, 5:697?723, December 2004. [5] Adnan Darwiche. A differential approach to inference in bayesian networks. Journal of the ACM, 50:280?305, May 2003. [6] Olivier Delalleau and Yoshua Bengio. Shallow vs. deep sum-product networks. In Advances in Neural Information Processing Systems 24, pages 666?674. 2011. [7] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009. [8] F. 
Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011. [9] F. S. Samaria and A. C. Harter. Parameterisation of a stochastic model for human face identification. In Proceedings of the Second IEEE Workshop on Applications of Computer Vision, pages 138-142, Dec 1994. [10] Li Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In IEEE CVPR 2004, Workshop on Generative-Model Based Vision, 2004.
Imitation Learning by Coaching Jason Eisner Department of Computer Science Johns Hopkins University Baltimore, MD 21218 [email protected] He He Hal Daum? III Department of Computer Science University of Maryland College Park, MD 20740 {hhe,hal}@cs.umd.edu Abstract Imitation Learning has been shown to be successful in solving many challenging real-world problems. Some recent approaches give strong performance guarantees by training the policy iteratively. However, it is important to note that these guarantees depend on how well the policy we found can imitate the oracle on the training data. When there is a substantial difference between the oracle?s ability and the learner?s policy space, we may fail to find a policy that has low error on the training set. In such cases, we propose to use a coach that demonstrates easy-to-learn actions for the learner and gradually approaches the oracle. By a reduction of learning by demonstration to online learning, we prove that coaching can yield a lower regret bound than using the oracle. We apply our algorithm to cost-sensitive dynamic feature selection, a hard decision problem that considers a user-specified accuracy-cost trade-off. Experimental results on UCI datasets show that our method outperforms state-of-the-art imitation learning methods in dynamic feature selection and two static feature selection methods. 1 Introduction Imitation learning has been successfully applied to a variety of applications [1, 2]. The standard approach is to use supervised learning algorithms and minimize a surrogate loss with respect to an oracle. However, this method ignores the difference between distributions of states induced by executing the oracle?s policy and the learner?s, thus has a quadratic loss in the task horizon T . A recent approach called Dataset Aggregation [3] (DAgger) yields a loss linear in T by iteratively training the policy in states induced by all previously learned policies. Its theoretical guarantees are relative to performance of the policy that best mimics the oracle on the training data. In difficult decision-making problems, however, it can be hard to find a good policy that has a low training error, since the oracle?s policy may resides in a space that is not imitable in the learner?s policy space. For instance, the task loss function can be highly non-convex in the learner?s parameter space and very different from the surrogate loss. When the optimal action is hard to achieve, we propose to coach the learner with easy-to-learn actions and let it gradually approach the oracle (Section 3). A coach trains the learner iteratively in a fashion similar to DAgger. At each iteration it demonstrates actions that the learner?s current policy prefers and have a small task loss. The coach becomes harsher by showing more oracle actions as the learner makes progress. Intuitively, this allows the learner to move towards a better action without much effort. Thus our algorithm achieves the best action gradually instead of aiming at an impractical goal from the beginning. We analyze our algorithm by a reduction to online learning and show that our approach achieves a lower regret bound than DAgger that uses the oracle action (Section 3.1). Our method is also related to direct loss minimization [4] for structured prediction and methods of selecting oracle translations in machine translation [5, 6] (Section 5). 1 Our approach is motivated by a formulation of budgeted learning as a sequential decision-making problem [7, 8] (Section 4). 
In this setting, features are acquired at a cost, such as computation time and experiment expense. In dynamic feature selection, we would like to sequentially select a subset of features for each instance at test time according to a user-specified accuracy-cost trade-off. Experimental results show that coaching has a more stable training curve and achieves lower task loss than state-of-the-art imitation learning algorithms. Our major contribution is a meta-algorithm for hard imitation learning tasks where the available policy space is not adequate for imitating the oracle. Our main theoretical result is Theorem 4 which states that coaching as a smooth transition from the learner to the oracle have a lower regret bound than only using the oracle. 2 Background In a sequential decision-making problem, we have a set of states S, a set of actions A and a policy space ?. An agent follows a policy ? : S ? A that determines which action to take in a given state. After taking action a in state s, the environment responds by some immediate loss L(s, a). We assume L(s, a) is bounded in [0, 1]. The agent is then taken to the next state s0 according to the transition probability P (s0 |s, a). We denote dt? the state distribution at time t after executing ? from time 1 to t ? 1, and d? the average state distribution of states over T steps. Then the T -step expected PT loss of ? is J(?) = t=1 Es?dt? [L(s, ?(s)] = T Es?d? [L(s, ?(s))]. A trajectory is a complete sequence of hs, a, L(s, a)i tuples from the starting state to a goal state. Our goal is to learn a policy ? ? ? that minimizes the task loss J(?). We assume that ? is a closed, bounded and non-empty convex set in Euclidean space; a policy ? can be parameterized by a vector w ? Rd . In imitation learning, we define an oracle that executes policy ? ? and demonstrates actions a?s = arg min L(s, a) in state s. The learner only attempts to imitate the oracle?s behavior without any a?A notion of the task loss function. Thus minimizing the task loss is reduced to minimizing a surrogate loss with respect to the oracle?s policy. 2.1 Imitation by Classification A typical approach to imitation learning is to use the oracle?s trajectories as supervised data and learn a policy (multiclass classifier) that predicts the oracle action under distribution of states induced by running the oracle?s policy. At each step t, we collect a training example (st , ? ? (st )), where ? ? (st ) is the oracle?s action (class label) in state st . Let `(s, ?, ? ? (s)) denote the surrogate loss of executing ? in state s with respect to ? ? (s). This can be any convex loss function used for training the classifier, for example, hinge loss in SVM. Using any standard supervised learning algorithm, we can learn a policy ? ? = arg min Es?d?? [`(s, ?, ? ? (s))]. (1) ??? We then bound J(? ? ) based on how well the learner imitates the oracle. Assuming `(s, ?, ? ? (s)) is an upper bound on the 0-1 loss and L(s, a) is bounded in [0,1], Ross and Bagnell [9] have shown that: Theorem 1. Let Es? d?? [`(s, ? ? , ? ? (s))] = , then J(? ? ) ? J(? ? ) + T 2 . One drawback of the supervised approach is that it ignores the fact that the state distribution is different for the oracle and the learner. When the learner cannot mimic the oracle perfectly (i.e. classification error occurs), the wrong action will change the following state distribution. 
Thus the learned policy is not able to handle situations where the learner follows a wrong path that is never chosen by the oracle, hence the quadratically increasing loss. In fact in the worst case, performance can approach random guessing, even for arbitrarily small  [10]. Ross et al. [3] generalized Theorem 1 to any policy that has  surrogate loss under its own state 0 distribution, i.e. Es?d? [`(s, ?, ? ? (s))] = . Let Q?t (s, ?) denote the t-step loss of executing ? in 0 the initial state and then running ? . We have the following: ? ? Theorem 2. If Q?T ?t+1 (s, ?) ? Q?T ?t+1 (s, ? ? ) ? u for all action a, t ? {1, 2, . . . , T }, then J(?) ? J(? ? ) + uT . 2 It basically says that when ? chooses a different action from ? ? at time step t, if the cumulative cost due to this error is bounded by u, then the relative task loss is O(uT ). 2.2 Dataset Aggregation The above problem of insufficient exploration can be alleviated by iteratively learning a policy trained under states visited by both the oracle and the learner. For example, during training one can use a ?mixture oracle? that at times takes an action given by the previous learned policy [11]. Alternatively, at each iteration one can learn a policy from trajectories generated by all previous policies [3]. In its simplest form, the Dataset Aggregation (DAgger) algorithm [3] works as follows. Let s? denote a state visited by executing ?. In the first iteration, we collect a training set D1 = {(s?? , ? ? (s?? ))} from the oracle (?1 = ? ? ) and learn a policy ?2 . This is the same as the supervised approach to imitation. In iteration i, we collect trajectories by executing the previous policy ?i andS form the training set Di by labeling s?i with the oracle action ? ? (s?i ); ?i+1 is then learned on D1 . . . Di . Intuitively, this enables the learner to make up for past failures to mimic the oracle. Thus we can obtain a policy that performs well under its own induced state distribution. 2.3 Reduction to Online Learning Let `i (?) = Es?d?i [`(s, ?, ? ? (s))] denote the expected surrogate loss of executing ? in states distributed according to d?i . In an online learning setting, in iteration i an algorithm executes policy ?i and observes loss `i (?i ). It then provides a different policy ?i+1 in the next iteration and observes `i+1 (?i+1 ). A no-regret algorithm guarantees that in N iterations N N 1 X 1 X `i (?i ) ? min `i (?) ? ?N ??? N N i=1 i=1 (2) and limN ?? ?N = 0. Assuming a strongly convex loss function, Follow-The-Leader is a simple no-regret algorithm. In Pi each iteration it picks the policy that works best so far: ?i+1 = arg min j=1 `j (?). Similarly, ??? in DAgger at iteration i we choose the policy that has the minimum surrogate loss on all previous data. Thus it can be interpreted as Follow-The-Leader where trajectories collected in each iteration are treated as one online-learning example. Assume that `(s, ?, ? ? (s)) is a strongly convex loss in ? upper bounding the 0-1 loss. We denote the sequence of learned policies ?1 , ?2 , . . . , ?N by ?1:N . Let N = PN min??? N1 i=1 Es?d?i [`(s, ?, ? ? (s))] be the minimum loss we can achieve in the policy space ?. In the infinite sample per iteration case, following proofs in [3] we have: ? ? Theorem 3. For DAgger, if N is O(uT log T ) and Q?T ?t+1 (s, ?)?Q?T ?t+1 (s, ? ? ) ? u, there exists a policy ? ? ?1:N s.t. J(?) ? J(? ? ) + uT N + O(1). This theorem holds for any no-regret online learning algorithm and can be generalized to the finite sample case as well. 
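For reference, since the plain-text rendering above scrambles the math, here are the key background quantities in clean notation (a restatement of what is stated above, nothing new):

J(\pi) = \sum_{t=1}^{T} \mathbb{E}_{s \sim d_\pi^t}[L(s, \pi(s))] = T \, \mathbb{E}_{s \sim d_\pi}[L(s, \pi(s))]

\hat{\pi} = \arg\min_{\pi \in \Pi} \mathbb{E}_{s \sim d_{\pi^*}}[\ell(s, \pi, \pi^*(s))]    (1)

Theorem 1: if \mathbb{E}_{s \sim d_{\pi^*}}[\ell(s, \hat{\pi}, \pi^*(s))] = \epsilon, then J(\hat{\pi}) \le J(\pi^*) + T^2 \epsilon.

Theorem 2: if \mathbb{E}_{s \sim d_\pi}[\ell(s, \pi, \pi^*(s))] = \epsilon and Q^{\pi^*}_{T-t+1}(s, a) - Q^{\pi^*}_{T-t+1}(s, \pi^*) \le u for all a and t, then J(\pi) \le J(\pi^*) + u T \epsilon.

No-regret property (2): \frac{1}{N} \sum_{i=1}^{N} \ell_i(\pi_i) - \min_{\pi \in \Pi} \frac{1}{N} \sum_{i=1}^{N} \ell_i(\pi) \le \gamma_N, with \gamma_N \to 0.

Theorem 3: for DAgger, if N is \tilde{O}(u T \log T), there exists \pi \in \pi_{1:N} with J(\pi) \le J(\pi^*) + u T \epsilon_N + O(1).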
3 Imitation by Coaching An oracle can be hard to imitate in two ways. First, the learning policy space is far from the space that the oracle policy lies in, meaning that the learner only has limited learning ability. Second, the environment information known by the oracle cannot be sufficiently inferred from the state, meaning that the learner does not have access to good learning resources. In the online learning setting, a too-good oracle may result in adversarially varying loss functions over iterations from the learner?s perspective. This may cause violent changes during policy updating. These difficulties result in a substantial gap between the oracle?s performance and the best performance achievable in the policy space ? (i.e. a large N in Theorem 3). 3 Algorithm 1 DAgger by Coaching Initialize D ? ? Initialize ?1 ? ? ? for i = 1 to N do Sample T -step trajectories using ?i Collect coaching dataset Di = {(s?i , arg max ?i ? score?i (s?i , a) ? L(s?i , a))} a?A S Aggregate datasets D ? D Di Train policy ?i+1 on D end for Return best ?i evaluated on validation set To address this problem, we define a coach in place of the oracle. To better instruct the learner, a coach should demonstrate actions that are not much worse than the oracle action but are easier to achieve within the learner?s ability. The lower an action?s task loss is, the closer it is to the oracle action. The higher an action is ranked by the learner?s current policy, the more it is preferred by the learner, thus easier to learn. Therefore, similar to [6], we define a hope action that combines the task loss and the score of the learner?s current policy. Let score?i (s, a) be a measure of how likely ?i chooses action a in state s. We define ? ?i by ? ?i (s) = arg max ?i ? score?i (s, a) ? L(s, a) (3) a?A where ?i is a nonnegative parameter specifying how close the coach is to the oracle. In the first iteration, we set ?1 = 0 as the learner has not learned any model yet. Algorithm 1 shows the training process. Our intuition is that when the learner has difficulty performing the optimal action, the coach should lower the goal properly and let the learner gradually achieving the original goal in a more stable way. 3.1 Theoretical Analysis Let `?i (?) = Es?d?i [`(s, ?, ? ?i (s))] denote the expected surrogate loss with respect to ? ?i . We denote PN ? 1 ?N = N min??? i=1 `i (?) the minimum loss of the best policy in hindsight with respect to hope actions. The main result of this paper is the following theorem: ? ? Theorem 4. For DAgger with coaching, if N is O(uT log T ) and Q?T ?t+1 (s, ?)?Q?T ?t+1 (s, ? ? ) ? u, there exists a policy ? ? ?1:N s.t. J(?) ? J(? ? ) + uT ?N + O(1). It is important to note that both the DAgger theorem and the coaching theorem provide a relative guarantee. They depend on whether we can find a policy that has small training error in each FollowThe-Leader step. However, in practice, for hard learning tasks DAgger may fail to find such a good policy. Through coaching, we can always adjust ? to create a more learnable oracle policy space, thus get a relatively good policy that has small training error, at the price of running a few more iterations. To prove this theorem, we first derive a regret bound for coaching, and then follows the proofs of DAgger. We consider a policy ? parameterized by a vector w ? Rd . Let ? : S ? A ? Rd be a feature map describing the state. The predicted action is a ??,s = arg max wT ?(s, a) (4) a?A and the hope action is a ??,s = arg max ? ? wT ?(s, a) ? L(s, a). 
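A minimal sketch of the coaching loop follows, with the hope action of Eq. (3) restated cleanly as \tilde{\pi}_i(s) = \arg\max_{a \in A} \lambda_i \cdot score_{\pi_i}(s, a) - L(s, a). The rollout environment, the choice of probability-based scoring, and the toy problem at the end are placeholders of ours, not the authors' code.

# DAgger-with-coaching sketch: label visited states with hope actions
# that blend the learner's current score against task loss, annealing
# lambda so the coach approaches the oracle over iterations.
import numpy as np
from sklearn.linear_model import LogisticRegression

def hope_action(s, actions, score, task_loss, lam):
    vals = [lam * score(s, a) - task_loss(s, a) for a in actions]
    return actions[int(np.argmax(vals))]

def coaching(rollout, actions, task_loss, n_iters=8, lam0=1.0):
    oracle = lambda s: min(actions, key=lambda a: task_loss(s, a))
    policy = oracle                              # pi_1 = pi* with lambda_1 = 0
    score = lambda s, a: 0.0
    lam = 0.0
    X, Y, learned = [], [], []
    for i in range(n_iters):
        states = rollout(policy)                 # trajectories under current policy
        X += states
        Y += [hope_action(s, actions, score, task_loss, lam) for s in states]
        clf = LogisticRegression().fit(np.array(X), np.array(Y))
        policy = lambda s, c=clf: int(c.predict(np.array([s]))[0])
        # Learner's preference for action a in state s, via class probability.
        score = (lambda s, a, c=clf:
                 c.predict_proba(np.array([s]))[0][list(c.classes_).index(a)])
        learned.append(policy)
        lam = lam0 * np.exp(-i)                  # lambda_2 = 1, decayed by e^{-1}
    return learned                               # best picked on validation data

# Toy usage: 2-D states, two actions, 0-1 task loss favoring sign(s[0]);
# the rollout drifts the state under the chosen action.
rng = np.random.default_rng(0)
def task_loss(s, a):
    return float(a != (s[0] > 0))
def rollout(policy, episodes=10, T=10):
    out = []
    for _ in range(episodes):
        s = rng.normal(size=2)
        for _ in range(T):
            out.append(s.copy())
            s = s + (0.1 if policy(s) else -0.1) + rng.normal(scale=0.05, size=2)
    return out

policies = coaching(rollout, actions=[0, 1], task_loss=task_loss)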
(5) a?A We assume that the loss function ` : Rd ? R is a convex upper bound of the 0-1 loss. Further, it can be written as `(s, ?, ? ? (s)) = f (wT ?(s, ?(s)), ? ? (s)) for a function f : R ? R and a feature vector k?(s, a)k2 ? R. We assume that f is twice differentiable and convex in wT ?(s, ?(s)), which is common for most loss functions used by supervised classification methods. 4 It has been shown that given a strongly convex loss function `, Follow-The-Leader has O(log N ) regret [12, 13]. More specifically, given the above assumptions we have: Theorem 5. Let D = maxw1 ,w2 ?Rd kw1 ? w2 k2 be the diameter of the convex set Rd . For some b, m > 0, assume that for all w ? Rd , we have |f 0 (wT ?(s, a))| ? b and |f 00 (wT ?(s, a))| ? m. Then Follow-The-Leader on functions ` have the following regret: N X i=1 `i (?i ) ? min ??? N X i=1 N X `i (?) ? i=1 `i (?i ) ? N X `i (?i+1 ) i=1     DRmN 2nb2 log +1 m b ? To analyze the regret using surrogate loss with respect to hope actions, we use the following lemma: PN PN ? PN PN ? Lemma 1. `i (?i ) ? min??? `i (?) ? `i (?i ) ? `i (?i+1 ). i=1 i=1 Proof. We prove inductively that i=1 i=1 PN ? PN ? i=1 `i (?i+1 ) ? min??? i=1 `i (?). When N = 1, by Follow-The-Leader we have ?2 = arg min `?1 (?), thus `?1 (?2 ) = min??? `?1 (?). ??? Assume correctness for N ? 1, then N X i=1 `?i (?i+1 ) ? ? min ??? N ?1 X N ?1 X `?i (?) + `?N (?N +1 ) (inductive assumption) i=1 `?i (?N +1 ) + `?N (?N +1 ) = min ??? i=1 The last equality is due to the fact that ?N +1 N X `?i (?) i=1 PN = arg min i=1 `?i (?). ??? To see how learning from ? ? allows us to approaching ? ? , we derive the regret bound of PN PN ?i i=1 `i (?i ) ? min??? i=1 `i (?). Theorem 6. Assume that wi is upper bounded by C, i.e. for all i kwi k2 ? C, k?(s, a)k2 ? R and |L(s, a) ? L(s, a0 )| ?  for some action a, a0 ? A. Assume ?i is non-increasing and define n? as  . Let `max be an upper bound on the loss, i.e. for all i, the largest n < N such that ?n? ? 2RC `i (s, ?i , ? ? (s)) ? `max . We have     N N X X 2nb2 DRmN `i (?i ) ? min `?i (?) ? 2`max n? + log +1 ??? m b i=1 i=1 Proof. Given Lemma 1, we only need to bound the RHS, which can be written as ! ! N N X X `i (?i ) ? `?i (?i ) + `?i (?i ) ? `?i (?i+1 ) . i=1 (6) i=1 To bound the first term, we consider a binary action space A = {1, ?1} for clarity. The proof can be extended to the general case in a straightforward manner. Note that in states where a?s = a ??,s , `(s, ?, ? ? (s)) = `(s, ?, ? ? (s)). Thus we only need to consider ? situations where as 6= a ??,s : = `i (?i ) ? `?i (?i ) h i Es?d?i (`i (s, ?i , ?1) ? `i (s, ?i , 1)) 1{s : a??i ,s =1,a?s =?1} h i +Es?d?i (`i (s, ?i , 1) ? `i (s, ?i , ?1)) 1{s:?a?i ,s =?1,a?s =1} 5 In the binary case, we define ?L(s) = L(s, 1) ? L(s, ?1) and ??(s) = ?(s, 1) ? ?(s, ?1). Case 1 a ??i ,s = 1 and a?s = ?1. a ??i ,s = 1 implies ?i wTi ??(s) ? ?L(s) and a?s = ?1 implies ?L(s) > 0. Together we have ?L(s) ? (0, ?i wTi ??(s)]. From this we know that wTi ??(s) ? 0 since ?i > 0, which implies a ??i = 1. Therefore we have p(a?s = ?1, a ??i ,s = 1, a ??i ,s = 1) = p(? a?i ,s = 1|a?s = ?1, a ??i ,s = 1)p(? a?i , s = 1)p(a?s = ?1)   ?L(s) = p ?i ? T ? p(wTi ??(s) ? 0) ? p(?L(s) > 0) wi ??(s)       ? 1 ? 1 = p ?i ? ? p ?i ? 2RC 2RC  , we have Let n? be the largest n < N such that ?i ? 2RC N X i=1 h i Es?d?i (`i (s, ?i , ?1) ? `i (s, ?i , 1)) 1{s : a??i ,s =1,a?s =?1} ? `max n? eN For example, let ?i decrease exponentially, e.g., ?i = ?0 e?i . If ?0 < , Then n? = 2RC 2?0 RC dlog e.  Case 2 a ??i ,s = ?1 and a?s = 1. 
This is symmetrical to Case 1. Similar arguments yield the same bound. In the online learning setting, imitating the coach is to obsearve the loss `?i (?i ) and learn a policy Pi ? ?i+1 = arg min `j (?) at iteration i. This is indeed equivalent to Follow-The-Leader except ??? j=1 that we replaced the loss function. Thus Theorem 5 gives the bound of the second term.     DRmN 2nb2 log +1 . Equation 6 is then bounded by 2`max n? + m b Now we can prove Theorem 4. Consider the best policy in ?1:N , we have N 1 X Es?d?i [`(s, ?i , ? ? (s))] N i=1     2`max n? 2nb2 DRmN ? ?N + + log +1 N mN b min Es?d? [`(s, ?, ? ? (s))] ? ???1:N When N is ?(T log T ), the regret is O(1/T ). Applying Theorem 2 completes the proof. 4 Experiments We apply imitation learning to a novel dynamic feature selection problem. We consider the setting where a pretrained model (data classifier) on a complete feature set is given and each feature has a known cost. At test time, we would like to dynamically select a subset of features for each instance and be able to explicitly specify the accuracy-cost trade-off. This can be naturally framed as a sequential decision-making problem. The state includes all features selected so far. The action space includes a set of non-selected features and the stop action. At each time step, the policy decides whether to stop acquiring features and make a prediction; if not, which feature(s) to purchase next. Achieving an accuracy-cost trade-off corresponds to finding the optimal policy minimizing a loss function. We define the loss function as a combination of accuracy and cost: L(s, a) = ? ? cost(s) ? margin(a) 6 (7) 1.00 0.95 0.90 reward accuracy 0.55 0.50 0.45 0.85 0.80 0.75 |w|/cost Forward DAgger Coaching Oracle 0.70 DAgger Coaching 0.40 0.26 0.28 0.30 0.32 0.34 0.36 average cost per example 0.65 0.60 0.0 0.38 0.2 (a) Reward of DAgger and Coaching. 0.4 0.6 average cost per example 0.8 1.0 (b) Radar dataset. 0.9 0.90 0.85 accuracy accuracy 0.8 0.7 |w|/cost Forward DAgger Coaching Oracle 0.6 0.5 0.4 0.0 0.2 0.4 0.6 average cost per example 0.8 0.80 0.75 |w|/cost Forward DAgger Coaching Oracle 0.70 0.65 0.60 0.0 1.0 (c) Digit dataset. 0.2 0.4 0.6 average cost per example 0.8 1.0 (d) Segmentation dataset. Figure 1: 1(a) shows reward versus cost of DAgger and Coaching over 15 iterations on the digit dataset with ? = 0.5. 1(b) to 1(d) show accuracy versus cost on the three datasets. For DAgger and Coaching, we show results when ? = 0, 0.1, 0.25, 0.5, 1.0, 1.5, 2. where margin(a) denote the margin of classifying the instance after action a; cost(s) denote the user-defined cost of all selected features in the current state s; and ? is a user-specified trade-off parameter. Since we consider feature selection for each single instance here, the average margin reflects accuracy on the whole datasets. 4.1 Dynamic Feature Selection by Imitation Learning Ideally, an oracle should lead to a subset of features having the maximum reward. However, we have too large a state space to exhaustedly search for the optimal subset of features. In addition, the oracle action may not be unique since the optimal subset of features do not have to be selected in a fixed order. We address this problem by using a forward-selection oracle. Given a state s, the oracle iterates through the action space and calculates each action?s loss; it then chooses the action that leads to the minimum immediate loss in the current state. 
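Restating Eq. (7) cleanly, the episode loss is L(s, a) = \alpha \cdot cost(s) - margin(a). The sketch below runs one test-time episode under this loss together with the greedy forward-selection oracle just described; the pretrained linear classifier, the per-feature costs, and the zero-imputation of unbought features are stand-ins of ours.

# Dynamic feature selection as the sequential decision problem above:
# states are the bought feature set, actions buy a feature or stop,
# and the episode loss is alpha * cost(s) - margin (Eq. 7).
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=10)                  # pretrained linear model (stand-in)
costs = rng.uniform(0.1, 1.0, size=10)   # user-defined feature costs

def margin(x, bought, y):
    z = np.zeros_like(x)
    z[list(bought)] = x[list(bought)]    # unbought features imputed as zero
    return y * float(w @ z)

def forward_oracle(x, y, alpha, bought):
    # Greedy oracle: try every remaining action (each feature, plus stop)
    # and take the one with minimum immediate loss in the current state.
    def loss_after(extra):
        b = bought | extra
        return alpha * costs[list(b)].sum() - margin(x, b, y)
    best_a, best_l = "stop", loss_after(set())
    for j in set(range(len(x))) - bought:
        l = loss_after({j})
        if l < best_l:
            best_a, best_l = j, l
    return best_a

def run_episode(x, y, alpha):
    bought = set()
    while True:
        a = forward_oracle(x, y, alpha, bought)
        if a == "stop":
            break
        bought.add(a)
    return bought, alpha * costs[list(bought)].sum() - margin(x, bought, y)

x, y = rng.normal(size=10), 1
print(run_episode(x, y, alpha=0.5))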
We define ?(st , a) as a concatenation of the current feature vector and a meta-feature vector that provides information about previous classification results and cost. In most cases, our oracle can achieve high accuracy with rather small cost. Considering a linear classifier, as the oracle already knows the correct class label of an instance, it can simply choose, for example, a positive feature that has a positive weight to correctly classify a positive instance. In addition, at the start state even when ?(s0 , a) are almost the same for all instances, the oracle may tend to choose features that favor the instance?s class. This makes the oracle?s behavior very hard to imitate. In the next section we show that in this case coaching achieves better results than using an oracle. 7 4.2 Experimental Results We perform experiments on three UCI datasets (radar signal, digit recognition, image segmentation). Random costs are assigned to features. We first compare the learning curve of DAgger and Coaching over 15 iterations on the digit dataset with ? = 0.5 in Figure 1(a). We can see that DAgger makes a big improvement in the second iteration, while Coaching takes smaller steps but achieves higher reward gradually. In addition, the reward of Coaching changes smoothly and grows stably, which means coaching avoids drastic change of the policy. To test the effect of dynamic selection, we compare our results with DAgger and two static feature selection baselines that sequentially add features according to a ranked list. The first baseline (denoted by Forward) ranks features according to the standard forward feature selection algorithm without any notion of the cost. The second baseline (denoted by |w|/cost) uses a cost-sensitive ranking scheme based on |w|/cost, the weight of a feature divided by its cost. Therefore, features having high scores are expected to be cost-efficient. We give the results in Figure 1(b) to 1(d). To get results of our dynamic feature selection algorithm at different costs, we set ? in the loss function to be 0.0, 0.1, 0.25, 0.5, 1.0, 1.5, 2.0 and use the best policy evaluated on the development set for each ?. For coaching, we set ?2 = 1 and decrease it by e?1 in each iteration. First, we can see that dynamically selecting features for each instance significantly improves the accuracy at a small cost. Sometimes, it even achieves higher accuracy than using all features. Second, we notice that there is a substantial gap between the learned policy?s performance and the oracle?s, however, in almost all settings Coaching achieves higher reward, i.e. higher accuracy at a lower cost as shown in the figures. Through coaching, we can reduce the gap by taking small steps towards the oracle. However, the learned policy is still much worse compared to the oracle?s policy. This is because coaching is still inherently limited by the insufficient policy space, which can be fixed by using expensive kernels and nonlinear policies. 5 Related Work The idea of using hope action is similar to what Chiang et al. [6] and Liang et al. [5] have used for selecting oracle translations in machine translation. They maximized a linear combination of the BLEU score (similar to negative task loss in our case) and the model score to find good translations that are easier to train against. More recently, McAllester et al. 
[4] defined the direct label, which combines model score and task loss, from a different view: they showed that using perceptron-like training methods and updating towards the direct label is equivalent to performing gradient descent on the task loss. Coaching is also similar to proximal methods in online learning [14, 15]. They avoid large changes during updating and achieve the original goal gradually. In proximal regularization, we want the updated parameter vector to stay close to the previous one. Do et al. [14] showed that solving the original learning problem through a sequence of modified optimization tasks whose objectives have greater curvature can achieve a lower regret bound.

6 Conclusion and Future Work

In this paper, we consider the situation in imitation learning where an oracle's performance is far from what is achievable in the learning space. We propose a coaching algorithm that lets the learner target easier goals first and gradually approach the oracle. We show that coaching has a lower regret bound both theoretically and empirically. In the future, we are interested in formally defining the hardness of a problem, so that we know exactly in which cases coaching is more suitable than DAgger. Another direction is to develop a similar coaching process in online convex optimization by optimizing a sequence of approximating functions. We are also interested in applying coaching to more complex structured prediction problems in natural language processing and computer vision.

References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML, 2004.
[2] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. 2009.
[3] Stéphane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
[4] D. McAllester, T. Hazan, and J. Keshet. Direct loss minimization for structured prediction. In NIPS, 2010.
[5] P. Liang, A. Bouchard-Côté, D. Klein, and B. Taskar. An end-to-end discriminative approach to machine translation. In ACL, 2006.
[6] D. Chiang, Y. Marton, and P. Resnik. Online large-margin training of syntactic and structural translation features. In EMNLP, 2008.
[7] D. Benbouzid, R. Busa-Fekete, and B. Kégl. Fast classification using sparse decision DAGs. In ICML, 2012.
[8] G. Dulac-Arnold, L. Denoyer, P. Preux, and P. Gallinari. Datum-wise classification: a sequential approach to sparsity. In ECML, 2011.
[9] Stéphane Ross and J. Andrew Bagnell. Efficient reductions for imitation learning. In AISTATS, 2010.
[10] M. Kääriäinen. Lower bounds for reductions. In Atomic Learning Workshop, 2006.
[11] Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning Journal (MLJ), 2009.
[12] Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal. Logarithmic regret algorithms for online convex optimization. In COLT, pages 499-513, 2006.
[13] Sham M. Kakade and Shai Shalev-Shwartz. Mind the duality gap: Logarithmic regret algorithms for online optimization. In NIPS, 2008.
[14] C. B. Do, Q. Le, and C. S. Foo. Proximal regularization for online and batch learning. In ICML, 2009.
[15] H. Brendan McMahan. Follow-the-regularized-leader and mirror descent: Equivalence theorems and L1 regularization. JMLR, 15:525-533, 2011.
A Divide-and-Conquer Procedure for Sparse Inverse Covariance Estimation

Cho-Jui Hsieh, Dept. of Computer Science, University of Texas, Austin, [email protected]
Inderjit S. Dhillon, Dept. of Computer Science, University of Texas, Austin, [email protected]
Pradeep Ravikumar, Dept. of Computer Science, University of Texas, [email protected]
Arindam Banerjee, Dept. of Computer Science & Engineering, University of Minnesota, Twin Cities, [email protected]

Abstract

We consider the composite log-determinant optimization problem arising from the ℓ₁-regularized Gaussian maximum likelihood estimator of a sparse inverse covariance matrix, in a high-dimensional setting with a very large number of variables. Recent work has shown this estimator to have strong statistical guarantees in recovering the true structure of the sparse inverse covariance matrix, or alternatively the underlying graph structure of the corresponding Gaussian Markov Random Field, even in very high-dimensional regimes with a limited number of samples. In this paper, we are concerned with the computational cost of solving the above optimization problem. Our proposed algorithm partitions the problem into smaller sub-problems, and uses the solutions of the sub-problems to build a good approximation for the original problem. Our key idea for the divide step, which obtains a sub-problem partition, is as follows: we first derive a tractable bound on the quality of the approximate solution obtained from solving the corresponding sub-divided problems. Based on this bound, we propose a clustering algorithm that attempts to minimize this bound, in order to find effective partitions of the variables. For the conquer step, we use the approximate solution, i.e., the solution resulting from solving the sub-problems, as an initial point for solving the original problem, and thereby achieve a much faster computational procedure.

1 Introduction

Let {x₁, x₂, ..., xₙ} be n sample points drawn from a p-dimensional Gaussian distribution N(μ, Σ), also known as a Gaussian Markov Random Field (GMRF), where each xᵢ is a p-dimensional vector. An important problem is that of recovering the covariance matrix, or its inverse, given the samples in a high-dimensional regime where n ≪ p, and p could number in the tens of thousands. In such settings, the computational efficiency of any estimator becomes very important. A popular approach for such high-dimensional inverse covariance matrix estimation is to impose the structure of sparsity on the inverse covariance matrix (which can be shown to encourage conditional independences among the Gaussian variables), and to solve the following ℓ₁-regularized maximum likelihood problem:

$$\arg\min_{\Theta \succ 0}\ \big\{-\log\det\Theta + \mathrm{tr}(S\Theta) + \lambda\|\Theta\|_1\big\} = \arg\min_{\Theta \succ 0} f(\Theta), \qquad (1)$$

where $S = \frac{1}{n}\sum_{i=1}^{n}(x_i - \hat\mu)(x_i - \hat\mu)^T$ is the sample covariance matrix and $\hat\mu = \frac{1}{n}\sum_{i=1}^{n} x_i$ is the sample mean. The key focus of this paper is on developing computationally efficient methods to solve this composite log-determinant optimization problem.

Due in part to its importance, many optimization methods [4, 1, 9, 7, 6] have been developed in recent years for solving (1). However, these methods have a computational complexity of at least O(p³) (typically this is the complexity per iteration). It is therefore hard to scale these procedures to problems with tens of thousands of variables. For instance, in a climate application, if we model a GMRF over random variables corresponding to each Earth grid point, the number of nodes can easily number in the tens of thousands.
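As a point of reference, the objective f(Θ) of (1) is cheap to evaluate once a candidate Θ is available; the sketch below (plain NumPy, illustrative only) is useful for sanity-checking solutions rather than computing them.

```python
import numpy as np

def glasso_objective(Theta, S, lam):
    """f(Theta) = -log det(Theta) + tr(S @ Theta) + lam * ||Theta||_1.
    The L1 term here penalizes all entries, diagonal included, as in (1)."""
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    return -logdet + np.trace(S @ Theta) + lam * np.abs(Theta).sum()
```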
For this data, the recently proposed state-of-the-art method QUIC [6], which uses a Newton-like method to solve (1), takes more than 10 hours to converge. A natural strategy when the computational complexity of a procedure scales poorly with the problem size is a divide and conquer strategy: given a partition of the set of nodes, we can first solve the ℓ₁-regularized MLE over the sub-problems individually, and then, in a second step, aggregate the solutions together to get $\hat\Theta$. But how do we come up with a suitable partition? The main contribution of this paper is to provide a principled answer to this question. As we show, our resulting divide and conquer procedure produces overwhelming improvements in computational efficiency.

Interestingly, [8] recently proposed a decomposition-based method for GMRFs. They first observe the following useful property of the composite log-determinant optimization problem in (1): if we threshold the off-diagonal elements of the sample covariance matrix S, and the resulting thresholded matrix is block-diagonal, then the corresponding inverse covariance matrix has the same block-diagonal sparsity structure as well. Using this property, they decompose the problem along these block-diagonal components and solve the components separately, thus achieving a sharp computational gain. A major drawback of the approach of [8], however, is that the decomposition of the thresholded sample covariance matrix can often be very unbalanced; indeed, in many of our real-life examples, we found that the decomposition resulted in one giant component and several very small components. In these cases, the approach in [8] is only a bit faster than directly solving the entire problem.

In this paper, we propose a different strategy based on the following simple idea. Suppose we are given a particular partitioning and solve the sub-problems specified by the partition components. The resulting decomposed estimator $\hat\Theta$ clearly need not equal the ℓ₁-regularized MLE (1). However, can we use bounds on the deviation to propose a clustering criterion? We first derive a bound on $\|\hat\Theta - \Theta^*\|_F$ based on the off-diagonal error of the partition. Based on this bound, we propose a normalized-cut spectral clustering algorithm to minimize the off-diagonal error, which is able to find a balanced partition such that $\hat\Theta$ is very close to $\Theta^*$. Interestingly, we show that this clustering criterion can also be motivated as leveraging a property of the ℓ₁-regularized MLE (1) more general than that used in [8]. In the "conquering" step, we then use $\hat\Theta$ to initialize an iterative solver for the original problem (1). As we show, the resulting algorithm is much faster than other state-of-the-art methods. For example, our algorithm can achieve an accurate solution for the climate data problem in 1 hour, whereas directly solving it takes 10 hours.

In Section 2, we outline the standard skeleton of a divide and conquer framework for GMRF estimation. The key step in such a framework is to come up with a suitable and efficient clustering criterion. In Section 3, we outline our clustering criteria. Finally, in Section 4 we show that, in practice, our method achieves impressive improvements in computational efficiency.

2 The Proposed Divide and Conquer Framework

We first set up some notation. In this paper, we consider each p × p matrix X as an adjacency matrix, where V = {1, ..., p} is the node set and $X_{ij}$ is the weighted link between node i and node j.
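For concreteness, the decomposition of [8] discussed above reduces to thresholding |S| at λ and extracting connected components; a minimal sketch using SciPy's graph utilities (illustrative, not the authors' code) follows.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def threshold_components(S, lam):
    """Zero out off-diagonal entries with |S_ij| <= lam and return the
    connected components; per [8], when the thresholded matrix is
    block-diagonal, the MLE (1) decouples across these components."""
    A = (np.abs(S) > lam).astype(int)
    np.fill_diagonal(A, 0)
    n_comp, labels = connected_components(csr_matrix(A), directed=False)
    return n_comp, labels
```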
We will use $\{V_c\}_{c=1}^k$ to denote a disjoint partitioning of the node set V, and each $V_c$ will be called a partition or a cluster. Given a partition $\{V_c\}_{c=1}^k$, our divide and conquer algorithm first solves the GMRF problem on each node partition to get the inverse covariance matrices $\{\Theta^{(c)}\}_{c=1}^k$, and then uses the following matrix

$$\hat\Theta = \begin{bmatrix} \Theta^{(1)} & 0 & \cdots & 0 \\ 0 & \Theta^{(2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Theta^{(k)} \end{bmatrix} \qquad (2)$$

to initialize the solver for the whole GMRF. In this paper we use $X^{(c)}$ to denote the submatrix $X_{V_c, V_c}$ for any matrix X. Notice that any sparse inverse covariance solver can be used in our framework; however, in this paper we focus on the state-of-the-art method QUIC [6] as the base solver, which was shown to have super-linear convergence when close to the solution. Using a better starting point enables QUIC to reach this region of super-linear convergence more quickly, as we show later in our experiments. The skeleton of the divide and conquer framework is quite simple and is summarized in Algorithm 1.

Algorithm 1: Divide and Conquer method for Sparse Inverse Covariance Estimation
Input: Empirical covariance matrix S, scalar λ
Output: Θ*, the solution of (1)
1: Obtain a partition of the nodes $\{V_c\}_{c=1}^k$;
2: for c = 1, ..., k do
3:   Solve (1) on $S^{(c)}$, the subset of variables in $V_c$, to get $\Theta^{(c)}$;
4: end
5: Form $\hat\Theta$ from $\Theta^{(1)}, \Theta^{(2)}, \ldots, \Theta^{(k)}$ as in (2);
6: Use $\hat\Theta$ as an initial point to solve the whole problem (1);

In order for Algorithm 1 to be efficient, we require that $\hat\Theta$ defined in (2) be close to the optimal solution of the original problem $\Theta^*$. In the following, we derive a bound for $\|\Theta^* - \hat\Theta\|_F$. Based on this bound, we propose a spectral clustering algorithm to find an effective partitioning of the nodes.

2.1 Hierarchical Divide and Conquer Algorithm

Assume we conduct a k-way clustering; then the initial time for solving the sub-problems is at least $O(k(p/k)^3) = O(p^3/k^2)$, where p denotes the dimensionality. When we consider k = 2, the divide and conquer algorithm can be at most 4 times faster than the original one. One can increase k; however, a larger k entails a worse initial point for training the whole problem. Based on this observation, we consider a hierarchical version of our divide-and-conquer algorithm. For solving the sub-problems, we can again apply a divide and conquer algorithm. In this way, the initial time can be much less than $O(p^3/k^2)$ if we use the divide and conquer algorithm hierarchically at each level. In the experiments, we will see that this hierarchical method can further improve the performance of the divide-and-conquer algorithm.

3 Main Results: Clustering Criteria for GMRF

This section outlines the main contribution of this paper: coming up with suitable and efficient clustering criteria for use within the divide and conquer framework of the previous section.

3.1 Bounding the distance between Θ* and Θ̂

To start, we discuss the following result from [8], which we reproduce using the notation of this paper for convenience. Specifically, [8] shows that when all the between-cluster edges in S have absolute values smaller than λ, $\Theta^*$ will have a block-diagonal structure.

Theorem 1 ([8]). For any λ > 0 and a given partition $\{V_c\}_{c=1}^k$, if $|S_{ij}| \le \lambda$ for all i, j in different partitions, then $\Theta^* = \hat\Theta$, where $\Theta^*$ is the optimal solution of (1) and $\hat\Theta$ is as defined in (2).

As a consequence, if a partition $\{V_c\}_{c=1}^k$ satisfies the assumption of Theorem 1, $\hat\Theta$ and $\Theta^*$ will be
the same, and the last step of Algorithm 1 is not needed anymore. Therefore the result in [8] may be viewed as a special case of our divide-and-conquer Algorithm 1. However, in most real examples a perfect partitioning as in Theorem 1 does not exist, which motivates a divide and conquer framework that does not need assumptions as stringent as those in Theorem 1. To allow a more general relationship between $\Theta^*$ and $\hat\Theta$, we first prove a similar property for the following generalized inverse covariance problem:

$$\Theta^* = \arg\min_{\Theta \succ 0}\Big\{-\log\det\Theta + \mathrm{tr}(S\Theta) + \sum_{i,j}\Lambda_{ij}|\Theta_{ij}|\Big\} = \arg\min_{\Theta \succ 0} f_\Lambda(\Theta). \qquad (3)$$

In the following, we use $1_\lambda$ to denote a matrix with all elements equal to λ; therefore (1) is a special case of (3) with $\Lambda = 1_\lambda$. In (3), the regularization parameter Λ is a p × p matrix, where each element corresponds to a weighted regularization of the corresponding element of Θ. We can then prove the following theorem, a generalization of Theorem 1.

Theorem 2. For any matrix regularization parameter Λ (with $\Lambda_{ij} > 0$ for all i, j) and a given partition $\{V_c\}_{c=1}^k$, if $|S_{ij}| \le \Lambda_{ij}$ for all i, j in different partitions, then the solution of (3) is the block-diagonal matrix $\hat\Theta$ defined in (2), where $\Theta^{(c)}$ is the solution of (3) with sample covariance $S^{(c)}$ and regularization parameter $\Lambda^{(c)}$.

Proof. Consider the dual problem of (3):

$$\max_{W \succ 0}\ \log\det W \quad \text{s.t. } |W_{ij} - S_{ij}| \le \Lambda_{ij}\ \forall i, j. \qquad (4)$$

Based on the condition stated in the theorem, we can easily verify that $\bar W = \hat\Theta^{-1}$ is a feasible solution of (4) with objective function value $\sum_{c=1}^{k}\log\det \bar W^{(c)}$. To show that $\bar W$ is the optimal solution of (4), consider an arbitrary feasible solution $\tilde W$. From Fischer's inequality [2], $\det\tilde W \le \prod_{c=1}^{k}\det\tilde W^{(c)}$ for $\tilde W \succ 0$. Since $\bar W^{(c)}$ is the optimizer of the c-th block, $\det\tilde W^{(c)} \le \det\bar W^{(c)}$ for all c, which implies $\log\det\tilde W \le \log\det\bar W$. Therefore $\bar W$ is the primal optimal solution.

Next we apply Theorem 2 to develop a decomposition method. Assume our goal is to solve (1) and we have clusters $\{V_c\}_{c=1}^k$ which may not satisfy the assumption of Theorem 1. We start by choosing a matrix regularization weight $\bar\Lambda$ such that

$$\bar\Lambda_{ij} = \begin{cases} \lambda & \text{if } i, j \text{ are in the same cluster,} \\ \max(|S_{ij}|, \lambda) & \text{if } i, j \text{ are in different clusters.} \end{cases} \qquad (5)$$

Now consider the generalized inverse covariance problem (3) with this specified $\bar\Lambda$. By construction, the assumption of Theorem 2 holds for $\bar\Lambda$, so we can decompose this problem into k sub-problems; for each cluster $c \in \{1, \ldots, k\}$, the sub-problem has the following form:

$$\Theta^{(c)} = \arg\min_{\Theta \succ 0}\big\{-\log\det\Theta + \mathrm{tr}(S^{(c)}\Theta) + \lambda\|\Theta\|_1\big\},$$

where $S^{(c)}$ is the sample covariance matrix of cluster c. Therefore, $\hat\Theta$ is the optimal solution of problem (3) with $\bar\Lambda$ as the regularization parameter.

Based on this observation, we now provide another view of our divide and conquer algorithm. Considering the dual (4) of the sparse inverse covariance estimation problem with weighted regularization, Algorithm 1 can be seen to first solve (4) with $\Lambda = \bar\Lambda$ defined in (5) to get the initial point $\bar W$, and then to solve (4) with $\Lambda = 1_\lambda$. We thus initially solve the problem with looser bound constraints to get an initial guess, and then solve the problem with the tighter constraints. Intuitively, if the relaxed constraints $\bar\Lambda$ are close to the real constraint $1_\lambda$, the solutions $W^*$ and $\bar W$ will be close to each other. In the following we derive a bound based on this observation. For convenience, we use $P^\lambda$ to denote the original dual problem (4) with $\Lambda = 1_\lambda$, and $P^{\bar\Lambda}$
to denote the relaxed dual problem with the edge weights defined in (5). Based on the above discussion, $W^* = (\Theta^*)^{-1}$ is the solution of $P^\lambda$ and $\bar W = \hat\Theta^{-1}$ is the solution of $P^{\bar\Lambda}$. We define E as the following matrix:

$$E_{ij} = \begin{cases} 0 & \text{if } i, j \text{ are in the same cluster,} \\ \max(|S_{ij}| - \lambda,\ 0) & \text{otherwise.} \end{cases} \qquad (6)$$

If E = 0, all the off-diagonal elements are below the threshold λ, so $W^* = \bar W$ by Theorem 2. In the following we consider the more interesting case where E ≠ 0. In this case $\|E\|_F$ measures how much the off-diagonal elements exceed the threshold λ, and a good clustering algorithm should be able to find a partition that minimizes $\|E\|_F$. In the following theorem we show that $\|W^* - \bar W\|_F$ can be bounded by $\|E\|_F$, and therefore $\|\Theta^* - \hat\Theta\|_F$ can also be bounded by $\|E\|_F$:

Theorem 3. If there exists a γ > 0 such that $\|E\|_2 \le (1-\gamma)\frac{1}{\|W^*\|_2}$, then

$$\|W^* - \bar W\|_F < \frac{p\,\max(\sigma_{\max}(W^*),\ \sigma_{\max}(\bar W))}{\gamma\,\sigma_{\min}(\bar W)}\,\|E\|_F, \qquad (7)$$

$$\|\Theta^* - \hat\Theta\|_F \le \frac{p\,\max(\sigma_{\max}(\hat\Theta),\ \sigma_{\max}(\Theta^*))\,\sigma_{\max}(\hat\Theta)^2}{\gamma\,\min(\sigma_{\min}(\Theta^*),\ \sigma_{\min}(\hat\Theta))}\,\|E\|_F, \qquad (8)$$

where $\sigma_{\min}(\cdot)$ and $\sigma_{\max}(\cdot)$ denote the minimum/maximum singular values.

Proof. To prove Theorem 3, we need the following lemma, which is proved in the Appendix:

Lemma 1. If A is a positive definite matrix and there exists a γ > 0 such that $\|A^{-1}B\|_2 \le 1-\gamma$, then

$$\log\det(A+B) \ge \log\det A - \frac{p}{\gamma\,\sigma_{\min}(A)}\,\|B\|_F. \qquad (9)$$

Since $P^{\bar\Lambda}$ has looser bound constraints than $P^\lambda$, $\bar W$ may not be a feasible solution of $P^\lambda$. However, we can construct a feasible solution $\hat W = \bar W - G \circ E$, where $G_{ij} = \mathrm{sign}(\bar W_{ij})$ and $\circ$ denotes the entrywise product of two matrices. The assumption of this theorem implies that $\|G \circ E\|_2 \le (1-\gamma)/\|\bar W\|_2$, so $\|\bar W^{-1}(G \circ E)\|_2 \le 1-\gamma$. From Lemma 1 we have $\log\det\hat W \ge \log\det\bar W - \frac{p}{\gamma\,\sigma_{\min}(\bar W)}\|E\|_F$. Since $W^*$ is the optimal solution of $P^\lambda$ and $\hat W$ is a feasible solution of $P^\lambda$, $\log\det W^* \ge \log\det\hat W \ge \log\det\bar W - \frac{p}{\gamma\,\sigma_{\min}(\bar W)}\|E\|_F$. Also, since $\bar W$ is the optimal solution of $P^{\bar\Lambda}$ and $W^*$ is a feasible solution of $P^{\bar\Lambda}$, we have $\log\det W^* < \log\det\bar W$. Therefore, $|\log\det W^* - \log\det\bar W| < \frac{p}{\gamma\,\sigma_{\min}(\bar W)}\|E\|_F$. By the mean value theorem and some calculations, we have $|\log\det W^* - \log\det\bar W| \ge \frac{\|W^* - \bar W\|_F}{\max(\sigma_{\max}(W^*),\ \sigma_{\max}(\bar W))}$, which implies (7). To establish the bound on Θ, we use the mean value theorem again with $g(W) = W^{-1} = \Theta$ and $\nabla g(W) = -\Theta \otimes \Theta$, where $\otimes$ is the Kronecker product. Moreover, $\sigma_{\max}(\Theta \otimes \Theta) = (\sigma_{\max}(\Theta))^2$, so we can combine this with (7) to prove (8).

3.2 Clustering algorithm

In order to obtain computational savings, the clustering algorithm used within the divide-and-conquer procedure (Algorithm 1) should satisfy three conditions: (1) minimize the distance between the approximate and the true solution, $\|\hat\Theta - \Theta^*\|_F$; (2) be cheap to compute; and (3) partition the nodes into balanced clusters. Assume the real inverse covariance matrix $\Theta^*$ is block-diagonal; then it is easy to show that $W^*$ is also block-diagonal. This is the case considered in [8]. Now let us assume $\Theta^*$ has an almost block-diagonal structure, with a few nonzero off-diagonal entries. Writing $\Theta^* = \Theta^{bd} + v e_i e_j^T$, where $\Theta^{bd}$ is the block-diagonal part of $\Theta^*$ and $e_i$ denotes the i-th standard basis vector, the Sherman-Morrison formula gives

$$W^* = (\Theta^*)^{-1} = (\Theta^{bd})^{-1} - \frac{v}{1 + v\,(\Sigma^{bd})_{ij}}\,\Sigma^{bd}_i (\Sigma^{bd}_j)^T,$$

where $\Sigma^{bd} := (\Theta^{bd})^{-1}$ and $\Sigma^{bd}_i$ is its i-th column vector. Therefore adding one off-diagonal element to $\Theta^{bd}$ introduces at most one nonzero off-diagonal block in W. Moreover, if block (i, j) of W is already nonzero, adding more elements to block (i, j) of Θ
will not introduce any more nonzero blocks in W. As long as only a few entries in the off-diagonal blocks of $\Theta^*$ are nonzero, W will be block-diagonal with a few nonzero off-diagonal blocks. Since $\|W^* - S\|_\infty \le \lambda$, we are able to use the thresholded matrix $S^\lambda$ to guess the clustering structure of $\Theta^*$. In the following, we show that this observation is consistent with the bound we obtained in Theorem 3. From (8), ideally we want to find a partition that minimizes $\|E\|_* = \sum_i |\sigma_i(E)|$. Since it is computationally difficult to optimize this directly, we can use the bound $\|E\|_* \le \sqrt{p}\,\|E\|_F$, so that minimizing $\|E\|_F$ can be cast as a relaxation of the problem of minimizing $\|\hat\Theta - \Theta^*\|_F$.

To find a partition minimizing $\|E\|_F$, we want a partition $\{V_c\}_{c=1}^k$ such that the sum of the off-diagonal block entries of $S^\lambda$ is minimized, where $S^\lambda$ is defined as

$$(S^\lambda)_{ij} = \max(|S_{ij}| - \lambda,\ 0)^2\ \ \forall i \ne j, \quad \text{and} \quad (S^\lambda)_{ii} = 0\ \ \forall i. \qquad (10)$$

At the same time, we want balanced clusters. Therefore, we minimize the following normalized cut objective value [10]:

$$\mathrm{NCut}(S^\lambda, \{V_c\}_{c=1}^k) = \sum_{c=1}^{k} \frac{\sum_{i \in V_c,\ j \notin V_c} S^\lambda_{ij}}{d(V_c)}, \quad \text{where } d(V_c) = \sum_{i \in V_c}\sum_{j=1}^{p} S^\lambda_{ij}. \qquad (11)$$

In (11), $d(V_c)$ is the volume of the vertex set $V_c$, included to balance cluster sizes, and the numerator is the sum of the off-diagonal entries, which corresponds to $\|E\|_F^2$. As shown in [10, 3], minimizing the normalized cut is equivalent to finding cluster indicators $x_1, \ldots, x_k$ that minimize

$$\min_{x}\ \sum_{c=1}^{k} \frac{x_c^T (D - S^\lambda) x_c}{x_c^T D x_c} = \mathrm{trace}\big(Y^T (I - D^{-1/2} S^\lambda D^{-1/2}) Y\big), \qquad (12)$$

where D is the diagonal matrix with $D_{ii} = \sum_{j=1}^{p} S^\lambda_{ij}$, $Y = D^{1/2} X$, and $X = [x_1 \ldots x_k]$. Therefore, a common way of obtaining cluster indicators is to compute the leading k eigenvectors of $D^{-1/2} S^\lambda D^{-1/2}$ and then run kmeans on these eigenvectors (a compact code sketch of this procedure is given below, after the dataset list). The time complexity of normalized cut on $S^\lambda$ is dominated by computing the leading k eigenvectors of $D^{-1/2} S^\lambda D^{-1/2}$, which is at most $O(p^3)$. Since most state-of-the-art methods for solving (1) require $O(p^3)$ per iteration, the cost of clustering is no more than one iteration of the original solver. If $S^\lambda$ is sparse, as is common in real situations, we can speed up the clustering phase by using the Graclus multilevel algorithm, a faster heuristic for minimizing normalized cut [3].

4 Experimental Results

In this section, we first show that the normalized cut criterion for the thresholded matrix $S^\lambda$ in (10) can capture the block-diagonal structure of the inverse covariance matrix before solving (1). Using the clustering results, we then show that our divide and conquer algorithm significantly reduces the time needed to solve the sparse inverse covariance estimation problem. We use the following datasets:

1. Leukemia: Gene expression data originally provided by [5]; we use the data after the pre-processing done in [7].
2. Climate: This dataset is generated from NCEP/NCAR Reanalysis data¹, with focus on the daily temperature at several grid points on Earth. We treat each grid point as a random variable, and use daily temperatures in year 2001 as features.
3. Stock: A financial dataset downloaded from Yahoo Finance². We collected 3724 stocks, each with daily closing prices recorded over the 300 days preceding May 15, 2012.
4. Synthetic: We generated synthetic data containing 20,000 nodes with 100 randomly generated group centers $\mu_1, \ldots, \mu_{100}$, each of dimension 200, such that each group c has half of its nodes with feature $\mu_c$ and the other half with feature $-\mu_c$. We then add Gaussian noise to the features.
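The clustering machinery above can be summarized compactly in code. The sketch below implements the thresholding (10) and the spectral relaxation (12), with scikit-learn's k-means standing in for the final assignment step, together with the within-cluster ratio used as a quality measure in (13) below. It is a dense, toy-scale rendering; our experiments use Graclus when S^λ is sparse.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_partition(S, lam, k):
    """Normalized-cut clustering on S^lam of (10): leading k eigenvectors
    of D^{-1/2} S^lam D^{-1/2}, followed by k-means, as in (12)."""
    W = np.maximum(np.abs(S) - lam, 0.0) ** 2    # (10), off-diagonal weights
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    d[d == 0] = 1.0                              # guard against isolated nodes
    dinv = 1.0 / np.sqrt(d)
    M = dinv[:, None] * W * dinv[None, :]
    _, vecs = eigh(M)                            # eigenvalues in ascending order
    Y = vecs[:, -k:]                             # leading k eigenvectors
    return KMeans(n_clusters=k, n_init=10).fit_predict(Y)

def within_cluster_ratio(Theta, labels):
    """Quality measure R of (13): fraction of off-diagonal (Theta_ij)^2
    mass that falls inside the clusters."""
    sq = Theta ** 2
    np.fill_diagonal(sq, 0.0)
    same = labels[:, None] == labels[None, :]
    return sq[same].sum() / sq.sum()
```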
The data statistics are summarized in Table 1.

¹ www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surface.html
² http://finance.yahoo.com/

Table 1: Dataset Statistics

      Leukemia   Climate   Stock   Synthetic
  p   1255       10512     3724    20000
  n   72         1464      300     200

4.1 Clustering quality on real datasets

Given a clustering partition $\{V_c\}_{c=1}^k$, we use the following "within-cluster ratio" to determine its performance on $\Theta^*$:

$$R(\{V_c\}_{c=1}^k) = \frac{\sum_{c=1}^{k} \sum_{i \ne j,\ i,j \in V_c} (\Theta^*_{ij})^2}{\sum_{i \ne j} (\Theta^*_{ij})^2}. \qquad (13)$$

Higher values of $R(\{V_c\}_{c=1}^k)$ indicate better performance of the clustering algorithm. In Section 3.1 we presented theoretical justification for using normalized cut on the thresholded matrix $S^\lambda$; here we show that this strategy shows great promise on real datasets. Table 2 shows the within-cluster ratios (13) of the inverse covariance matrix using different clustering methods. We include the following methods in our comparison:

• Random partition: partition the nodes randomly into k clusters. We use this as a baseline.
• Spectral clustering on the thresholded matrix $S^\lambda$: our proposed method.
• Spectral clustering on $\bar\Theta = \Theta^* \circ \Theta^*$, the element-wise square of $\Theta^*$: this is the best clustering method we could conduct, since it directly minimizes the within-cluster ratio of the $\Theta^*$ matrix. However, we cannot use this method in practice, as we do not know $\Theta^*$.

Table 2: Within-cluster ratios (see (13)) on real datasets. Our proposed clustering method (spectral on $S^\lambda$) is very close to the clustering based on $\bar\Theta = \Theta^* \circ \Theta^*$, which we cannot see before solving (1).

                     Leukemia          Climate              Stock                  Synthetic
                     λ=0.5    λ=0.3    λ=0.005   λ=0.001    λ=0.0005   λ=0.0001    λ=0.005   λ=0.001
  random clustering  0.26     0.24     0.24      0.25       0.24       0.24        0.25      0.24
  spectral on S^λ    0.91     0.84     0.87      0.65       0.96       0.87        0.98      0.93
  spectral on Θ̄      0.93     0.84     0.90      0.71       0.97       0.85        0.99      0.93

We can observe in Table 2 that our proposed spectral clustering on $S^\lambda$ achieves almost the same performance as spectral clustering on $\Theta^* \circ \Theta^*$, even though we do not know $\Theta^*$. Also, Figure 1 gives a pictorial view of how our clustering results help in recovering the sparse inverse covariance matrix at different levels. We run a hierarchical 2-way clustering on the Leukemia dataset, and plot the original $\Theta^*$ (the solution of (1)), $\hat\Theta$ with 1-level clustering, and $\hat\Theta$ with 2-level clustering. We can see that although our clustering method does not look at $\Theta^*$, the clustering result matches the nonzero pattern of $\Theta^*$ quite well.

[Figure 1: three panels. (a) The inverse covariance matrix Θ*. (b) The recovered Θ̂ from level-1 clusters. (c) The recovered Θ̂ from level-2 clusters.] Figure 1: The clustering results and the nonzero patterns of the inverse covariance matrix Θ* on the Leukemia dataset. Although our clustering method does not look at Θ*, the clustering results match the nonzero pattern in Θ* quite well.

4.2 The performance of our divide and conquer algorithm

Next, we investigate the time taken by our divide and conquer algorithm on large real and synthetic datasets. We include the following methods in our comparisons:

• DC-QUIC-1: The Divide and Conquer framework with QUIC and 1 level of clustering.
• DC-QUIC-3: Divide and Conquer QUIC with 3 levels of hierarchical clustering.
• QUIC: The original QUIC, a state-of-the-art second order solver for sparse inverse covariance estimation [6].
• QUIC-conn: Using the decomposition method described in [8], with QUIC used to solve each smaller sub-problem.
• Glasso: The block coordinate descent algorithm proposed in [4].
• ALM: The alternating linearization algorithm proposed and implemented by [9].

[Figure 2: four panels. (a) Leukemia. (b) Stock. (c) Climate. (d) Synthetic.] Figure 2: Comparison of algorithms on real datasets. The results show that DC-QUIC is much faster than other state-of-the-art solvers.

All of our experiments are run on an Intel Xeon E5440 2.83GHz CPU with 32GB main memory. Figure 2 shows the results. For DC-QUIC and QUIC-conn, we show the run time of the whole process, including the preprocessing time. We can see that on the largest synthetic dataset, DC-QUIC is more than 10 times faster than QUIC, and thus also faster than Glasso and ALM. For the largest real dataset, Climate, with more than 10,000 points, QUIC takes more than 10 hours to reach a reasonable solution (relative error = 0), while DC-QUIC-3 converges in 1 hour. Moreover, on these 4 datasets QUIC-conn, using the decomposition method of [8], provides only limited savings, in part because the connected components of the thresholded covariance matrix for each dataset turned out to consist of one giant component and multiple smaller components. DC-QUIC, however, was able to leverage a reasonably good clustered decomposition, which dramatically reduced the inference time.

Acknowledgements

We would like to thank Soumyadeep Chatterjee and Puja Das for help with the climate and stock data. C.-J.H., I.S.D., and P.R. acknowledge the support of NSF under grant IIS-1018426. P.R. also acknowledges support from NSF IIS-1149803. A.B. acknowledges support from NSF grants IIS-0916750, IIS-0953274, and IIS-1029711.

References
[1] O. Banerjee, L. E. Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning Research, 9, 2008.
[2] R. Bhatia. Matrix Analysis. Springer Verlag, New York, 1997.
[3] I. S. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors: A multilevel approach. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(11):1944-1957, 2007.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, July 2008.
[5] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, and C. D. Bloomfield. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, pages 531-537, 1999.
[6] C.-J. Hsieh, M. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In NIPS, 2011.
[7] L. Li and K.-C. Toh. An inexact interior point method for L1-regularized sparse covariance selection. Mathematical Programming Computation, 2:291-315, 2010.
[8] R. Mazumder and T. Hastie. Exact covariance thresholding into connected components for large-scale graphical lasso. Journal of Machine Learning Research, 13:723-736, 2012.
[9] K. Scheinberg, S. Ma, and D. Goldfarb. Sparse inverse covariance selection via alternating linearization methods. In NIPS, 2010.
[10] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
Non-parametric Approximate Dynamic Programming via the Kernel Method

Nikhil Bhat, Graduate School of Business, Columbia University, New York, NY 10027, [email protected]
Vivek F. Farias, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, [email protected]
Ciamac C. Moallemi, Graduate School of Business, Columbia University, New York, NY 10027, [email protected]

Abstract

This paper presents a novel non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful approximation and sample complexity guarantees. In particular, we establish both theoretically and computationally that our proposal can serve as a viable alternative to state-of-the-art parametric ADP algorithms, freeing the designer from carefully specifying an approximation architecture. We accomplish this by developing a kernel-based mathematical program for ADP. Via a computational study on a controlled queueing network, we show that our procedure is competitive with parametric ADP approaches.

1 Introduction

Problems of dynamic optimization in the face of uncertainty are frequently posed as Markov decision processes (MDPs). The central computational problem is then reduced to the computation of an optimal "cost-to-go" function that encodes the cost incurred under an optimal policy starting from any given MDP state. Many MDPs of practical interest suffer from the curse of dimensionality, where intractably large state spaces preclude exact computation of the cost-to-go function. Approximate dynamic programming (ADP) is an umbrella term for algorithms designed to produce good approximations to this function, yielding a natural "greedy" control policy.

ADP algorithms are, in large part, parametric in nature, requiring the user to provide an "approximation architecture" (i.e., a set of basis functions). The algorithm then produces an approximation in the span of this basis. The strongest theoretical results available for such algorithms typically share two features: (1) the quality of the approximation produced is comparable with the best possible within the basis specified, and (2) the computational effort required typically scales with the dimension of the basis specified. These results highlight the importance of selecting a "good" approximation architecture, and remain somewhat dissatisfying in that additional sampling or computational effort cannot remedy a bad approximation architecture. A non-parametric approach, on the other hand, would in principle permit the user to select a rich, potentially full-dimensional architecture (e.g., the Haar basis). One would then expect to compute increasingly accurate approximations with increasing computational effort. The present work presents a practical algorithm of this type. Before describing our contributions, we begin by summarizing the existing body of research on non-parametric ADP algorithms.

The key computational step in approximate policy iteration methods is approximate policy evaluation. This step involves solving the projected Bellman equation, a linear stochastic fixed point equation. A numerically stable approach to this is to perform regression with a certain ℓ₂-regularization, where the loss is the ℓ₂-norm of the Bellman error. By substituting this step with a suitable non-parametric regression procedure, [2, 3, 4] come up with a corresponding non-parametric algorithm.
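To fix ideas, that regularized policy-evaluation step can be written in a few lines. The batch, feature-matrix formulation below is a generic sketch of ℓ₂-regularized Bellman-error minimization, not the specific estimator of [2, 3, 4].

```python
import numpy as np

def l2_bellman_regression(Phi, PhiNext, g, alpha, ridge):
    """Policy evaluation by L2-regularized Bellman-error minimization:
        min_w || Phi w - (g + alpha * PhiNext w) ||_2^2 + ridge * ||w||_2^2,
    i.e. solve (A'A + ridge*I) w = A'g with A = Phi - alpha * PhiNext.

    Phi[i]     : feature vector of the i-th sampled state
    PhiNext[i] : expected next-state features under the evaluated policy
    g[i]       : one-period cost at the i-th sampled state
    """
    A = Phi - alpha * PhiNext
    m = Phi.shape[1]
    return np.linalg.solve(A.T @ A + ridge * np.eye(m), A.T @ g)
```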
Unfortunately, schemes such as approximate policy iteration have no convergence guarantees in parametric settings, and these difficulties remain in non-parametric variations. Another idea has been to use kernel-based local averaging to approximate the solution of an MDP by that of a simpler variation on a sampled state space [5, 6, 7]. However, convergence rates for local averaging methods are exponential in the dimension of the problem state space. As in our setting, [8] constructs kernel-based cost-to-go function approximations. These are subsequently plugged into various ad hoc optimization-based ADP formulations, without theoretical justification. Closely related to our work, [9, 10] consider modifying the approximate linear program with an ℓ₁ regularization term to encourage sparse approximations in the span of a large, but necessarily tractable, set of features. Along these lines, [11] discusses a non-parametric method that explicitly restricts the smoothness of the value function. However, sample complexity results for this method are not provided, and it appears unsuitable for high-dimensional problems (such as, for instance, the problem we consider in our experiments). In contrast to this line of work, our approach allows for approximations in a potentially infinite dimensional approximation architecture with a constraint on an appropriate ℓ₂-norm of the weight vector.

The non-parametric ADP algorithm we develop enjoys non-trivial approximation and sample complexity guarantees. We show that our approach complements state-of-the-art parametric ADP algorithms by allowing the algorithm designer to compute what is essentially the best possible "simple" approximation¹ in a full-dimensional approximation architecture, as opposed to restricting attention to some a-priori fixed low dimensional architecture. In greater detail, we make the following contributions:

A new mathematical programming formulation. We rigorously develop a kernel-based variation of the "smoothed" approximate LP (SALP) approach to ADP proposed by [12]. The resulting mathematical program, which we dub the regularized smoothed approximate LP (RSALP), is distinct from simply substituting a kernel-based approximation into the SALP formulation. We develop a companion active set method that is capable of solving this mathematical program rapidly and with limited memory requirements.

Theoretical guarantees.² We establish a graceful approximation guarantee for our algorithm. Our algorithm can be interpreted as solving an approximate linear program in an appropriate Hilbert space. We provide, with high probability, an upper bound on the approximation error of the algorithm relative to the best possible approximation subject to a regularization constraint. The sampling requirements for our method are, in fact, independent of the dimension of the approximation architecture. Instead, we show that the number of samples grows polynomially as a function of a regularization parameter. Hence, the sampling requirements are a function of the complexity of the approximation, not of the dimension of the approximating architecture. This result can be seen as the "right" generalization of the prior parametric approximate LP approaches [13, 14, 12], where, in contrast, sample complexity grows with the dimension of the approximating architecture.

A computational study. To study the efficacy of the RSALP, we consider an MDP arising from a challenging queueing network scheduling problem.
We demonstrate that our RSALP method yields significant improvements over known heuristics and standard parametric ADP methods. In what follows, proofs and a detailed discussion of our numerical procedure are deferred to the Online Supplement to this paper.

¹ In the sense that the ℓ₂ norm of the weight vector can grow at most polynomially with a certain measure of computational budget.
² These guarantees hold under the assumption of being able to sample from a certain idealized distribution. This is common in the ADP literature.

2 Formulation

Consider a discrete time Markov decision process with finite state space S and finite action space A. We denote by $x_t$ and $a_t$, respectively, the state and action at time t. We assume time-homogeneous Markovian dynamics: conditioned on being at state x and taking action a, the system transitions to state x' with probability p(x, x', a), independent of the past. A policy is a map $\pi : S \to A$, so that

$$J^\pi(x) \triangleq \mathbb{E}_{x,\pi}\left[\sum_{t=0}^{\infty} \alpha^t g_{x_t, a_t}\right]$$

represents the expected (discounted, infinite horizon) cost-to-go under policy π starting at state x. Letting Π denote the set of all policies, our goal is to find an optimal policy $\pi^*$ such that $\pi^* \in \arg\min_{\pi \in \Pi} J^\pi(x)$ for all $x \in S$ (it is well known that such a policy exists). We denote the optimal cost-to-go function by $J^* \triangleq J^{\pi^*}$. An optimal policy $\pi^*$ can be recovered as a "greedy" policy with respect to $J^*$:

$$\pi^*(x) \in \arg\min_{a \in A}\ g_{x,a} + \alpha\,\mathbb{E}_{x,a}[J^*(X')],$$

where we define $\mathbb{E}_{x,a}[f(X')] \triangleq \sum_{x' \in S} p(x, x', a) f(x')$ for all $f : S \to \mathbb{R}$. Since in practical applications S is often intractably large, exact computation of $J^*$ is untenable. ADP algorithms are principally tasked with computing approximations to $J^*$ of the form $J^*(x) \approx z^\top\Phi(x) \triangleq \tilde{J}(x)$, where $\Phi : S \to \mathbb{R}^m$ is referred to as an "approximation architecture" or a basis and must be provided as input to the ADP algorithm. The ADP algorithm computes a "weight" vector z; one then employs a policy that is greedy with respect to the corresponding approximation $\tilde{J}$.

2.1 Primal Formulation

Motivated by the LP for exact dynamic programming, a series of ADP algorithms [15, 13, 12] have been proposed that compute a weight vector z by solving an appropriate modification of the exact LP for dynamic programming. In particular, [12] propose solving the following optimization problem, where $\nu, \pi \in \mathbb{R}^S_+$ are strictly positive probability distributions and $\kappa > 0$ is a penalty parameter:

$$\begin{aligned} \max\quad & \sum_{x \in S} \nu_x\, z^\top\Phi(x) - \kappa \sum_{x \in S} \pi_x s_x \\ \text{s.t.}\quad & z^\top\Phi(x) \le g_{x,a} + \alpha\,\mathbb{E}_{x,a}[z^\top\Phi(X')] + s_x, \quad \forall x \in S,\ a \in A, \\ & z \in \mathbb{R}^m,\ s \in \mathbb{R}^S_+. \end{aligned} \qquad (1)$$

In parsing the above program, notice that if one insisted that the slack variables s be precisely 0, one would be left with the ALP proposed by [15]. [13] provided a pioneering analysis that loosely showed

$$\|J^* - z^{*\top}\Phi\|_{1,\nu} \le \frac{2}{1-\alpha}\,\inf_{z}\,\|J^* - z^\top\Phi\|_\infty$$

for an optimal solution $z^*$ to the ALP; [12] showed that these bounds could be improved upon substantially by "smoothing" the constraints of the ALP, i.e., permitting positive slacks. In both cases, one must solve a "sampled" version of the above program.

Now, consider allowing Φ to map from S to a general (potentially infinite dimensional) Hilbert space $\mathcal{H}$. We use bold letters to denote elements of the Hilbert space $\mathcal{H}$; e.g., the weight vector is denoted by $\mathbf{z} \in \mathcal{H}$. We further suppress the dependence on Φ and denote the elements of $\mathcal{H}$ corresponding to their counterparts in S by bold letters. Hence, for example, $\mathbf{x} \triangleq \Phi(x)$ and $\mathbf{X} \triangleq \Phi(X)$. Further, we denote $\mathcal{X} \triangleq \Phi(S)$, $\mathcal{X} \subset \mathcal{H}$. The value function approximation in this case is given by

$$\tilde{J}_{\mathbf{z},b}(x) \triangleq \langle \mathbf{x}, \mathbf{z}\rangle + b = \langle \Phi(x), \mathbf{z}\rangle + b, \qquad (2)$$

where b is a scalar offset corresponding to a constant basis function. The following generalization of (1), which we dub the regularized SALP (RSALP), then essentially suggests itself:

$$\begin{aligned} \max\quad & \sum_{x \in S} \nu_x\big(\langle\mathbf{x}, \mathbf{z}\rangle + b\big) - \kappa \sum_{x \in S} \pi_x s_x - \frac{\Gamma}{2}\langle\mathbf{z}, \mathbf{z}\rangle \\ \text{s.t.}\quad & \langle\mathbf{x}, \mathbf{z}\rangle + b \le g_{x,a} + \alpha\,\mathbb{E}_{x,a}[\langle\mathbf{X}', \mathbf{z}\rangle + b] + s_x, \quad \forall x \in S,\ a \in A, \\ & \mathbf{z} \in \mathcal{H},\ b \in \mathbb{R},\ s \in \mathbb{R}^S_+. \end{aligned} \qquad (3)$$
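Stepping back for a moment, the finite-dimensional SALP (1) can be written as an explicit LP, which may help ground the generalization (3) above. The sketch below enumerates all constraints and is therefore workable only for toy state spaces; the data layout and solver interface are illustrative choices, not our production method.

```python
import numpy as np
from scipy.optimize import linprog

def solve_salp(P, g, Phi, alpha, nu, pi, kappa):
    """Solve the SALP (1) for a tiny MDP as an explicit LP.

    P[a]  : |S| x |S| transition matrix for action a
    g     : |S| x |A| one-period costs;  Phi : |S| x m basis matrix
    nu/pi : state-relevance / slack-penalty distributions
    Variables are (z in R^m, s in R^|S|, s >= 0); linprog minimizes,
    so we negate the SALP objective  nu' Phi z - kappa * pi' s.
    """
    nS, nA = g.shape
    m = Phi.shape[1]
    c = np.concatenate([-(Phi.T @ nu), kappa * pi])
    A_ub, b_ub = [], []
    for a in range(nA):
        for x in range(nS):
            row = np.zeros(m + nS)
            # (Phi(x) - alpha * E[Phi(X') | x, a])' z - s_x <= g(x, a)
            row[:m] = Phi[x] - alpha * (P[a][x] @ Phi)
            row[m + x] = -1.0
            A_ub.append(row)
            b_ub.append(g[x, a])
    bounds = [(None, None)] * m + [(0, None)] * nS
    res = linprog(c, A_ub=np.asarray(A_ub), b_ub=np.asarray(b_ub), bounds=bounds)
    return res.x[:m], res.x[m:]
```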
The value function approximation in this case would be given by J?z,b (x) ? ?x, z? + b = ??(x), z? + b, (2) where b is a scalar offset corresponding to a constant basis function. The following generalization of (1) ? which we dub the regularized SALP (RSALP) ? then essentially suggests itself: ? ? ? max ?x ?x, z? + b ? ? ?x sx ? ?z, z? 2 x?S x?S (3) s. t. ?x, z? + b ? ga,x + ?Ex,a [?X? , z? + b] + sx , ? x ? S, a ? A, z ? H, b ? R, s ? RS+ . 3 The only ?new? ingredient in the ? program above is the fact that we regularize z using the parameter ? > 0. Constraining ?z?H ? ?z, z? to lie within some ?2 -ball anticipates that we will eventually resort to sampling in solving this program and we cannot hope for a reasonable number of samples to provide a good solution to a problem where z was unconstrained. This regularization, which plays a crucial role both in theory and practice, is easily missed if one directly ?plugs in? a local averaging approximation in place of z ? ?(x) as is the case in the earlier work of [5, 6, 7, 8] and others. Since the RSALP, i.e., program (3), can be interpreted as a regularized stochastic optimization problem, one may hope to solve it via its sample average approximation. To this end, define the likelihood ratio wx ? ?x /?x , and let S? ? S be a set of N states sampled independently according to the distribution ?. The sample average approximation of (3) is then 1 ? ? ? ? max wx ?x, z? + b ? sx ? ?z, z? N N 2 x?S? x?S? (4) ? a ? A, s. t. ?x, z? + b ? ga,x + ?Ex,a [?X? , z? + b] + sx , ? x ? S, ? z ? H, b ? R, s ? RS+ . ? were small, it is still not clear that this program We call this program the sampled RSALP. Even if |S| can be solved effectively. We will, in fact, solve the dual to this problem. 2.2 Dual Formulation We begin by establishing some notation. Let Nx,a ? {x} ? {x? ? S|p(x, x? , a) > 0}. Now, define ? ? the symmetric positive semi-definite matrix Q ? R(S?A)?(S?A) according to ?? ? ? ? ? ? ? ? ? ? ? Q(x, a, x , a ) ? 1{x=y} ? ?p(x, y, a) 1{x =y } ? ?p(x , y , a) ?y, y? ?, (5) y?Nx,a y ? ?Nx? ,a? ? and the vector R ? RS?A according to ? ? 1 ? ? ? R(x, a) ? ?gx,a ? wx 1{x=y} ? ?p(x, y, a) ?y, x? ?. N ? (6) x ?S? y?Nx,a Notice that Q and R depend only on inner products in X (and other, easily computable quantities). The dual to (4) is then given by: min s. t. 1 ? 2 ? Q? + R? ? ? ? ?x,a ? , N a?A ?? ?x,a = x?S? a?A ? ? x ? S, 1 , 1?? ?? (7) ? RS?A . + Assuming that Q and R can be easily computed, this finite dimensional quadratic program, is tractable ? its size is polynomial in the number of sampled states. We may recover a primal solution (i.e., the weight vector z? ) from an optimal dual solution: Proposition 1. The optimal solution to (7) is attained at some ?? , then optimal solution to (4) is attained at some (z ? , s? , b? ) with ? ? ? ? ? ? 1 1 z? = ? wx x ? ??x,a x ? ?Ex,a [X? ] ? . (8) ? N x?S? ? x?S,a?A Having solved this program, we may, using Proposition 1, recover our approximate cost-to-go func? tion J(x) = ?z? , x? + b? as ? ? ?? ? ? 1 1 ? ? ? J(x) = wy ?y, x? ? ?y,a ?y, x? ? ?Ey,a [?X , x?] + b? . (9) ? N y?S? ? y?S,a?A 4 A policy greedy with respect to J? is not affected by constant translations, hence in (9), the value of b? can be set to be zero arbitrarily. Again note that given ?? , J? only involves the inner products. At this point, we use the ?kernel? trick: instead of explicitly specifying H or the mapping ?, we take the approach of specifying inner products. In particular, given any positive definite kernel K : S ? S ? 
R, it is well known (Mercer?s theorem) that there exists a Hilbert space H and ? : S ? H such that K(x, y) = ??(x), ?(y)?. Consequently, given a positive definite kernel, we simply replace every inner product ?x, x? ? in the defining of the program (7) with the quantity K(x, x? ) and similarly in the approximation (9). In particular, this is equivalent to using a Hilbert space, H and mapping ? corresponding to that kernel. Solving (7) directly is costly. In particular, it is computationally expensive to pre-compute and store the matrix Q. An alternative to this is to employ the following broad strategy, as recognized by [16] and [17] in the context of solving SVM classification problems, referred to as an active set method: At every point in time, one attempts to (a) change only a small number of variables while not impacting other variables (b) maintain feasibility. It turns out that this results in a method that requires memory and per-step computation that scales only linearly with the sample size. We defer the details of the procedure as well as the theoretical analysis to the Online Supplement 3 Approximation Guarantees Recall that we are employing an approximation J?z,b of the form (2), parameterized by the weight vector z and the offset parameter b. Now denoting by C the feasible region of the RSALP projected onto the z and b co-ordinates, the best possible approximation one may hope for among those permitted by the RSALP will have ?? -approximation error inf (z,b)?C ?J ? ? J?z,b ?? . Provided the Gram matrix given by the kernel restricted to S is positive definite, this quantity can be made arbitrarily small by making ? small. The rate at which this happens would reflect the quality of the kernel in use. Here we focus on asking the following question: for a fixed choice of regularization parameters (i.e., with C fixed) what approximation guarantee can be obtained for a solution to the RSALP? This section will show that one can achieve a guarantee that is, in essence, within a certain constant multiple of the optimal approximation error using a number of samples that is independent of the size of the state space and the dimension of the approximation architecture. 3.1 The Guarantee Define the Bellman operator, T : RS ? RS according to (T J)(x) ? min gx,a + ?Ex,a [J(X ? )]. a?A Let S? be a set of N states drawn independently at random from S under the distribution ? over S. Given the definition of J?z,b in (2), we consider the following sampled version of RSALP, 2 1 ? max ? ? J?z,b ? sx 1??N x?S? (10) ? a ? A, s. t. ?x, z? + b ? ga,x + ?Ex,a [?X? , z? + b] + sx , ? x ? S, ? ?z?H ? C, |b| ? B, z ? H, b ? R, s ? RS+ . We will assume that states are sampled according to an idealized distribution. In particular, ? ? ??? ,? where ? ? ???? ,? ? (1 ? ?) ?t ? ? P?t ? . (11) t=0 Here, P?? is the transition matrix under the optimal policy ?? . This idealized assumption is also common to the work of [14] and [12]. In addition, this program is somewhat distinct from the program presented earlier, (4): (1) As opposed to a ?soft? regularization term in the objective, we have a ?hard? regularization constraint, ?z?H ? C. It is easy to see that given a ?, we can choose a radius C(?) that yields an equivalent optimization problem. (2) We bound the magnitude of the offset b. This is for theoretical convenience; our sample complexity bound will be parameterized 5 by B. (3) We fix ? = 2/(1 ? ?). Our analysis reveals this to be the ?right? penalty weight on the Bellman inequality violations. 
Before stating our bound we establish a few bits of notation. We let (z̄, b̄) denote an optimal solution to (10). We let K ≜ max_{x∈X} ‖x‖_H, and finally, we define the quantity

$$\Omega(C, B, K, \delta) \;\triangleq\; \left( 1 + \sqrt{\tfrac{1}{2} \ln(1/\delta)} \right) \big( 4CK(1+\alpha) + 4B(1-\alpha) + 2\|g\|_\infty \big).$$

We have the following theorem:

Theorem 1. For any ε > 0 and δ > 0, let N ≥ Ω(C, B, K, δ)²/ε². If (10) is solved by sampling N states from S with distribution π_{μ*,ν}, then with probability at least 1 − δ − δ⁴,

$$\|J^* - \tilde J_{\bar z, \bar b}\|_{1,\nu} \;\le\; \inf_{\|z\|_H \le C,\; |b| \le B}\; \frac{3+\alpha}{1-\alpha}\, \|J^* - \tilde J_{z,b}\|_\infty \;+\; \frac{4\epsilon}{1-\alpha}. \tag{12}$$

Ignoring the ε-dependent error terms, we see that the quality of approximation provided by (z̄, b̄) is essentially within a constant multiple of the optimal (in the sense of ℓ∞-error) approximation to J* possible using a weight vector z and offset b permitted by the regularization constraints. This is a "structural" error term that will persist even if one were permitted to draw an arbitrarily large number of samples. It is analogous to the approximation results produced in parametric settings, with the important distinction that one allows comparisons to approximations in potentially full-dimensional basis sets, which might be substantially superior. In addition to the structural error above, one incurs an additional additive "sampling" error that scales like O(N^{−1/2}(CK + B)√(ln(1/δ))). This quantity has no explicit dependence on the dimension of the approximation architecture. In contrast, comparable sample complexity results (e.g., [14, 12]) typically scale with the dimension of the approximation architecture. Here, this space may be full dimensional, so that such a dependence would yield a vacuous guarantee. The error depends on the user-specified quantities C and B, and on K, which is bounded for many kernels.

The result allows for arbitrary "simple" (i.e., with ‖z‖_H small) approximations in a rich feature space, as opposed to restricting us to some a-priori fixed, low-dimensional feature space. This yields some intuition for why we expect the approach to perform well even with a relatively general choice of kernel. As C and B grow large, the structural error decreases to zero, provided the kernel restricted to S is positive definite. In order to keep the sampling error constant, one would then need to increase N (at a rate that is Ω((CK + B)²)). In summary, increased sampling yields approximations of increasing quality, approaching an exact approximation. If J* admits a good approximation with ‖z‖_H small, one can expect a good approximation with a reasonable number of samples.
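As a purely illustrative reading of Theorem 1 — using the constant Ω(C, B, K, δ) exactly as reconstructed above, so the numbers inherit any imprecision in that reconstruction — the sample requirement can be computed directly; note that nothing here depends on |S| or on the dimension of H.

```python
import math

def sample_requirement(C, B, K, g_inf, alpha, delta, eps):
    """Smallest integer N with N >= Omega(C, B, K, delta)**2 / eps**2."""
    omega = (1.0 + math.sqrt(0.5 * math.log(1.0 / delta))) * (
        4.0 * C * K * (1.0 + alpha) + 4.0 * B * (1.0 - alpha) + 2.0 * g_inf)
    return math.ceil((omega / eps) ** 2)

# e.g. sample_requirement(C=10.0, B=5.0, K=1.0, g_inf=1.0,
#                         alpha=0.9, delta=0.05, eps=2.0)
```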
3.2 Proof Sketch

A detailed proof of a stronger result appears in the Online Supplement; here we provide a sketch. The first step is to establish a guarantee for the exact (non-sampled) RSALP with hard regularization. Assuming (z̄, b̄) is the "learned" parameter pair, we first establish the guarantee

$$\|J^* - \tilde J_{\bar z, \bar b}\|_{1,\nu} \;\le\; \frac{3+\alpha}{1-\alpha}\, \inf_{\|z\|_H \le C,\; b \in \mathbb{R}} \|J^* - \tilde J_{z,b}\|_\infty.$$

Geometrically, the proof works loosely by translating the "best" approximation permitted by the regularization constraints to one that is guaranteed to yield an approximation error no worse than that produced by the RSALP. To establish a guarantee for the sampled RSALP, we first pose the RSALP as a stochastic optimization problem by setting s(z, b) ≜ (J̃_{z,b} − TJ̃_{z,b})⁺. We must ensure that, with high probability, the sample averages in the sampled program are close to the exact expectations, uniformly over all possible values of (z, b), with high accuracy. In order to establish such a guarantee, we bound the Rademacher complexity of the class of functions given by

$$\mathcal{F}_{\hat S, \mu} \;\triangleq\; \left\{ x \mapsto \big( \tilde J_{z,b}(x) - T_\mu \tilde J_{z,b}(x) \big)^+ \;:\; \|z\|_H \le C,\ |b| \le B \right\}$$

(where T_μ is the Bellman operator associated with a policy μ). This yields the appropriate uniform large-deviations bound. Using this guarantee, we show that the optimal solution to the sampled RSALP yields approximation guarantees similar to those of the exact RSALP; this step is somewhat delicate, as it appears difficult to directly show that the optimal solutions themselves are close.

[Figure 1: The queueing network example. Two servers serve four queues: flow 1 passes through queue 1 (server 1) and then queue 2 (server 2); flow 2 passes through queue 4 (server 2) and then queue 3 (server 1). Arrival rates: λ₁ = λ₂ = 0.08; service rates: μ₁ = μ₂ = 0.12, μ₃ = μ₄ = 0.28.]

4 Case Study: A Queueing Network

This section considers the problem of controlling the queueing network illustrated in Figure 1, with the objective of minimizing long-run average delay. There are two "flows" in this network: the first through server 1 followed by server 2 (with buffering at queues 1 and 2, respectively), and the second through server 2 followed by server 1 (with buffering at queues 4 and 3, respectively). Here, all interarrival and service times are exponential, with rate parameters summarized in Figure 1. This specific network has been studied in [13, 18] and is considered a challenging control problem. Our goal in this section is two-fold. First, we will show that the RSALP can surpass the performance of both heuristic and established ADP-based approaches, when used "out-of-the-box" with a generic kernel. Second, we will show that the RSALP can be solved efficiently.

4.1 MDP Formulation

Although the control problem at hand is nominally a continuous-time problem, it is routinely converted into a discrete-time problem via a standard uniformization device; see [19], for instance, for an explicit example. In the equivalent discrete-time problem, at most a single event can occur in a given epoch, corresponding either to the arrival of a job at queue 1 or 4, or to the arrival of a service token for one of the four queues, with probabilities proportional to the corresponding rates. The state of the system is described by the number of jobs in each of the four queues, so that S ≜ ℤ⁴₊, whereas the action space A consists of four potential actions, each corresponding to a matching between servers and queues. We take the single-period cost to be the total number of jobs in the system, so that g_{x,a} = ‖x‖₁; note that minimizing the average number of jobs in the system is equivalent to minimizing average delay, by Little's law. Finally, we take α = 0.9 as our discount factor.
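A hedged sketch of one uniformized transition for this MDP follows. It is our reading of the description above — rates from Figure 1, routing queue 1 → queue 2 and queue 4 → queue 3, and service tokens wasted when the matched queue is unselected or empty — not code from the paper.

```python
import numpy as np

LAMBDA = {1: 0.08, 4: 0.08}                  # arrival rates at queues 1 and 4
MU = {1: 0.12, 2: 0.12, 3: 0.28, 4: 0.28}    # service rates per queue
TOTAL = sum(LAMBDA.values()) + sum(MU.values())
NEXT_QUEUE = {1: 2, 2: None, 3: None, 4: 3}  # routing after a service completion
EVENTS = [("arrival", 1), ("arrival", 4)] + [("service", q) for q in MU]
PROBS = np.array([LAMBDA[1], LAMBDA[4]] + [MU[q] for q in MU]) / TOTAL

def step(x, action, rng):
    """x: length-4 array of queue lengths; action: (q_server1, q_server2) with
    q_server1 in {1, 3} and q_server2 in {2, 4}. Returns the next state."""
    kind, q = EVENTS[rng.choice(len(EVENTS), p=PROBS)]
    x = x.copy()
    if kind == "arrival":
        x[q - 1] += 1
    elif q in action and x[q - 1] > 0:       # a token is useful only when the
        x[q - 1] -= 1                        # queue is both selected and busy
        if NEXT_QUEUE[q] is not None:
            x[NEXT_QUEUE[q] - 1] += 1
    return x                                 # per-epoch cost: x.sum()
```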
4.2 Approaches

RSALP (this paper). We solve (7) using the active set method outlined in the Online Supplement, taking as our kernel the standard Gaussian radial basis function kernel K(x, y) ≜ exp(−‖x − y‖₂²/h), with the bandwidth parameter h ≜ 100. (The sensitivity of our results to this bandwidth parameter appears minimal.) Note that this implicitly corresponds to a full-dimensional basis function architecture. Since the idealized sampling distribution π_{μ*,ν} is unavailable to us, we use in its place the geometric distribution π(x) ∝ (1 − ζ)⁴ ζ^{‖x‖₁}, with the sampling parameter ζ set at 0.9, as in [13]. The regularization parameter Γ was chosen via a line search; we report results for Γ ≜ 10⁻⁸. (Again, performance does not appear to be very sensitive to Γ, so that a crude line search appears to suffice.) In accordance with the theory, we set the constraint violation parameter κ ≜ 2/(1 − α), as suggested by the analysis of Section 3.1, as well as by [12].

    policy           performance
    Longest Queue    8.09
    Max-Weight       6.55

    sample size    SALP, cubic basis    RSALP, Gaussian kernel
    1000           7.19 (1.76)          6.72 (0.39)
    3000           7.89 (1.76)          6.31 (0.11)
    5000           6.94 (1.15)          6.13 (0.08)
    10000          6.63 (0.92)          6.04 (0.05)

Table 1: Performance results in the queueing example. For the SALP and RSALP methods, the number in parentheses gives the standard deviation across sample sets.

SALP [12]. The SALP formulation (1) is, as discussed earlier, the parametric counterpart to the RSALP. It may be viewed as a generalization of the ALP approach proposed by [13] and has been demonstrated to provide substantial performance benefits relative to the ALP approach. Our choice of parameters for the SALP mirrors those for the RSALP to the extent possible, so as to allow for an "apples-to-apples" comparison. Thus, we solve the sample average approximation of this program using the same geometric sampling distribution and the same parameter κ. Approximation architectures in which the basis functions are monomials of the queue lengths appear to be a popular choice for queueing control problems [13]. We use all monomials of degree at most 3 — which we will call the cubic basis — as our approximation architecture.

Longest Queue (generic). This is a simple heuristic approach: at any given time, a server chooses to work on the longest queue among those it can service.

Max-Weight [20]. Max-Weight is a well-known scheduling heuristic for queueing networks. The policy is obtained as the greedy policy with respect to a value function approximation of the form J̃_MW(x) ≜ Σ_{i=1}^4 |x_i|^{1+ξ}, given a parameter ξ > 0. This policy has been extensively studied and shown to have a number of good properties, for example being throughput-optimal and offering good performance in critically loaded settings [21]. Via a line search, we chose ξ = 1.5 as the exponent for our experiments.
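As an illustration, the Max-Weight policy can be computed by brute force from its defining value function, reusing the constants and event logic of the simulator sketch above. This is our own rendering of "greedy with respect to J̃_MW", with the expectation over the six uniformized events evaluated exactly; since the per-epoch cost does not depend on the action, the greedy action simply minimizes the expected next-state value.

```python
import itertools
import numpy as np

def j_mw(x, xi=1.5):
    return float(np.sum(np.abs(x) ** (1.0 + xi)))

def next_state(x, action, kind, q):
    """Deterministic effect of one event; mirrors `step` above."""
    x = x.copy()
    if kind == "arrival":
        x[q - 1] += 1
    elif q in action and x[q - 1] > 0:
        x[q - 1] -= 1
        if NEXT_QUEUE[q] is not None:
            x[NEXT_QUEUE[q] - 1] += 1
    return x

def max_weight_action(x, xi=1.5):
    # enumerate the four server-to-queue matchings and pick the minimizer
    return min(itertools.product((1, 3), (2, 4)),
               key=lambda a: sum(p * j_mw(next_state(x, a, k, q), xi)
                                 for p, (k, q) in zip(PROBS, EVENTS)))
```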
4.3 Results

Policies were evaluated using a common set of arrival process sample paths. The performance metric we report for each control policy is the long-run average number of jobs in the system under that policy, Σ_{t=1}^T ‖x_t‖₁/T, where we set T ≜ 10000. We further average this random quantity over an ensemble of 300 sample paths. Generating SALP and RSALP policies additionally requires state sampling. To understand the effect of the sample size on the resulting policy performance, the different sample sizes listed in Table 1 were used. Since the policies generated depend on the randomly sampled states, we further average performance over 10 sets of sampled states. The results are reported in Table 1 and have the following salient features:

1. RSALP outperforms established policies: Approaches such as Max-Weight or "parametric" ADP with a basis spanning polynomials have previously been shown to work well for the problem of interest. We see that RSALP with 10000 samples achieves performance superior to these extant schemes.

2. Sampling improves performance: This is expected from the theory in Section 3. Ideally, as the sample size is increased one should relax the regularization. However, in our experiments we noticed that the performance is quite insensitive to the parameter Γ. Nonetheless, it is clear that larger sample sets yield a significant performance improvement.

3. RSALP is less sensitive to state sampling: We notice from the standard deviation values in Table 1 that our approach gives policies whose performance varies significantly less across different sample sets of the same size.

In summary, we view these results as indicative of the possibility that the RSALP may serve as a practical and viable alternative to state-of-the-art parametric ADP techniques.

References

[1] D. P. Bertsekas. Dynamic Programming and Optimal Control, Vol. II. Athena Scientific, 2007.
[2] B. Bethke, J. P. How, and A. Ozdaglar. Kernel-based reinforcement learning using Bellman residual elimination. MIT Working Paper, 2008.
[3] Y. Engel, S. Mannor, and R. Meir. Bayes meets Bellman: The Gaussian process approach to temporal difference learning. In Proceedings of the 20th International Conference on Machine Learning, pages 154–161. AAAI Press, 2003.
[4] X. Xu, D. Hu, and X. Lu. Kernel-based least squares policy iteration for reinforcement learning. IEEE Transactions on Neural Networks, 18(4):973–992, 2007.
[5] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 49(2):161–178, 2002.
[6] D. Ormoneit and P. Glynn. Kernel-based reinforcement learning in average cost problems. IEEE Transactions on Automatic Control, 47(10):1624–1636, 2002.
[7] A. M. S. Barreto, D. Precup, and J. Pineau. Reinforcement learning using kernel-based stochastic factorization. In Advances in Neural Information Processing Systems, volume 24, pages 720–728. MIT Press, 2011.
[8] T. G. Dietterich and X. Wang. Batch value function approximation via support vectors. In Advances in Neural Information Processing Systems, volume 14, pages 1491–1498. MIT Press, 2002.
[9] J. Kolter and A. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML '09, pages 521–528. ACM, 2009.
[10] M. Petrik, G. Taylor, R. Parr, and S. Zilberstein. Feature selection using regularization in approximate linear programs for Markov decision processes. In ICML '10, pages 871–879, 2010.
[11] J. Pazis and R. Parr. Non-parametric approximate linear programming for MDPs. In AAAI Conference on Artificial Intelligence. AAAI, 2011.
[12] V. V. Desai, V. F. Farias, and C. C. Moallemi. Approximate dynamic programming via a smoothed linear program. To appear in Operations Research, 2011.
[13] D. P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51(6):850–865, 2003.
[14] D. P. de Farias and B. Van Roy. On constraint sampling in the linear programming approach to approximate dynamic programming. Mathematics of Operations Research, 29(3):462–478, 2004.
[15] P. Schweitzer and A. Seidman. Generalized polynomial approximations in Markovian decision processes. Journal of Mathematical Analysis and Applications, 110:568–582, 1985.
[16] E. Osuna, R. Freund, and F. Girosi. An improved training algorithm for support vector machines. In Neural Networks for Signal Processing, Proceedings of the 1997 IEEE Workshop, pages 276–285, September 1997.
[17] T. Joachims. Making large-scale support vector machine learning practical, pages 169–184. MIT Press, Cambridge, MA, USA, 1999.
[18] R. R. Chen and S. Meyn. Value iteration and optimization of multiclass queueing networks. In Decision and Control, 1998. Proceedings of the 37th IEEE Conference on, volume 1, pages 50–55, 1998.
[19] C. C. Moallemi, S. Kumar, and B. Van Roy. Approximate and data-driven dynamic programming for queueing networks. Working Paper, 2008.
[20] L. Tassiulas and A. Ephremides. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Transactions on Automatic Control, 37(12):1936–1948, December 1992.
[21] A. L. Stolyar. Maxweight scheduling in a generalized switch: State space collapse and workload minimization in heavy traffic. The Annals of Applied Probability, 14:1–53, 2004.
A Simple and Practical Algorithm for Differentially Private Data Release

Moritz Hardt, IBM Almaden Research, San Jose, CA — [email protected]
Katrina Ligett*, Caltech — [email protected]
Frank McSherry, Microsoft Research SVC — [email protected]

Abstract

We present a new algorithm for differentially private data release, based on a simple combination of the Multiplicative Weights update rule with the Exponential Mechanism. Our MWEM algorithm achieves what are the best known and nearly optimal theoretical guarantees, while at the same time being simple to implement and experimentally more accurate on actual data sets than existing techniques.

1 Introduction

Sensitive statistical data on individuals are ubiquitous, and publishable analysis of such private data is an important objective. When releasing statistics or synthetic data based on sensitive data sets, one must balance the inherent tradeoff between the usefulness of the released information and the privacy of the affected individuals. Against this backdrop, differential privacy [1, 2, 3] has emerged as a compelling privacy definition that allows one to understand this tradeoff via formal, provable guarantees. In recent years, the theoretical literature on differential privacy has provided a large repertoire of techniques for achieving the definition in a variety of settings (see, e.g., [4, 5]). However, data analysts have found that several algorithms for achieving differential privacy add unacceptable levels of noise.

In this work we develop a broadly applicable, simple, and easy-to-implement algorithm, capable of substantially improving the performance of linear queries on many realistic datasets. Linear queries are equivalent to statistical queries (in the sense of [6]) and can serve as the basis of a wide range of data analysis and learning algorithms (see [7] for some examples).

Our algorithm is a combination of the Multiplicative Weights approach of [8, 9], maintaining and correcting an approximating distribution through queries on which the approximate and true datasets differ, and the Exponential Mechanism [10], which selects the queries most informative to the Multiplicative Weights algorithm (specifically, those most incorrect vis-a-vis the current approximation). One can view our approach as combining expert learning techniques (multiplicative weights) with an active learning component (via the exponential mechanism).

We present experimental results for differentially private data release for a variety of problems studied in prior work: range queries as studied by [11, 12], contingency table release across a collection of statistical benchmarks as in [13], and datacube release as studied by [14]. We empirically evaluate the accuracy of the differentially private data produced by MWEM using the same query class and accuracy metric proposed by each of the corresponding prior works, improving on all. Beyond empirical improvements in these settings, MWEM matches the best known and nearly optimal theoretical accuracy guarantees for differentially private data analysis with linear queries. Finally, we describe a scalable implementation of MWEM capable of processing datasets of substantial complexity.

* Computer Science Department, Cornell University. Work supported in part by an NSF Computing Innovation Fellowship (NSF Award CNF-0937060) and an NSF Mathematical Sciences Postdoctoral Fellowship (NSF Award DMS-1004416).
Producing synthetic data for the classes of queries we consider is known to be computationally hard in the worst case [15, 16]. Indeed, almost all prior work performs computation proportional to the size of the data domain, which limits it to datasets with relatively few attributes. In contrast, we are able to process datasets with thousands of attributes, corresponding to domains of size 2^1000. Our implementation integrates a scalable parallel implementation of Multiplicative Weights, and a representation of the approximating distribution in a factored form that only exhibits complexity when the model requires it.

2 Our Approach

The MWEM algorithm (Figure 1) maintains an approximating distribution over the domain D of data records, scaled up by the number of records. We repeatedly improve the accuracy of this approximation with respect to the private dataset and the desired query set by selecting and posing a query poorly served by our approximation, and improving the approximation to better reflect the true answer to this query. We select and pose queries using the Exponential [10] and Laplace Mechanisms [3], whose definitions and privacy properties we review in Subsection 2.1. We improve our approximation using the Multiplicative Weights update rule [8], reviewed in Subsection 2.2.

2.1 Differential Privacy and Mechanisms

Differential privacy is a constraint on a randomized computation requiring that the computation not reveal specifics of individual records present in the input. It places this constraint by requiring the mechanism to behave almost identically on any two datasets that are sufficiently close. Imagine a dataset A whose records are drawn from some abstract domain D, and which is described as a function from D to the natural numbers ℕ, with A(x) indicating the frequency (number of occurrences) of x in the dataset. We use ‖A − B‖ to indicate the sum of the absolute values of the differences in frequencies (how many records would have to be added or removed to change A into B).

Definition 2.1 (Differential Privacy). A mechanism M mapping datasets to distributions over an output space R provides (ε, δ)-differential privacy if for every S ⊆ R and for all data sets A, B where ‖A − B‖ ≤ 1,

Pr[M(A) ∈ S] ≤ e^ε Pr[M(B) ∈ S] + δ.

If δ = 0, we say that M provides ε-differential privacy.

The Exponential Mechanism [10] is an ε-differentially private mechanism that can be used to select among the best of a discrete set of alternatives, where "best" is defined by a function relating each alternative to the underlying secret data. Formally, for a set of alternative results R, we require a quality scoring function s : dataset × R → ℝ, where s(B, r) is interpreted as the quality of the result r for the dataset B. To guarantee ε-differential privacy, the quality function is required to satisfy a stability property: for each result r, the difference |s(A, r) − s(B, r)| is at most ‖A − B‖. The Exponential Mechanism E simply selects a result r from the distribution satisfying

Pr[E(B) = r] ∝ exp(ε · s(B, r)/2).

Intuitively, the mechanism selects a result r biased exponentially by its quality score. The Exponential Mechanism takes time linear in the number of possible results, evaluating s(B, r) once for each r.

A linear query (also referred to as a counting query or statistical query) is specified by a function q mapping data records to the interval [−1, +1]. The answer of a linear query on a data set B, denoted q(B), is the sum Σ_{x∈D} q(x) · B(x).
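As a concrete illustration (our sketch, with hypothetical array conventions): when queries are encoded as vectors over the domain, so that q(B) is a dot product, the Exponential Mechanism with MWEM's score function s(B, q) = |q(A) − q(B)| is a few lines of NumPy. Each score has sensitivity 1 because queries map records into [−1, +1].

```python
import numpy as np

def exponential_mechanism(queries, A, B, eps, rng):
    """queries: (|Q|, |D|) array with entries in [-1, 1]; A, B: length-|D|
    histograms scaled to n records. Returns the index of the selected query."""
    scores = np.abs(queries @ A - queries @ B)   # s(B, q) = |q(A) - q(B)|
    logits = 0.5 * eps * scores                  # Pr[q] proportional to exp(eps*s/2)
    logits -= logits.max()                       # stabilize the exponentials
    p = np.exp(logits)
    return rng.choice(len(queries), p=p / p.sum())
```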
The Laplace Mechanism is an ε-differentially private mechanism which reports approximate sums of bounded functions across a dataset. If q is a linear query, the Laplace Mechanism L obeys

Pr[L(B) = r] ∝ exp(−ε · |r − q(B)|).

Although the Laplace Mechanism is an instance of the Exponential Mechanism, it can be implemented much more efficiently, by adding Laplace noise with parameter 1/ε to the value q(B). As the Laplace distribution is exponentially concentrated, the Laplace Mechanism provides an excellent approximation to the true sum.

Inputs: Data set B over a universe D; set Q of linear queries; number of iterations T ∈ ℕ; privacy parameter ε > 0; number of records n.
Let A₀ denote n times the uniform distribution over D.
For iteration i = 1, ..., T:
  1. Exponential Mechanism: select a query qᵢ ∈ Q using the Exponential Mechanism parameterized with epsilon value ε/2T and the score function sᵢ(B, q) = |q(A_{i−1}) − q(B)|.
  2. Laplace Mechanism: let measurement mᵢ = qᵢ(B) + Lap(2T/ε).
  3. Multiplicative Weights: let Aᵢ be n times the distribution whose entries satisfy Aᵢ(x) ∝ A_{i−1}(x) · exp(qᵢ(x) · (mᵢ − qᵢ(A_{i−1}))/2n).
Output: A = avg_{i<T} Aᵢ.

Figure 1: The MWEM algorithm.

2.2 Multiplicative Weights Update Rule

The Multiplicative Weights approach has seen application in many areas of computer science. Here we will use it as proposed in Hardt and Rothblum [8], to repeatedly improve an approximate distribution so that it better reflects some true distribution. The intuition behind Multiplicative Weights is that should we find a query whose answer on the true data is much larger than its answer on the approximate data, we should scale up the approximating weights on records contributing positively and scale down the weights on records contributing negatively. If the true answer is much less than the approximate answer, we should do the opposite.

More formally, let q be a linear query. If A and B are distributions over the domain D of records, where A is a synthetic distribution intended to approximate a true distribution B with respect to query q, then the Multiplicative Weights update rule recommends updating the weight A places on each record x by:

A_new(x) ∝ A(x) · exp(q(x) · (q(B) − q(A))/2).

The proportionality sign indicates that the approximation should be renormalized after scaling. Hardt and Rothblum show that each time this rule is applied, the relative entropy between A and B decreases by an additive (q(A) − q(B))². As long as we can continue to find queries on which the two disagree, we can continue to improve the approximation.
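The whole of Figure 1 is similarly short. The sketch below is illustrative, not the authors' released code: it reuses `exponential_mechanism` from the previous snippet, and returns the final distribution rather than the average of the iterates — the variant recommended later, in Section 2.3.2.

```python
import numpy as np

def mwem(B, queries, T, eps, rng):
    """B: length-|D| histogram with B.sum() = n; queries: (|Q|, |D|) array."""
    n, D = B.sum(), B.shape[0]
    A = np.full(D, n / D)                    # n times the uniform distribution
    for _ in range(T):
        # 1. select a poorly approximated query with parameter eps / 2T
        i = exponential_mechanism(queries, A, B, eps / (2 * T), rng)
        q = queries[i]
        # 2. measure it with Laplace noise of scale 2T / eps
        m = q @ B + rng.laplace(scale=2 * T / eps)
        # 3. multiplicative weights update, renormalized to total mass n
        A = A * np.exp(q * (m - q @ A) / (2 * n))
        A *= n / A.sum()
    return A
```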
2.3 Formal Guarantees

As indicated in the introduction, the formal guarantees of MWEM represent the best known theoretical results on differentially private synthetic data release. We first describe the privacy properties.

Theorem 2.1. MWEM satisfies ε-differential privacy.

Proof. The composition rules for differential privacy state that ε values accumulate additively. We make T calls to the Exponential Mechanism with parameter ε/2T and T calls to the Laplace Mechanism with parameter ε/2T, resulting in ε-differential privacy.

We now bound the worst-case performance of the algorithm, in terms of the maximum error between A and B across all q ∈ Q. The natural range for q(A) is [−n, +n], and we see that by increasing T beyond 4 log|D| we can bring the error asymptotically below n.

Theorem 2.2. For any dataset B, set of linear queries Q, T ∈ ℕ, and ε > 0, with probability at least 1 − 2T/|Q|, MWEM produces A such that

$$\max_{q \in Q} |q(A) - q(B)| \;\le\; 2n \sqrt{\frac{\log |D|}{T}} \;+\; \frac{10\, T \log |Q|}{\epsilon}.$$

Proof. The proof of this theorem is an integration of pre-existing analyses of both the Exponential Mechanism and the Multiplicative Weights update rule, and is omitted for reasons of space.

Note that these bounds are worst-case bounds, over adversarially chosen data and query sets. We will see in Section 3 that MWEM works very well in more realistic settings.

2.3.1 Running time

The running time of our basic algorithm as described in Figure 1 is O(n|Q| + T|D||Q|). The algorithm is embarrassingly parallel: query evaluation can be conducted independently, implemented using modern database technology; the only required serialization is that the T steps must proceed in sequence, but within each step essentially all work is parallelizable.

Results of Dwork et al. [17] show that for worst-case data, producing differentially private synthetic data for a set of counting queries requires time |D|^0.99 under reasonable cryptographic hardness assumptions. Moreover, Ullman and Vadhan [16] showed that similar lower bounds also hold for more basic query classes such as those we consider in Section 3.2. Despite these hardness results, we provide an alternate implementation of our algorithm in Section 4 and demonstrate that its running time is acceptable on real-world data even in cases where |D| is as large as 2^77, and on simple synthetic input datasets where |D| is as large as 2^1000.

2.3.2 Improvements and Variations

There are several ways to improve the empirical performance of MWEM at the expense of the theoretical guarantees. First, rather than use the average of the distributions Aᵢ, we use only the final distribution. Second, in each iteration we apply the multiplicative weights update rule for all measurements taken, multiple times; as long as any measurement disagrees with the approximating distribution (beyond its error), we can improve the result. Finally, it is occasionally helpful to initialize A₀ by performing a noisy count for each element of the domain; this consumes from the privacy budget and lessens the accuracy of subsequent queries, but is often a good trade-off.

2.4 Related Work

The study of differentially private synthetic data release mechanisms for arbitrary counting queries began with the work of Blum, Ligett, and Roth [18], who gave a computationally inefficient (superpolynomial in |D|) ε-differentially private algorithm that achieves error scaling only logarithmically with the number of queries. The dependence on n and |Q| achieved by their algorithm is O(n^{2/3} log^{1/3}|Q|) (which is the same dependence achieved by optimizing the choice of T in Theorem 2.2). Since [18], subsequent work [17, 19, 20, 8] has focused on computationally more efficient algorithms (i.e., polynomial in |D|), as well as on algorithms that work in the interactive query setting. The latest of these results is the private Multiplicative Weights method of Hardt and Rothblum [8], which achieves error rates of O(√(n log|Q|)) for (ε, δ)-differential privacy (the same dependence achieved by applying k-fold adaptive composition [19] and optimizing T in our Theorem 2.2). While their algorithm works in the interactive setting, it can also be used non-interactively to produce synthetic data, albeit at a computational overhead of O(n). MWEM can also be cast as an instance of a more general Multiplicative-Weights based framework of Gupta et al. [9], though our specific instantiation and its practical appeal were not anticipated in their work.
Prior work on linear queries includes Fienberg et al. [13] and Barak et al. [21] on contingency tables; Li et al. [22] on range queries (and substantial related work [23, 24, 22, 11, 12, 25], which Li and Miklau [11, 25] show can all be seen as instances of the matrix mechanism of [22]); and Ding et al. [14] on data cubes. In each case, MWEM's theoretical guarantees and experimental performance improve on prior work. We compare further in Section 3.

3 Experimental Evaluation

We evaluate MWEM across a variety of query classes, datasets, and metrics as explored by prior work, demonstrating an improvement in the quality of approximation (often significant) in each case. The problems we consider are: (1) range queries under the total squared error metric, (2) binary contingency table release under the relative entropy metric, and (3) datacube release under the average absolute error metric. Although contingency table release and datacube release are very similar, prior work on the two has had different focuses: small datasets over many binary attributes vs. large datasets over few categorical attributes, low-order marginals vs. all cuboids as queries, and relative entropy vs. the average error within a cuboid as metrics.

Our general conclusion is that intelligently selecting the queries to measure can result in significant accuracy improvements, in settings where accuracy is a scarce resource. When the privacy parameters are very lax, or the query set very simple, direct measurement of all queries yields better results than expending some fraction of the privacy budget to determine what to measure. On the other hand, in the more challenging case of tight privacy restrictions with complex data and query sets, MWEM can substantially outperform previous algorithms.

3.1 Range Queries

A range query over a domain D = {1, ..., N} is a counting query specified by the indicator function of an interval I ⊆ D. Over a multi-dimensional domain D = D₁ × ··· × D_d, a range query is defined by a product of indicator functions. Differentially private algorithms for range queries were specifically considered by [18, 23, 24, 22, 11, 12, 25]. As noted in [11, 25], all previously implemented algorithms for range queries can be seen as instances of the matrix mechanism of [22]. Moreover, [11, 25] show a lower bound on the total squared error achieved by the matrix mechanism in terms of the singular values of a matrix associated with the set of queries. We refer to this bound as the SVD bound.

[Figure 2: Comparison of MWEM (T = 10) with the SVD lower bound on four data sets (panels: Transfusion: monetary; Transfusion: recency × frequency; Adult: capital loss; Adult: age × hours). The y-axis measures the average squared error per query, averaged over 5 independent repetitions of the experiment, as epsilon varies. The improvement is most significant for small epsilon, diminishing as epsilon increases.]

We empirically evaluate MWEM for range queries on restrictions of the Adult data set [26] to (a) the "capital loss" attribute and (b) the "age" and "hours" attributes, as well as restrictions of the Blood Transfusion data set [26, 27] to (c) the "recency" and "frequency" attributes and (d) the "monetary" attribute. We chose these data sets as they feature numerical attributes of suitable size.
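For concreteness, here is one way (our sketch) to materialize random interval queries as rows of the query matrix used in the MWEM sketch above; a multi-dimensional range query would be the outer product of such indicator vectors, flattened over the product domain.

```python
import numpy as np

def random_range_queries(N, num_queries, rng):
    """Indicator vectors of random intervals over the domain {0, ..., N-1}."""
    qs = np.zeros((num_queries, N))
    for row in qs:
        lo, hi = sorted(rng.integers(0, N, size=2))
        row[lo:hi + 1] = 1.0                # indicator of the interval [lo, hi]
    return qs
```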
In Figure 2, we compare the performance of MWEM on sets of randomly chosen range queries against the SVD lower bound proved by [11, 25], varying ε while keeping the number of queries fixed. The SVD lower bound holds for algorithms achieving the strictly weaker guarantee of (ε, δ)-differential privacy with δ > 0, permitting some probability of unbounded disclosure. The SVD bound depends on δ; in our experiments we fixed δ = 1/n when instantiating the SVD bound, as any larger value of δ permits mechanisms capable of exact release of individual records.

[Figure 3: Relative entropy (y-axis) as a function of epsilon (x-axis) for the mildew, rochdale, and czech datasets, respectively. The lines represent averages across 100 runs, and the corresponding shaded areas one standard deviation in each direction. Red (dashed) represents the modified Barak et al. [21] algorithm, green (dot-dashed) represents unoptimized MWEM, and blue (solid) represents the optimized version thereof. The solid black horizontal line is the stated relative entropy value from Fienberg et al. [13].]

3.2 Contingency Tables

A contingency table can be thought of as a table of records over d binary attributes, and the k-way marginals of a contingency table correspond to the $\binom{d}{k}$ possible choices of k attributes, where each marginal is represented by the 2^k counts of the records with each possible setting of those attributes. In previous work, Barak et al. [21] describe an approach to differentially private contingency table release using linear queries defined by the Hadamard matrix. Importantly, all k-dimensional marginals can be exactly recovered by examination of relatively few such queries: roughly d^k out of the possible 2^d, improving over direct measurement of the marginals by a factor of 2^k. This algorithm is evaluated by Fienberg et al. [13], and was found to do poorly on several benchmark datasets.

We evaluate our approximate dataset following Fienberg et al. [13] using relative entropy, also known as the Kullback–Leibler (or KL) divergence. Formally, the relative entropy between our two distributions (A/n and B/n) is

$$\mathrm{RE}(B \,\|\, A) \;=\; \sum_{x \in D} B(x) \log\big(B(x)/A(x)\big)/n.$$

We use several statistical datasets from Fienberg et al. [13], and evaluate two variants of MWEM (both with and without initialization of A₀) against a modification of Barak et al. [21] which combines its observations using multiplicative weights (we find that without this modification, [21] performs very poorly with respect to relative entropy). These experiments therefore largely assess the selective choice of measurements to take, rather than the efficacy of multiplicative weights.

Figure 3 presents the evaluation of MWEM on several small datasets in common use by statisticians. Our findings here are fairly uniform across the datasets: the ability to measure only those queries that are informative about the dataset results in substantial savings over taking all possible measurements. In many cases MWEM approaches the good non-private values of [13], indicating that we can approach levels of accuracy at the limit of statistical validity.
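The metric itself is a direct transcription of the definition above; the small floor on A is our own safeguard against empty synthetic cells and is not specified by the papers.

```python
import numpy as np

def relative_entropy(B, A, floor=1e-12):
    """RE(B || A) between the normalized histograms B/n and A/n."""
    n = B.sum()
    A = np.maximum(A, floor)          # guard against zero-mass synthetic cells
    mask = B > 0                      # terms with B(x) = 0 contribute nothing
    return float(np.sum(B[mask] * np.log(B[mask] / A[mask])) / n)
```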
We also consider a larger dataset, the National Long-Term Care Study (NLTCS), in Figure 4. This dataset contains orders of magnitude more records, and has 16 binary attributes. For our initial settings, maintaining all three-way marginals, we see similar behavior as above: the ability to choose the measurements that are important allows substantially higher accuracy on those that matter. However, we see that the algorithm of Barak et al. [21] is substantially more competitive in the regime where we are interested in querying all two-dimensional marginals, rather than the default three we have been using. In this case, for values of epsilon at least 0.1, it seems that there is enough signal present to simply measure all corresponding entries of the Hadamard transform; each is sufficiently informative that measuring substantially fewer at higher accuracy imparts less information, rather than more.

[Figure 4: Curves comparing our approach with that of Barak et al. on the National Long Term Care Survey. The red (dashed) curve represents Barak et al., and the multiple blue (solid) curves represent MWEM with 20, 30, and 40 queries (top to bottom, respectively). From left to right, the first two figures correspond to degree-2 marginals, and the third to degree-3 marginals. As before, the x-axis is the value of epsilon guaranteed, and the y-axis is the relative entropy between the produced distribution and the actual dataset. The lines represent averages across only 10 runs, owing to the high complexity of Barak et al. on this many-attributed dataset, and the corresponding shaded areas one standard deviation in each direction.]

3.3 Data Cubes

We now change our terminology and objectives, shifting our view of contingency tables to one of datacubes. The two concepts are interchangeable — a contingency table corresponding to the datacube, and a marginal corresponding to one of its cuboids — but the datasets studied and the metrics applied are different. We focus on the restriction of the Adult dataset [26] to its eight categorical attributes, as done in [14], and evaluate our approximations using the average error within a cuboid, also as in [14].

Although MWEM is defined with respect to a single query at a time, it generalizes to sets of counting queries, as reflected in a cuboid. The Exponential Mechanism can select a cuboid to measure using a quality score function summing the absolute values of the errors within the cells of the cuboid. We also (heuristically) subtract the number of cells from the score of a cuboid, to bias the selection away from cuboids with many cells, which would collect Laplace error in each cell. This subtraction does not affect the privacy properties. An entire cuboid can be measured with a single differentially private query, as any record contributes to at most one cell (this is a generalization of the Laplace Mechanism to multiple dimensions, from [3]). Finally, Multiplicative Weights works unmodified, increasing and decreasing weights based on the over- or under-estimation of the count to which the record contributes.

[Figure 5: Comparison of MWEM (T = 10) with the custom approaches PMostC and BMaxC from [14], under the average average error and maximum average error metrics, varying epsilon through the values reported in [14]. Each cuboid (marginal) is assessed by its average error, and either the average or the maximum over all 256 cuboids is taken to evaluate the technique.]
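Two pieces of the datacube adaptation are easy to make concrete (our sketch, with illustrative names): measuring every cell of a cuboid with a single Laplace invocation — valid because each record lands in exactly one cell, so the vector of counts has sensitivity 1 — and the heuristic selection score described above.

```python
import numpy as np

def measure_cuboid(cell_counts, eps, rng):
    """One Laplace query suffices for a whole cuboid: every record falls in
    exactly one cell, so the count vector has sensitivity 1."""
    return cell_counts + rng.laplace(scale=1.0 / eps, size=cell_counts.shape)

def cuboid_score(true_cells, approx_cells):
    """Selection score: total absolute error minus the number of cells."""
    return float(np.abs(true_cells - approx_cells).sum()) - true_cells.size
```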
We compare MWEM with the work of [14] in Figure 5. The average average error improves noticeably, by approximately a factor of four. The maximum average error is less clear-cut; experimentally, we have found we can bring the numbers lower using different heuristic variants of MWEM, but without principled guidance we report only the default behavior. Of note, our results are achieved by a single algorithm, whereas the best results for maximum and average error in [14] are achieved by two different algorithms, each designed to optimize one specific metric.

4 A Scalable Implementation

The implementation of MWEM used in the previous experiments quite literally maintains a distribution Aᵢ over the elements of the universe D. As the number of attributes grows, the universe D grows exponentially, and it can quickly become infeasible to track the distribution explicitly. In this section, we consider a scalable implementation with essentially no memory footprint, whose running time is in the worst case proportional to |D|, but which for many classes of simple datasets remains linear in the number of attributes.

Recall that the heart of MWEM maintains a distribution Aᵢ over D that is then used in the Exponential Mechanism to select queries poorly approximated by the current distribution. From the definition of the Multiplicative Weights distribution, we see that the weight Aᵢ(x) can be determined from the history Hᵢ = {(q_j, m_j) : j ≤ i}:

$$A_i(x) \;\propto\; \exp\left( \sum_{j \le i} q_j(x) \cdot \big( m_j - q_j(A_{j-1}) \big) / 2n \right).$$

We explicitly record the scaling factors l_j = m_j − q_j(A_{j−1}) as part of the history Hᵢ = {(q_j, m_j, l_j) : j ≤ i}, to remove the dependence on the prior distributions A_j.

The domain D is often the product of many attributes. If we partition these attributes into disjoint parts D₁, D₂, ..., D_k so that no query in Hᵢ involves attributes from more than one part, then the distribution produced by Multiplicative Weights is a product distribution over D₁ × D₂ × ··· × D_k. For query classes that factorize over the attributes of the domain (for example, range queries, marginal queries, and cuboid queries) we can rewrite and efficiently perform the integration over D using

$$\sum_{x \in D_1 \times D_2 \times \cdots \times D_k} q(x) \cdot A_i(x) \;=\; \prod_{1 \le j \le k} \left( \sum_{x_j \in D_j} q(x_j) \cdot A_i^j(x_j) \right),$$

where Aᵢʲ is a mini Multiplicative Weights distribution over the attributes in part D_j, using only the relevant queries from Hᵢ.

So long as the measurements taken reflect modest groups of independent attributes, the integration can be performed efficiently. As the measurements overlap more and more, additional computation or approximation is required. The memory footprint is only the combined size of the data, query, and history sets. Experimentally, we are able to process a binarized form of the Adult dataset with 27 attributes efficiently (a complete execution takes 80 seconds), and the addition of 50 new independent binary attributes, corresponding to a domain of size 2^77, results in negligible performance impact. For a simple synthetic dataset with up to 1,000 independent binary attributes, the factorized implementation of MWEM takes only 19 seconds for a complete execution.
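The factored integration is a one-liner per part. The sketch below (ours) evaluates Σ_x q(x)·A(x) for a product distribution whose factors are the mini Multiplicative Weights histograms Aᵢʲ, relying on the identity above that a sum of products over a product domain factorizes part by part.

```python
import numpy as np

def factored_query_value(part_dists, part_queries):
    """sum_x q(x) * A(x) when A(x) = prod_j A_j(x_j) and q(x) = prod_j q_j(x_j):
    the sum over the product domain splits into a product of per-part sums."""
    total = 1.0
    for A_j, q_j in zip(part_dists, part_queries):
        total *= float(np.asarray(q_j) @ np.asarray(A_j))
    return total
```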
5 Conclusions

We introduced MWEM, a simple algorithm for releasing data that maintains high fidelity to the protected source data, as well as differential privacy with respect to the records. The approach builds upon the Multiplicative Weights approach of [8, 9], by introducing the Exponential Mechanism [10] as a more judicious approach to determining which measurements to take. The theoretical analysis matches previous work in the area, and experimentally we have evidence that for many interesting settings, MWEM represents a substantial improvement over existing techniques.

As well as improving on experimental error, the algorithm is both simple to implement and simple to use. An analyst does not require a complicated mathematical understanding of the nature of the queries (as the community has developed for linear algebra [11] and the Hadamard transform [21]), but rather only needs to enumerate those measurements that should be preserved. We hope that this generality leads to a broader class of high-fidelity differentially private data releases across a variety of data domains.

References

[1] I. Dinur and K. Nissim. Revealing information while preserving privacy. In PODS, 2003.
[2] Cynthia Dwork and Kobbi Nissim. Privacy-preserving datamining on vertically partitioned databases. In CRYPTO. Springer, 2004.
[3] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In TCC, 2006.
[4] Cynthia Dwork. The differential privacy frontier (extended abstract). In TCC, 2009.
[5] Cynthia Dwork. The promise of differential privacy: A tutorial on algorithmic techniques. In FOCS, 2011.
[6] Michael J. Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM (JACM), 45(6):983–1006, 1998.
[7] Avrim Blum, Cynthia Dwork, Frank McSherry, and Kobbi Nissim. Practical privacy: the SuLQ framework. In Proc. 24th PODS, pages 128–138. ACM, 2005.
[8] Moritz Hardt and Guy Rothblum. A multiplicative weights mechanism for interactive privacy-preserving data analysis. In FOCS, 2010.
[9] Anupam Gupta, Moritz Hardt, Aaron Roth, and Jon Ullman. Privately releasing conjunctions and the statistical query barrier. In STOC, 2011.
[10] Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In FOCS, 2007.
[11] Chao Li and Gerome Miklau. Efficient batch query answering under differential privacy. CoRR, abs/1103.1367, 2011.
[12] Chao Li and Gerome Miklau. An adaptive mechanism for accurate query answering under differential privacy. To appear, PVLDB, 2012.
[13] Stephen E. Fienberg, Alessandro Rinaldo, and Xiaolin Yang. Differential privacy and the risk-utility tradeoff for multi-dimensional contingency tables. In Privacy in Statistical Databases, 2010.
[14] Bolin Ding, Marianne Winslett, Jiawei Han, and Zhenhui Li. Differentially private data cubes: optimizing noise sources and consistency. In SIGMOD, 2011.
[15] Cynthia Dwork, Moni Naor, Omer Reingold, Guy N. Rothblum, and Salil P. Vadhan. On the complexity of differentially private data release: efficient algorithms and hardness results. In STOC, 2009.
[16] Jonathan Ullman and Salil P. Vadhan. PCPs and the hardness of generating private synthetic data. In TCC, 2011.
[17] C. Dwork, M. Naor, O. Reingold, G. N. Rothblum, and S. Vadhan. On the complexity of differentially private data release: efficient algorithms and hardness results. In STOC, 2009.
[18] Avrim Blum, Katrina Ligett, and Aaron Roth. A learning theory approach to non-interactive database privacy. In STOC, 2008.
[19] Cynthia Dwork, Guy Rothblum, and Salil Vadhan. Boosting and differential privacy. In FOCS, 2010.
[20] Aaron Roth and Tim Roughgarden. The median mechanism: Interactive and efficient privacy with multiple queries. In STOC, 2010.
[21] B. Barak, K. Chaudhuri, C. Dwork, S. Kale, F. McSherry, and K. Talwar. Privacy, accuracy, and consistency too: a holistic solution to contingency table release. In PODS, 2007.
[22] C. Li, M. Hay, V. Rastogi, G. Miklau, and A. McGregor. Optimizing linear counting queries under differential privacy. In PODS, 2010.
[23] Xiaokui Xiao, Guozhang Wang, and Johannes Gehrke. Differential privacy via wavelet transforms. IEEE Transactions on Knowledge and Data Engineering, 23:1200–1214, 2011.
[24] Michael Hay, Vibhor Rastogi, Gerome Miklau, and Dan Suciu. Boosting the accuracy of differentially-private queries through consistency. In VLDB, 2010.
[25] Chao Li and Gerome Miklau. Measuring the achievable error of query sets under differential privacy. CoRR, abs/1202.3399v2, 2012.
[26] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[27] I-Cheng Yeh, King-Jang Yang, and Tao-Ming Ting. Knowledge discovery on RFM model using Bernoulli sequence. Expert Systems with Applications, 36(3), 2008.
Multiple Choice Learning: Learning to Produce Multiple Structured Outputs

Abner Guzman-Rivera
University of Illinois
[email protected]

Dhruv Batra
Virginia Tech
[email protected]

Pushmeet Kohli
Microsoft Research Cambridge
[email protected]

Abstract

We address the problem of generating multiple hypotheses for structured prediction tasks that involve interaction with users or successive components in a cascaded architecture. Given a set of multiple hypotheses, such components/users typically have the ability to retrieve the best (or approximately the best) solution in this set. The standard approach for handling such a scenario is to first learn a single-output model and then produce $M$-Best Maximum a Posteriori (MAP) hypotheses from this model. In contrast, we learn to produce multiple outputs by formulating this task as a multiple-output structured-output prediction problem with a loss function that effectively captures the setup of the problem. We present a max-margin formulation that minimizes an upper bound on this loss function. Experimental results on image segmentation and protein side-chain prediction show that our method outperforms conventional approaches used for this type of scenario and leads to substantial improvements in prediction accuracy.

1 Introduction

A number of problems in Computer Vision, Natural Language Processing and Computational Biology involve predictions over complex but structured interdependent outputs, also known as structured-output prediction. Formulations such as Conditional Random Fields (CRFs) [18], Max-Margin Markov Networks (M$^3$N) [27], and Structured Support Vector Machines (SSVMs) [28] have provided principled techniques for learning such models. In all these (supervised) settings, the learning algorithm typically has access to input-output pairs $\{(x_i, y_i) \mid x_i \in \mathcal{X}, y_i \in \mathcal{Y}\}$, and the goal is to learn a mapping from the input space to the output space, $f : \mathcal{X} \to \mathcal{Y}$, that minimizes a (regularized) task-dependent loss function $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_+$, where $\ell(y_i, \hat{y}_i)$ denotes the cost of predicting $\hat{y}_i$ when the correct label is $y_i$. Notice that the algorithm always makes a single prediction $\hat{y}_i$ and pays a penalty $\ell(y_i, \hat{y}_i)$ for that prediction. However, in a number of settings, it might be beneficial (even necessary) to make multiple predictions:

1. Interactive Intelligent Systems. The goal of interactive machine-learning algorithms is to produce an output for an expert or a user in the loop. Popular examples include tools for interactive image segmentation (where the system produces a cutout of an object from a picture [5, 25]), systems for image processing/manipulation tasks such as image denoising and deblurring (e.g., Photoshop), and machine translation services (e.g., Google Translate). These problems are typically modeled using structured probabilistic models and involve computing the Maximum a Posteriori (MAP) solution. In order to minimize user interactions, the interface could show not just a single prediction but a small set of diverse predictions, and simply let the user pick the best one.

2. Generating M-Best Hypotheses. Machine learning algorithms are often cascaded, with the output of one model being fed into another. In such a setting, at the initial stages it is not necessary to make the perfect prediction; rather, the goal is to make a set of plausible predictions, which may then be re-ranked or combined by a secondary mechanism.
For instance, in Computer Vision, this is the case for state-of-the-art methods for human-pose estimation, which produce multiple predictions that are then refined by employing a temporal model [23, 3]. In Natural Language Processing, this is the case for sentence parsing [8] and machine translation [26], where an initial system produces a list of $M$-Best hypotheses [12, 24] (also called $k$-best lists in the NLP literature), which are then re-ranked. The common principle in both scenarios is that we need to generate a set of plausible hypotheses for an algorithm/expert downstream to evaluate. Traditionally, this is accomplished by learning a single-output model and then producing $M$-Best hypotheses from it (also called the $M$-Best MAP problem [20, 11, 29] or the Diverse $M$-Best problem [3] in the context of graphical models). Notice that the single-output model is typically trained in the standard way, i.e., either to match the data distribution (max-likelihood) or to score ground-truth the highest by a margin (max-margin). Thus, there is a disparity between the way this model is trained and the way it is actually used. The key motivating question for this paper is: can we learn to produce a set of plausible hypotheses? We refer to such a setting as Multiple Choice Learning (MCL), because the learner must learn to produce multiple choices for an expert or other algorithm.

Overview. This paper presents an algorithm for MCL, formulated as multiple-output structured-output learning, where given an input sample $x_i$ the algorithm produces a set of $M$ hypotheses $\{\hat{y}_i^1, \ldots, \hat{y}_i^M\}$. We first present a meaningful loss function for this task that effectively captures the setup of the problem. Next, we present a max-margin formulation for training this $M$-tuple predictor that minimizes an upper bound on the loss function. Despite the popularity of $M$-Best approaches, to the best of our knowledge, this is the first attempt to directly model the $M$-Best prediction problem. Our approach has natural connections to SSVMs with latent variables, and resembles a structured-output version of $k$-means clustering. Experimental results on the problems of image segmentation and protein side-chain prediction show that our method outperforms conventional $M$-Best prediction approaches used for this scenario and leads to substantial improvements in prediction accuracy.

The outline for the rest of this paper is as follows: Section 2 provides the notation and discusses classical (single-output) structured-output learning; Section 3 introduces the natural task loss for multiple-output prediction and presents our learning algorithm; Section 4 discusses related work; Section 5 compares our algorithm to other approaches experimentally; and we conclude in Section 6 with a summary and ideas for future work.

2 Preliminaries: (Single-Output) Structured-Output Prediction

We begin by reviewing classical (single-output) structured-output prediction and establishing the notation used in the paper.

Notation. For any positive integer $n$, let $[n]$ be shorthand for the set $\{1, 2, \ldots, n\}$. Given a training dataset of input-output pairs $\{(x_i, y_i) \mid i \in [n], x_i \in \mathcal{X}, y_i \in \mathcal{Y}\}$, we are interested in learning a mapping $f : \mathcal{X} \to \mathcal{Y}$ from an input space $\mathcal{X}$ to a structured output space $\mathcal{Y}$ that is finite but typically exponentially large (e.g., the set of all segmentations of an image, or all English translations of a Chinese sentence).
Structured Support Vector Machines (SSVMs). In an SSVM setting, the mapping is defined as $f(x) = \operatorname{argmax}_{y \in \mathcal{Y}} w^T \phi(x, y)$, where $\phi(x, y)$ is a joint feature map, $\phi : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^d$. The quality of the prediction $\hat{y}_i = f(x_i)$ is measured by a task-specific loss function $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_+$, where $\ell(y_i, \hat{y}_i)$ denotes the cost of predicting $\hat{y}_i$ when the correct label is $y_i$. Some examples of loss functions are the intersection/union criterion used by the PASCAL Visual Object Category Segmentation Challenge [10], and the BLEU score used to evaluate machine translations [22]. The task loss is typically non-convex and non-continuous in $w$. Tsochantaridis et al. [28] proposed to optimize a regularized surrogate loss function:

$$\min_w \; \frac{1}{2} \|w\|_2^2 + C \sum_{i \in [n]} h_i(w) \qquad (1)$$

where $C$ is a positive multiplier and $h_i(\cdot)$ is the structured hinge loss:

$$h_i(w) = \max_y \left[ \ell(y_i, y) + w^T \phi(x_i, y) \right] - w^T \phi(x_i, y_i). \qquad (2)$$

It can be shown [28] that the hinge loss is an upper bound on the task loss, i.e., $h_i(w) \ge \ell(y_i, f(x_i))$. Moreover, $h_i(w)$ is a non-smooth convex function, and can be equivalently expressed with a set of constraints:

$$\min_{w, \xi_i} \; \frac{1}{2} \|w\|_2^2 + C \sum_{i \in [n]} \xi_i \qquad (3a)$$
$$\text{s.t.} \quad w^T \phi(x_i, y_i) - w^T \phi(x_i, y) \ge \ell(y_i, y) - \xi_i \quad \forall y \in \mathcal{Y} \setminus y_i \qquad (3b)$$
$$\xi_i \ge 0. \qquad (3c)$$

This formulation is known as the margin-rescaled $n$-slack SSVM [28]. Intuitively, we can see that it minimizes the squared norm of $w$ subject to constraints that enforce a soft margin between the score of the ground-truth $y_i$ and the score of all other predictions. The above problem (3) is a Quadratic Program (QP) with $n|\mathcal{Y}|$ constraints, which is typically exponentially large. If an efficient separation oracle for identifying the most violated constraint is available, then a cutting-plane approach can be used to solve the QP. A cutting-plane algorithm maintains a working set of constraints and incrementally adds the most violated constraint to this working set while solving for the optimum solution under the working set. Tsochantaridis et al. [28] showed that such a procedure converges in a polynomial number of steps.
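As a concrete illustration of the pieces above, the following sketch evaluates the structured hinge loss of Eq. (2) by brute-force loss-augmented decoding. This is only an expository sketch, not the authors' code: the feature map `phi`, the task loss `loss`, and an enumerable `label_space` are assumptions made here for illustration, and practical SSVM solvers replace the enumeration with a problem-specific separation oracle (e.g., graph cuts for the segmentation models used later in the paper).

```python
import numpy as np

def structured_hinge(w, phi, loss, x_i, y_i, label_space):
    """Margin-rescaled structured hinge loss, Eq. (2).

    phi(x, y) -> np.ndarray is a joint feature map, loss(y, y_hat) -> float
    is the task loss, and label_space is an iterable over Y (assumed small
    enough to enumerate for this illustration).
    """
    # Separation oracle: the most violated label under loss-augmented scoring.
    y_star = max(label_space, key=lambda y: loss(y_i, y) + w @ phi(x_i, y))
    return (loss(y_i, y_star) + w @ phi(x_i, y_star)) - w @ phi(x_i, y_i)
```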
3 Multiple-Output Structured-Output Prediction

We now describe our proposed formulation for multiple-output structured-output prediction.

Model. Our model is a generalization of the single-output SSVM. A multiple-output SSVM is a mapping from the input space $\mathcal{X}$ to an $M$-tuple of structured outputs $Y_i = \{\hat{y}_i^1, \ldots, \hat{y}_i^M \mid \hat{y}_i \in \mathcal{Y}\}$, given by $g : \mathcal{X} \to \mathcal{Y}^M$, where $g(x) = \operatorname{argmax}_{Y \in \mathcal{Y}^M} W^T \Phi(x, Y)$. [Footnote 1: Our formulation is described with a nominal ordering of the predictions. However, both the proposed objective function and optimization algorithm are invariant to permutations of this ordering.] Notice that the joint feature map is now a function of the input and the entire set of predicted structured outputs, i.e., $\Phi : \mathcal{X} \times \mathcal{Y}^M \to \mathbb{R}^d$. Without further assumptions, optimizing over the output space $|\mathcal{Y}|^M$ would be intractable. We make a mean-field-like simplifying assumption that the set score factors into independent predictor scores, i.e., $\Phi(x_i, Y) = [\phi^1(x_i, y^1)^T, \ldots, \phi^M(x_i, y^M)^T]^T$. Thus, $g$ is composed of $M$ single-output predictors: $g(x) = (f^1(x), \ldots, f^M(x))$, where $f^m(x) = \operatorname{argmax}_{y \in \mathcal{Y}} w_m^T \phi^m(x, y)$. Hence, the multiple-output SSVM is parameterized by an $M$-tuple of weight vectors: $W = [w_1^T, \ldots, w_M^T]^T$.

3.1 Multiple-Output Loss

Let $\hat{Y}_i = \{\hat{y}_i^1, \ldots, \hat{y}_i^M\}$ be the set of predicted outputs for input $x_i$, i.e., $\hat{y}_i^m = f^m(x_i)$. In the single-output SSVM, there typically exists a ground-truth output $y_i$ for each datapoint, and the quality of $\hat{y}_i$ w.r.t. $y_i$ is given by $\ell(y_i, \hat{y}_i)$. How good is a set of outputs? For our multiple-output predictor, we need to define a task-specific loss function that can measure the quality of any set of predictions $\hat{Y}_i \in \mathcal{Y}^M$. Ideally, the quality of these predictions should be evaluated by the secondary mechanism that uses them. For instance, in an interactive setting where they are shown to a user, the quality of $\hat{Y}_i$ could be measured by how much it reduces the user-interaction time. In the $M$-best hypotheses re-ranking scenario, the accuracy of the top single output after re-ranking could be used as the quality measure for $\hat{Y}_i$. While multiple options exist, in order to provide a general formulation and to isolate our approach, we propose the "oracle" or "hindsight" set-loss as a surrogate:

$$L(\hat{Y}_i) = \min_{\hat{y}_i \in \hat{Y}_i} \ell(y_i, \hat{y}_i), \qquad (4)$$

i.e., the set of predictions $\hat{Y}_i$ only pays a loss for the most accurate prediction contained in this set (e.g., the best segmentation of an image, or the best translation of a sentence). This loss has the desirable behaviour that predicting a set that contains even a single accurate output is better than predicting a set that has none. Moreover, only being penalized for the most accurate prediction allows an ensemble to hedge its bets without having to pay for being too diverse (this is opposite to the effect that replacing min with max or avg. would have). However, this also makes the set-loss rather poorly conditioned: if even a single prediction in the ensemble is the ground-truth, the set-loss is 0, no matter what else is predicted.

Hinge-like Upper Bound. The set-loss $L(\hat{Y}_i(W))$ is a non-continuous, non-convex function of $W$ and is thus difficult to optimize. If unique ground-truth sets $Y_i$ were available, we could set up a standard hinge-loss approximation:

$$H_i(W) = \max_{Y \in \mathcal{Y}^M} \left[ L(Y) + W^T \Phi(x_i, Y) \right] - W^T \Phi(x_i, Y_i) \qquad (5)$$

where $\Phi(x_i, Y) = [\phi^1(x_i, y^1)^T, \ldots, \phi^M(x_i, y^M)^T]^T$ are stacked joint feature maps. However, no such natural choice for $Y_i$ exists. We propose a hinge-like upper bound on the set-loss, which we refer to as min-hinge:

$$\tilde{H}_i(W) = \min_{m \in [M]} h_i(w_m), \qquad (6)$$

i.e., we take the min over the hinge losses (2) corresponding to each of the $M$ predictors. Since each hinge loss is an upper bound on the corresponding task loss, i.e., $h_i(w_m) \ge \ell(y_i, f^m(x_i))$, it is straightforward to see that the min-hinge is an upper bound on the set-loss, i.e., $\tilde{H}_i(W) \ge L(\hat{Y}_i)$. Notice that min-hinge is a min of convex functions, and thus not guaranteed to be convex.
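The two quantities just defined are simple to state in code; the sketch below (illustrative only, reusing the same hypothetical `loss` and per-predictor hinge as in the earlier sketch) computes the oracle set-loss of Eq. (4) and the min-hinge bound of Eq. (6).

```python
def set_loss(loss, y_true, predictions):
    """Oracle set-loss, Eq. (4): only the most accurate of the M
    hypotheses in `predictions` is penalized."""
    return min(loss(y_true, y_hat) for y_hat in predictions)

def min_hinge(hinge, W, x_i, y_i):
    """Min-hinge upper bound, Eq. (6): the smallest of the M per-predictor
    hinge losses; hinge(w_m, x_i, y_i) evaluates Eq. (2) for one predictor,
    e.g. the structured_hinge sketch above."""
    return min(hinge(w_m, x_i, y_i) for w_m in W)
```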
3.2 Coordinate Descent for Learning Multiple Predictors

We now present our algorithm for learning a multiple-output SSVM by minimizing the regularized min-hinge loss:

$$\min_W \; \frac{1}{2} \|W\|_2^2 + C \sum_{i \in [n]} \tilde{H}_i(W). \qquad (7)$$

We begin by rewriting the min-hinge loss in terms of indicator "flag" variables, i.e.,

$$\min_{W, \{\alpha_{i,m}\}} \; \frac{1}{2} \|W\|_2^2 + C \sum_{i \in [n]} \sum_{m \in [M]} \alpha_{i,m} \, h_i(w_m) \qquad (8a)$$
$$\text{s.t.} \quad \sum_{m \in [M]} \alpha_{i,m} = 1 \quad \forall i \in [n] \qquad (8b)$$
$$\alpha_{i,m} \in \{0, 1\} \quad \forall i \in [n],\; m \in [M] \qquad (8c)$$

where $\alpha_{i,m}$ is a flag variable that indicates which predictor produces the smallest hinge loss. Optimization problem (8) is a mixed-integer quadratic program (MIQP), which is NP-hard in general. However, we can exploit the structure of the problem via a block-coordinate descent algorithm where $W$ and $\{\alpha_{i,m}\}$ are optimized iteratively:

1. Fix $W$; optimize all $\{\alpha_{i,m}\}$. Given $W$, the optimization over $\{\alpha_{i,m}\}$ reduces to the minimization of $\sum_{i \in [n]} \sum_{m \in [M]} \alpha_{i,m} h_i(w_m)$ subject to the "pick-one-predictor" constraints (8b, 8c). This decomposes into $n$ independent problems, which simply identify the best predictor for each datapoint according to the current hinge losses, i.e.:

$$\alpha_{i,m} = \begin{cases} 1 & \text{if } m = \operatorname{argmin}_{m' \in [M]} h_i(w_{m'}) \\ 0 & \text{else.} \end{cases} \qquad (9)$$

2. Fix $\{\alpha_{i,m}\}$; optimize $W$. Given $\{\alpha_{i,m}\}$, optimization over $W$ decomposes into $M$ independent problems, one for each predictor, which are equivalent to single-output SSVM learning problems:

$$\min_W \; \frac{1}{2} \|W\|_2^2 + C \sum_{i \in [n]} \sum_{m \in [M]} \alpha_{i,m} h_i(w_m) \;=\; \sum_{m \in [M]} \min_{w_m} \left\{ \frac{1}{2} \|w_m\|_2^2 + C \sum_{i : \alpha_{i,m} \neq 0} h_i(w_m) \right\}. \qquad (10)$$

Thus, each subproblem in (10) can be optimized using any standard technique for training SSVMs. We use the 1-slack algorithm of [14].

Convergence. Overall, the block-coordinate descent algorithm above iteratively assigns each datapoint to a particular predictor (Step 1) and then independently trains each predictor with just the points that were assigned to it (Step 2). This is fairly reminiscent of $k$-means, where step 1 can be thought of as the member re-assignment step (or the M-step in EM) and step 2 can be thought of as the cluster-fitting step (or the E-step in EM). Since the flag variables take on discrete values and the objective function is non-increasing with iterations, the algorithm is guaranteed to converge in a finite number of steps.

Generalization. Formulation (8) can be generalized by replacing the "pick-one-predictor" constraint with "pick-$K$-predictors", i.e., $\sum_{m \in [M]} \alpha_{i,m} = K$, where $K$ is a robustness parameter that allows training-data overlap between predictors. The M-step (cluster reassignment) is still simple, and involves assigning a data point to the top $K$ best predictors. The E-step is unchanged. Notice that at $K = M$, all predictors learn the same mapping. We analyze the effect of $K$ in our experiments.
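Put together, the block-coordinate descent above amounts to a k-means-style alternation between assignment and refitting. The sketch below is a minimal rendering of that loop for the K = 1 case, under stated assumptions: `train_ssvm` fits a single-output SSVM on a list of (x, y) pairs and `hinge` evaluates Eq. (2) for one predictor; both are placeholders, and the round-robin initialization and fixed iteration count here are simpler than the k-means initialization and convergence check used in the paper.

```python
def mcl_train(data, M, train_ssvm, hinge, n_iters=10):
    """Block-coordinate descent for MCL (Section 3.2), K = 1 case."""
    # Crude initialization: round-robin assignment of points to predictors
    # (the paper initializes from k-means clusters instead).
    assign = [i % M for i in range(len(data))]
    W = [None] * M
    for _ in range(n_iters):
        # Step 2 (fix alpha, optimize W): fit each predictor on its points.
        for m in range(M):
            subset = [data[i] for i, a in enumerate(assign) if a == m]
            W[m] = train_ssvm(subset)
        # Step 1 (fix W, optimize alpha): reassign each point to the
        # predictor with the smallest hinge loss, Eq. (9).
        assign = [min(range(M), key=lambda m: hinge(W[m], x, y))
                  for (x, y) in data]
    return W
```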
4 Related Work

At first glance, our work seems related to the multi-label classification literature, where the goal is to predict multiple labels for each input instance (e.g., text tags for images on Flickr). However, the motivation and context of our work is fundamentally different. Specifically, in multi-label classification there are multiple possible labels for each instance and the goal is to predict as many of them as possible. On the other hand, in our setting there is a single ground-truth label for each instance and the learner makes multiple guesses, all of which are evaluated against that single ground-truth. For the unstructured setting (i.e., when $|\mathcal{Y}|$ is polynomial), Dey et al. [9] proposed an algorithm that learns a multi-class classifier for each "slot" in an $M$-Best list, and provide a formal regret reduction from submodular sequence optimization. To the best of our knowledge, the only other work that explicitly addresses the task of predicting multiple structured outputs is multi-label structured prediction (MLSP) [19]. This work may be seen as a technique to output predictions in the power set of $\mathcal{Y}$ ($2^{\mathcal{Y}}$) with a learning cost comparable to algorithms for prediction over $\mathcal{Y}$. Most critically, MLSP requires gold-standard sets of labels (one set for each training example). In contrast, MCL neither needs nor has access to gold-standard sets. At a high level, MCL and MLSP are orthogonal approaches, e.g., we could introduce MLSP within MCL to create an algorithm that predicts multiple (diverse) sets of structured outputs (e.g., multiple guesses by the algorithm where each guess is a set of bounding boxes of objects in an image).

A form of min-set-loss has received some attention in the context of ambiguously or incompletely annotated data. For instance, [4] trains an SSVM for object detection implicitly defining a task-adapted loss, $L_{\min}(Y, \hat{y}) = \min_{y \in Y} \ell(y, \hat{y})$. Note that in this case there is a set of ground-truth labels and the model's prediction is a single label (evaluated against the closest ground-truth). Our formulation is also reminiscent of a Latent-SSVM, with the indicator flags $\{\alpha_{i,m} \mid m \in [M]\}$ taking a role similar to latent variables. However, the two play very different roles. Latent variable models typically maximize or marginalize the model score across the latent variables, while MCL uses the flag variables as a representation of the oracle loss. At a high level, our ideas are also related to ensemble methods [21] like boosting. However, the key difference is that ensemble methods attempt to combine outputs from multiple weak predictors to ultimately make a single prediction. We are interested in making multiple predictions which will all be handed to an expert or secondary mechanism that has access to more complex (e.g., higher-order) features.

[Figure 1: Each row shows the (a) input image, (b) ground-truth segmentation, and (c-h) the set of predictions produced by MCL (M = 6), each annotated with its pixel error. A red border indicates the most accurate segmentation (i.e., lowest error). We can see that the predictors produce different plausible foreground hypotheses, e.g., predictor (g) thinks foliage-like things are foreground.]

5 Experiments

Setup. We tested algorithm MCL on two problems: i) foreground-background segmentation in image collections and ii) protein side-chain prediction. In both problems making a single perfect prediction is difficult due to inherent ambiguity in the tasks. Moreover, inference-time computing limitations force us to learn restricted models (e.g., pairwise attractive CRFs) that may never be able to capture the true solution with a single prediction. The goal of our experiments is to study how much predicting a set of plausible hypotheses helps. Our experiments will show that MCL is able to produce sets of hypotheses which contain more accurate predictions than other algorithms and baselines aimed at producing multiple hypotheses.

5.1 Foreground-Background Segmentation

Dataset. We used the co-segmentation dataset, iCoseg, of Batra et al. [2]. iCoseg consists of 37 groups of related images mimicking typical consumer photograph collections. Each group may be thought of as an "event" (e.g., images from a baseball game, a safari, etc.). The dataset provides pixel-level ground-truth foreground-background segmentations for each image. We used 9 difficult groups from iCoseg containing 166 images in total. These images were then split into train, validation and test sets of roughly equal size. See Fig. 1, 2 for some example images and segmentations.

Model and Features. The segmentation task is modeled as a binary pairwise MRF where each node corresponds to a superpixel [1] in the image. We extracted 12-dim color features at each superpixel (mean RGB; mean HSV; 5-bin Hue histogram; Hue histogram entropy). The edge features, computed for each pair of adjacent superpixels, correspond to a standard Potts model and a contrast-sensitive Potts model. The weights at each edge were constrained to be positive so that the resulting supermodular potentials could be maximized via graph-cuts [6, 17].

Baselines and Evaluation. We compare our algorithm against three alternatives for producing multiple predictions: i) Single SSVM + $M$-Best MAP [29], ii) Single SSVM + Diverse $M$-Best MAP [3], and iii) Clustering + Multiple SSVMs. For the first two baselines, we used all training images to learn a single SSVM and then produced multiple segmentations via $M$-Best MAP and Diverse $M$-Best MAP [3]. The $M$-Best MAP baseline was implemented via the BMMF algorithm [29] using dynamic graph-cuts [15] for computing max-marginals efficiently. For the Diverse $M$-Best MAP baseline we implemented the DivMBest algorithm of Batra et al. [3] using dynamic graph-cuts. The third baseline, Clustering + Multiple SSVMs (C-SSVM), involves first clustering the training images into $M$ clusters and then training $M$ SSVMs independently, one on each cluster. For clustering, we used $k$-means with $\ell_2$ distance on color features (same as above) computed on foreground pixels. For each algorithm we varied the number of predictors $M \in \{1, 2, \ldots, 6\}$ and tuned the regularization parameter $C$ on validation. Since MCL involves non-convex optimization, a good initialization is important. We used the output of $k$-means clustering as the initial assignment of images to predictors, so MCL's first coordinate-descent iteration produces the same results as C-SSVM.
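For concreteness, the clustering step of the C-SSVM baseline (which also serves as MCL's initialization) can be sketched as below. The color-feature extractor and SSVM trainer are hypothetical stand-ins, since the paper specifies only that k-means with l2 distance is run on color features of foreground pixels; the sklearn call is one reasonable way to realize that step, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cssvm_baseline(data, color_features, train_ssvm, M):
    """Clustering + Multiple SSVMs baseline: k-means on per-image color
    features, then one independently trained SSVM per cluster.

    data is a list of (image, segmentation) pairs; color_features maps a
    pair to a fixed-length feature vector (a placeholder here).
    """
    feats = np.stack([color_features(x, y) for (x, y) in data])
    cluster_of = KMeans(n_clusters=M, n_init=10).fit_predict(feats)
    return [train_ssvm([data[i] for i in range(len(data))
                        if cluster_of[i] == m])
            for m in range(M)]
```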
[Figure 2: In each column, the first row shows input images; the second shows ground truth; the third shows the segmentation produced by the single-SSVM baseline; and the last two rows show the best MCL predictions (M = 6), with their pixel errors, at the end of the first and last coordinate-descent iterations.]

The task loss in this experiment ($\ell$) is the percentage of incorrectly labeled pixels, and the evaluation metric is the set-loss, $L = \min_{\hat{y}_i \in \hat{Y}_i} \ell(y_i, \hat{y}_i)$, i.e., the pixel error of the best segmentation among all predictions.

[Figure 3: Experiments on foreground-background segmentation. Panels: (a) set-loss vs. M; (b) objective and train/test error vs. coordinate-descent iteration (M = 6, C = 0.8); (c) set-loss vs. C (M = 6); (d) set-loss vs. K (M = 6). Curves compare MCL, MCL (train), C-SSVM, M-Best and DivM-Best in pixel error %.]

Comparison against Baselines. Fig. 3a shows the performance of the various algorithms as a function of the number of predictors $M$. We observed that $M$-Best MAP produces nearly identical predictions and thus the error drops negligibly as $M$ is increased. On the other hand, the diverse $M$-Best predictions output by DivMBest [3] lead to a substantial drop in the set-loss. MCL outperforms both DivMBest and C-SSVM, confirming our hypothesis that it is beneficial to learn a collection of predictors, rather than learning a single predictor and making diverse predictions from it.

Behaviour of Coordinate Descent. Fig. 3b shows the MCL objective and train/test errors as a function of the coordinate-descent steps. We verify that the objective function improves at every iteration and notice a clear correlation between the objective and the train/test errors.

Effect of C. Fig. 3c compares performance for different values of the regularization parameter $C$. We observe a fairly stable trend, with MCL consistently outperforming the baselines.

Effect of K. Fig. 3d shows the performance of MCL as the robustness parameter $K$ is varied from 1 to $M$. We observe a monotonic reduction in error as $K$ decreases, which suggests there is a natural clustering of the data and thus learning a single SSVM is detrimental.

Qualitative Results. Fig. 1 shows example images, ground-truth segmentations, and the predictions made by $M = 6$ predictors. We observe that the $M$ hypotheses are both diverse and plausible. The evolution of the best prediction with coordinate-descent iterations can be seen in Fig. 2.
5.2 Protein Side-Chain Prediction

Model and Dataset. Given a protein backbone structure, the task here is to predict the amino-acid side-chain configurations. This problem has traditionally been formulated as a pairwise MRF with node labels corresponding to (discretized) side-chain configurations (rotamers). These models include pairwise interactions between nearby side-chains, and between side-chains and backbone. We use the dataset of [7], which consists of 276 proteins (up to 700 residues long) split into train and test sets of sizes 55 and 221, respectively. [Footnote 2: Dataset available from http://cyanover.fhcrc.org/recomb-2007/] The energy function is defined as a weighted sum of eight known energy terms, where the weights are to be learned. We used TRW-S [16] (early iterations) and ILP (CPLEX [13]) for inference.

Baselines and Evaluation. For this application there is no natural analogue to the C-SSVM baseline, and thus we used a boosting-like baseline where we first train an SSVM on the entire training data, use the training instances with high error to train a second SSVM, and so on. For comparison, we also report results from the CRF and HCRF models proposed in [7]. Following [7], we report average error rates for the first two angles ($\chi_1$ and $\chi_2$) on all test proteins.

Results. Fig. 4 shows the results. Overall, we observe behavior similar to the previous set of experiments. Fig. 4a confirms that multiple predictors are beneficial, and that MCL is able to outperform the boosting-like baseline. Fig. 4b shows the progress of the MCL objective and test loss with coordinate-descent iterations; we again observe a positive correlation between the objective and the loss. Fig. 4c shows that MCL outperforms the baselines across a range of values of $C$.

[Figure 4: Experiments on protein side-chain prediction. Panels: (a) error vs. M; (b) objective and test error vs. coordinate-descent iteration (M = 4, C = 1); (c) error vs. C (M = 4). Curves compare MCL, CRF, HCRF and the boosting-like baseline in error %.]

6 Discussion and Conclusions

We presented an algorithm for producing a set of structured outputs and argued that in a number of problems it is beneficial to generate a set of plausible and diverse hypotheses. Typically, this is accomplished by learning a single-output model and then producing $M$-best hypotheses from it. This causes a disparity between the way the model is trained (to produce a single output) and the way it is used (to produce multiple outputs).
Our proposed algorithm (MCL) provides a principled way to directly optimize the multiple-prediction min-set-loss. There are a number of directions in which to extend this work. While we evaluated the performance of all algorithms in terms of the oracle set-loss, it would be interesting to measure the impact of MCL and other baselines on user experience or final-stage performance in cascaded algorithms. Further, our model assumes a modular scoring function $S(Y) = W^T \Phi(x, Y) = \sum_{m \in [M]} w_m^T \phi^m(x, y^m)$, i.e., the score of a set is the sum of the scores of its members. In a number of situations, the score $S(Y)$ might be a submodular function. Such scoring functions often arise when we want the model to explicitly reward diverse subsets. We plan to make connections with greedy algorithms for submodular maximization for such cases.

Acknowledgments: We thank David Sontag for his assistance with the protein data. AGR was supported by the C2S2 Focus Center (under the SRC's Focus Center Research Program).

References
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC Superpixels Compared to State-of-the-art Superpixel Methods. PAMI, (to appear) 2012.
[2] D. Batra, A. Kowdle, D. Parikh, J. Luo, and T. Chen. iCoseg: Interactive Co-segmentation with Intelligent Scribble Guidance. In CVPR, 2010.
[3] D. Batra, P. Yadollahpour, A. Guzman-Rivera, and G. Shakhnarovich. Diverse M-Best Solutions in Markov Random Fields. In ECCV, 2012.
[4] M. B. Blaschko and C. H. Lampert. Learning to Localize Objects with Structured Output Regression. In ECCV, 2008.
[5] Y. Boykov and M.-P. Jolly. Interactive Graph Cuts for Optimal Boundary and Region Segmentation of Objects in N-D Images. In ICCV, 2001.
[6] Y. Boykov, O. Veksler, and R. Zabih. Efficient Approximate Energy Minimization via Graph Cuts. PAMI, 20(12):1222-1239, 2001.
[7] C. Yanover, O. Schueler-Furman, and Y. Weiss. Minimizing and Learning Energy Functions for Side-Chain Prediction. Journal of Computational Biology, 15(7):899-911, 2008.
[8] M. Collins. Discriminative Reranking for Natural Language Parsing. In ICML, pages 175-182, 2000.
[9] D. Dey, T. Y. Liu, M. Hebert, and J. A. Bagnell. Contextual Sequence Prediction with Application to Control Library Optimization. In Robotics: Science and Systems, 2012.
[10] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results. http://www.pascal-network.org/challenges/VOC/voc2011/workshop/index.html.
[11] M. Fromer and A. Globerson. An LP View of the M-best MAP Problem. In NIPS, 2009.
[12] L. Huang and D. Chiang. Better K-best Parsing. In Proceedings of the Ninth International Workshop on Parsing Technology (IWPT), pages 53-64, 2005.
[13] IBM Corporation. IBM ILOG CPLEX Optimization Studio. http://www-01.ibm.com/software/integration/optimization/cplex-optimization-studio/, 2012.
[14] T. Joachims, T. Finley, and C.-N. Yu. Cutting-Plane Training of Structural SVMs. Machine Learning, 77(1):27-59, 2009.
[15] P. Kohli and P. H. S. Torr. Measuring Uncertainty in Graph Cut Solutions. CVIU, 112(1):30-38, 2008.
[16] V. Kolmogorov. Convergent Tree-Reweighted Message Passing for Energy Minimization. PAMI, 28(10):1568-1583, 2006.
[17] V. Kolmogorov and R. Zabih. What Energy Functions Can Be Minimized via Graph Cuts? PAMI, 26(2):147-159, 2004.
[18] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In ICML, 2001.
[19] C. H. Lampert. Maximum Margin Multi-Label Structured Prediction. In NIPS, 2011.
[20] E. L. Lawler. A Procedure for Computing the K Best Solutions to Discrete Optimization Problems and Its Application to the Shortest Path Problem. Management Science, 18:401-405, 1972.
[21] D. W. Opitz and R. Maclin. Popular Ensemble Methods: An Empirical Study. J. Artif. Intell. Res. (JAIR), 11:169-198, 1999.
[22] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: a Method for Automatic Evaluation of Machine Translation. In ACL, 2002.
[23] D. Park and D. Ramanan. N-Best Maximal Decoders for Part Models. In ICCV, 2011.
[24] A. Pauls, D. Klein, and C. Quirk. Top-Down K-Best A* Parsing. In ACL, 2010.
[25] C. Rother, V. Kolmogorov, and A. Blake. "GrabCut": Interactive Foreground Extraction using Iterated Graph Cuts. SIGGRAPH, 2004.
[26] L. Shen, A. Sarkar, and F. J. Och. Discriminative Reranking for Machine Translation. In HLT-NAACL, pages 177-184, 2004.
[27] B. Taskar, C. Guestrin, and D. Koller. Max-Margin Markov Networks. In NIPS, 2003.
[28] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large Margin Methods for Structured and Interdependent Output Variables. JMLR, 6:1453-1484, 2005.
[29] C. Yanover and Y. Weiss. Finding the M Most Probable Configurations Using Loopy Belief Propagation. In NIPS, 2003.
Temporal Adaptation in a Silicon Auditory Nerve

John Lazzaro
CS Division, UC Berkeley
571 Evans Hall
Berkeley, CA 94720

Abstract

Many auditory theorists consider the temporal adaptation of the auditory nerve a key aspect of speech coding in the auditory periphery. Experiments with models of auditory localization and pitch perception also suggest temporal adaptation is an important element of practical auditory processing. I have designed, fabricated, and successfully tested an analog integrated circuit that models many aspects of auditory nerve response, including temporal adaptation.

1. INTRODUCTION

We are modeling known and proposed auditory structures in the brain using analog VLSI circuits, with the goal of making contributions both to engineering practice and biological understanding. Computational neuroscience involves modeling biology at many levels of abstraction. The first silicon auditory models were constructed at a fairly high level of abstraction (Lyon and Mead, 1988; Lazzaro and Mead, 1989ab; Mead et al., 1991; Lyon, 1991). The functional limitations of these silicon systems have prompted a new generation of auditory neural circuits designed at a lower level of abstraction (Watts et al., 1991; Liu et al., 1991).

The silicon model of auditory nerve response models sensory transduction and spike generation in the auditory periphery at a high level of abstraction (Lazzaro and Mead, 1989c); this circuit is a component in silicon models of auditory localization, pitch perception, and spectral shape enhancement (Lazzaro and Mead, 1989ab; Lazzaro, 1991a). Among other limitations, this circuit does not model the short-term temporal adaptation of the auditory nerve. Many auditory theorists consider the temporal adaptation of the auditory nerve a key aspect of speech coding in the auditory periphery (Delgutte and Kiang, 1984). From the engineering perspective, the pitch perception and auditory localization chips perform well with sustained sounds as input; temporal adaptation in the silicon auditory nerve should improve performance for transient sounds.

I have designed, fabricated, and tested an integrated circuit that models the temporal adaptation of spiral ganglion neurons in the auditory periphery. The circuit receives an analog voltage input, corresponding to the signal at an output tap of a silicon cochlea, and produces fixed-width, fixed-height pulses that are correlates of the action potentials of an auditory nerve fiber. I have also fabricated and tested an integrated circuit that combines an array of these neurons with a silicon cochlea (Lyon and Mead, 1988); this design is a silicon model of auditory nerve response. Both circuits were fabricated using the Orbit double-polysilicon n-well 2 µm process.

2. TEMPORAL ADAPTATION

Figure 1 shows data from the temporal adaptation circuit; the data in this figure was taken by connecting signals directly to the inner hair cell circuit input, bypassing silicon cochlea processing. In (a), we apply a 1 kHz pure tone burst of 20 ms in duration to the input of the hair cell circuit (top trace), and see an adapting sequence of spikes as the output (middle trace). If this tone burst is repeated at 80 ms intervals, each response is unique; by averaging the responses to 64 consecutive tone bursts (bottom trace), we see the envelope of the temporal adaptation superimposed on the cycle-by-cycle phase-locking of the spike train. These behaviors qualitatively match biological experiments (Kiang et al., 1965).
In biological auditory nerve fibers, cycle-by-cycle phase locking ceases for auditory fibers tuned to sufficiently high frequencies, but the temporal adaptation property remains. In the silicon spiral ganglion neuron, a 10 kHz pure tone burst fails to elicit phase-locking (Figure 1(b), trace identities as in (a)). Temporal adaptation remains, however, qualitatively matching biological experiments (Kiang et al., 1965).

To compare this data with the previous generation of silicon auditory nerve circuits, we set the control parameters of the new spiral ganglion model to eliminate temporal adaptation. Figure 1(c) shows the 1 kHz tone burst response (trace identities as in (a)). Phase locking occurs without temporal adaptation. The uneven response of the averaged spike outputs is due to beat frequencies between the input tone frequency and the output spike rate; in practice, the circuit noise of the silicon cochleas adds random variation to the auditory input and smooths this response (Lazzaro and Mead, 1989c).

[Figure 1: Responses of the test chip to pure tone bursts. Horizontal axis is time for all plots; all horizontal rules measure 5 ms. (a) Chip response to a 1 kHz, 20 ms tone burst. The top trace shows the tone burst input, the middle trace shows a sample response from the chip, and the bottom trace shows the averaged output of 64 responses to tone bursts. The averaged response shows both temporal adaptation and phase locking. (b) Chip response to a 10 kHz, 20 ms tone burst; trace identifications identical to (a). The response shows temporal adaptation without phase locking. (c) Chip response to a 1 kHz, 20 ms tone burst, with the adaptation circuitry disabled; trace identifications identical to (a). The response shows phase locking without temporal adaptation.]

3. CIRCUIT DESIGN

Figure 2 shows a block diagram of the model. The circuits modeling inner hair cell transduction remain unchanged from the original model (Lazzaro and Mead, 1989c), and are shown as a single box. This box performs time differentiation, nonlinear compression and half-wave rectification on the input waveform Vi, producing a unidirectional current waveform as output. The dependent current source represents this processed signal. The axon hillock circuit (Mead, 1989), drawn as a box marked with a pulse, converts this current signal into a series of fixed-width, fixed-height spikes; Vo is the output of the model. The current signal is connected to the pulse generator using a novel current mirror circuit that serves as the control element to regulate temporal adaptation. This current mirror circuit has an additional high-impedance input, Va, that exponentially scales the current entering the axon hillock circuit (the current mirror operates in the subthreshold region). The adaptation capacitor Ca is associated with the control voltage Va.

[Figure 2: Circuit schematic of the enhanced silicon model of auditory nerve response, comprising the inner hair cell (IHC) block, adaptive current mirror, and axon hillock circuit. The circuit converts the analog voltage input Vi into the pulse train Vo; control voltages Vτ and Vp control the temporal adaptation of the state variable Va on capacitor Ca. See text for details.]

Ca is constantly charged by the PFET transistor associated with control voltage Vτ, and is discharged during every pulse output of the axon hillock circuit, by an amount set by the control voltage Vp. During periods with no input signal, Va is charged to Vdd, and the current mirror is set to deliver maximum current at the onset of an input signal. If an input signal occurs and neuron activity begins, the capacitor Va is discharged with every spike, degrading the output of the current mirror. In this way, temporal adaptation occurs, with characteristics determined by Vp and Vτ. The nonlinear differential equations for this adaptation circuit are similar to the equations governing the adaptive baroreceptor circuit (Lazzaro et al., 1991); the publication describing that circuit includes an analysis deriving a recurrence relation that describes the pulse output of the circuit given a step input.
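The charge/discharge dynamics just described can be summarized in a simple discrete-time simulation. The sketch below is a behavioral caricature, not a circuit simulation: all component values, as well as the exponential current-mirror model and the reset-based integrator, are illustrative assumptions chosen only to reproduce the qualitative behavior (a strong onset response that adapts as the state voltage droops, with recovery between bursts).

```python
import numpy as np

def adaptive_neuron(i_in, dt=1e-5, v_dd=5.0, v_therm=0.025,
                    recharge_rate=1e3, dv_per_spike=0.01,
                    c_mem=1e-12, v_thresh=1.0):
    """Behavioral sketch of the adaptive spiral-ganglion circuit.

    i_in: array of input currents (amps) from the hair-cell stage.
    The state voltage v_a exponentially scales the mirrored current,
    recharges toward v_dd at recharge_rate (volts/sec, the role of the
    charging bias Vtau), and drops by dv_per_spike on each output spike
    (the role of Vp). All values are illustrative, not from the paper.
    """
    v_a, v_mem = v_dd, 0.0
    spikes = np.zeros(len(i_in), dtype=bool)
    for k, i_k in enumerate(i_in):
        # Subthreshold current mirror: output current falls off
        # exponentially as v_a droops below v_dd.
        i_out = i_k * np.exp((v_a - v_dd) / v_therm)
        v_mem += (i_out / c_mem) * dt          # axon-hillock integrator
        v_a = min(v_dd, v_a + recharge_rate * dt)
        if v_mem >= v_thresh:                  # emit a fixed-height spike
            spikes[k] = True
            v_mem = 0.0
            v_a -= dv_per_spike                # per-spike discharge of C_a
    return spikes
```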
[Figure 3: Instantaneous firing rate (spikes/sec) of the adaptive neuron as a function of time (ms); the tone burst begins at 0 ms. Each curve is marked with the amplitude of the presented tone burst, in dB. Tone burst frequency is 1 kHz.]

4. DATA ANALYSIS

The experiment shown in Figure 1(a) was repeated for tone bursts of different amplitudes; this data set was used to produce several standard measures of adaptive response (Hewitt and Meddis, 1991). The integrated auditory nerve circuit was used for this set of experiments. Data was taken from an adaptive auditory nerve output that had a best frequency of 1 kHz; the frequency of all tone bursts was also 1 kHz.

Figure 3 shows the instantaneous firing rate of the auditory nerve output as a function of time, for tone bursts of different amplitudes. Adaptation was more pronounced for more intense sounds. This difference is also seen in Figure 4. In this figure, instantaneous firing rate is plotted as a function of amplitude, both at response onset and after full adaptation.

[Figure 4: Instantaneous firing rate (spikes/sec) of the adaptive neuron as a function of amplitude (in dB). The top curve is the firing rate at response onset; the bottom curve is the firing rate after adaptation. Tone burst frequency is 1 kHz.]

Figure 4 shows that the instantaneous spike rate saturates at moderate intensity after full adaptation; at these moderate intensities, however, the onset instantaneous spike rate continues to encode intensity. Figure 4 also shows a non-monotonicity at high intensities in the onset response; this undesired non-monotonicity is a result of the undesired saturation of the silicon cochlea circuit (Lazzaro, 1991b).

5. CONCLUSION

This circuit improves the silicon model of auditory nerve response by adding temporal adaptation. We expect this improvement to enhance existing architectures for auditory localization and pitch perception, and to aid the creation of new circuits for speech processing.

Acknowledgments

Thanks to K. Johnson of CU Boulder and J. Wawrzynek of UC Berkeley for hosting this research in their laboratories. I also thank the Caltech auditory research community, specifically C. Mead, D. Lyon, M. Konishi, L. Watts, M. Godfrey, and X. Arreguit. This work was funded by the National Science Foundation.

References

Delgutte, B., and Kiang, N. Y. S. (1984). Speech coding in the auditory nerve I-V. J. Acoust. Soc. Am. 75:3, 866-918.

Hewitt, M. J. and Meddis, R. (1991). An evaluation of eight computer models of mammalian inner hair-cell function. J. Acoust. Soc. Am. 90:2, 904.

Kiang, N. Y.-S., Watanabe, T., Thomas, E. C., and Clark, L. F. (1965). Discharge Patterns of Single Fibers in the Cat's Auditory Nerve. Cambridge, MA: MIT Press.
Lazzaro, J. and Mead, C. (1989a). A silicon model of auditory localization. Neural Computation 1: 41-70.

Lazzaro, J. and Mead, C. (1989b). Silicon modeling of pitch perception. Proceedings National Academy of Sciences 86: 9597-9601.

Lazzaro, J. and Mead, C. (1989c). Circuit models of sensory transduction in the cochlea. In Mead, C. and Ismail, M. (eds), Analog VLSI Implementations of Neural Networks. Norwell, MA: Kluwer Academic Publishers, pp. 85-101.

Lazzaro, J. P. (1991a). A silicon model of an auditory neural representation of spectral shape. IEEE Journal of Solid State Circuits 26: 772-777.

Lazzaro, J. P. (1991b). Biologically-based auditory signal processing in analog VLSI. IEEE Asilomar Conference on Signals, Systems, and Computers.

Lazzaro, J. P., Schwaber, J., and Rogers, W. (1991). Silicon baroreceptors: modeling cardiovascular pressure transduction in analog VLSI. In Sequin, C. (ed), Advanced Research in VLSI, Proceedings of the 1991 Santa Cruz Conference, Cambridge, MA: MIT Press, pp. 163-177.

Liu, W., Andreou, A., and Goldstein, M. (1991). Analog VLSI implementation of an auditory periphery model. 25th Annual Conference on Information Sciences and Systems, Baltimore, MD, 1991.

Lyon, R. and Mead, C. (1988). An analog electronic cochlea. IEEE Trans. Acoust., Speech, Signal Processing 36: 1119-1134.

Lyon, R. (1991). CCD correlators for auditory models. IEEE Asilomar Conference on Signals, Systems, and Computers.

Mead, C. A., Arreguit, X., and Lazzaro, J. P. (1991). Analog VLSI models of binaural hearing. IEEE Journal of Neural Networks, 2: 230-236.

Mead, C. A. (1989). Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley.

Watts, L., Lyon, R., and Mead, C. (1991). A bidirectional analog VLSI cochlear model. In Sequin, C. (ed), Advanced Research in VLSI, Proceedings of the 1991 Santa Cruz Conference, Cambridge, MA: MIT Press, pp. 153-163.
Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

John C. Duchi1, Michael I. Jordan1,2, Martin J. Wainwright1,2, Andre Wibisono1
1 Department of Electrical Engineering and Computer Science and 2 Department of Statistics
University of California, Berkeley, Berkeley, CA USA 94720
{jduchi,jordan,wainwrig,wibisono}@eecs.berkeley.edu

Abstract

We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates. We show that if pairs of function values are available, algorithms that use gradient estimates based on random perturbations suffer a factor of at most $\sqrt{d}$ in convergence rate over traditional stochastic gradient methods, where $d$ is the problem dimension. We complement our algorithmic development with information-theoretic lower bounds on the minimax convergence rate of such problems, which show that our bounds are sharp with respect to all problem-dependent quantities: they cannot be improved by more than constant factors.

1 Introduction

Derivative-free optimization schemes have a long history in optimization (see, for example, the book by Spall [21]), and they have the clearly desirable property of never requiring explicit gradient calculations. Classical techniques in stochastic and non-stochastic optimization, including Kiefer-Wolfowitz-type procedures [e.g. 17], use function difference information to approximate gradients of the function to be minimized rather than calculating gradients. Researchers in machine learning and statistics have studied online convex optimization problems in the bandit setting, where a player and adversary compete, with the player choosing points $\theta$ in some domain $\Theta$ and an adversary choosing a point $x$, forcing the player to suffer a loss $F(\theta; x)$, where $F(\cdot; x) : \Theta \to \mathbb{R}$ is a convex function [13, 5, 1]. The goal is to choose optimal $\theta$ based only on observations of function values $F(\theta; x)$. Applications include online auctions and advertisement selection in search engine results. Additionally, the field of simulation-based optimization provides many examples of problems in which optimization is performed based only on function values [21, 10], and problems in which the objective is defined variationally (as the maximum of a family of functions), such as certain graphical model and structured-prediction problems, are also natural because explicit differentiation may be difficult [23]. Despite the long history and recent renewed interest in such procedures, an understanding of their finite-sample convergence rates remains elusive. In this paper, we study algorithms for solving stochastic convex optimization problems of the form

$\min_{\theta \in \Theta} f(\theta) := \mathbb{E}_P[F(\theta; X)] = \int_{\mathcal{X}} F(\theta; x)\,dP(x),$   (1)

where $\Theta \subseteq \mathbb{R}^d$ is a compact convex set, $P$ is a distribution over the space $\mathcal{X}$, and for $P$-almost every $x \in \mathcal{X}$, the function $F(\cdot; x)$ is closed convex. Our focus is on the convergence rates of algorithms that observe only stochastic realizations of the function values $f(\theta)$.

Work on this problem includes Nemirovski and Yudin [18, Chapter 9.3], who develop a randomized sampling strategy that estimates $\nabla F(\theta; x)$ using samples from the surface of the $\ell_2$-sphere, and Flaxman et al. [13], who build on this approach, applying it to bandit convex optimization problems. The convergence rates in these works are (retrospectively) sub-optimal [20, 2]: Agarwal et al.
[2] provide algorithms that achieve convergence rates (ignoring logarithmic factors) of $O(\mathrm{poly}(d)/\sqrt{k})$, where $\mathrm{poly}(d)$ is a polynomial in the dimension $d$, for stochastic algorithms receiving only single function values, but (as the authors themselves note) the algorithms are quite complicated.

Some of the difficulties inherent in optimization using only a single function evaluation can be alleviated when the function $F(\theta; x)$ can be evaluated at two points, as noted independently by Agarwal et al. [1] and Nesterov [20]. The insight is that for small $u$, the quantity $(F(\theta + uZ; x) - F(\theta; x))/u$ approximates a directional derivative of $F(\theta; x)$ and can thus be used in first-order optimization schemes. Such two-sample-based gradient estimators allow simpler analyses, with sharper convergence rates [1, 20], than algorithms that have access to only a single function evaluation in each iteration.

In the current paper, we take this line of work further, finding the optimal rate of convergence for procedures that are only able to obtain function evaluations, $F(\theta; X)$, for samples $X$. Moreover, adopting the two-point perspective, we present simple randomization-based algorithms that achieve these optimal rates. More formally, we study algorithms that receive paired observations $Y(\theta, \tau) \in \mathbb{R}^2$, where $\theta$ and $\tau$ are points the algorithm selects, and the $t$-th sample is

$Y(\theta^t, \tau^t) := \begin{bmatrix} F(\theta^t; X^t) \\ F(\tau^t; X^t) \end{bmatrix}$   (2)

where $X^t$ is a sample drawn from the distribution $P$. After $k$ iterations, the algorithm returns a vector $\hat{\theta}(k) \in \Theta$. In this setting, we analyze stochastic gradient and mirror-descent procedures [27, 18, 6, 19] that construct gradient estimators using the two-point observations $Y^t$. By a careful analysis of the dimension dependence of certain random perturbation schemes, we show that the convergence rate attained by our stochastic gradient methods is roughly a factor of $\sqrt{d}$ worse than that attained by stochastic methods that observe the full gradient $\nabla F(\theta; X)$. Under appropriate conditions, our convergence rates are a factor of $\sqrt{d}$ better than those attained by Agarwal et al. [1] and Nesterov [20]. In addition, though we present our results in the framework of stochastic optimization, our analysis applies to (two-point) bandit online convex optimization problems [13, 5, 1], and we consequently obtain the sharpest rates for such problems. Finally, we show that the convergence rates we provide are tight, meaning sharp to within constant factors, by using information-theoretic techniques for constructing lower bounds on statistical estimators.

2 Algorithms

Stochastic mirror descent methods are a class of stochastic gradient methods for solving the problem $\min_{\theta \in \Theta} f(\theta)$. They are based on a proximal function $\psi$, which is a differentiable convex function defined over $\Theta$ that is assumed (w.l.o.g. by scaling) to be 1-strongly convex with respect to the norm $\|\cdot\|$ over $\Theta$. The proximal function defines a Bregman divergence $D_\psi : \Theta \times \Theta \to \mathbb{R}_+$ via

$D_\psi(\theta, \tau) := \psi(\theta) - \psi(\tau) - \langle \nabla\psi(\tau), \theta - \tau \rangle \geq \tfrac{1}{2}\|\theta - \tau\|^2,$   (3)

where the inequality follows from the strong convexity of $\psi$ over $\Theta$. The mirror descent (MD) method proceeds in a sequence of iterations that we index by $t$, updating the parameter vector $\theta^t \in \Theta$ using stochastic gradient information to form $\theta^{t+1}$. At iteration $t$ the MD method receives a (subgradient) vector $g^t \in \mathbb{R}^d$, which it uses to update $\theta^t$ via

$\theta^{t+1} = \operatorname*{argmin}_{\theta \in \Theta} \Big\{ \langle g^t, \theta \rangle + \frac{1}{\alpha(t)} D_\psi(\theta, \theta^t) \Big\},$   (4)

where $\{\alpha(t)\}$ is a non-increasing sequence of positive stepsizes.
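As a concrete illustration of the update (4): when $\psi(\theta) = \frac{1}{2}\|\theta\|_2^2$, the Bregman divergence is $D_\psi(\theta,\tau) = \frac{1}{2}\|\theta-\tau\|_2^2$ and the mirror descent step reduces to a projected stochastic gradient step. The following is a minimal sketch of that Euclidean special case; the function names and the choice of a Euclidean ball for $\Theta$ are ours, for illustration only.

```python
import numpy as np

def project_l2_ball(theta, radius):
    # Euclidean projection onto Theta = {theta : ||theta||_2 <= radius}.
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def mirror_descent_step(theta_t, g_t, alpha_t, radius):
    # With psi(theta) = 0.5 * ||theta||_2^2, the update (4) becomes
    # argmin_theta { <g_t, theta> + ||theta - theta_t||_2^2 / (2 alpha_t) },
    # i.e. a gradient step followed by projection back onto Theta.
    return project_l2_ball(theta_t - alpha_t * g_t, radius)
```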
We make two standard assumptions throughout the paper. Let $\theta^*$ denote a minimizer of the problem (1). The first assumption [18, 6, 19] describes the properties of $\psi$ and the domain.

Assumption A. The proximal function $\psi$ is strongly convex with respect to the norm $\|\cdot\|$. The domain $\Theta$ is compact, and there exists $R < \infty$ such that $D_\psi(\theta^*, \theta) \leq \frac{1}{2}R^2$ for $\theta \in \Theta$.

Our second assumption is standard for almost all first-order stochastic gradient methods [19, 24, 20], and it holds whenever the functions $F(\cdot; x)$ are $G$-Lipschitz with respect to the norm $\|\cdot\|$. We use $\|\cdot\|_*$ to denote the dual norm to $\|\cdot\|$, and let $g : \Theta \times \mathcal{X} \to \mathbb{R}^d$ denote a measurable subgradient selection for the functions $F$; that is, $g(\theta; x) \in \partial F(\theta; x)$ with $\mathbb{E}[g(\theta; X)] \in \partial f(\theta)$.

Assumption B. There is a constant $G < \infty$ such that the (sub)gradient selection $g$ satisfies $\mathbb{E}[\|g(\theta; X)\|_*^2] \leq G^2$ for $\theta \in \Theta$.

When Assumptions A and B hold, the convergence rate of stochastic mirror descent methods is well understood [6, 19, Section 2.3]. Indeed, let the variables $X^t \in \mathcal{X}$ be sampled i.i.d. according to $P$, set $g^t = g(\theta^t; X^t)$, and let $\theta^t$ be generated by the mirror descent iteration (4) with stepsize $\alpha(t) = \alpha/\sqrt{t}$. Then one obtains

$\mathbb{E}[f(\hat{\theta}(k))] - f(\theta^*) \leq \frac{1}{2\alpha}\frac{R^2}{\sqrt{k}} + \alpha\frac{G^2}{\sqrt{k}}.$   (5)

For the remainder of this section, we explore the use of function difference information to obtain subgradient estimates that can be used in mirror descent methods to achieve statements similar to the convergence guarantee (5).

2.1 Two-point gradient estimates and general convergence rates

In this section, we show, under a reasonable additional assumption, how to use two samples of the random function values $F(\theta; X)$ to construct nearly unbiased estimators of the gradient $\nabla f(\theta)$ of the expected function $f$. Our analytic techniques are somewhat different than methods employed in past work [1, 20]; as a consequence, we are able to achieve optimal dimension dependence.

Our method is based on an estimator of $\nabla f(\theta)$. Our algorithm uses a non-increasing sequence of positive smoothing parameters $\{u_t\}$ and a distribution $\mu$ on $\mathbb{R}^d$ (which we specify) satisfying $\mathbb{E}_\mu[ZZ^\top] = I$. Upon receiving the point $X^t \in \mathcal{X}$, we sample an independent vector $Z^t$ and set

$g^t = \frac{F(\theta^t + u_t Z^t; X^t) - F(\theta^t; X^t)}{u_t}\,Z^t.$   (6)

We then apply the mirror descent update (4) to the quantity $g^t$. The intuition for the estimator (6) of $\nabla f(\theta)$ follows from an understanding of the directional derivatives of the random function realizations $F(\theta; X)$. The directional derivative $f'(\theta, z)$ of the function $f$ at the point $\theta$ in the direction $z$ is $f'(\theta, z) := \lim_{u \downarrow 0} \frac{f(\theta + uz) - f(\theta)}{u}$. The limit always exists when $f$ is convex [15, Chapter VI], and if $f$ is differentiable at $\theta$, then $f'(\theta, z) = \langle \nabla f(\theta), z \rangle$. In addition, we have the following key insight (see also Nesterov [20, Eq. (32)]): whenever $\nabla f(\theta)$ exists,

$\mathbb{E}[f'(\theta, Z)Z] = \mathbb{E}[\langle \nabla f(\theta), Z \rangle Z] = \mathbb{E}[ZZ^\top \nabla f(\theta)] = \nabla f(\theta)$

if the random vector $Z \in \mathbb{R}^d$ has $\mathbb{E}[ZZ^\top] = I$. Intuitively, for $u_t$ small enough in the construction (6), the vector $g^t$ should be a nearly unbiased estimator of the gradient $\nabla f(\theta)$. To formalize our intuition, we make the following assumption.

Assumption C. There is a function $L : \mathcal{X} \to \mathbb{R}_+$ such that for ($P$-almost every) $x \in \mathcal{X}$, the function $F(\cdot; x)$ has $L(x)$-Lipschitz continuous gradient with respect to the norm $\|\cdot\|$, and the quantity $L(P)^2 := \mathbb{E}[L(X)^2] < \infty$.

With Assumption C, we can show that $g^t$ is (nearly) an unbiased estimator of $\nabla f(\theta^t)$.
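To make the construction concrete, here is a minimal sketch of the two-point estimator (6) inside the mirror descent loop, using the sampling distribution of Corollary 1 below ($Z$ uniform on the $\ell_2$-sphere of radius $\sqrt{d}$, which satisfies $\mathbb{E}[ZZ^\top] = I$ by symmetry). The zero-order oracle `F`, the sampler `sample_x`, and the schedules are placeholders of ours; the step and perturbation sizes follow the pattern of Theorem 1 below only up to constants.

```python
import numpy as np

def sample_sphere(d, rng):
    # Z uniform on the l2-sphere of radius sqrt(d); by symmetry E[Z Z^T] = I.
    z = rng.standard_normal(d)
    return np.sqrt(d) * z / np.linalg.norm(z)

def two_point_sgd(F, sample_x, d, k, radius, alpha=1.0, u=1e-4, seed=0):
    # F(theta, x): noisy zero-order oracle for F(theta; x); sample_x(): draws X^t ~ P.
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    avg = np.zeros(d)
    for t in range(1, k + 1):
        x = sample_x()
        z = sample_sphere(d, rng)
        u_t = u / t                       # shrinking perturbation size, u_t ~ 1/t
        # Two function values at theta and theta + u_t z give g^t as in (6).
        g = (F(theta + u_t * z, x) - F(theta, x)) / u_t * z
        alpha_t = alpha / np.sqrt(d * t)  # ~ alpha / (sqrt(s(d)) sqrt(t)); s(d) = d here
        theta = theta - alpha_t * g
        norm = np.linalg.norm(theta)
        if norm > radius:                 # projection onto Theta = l2-ball of radius R
            theta *= radius / norm
        avg += (theta - avg) / t          # running average, returns hat{theta}(k)
    return avg
```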
Furthermore, for appropriate random vectors $Z$, we can also show that $g^t$ has small norm, which yields better convergence rates for mirror descent-type methods. (See the proof of Theorem 1.) In order to study the convergence of mirror descent methods using the estimator (6), we make the following additional assumption on the distribution $\mu$.

Assumption D. Let $Z$ be sampled according to the distribution $\mu$, where $\mathbb{E}[ZZ^\top] = I$. The quantity $M(\mu)^2 := \mathbb{E}[\|Z\|^4\|Z\|_*^2] < \infty$, and there is a constant $s(d)$ such that for any vector $g \in \mathbb{R}^d$, $\mathbb{E}[\|\langle g, Z \rangle Z\|_*^2] \leq s(d)\|g\|_*^2$.

As the next theorem shows, Assumption D is somewhat innocuous, the constant $M(\mu)$ not even appearing in the final bound. The dimension (and norm) dependent term $s(d)$, however, is important for our results. In Section 2.2 we give explicit constructions of random variables that satisfy Assumption D. For now, we present the following result.

Theorem 1. Let $\{u_t\} \subset \mathbb{R}_+$ be a non-increasing sequence of positive numbers, and let $\theta^t$ be generated according to the mirror descent update (4) using the gradient estimator (6). Under Assumptions A, B, C, and D, if we set the step and perturbation sizes

$\alpha(t) = \alpha\,\frac{R}{2G\sqrt{s(d)}\sqrt{t}} \quad\text{and}\quad u_t = u\,\frac{G\sqrt{s(d)}}{L(P)M(\mu)\,t},$

then

$\mathbb{E}\big[f(\hat{\theta}(k))\big] - f(\theta^*) \leq 2\max\{\alpha, \alpha^{-1}\}\frac{RG\sqrt{s(d)}}{\sqrt{k}} + \alpha u\,\frac{RG\sqrt{s(d)}\log k}{k} + \alpha u^2\,\frac{RG\sqrt{s(d)}}{k},$

where $\hat{\theta}(k) = \frac{1}{k}\sum_{t=1}^{k}\theta^t$, and the expectation is taken with respect to the samples $X$ and $Z$.

The proof of Theorem 1 requires some technical care, since we never truly receive unbiased gradients, and it builds on convergence proofs developed in the analysis of online and stochastic convex optimization [27, 19, 1, 12, 20] to achieve bounds of the form (5). Though we defer the proof to Appendix A.1, at a very high level, the argument is as follows. By using Assumption C, we see that for small enough $u_t$, the gradient estimator $g^t$ from (6) is close (in expectation with respect to $X^t$) to $f'(\theta^t, Z^t)Z^t$, which is an unbiased estimate of $\nabla f(\theta^t)$. Assumption C allows us to bound the moments of the gradient estimator $g^t$. By carefully showing that the errors in $g^t$ as an estimator of $\nabla f(\theta^t)$ scale with $u_t$, we give an analysis similar to that used to derive the bound (5) to obtain Theorem 1.

Before continuing, we make a few remarks. First, the method is reasonably robust to the selection of the step-size multiplier $\alpha$ (as noted by Nemirovski et al. [19] for gradient-based MD methods). So long as $\alpha(t) \propto 1/\sqrt{t}$, mis-specifying the multiplier $\alpha$ results in a scaling at worst linear in $\max\{\alpha, \alpha^{-1}\}$. Perhaps more interestingly, our setting of $u_t$ was chosen mostly for convenience and elegance of the final bound. In a sense, we can simply take $u$ to be extremely close to zero (in practice, we must avoid numerical precision issues, and the stochasticity in the method makes such choices somewhat unnecessary). In addition, the convergence rate of the method is independent of the Lipschitz continuity constant $L(P)$ of the instantaneous gradients $\nabla F(\cdot; X)$; the penalty for nearly non-smooth objective functions comes into the bound only as a second-order term. This suggests similar results should hold for non-differentiable functions; we have been able to show that in some cases this is true, but a fully general result has proved elusive thus far. We are currently investigating strategies for the non-differentiable case.

Using similar arguments based on Azuma-Hoeffding-type inequalities, it is possible to give high-probability convergence guarantees [cf.
9, 19] under additional tail conditions on $g$, for example, that $\mathbb{E}[\exp(\|g(\theta; X)\|_*^2/G^2)] \leq \exp(1)$. Additionally, though we have presented our results as convergence guarantees for stochastic optimization problems, an inspection of our analysis in Appendix A.1 shows that we obtain (expected) regret bounds for bandit online convex optimization problems [e.g. 13, 5, 1].

2.2 Examples and corollaries

In this section, we provide examples of random sampling strategies that give direct convergence rate estimates for the mirror descent algorithm with subgradient samples (6). For each corollary, we specify the norm $\|\cdot\|$, proximal function $\psi$, and distribution $\mu$, verify that Assumptions A, B, C, and D hold, and then apply Theorem 1 to obtain a convergence rate. We begin with a corollary that describes the convergence rate of our algorithm when the expected function $f$ is Lipschitz continuous with respect to the Euclidean norm $\|\cdot\|_2$.

Corollary 1. Given the proximal function $\psi(\theta) := \frac{1}{2}\|\theta\|_2^2$, suppose that $\mathbb{E}[\|g(\theta; X)\|_2^2] \leq G^2$ and that $\mu$ is uniform on the surface of the $\ell_2$-ball of radius $\sqrt{d}$. With the step size choices in Theorem 1, we have

$\mathbb{E}\big[f(\hat{\theta}(k))\big] - f(\theta^*) \leq 2\max\{\alpha, \alpha^{-1}\}\frac{RG\sqrt{d}}{\sqrt{k}} + \alpha u\,\frac{RG\sqrt{d}\log k}{k} + \alpha u^2\,\frac{RG\sqrt{d}}{k}.$

Proof. Note that $\|Z\|_2 = \sqrt{d}$, which implies $M(\mu)^2 = \mathbb{E}[\|Z\|_2^6] = d^3$. Furthermore, it is easy to see that $\mathbb{E}[ZZ^\top] = I$. Thus, for $g \in \mathbb{R}^d$ we have

$\mathbb{E}[\|\langle g, Z\rangle Z\|_2^2] = d\,\mathbb{E}[\langle g, Z\rangle^2] = d\,\mathbb{E}[g^\top ZZ^\top g] = d\,\|g\|_2^2,$

which gives us $s(d) = d$.

The rate provided by Corollary 1 is the fastest derived to date for zero-order stochastic optimization using two function evaluations. Both Agarwal et al. [1] and Nesterov [20] achieve rates of convergence of order $RGd/\sqrt{k}$. Admittedly, neither requires that the random functions $F(\cdot; X)$ be continuously differentiable. Nonetheless, Assumption C does not require a uniform bound on the Lipschitz constant $L(X)$ of the gradients $\nabla F(\cdot; X)$; moreover, the convergence rate of the method is essentially independent of $L(P)$.

In high-dimensional scenarios, appropriate choices for the proximal function $\psi$ yield better scaling on the norm of the gradients [18, 14, 19, 12]. In online learning and stochastic optimization settings where one observes gradients $g(\theta; X)$, if the domain $\Theta$ is the simplex, then exponentiated gradient algorithms [16, 6] using the proximal function $\psi(\theta) = \sum_j \theta_j \log \theta_j$ obtain rates of convergence dependent on the $\ell_\infty$-norm of the gradients $\|g(\theta; X)\|_\infty$. This scaling is more palatable than dependence on Euclidean norms applied to the gradient vectors, which may be a factor of $\sqrt{d}$ larger. Similar results apply [7, 6] when using proximal functions based on $\ell_p$-norms. Indeed, making the choice $p = 1 + 1/\log d$ and $\psi(\theta) = \frac{1}{2(p-1)}\|\theta\|_p^2$, we obtain the following corollary.

Corollary 2. Assume that $\mathbb{E}[\|g(\theta; X)\|_\infty^2] \leq G^2$ and that $\Theta \subseteq \{\theta \in \mathbb{R}^d : \|\theta\|_1 \leq R\}$. Set $\mu$ to be uniform on the surface of the $\ell_2$-ball of radius $\sqrt{d}$. Use the step sizes specified in Theorem 1. There are universal constants $C_1 < 20e$ and $C_2 < 10e$ such that

$\mathbb{E}\big[f(\hat{\theta}(k))\big] - f(\theta^*) \leq C_1\max\{\alpha, \alpha^{-1}\}\frac{RG\sqrt{d\log d}}{\sqrt{k}} + C_2\,\frac{RG\sqrt{d\log d}}{k}\big(\alpha u^2 + u\log k\big).$

Proof. The proof of this corollary is somewhat involved. The main argument involves showing that the constants $M(\mu)$ and $s(d)$ may be taken as $M(\mu)^2 \leq d^6$ and $s(d) \leq 24\,d\log d$. First, we recall [18, 7, Appendix 1] that our choice of $\psi$ is strongly convex with respect to the norm $\|\cdot\|_p$. In addition, if we define $q = 1 + \log d$, then we have $1/p + 1/q = 1$, and $\|v\|_q \leq e\|v\|_\infty$ for any $v \in \mathbb{R}^d$ and any $d$.
As a consequence, we see that we may take the norm $\|\cdot\| = \|\cdot\|_1$ and the dual norm $\|\cdot\|_* = \|\cdot\|_\infty$, and $\mathbb{E}[\|\langle g, Z\rangle Z\|_q^2] \leq e^2\,\mathbb{E}[\|\langle g, Z\rangle Z\|_\infty^2]$. To apply Theorem 1 with appropriate values from Assumption D, we now bound $\mathbb{E}[\|\langle g, Z\rangle Z\|_\infty^2]$; see Appendix A.3 for a proof.

Lemma 3. Let $Z$ be distributed uniformly on the $\ell_2$-sphere of radius $\sqrt{d}$. For any $g \in \mathbb{R}^d$,

$\mathbb{E}[\|\langle g, Z\rangle Z\|_\infty^2] \leq C\,d\log d\,\|g\|_\infty^2,$

where $C \leq 24$ is a universal constant.

As a consequence of Lemma 3, the constant $s(d)$ of Assumption D satisfies $s(d) \leq C\,d\log d$. Finally, we have the essentially trivial bound $M(\mu)^2 = \mathbb{E}[\|Z\|_1^4\|Z\|_\infty^2] \leq d^6$ (we only need the quantity $M(\mu)$ to be finite to apply Theorem 1). Recalling that the set $\Theta \subseteq \{\theta \in \mathbb{R}^d : \|\theta\|_1 \leq R\}$, our choice of $\psi$ yields [e.g., 14, Lemma 3]

$(p-1)\,D_\psi(\theta, \tau) \leq \frac{1}{2}\|\theta\|_p^2 + \frac{1}{2}\|\tau\|_p^2 + \|\theta\|_p\|\tau\|_p.$

We thus find that $D_\psi(\theta, \tau) \leq 2R^2\log d$ for any $\theta, \tau \in \Theta$, and using the step and perturbation size choices of Theorem 1 gives the result.

Corollary 2 attains a convergence rate that scales with dimension as $\sqrt{d\log d}$. This dependence on dimension is much worse than that of (stochastic) mirror descent using full gradient information [18, 19]. The additional dependence on $d$ suggests that while $O(1/\epsilon^2)$ iterations are required to achieve $\epsilon$-optimization accuracy for mirror descent methods (ignoring logarithmic factors), the two-point method requires $O(d/\epsilon^2)$ iterations to obtain the same accuracy. A similar statement holds for the results of Corollary 1. In the next section, we show that this dependence is sharp: except for logarithmic factors, no algorithm can attain better convergence rates, including the problem-dependent constants $R$ and $G$.

3 Lower bounds on zero-order optimization

We turn to providing lower bounds on the rate of convergence for any method that receives random function values. For our lower bounds, we fix a norm $\|\cdot\|$ on $\mathbb{R}^d$ and as usual let $\|\cdot\|_*$ denote its dual norm. We assume that $\Theta = \{\theta \in \mathbb{R}^d : \|\theta\| \leq R\}$ is the norm ball of radius $R$. We study all optimization methods that receive function values of random convex functions, building on the analysis of stochastic gradient methods by Agarwal et al. [3]. More formally, let $\mathcal{A}_k$ denote the collection of all methods that observe a sequence of data points $(Y^1, \dots, Y^k) \subset \mathbb{R}^2$ with $Y^t = [F(\theta^t, X^t)\ F(\tau^t, X^t)]$ and return an estimate $\hat{\theta}(k) \in \Theta$. The classes of functions over which we prove our lower bounds consist of those satisfying Assumption B; that is, for a given Lipschitz constant $G > 0$, optimization problems over the set $\mathcal{F}_G$. The set $\mathcal{F}_G$ consists of pairs $(F, P)$ as described in the objective (1), and for $(F, P) \in \mathcal{F}_G$ we assume there is a measurable subgradient selection $g(\theta; X) \in \partial F(\theta; X)$ satisfying $\mathbb{E}_P[\|g(\theta; X)\|_*^2] \leq G^2$ for $\theta \in \Theta$.

Given an algorithm $A \in \mathcal{A}_k$ and a pair $(F, P) \in \mathcal{F}_G$, we define the optimality gap

$\epsilon_k(A, F, P, \Theta) := f(\hat{\theta}(k)) - \inf_{\theta \in \Theta} f(\theta) = \mathbb{E}_P\big[F(\hat{\theta}(k); X)\big] - \inf_{\theta \in \Theta}\mathbb{E}_P[F(\theta; X)],$   (7)

where $\hat{\theta}(k)$ is the output of $A$ on the sequence of observed function values. The quantity (7) is a random variable, since the $Y^t$ are random and $A$ may use additional randomness. We are thus interested in its expected value, and we define the minimax error

$\epsilon_k^*(\mathcal{F}_G, \Theta) := \inf_{A \in \mathcal{A}_k}\ \sup_{(F,P) \in \mathcal{F}_G}\ \mathbb{E}[\epsilon_k(A, F, P, \Theta)],$   (8)

where the expectation is over the observations $(Y^1, \dots, Y^k)$ and randomness in $A$.

3.1 Lower bounds and optimality

In this section, we give lower bounds on the minimax rate of optimization for a few specific settings.
We present our main results, then recall Corollaries 1 and 2, which imply we have attained the minimax rates of convergence for zero-order (stochastic) optimization schemes. The following sections contain proof sketches; we defer technical arguments to appendices.

We begin by providing minimax lower bounds when the expected function $f(\theta) = \mathbb{E}[F(\theta; X)]$ is Lipschitz continuous with respect to the $\ell_2$-norm. We have the following proposition.

Proposition 1. Let $\Theta = \{\theta \in \mathbb{R}^d : \|\theta\|_2 \leq R\}$ and $\mathcal{F}_G$ consist of pairs $(F, P)$ for which the subgradient mapping $g$ satisfies $\mathbb{E}_P[\|g(\theta; X)\|_2^2] \leq G^2$ for $\theta \in \Theta$. There exists a universal constant $c > 0$ such that for $k \geq d$,

$\epsilon_k^*(\mathcal{F}_G, \Theta) \geq c\,\frac{GR\sqrt{d}}{\sqrt{k}}.$

Combining the lower bounds provided by Proposition 1 with our algorithmic scheme in Section 2 shows that our analysis is essentially sharp, since Corollary 1 provides an upper bound for the minimax optimality of $RG\sqrt{d}/\sqrt{k}$. The stochastic gradient descent algorithm (4) coupled with the sampling strategy (6) is thus optimal for stochastic problems with two-point feedback.

Now we investigate the minimax rates at which it is possible to solve stochastic convex optimization problems whose objectives are Lipschitz continuous with respect to the $\ell_1$-norm. As noted earlier, such scenarios are suitable for high-dimensional problems [e.g. 19].

Proposition 2. Let $\Theta = \{\theta \in \mathbb{R}^d : \|\theta\|_1 \leq R\}$ and $\mathcal{F}_G$ consist of pairs $(F, P)$ for which the subgradient mapping $g$ satisfies $\mathbb{E}_P[\|g(\theta; X)\|_\infty^2] \leq G^2$ for $\theta \in \Theta$. There exists a universal constant $c > 0$ such that for $k \geq d$,

$\epsilon_k^*(\mathcal{F}_G, \Theta) \geq c\,\frac{GR\sqrt{d}}{\sqrt{k}}.$

We may again consider the optimality of our mirror descent algorithms, recalling Corollary 2. In this case, the MD algorithm (4) with the choice $\psi(\theta) = \frac{1}{2(p-1)}\|\theta\|_p^2$, where $p = 1 + 1/\log d$, implies that there exist universal constants $c$ and $C$ such that

$c\,\frac{GR\sqrt{d}}{\sqrt{k}} \leq \epsilon_k^*(\mathcal{F}_G, \Theta) \leq C\,\frac{GR\sqrt{d\log d}}{\sqrt{k}}$

for the problem class described in Proposition 2. Here the upper bound is again attained by our two-point mirror-descent procedure. Thus, to within logarithmic factors, our mirror-descent based algorithm is optimal for these zero-order optimization problems.

When full gradient information is available, that is, one has access to the subgradient selection $g(\theta; X)$, the $\sqrt{d}$ factors appearing in the lower bounds in Propositions 1 and 2 are not present [3]. The $\sqrt{d}$ factors similarly disappear from the convergence rates in Corollaries 1 and 2 when one uses $g^t = g(\theta; X)$ in the mirror descent updates (4); said differently, the constant $s(d) = 1$ in Theorem 1 [19, 6]. As noted in Section 2, our lower bounds consequently show that in addition to dependence on the radius $R$ and second moment $G^2$ in the case when gradients are available [3], all algorithms must suffer an additional $O(\sqrt{d})$ penalty in convergence rate. This suggests that for high-dimensional problems, it is preferable to use full gradient information if possible, even when the cost of obtaining the gradients is somewhat high.

3.2 Proofs of lower bounds

We sketch proofs for our lower bounds on the minimax error (8), which are based on the framework introduced by Agarwal et al. [3]. The strategy is to reduce the optimization problem to a testing problem: we choose a finite set of (well-separated) functions, show that optimizing well implies that one can identify the function being optimized, and then, as in statistical minimax theory [26, 25], apply information-theoretic lower bounds on the probability of error in hypothesis testing problems.

We begin with a finite set $\mathcal{V} \subset$
$\mathbb{R}^d$, to be chosen depending on the characteristics of the function class $\mathcal{F}_G$, and a collection of functions and distributions $\mathcal{G} = \{(F_v, P_v) : v \in \mathcal{V}\} \subset \mathcal{F}_G$ indexed by $\mathcal{V}$. Define $f_v(\theta) = \mathbb{E}_{P_v}[F_v(\theta; X)]$, and let $\theta_v^* \in \operatorname{argmin}_{\theta \in \Theta} f_v(\theta)$. We also let $\delta > 0$ be an accuracy parameter upon which $P_v$ and the following quantities implicitly depend. Following Agarwal et al. [3], we define the separation between two functions as

$\Delta(f_v, f_w) := \inf_{\theta \in \Theta}\big[f_v(\theta) + f_w(\theta)\big] - f_v(\theta_v^*) - f_w(\theta_w^*),$

and the minimal separation of the set $\mathcal{V}$ (this may depend on the accuracy parameter $\delta$) as

$\Delta^*(\mathcal{V}) := \min\{\Delta(f_v, f_w) : v, w \in \mathcal{V},\ v \neq w\}.$

For any algorithm $A \in \mathcal{A}_k$, there exists a hypothesis test $\hat{v} : (Y^1, \dots, Y^k) \to \mathcal{V}$ such that for $V$ sampled uniformly from $\mathcal{V}$ (see [3, Lemma 2]),

$P(\hat{v}(Y^1, \dots, Y^k) \neq V) \leq \frac{2}{\Delta^*(\mathcal{V})}\,\mathbb{E}[\epsilon_k(A, F_V, P_V, \Theta)] \leq \frac{2}{\Delta^*(\mathcal{V})}\,\max_{v \in \mathcal{V}}\mathbb{E}[\epsilon_k(A, F_v, P_v, \Theta)],$   (9)

where the expectation is taken over the observations $(Y^1, \dots, Y^k)$. By Fano's inequality [11],

$P(\hat{v} \neq V) \geq 1 - \frac{I(Y^1, \dots, Y^k; V) + \log 2}{\log|\mathcal{V}|}.$   (10)

We thus must upper bound the mutual information $I(Y^1, \dots, Y^k; V)$, which leads us to the following. (See Appendix B.3 for the somewhat technical proof of the lemma.)

Lemma 4. Let $X \mid V = v$ be distributed as $N(\delta v, \sigma^2 I)$, and let $F(\theta; x) = \langle \theta, x \rangle$. Let $V$ be a uniform random variable on $\mathcal{V} \subset \mathbb{R}^d$, and assume that $\mathrm{Cov}(V) \preceq \lambda I$ for some $\lambda \geq 0$. Then

$I(Y^1, Y^2, \dots, Y^k; V) \leq \frac{\lambda k \delta^2}{\sigma^2}.$

Using Lemma 4, we can obtain a lower bound on the minimax optimization error whenever the instantaneous objective functions are of the form $F(\theta; x) = \langle \theta, x \rangle$. Combining inequalities (9), (10), and Lemma 4, we find that if we choose the accuracy parameter

$\delta = \sigma\left(\frac{\frac{1}{2}\log|\mathcal{V}| - \log 2}{k\lambda}\right)^{1/2},$   (11)

(we assume that $|\mathcal{V}| > 4$) we find that there exists a pair $(F, P) \in \mathcal{F}_G$ such that

$\mathbb{E}[\epsilon_k(A, F, P, \Theta)] \geq \Delta^*(\mathcal{V})/4.$   (12)

The inequality (12) can give concrete lower bounds on the minimax optimization error. In our lower bounds, we use $F_v(\theta; x) = \langle \theta, x \rangle$ and set $P_v$ to be the $N(\delta v, \sigma^2 I)$ distribution, which allows us to apply Lemma 4. Proving Propositions 1 and 2 thus requires three steps:

1. Choose the set $\mathcal{V}$ with the property that $\mathrm{Cov}(V) \preceq \lambda I$ when $V \sim \mathrm{Uniform}(\mathcal{V})$.
2. Choose the variance parameter $\sigma^2$ such that for each $v \in \mathcal{V}$, the pair $(F_v, P_v) \in \mathcal{F}_G$.
3. Calculate the separation value $\Delta^*(\mathcal{V})$ as a function of the accuracy parameter $\delta$.

Enforcing $(F_v, P_v) \in \mathcal{F}_G$ amounts to choosing $\sigma^2$ so that $\mathbb{E}[\|X\|_*^2] \leq G^2$ for $X \sim N(\delta v, \sigma^2 I)$. By construction $f_v(\theta) = \delta\langle \theta, v \rangle$, which allows us to give lower bounds on the minimal separation $\Delta^*(\mathcal{V})$ for the choices of the norm constraint $\|\theta\| \leq R$ in Propositions 1 and 2. We defer formal proofs to Appendices B.1 and B.2, providing sketches here.

For Proposition 1, an argument using the probabilistic method implies that there are universal constants $c_1, c_2 > 0$ for which there is a $\frac{1}{2}$-packing $\mathcal{V}$ of the $\ell_2$-sphere of radius 1 with size at least $|\mathcal{V}| \geq \exp(c_1 d)$ and such that $\frac{1}{|\mathcal{V}|}\sum_{v \in \mathcal{V}} vv^\top \preceq \frac{c_2}{d} I_{d\times d}$. By the linearity of $f_v$, we find $\Delta(f_v, f_w) \geq \delta R/16$, and setting $\sigma^2 = G^2/(2d)$ and $\delta$ as in the choice (11) implies that $\mathbb{E}[\|X\|_2^2] \leq G^2$. Substituting $\delta$ and $\Delta^*(\mathcal{V})$ into the bound (12) proves Proposition 1.

For Proposition 2, we use the packing set $\mathcal{V} = \{\pm e_i : i = 1, \dots, d\}$. Standard bounds [8] on the normal distribution imply that for $Z \sim N(0, I)$, we have $\mathbb{E}[\|Z\|_\infty^2] = O(\log d)$. Thus we find that for $\sigma^2 = O(G^2/\log d)$ and suitably small $\delta$, we have $\mathbb{E}[\|X\|_\infty^2] = O(G^2)$; linearity yields $\Delta(f_v, f_w) \geq \delta R$ for $v \neq w \in \mathcal{V}$. Setting $\delta$
as in the expression (11) yields Proposition 2.

4 Discussion

We have analyzed algorithms for stochastic optimization problems that use only random function values, as opposed to gradient computations, to minimize an objective function. As our development of minimax lower bounds shows, the algorithms we present, which build on those proposed by Agarwal et al. [1] and Nesterov [20], are optimal: their convergence rates cannot be improved (in a minimax sense) by more than numerical constant factors. As a consequence of our results, we have attained sharp rates for bandit online convex optimization problems with multi-point feedback. We have also shown that there is a necessary sharp transition in convergence rates between stochastic gradient algorithms and algorithms that compute only function values. This result highlights the advantages of using gradient information when it is available, but we recall that there are many applications in which gradients are not available. Finally, one question that this work leaves open, and which we are actively attempting to address, is whether our convergence rates extend to non-smooth optimization problems. We conjecture that they do, though it will be interesting to understand the differences between smooth and non-smooth problems when only zeroth-order feedback is available.

Acknowledgments

This material is supported in part by ONR MURI grant N00014-11-1-0688 and the U.S. Army Research Laboratory and the U.S. Army Research Office under grant no. W911NF-11-1-0391. JCD was also supported by an NDSEG fellowship and a Facebook PhD fellowship.

References

[1] A. Agarwal, O. Dekel, and L. Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In Proceedings of the Twenty Third Annual Conference on Computational Learning Theory, 2010.
[2] A. Agarwal, D. P. Foster, D. Hsu, S. M. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. SIAM Journal on Optimization, to appear, 2011. URL http://arxiv.org/abs/1107.1744.
[3] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization. IEEE Transactions on Information Theory, 58(5):3235-3249, May 2012.
[4] K. Ball. An elementary introduction to modern convex geometry. In S. Levy, editor, Flavors of Geometry, pages 1-58. MSRI Publications, 1997.
[5] P. L. Bartlett, V. Dani, T. P. Hayes, S. M. Kakade, A. Rakhlin, and A. Tewari. High-probability regret bounds for bandit online linear optimization. In Proceedings of the Twenty First Annual Conference on Computational Learning Theory, 2008.
[6] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167-175, 2003.
[7] A. Ben-Tal, T. Margalit, and A. Nemirovski. The ordered subsets mirror descent optimization method with applications to tomography. SIAM Journal on Optimization, 12:79-108, 2001.
[8] V. Buldygin and Y. Kozachenko. Metric Characterization of Random Variables and Random Processes, volume 188 of Translations of Mathematical Monographs. American Mathematical Society, 2000.
[9] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. In Advances in Neural Information Processing Systems 14, pages 359-366, 2002.
[10] A. Conn, K. Scheinberg, and L. Vicente. Introduction to Derivative-Free Optimization, volume 8 of MPS-SIAM Series on Optimization. SIAM, 2009.
[11] T. M. Cover and J. A. Thomas.
Elements of Information Theory, Second Edition. Wiley, 2006.
[12] J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. In Proceedings of the Twenty Third Annual Conference on Computational Learning Theory, 2010.
[13] A. D. Flaxman, A. T. Kalai, and H. B. McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2005.
[14] C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3), 2002.
[15] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I & II. Springer, 1996.
[16] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1-64, Jan. 1997.
[17] H. J. Kushner and G. Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer, second edition, 2003.
[18] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, 1983.
[19] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[20] Y. Nesterov. Random gradient-free minimization of convex functions. URL http://www.ecore.be/DPs/dp_1297333890.pdf, 2011.
[21] J. C. Spall. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Wiley, 2003.
[22] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing: Theory and Applications, chapter 5, pages 210-268. Cambridge University Press, 2012.
[23] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[24] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543-2596, 2010.
[25] Y. Yang and A. Barron. Information-theoretic determination of minimax rates of convergence. Annals of Statistics, 27(5):1564-1599, 1999.
[26] B. Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pages 423-435. Springer-Verlag, 1997.
[27] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
Inverse Reinforcement Learning through Structured Classification

Edouard Klein1,2, Matthieu Geist2, Bilal Piot2,3, Olivier Pietquin2,3
1 LORIA - team ABC, Nancy, France ([email protected])
2 Supélec - IMS-MaLIS Research Group, Metz, France ([email protected])
3 UMI 2958 (GeorgiaTech-CNRS), Metz, France ({bilal.piot,olivier.pietquin}@supelec.fr)

Abstract

This paper addresses the inverse reinforcement learning (IRL) problem, that is, inferring a reward for which a demonstrated expert behavior is optimal. We introduce a new algorithm, SCIRL, whose principle is to use the so-called feature expectation of the expert as the parameterization of the score function of a multi-class classifier. This approach produces a reward function for which the expert policy is provably near-optimal. Contrary to most existing IRL algorithms, SCIRL does not require solving the direct RL problem. Moreover, with an appropriate heuristic, it can succeed with only trajectories sampled according to the expert behavior. This is illustrated on a car driving simulator.

1 Introduction

Inverse reinforcement learning (IRL) [14] consists in finding a reward function such that a demonstrated expert behavior is optimal. Many IRL algorithms (to be briefly reviewed in Sec. 5) search for a reward function such that the associated optimal policy induces a distribution over trajectories (or some measure of this distribution) which matches the one induced by the expert. Often, this distribution is characterized by the so-called feature expectation (see Sec. 2.1): given a reward function linearly parameterized by some feature vector, it is the expected discounted cumulative feature vector for starting in a given state, applying a given action and following the related policy.

In this paper, we take a different route. The expert behavior could be mimicked by a supervised learning algorithm generalizing the mapping from states to actions. Here, we consider generally multi-class classifiers which compute from a training set the parameters of a linearly parameterized score function; the decision rule for a given state is the argument (the action) which maximizes the score function for this state (see Sec. 2.2). The basic idea of our SCIRL (Structured Classification-based IRL) algorithm is simply to take an estimate of the expert feature expectation as the parameterization of the score function (see Sec. 3.1). The computed parameter vector actually defines a reward function for which we show the expert policy to be near-optimal (Sec. 3.2). Contrary to most existing IRL algorithms, a clear advantage of SCIRL is that it does not require solving repeatedly the direct reinforcement learning (RL) problem. It requires estimating the expert feature expectation, but this is roughly a policy evaluation problem (for an observed policy, so it is less involved than repeated policy optimization problems), see Sec. 4. Moreover, up to the use of some heuristic, SCIRL may be trained solely from transitions sampled from the expert policy (no need to sample the whole dynamics). We illustrate this on a car driving simulator in Sec. 6.

2 Background and Notations

2.1 (Inverse) Reinforcement Learning

A Markov Decision Process (MDP) [12] is a tuple $\{S, A, P, R, \gamma\}$ where $S$ is the finite state space¹, $A$ the finite action space, $P = \{P_a = (p(s'|s,a))_{1\leq s,s'\leq|S|},\ a \in A\}$ the set of Markovian transition probabilities, $R \in \mathbb{R}^S$ the state-dependent reward function and $\gamma$ the discount factor. A deterministic policy $\pi \in A^S$ defines the behavior of an agent.
The quality of this control is quantified by the value function $v_R^\pi \in \mathbb{R}^S$, associating to each state the expected discounted cumulative reward for starting in this state and following the policy $\pi$ afterwards: $v_R^\pi(s) = E[\sum_{t\geq 0}\gamma^t R(S_t) \mid S_0 = s, \pi]$. An optimal policy $\pi_R^*$ (according to the reward function $R$) is a policy whose associated value function $v_R^*$ satisfies $v_R^* \geq v_R^\pi$, for any policy $\pi$ and componentwise.

Let $P_\pi$ be the stochastic matrix $P_\pi = (p(s'|s,\pi(s)))_{1\leq s,s'\leq|S|}$. With a slight abuse of notation, we may write $a$ the policy which associates the action $a$ to each state $s$. The Bellman evaluation (resp. optimality) operators $T_R^\pi$ (resp. $T_R^*$) $: \mathbb{R}^S \to \mathbb{R}^S$ are defined as $T_R^\pi v = R + \gamma P_\pi v$ and $T_R^* v = \max_\pi T_R^\pi v$. These operators are contractions, and $v_R^\pi$ and $v_R^*$ are their respective fixed points: $v_R^\pi = T_R^\pi v_R^\pi$ and $v_R^* = T_R^* v_R^*$. The action-value function $Q_R^\pi \in \mathbb{R}^{S\times A}$ adds a degree of freedom on the choice of the first action; it is formally defined as $Q_R^\pi(s,a) = [T_R^a v_R^\pi](s)$. We also write $\rho_\pi$ the stationary distribution of the policy $\pi$ (satisfying $\rho_\pi^\top P_\pi = \rho_\pi^\top$).

Reinforcement learning and approximate dynamic programming aim at estimating the optimal control policy $\pi_R^*$ when the model (transition probabilities and the reward function) is unknown (but observed through interactions with the system to be controlled) and when the state space is too large to allow exact representations of the objects of interest (such as value functions or policies) [2, 15, 17]. We refer to this as the direct problem. On the contrary, (approximate) inverse reinforcement learning [11] aims at estimating a reward function for which an observed policy is (nearly) optimal. Let us call this policy the expert policy, denoted $\pi_E$. We may assume that it optimizes some unknown reward function $R_E$. The aim of IRL is to compute some reward $\hat{R}$ such that the expert policy is (close to) optimal, that is, such that $v_{\hat{R}}^{\pi_E} \approx v_{\hat{R}}^*$. We refer to this as the inverse problem.

Similarly to the direct problem, the state space may be too large for the reward function to admit a practical exact representation. Therefore, we restrict our search of a good reward among linearly parameterized functions. Let $\phi(s) = (\phi_1(s)\dots\phi_p(s))^\top$ be a feature vector composed of $p$ basis functions $\phi_i \in \mathbb{R}^S$; we define the parameterized reward functions as $R_\theta(s) = \theta^\top\phi(s) = \sum_{i=1}^p \theta_i\phi_i(s)$. Searching a good reward thus reduces to searching a good parameter vector $\theta \in \mathbb{R}^p$. Notice that we will use interchangeably $R_\theta$ and $\theta$ as subscripts (e.g., $v_\theta^\pi$ for $v_{R_\theta}^\pi$). Parameterizing the reward this way implies a related parameterization for the action-value function:

$Q_\theta^\pi(s,a) = \theta^\top\mu^\pi(s,a) \quad\text{with}\quad \mu^\pi(s,a) = E\Big[\sum_{t\geq 0}\gamma^t\phi(S_t)\,\Big|\, S_0 = s, A_0 = a, \pi\Big].$   (1)

Therefore, the action-value function shares the parameter vector of the reward function, with an associated feature vector $\mu^\pi$ called the feature expectation. This notion will be of primary importance for the contribution of this paper. Notice that each component $\mu_i^\pi$ of this feature vector is actually the action-value function of the policy $\pi$ assuming the reward is $\phi_i$: $\mu_i^\pi(s,a) = Q_{\phi_i}^\pi(s,a)$. Therefore, any algorithm designed for estimating an action-value function may be used to estimate the feature expectation, such as Monte-Carlo rollouts or temporal difference learning [7].

¹ This work can be extended to compact state spaces, up to some technical aspects.
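Since each component of $\mu^\pi$ is just an action-value function, it can be computed in closed form when a small MDP model is available. Below is a minimal sketch of this exact computation, using the matrix formula $\mu_a^\pi = \Phi + \gamma P_a(I - \gamma P_\pi)^{-1}\Phi$ given in Sec. 4.1; all variable names are ours, for illustration only.

```python
import numpy as np

def feature_expectation(P, pi, Phi, gamma):
    # P: dict a -> (|S| x |S|) transition matrix P_a; pi: array with pi[s] in A.
    # Phi: (|S| x p) feature matrix with rows phi(s)^T. Returns mu[a]: (|S| x p),
    # whose row s is mu^pi(s, a)^T, via mu_a = Phi + gamma P_a (I - gamma P_pi)^{-1} Phi.
    n = Phi.shape[0]
    P_pi = np.array([P[pi[s]][s] for s in range(n)])    # row s of P_pi
    V = np.linalg.solve(np.eye(n) - gamma * P_pi, Phi)  # (I - gamma P_pi)^{-1} Phi
    return {a: Phi + gamma * Pa @ V for a, Pa in P.items()}
```

Note that the costly part, the linear solve against $(I - \gamma P_\pi)$, is shared by all $p$ components, as remarked in Sec. 4.1.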
2.2 Classification with Linearly Parameterized Score Functions

Let $X$ be a compact or a finite set (of inputs to be classified) and let $Y$ be a finite set (of labels). Assume that inputs $x \in X$ are drawn according to some unknown distribution $P(x)$ and that there exists some oracle which associates to each of these inputs a label $y \in Y$ drawn according to the unknown conditional distribution $P(y|x)$. Generally speaking, the goal of multi-class classification is, given a training set $\{(x_i, y_i)_{1\leq i\leq N}\}$ drawn according to $P(x,y)$, to produce a decision rule $g \in Y^X$ which aims at minimizing the classification error $E[\mathbb{1}_{\{g(x)\neq y\}}] = P(g(x) \neq y)$, where $\mathbb{1}$ denotes the indicator function.

Here, we consider a more restrictive set of classification algorithms. We assume that the decision rule associates to an input the argument which maximizes a related score function, this score function being linearly parameterized and the associated parameters being learnt by the algorithm. More formally, let $\psi(x,y) = (\psi_1(x,y)\dots\psi_d(x,y))^\top \in \mathbb{R}^d$ be a feature vector whose components are $d$ basis functions $\psi_i \in \mathbb{R}^{X\times Y}$. The linearly parameterized score function $s_w \in \mathbb{R}^{X\times Y}$ of parameter vector $w \in \mathbb{R}^d$ is defined as $s_w(x,y) = w^\top\psi(x,y)$. The associated decision rule $g_w \in Y^X$ is defined as $g_w(x) \in \operatorname{argmax}_{y\in Y} s_w(x,y)$. Using a training set $\{(x_i, y_i)_{1\leq i\leq N}\}$, a linearly parameterized score function-based multi-class classification (MC² for short) algorithm computes a parameter vector $\theta_c$. The quality of the solution is quantified by the classification error $\epsilon_c = P(g_{\theta_c}(x) \neq y)$. We do not consider a specific MC² algorithm, as long as it classifies inputs by maximizing the argument of a linearly parameterized score function. For example, one may choose a multi-class support vector machine [6] (taking the kernel induced by the feature vector) or a structured large margin approach [18]. Other choices are possible; one can choose one's preferred algorithm.

3 Structured Classification for Inverse Reinforcement Learning

3.1 General Algorithm

Consider the classification framework of Sec. 2.2. The input $x$ may be seen as a state and the label $y$ as an action. Then, the decision rule $g_w(x)$ can be interpreted as a policy which is greedy according to the score function $w^\top\psi(x,y)$, which may itself be seen as an action-value function. Making the parallel with Eq. (1), if $\psi(x,y)$ is the feature expectation of some policy $\pi$ which produces the labels of the training set, and if the classification error is small, then $w$ will be the parameter vector of a reward function for which we may hope the policy $\pi$ to be near optimal. Based on these remarks, we are ready to present the proposed Structured Classification-based IRL (SCIRL) algorithm.

Let $\pi_E$ be the expert policy from which we would like to recover a reward function. Assume that we have a training set $D = \{(s_i, a_i = \pi_E(s_i))_{1\leq i\leq N}\}$ where states are sampled according to the expert stationary distribution² $\rho_E = \rho_{\pi_E}$. Assume also that we have an estimate $\hat{\mu}^{\pi_E}$ of the expert feature expectation $\mu^{\pi_E}$ defined in Eq. (1). How to practically estimate this quantity is postponed to Sec. 4.1; however, recall that estimating $\mu^{\pi_E}$ is simply a policy evaluation problem (estimating the action-value function of a given policy), as noted in Sec. 2.1. Assume also that an MC² algorithm has been chosen. The proposed algorithm simply consists in choosing $\theta^\top\hat{\mu}^{\pi_E}(s,a)$ as the linearly parameterized score function, training the classifier on $D$, which produces a parameter vector $\theta_c$, and outputting the reward function $R_{\theta_c}(s) = \theta_c^\top\phi(s)$.
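Before stating the procedure formally (Alg. 1 below), here is a minimal sketch of SCIRL built on a plain structured perceptron as the MC² algorithm; any classifier maximizing a linear score would do, and the perceptron choice, the variable names and the training loop are ours, for illustration only.

```python
import numpy as np

def scirl(D, mu_hat, actions, phi, epochs=50):
    # D: list of expert pairs (s, a); mu_hat[(s, a)]: estimated expert feature
    # expectation, a length-p vector; phi(s): reward feature vector. Returns R(s).
    p = len(next(iter(mu_hat.values())))
    theta = np.zeros(p)
    for _ in range(epochs):                        # structured perceptron as MC^2
        for s, a_expert in D:
            scores = {a: theta @ mu_hat[(s, a)] for a in actions}
            a_best = max(scores, key=scores.get)   # decision rule g_theta(s)
            if a_best != a_expert:                 # misclassified: perceptron update
                theta += mu_hat[(s, a_expert)] - mu_hat[(s, a_best)]
    return lambda s: theta @ phi(s)                # reward R_{theta_c}(s) = theta_c^T phi(s)
```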
Algorithm 1: SCIRL algorithm

Given a training set $D = \{(s_i, a_i = \pi_E(s_i))_{1\leq i\leq N}\}$, an estimate $\hat{\mu}^{\pi_E}$ of the expert feature expectation $\mu^{\pi_E}$, and an MC² algorithm;
Compute the parameter vector $\theta_c$ using the MC² algorithm fed with the training set $D$ and considering the parameterized score function $\theta^\top\hat{\mu}^{\pi_E}(s,a)$;
Output the reward function $R_{\theta_c}(s) = \theta_c^\top\phi(s)$.

The proposed approach is summarized in Alg. 1. We call this Structured Classification-based IRL because using the (estimated) expert feature expectation as the feature vector for the classifier somehow takes the MDP structure into account in the classification problem and allows outputting a reward vector. Notice that, contrary to most existing IRL algorithms, SCIRL does not require solving the direct problem. If it possibly requires estimating the expert feature expectation, this is just a policy evaluation problem, less difficult than the policy optimization issue involved in the direct problem. This is further discussed in Sec. 5.

² For example, if the Markov chain induced by the expert policy is fast-mixing, sampling a trajectory will quickly lead to sample states according to this distribution.

3.2 Analysis

In this section, we show that the expert policy $\pi_E$ is close to optimal according to the reward function $R_{\theta_c}$, more precisely that $E_{s\sim\rho_E}[v_{\theta_c}^*(s) - v_{\theta_c}^{\pi_E}(s)]$ is small. Before stating our main result, we need to introduce some notations and define some objects. We will use the first-order discounted future state distribution concentration coefficient $C_f$ [9]:

$C_f = (1-\gamma)\sum_{t\geq 0}\gamma^t c(t) \quad\text{with}\quad c(t) = \max_{\pi_1,\dots,\pi_t,\,s\in S}\frac{(\rho_E^\top P_{\pi_1}\dots P_{\pi_t})(s)}{\rho_E(s)}.$

We note $\pi_c$ the decision rule of the classifier: $\pi_c(s) \in \operatorname{argmax}_{a\in A}\theta_c^\top\hat{\mu}^{\pi_E}(s,a)$. The classification error is therefore $\epsilon_c = E_{s\sim\rho_E}[\mathbb{1}_{\{\pi_c(s)\neq\pi_E(s)\}}] \in [0,1]$. We write $\hat{Q}_{\theta_c}^{\pi_E} = \theta_c^\top\hat{\mu}^{\pi_E}$ the score function computed from the training set $D$ (which can be interpreted as an approximate action-value function). Let also $\epsilon_\mu = \hat{\mu}^{\pi_E} - \mu^{\pi_E} : S\times A \to \mathbb{R}^p$ be the feature expectation error. Consequently, we define the action-value function error as $\epsilon_Q = \hat{Q}_{\theta_c}^{\pi_E} - Q_{\theta_c}^{\pi_E} = \theta_c^\top(\hat{\mu}^{\pi_E} - \mu^{\pi_E}) = \theta_c^\top\epsilon_\mu : S\times A\to\mathbb{R}$. We finally define the mean delta-max action-value function error as $\bar{\epsilon}_Q = E_{s\sim\rho_E}[\max_{a\in A}\epsilon_Q(s,a) - \min_{a\in A}\epsilon_Q(s,a)] \geq 0$.

Theorem 1. Let $R_{\theta_c}$ be the reward function output by Alg. 1. Let also the quantities $C_f$, $\epsilon_c$ and $\bar{\epsilon}_Q$ be defined as above. We have

$0 \leq E_{s\sim\rho_E}\big[v_{R_{\theta_c}}^* - v_{R_{\theta_c}}^{\pi_E}\big] \leq \frac{C_f}{1-\gamma}\Big(\bar{\epsilon}_Q + \epsilon_c\,\frac{2\gamma\|R_{\theta_c}\|_\infty}{1-\gamma}\Big).$

Proof. As the proof only relies on the reward $R_{\theta_c}$, we omit the related subscripts to keep the notations simple (e.g., $v^*$ for $v_{R_{\theta_c}}^*$ or $R$ for $R_{\theta_c}$). First, we link the error $E_{s\sim\rho_E}[v^*(s) - v^{\pi_E}(s)]$ to the Bellman residual $E_{s\sim\rho_E}[[T^*v^{\pi_E}](s) - v^{\pi_E}(s)]$. Componentwise, we have that:

$v^* - v^{\pi_E} = T^{\pi^*}v^* - T^{\pi^*}v^{\pi_E} + T^{\pi^*}v^{\pi_E} - T^*v^{\pi_E} + T^*v^{\pi_E} - v^{\pi_E}$
$\overset{(a)}{\leq} \gamma P_{\pi^*}(v^* - v^{\pi_E}) + T^*v^{\pi_E} - v^{\pi_E} \overset{(b)}{\leq} (I - \gamma P_{\pi^*})^{-1}(T^*v^{\pi_E} - v^{\pi_E}).$

Inequality (a) holds because $T^{\pi^*}v^{\pi_E} \leq T^*v^{\pi_E}$ and inequality (b) holds thanks to [9, Lemma 4.2]. Moreover, $v^*$ being optimal, we have that $v^* - v^{\pi_E} \geq 0$, and $T^*$ being the Bellman optimality operator, we have $T^*v^{\pi_E} \geq T^{\pi_E}v^{\pi_E} = v^{\pi_E}$. Additionally, remark that $(I - \gamma P_{\pi^*})^{-1} = \sum_{t\geq 0}\gamma^t P_{\pi^*}^t$. Therefore, using the definition of the concentration coefficient $C_f$, we have that:

$0 \leq E_{s\sim\rho_E}[v^*(s) - v^{\pi_E}(s)] \leq \frac{C_f}{1-\gamma}\,E_{s\sim\rho_E}\big[[T^*v^{\pi_E}](s) - v^{\pi_E}(s)\big].$   (2)

This result actually follows closely the one of [9, Theorem 4.2]. There remains to bound the Bellman residual $E_{s\sim\rho_E}[[T^*v^{\pi_E}](s) - v^{\pi_E}(s)]$.
Considering the decomposition $T^* v^{\pi_E} - v^{\pi_E} = T^* v^{\pi_E} - T^{\pi_c} v^{\pi_E} + T^{\pi_c} v^{\pi_E} - v^{\pi_E}$, we bound $E_{s \sim \rho_E}[[T^* v^{\pi_E}](s) - [T^{\pi_c} v^{\pi_E}](s)]$ and $E_{s \sim \rho_E}[[T^{\pi_c} v^{\pi_E}](s) - v^{\pi_E}(s)]$ separately.

The policy $\pi_c$ (the decision rule of the classifier) is greedy with respect to $\hat Q^{\pi_E}_{\theta_c} = \theta_c^\top \hat\mu_E$. Therefore, for any state-action couple $(s, a) \in S \times A$ we have:
$$\hat Q^{\pi_E}_{\theta_c}(s, \pi_c(s)) \ge \hat Q^{\pi_E}_{\theta_c}(s, a) \;\Rightarrow\; Q^{\pi_E}_{\theta_c}(s, a) \le Q^{\pi_E}_{\theta_c}(s, \pi_c(s)) + \epsilon_Q(s, \pi_c(s)) - \epsilon_Q(s, a).$$
By definition, $Q^{\pi_E}_{\theta_c}(s, a) = [T^a v^{\pi_E}](s)$ and $Q^{\pi_E}_{\theta_c}(s, \pi_c(s)) = [T^{\pi_c} v^{\pi_E}](s)$. Therefore, for $s \in S$:
$$\forall a \in A,\; [T^a v^{\pi_E}](s) \le [T^{\pi_c} v^{\pi_E}](s) + \epsilon_Q(s, \pi_c(s)) - \epsilon_Q(s, a) \;\Rightarrow\; [T^* v^{\pi_E}](s) \le [T^{\pi_c} v^{\pi_E}](s) + \max_{a \in A} \epsilon_Q(s, a) - \min_{a \in A} \epsilon_Q(s, a).$$
Taking the expectation according to $\rho_E$ and noticing that $T^* v^{\pi_E} \ge T^{\pi_c} v^{\pi_E}$, we bound the first term:
$$0 \le E_{s \sim \rho_E}\big[[T^* v^{\pi_E}](s) - [T^{\pi_c} v^{\pi_E}](s)\big] \le \bar\epsilon_Q. \quad (3)$$
There finally remains to bound the term $E_{s \sim \rho_E}[[T^{\pi_c} v^{\pi_E}](s) - v^{\pi_E}(s)]$. Let us write $M \in \mathbb{R}^{|S| \times |S|}$ for the diagonal matrix defined as $M = \operatorname{diag}(\mathbf{1}_{\{\pi_c(s) \neq \pi_E(s)\}})$. Using this, the Bellman operator $T^{\pi_c}$ may be written, for any $v \in \mathbb{R}^S$, as:
$$T^{\pi_c} v = R + \gamma M P_{\pi_c} v + \gamma(I - M) P_{\pi_E} v = R + \gamma P_{\pi_E} v + \gamma M(P_{\pi_c} - P_{\pi_E}) v.$$
Applying this operator to $v^{\pi_E}$ and recalling that $R + \gamma P_{\pi_E} v^{\pi_E} = T^{\pi_E} v^{\pi_E} = v^{\pi_E}$, we get:
$$T^{\pi_c} v^{\pi_E} - v^{\pi_E} = \gamma M(P_{\pi_c} - P_{\pi_E}) v^{\pi_E} \;\Rightarrow\; \big|\rho_E^\top(T^{\pi_c} v^{\pi_E} - v^{\pi_E})\big| = \gamma\,\big|\rho_E^\top M(P_{\pi_c} - P_{\pi_E}) v^{\pi_E}\big|.$$
One can easily see that $\|(P_{\pi_c} - P_{\pi_E}) v^{\pi_E}\|_\infty \le \frac{2}{1-\gamma}\|R\|_\infty$, which allows bounding the last term:
$$\big|E_{s \sim \rho_E}\big[[T^{\pi_c} v^{\pi_E}](s) - v^{\pi_E}(s)\big]\big| \le \epsilon_c\,\frac{2\gamma}{1-\gamma}\|R\|_\infty. \quad (4)$$
Injecting the bounds of Eqs. (3) and (4) into Eq. (2) gives the stated result.

This result shows that if the expert feature expectation is well estimated (in the sense that the estimation error $\epsilon_\mu$ is small for states sampled according to the expert stationary policy and for all actions) and if the classification error $\epsilon_c$ is small, then the proposed generic algorithm outputs a reward function $R_{\theta_c}$ for which the expert policy is near optimal. A direct corollary of Th. 1 is that, given the true expert feature expectation $\mu_E$ and a perfect classifier ($\epsilon_c = 0$), $\pi_E$ is the unique optimal policy for $R_{\theta_c}$.

One may argue that this bound trivially holds for the null reward function (a reward often exhibited to show that IRL is an ill-posed problem), obtained if $\theta_c = 0$. However, recall that the parameter vector $\theta_c$ is computed by the classifier. With $\theta_c = 0$, the decision rule would be a random policy and we would have $\epsilon_c = \frac{|A|-1}{|A|}$, the worst possible classification error. This case is therefore very unlikely, and we advocate that the proposed approach somehow disambiguates the IRL problem (at least, it does not output trivial reward functions such as the null vector). Also, this bound is scale-invariant: one could impose $\|\theta_c\| = 1$ or normalize (action-) value functions by $\|R_{\theta_c}\|_\infty^{-1}$.

One should notice that there is a hidden dependency of the classification error $\epsilon_c$ on the estimated expert feature expectation $\hat\mu_E$. Indeed, the minimum achievable classification error depends on the hypothesis space spanned by the basis functions chosen for the score function of the MC² algorithm (here $\hat\mu_E$). Nevertheless, provided a good representation for the reward function (that is, a good choice of basis functions $\phi_i$) and a small estimation error, this should not be a practical problem. Finally, while our bound relies on the generalization errors $\epsilon_c$ and $\bar\epsilon_Q$, the classifier will only use $(\hat\mu_E(s_i, a))_{1 \le i \le N, a \in A}$ in the training phase, where the $s_i$ are the states from the set $D$.
It outputs $\theta_c$, seen as a reward function; the estimated feature expectation $\hat\mu_E$ is thus no longer required after training. Therefore, in practice it should be sufficient to estimate $\hat\mu_E$ well on the state-action couples $(s_i, a)_{1 \le i \le N, a \in A}$, which allows envisioning Monte-Carlo rollouts, for example.

4 A Practical Approach

4.1 Estimating the Expert Feature Expectation

SCIRL relies on an estimate $\hat\mu_E$ of the expert feature expectation. Basically, this is a policy evaluation problem. A key observation, already made, is that each component of $\mu_E$ is the action-value function of $\pi_E$ for the reward function $\phi_i$: $\mu_E^i(s, a) = Q^{\pi_E}_{\phi_i}(s, a) = [T^a_{\phi_i} v^{\pi_E}_{\phi_i}](s)$. We briefly review its exact computation and possible estimation approaches, and consider possible heuristics.

If the model is known, the feature expectation can be computed explicitly. Let $\Phi \in \mathbb{R}^{|S| \times p}$ be the feature matrix whose rows contain the feature vectors $\phi(s)^\top$ for all $s \in S$. For a fixed $a \in A$, let $\mu_E^a \in \mathbb{R}^{|S| \times p}$ be the feature expectation matrix whose rows are the expert feature vectors, that is, $(\mu_E(s, a))^\top$ for each $s \in S$. With these notations, we have $\mu_E^a = \Phi + \gamma P_a (I - \gamma P_{\pi_E})^{-1} \Phi$. Moreover, the related computational cost is of the same order of magnitude as evaluating a single policy, as the costly part, computing $(I - \gamma P_{\pi_E})^{-1}$, is shared by all components.

If the model is unknown, any temporal difference learning algorithm can be used to estimate the expert feature expectation [7], such as LSTD (Least-Squares Temporal Differences) [4]. Let $\psi : S \times A \to \mathbb{R}^d$ be a feature vector composed of $d$ basis functions $\psi_i \in \mathbb{R}^{S \times A}$. Each component $\mu_E^i$ of the expert feature expectation is parameterized by a vector $\xi_i \in \mathbb{R}^d$: $\mu_E^i(s, a) \approx \xi_i^\top \psi(s, a)$. Assume that we have a training set $\{(s_i, a_i, s'_i, a'_i = \pi_E(s'_i))\}_{1 \le i \le M}$ with actions $a_i$ not necessarily sampled according to the policy $\pi_E$ (e.g., this may be obtained by sampling trajectories according to an expert-based $\epsilon$-greedy policy), the aim being to have a better variability of tuples (non-expert actions should be tried). Let $\tilde\Psi \in \mathbb{R}^{M \times d}$ (resp. $\tilde\Psi'$) be the feature matrix whose rows are the feature vectors $\psi(s_i, a_i)^\top$ (resp. $\psi(s'_i, a'_i)^\top$). Let also $\tilde\Phi \in \mathbb{R}^{M \times p}$ be the feature matrix whose rows are the reward's feature vectors $\phi(s_i)^\top$. Finally, let $\Xi = [\xi_1 \dots \xi_p] \in \mathbb{R}^{d \times p}$ be the matrix of all parameter vectors. Applying LSTD to each component of the feature expectation gives the LSTD-$\mu$ algorithm [7]: $\Xi = (\tilde\Psi^\top(\tilde\Psi - \gamma\tilde\Psi'))^{-1}\tilde\Psi^\top\tilde\Phi$ and $\hat\mu_E(s, a) = \Xi^\top \psi(s, a)$. As in the exact case, the costly part (computing the inverse matrix) is shared by all feature expectation components, so the computational cost is reasonable (of the same order as LSTD).
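To make the LSTD-$\mu$ estimator concrete, here is a minimal sketch under stated assumptions. The feature arrays, the small ridge term for numerical stability, and all variable names are illustrative choices of ours; only the closed-form update $\Xi = (\tilde\Psi^\top(\tilde\Psi - \gamma\tilde\Psi'))^{-1}\tilde\Psi^\top\tilde\Phi$ comes from the text above.

```python
# Minimal sketch of LSTD-mu. Assumes the sampled feature matrices are already
# built; the ridge term 'reg' is our addition, not part of the formula.
import numpy as np

def lstd_mu(Psi, Psi_next, Phi, gamma, reg=1e-6):
    """Psi      : (M, d) rows psi(s_i, a_i)
       Psi_next : (M, d) rows psi(s'_i, pi_E(s'_i))
       Phi      : (M, p) rows phi(s_i), the reward feature vectors
       Returns Xi : (d, p), one LSTD weight vector per reward feature."""
    d = Psi.shape[1]
    A = Psi.T @ (Psi - gamma * Psi_next) + reg * np.eye(d)  # inverted once,
    B = Psi.T @ Phi                                         # shared by all p
    return np.linalg.solve(A, B)                            # components

def mu_hat(Xi, psi_sa):
    """Estimated expert feature expectation mu_hat_E(s, a) = Xi^T psi(s, a)."""
    return Xi.T @ psi_sa
```

As noted above, the single $d \times d$ solve is shared across all $p$ components, which is what keeps the cost at the level of one LSTD run.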
Provided a simulator and the ability to sample according to the expert policy, the expert feature expectation may also be estimated using Monte-Carlo rollouts for a given state-action pair (as noted in Sec. 3.2, $\hat\mu_E$ need only be known on $(s_i, a)_{1 \le i \le N, a \in A}$). Assuming that $K$ trajectories are sampled for each required state-action pair, this method requires $KN|A|$ rollouts.

In order to have a small error $\bar\epsilon_Q$, one may learn using transitions whose starting state is sampled according to $\rho_E$ and whose actions are uniformly distributed. However, it may happen that only transitions of the expert are available: $T = \{(s_i, a_i = \pi_E(s_i), s'_i)\}_{1 \le i \le N}$. While the state-action couples $(s_i, a_i)$ may be used to feed the classifier, the transitions $(s_i, a_i, s'_i)$ are not enough to provide an accurate estimate of the feature expectation. In this case, we can still expect an accurate estimate of $\mu_E(s, \pi_E(s))$, but there is little hope for $\mu_E(s, a \neq \pi_E(s))$. However, one can still rely on some heuristic; this does not fit the analysis of Sec. 3.2, but it can still provide good experimental results, as illustrated in Sec. 6. We propose such a heuristic. Assume that only the data $T$ is available and that we use it to provide an (accurate) estimate $\hat\mu_E(s, \pi_E(s))$ (this basically means estimating a value function instead of an action-value function as described above). We may adopt an optimistic point of view by assuming that applying a non-expert action merely delays the effect of the expert action. More formally, we associate to each state $s$ a virtual state $s_v$ for which $p(\cdot|s_v, a) = p(\cdot|s, \pi_E(s))$ for any action $a$ and for which the reward feature vector is the null vector, $\phi(s_v) = 0$. In this case, we have $\mu_E(s, a \neq \pi_E(s)) = \gamma\,\mu_E(s, \pi_E(s))$. Applying this idea to the available estimate (recalling that the classifier only requires evaluating $\hat\mu_E$ on $(s_i, a)_{1 \le i \le N, a \in A}$) provides the proposed heuristic: for $1 \le i \le N$, $\hat\mu_E(s_i, a \neq a_i) = \gamma\,\hat\mu_E(s_i, a_i)$.

We may even push this idea further, to get a simpler estimate of the expert feature expectation (but with the weakest guarantees). Assume that the set $T$ consists of one long trajectory, that is, $s'_i = s_{i+1}$ (thus $T = \{s_1, a_1, s_2, \dots, s_{N-1}, a_{N-1}, s_N, a_N\}$). We may estimate $\mu_E(s_i, a_i)$ using the single rollout available in the training set and use the proposed heuristic for the other actions:
$$\forall\, 1 \le i \le N, \quad \hat\mu_E(s_i, a_i) = \sum_{j=i}^{N} \gamma^{j-i}\,\phi(s_j) \quad \text{and} \quad \hat\mu_E(s_i, a \neq a_i) = \gamma\,\hat\mu_E(s_i, a_i). \quad (5)$$

To sum up, the expert feature expectation may be seen as a vector of action-value functions (for the same policy $\pi_E$ and different reward functions $\phi_i$). Consequently, any action-value function evaluation algorithm may be used to estimate $\mu_E(s, a)$. Depending on the available data, one may have to rely on some heuristic to assess the feature expectation for an unexperienced (non-expert) action. Also, this expert feature expectation estimate is only required for training the classifier, so it is sufficient to estimate it on the state-action couples $(s_i, a)_{1 \le i \le N, a \in A}$. In any case, estimating $\mu_E$ is not harder than estimating the action-value function of a given policy in the on-policy case, which is much easier than computing an optimal policy for an arbitrary reward function (as required by most existing IRL algorithms, see Sec. 5).

4.2 An Instantiation

As stated before, any MC² algorithm may be used. Here, we choose the structured large margin approach [18]. Let $L : S \times A \to \mathbb{R}_+$ be a user-defined margin function satisfying $L(s, \pi_E(s)) \le L(s, a)$ (here, $L(s_i, a_i) = 0$ and $L(s_i, a \neq a_i) = 1$). The MC² algorithm solves:
$$\min_{\theta, \zeta} \; \frac{1}{2}\|\theta\|^2 + \frac{\lambda}{N}\sum_{i=1}^{N} \zeta_i \quad \text{s.t.} \quad \forall i,\; \theta^\top\hat\mu_E(s_i, a_i) + \zeta_i \ge \max_a \big(\theta^\top\hat\mu_E(s_i, a) + L(s_i, a)\big).$$
Following [13], we express the equivalent hinge-loss form (noting that the slack variables $\zeta_i$ are tight, which allows moving the constraints into the objective function):
$$J(\theta) = \frac{1}{N}\sum_{i=1}^{N}\Big(\max_a \big(\theta^\top\hat\mu_E(s_i, a) + L(s_i, a)\big) - \theta^\top\hat\mu_E(s_i, a_i)\Big) + \frac{\lambda}{2}\|\theta\|^2.$$
This objective function is minimized using a subgradient descent; a sketch combining this step with the estimate of Eq. (5) is given below. The expert feature expectation is estimated using the scheme described in Eq. (5).
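The following is a minimal sketch of this instantiation, assuming a single expert trajectory and the 0/1 margin above. The step size, iteration count and regularization weight are illustrative hyperparameters of ours, not values prescribed by the text.

```python
# Sketch: Eq. (5) feature-expectation estimate + subgradient descent on J(theta).
import numpy as np

def feature_expectation_eq5(phis, gamma, n_actions, expert_actions):
    """phis: (N, p) reward features phi(s_i) along one expert trajectory.
       Returns mu: (N, n_actions, p) with mu[i, a_i] = sum_{j>=i} gamma^{j-i} phi(s_j)
       and mu[i, a != a_i] = gamma * mu[i, a_i] (the proposed heuristic)."""
    N, p = phis.shape
    mu_expert = np.zeros((N, p))
    acc = np.zeros(p)
    for i in range(N - 1, -1, -1):          # backward discounted sum
        acc = phis[i] + gamma * acc
        mu_expert[i] = acc
    mu = gamma * np.repeat(mu_expert[:, None, :], n_actions, axis=1)
    mu[np.arange(N), expert_actions] = mu_expert
    return mu

def scirl_theta(mu, expert_actions, lam=0.1, lr=0.05, n_iters=500):
    """Subgradient descent on J(theta) with margin L(s_i, a) = 1_{a != a_i}."""
    N, n_actions, p = mu.shape
    margins = np.ones((N, n_actions))
    margins[np.arange(N), expert_actions] = 0.0
    theta = np.zeros(p)
    for _ in range(n_iters):
        scores = mu @ theta + margins                      # (N, n_actions)
        a_star = scores.argmax(axis=1)                     # loss-augmented argmax
        g = (mu[np.arange(N), a_star]
             - mu[np.arange(N), expert_actions]).mean(axis=0)
        theta -= lr * (g + lam * theta)                    # subgradient step
    return theta                                           # reward: R(s) = theta @ phi(s)
```

The loss-augmented argmax inside the loop is the standard subgradient of the max term; everything else is plain gradient descent on the regularizer.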
5 Related Works

The notion of IRL was first introduced in [14] and first formalized in [11]. A classic approach to IRL, initiated in [1], consists in finding a policy (through some reward function) such that its feature expectation (or, more generally, some measure of the underlying trajectories' distribution) matches that of the expert policy. See [10] for a review. Notice that the related algorithms are not always able to output a reward function, even if they may make use of IRL as an intermediate step. In such cases, they are usually referred to as apprenticeship learning algorithms.

Closer to our contribution, some approaches also introduce structure into a classification procedure [8, 13]. In [8], a metric induced by the MDP is used to build a kernel which is used in a classification algorithm, showing improvements compared to a non-structured kernel. However, this approach is not an IRL algorithm, and, more importantly, assessing the metric of an MDP is a quite involved problem. In [13], a classification algorithm is also used to produce a reward function. However, instead of associating actions to states, as we do, it associates optimal policies (labels) to MDPs (inputs), which is how the structure is incorporated. This involves solving many MDPs.

As far as we know, all IRL algorithms require solving the direct RL problem repeatedly, except [5, 3]. [5] applies to linearly-solvable MDPs (where the control is done by imposing any dynamics on the system). In [3], based on a relative entropy argument, some utility function is maximized using a subgradient ascent. Estimating the subgradient requires sampling trajectories according to the policy that is optimal for the current estimated reward. This is avoided thanks to the use of importance sampling. Still, this requires sampling trajectories according to a non-expert policy, and the direct problem remains at the core of the approach (even if solving it is avoided). SCIRL does not require solving the direct problem, just estimating the feature expectation of the expert policy. In other words, instead of solving multiple policy optimization problems, we only solve one policy evaluation problem. This comes with theoretical guarantees (which is not the case for all IRL algorithms, e.g., [3]). Moreover, using heuristics which go beyond our analysis, SCIRL may rely solely on data provided by expert trajectories. We demonstrate this empirically in the next section. To the best of our knowledge, no other IRL algorithm can work in such a restrictive setting.

6 Experiments

We illustrate the proposed approach on a car driving simulator, similar to [1, 16]. The goal is to drive a car on a busy three-lane highway with randomly generated traffic (driving off-road is allowed on both sides). The car can move left and right, accelerate, decelerate and keep a constant speed. The expert optimizes a handcrafted reward $R_E$ which favours speed, punishes off-road driving, punishes collisions even more, and is neutral otherwise. We compare SCIRL as instantiated in Sec. 4.2 to the unstructured classifier (using the same classification algorithm) and to the algorithm of [1] (called here PIRL, for Projection IRL). We also consider the optimal behavior according to a randomly sampled reward function as a baseline (using the same reward feature vector as SCIRL and PIRL; the associated parameter vector is randomly sampled). For SCIRL and PIRL we use a discretization of the state space as the reward feature vector,
$\phi \in \mathbb{R}^{729}$: 9 horizontal positions for the user's car, 3 horizontal and 9 vertical positions for the closest traffic car, and 3 speeds. Notice that these features are much less informative than the ones used in [1, 16]. Actually, in [16] the features are so informative that sampling a random positive parameter vector $\theta$ already gives an acceptable behavior. The discount factor is $\gamma = 0.9$.

Figure 1: Highway problem. Both panels plot $E_{s \sim U}[v^{\pi}_{R_E}(s)]$ against the number of samples from the expert (50 to 400). The highest line is the expert value. For each curve, we show the mean (plain line), the standard deviation (dark color) and the min-max values (light color). The policy corresponding to the random reward is in blue, the policy output by the classifier is in yellow, and the optimal policy according to SCIRL's reward is in red. PIRL is the dark blue line.

The classifier uses the same feature vector, reproduced for each action. SCIRL is fed with $n$ trajectories of length $n$ (started in a random state), with $n$ varying from 3 to 20 (so it is fed with 9 to 400 transitions). Each experiment is repeated 50 times. The classifier uses the same data. PIRL is an iterative algorithm, each iteration requiring to solve the MDP for some reward function. It is run for 70 iterations, and all required objects (the feature expectation of a non-expert policy and an optimal policy according to some reward function at each iteration) are computed exactly using the model. We measure the performance of each approach with $E_{s \sim U}[v^{\pi}_{R_E}(s)]$, where $U$ is the uniform distribution (this allows measuring the generalization capability of each approach on states infrequently encountered), $R_E$ is the expert reward and $\pi$ is one of the following policies: the optimal policy for $R_E$ (upper baseline), the optimal policy for a random reward (lower baseline), the optimal policy for $R_{\theta_c}$ (SCIRL), the policy produced by PIRL, and the classifier decision rule.

Fig. 1 shows the performance of each approach as a function of the number of expert transitions used (except for PIRL, which uses the model). We can see that the classifier does not work well on this example. Increasing the number of samples would improve its performance, but after 400 transitions it still does not work as well as SCIRL with only about ten transitions. SCIRL works quite well here: after only about a hundred transitions it reaches the performance of PIRL, both being close to the expert value. We do not report exact computational times, but running SCIRL once with 400 transitions is approximately a hundred times faster than running PIRL for 70 iterations.

7 Conclusion

We have introduced a new way to perform IRL by structuring a linearly parameterized score function-based multi-class classification algorithm with an estimate of the expert feature expectation. This outputs a reward function for which we have shown the expert to be near optimal, provided a small classification error and a good expert feature expectation estimate. How to practically estimate this quantity has been discussed, and we have introduced a heuristic for the case where only transitions from the expert are available, along with a specific instantiation of the SCIRL algorithm.
We have shown on a car driving simulator benchmark that the proposed approach works well (even combined with the introduced heuristic), much better than the unstructured classifier and as well as a state-of-the-art algorithm making use of the model (and with a much lower computational time). In the future, we plan to deepen the theoretical properties of SCIRL (notably regarding the possible heuristics) and to apply it to real-world robotic problems.

Acknowledgments. This research was partly funded by the EU FP7 project ILHAIRE (grant no. 270780), by the EU INTERREG IVa project ALLEGRO and by the Région Lorraine (France).

References

[1] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the 21st International Conference on Machine Learning (ICML), 2004.
[2] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming (Optimization and Neural Computation Series, 3). Athena Scientific, 1996.
[3] Abdeslam Boularias, Jens Kober, and Jan Peters. Relative entropy inverse reinforcement learning. In JMLR Workshop and Conference Proceedings Volume 15: AISTATS 2011, 2011.
[4] Steven J. Bradtke and Andrew G. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22(1-3):33-57, 1996.
[5] Krishnamurthy Dvijotham and Emanuel Todorov. Inverse optimal control with linearly-solvable MDPs. In Proceedings of the 27th International Conference on Machine Learning (ICML), 2010.
[6] Yann Guermeur. VC theory of large margin multi-category classifiers. Journal of Machine Learning Research, 8:2551-2594, 2007.
[7] Edouard Klein, Matthieu Geist, and Olivier Pietquin. Batch, off-policy and model-free apprenticeship learning. In Proceedings of the European Workshop on Reinforcement Learning (EWRL), 2011.
[8] Francisco S. Melo and Manuel Lopes. Learning from demonstration using MDP induced metrics. In Proceedings of the European Conference on Machine Learning (ECML), 2010.
[9] Rémi Munos. Performance bounds in Lp norm for approximate value iteration. SIAM Journal on Control and Optimization, 46(2):541-561, 2007.
[10] Gergely Neu and Csaba Szepesvári. Training parsers by inverse reinforcement learning. Machine Learning, 77(2-3):303-337, 2009.
[11] Andrew Y. Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning (ICML), 2000.
[12] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience, 1994.
[13] Nathan Ratliff, Andrew D. Bagnell, and Martin Zinkevich. Maximum margin planning. In Proceedings of the 23rd International Conference on Machine Learning (ICML), 2006.
[14] Stuart Russell. Learning agents for uncertain environments (extended abstract). In Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT), 1998.
[15] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.
[16] Umar Syed and Robert Schapire. A game-theoretic approach to apprenticeship learning. In Advances in Neural Information Processing Systems 20 (NIPS), 2008.
[17] Csaba Szepesvári. Algorithms for Reinforcement Learning. Morgan and Claypool, 2010.
[18] Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. Learning structured prediction models: a large margin approach. In Proceedings of the 22nd International Conference on Machine Learning (ICML), 2005.
Bayesian active learning with localized priors for fast receptive field characterization

Jonathan W. Pillow, Center For Perceptual Systems, The University of Texas at Austin, [email protected]
Mijung Park, Electrical and Computer Engineering, The University of Texas at Austin, [email protected]

Abstract

Active learning methods can dramatically improve the yield of neurophysiology experiments by adaptively selecting stimuli to probe a neuron's receptive field (RF). Bayesian active learning methods specify a posterior distribution over the RF given the data collected so far in the experiment, and select a stimulus on each time step that maximally reduces posterior uncertainty. However, existing methods tend to employ simple Gaussian priors over the RF and do not exploit uncertainty at the level of hyperparameters. Incorporating this uncertainty can substantially speed up active learning, particularly when RFs are smooth, sparse, or local in space and time. Here we describe a novel framework for active learning under hierarchical, conditionally Gaussian priors. Our algorithm uses sequential Markov Chain Monte Carlo sampling ("particle filtering" with MCMC) to construct a mixture-of-Gaussians representation of the RF posterior, and selects optimal stimuli using an approximate infomax criterion. The core elements of this algorithm are parallelizable, making it computationally efficient for real-time experiments. We apply our algorithm to simulated and real neural data, and show that it can provide highly accurate receptive field estimates from very limited data, even with a small number of hyperparameter samples.

1 Introduction

Neurophysiology experiments are costly and time-consuming. Data are limited by an animal's willingness to perform a task (in awake experiments) and the difficulty of maintaining stable neural recordings. This motivates the use of active learning, known in statistics as "optimal experimental design", to improve experiments using adaptive stimulus selection in closed-loop experiments. These methods are especially powerful for models with many parameters, where traditional methods typically require large amounts of data.

In Bayesian active learning, the basic idea is to define a statistical model of the neural response, then carry out experiments to efficiently characterize the model parameters [1-6]. (See Fig. 1A). Typically, this begins with a (weakly- or non-informative) prior distribution, which expresses our uncertainty about these parameters before the start of the experiment. Then, recorded data (i.e., stimulus-response pairs) provide likelihood terms that we combine with the prior to obtain a posterior distribution. This posterior reflects our beliefs about the parameters given the data collected so far in the experiment. We then select a stimulus for the next trial that maximizes some measure of utility (e.g., expected reduction in entropy, mean-squared error, classification error, etc.), integrated with respect to the current posterior.

In this paper, we focus on the problem of receptive field (RF) characterization from extracellularly recorded spike train data. The receptive field is a linear filter that describes how the neuron integrates its input (e.g., light) over space and time; it can be equated with the linear term in a generalized linear model (GLM) of the neural response [7]. Typically, RFs are high-dimensional (with 10s to 100s of parameters, depending on the choice of input domain), making them an attractive target for active learning methods.
Our paper builds on prior work from Lewi et al. [6], a seminal paper that describes active learning for RFs under a conditionally Poisson point process model. Here we show that a sophisticated choice of prior distribution can lead to substantial improvements in active learning. Specifically, we develop a method for learning under a class of hierarchical, conditionally Gaussian priors that have been recently developed for RF estimation [8, 9]. These priors flexibly encode a preference for smooth, sparse, and/or localized structure, which are common features of real neural RFs. In fixed datasets ("passive learning"), the associated estimators give substantial improvements over both maximum likelihood and standard lasso/ridge-regression shrinkage estimators, but they have not yet been incorporated into frameworks for active learning.

Active learning with a non-Gaussian prior poses several major challenges, however, since the posterior is non-Gaussian, and the requisite posterior expectations are much harder to compute. We address these challenges by exploiting a conditionally Gaussian representation of the prior (and posterior) using sampling at the level of the hyperparameters. We demonstrate our method using the Automatic Locality Determination (ALD) prior introduced in [9], where hyperparameters control the locality of the RF in space-time and frequency. The resulting algorithm outperforms previous active learning methods on real and simulated neural data, even under various forms of model mismatch.

The paper is organized as follows. In Sec. 2, we formally define the Bayesian active learning problem and review the algorithm of [6], to which we will compare our results. In Sec. 3, we describe a hierarchical response model, and in Sec. 4 we describe the localized RF prior that we will employ for active learning. In Sec. 5, we describe a new active learning method for conditionally Gaussian priors. In Sec. 6, we show results of simulated experiments with simulated and real neural data.

2 Bayesian active learning

Bayesian active learning (or "experimental design") provides a model-based framework for selecting optimal stimuli or experiments. A Bayesian active learning method has three basic ingredients: (1) an observation model (likelihood) $p(y|x, k)$, specifying the conditional probability of a scalar response $y$ given a vector stimulus $x$ and parameter vector $k$; (2) a prior $p(k)$ over the parameters of interest; and (3) a loss or utility function $U$, which characterizes the desirability of a stimulus-response pair $(x, y)$ under the current posterior over $k$. The optimal stimulus $x$ is the one that maximizes the expected utility $E_{y|x}[U(x, y)]$, meaning the utility averaged over the distribution of (as yet) unobserved $y|x$.

One popular choice of utility function is the mutual information between $(x, y)$ and the parameters $k$. This is commonly known as information-theoretic or infomax learning [10]. It is equivalent to picking the stimulus on each trial that minimizes the expected posterior entropy. Let $D_t = \{x_i, y_i\}_{i=1}^{t}$ denote the data collected up to time step $t$ in the experiment. Under infomax learning, the optimal stimulus at time step $t+1$ is:
$$x_{t+1} = \arg\max_x E_{y|x,D_t}\big[I(y, k|x, D_t)\big] = \arg\min_x E_{y|x,D_t}\big[H(k|x, y, D_t)\big], \quad (1)$$
where $H(k|x, y, D_t) = -\int p(k|x, y, D_t)\log p(k|x, y, D_t)\,dk$ denotes the posterior entropy of $k$, and $p(y|x, D_t) = \int p(y|x, k)\,p(k|D_t)\,dk$ is the predictive distribution over response $y$ given stimulus $x$ and data $D_t$. The mutual information provided by $(y, x)$ about $k$, denoted by $I(y, k|x, D_t)$, is simply the difference between the prior and posterior entropy.
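As a toy illustration of the criterion in Eq. (1), consider a linear-Gaussian model with a fixed Gaussian prior. There the posterior covariance, and hence the posterior entropy, does not depend on the observed response, so the expectation over $y$ in Eq. (1) is trivial and the rule reduces to a closed-form search over candidates. The sketch below is ours, with illustrative names, and is not the method of any cited paper:

```python
# Toy infomax stimulus selection for a linear-Gaussian model: pick the
# candidate x minimizing the (response-independent) posterior entropy
# H = 0.5 * logdet(2*pi*e * Sigma_post), Sigma_post = (Sigma^-1 + x x^T/s^2)^-1.
import numpy as np

def next_stimulus_gaussian(Sigma, candidates, noise_var):
    Sigma_inv = np.linalg.inv(Sigma)
    best_x, best_H = None, np.inf
    for x in candidates:                       # each x is a length-d array
        post = np.linalg.inv(Sigma_inv + np.outer(x, x) / noise_var)
        H = 0.5 * np.linalg.slogdet(2 * np.pi * np.e * post)[1]
        if H < best_H:
            best_x, best_H = x, H
    return best_x
```

Note that nothing in this loop depends on the responses, which previews the remark in Sec. 3 that active learning confers no benefit for purely Gaussian models; the hierarchical prior introduced below is precisely what breaks this response-independence.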
2.1 Method of Lewi, Butera & Paninski 2009

Lewi et al. [6] developed a Bayesian active learning framework for RF characterization in closed-loop neurophysiology experiments, which we henceforth refer to as "Lewi-09". This method employs a conditionally Poisson generalized linear model (GLM) of the neural spike response:
$$\lambda_t = g(k^\top x_t), \qquad y_t \sim \mathrm{Poiss}(\lambda_t), \quad (2)$$
where $g$ is a nonlinear function that ensures a non-negative spike rate $\lambda_t$.

Figure 1: (A) Schematic of Bayesian active learning for neurophysiology experiments. For each presented stimulus $x$ and recorded response $y$ (upper right), we update the posterior over the receptive field $k$ (bottom), then select the stimulus that maximizes the expected information gain (upper left). (B) Graphical model for the non-hierarchical RF model used by Lewi-09. It assumes a Gaussian prior $p(k)$ and Poisson likelihood $p(y_t|x_t, k)$. (C) Graphical model for the hierarchical RF model used here, with a hyper-prior $p_\theta(\theta)$ over hyperparameters and a conditionally Gaussian prior $p(k|\theta)$ over the RF. For simplicity and speed, we assume a Gaussian likelihood for $p(y_t|x_t, k)$, though all examples in the manuscript involved real neural data or simulations from a Poisson GLM.

The Lewi-09 method assumes a Gaussian prior over $k$, which leads to a (non-Gaussian) posterior given by the product of the Poisson likelihood and the Gaussian prior. (See Fig. 1B). Neither the predictive distribution $p(y|x, D_t)$ nor the posterior entropy $H(k|x, y, D_t)$ can be computed in closed form. However, the log-concavity of the posterior (guaranteed for a suitable choice of $g$ [11]) motivates a tractable and accurate Gaussian approximation to the posterior, which provides a concise analytic formula for the posterior entropy [12, 13]. The key contributions of Lewi-09 include fast methods for updating the Gaussian approximation to the posterior and for selecting the stimulus (subject to a maximum-power constraint) that maximizes the expected information gain. The Lewi-09 algorithm yields substantial improvement in characterization performance relative to randomized iid (e.g., "white noise") stimulus selection. Below, we will benchmark the performance of our method against this algorithm.

3 Hierarchical RF models

Here we seek to extend the work of Lewi et al. to incorporate non-Gaussian priors in a hierarchical receptive field model. (See Fig. 1C). Intuitively, a good prior can improve active learning by reducing the prior entropy, i.e., the effective size of the parameter space to be searched. The drawback of more sophisticated priors is that they may complicate the problem of computing and optimizing the posterior expectations needed for active learning.

To focus more straightforwardly on the role of the prior distribution, we employ a simple linear-Gaussian model of the neural response:
$$y_t = k^\top x_t + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, \sigma^2), \quad (3)$$
where $\epsilon_t$ is iid zero-mean Gaussian noise with variance $\sigma^2$. We then place a hierarchical, conditionally Gaussian prior on $k$:
$$k\,|\,\theta \sim \mathcal{N}(0, C_\theta), \quad (4)$$
$$\theta \sim p_\theta, \quad (5)$$
where $C_\theta$ is a prior covariance matrix that depends on hyperparameters $\theta$. These hyperparameters in turn have a hyper-prior $p_\theta$. We will specify the functional form of $C_\theta$ in the next section. In this setup, the effective prior over $k$ is a mixture-of-Gaussians, obtained by marginalizing over $\theta$:
$$p(k) = \int p(k|\theta)\,p(\theta)\,d\theta = \int \mathcal{N}(0, C_\theta)\,p_\theta(\theta)\,d\theta. \quad (6)$$
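For intuition, here is a minimal sketch of drawing RF samples from the hierarchical prior of Eqs. (4)-(6). The covariance builder is a deliberately crude stand-in (a randomly scaled ridge covariance), not the ALD covariance introduced in the next section, and the hyper-prior range is an arbitrary choice:

```python
# Sample k from the hierarchical prior: theta ~ p(theta), then k|theta ~ N(0, C_theta).
# Marginally over theta, the draws follow the mixture-of-Gaussians prior of Eq. (6).
import numpy as np

def sample_prior(d, n_samples, rng=np.random.default_rng(0)):
    ks = []
    for _ in range(n_samples):
        rho = rng.uniform(0.1, 10.0)     # stand-in hyper-prior over a single scale
        C_theta = rho * np.eye(d)        # stand-in for C_theta (a plain ridge)
        ks.append(rng.multivariate_normal(np.zeros(d), C_theta))
    return np.array(ks)
```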
In this setup, the effective prior over k is a mixture-of-Gaussians, obtained by marginalizing over ?: Z Z p(k) = p(k|?)p(?)d? = N (0, C? ) p? (?)d?. (6) 3 Given data X = (x1 , . . . , xt )> and Y = (y1 , . . . , yt )> , the posterior also takes the form of a mixture-of-Gaussians: Z p(k|X, Y ) = p(k|X, Y, ?)p(?|X, Y )d? (7) where the conditional posterior given ? is the Gaussian p(k|X, Y, ?) = N (?? , ?? ), ?? = > 1 ? 2 ?? X Y, ?? = ( ?12 X > X + C??1 )?1 , (8) and the mixing weights are given by the marginal posterior, p(?|X, Y ) ? p(Y |X, ?)p? (?), (9) which we will only need up to a constant of proportionality. The marginal likelihood or evidence p(Y |X, ?) is the marginal probability of the data given the hyperparameters, and has a closed form for the linear Gaussian model: 1 p(Y |X, ?) = |2??? | 2 1 1 |2?? 2 I| 2 |2?C? | 2 where L = ? 2 (X > X)?1 and m = 1 > ? 2 LX Y exp 1 2  ?1 > ?1 ?> m , ? ?? ?? ? m L (10) . Several authors have pointed out that active learning confers no benefit over fixed-design experiments in linear-Gaussian models with Gaussian priors, due to the fact that the posterior covariance is response-independent [1, 6]. That is, an optimal design (one that minimizes the final posterior entropy) can be planned out entirely in advance of the experiment. However, this does not hold for linear-Gaussian models with non-Gaussian priors, such as those considered here. The posterior distribution in such models is data-dependent via the marginal posterior?s dependence on Y (eq. 9). Thus, active learning is warranted even for linear-Gaussian responses, as we will demonstrate empirically below. 4 Automatic Locality Determination (ALD) prior In this paper, we employ a flexible RF model underlying the so-called automatic locality determination (ALD) estimator [9].1 The key justification for the ALD prior is the observation that most neural RFs tend to be localized in both space-time and spatio-temporal frequency. Locality in space-time refers to the fact that (e.g., visual) neurons integrate input over a limited domain in time and space; locality in frequency refers to the band-pass (or smooth / low pass) character of most neural RFs. The ALD prior encodes these tendencies in the parametric form of the covariance matrix C? , where hyperparameters ? control the support of both the RF and its Fourier transform. The hyperparameters for the ALD prior are ? = (?, ?s , ?f , Ms , Mf )> , where ? is a ?ridge? parameter that determines the overall amplitude of the covariance; ?s and ?f are length-D vectors that specify the center of the RF support in space-time and frequency, respectively (where D is the degree of the RF tensor2 ); and Ms and Mf are D ? D positive definite matrices that describe an elliptical (Gaussian) region of support for the RF in space-time and frequency, respectively. In practice, we will also include the additive noise variance ? 2 (eq. 3) as a hyperparameter, since it plays a similar role to C in determining the posterior and evidence. Thus, for the (D = 2) examples considered here, there are 12 hyperparameters, including scalars ? 2 and ?, two hyperparameters each for ?s and ?f , and three each for symmetric matrices Ms and Mf . Note that although the conditional ALD prior over k|? assigns high prior probability to smooth and sparse RFs for some settings of ?, for other settings (i.e., where Ms and Mf describe elliptical regions large enough to cover the entire RF) the conditional prior corresponds to a simple ridge prior and imposes no such structure. 
We place a flat prior over $\theta$ so that no strong prior beliefs about spatial locality or bandpass frequency characteristics are imposed a priori. However, as data from a neuron with a truly localized RF accumulate, the support of the marginal posterior $p(\theta|D_t)$ shrinks down onto regions that favor a localized RF, shrinking the posterior entropy over $k$ far more quickly than is achievable with methods based on Gaussian priors.

5 Bayesian active learning with ALD

To perform active learning under the ALD model, we need two basic ingredients: (1) an efficient method for representing and updating the posterior $p(k|D_t)$ as data come in during the experiment; and (2) an efficient algorithm for computing and maximizing the expected information gain given a stimulus $x$. We will describe each of these in turn below.

5.1 Posterior updating via sequential Markov Chain Monte Carlo

To represent the ALD posterior over $k$ given data, we rely on the conditionally Gaussian representation of the posterior (eq. 7) using particles $\{\theta_i\}_{i=1,\dots,N}$ sampled from the marginal posterior, $\theta_i \sim P(\theta|D_t)$ (eq. 9). The posterior is then approximated as:
$$p(k|D_t) \approx \frac{1}{N}\sum_i p(k|D_t, \theta_i), \quad (11)$$
where each distribution $p(k|D_t, \theta_i)$ is Gaussian with $\theta_i$-dependent mean and covariance (eq. 8).

Markov Chain Monte Carlo (MCMC) is a popular method for sampling from distributions known only up to a normalizing constant. In cases where the target distribution evolves over time by accumulating more data, however, MCMC samplers are often impractical due to the time required for convergence (i.e., "burning in"). To reduce the computational burden, we use a sequential sampling algorithm to update the samples of the hyperparameters at each time step, based on the samples drawn at the previous time step. The main idea of our algorithm is adopted from the resample-move particle filter, which involves generating initial particles, resampling particles according to incoming data, and then performing MCMC moves to avoid degeneracy in the particles [14]. The details are as follows.

Initialization: On the first time step, generate initial hyperparameter samples $\{\theta_i\}$ from the hyper-prior $p_\theta$, which we take to be flat over a broad range in $\theta$.

Resampling: Given a new stimulus/response pair $\{x, y\}$ at time $t$, resample the existing particles according to the importance weights:
$$p(y_t|\theta_i^{(t)}, D_{t-1}, x_t) = \mathcal{N}(y_t\,|\,\mu_i^\top x_t,\; x_t^\top \Lambda_i x_t + \sigma_i^2), \quad (12)$$
where $(\mu_i, \Lambda_i)$ denote the mean and covariance of the Gaussian component attached to particle $\theta_i$. This ensures that the posterior evolves according to:
$$p(\theta_i^{(t)}|D_t) \propto p(y_t|\theta_i^{(t)}, D_{t-1}, x_t)\,p(\theta_i^{(t)}|D_{t-1}). \quad (13)$$

MCMC Move: Propagate particles via Metropolis-Hastings (MH), with multivariate Gaussian proposals centered on the current particle $\theta_i$ of the Markov chain: $\theta^* \sim \mathcal{N}(\theta_i, \Gamma)$, where $\Gamma$ is a diagonal matrix with diagonal entries given by the variance of the particles at the end of time step $t-1$. Accept the proposal with probability $\min(1, \alpha)$, where $\alpha = \frac{q(\theta^*)}{q(\theta_i)}$, with $q(\theta_i) = p(\theta_i|D_t)$. Repeat MCMC moves until the computational or time budget has expired.
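The following is a sketch of one resample-move sweep following Eqs. (12)-(13). The per-particle dict layout and the proposal scaling are implementation choices of ours; `log_target` stands for the unnormalized log marginal posterior of Eq. (9) (the log-evidence above plus the log hyper-prior):

```python
# One resample-move update: importance weighting by the one-step predictive
# density (Eq. 12), multinomial resampling, then a Metropolis-Hastings move.
import numpy as np

def _gauss_pdf(y, m, v):
    return np.exp(-0.5 * (y - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

def resample_move(particles, x, y, log_target, rng):
    """particles: list of dicts with keys 'theta', 'mu', 'Lambda', 'noise_var'."""
    w = np.array([_gauss_pdf(y, p['mu'] @ x, x @ p['Lambda'] @ x + p['noise_var'])
                  for p in particles])
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # resample
    particles = [dict(particles[i]) for i in idx]
    scale = np.std([p['theta'] for p in particles], axis=0) + 1e-8
    for p in particles:                                         # MH move
        prop = p['theta'] + rng.normal(0, scale)
        if np.log(rng.uniform()) < log_target(prop) - log_target(p['theta']):
            p['theta'] = prop  # caller must refresh mu/Lambda for the new theta
    return particles
```

After a $\theta$-move is accepted, the conditional mean and covariance (Eq. 8) must be recomputed for the new $C_\theta$; that recomputation is the $O(d^3)$ step discussed next.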
The main bottleneck of this scheme is the updating of the conditional posterior mean $\mu_i$ and covariance $\Lambda_i$ for each particle $\theta_i$, since this requires inversion of a $d \times d$ matrix. (Note that, unlike Lewi-09, these are not rank-one updates, due to the fact that $C_{\theta_i}$ changes after each $\theta_i$ move.) This cost is independent of the amount of data, linear in the number of particles, and scales as $O(d^3)$ in the RF dimensionality $d$. However, particle updates can be performed efficiently in parallel on GPUs or machines with multi-core processors, since the particles do not interact except for stimulus selection, which we describe below.

5.2 Optimal Stimulus Selection

Given the posterior over $k$ at time $t$, represented by a mixture of Gaussians attached to particles $\{\theta_i\}$ sampled from the marginal posterior, our task is to determine the maximally informative stimulus to present at time $t+1$. Although the entropy of a mixture-of-Gaussians has no analytic form, we can compute the exact posterior covariance via the formula:
$$\hat\Sigma_t = \frac{1}{N}\sum_{i=1}^{N}\big(\Lambda_i + \mu_i\mu_i^\top\big) - \bar\mu\bar\mu^\top, \quad (14)$$
where $\bar\mu = \frac{1}{N}\sum_i \mu_i$ is the full posterior mean.

Figure 2: Simulated experiment. (A) Angular error (deg) in estimates of a simulated RF (20 x 20 pixels, shown in inset) vs. number of stimuli (up to 1000 trials), for the Lewi-09 method (blue), the ALD-based active learning method using 10 (pink) or 100 (red) particles, and the ALD-based passive learning method (black). True responses were simulated from a Poisson-GLM neuron. Traces show the average over 20 independent repetitions. (B) RF estimates obtained by each method after 200, 400, and 1000 trials. Red numbers below indicate angular error (deg): for Lewi-09 / ALD10 / ALD100, the errors were 62.82 / 51.54 / 44.94 after 200 trials, 57.29 / 40.69 / 36.65 after 400 trials, and 43.34 / 35.90 / 28.98 after 1000 trials.

This leads to an upper bound on the posterior entropy, since a Gaussian is the maximum-entropy distribution for fixed covariance. We then take the next stimulus to be the maximum-variance eigenvector of the posterior covariance, which is the most informative stimulus under a Gaussian posterior and Gaussian noise model, subject to a power constraint on stimuli [6]. Although this selection criterion is heuristic, since it is not guaranteed to maximize mutual information under the true posterior, it is intuitively reasonable since it selects the stimulus direction along which the current posterior is maximally uncertain. Conceptually, directions of large posterior variance can arise in two different ways: (1) directions of large variance for all covariances $\Lambda_i$, meaning that all particles assign high posterior uncertainty over $k|D_t$ in the direction of $x$; or (2) directions in which the means $\mu_i$ are highly dispersed, meaning the particles disagree about the mean of $k|D_t$ in the direction of $x$. In either scenario, selecting a stimulus proportional to the dominant eigenvector is heuristically justified by the fact that it will reduce collective uncertainty in the particle covariances or cause the particle means to converge by narrowing the marginal posterior. We show that the method performs well in practice for both real and simulated data (Section 6). We summarize the complete method in Algorithm 1.

Algorithm 1 Sequential active learning under conditionally Gaussian models

Given particles $\{\theta_i\}$ from $p(\theta|D_t)$, which define the posterior as $p(k|D_t) = \frac{1}{N}\sum_i \mathcal{N}(\mu_i, \Lambda_i)$:
1. Compute the posterior covariance $\hat\Sigma_t$ from $\{(\mu_i, \Lambda_i)\}$ (eq. 14).
2. Select the optimal stimulus $x_{t+1}$ as the maximal eigenvector of $\hat\Sigma_t$.
3. Measure the response $y_{t+1}$.
4. Resample the particles $\{\theta_i\}$ with the weights $\{\mathcal{N}(y_{t+1}\,|\,\mu_i^\top x_{t+1},\; x_{t+1}^\top \Lambda_i x_{t+1} + \sigma_i^2)\}$.
5. Perform MH sampling of $p(\theta|D_{t+1})$, starting from the resampled particles.
Repeat.
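Steps 1-2 of Algorithm 1 amount to a small amount of linear algebra; a minimal sketch, with illustrative names and a unit-power normalization standing in for the power constraint, is:

```python
# Steps 1-2 of Algorithm 1: exact mixture covariance (Eq. 14) and its
# dominant eigenvector as the next stimulus.
import numpy as np

def select_stimulus(mus, Lambdas):
    """mus: (N, d) conditional posterior means; Lambdas: (N, d, d) covariances."""
    mu_bar = mus.mean(axis=0)
    Sigma = (Lambdas + np.einsum('ni,nj->nij', mus, mus)).mean(axis=0) \
            - np.outer(mu_bar, mu_bar)                    # Eq. (14)
    evals, evecs = np.linalg.eigh(Sigma)                  # symmetric eigendecomposition
    x_next = evecs[:, -1]                                 # maximum-variance direction
    return x_next / np.linalg.norm(x_next)
```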
Figure 3: Additional simulated examples comparing Lewi-09 and ALD-based active learning. Responses were simulated from a Poisson-GLM model with three different true 400-pixel RFs (left column): (A) a Gabor filter (shown previously in [6]); (B) a center-surround RF, typical of retinal ganglion cells; (C) a relatively non-localized grid-cell RF. The middle and right columns show RF estimates after 400 trials of active learning under each method, with the average angular error (over 20 independent repeats) shown beneath in red: (A) 60.68 (Lewi-09) vs. 37.82 (ALD10); (B) 62.82 vs. 42.57; (C) 60.32 vs. 50.73 degrees.

6 Results

Simulated Data: We tested the performance of our algorithm using data simulated from a Poisson-GLM neuron with a 20 x 20 pixel Gabor filter and an exponential nonlinearity (see Fig. 2). This is the response model assumed by the Lewi-09 method, and it is therefore substantially mismatched to the linear-Gaussian model assumed by our method. For the Lewi-09 method, we used a diagonal prior covariance with its amplitude set by maximizing the marginal likelihood on a small dataset. We compared two versions of the ALD-based algorithm (with 10 and 100 hyperparameter particles, respectively) to examine the relationship between performance and the fidelity of the posterior representation. To quantify performance, we used the angular difference (in degrees) between the true and estimated RF.

Fig. 2A shows the angular difference between the true RF and the estimates obtained by Lewi-09 and the ALD-based method, as a function of the number of trials. The ALD estimate exhibits more rapid convergence, and performs noticeably better with 100 than with 10 particles (ALD100 vs. ALD10), indicating that accurately preserving uncertainty over the hyperparameters is beneficial to performance. We also show the performance of ALD inference under passive learning (iid random stimulus selection), which indicates that the improvement of our method is not simply due to the use of an improved RF estimator. Fig. 2B shows the estimates obtained by each method after 200, 400, and 1000 trials. Note that the estimate with 100 hyperparameter samples is almost indistinguishable from the true filter after 200 trials, a number substantially lower than the dimensionality of the filter itself ($d = 400$).

Fig. 3 shows a performance comparison using three additional 2-dimensional receptive fields, to show that performance improves across a variety of different RF shapes. The filters included: (A) a Gabor filter similar to that used in [6]; (B) a retina-like center-surround receptive field; and (C) a grid-cell receptive field with multiple modes. As before, noisy responses were simulated from a Poisson-GLM. In the grid-cell example, the filter is not strongly localized in space, yet the ALD-based estimate substantially outperforms Lewi-09 due to its sensitivity to localized components in frequency. Thus, the ALD-based method converges more quickly despite the mismatch between the model used to simulate the data and the model assumed for active learning.

Neural Data: We also tested our method with an off-line analysis of real neural data from a simple cell recorded in primate V1 (published in [15]). The stimulus consisted of 1D spatiotemporal white noise ("flickering bars"), with 16 spatial bars on each frame, aligned with the cell's preferred orientation.
We took the RF to have 16 time bins, resulting in a 256-dimensional parameter space for the RF. We performed simulated active learning by extracting the raw stimuli from 46 minutes of experimental data. On each trial, we then computed the expected information gain from presenting each of these stimuli (blind to the neuron's actual response to each stimulus). We used ALD-based active learning with 10 hyperparameter particles, and examined the performance of both algorithms for 960 trials (selecting from approximately 276,000 possible stimuli on each trial).

Figure 4: Comparison of active learning methods in a simulated experiment with real neural data from a primate V1 simple cell. (Original data recorded in response to white-noise "flickering bars" stimuli; see [15].) (A) Average angular difference between the MLE (inset, computed from the entire 46-minute dataset) and the estimates obtained by active learning, as a function of the amount of data (up to 960 stimuli). We simulated active learning via an offline analysis of the fixed dataset, where the methods had access to possible stimuli but not responses. (B) RF estimates after 10 and 30 seconds of data (160 and 480 stimuli; angular errors 55.0 vs. 45.1 deg and 47.2 vs. 42.5 deg for Lewi-09 vs. ALD). Note that the ALD-based estimate has smaller error with 10 seconds of data than Lewi-09 with 30 seconds of data. (C) Average entropy of the hyperparameter particles as a function of $t$, showing rapid narrowing of the marginal posterior.

Fig. 4A shows the average angular difference between the maximum likelihood estimate (computed with the entire dataset) and the estimate obtained by each active learning method, as a function of the number of stimuli. The ALD-based method reduces the angular difference by 45 degrees with only 160 stimuli, although the dimensionality of the RF in this example is 256. The Lewi-09 method requires four times more data to achieve the same accuracy. Fig. 4B shows the estimates after 160 and 480 stimuli. We also examined the average entropy of the hyperparameter particles as a function of the amount of data used. Fig. 4C shows that the entropy of the marginal posterior over the hyperparameters falls rapidly during the first 150 trials of active learning.

The main bottleneck of the algorithm is the eigendecomposition of the posterior covariance $\hat\Sigma$, which took 30 ms for a 256 x 256 matrix on a 2 x 2.66 GHz Quad-Core Intel Xeon Mac Pro. Updating the importance weights and resampling 10 particles took 4 ms, and a single step of MH resampling for each particle took 5 ms. In total, it took less than 60 ms to compute the optimal stimulus on each trial using a non-optimized implementation of our algorithm, indicating that our methods should be fast enough for use in real-time neurophysiology experiments.

7 Discussion

We have developed a Bayesian active learning method for neural RFs under hierarchical response models with conditionally Gaussian priors. To take account of uncertainty at the level of hyperparameters, we developed an approximate information-theoretic criterion for selecting optimal stimuli under a mixture-of-Gaussians posterior. We applied this framework using a prior designed to capture smooth and localized RF structure. The resulting method showed clear advantages over traditional designs that do not exploit structured prior knowledge.
We have contrasted our method with that of Lewi et al. [6], which employs a more flexible and accurate model of the neural response, but a less flexible model of the RF prior. A natural future direction will therefore be to combine the Poisson-GLM likelihood and the ALD prior, which will combine the benefits of a more accurate neural response model and a flexible (low-entropy) prior for neural receptive fields, while incurring only a small increase in computational cost.

Acknowledgments. We thank N. C. Rust and J. A. Movshon for the V1 data, and several anonymous reviewers for helpful advice on the original manuscript. This work was supported by a Sloan Research Fellowship, a McKnight Scholar's Award, and NSF CAREER Award IIS-1150186 (JP).

References

[1] D. J. C. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590-604, 1992.
[2] K. Chaloner and I. Verdinelli. Bayesian experimental design: a review. Statistical Science, 10:273-304, 1995.
[3] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research (JAIR), 4:129-145, 1996.
[4] A. Watson and D. Pelli. QUEST: a Bayesian adaptive psychophysical method. Perception and Psychophysics, 33:113-120, 1983.
[5] L. Paninski. Asymptotic theory of information-theoretic experimental design. Neural Computation, 17(7):1480-1507, 2005.
[6] J. Lewi, R. Butera, and L. Paninski. Sequential optimal design of neurophysiology experiments. Neural Computation, 21(3):619-687, 2009.
[7] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. Journal of Neurophysiology, 93(2):1074-1089, 2005.
[8] M. Sahani and J. Linden. Evidence optimization techniques for estimating stimulus-response functions. NIPS, 15, 2003.
[9] M. Park and J. W. Pillow. Receptive field inference with localized priors. PLoS Computational Biology, 7(10):e1002219, 2011.
[10] N. Houlsby, F. Huszar, Z. Ghahramani, and M. Lengyel. Bayesian active learning for classification and preference learning. CoRR, abs/1112.5745, 2011.
[11] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243-262, 2004.
[12] R. Kass and A. Raftery. Bayes factors. Journal of the American Statistical Association, 90:773-795, 1995.
[13] J. W. Pillow, Y. Ahmadian, and L. Paninski. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains. Neural Computation, 23(1):1-45, 2011.
[14] W. R. Gilks and C. Berzuini. Following a moving target - Monte Carlo inference for dynamic Bayesian models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(1):127-146, 2001.
[15] N. C. Rust, O. Schwartz, J. A. Movshon, and E. P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945-956, 2005.
Learning High-Density Regions for a Generalized Kolmogorov-Smirnov Test in High-Dimensional Data

Assaf Glazer
Department of Computer Science
Technion – Israel Institute of Technology
Haifa 32000, Israel
[email protected]

Michael Lindenbaum
Department of Computer Science
Technion – Israel Institute of Technology
Haifa 32000, Israel
[email protected]

Shaul Markovitch
Department of Computer Science
Technion – Israel Institute of Technology
Haifa 32000, Israel
[email protected]

Abstract

We propose an efficient, generalized, nonparametric, statistical Kolmogorov-Smirnov test for detecting distributional change in high-dimensional data. To implement the test, we introduce a novel, hierarchical, minimum-volume sets estimator to represent the distributions to be tested. Our work is motivated by the need to detect changes in data streams, and the test is especially efficient in this context. We provide the theoretical foundations of our test and show its superiority over existing methods.

1 Introduction

The Kolmogorov-Smirnov (KS) test is efficient, simple, and often considered the method of choice for comparing distributions. Let X = {x_1, …, x_n} and X′ = {x′_1, …, x′_m} be two sets of feature vectors sampled i.i.d. with respect to distributions F and F′. The goal of the KS test is to determine whether F ≠ F′. For one-dimensional distributions, the KS statistic is based on the maximal difference between the cumulative distribution functions (CDFs) of the two distributions. However, nonparametric extensions of this test to high-dimensional data are hard to define, since there are 2^d − 1 ways to represent a d-dimensional distribution by a CDF. Indeed, due to this limitation, several extensions of the KS test to more than one dimension have been proposed [17, 9], but their practical applications are mostly limited to a few dimensions.

One prominent approach to generalizing the KS test beyond one-dimensional data is that of Polonik [18]. It is based on a generalized quantile transform to a set of high-density hierarchical regions. The transform is used to construct two sets of plots, expected and empirical, which serve as the two input CDFs for the KS test. Polonik's transform is based on a density estimation over X. It maps an input quantile in [0, 1] to a level set of the estimated density such that the expected probability of feature vectors lying within it is equal to its associated quantile. The expected plots are the quantiles, and the empirical plots are the fractions of examples in X′ that lie within each mapped region.

Polonik's approach can handle multivariate data, but is hard to apply in high-dimensional or small-sample-sized settings where a reliable density estimation is hard. In this paper we introduce a generalized KS test, based on Polonik's theory, to determine whether two samples are drawn from different distributions. However, instead of a density estimator, we use a novel hierarchical minimum-volume sets estimator to estimate the set of high-density regions directly. Because the estimation of such regions is intrinsically simpler than density estimation, our test is more accurate than density-estimation approaches. In addition, whereas Polonik's work was largely theoretical, we take a practical approach and empirically show the superiority of our test over existing nonparametric tests on realistic, high-dimensional data. To use Polonik's generalization of the KS test, the high-density regions should be hierarchical.
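As a point of reference for the generalization developed below, here is a minimal sketch of the classical one-dimensional two-sample KS statistic, the quantity this paper extends beyond one dimension. The function name ks_two_sample is our own; the computation evaluates the gap on the pooled sample, where the supremum is attained.

    import numpy as np

    def ks_two_sample(x, x_prime):
        # Maximal gap between the two empirical CDFs.
        grid = np.sort(np.concatenate([x, x_prime]))
        F_n = np.searchsorted(np.sort(x), grid, side='right') / len(x)
        F_m = np.searchsorted(np.sort(x_prime), grid, side='right') / len(x_prime)
        return float(np.abs(F_n - F_m).max())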
Using classical minimum-volume set (MV-set) estimators, however, does not, in itself, guarantee the hierarchy property. We present here a novel method for approximate MV-set estimation that guarantees the hierarchy, thus allowing the KS test to be generalized to high dimensions. Our method uses classical MV-set estimators as a basic component. We test our method with two types of estimators: one-class SVMs (OCSVMs) and one-class neighbor machines (OCNMs).

While the statistical test introduced in this paper traces distributional changes in high-dimensional data in general, it is effective in particular for change detection in data streams. Many real-world applications (e.g., process control) work in dynamic environments where streams of multivariate data are collected over time, during which unanticipated distributional changes in data streams might prevent the proper operation of these applications. Change-detection methods are thus required to trace such changes (e.g., [6]). We extensively evaluate our test on a collection of change-detection tasks. We also show that our proposed test can be used for the classical setting of the two-sample problem, using symmetric and asymmetric variations of our test.

2 Learning Hierarchical High-Density Regions

Our approach to generalizing the KS test is based on estimating a hierarchical set of MV-sets in input space. In this section we introduce a method for finding such a set in high-dimensional data. Following the notion of multivariate quantiles [8], let X = {x_1, …, x_n} be a set of examples i.i.d. with respect to a probability distribution F defined on a measurable space (ℝ^d, S). Let μ be a real-valued function defined on C ⊆ S. Then, the minimum-volume set (MV-set) with respect to F, μ, and C at level α is

    C(α) = argmin_{C′ ∈ C} { μ(C′) : F(C′) ≥ α }.    (1)

If more than one set attains the minimum, one will be picked. Equivalently, if F(C) is replaced with F_n(C) = (1/n) Σ_i 1_C(x_i), then C_n(α) is one of the empirical MV-sets that attains the minimum. In the following we think of μ as a Lebesgue measure on ℝ^d.

Polonik introduced a new approach that uses a hierarchical set of MV-sets to generalize the KS test beyond one dimension. Assume F has a density function f with respect to μ, and let L_f(c) = {x : f(x) ≥ c} be the level set of f at level c. Sufficient regularity conditions on f are assumed. Polonik observed that if L_f(c) ∈ C, then L_f(c) is an MV-set of F at level α = F(L_f(c)). He thus suggested that level sets can be used as approximations of the MV-sets of a distribution. Hence, a density estimator was used to define a family of MV-sets {C(α), α ∈ [0, 1]} such that the hierarchy constraint C(α) ⊆ C(β) is satisfied for 0 ≤ α < β ≤ 1.

We also use hierarchical MV-sets to represent distributions in our research. However, since density estimation is hard to apply in high-dimensional data, a more practical solution is proposed. Instead of basing our method on the products of a density estimation method, we introduce a novel nonparametric method, which uses MV-set estimators (OCSVM and OCNM) as a basic component, to estimate hierarchical MV-sets without the need for a density estimation step.

2.1 Learning Minimum-Volume Sets with One-Class SVM Estimators

OCSVM is a nonparametric method for estimating a high-density region in a high-dimensional distribution [19]. Consider a function φ : ℝ^d → F mapping the feature vectors in X to a hypersphere in an infinite Hilbert space F.
Let H be a hypothesis space of half-space decision functions f_C(x) = sgn((w · φ(x)) − ρ) such that f_C(x) = +1 if x ∈ C, and −1 otherwise. To separate X from the origin, the learner is asked to solve this quadratic program:

    min_{w ∈ F, ξ ∈ ℝ^n, ρ ∈ ℝ}  (1/2)‖w‖² − ρ + (1/(νn)) Σ_i ξ_i,   s.t.  (w · φ(x_i)) ≥ ρ − ξ_i,  ξ_i ≥ 0,    (2)

where ξ is the vector of the slack variables, and 0 < ν < 1 is a regularization parameter related to the proportion of outliers in the training data. All training examples x_i for which (w · φ(x_i)) − ρ ≤ 0 are called support vectors (SVs). Outliers are referred to as examples that strictly satisfy (w · φ(x_i)) − ρ < 0. Since the algorithm depends only on the dot product in F, φ never needs to be explicitly computed, and a kernel function k(·,·) is used instead, such that k(x_i, x_j) = (φ(x_i) · φ(x_j))_F. The following theorem draws the connection between the ν regularization parameter and the region C provided by the solution of Equation 2:

Theorem 1 (Schölkopf et al. [19]). Assume the solution of Equation 2 satisfies ρ ≠ 0. The following statements hold: (1) ν is an upper bound on the fraction of outliers. (2) ν is a lower bound on the fraction of SVs. (3) Suppose X were generated i.i.d. from a distribution F which does not contain discrete components. Suppose, moreover, that the kernel k is analytic and non-constant. Then, with probability 1, asymptotically, ν is equal both to the fraction of SVs and to the fraction of outliers.

This theorem shows that we can use OCSVMs to estimate high-density regions in the input space while bounding the number of examples in X lying outside these regions. Thus, by setting ν = 1 − α, we can use OCSVMs to estimate regions approximating C(α). We use this estimation method with its original quadratic optimization scheme to learn a family of MV-sets. However, a straightforward approach of training a set of OCSVMs, each with a different α ∈ (0, 1), would not necessarily satisfy the hierarchy requirement. In the following algorithm, we propose a modified construction of these regions such that both the hierarchical constraint and the density assumption (Theorem 1) will hold for each region.

Let 0 < α_1 < α_2 < ⋯ < α_q < 1 be a sequence of quantiles. Given X and a kernel function k(·,·), our hierarchical MV-sets estimator iteratively trains a set of q OCSVMs, one for each quantile, and returns a set of decision functions f̂_{C(α_1)}, …, f̂_{C(α_q)} that satisfy both the hierarchy and the density requirements. Training starts from the largest quantile (α_q). Let D_i be the training set of the OCSVM trained for the α_i quantile. Let f_{C(α_i)}, SV^b_i be the decision function and the calculated outliers (bounded SVs) of the OCSVM trained for the i-th quantile. Let O_i = ∪_{j=i..q} SV^b_j. At each iteration, D_i contains the examples in X that were not classified as outliers in previous iterations (not in O_{i+1}). In addition, ν is set to the required fraction of outliers over D_i that will keep the total fraction of outliers over X equal to 1 − α_i. After each iteration, f̂_{C(α_i)} corresponds to the intersection between the region associated with the previous decision function and the half-space associated with the currently learned OCSVM. Thus f̂_{C(α_i)} corresponds to the region specified by an intersection of half-spaces. The outliers in O_i are points that lie strictly outside the constructed region. The pseudo-code of our estimator is given in Algorithm 1.

Algorithm 1 Hierarchical MV-sets Estimator (HMVE)
 1: Input: X, 0 < α_1 < α_2 < ⋯ < α_q < 1, k(·,·)
 2: Output: f̂_{C(α_1)}, …, f̂_{C(α_q)}
 3: Initialize: D_q ← X, O_{q+1} ← ∅
 4: for i = q to 1 do
 5:   ν ← ((1 − α_i)|X| − |O_{i+1}|) / |D_i|
 6:   f_{C(α_i)}, SV^b_i ← OCSVM(D_i, ν, k)
 7:   if i = q then
 8:     f̂_{C(α_i)}(x) ← f_{C(α_i)}(x)
 9:   else
10:     f̂_{C(α_i)}(x) ← f_{C(α_i)}(x) if f̂_{C(α_{i+1})}(x) = +1, and −1 otherwise
11:   O_i ← O_{i+1} ∪ SV^b_i,  D_{i−1} ← D_i \ SV^b_i
12: return f̂_{C(α_1)}, …, f̂_{C(α_q)}
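To make Algorithm 1 concrete, here is a minimal sketch of HMVE on top of scikit-learn's OneClassSVM. This is our own illustration, not the paper's implementation (the paper uses LibSVM directly); the names hmve and indicator are ours, the per-round ν follows line 5 of the pseudo-code, and we assume the resulting ν lands in (0, 1] for the chosen quantiles.

    import numpy as np
    from sklearn.svm import OneClassSVM

    def hmve(X, alphas, gamma='scale'):
        # Train one OCSVM per quantile, from the largest alpha down, removing
        # the outliers (bounded SVs) found so far and intersecting the learned
        # half-spaces so that C(alpha_1) <= ... <= C(alpha_q) are nested.
        X = np.asarray(X)
        n = len(X)
        keep = np.ones(n, dtype=bool)            # D_i: points still in play
        n_out = 0                                # |O_{i+1}|
        models = []
        for a in sorted(alphas, reverse=True):
            D = X[keep]
            nu = ((1 - a) * n - n_out) / len(D)  # line 5 of Algorithm 1
            m = OneClassSVM(nu=nu, gamma=gamma).fit(D)
            models.append(m)
            new_out = keep & (m.decision_function(X) < 0)  # bounded SVs
            n_out += int(new_out.sum())
            keep &= ~new_out                     # D_{i-1} = D_i \ SV^b_i
        models.reverse()                         # models[j] matches sorted alphas

        def indicator(j, x):
            # x is in C(alpha_j) iff every OCSVM from level j upward accepts it,
            # i.e., the intersection of half-spaces built on line 10.
            x = np.atleast_2d(x)
            return np.all([m.decision_function(x) >= 0 for m in models[j:]],
                          axis=0)

        return indicator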
The following theorem shows that the regions specified by the decision functions f̂_{C(α_1)}, …, f̂_{C(α_q)} are: (a) approximations of the MV-sets in the same sense suggested by Schölkopf et al., and (b) hierarchically nested. In the following, Ĉ(α_i) denotes the estimate of C(α_i) with respect to f̂_{C(α_i)}.

Theorem 2. Let f̂_{C(α_1)}, …, f̂_{C(α_q)} be the decision functions returned by Algorithm 1 with parameters {α_1, …, α_q}, X, k(·,·). Assume X is separable. Let Ĉ(α_i) be the region in the input space associated with f̂_{C(α_i)}, and SV^u_i be the set of (unbounded) SVs lying on the separating hyperplane of the region associated with f_{C(α_i)}. Then, the following statements hold: (1) Ĉ(α_i) ⊆ Ĉ(α_j) for α_i < α_j. (2) |O_i|/|X| ≤ 1 − α_i ≤ (|SV^u_i| + |O_i|)/|X|. (3) Suppose X were i.i.d. drawn from a distribution F which does not contain discrete components, and k is analytic and non-constant. Then, 1 − α_i is asymptotically equal to |O_i|/|X|.

Figure 1: Left: estimated MV-sets Ĉ(α_i) in the original input space, q = 3. Right: the projected Ĉ(α_i) in F (a hypersphere with radius 1). [Plot omitted.]

Figure 2: Averaged symmetric differences against the number of training points for the OCSVM / OCNM versions of our estimator, and the KDE2d density estimator. [Plot omitted.]

Proof. Statement (1) holds by the definition of f̂_{C(α_i)}. Statements (2)–(3) are proved by induction on the number of iterations. In the first iteration, f̂_{C(α_q)} equals f_{C(α_q)}. Thus, since O_q = SV^b_q and ν = 1 − α_q, statements (2)–(3) follow directly from Theorem 1.¹ Then, by the induction hypothesis, statements (2)–(3) hold for the first n − 1 iterations over the α_q, …, α_{q−n+1} quantiles. We now prove that statements (2)–(3) hold for f̂_{C(α_{q−n})} in the next iteration. Since f̂_{C(α_{q−n+1})}(x) = −1 implies f̂_{C(α_{q−n})}(x) = −1, the points of O_{q−n+1} are outliers with respect to f̂_{C(α_{q−n})}. In addition, ν = ((1 − α_{q−n})|X| − |O_{q−n+1}|)/|D_{q−n}|. Hence, following Theorem 1, the total proportion of outliers with respect to X satisfies |O_{q−n}| = |SV^b_{q−n}| + |O_{q−n+1}| ≤ ν|D_{q−n}| + |O_{q−n+1}| = (1 − α_{q−n})|X|, and |SV^u_{q−n}| + |O_{q−n}| ≥ (1 − α_{q−n})|X|. Hence, |O_{q−n}|/|X| ≤ 1 − α_{q−n} ≤ (|SV^u_{q−n}| + |O_{q−n}|)/|X|. In the same manner, under the conditions of statement (3), |O_{q−n}| is asymptotically equal to (1 − α_{q−n})|X|, and hence, asymptotically, 1 − α_{q−n} = |O_{q−n}|/|X|.

Figure 1 illustrates the estimated MV-sets Ĉ(α_i) in both the original and the projected spaces. On the left, all Ĉ(α_i) regions in the original input space are colored with decreasing gray levels. Note that Ĉ(α_i) is a subset of Ĉ(α_j) if i < j. On the right, the projected regions of all the Ĉ(α_i)s in F are marked with the same colors. Examples x_i in the input space and their mapped vectors φ(x_i) in F are contained in the same relative regions in both spaces. It can be seen that the projections of Ĉ(α_i) in F are the intersecting half-spaces learned by Algorithm 1.

2.2 Learning Minimum-Volume Sets with One-Class Neighbor Machine Estimators

OCNM [15] is an alternative method to the OCSVM estimator for finding regions close to C(α). Unlike OCSVM, the OCNM solution is proven to be asymptotically close to the MV-set specified.² Degenerated structures in data that may damage the generalization of SVMs could be another reason for choosing OCNM [24]. In practice, for finite sample size, it is not clear which estimator is more accurate.

¹ Note that the separability of the data implies that the solution of Equation 2 satisfies ρ ≠ 0.
² Schölkopf et al. [19] proved that the set provided by OCSVM converges asymptotically to the correct probability and not to the correct MV-set. Although this property should be sufficient for the correctness of our test, Polonik observed that MV-sets are preferred.
Examples xi in the input space and their mapped vectors ?(xi ) in F ? i) are contained in the same relative regions in both spaces. It can be seen that the projections of C(? in F are the intersecting half-spaces learned by Algorithm 1. 2.2 Learning Minimum-Volume Sets with One-Class Neighbor Machine Estimators OCNM [15] is as an alternative method to the OCSVM estimator for finding regions close to C(?). Unlike OCSVM, the OCNM solution is proven to be asymptotically close to the MV-set specified 2 . Degenerated structures in data that may damage the generalization of SVMs could be another reason for choosing OCNM [24]. In practice, for finite sample size, it is not clear which estimator is more accurate. 1 Note that the separability of the data implies that the solution of Equation 2 satisfies ? 6= 0. Sch?olkopf et al. [19] proved that the set provided by OCSVM converges asymptotically to the correct probability and not to the correct MV-set. Although this property should be sufficient for the correctness of our test, Polonik observed that MV-sets are preferred. 2 4 OCNM uses either a sparsity or a concentration neighborhood measure. M (Rd , X ) ? R is a sparsity measure if f (x) > f (y) implies lim|X |?? P (M (x, X ) < M (y, X )) = 1. An example for a valid sparsity measure is the distance of x to its kth-nearest neighbor in X . When a sparsity measure is used, the OCNM estimator solves the following linear problem max n ??R ,??R ?n? ? n X ?i , s.t. M (xi , X ) ? ? ? ?i , ?i ? 0, (3) i such that the resulting decision function fC (x) = sgn (? ? M (x, X )) satisfies bounds and convergence properties similar to those mentioned in Theorem 1 (?-property). OCNM can replace OCSVM in our hierarchical MV-sets estimator. In contrast to OCSVMs, when OCNMs are iteratively trained on X using a growing sequence of ? values, outliers need not be moved from previous iterations to ensure that the ?-property will hold for each decision function. Hence, a simpler version of Algorithm 1 can be used, where X is used for training all OCNMs and ? = 1 ? ?i for each step 3 . Since Theorem 2 relies on the ?-property of the estimator, it can be shown that similar statements to those of Theorem 2 also hold when OCNM is used. As previously discussed, since the estimation of MV-sets is simpler than density estimation, our test can achieve higher accuracy than approaches based on density estimation. To illustrate this hypothesis empirically, we conducted the following preliminary experiment. We sampled 10 to 50 i.i.d. points with respect to a two-dimensional, mixture of Gaussians, distribution p = 12 N (? = (0.5, 0.5), ? = 0.1I) + 12 N (? = (?0.5, ?0.5), ? = 0.5I). We use the OCNM and OCSVM versions of our estimator to approximate hierarchical MV-sets for q? = 9 quantiles: ? = 0.1, 0, 2, . . . , 0.9 (detailed setup parameters are discussed in Section 4). MV-sets estimated with a KDE2d kernel-density estimation [2] were used for comparison. For each sample size, we measured the error of each method according P R to the mean weighted symmetric difference between p(x)dx. Results, averaged over 50 simthe true MV-sets and their estimates, q1? ? x?C(?)?C(?) ? ulations, are shows in Figure 2. The advantages of our approach can easily be seen: both versions of our estimator preform notably better, especially for small sample sizes. 3 Generalized Kolmogorov-Smirnov Test We now introduce a nonparametric, generalized Kolmogorov-Smirnov (GKS) statistical test for determining whether F 6= F 0 in high-dimensional data. 
Assume F, F 0 are one-dimensional continuous 0 distributions and Fn , Fm are empirical distributions estimated from n and m examples i.i.d. drawn 0 from F, F . Then, the two-sample Kolmogorov-Smirnov (KS) statistic is 0 KSn,m = sup |Fn (x) ? Fm (x)| (4) x?R and q nm n+m KSn,m is asymptotically distributed, under the null hypothesis, as the distribution of supx?R |B(F (x))| for a standard Brownian bridge B when F = F 0 . Under the null hypothesis, assume F = F 0 and let F ?1 be a quantile transform of F , i.e., the inverse of F . Then we can replace the supremum over x ? R with the supremum over ? ? [0, 1] as follows: 0 KSn,m = sup Fn (F ?1 (?)) ? Fm (F ?1 (?)) . (5) ??[0,1] Note that in the one-dimensional setting, F ?1 (?) is the point x s.t. F (X ? x) ? ? where X is a random variable drawn from F . Equivalently, F ?1 (?) can be identified with the interval [??, x]. In a high-dimensional space these intervals can be replaced by hierarchical MV-sets C(?) [18], and hence, Equation 5 can be calculated regardless of the input space dimensionality. We suggest replacing KSn,m with 0 Tn,m = sup |Fn (C(?)) ? Fm (C(?))|. (6) ??[0,1] ? For estimating C(?) we use our nonparametric method from Section 2. C(?) is learned with X ? and marked as CX (?). In practice, when |X | is finite, the expected proportion of examples that lie 3 ? i ). Note that intersection is still needed (Algorithm 1, line 10) to ensure the hierarchical property on C(? 5 C1 C2 C2 C3 C3 C4 S ? ? x3 ? C3 ? ? x1 ? ? ? x2 ? x x2 within C?X (?i ) is not guaranteed to be exactly3 ?i . Therefore, after learning the decision functions, C2 we estimate Fn (C?X (?i )) by a k-folds cross-validation procedure. Our final test statistic is Fd C1 O ? ? C2? ? (7) Tn,m = sup Fn (CX (?i )) ? Fm (CX (?i )) , 1?i?q x1 where F?n (C?X (?i )) is the estimate of Fn (C?X (?i )). The two-sample KS statistical test is used over F1 ? n,m to calculate the resulting p-value. T The test defined above works only in one direction by predicting whether distributions of the samples share the same ?concentrations? as regions estimated according to X , and not according to X 0 . We may symmetrize it by running the non-symmetric test twice, once in each direction, and return twice hj their minimum p-value (Bonferroni correction). Note that by doing so in the context of a change h j detection task, we pay in runtimej ?1required for learning MV-sets for each X 0 . ptop 4 j ?1 ptop Empirical Evaluation We first evaluated our test on concept-drift detection problems in data-stream classification tasks. Concept drifts are associated with distributional changes inj data streams that occur due to hidden psv context [22] ? changes of which the classifier is unaware. j ?1 We used the 27 UCI datasets used sv in [6], and 6 additional high-dimensionality UCI datasets: parrhythmia, madelon, semeion, internet O musk. The average ?j advertisement, hill-valley, and number of features for all datasets is 123 4 . w ? j ?1we generated, for each dataset, a sequence Following the experimental setup used by j[11, 6], hx1 , . . . , xn+m i, where the first n examples are associated with the most frequent label, and the w j ?1 following m examples with the second most frequent. Within each label the examples were shuffled randomly. The first 100 examples hx1 , . . . , x100 i, associated, in all datasets, with the most common label, were used as the baseline dataset X . 
A sliding window of 50 consecutive examples over the Hypersphere following sequence of examples used to define the most recent data X 0 at hand. Stawith was radiusiteratively 1 tistical tests were evaluated with X and all possible X 0 windows. In total, for each dataset, the set {hX , X 0 i i |X 0 i = {xi , . . . , xi+49 } , 101 ? i ? n + m ? 49} of pairs were used for evaluation. The following figure illustrates this setup: Time x1 ,..., x100 x101 ,..., x150 ... xi ,..., xi ? 49 ... ... xm ? n ? 49 ,..., xm ? n ... Training set Testing sets The pairs hX , X 0 i i , i ? n ? 49, where all examples in X 0 i have the same labels as in X , are considered ?unchanged.? The remaining pairs are considered ?changed.? Performance is evaluated using precision-recall values with respect to the change detection task. We compare our one-directional (GKS1d ) and two-directional (GKS2d ) tests to the following 5 reference tests: kdq-tree test (KDQ) [4], Metavariable Wald-Wolfowitz test (WW) [10], Kernel change detection (KCD) [5], Maximum mean discrepancy test (MMD) [12], and PAC-Bayesian margin test (PBM) [6]. See section 5 for details. All tests, except of MMD, were implemented and parameters were set with accordance to their suggested setting in their associate papers. The implementation of MMD test provided by the authors 5 was used with default parameters (RBF kernels with automatic kernel width detection) and Rademacher bounds. Similar results were also measured for asymptotic bounds. Note that we cannot compare our test to Polonik?s test since density estimations and level-sets extractions are not practically feasible on high-dimensional data. 2 The LibSVM package [3] with a Gaussian kernel (? = #f eatures ) was used for the OCSVMs. A distance from a point to its kth-nearest neighbor was used as a sparsity measure for the OCNMs. k is set to 10% of the sample size 6 . ? = 0.1, 0.2, . . . , 0.9 were used for all experiments. 4 Nominal features were transformed into numeric ones using binary encoding; missing values were replaced by their features? average values. 5 The code can be downloaded at http://people.kyb.tuebingen.mpg.de/arthur/mmd.htm. 6 Preliminary experiments show similar results obtained with k equal to 10, 20, . . . , 50% of |X |. 6 1 0.9 0.9 0.8 0.8 precision precision 1 0.7 GKS1d (OCSVM) GKS2d (OCSVM) 0.6 0.4 0.6 WW MMD PBM KCD KDQ BEP 0.5 0 0.1 0.2 GKS1d (OCSVM) GKS2d (OCSVM) 0.5 GKS1d (OCNM) GKS2d (OCNM) 0.3 0.4 0.5 recall 0.6 0.7 0.8 0.9 0.4 1 0 0.1 0.2 0.3 0.4 0.5 recall 0.6 0.7 0.8 0.9 1 Figure 4: Precision-recall curves averaged over all 33 experiments for GKS1d (OCSVMs), GKS2d (OCSVMs), GKS1d (OSNMs), and GKS2d (OSNMs). Figure 3: Precision-recall curves averaged over all 33 experiments for GKS1d (OCSVMs), GKS2d (OCSVMs), and the 5 reference tests. 4.1 0.7 Results For better visualization, results are shown in two separate figures: Figure 3 shows the precisionrecall plots averaged over the 33 experiments for the OCSVM version of our tests, and the 5 reference tests. Figure 4 shows the precision-recall plots averaged over the 33 experiments for the OCSVM and OCNM versions of our tests. In both versions, GKS1d and GKS2d provide the best precisionrecall compromise. For example, for the OCSVM version, at a recall of 0.86, GKS1d accurately detects distributional changes with 0.90 precision and GKS2d with 0.88 precision, while the second best competitor does so with 0.84 precision. In terms of their break even point (BEP) measures ? the points at which precision equals recall ? 
GKS1d outperforms the other 5 reference tests with a BEP of 0.89, while its second-best competitor does so with a BEP of 0.84. Mean precisions for each dataset were compared using the Wilcoxon statistical test with α = 0.05. Here, too, GKS1d performs significantly better than all others for both the OCSVM and OCNM versions, except for MMD, with a p-value of 0.08 for GKS1d (OCSVM) and 0.12 for GKS1d (OCNM). Although the plots for our GKS1d (OCSVM) test (Figure 4) look better than those for GKS2d, no significant difference was found. This result is consistent with previous studies, which report that variants of solutions whose goal is to make the tests more symmetric have empirically shown no conclusive superiority [4]. We also found that the GKS1d (OCSVM) version of our test has the lowest runtime and scales well with dimensionality, while the GKS1d (OCNM) version suffers from increased time complexity, especially in high dimensions, due to its expensive neighborhood measure. Note, however, that this observation is true only when off-line computational processing on X is not considered. As opposed to the KCD and PBM tests, our GKS1d test need not be retrained on each X′. Hence, in the context where X is treated as a baseline dataset, GKS1d (OCSVM) is relatively cheap, and is estimated in O(nm) time (the total number of SVs used to calculate f̂_{C(α_1)}, …, f̂_{C(α_q)} is O(n)). In comparison to the other tests, it is still the least computationally demanding.⁷

4.2 Topic Change Detection among Documents

We evaluated our test on an additional setup of high-dimensionality problems pertaining to the detection of topic changes in streams of documents. We used the 20-Newsgroups document corpus.⁸ 1000 words were randomly picked to generate 1000 bag-of-words features. 12 categories were used for the experiments.⁹ Topic changes were simulated between all pairs of categories (66 pairs in total), using the same methodology as in the previous UCI experiments. Due to the excessive runtime of some of the tests, especially with high-dimensional data, we evaluated only 4 of the 7 methods: GKS1d (OCSVM), WW, MMD, and KDQ, whose expected runtime may be more reasonable.

⁷ The MMD and WW complexities are estimated in O((n + m)²) time, where n, m are the sample sizes. KDQ uses bootstrapping for p-value estimation, and hence is more expensive.
⁸ The 20-Newsgroups corpus is at http://people.csail.mit.edu/jrennie/20Newsgroups/.
⁹ The selection of these categories is based on the train/test split defined in http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html.

Once again, our GKS1d test dominates the others with the best precision-recall compromise. With regard to BEP values, GKS1d outperforms the other reference tests with a BEP of 0.67 (0.70 precision on average), while its second-best competitor (MMD) does so with a BEP of 0.62 (0.64 precision on average). According to the Wilcoxon statistical test with α = 0.05, GKS1d performs significantly better than the others in terms of their average precision measures.

5 Related Work

Our proposed test belongs to a family of nonparametric tests for detecting change in multivariate data that compare distributions without an intermediate density estimation step. Our reference tests were thus taken from this family of studies. The kdq-tree test (KDQ) [4] uses a spatial scheme (called a kdq-tree) to partition the data into small cells. Then, the Kullback-Leibler (KL) distance is used to measure the difference between data counts for the two samples in each cell.
A permutation (bootstrapping) test [7] is used to calculate the significance of the difference (p-value). The metavariable Wald-Wolfowitz test (WW) [10] measures the differences between two samples according to the minimum spanning tree in the graph of distances between all pairs in both samples. Then, the Wald-Wolfowitz test statistics are computed over the number of components left in the graph after removing edges between examples of different samples. Kernel change detection (KCD) [5] measures the distance between two samples according to a "Fisher-like" distance between samples. This distance is based on hypercircle characteristics of the resulting two OCSVMs, which were trained separately on each sample. The maximum mean discrepancy test (MMD) [12] measures discrepancy according to a complete matrix of kernel-based dissimilarity measures between all examples, and test statistics are then computed. The PAC-Bayesian margin test (PBM) [6] measures the distance between two samples according to the average margins of a linear SVM classifier between the samples, and test statistics are computed.

As discussed in detail before, our test follows the general approach of Polonik but differs in three important ways: (1) While Polonik uses a density estimator for specifying the MV-sets, we introduce a simpler method that finds the MV-sets directly from the data. Our method is thus more practical and accurate in high-dimensional or small-sample-sized settings. (2) Once the MV-sets are defined, Polonik uses their hypothetical quantiles as the expected plots, and hence runs the KS test in its one-sample version (goodness-of-fit test). We take a more practically accurate approach for finite sample sizes, when approximations of MV-sets are not precise. Instead of using the hypothetical measures, we estimate the expected plots of X empirically and use the two-sample KS test instead. (3) Unlike Polonik's work, ours was evaluated empirically and its superiority demonstrated over a wide range of nonparametric tests. Moreover, since Polonik's test relies on a density estimation and the ability to extract its level sets, it is not practically feasible in high-dimensional settings.

Other methods for estimating MV-sets exist in the literature [21, 1, 16, 13, 20, 23, 14]. Unfortunately, for problems beyond two dimensions and non-convex sets, there is often a gap between their theoretical and practical estimates [20]. We chose OCSVM and OCNM here because they perform well on small, high-dimensional samples.

6 Discussion and Summary

This paper makes two contributions. First, it proposes a new method that uses OCSVMs or OCNMs to represent high-dimensional distributions as a hierarchy of high-density regions. This method is used for statistical tests, but can also be used as a general, black-box method for efficient and practical representation of high-dimensional distributions. Second, it presents a nonparametric, generalized KS test that uses our representation method to detect distributional changes in high-dimensional data. Our test was found superior to competing tests in the sense of average precision and BEP measures, especially in the context of change-detection tasks.

An interesting and still open question is how we should set the input α quantiles for our method. The problem of determining the number of quantiles, and the gaps between consecutive ones, is related to the problem of histogram design.

References

[1] S. Ben-David and M. Lindenbaum. Learning distributions by their density levels: A paradigm for learning without a teacher. Journal of Computer and System Sciences, 55(1):171–182, 1997.
[2] Z. I. Botev, J. F. Grotowski, and D. P. Kroese. Kernel density estimation via diffusion. The Annals of Statistics, 38(5):2916–2957, 2010.
[3] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001.
[4] T. Dasu, S. Krishnan, S. Venkatasubramanian, and K. Yi. An information-theoretic approach to detecting changes in multi-dimensional data streams. In INTERFACE, 2006.
[5] F. Desobry, M. Davy, and C. Doncarli. An online kernel change detection algorithm. IEEE Transactions on Signal Processing, 53(8):2961–2974, 2005.
[6] Anton Dries and Ulrich Rückert. Adaptive concept drift detection. Statistical Analysis and Data Mining, 2(5-6):311–327, 2009.
[7] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman and Hall/CRC, 1994.
[8] J. H. J. Einmahl and D. M. Mason. Generalized quantile processes. The Annals of Statistics, pages 1062–1078, 1992.
[9] G. Fasano and A. Franceschini. A multidimensional version of the Kolmogorov-Smirnov test. Monthly Notices of the Royal Astronomical Society, 225:155–170, 1987.
[10] J. H. Friedman and L. C. Rafsky. Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics, 7(4):697–717, 1979.
[11] J. Gama, P. Medas, G. Castillo, and P. Rodrigues. Learning with drift detection. In SBIA, pages 66–112. Springer, 2004.
[12] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. Machine Learning, 1:1–10, 2008.
[13] X. Huo and J. C. Lu. A network flow approach in finding maximum likelihood estimate of high concentration regions. Computational Statistics & Data Analysis, 46(1):33–56, 2004.
[14] D. M. Mason and W. Polonik. Asymptotic normality of plug-in level set estimates. The Annals of Applied Probability, 19(3):1108–1142, 2009.
[15] A. Muñoz and J. M. Moguerza. Estimation of high-density regions using one-class neighbor machines. In PAMI, pages 476–480, 2006.
[16] J. Nunez Garcia, Z. Kutalik, K.-H. Cho, and O. Wolkenhauer. Level sets and minimum volume sets of probability density functions. International Journal of Approximate Reasoning, 34(1):25–47, 2003.
[17] J. A. Peacock. Two-dimensional goodness-of-fit testing in astronomy. Monthly Notices of the Royal Astronomical Society, 202:615–627, 1983.
[18] W. Polonik. Concentration and goodness-of-fit in higher dimensions: (asymptotically) distribution-free methods. The Annals of Statistics, 27(4):1210–1229, 1999.
[19] Bernhard Schölkopf, John C. Platt, John Shawe-Taylor, Alex J. Smola, and Robert C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[20] C. D. Scott and R. D. Nowak. Learning minimum volume sets. The Journal of Machine Learning Research, 7:665–704, 2006.
[21] G. Walther. Granulometric smoothing. The Annals of Statistics, pages 2273–2299, 1997.
[22] G. Widmer and M. Kubat. Learning in the presence of concept drift and hidden contexts. Machine Learning, 23(1):69–101, 1996.
[23] R. M. Willett and R. D. Nowak. Minimax optimal level-set estimation. IEEE Transactions on Image Processing, 16(12):2965–2979, 2007.
[24] John Wright, Yi Ma, Yangyu Tao, Zhouchen Lin, and Heung-Yeung Shum. Classification via minimum incremental coding length. SIAM J. Imaging Sciences, 2(2):367–395, 2009.
Slice Normalized Dynamic Markov Logic Networks

Tivadar Papai, Henry Kautz, Daniel Stefankovic
Department of Computer Science
University of Rochester
Rochester, NY 14627
{papai,kautz,stefanko}@cs.rochester.edu

Abstract

Markov logic is a widely used tool in statistical relational learning, which uses a weighted first-order logic knowledge base to specify a Markov random field (MRF) or a conditional random field (CRF). In many applications, a Markov logic network (MLN) is trained in one domain, but used in a different one. This paper focuses on dynamic Markov logic networks, where the size of the discretized time-domain typically varies between training and testing. It has been previously pointed out that the marginal probabilities of truth assignments to ground atoms can change if one extends or reduces the domains of predicates in an MLN. We show that in addition to this problem, the standard way of unrolling a Markov logic theory into an MRF may result in time-inhomogeneity of the underlying Markov chain. Furthermore, even if these representational problems are not significant for a given domain, we show that the more practical problem of generating samples in a sequential conditional random field for the next slice, relying on the samples from the previous slice, has high computational cost in the general case, due to the need to estimate a normalization factor for each sample. We propose a new discriminative model, slice normalized dynamic Markov logic networks (SN-DMLN), which suffers from none of these issues. It supports efficient online inference, and can directly model influences between variables within a time slice that do not have a causal direction, in contrast with fully directed models (e.g., DBNs). Experimental results show an improvement in accuracy over previous approaches to online inference in dynamic Markov logic networks.

1 Introduction

Markov logic [1] is a language for statistical relational learning, which employs weighted first-order logic formulas to compactly represent a Markov random field (MRF) or a conditional random field (CRF). A Markov logic theory where each predicate can take an argument representing a time point is called a dynamic Markov logic network (DMLN). We will focus on two-slice dynamic Markov logic networks, i.e., ones in which each quantified temporal argument is of the form t or t + 1, in the conditional (CRF) setting. DMLNs are the undirected analogue of dynamic Bayesian networks (DBNs) [13] and akin to dynamic conditional random fields [19]. DMLNs have been shown useful for relational inference in complex dynamic domains; for example, [17] employed DMLNs for reasoning about the movements and strategies of 14-player games of Capture the Flag.

The usual method for performing offline inference in a DMLN is to simply unroll it into a CRF and employ a general MLN or CRF inference algorithm. We will show, however, that the standard unrolling approach has a number of undesirable properties. The first two negative properties derive from the fact that MLNs are in general sensitive to the number of constants in each variable domain [6]; and so, in particular cases, unintuitive results can occur when the lengths of training and testing sequences differ. First, as one increases the number of time points in the domain, the marginals can fluctuate, even if the observations have little or no influence on the hidden variables.
Second, the model can become time-inhomogeneous, even if the ground weighted formulas between the time slices originate from the same weighted first-order logic formulas. The third negative property is of greater practical concern. In domains with a large number of variables within each slice, exact inference based on dynamic programming cannot be used. When the number of time steps is high and/or online inference is required, unrolling the entire sequence (perhaps repeatedly) becomes prohibitively expensive. Kersting et al. [7] suggest reducing the cost by exploiting symmetries, while Nath & Domingos [14] propose reusing previously sent messages while performing loopy belief propagation. Both algorithms are restricted by the capabilities of loopy belief propagation, which can fail to converge to the correct distribution in MLNs. Geier & Biundo [2] provide a slice-by-slice approximate inference algorithm for DMLNs that can utilize any inference algorithm as a black box, but assume that projecting the distribution over the random variables at every time step onto the product of their marginal distributions does not introduce a large degree of error, an assumption that does not always hold. Sequential Monte Carlo methods, or particle filters, are perhaps the most popular methods for online inference in high-dimensional sequential models. However, except for special cases such as, e.g., the Gaussian distributions used in [11], sampling from a two-slice CRF model can become expensive, due to the need to evaluate a partition function for each particle (see Sec. 3 for more details).

As a solution to all of these concerns, we propose a novel way of unrolling a Markov logic theory such that in the resulting probabilistic model a smaller CRF is embedded into a larger CRF, making the clique potentials between adjacent slices normalized. We call this model a slice normalized dynamic Markov logic network (SN-DMLN). Because of the embedded CRF and the undirected components in our proposed model, the distribution represented by an SN-DMLN cannot be compactly captured by conventional chain graph [10], DBN, or CRF graph representations, as we will explain in Sec. 4. The SN-DMLN has none of the negative theoretical or practical properties outlined above, and for accuracy and/or speed of inference it matches or outperforms unrolled CRFs and the slice-by-slice approximate inference methods. Finally, because maximum likelihood parameter learning for an SN-DMLN can be a non-convex optimization problem, we provide an effective heuristic for weight learning, along with initial experimental results.

2 Background

Probabilistic graphical models compactly represent probability distributions using a graph structure that expresses conditional independences among the variables. Directed graphical models are mainly used in the generative setting, i.e., they model the joint distribution of the hidden variables and the observations, and during training the joint probability of the training data is maximized. Hidden Markov models are the prototypical directed models used for sequential data with hidden and observable parts. It has been demonstrated that for classification problems, discriminative models, which model the conditional probability of the hidden variables given the observations, can outperform generative models [12].
The main justifications for their success are that complex dependencies between observed variables do not have to be modeled explicitly, and that the conditional probability of the training data (which is maximized during parameter learning) is a better objective function if we eventually want to use our model for classification. Markov random fields (MRFs) and conditional random fields (CRFs) belong to the class of undirected graphical models. MRFs are generative models, while CRFs are their discriminative version. (For a more detailed discussion of the relationships between these models, see [8].)

Markov logic [1] is a first-order probabilistic language that allows one to define template features that apply to whole classes of objects at once. A Markov logic network is a set of weighted first-order logic formulas and a finite set of constants C = {c_1, c_2, …, c_|C|} which together define a Markov network M_{L,C} that contains a binary node for each possible grounding of each predicate (ground atom) and a binary-valued feature for each grounding of each first-order logic formula. We will also call the ground atoms variables (since they are random variables). In each truth assignment to the variables, each variable or feature (ground formula) evaluates to 1 (true) or 0 (false). In this paper we assume function-free clauses and Herbrand interpretations. Using the knowledge base we can create either an MRF or a CRF. If we instantiate the model as a CRF, the conditional probability of a truth assignment y to the hidden ground atoms (query atoms) in an MLN, given a truth assignment x to the observable ground atoms (evidence atoms), is defined as:

    Pr(Y = y | X = x) = exp( Σ_i w_i Σ_j f_{i,j}(x, y) ) / Z(x),    (1)

where f_{i,j}(x, y) = 1 if the jth grounding of the ith formula is true under truth assignment {x, y}, and f_{i,j}(x, y) = 0 otherwise. w_i is the weight of the ith formula and Z(x) is the normalization factor. Ground atoms share the same weight if they are groundings of the same weighted first-order logic formula, and (1) could be expressed in terms of n_i(x, y) = Σ_j f_{i,j}(x, y). Instantiation as an MRF can be done similarly, with an empty set of evidence atoms.

Dynamic MLNs [7] are MLNs with distinguished arguments in every predicate representing the flow of time or some other sequential quantity. In our setting, Y_t and X_t will denote the sets of hidden and observable random variables, respectively, at time t, and Y_{1:t} and X_{1:t} from time step 1 to t. Each set can contain many variables, and we should note that their distribution will be represented compactly by weighted first-order logic formulas. The formulas in the knowledge base can be partitioned into
Hence, the distribution defined in (1) in sequential domains can be factorized as:

Pr(Y_{1:t} = y_{1:t} | X_{1:t} = x_{1:t}) = P̂(Y_1 = y_1) ∏_{i=2}^t P̂(Y_{i−1:i} = y_{i−1:i}) ∏_{i=1}^t P̂(Y_i = y_i, X_i = x_i) / Z(x_{1:t})   (2)

In the rest of the paper, we only allow the temporal domain to vary; the rest of the domains are fixed.

3 Unrolling MLNs into random fields in temporal domains

We now describe disadvantages of the standard definition of DMLNs, i.e., when the knowledge base is unrolled into a CRF:

1. As one increases the number of time points the marginals can fluctuate, even if all the clique potentials P̂(Y_i = y_i, X_i = x_i) in (2) are uninformative.
2. The transition probability Pr(Y_{i+1} | Y_i) can depend on i, even if every P̂(Y_i = y_i, X_i = x_i) is uninformative and we use the same weighted first-order logic formula responsible for the ground formulas covering the transitions between every i and i + 1.
3. Particle filtering is costly in general, i.e., if we have the marginal probabilities at time t, we cannot compute them at time t + 1 using particle filtering unless certain special conditions are satisfied.

Saying that P̂(Y_i = y_i, X_i = x_i) is uninformative is equivalent to saying that P̂(Y_i = y_i, X_i = x_i) is constant. (Note that if Y_i and X_i are independent, i.e., for some q and r we have P̂(Y_i = y_i, X_i = x_i) = r(y_i) q(x_i), then q could be marginalized out and r(Y_i) could be folded into P̂(Y_i, Y_{i−1}) in (2).) To demonstrate Property 1, consider an unrolled MRF with the temporal domain T = {1, . . . , T}, with only the predicate P(t) (t ∈ T) and with the weighted formulas (+∞, P(t) ⇔ P(t + 1)) (hard constraint) and (w, P(t)) (soft constraint). Because of the hard constraint, only the sequences ∀t : P(t) and ∀t : ¬P(t) have non-zero probabilities. The soft weights imply that Pr(P(t)) = exp(wT) Pr(¬P(t)), i.e., Pr(P(t)) converges to 1, 0 or 0.5 with exponential rate depending on the sign of w. But we are not always fortunate enough to have converging marginals: e.g., if we change the hard constraint to be P(t) ⇔ ¬P(t + 1) and w ≠ 0, the marginals will diverge. If T is even, then for every t ∈ T, Pr(P(t)) = Pr(¬P(t)), since in both surviving sequences P(t) has the same number of true groundings. If T is odd, then for every odd t ∈ T: Pr(P(t)) = exp(w) Pr(¬P(t)). Consequently, we have diverging marginals as T → +∞. This phenomenon not only makes the inference unreliable, but a weight learning algorithm that maximizes the log-likelihood of the data would also produce different weights depending on whether T is even or odd. A similar effect, arising from moving between different sized domains, is discussed in more detail in [6]. The akin Property 2 (inhomogeneity) can be demonstrated similarly; consider, e.g., an MLN with a single first-order logic formula P(t) ∨ P(t + 1) with weight w. For the sake of simplicity, assume T = 3. The unrolled MRF defines a distribution where

Pr(¬P(3) | ¬P(2)) = (1 + exp(w)) / (1 + 2 exp(w) + exp(2w)) ,

which is not equal to

Pr(¬P(2) | ¬P(1)) = (1 + exp(w)) / (1 + exp(w) + 2 exp(2w))

for an arbitrary choice of w. The examples we just gave involved hard constraints. In fact, we can show that if there are no hard constraints, then as T increases the marginals converge and the system becomes homogeneous (except for a finite number of transitions). Consider the matrix M s.t. M_{i,j} = P̂(Y_t = a_j, Y_{t−1} = a_i), where a_i, i = 1, . . . , N is an enumeration of all the possible truth assignments within each slice and N is the number of possible truth assignments in a slice.
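Before turning to the convergence result, a quick numeric check of the diverging-marginals example above (hard constraint P(t) ⇔ ¬P(t + 1), soft constraint (w, P(t))): only the two alternating sequences survive, so the marginal at an odd position depends on the parity of T. The sketch below enumerates those two sequences directly; it is illustrative only.

    import math

    def marginal_P1(T, w):
        """Pr(P(1)) in the unrolled MRF with hard constraint P(t) <=> not P(t+1).

        Only the two alternating sequences have non-zero probability; each is
        weighted by exp(w * number of true groundings of the soft formula P(t)).
        """
        seq_a = [t % 2 == 1 for t in range(1, T + 1)]   # P true at odd t (so P(1) true)
        seq_b = [t % 2 == 0 for t in range(1, T + 1)]   # P true at even t
        wa, wb = math.exp(w * sum(seq_a)), math.exp(w * sum(seq_b))
        return wa / (wa + wb)

    for T in (10, 11, 100, 101):
        print(T, round(marginal_P1(T, w=0.5), 3))       # flips between 0.5 and ~0.62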
Let

Pr_T(Y_1 = y_1) = (1 / Z(Y_{1:T})) Σ_{y_2,...,y_T} ∏_{i=1}^{T−1} P̂(Y_i = y_i, Y_{i+1} = y_{i+1}) ,

where Z(Y_{1:T}) = Σ_{y_1,...,y_T} ∏_{i=1}^{T−1} P̂(Y_i = y_i, Y_{i+1} = y_{i+1}).

Proposition 1. lim_{t→∞} Pr_t(Y_1 = y) exists if M is a positive matrix, i.e., ∀i, j : M_{i,j} > 0.

Proof. Using M, the all-ones vector 1, and the standard basis vector e_i (which has 1 at the ith component and 0 everywhere else), we can express Pr_t(Y_1 = a_i) as:

Pr_t(Y_1 = a_i) = (e_i^T M^{t−1} 1) / (1^T M^{t−1} 1)   (3)

Since M is positive we can apply Theorem 8.2.8 from [5]: if ρ(M) is the spectral radius of M (which is always positive for positive matrices), then lim_{t→∞} (ρ(M)^{−1} M)^t = L, where L = x y^T, M x = ρ(M) x, M^T y = ρ(M) y, x > 0, y > 0 and x^T y = 1. Dividing both the numerator and the denominator in (3) by ρ^{t−1}(M) proves the convergence of Pr_t(Y_1 = y).

The issue of diverging marginals and time-inhomogeneity has not previously been recognized as a practical problem. However, the increasing interest in probabilistic models that contain large numbers of deterministic constraints (see, e.g., [4]) might bring these issues to the fore. This proposition can serve as an explanation of why in practice we do not encounter diverging marginals in linear-chain-type CRFs, and why, except for a finite number of transitions, the model becomes time-homogeneous. A more significant practical challenge is described by Property 3: the problem of sampling from Pr(Y_t | X_{1:t} = x_{1:t}) using the previously drawn samples from Pr(Y_{t−1} | X_{1:t−1} = x_{1:t−1}). In a directed graphical model (e.g., in a hidden Markov model), following standard particle filter design, having sampled s_{1:t−1} ∼ Pr(Y_{1:t−1} = s_{1:t−1} | X_{1:t−1} = x_{1:t−1}), one would then use s_{1:t−1} to sample s_t ∼ Pr(Y_t | Y_{1:t−1} = s_{1:t−1}, X_{1:t−1}). Since

Pr(Y_{1:t} = s_{1:t} | X_{1:t−1} = x_{1:t−1}) = Pr(Y_t = s_t | Y_{t−1} = s_{t−1}) Pr(Y_{1:t−1} = s_{1:t−1} | X_{1:t−1} = x_{1:t−1})   (4)

we do not have any difficulty performing this sampling step, and all that is left is to re-sample the collection of s_{1:t} with importance weights Pr(Y_t = s_t | X_t = x_t). The analogue of this process does not work in a CRF in general. If one first draws a sample s_{1:t−1} ∼ P̂(Y_1, X_1 = x_1) P̂(Y_1) ∏_{i=2}^{t−1} P̂(Y_i, Y_{i−1}) P̂(Y_i, X_i = x_i), and then draws s_t ∼ P̂(Y_t, Y_{t−1} = s_{t−1}), we end up sampling from:

s_{1:t} ∼ (1 / Z_{t−1}(y_{t−1})) P̂(Y_1, X_1 = x_1) P̂(Y_1) ∏_{i=2}^t P̂(Y_i, Y_{i−1}) P̂(Y_i, X_i = x_i)   (5)

where Z_{t−1}(y_{t−1}) = Σ_{y_t} P̂(Y_t = y_t, Y_{t−1} = y_{t−1}). Unless Z_{t−1}(y_{t−1}) is the same for every y_{t−1}, it is necessary to approximate Z_{t−1}(s_{t−1}) for every s_{t−1}. (Exploiting the inner structure of the graphical model within the slice would in the worst case still result in the computation of the expensive partition function, or could result in a higher variance estimator in the same way as, e.g., using a uniform proposal distribution does.) Although several algorithms have been proposed to estimate partition functions [16, 18], partition function estimation can significantly increase both the running time of the sampling algorithm and the error of its approximation. While there are restricted special cases where the normalization factor can be ignored [11], in general ignoring the approximation of Z_{t−1}(y_{t−1}) could cause a large error in the computed marginals. Consider, e.g., the case when we have three weighted formulas in the previously used toy domain, namely w : ¬P(Y_t) ⇒ ¬P(Y_{t+1}), −w : P(Y_t) ⇒ ¬P(Y_{t+1}) and w′ : P(Y_t) ⇒ ¬P(Y_{t+1}), where w > 0 and w′ < 0. It can be proved that in this setting using particle filtering in a CRF without accounting for Z_{t−1}(y_{t−1}) would result in lim_{t→∞} Pr(P(Y_t)) = 1/2, while in the CRF the correct marginal would be lim_{t→∞} Pr(P(Y_t)) = 1 − (exp(w) / (1 + exp(w))) exp(w′) + O(exp(2w′)), which gets arbitrarily close to 1 as we decrease w′.
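Proposition 1 can be checked numerically: for any positive slice-transition matrix M, the normalized powers ρ(M)^{−t} M^t converge, and with them the marginal in (3). The 2×2 matrix below is an arbitrary positive example chosen purely for illustration.

    import numpy as np

    # Arbitrary positive slice-transition potential matrix M (illustrative values).
    M = np.array([[2.0, 0.5],
                  [1.0, 1.5]])
    ones = np.ones(2)

    def pr_t(i, t):
        """Eq. (3): Pr_t(Y_1 = a_i) = e_i^T M^(t-1) 1 / (1^T M^(t-1) 1)."""
        Mt = np.linalg.matrix_power(M, t - 1)
        return Mt[i] @ ones / (ones @ Mt @ ones)

    for t in (2, 5, 10, 50):
        print(t, pr_t(0, t))   # converges as t grows, as Proposition 1 states

    # The limit is governed by the Perron eigenpair: M x = rho(M) x.
    print("spectral radius:", max(abs(np.linalg.eigvals(M))))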
4 Slice normalized DMLNs

As we demonstrated in Section 3, the root cause of the weaknesses of an ordinarily unrolled CRF lies in the fact that P̂(Y_t = y_t, Y_{t−1} = y_{t−1}) is unnormalized, i.e., Σ_{y_t} P̂(Y_t = y_t, Y_{t−1} = y_{t−1}) ≠ 1 in general. One approach to introduce normalization could be to use maximum entropy Markov models (MEMMs) [12]. In that case we would directly represent Pr(Y_t | X_t, Y_{t−1}); hence we could implement a sequential Monte Carlo algorithm simply by directly sampling s_t ∼ Pr(Y_t | X_t = x_t, Y_{t−1} = s_{t−1}) from slice to slice. However, in [9] it was pointed out that MEMMs suffer from the label-bias problem, as a solution to which CRFs were invented. Chain graphs (see e.g. [10]) also have the advantage of mixing directed and undirected components, and would be a tempting choice to use, but they could only model the transition between slices by representing either (i) Pr(Y_t | X_t = x_t, Y_{t−1} = s_{t−1}), in which case the model would again suffer from the label-bias problem, or (ii) Pr(Y_t, X_t | Y_{t−1}), or (iii) Pr(X_t | Y_t) and Pr(Y_t | Y_{t−1}). The distributions defined in both (ii) and (iii) do not give any advantage in performing the sampling step in (4), and similarly to CRFs they would require the expensive computation of a normalization factor. We propose a slice normalized dynamic Markov logic network (SN-DMLN) model, which consists of directed and undirected components at the high level, and can be thought of as a smaller CRF nested into a larger CRF describing the transition probabilities, constructed using weighted first-order logic formulas as templates. SN-DMLNs neither suffer from the label-bias problem, nor bear the disadvantageous properties presented in Section 3. The distribution defined by an unrolled SN-DMLN is as follows:

Pr(Y_{1:t} = y_{1:t} | X_{1:t} = x_{1:t}) = (1 / Z(x_{1:t})) P_1(Y_1) ∏_{i=1}^t P̂(Y_i = y_i, X_i = x_i) ∏_{i=2}^t P(Y_i = y_i | Y_{i−1} = y_{i−1}) ,   (6)

where

P_1(Y_1 = y_1) = P̂(Y_1 = y_1) / Σ_{y′_1} P̂(Y_1 = y′_1) ,
P(Y_i = y_i | Y_{i−1} = y_{i−1}) = P̂(Y_i = y_i, Y_{i−1} = y_{i−1}) / Σ_{y′_i} P̂(Y_i = y′_i, Y_{i−1} = y_{i−1}) ,

and the partition function is defined by:

Z(x_{1:t}) = Σ_{y_1,...,y_t} { P_1(Y_1 = y_1) ∏_{i=1}^t P̂(Y_i = y_i, X_i = x_i) ∏_{i=2}^t P(Y_i = y_i | Y_{i−1} = y_{i−1}) } .

P(Y_t = y_t | Y_{t−1} = y_{t−1}) is defined by a two-slice Markov logic network (CRF), which describes the state transition probabilities in a compact way. If we hide the details of this nested CRF component and treat it as one potential, we could represent the distribution in (6) by regular chain graphs or CRFs; however, we would then lose the compactness the nested CRF provides for describing the distribution. Similarly, we could collapse the variables at every time slice into one and use a DBN (or again a chain graph), but it would need exponentially more entries in its conditional probability tables. If P̂(Y_i = y_i, X_i = x_i) does not have any information content, the probability distribution defined in (6) reduces to P_1(Y_1 = y_1) ∏_{i=2}^t P(Y_i = y_i | Y_{i−1} = y_{i−1}), which is a time-homogeneous Markov chain (note that in the SN-DMLN model the uniformity of P̂(Y_i = y_i, X_i = x_i) is a stronger assumption than the independence of X_i and Y_i); hence this model clearly does not have Properties 1 and 2, no matter what formulas are present in the knowledge base. Furthermore, we do not have to compute the partition function between the slices, because equation (5) shows that drawing a sample y_t ∼ P̂(Y_t, Y_{t−1} = y_{t−1}) while keeping the value y_{t−1} fixed is equivalent to sampling from P(Y_t | Y_{t−1} = y_{t−1}), the quantity present in equation (6). This means that using our model one can avoid estimating Z(y_{t−1}).
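The practical payoff of (6) is that a particle step only needs the normalized transition P(Y_t | Y_{t−1}), which can be obtained from the two-slice potentials without ever touching the global partition function. A minimal sketch, assuming a slice small enough that the pairwise potential table P̂(y_t, y_{t−1}) can be enumerated; the potential values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-slice potential table: phat[i, j] = Phat(Y_t = j, Y_{t-1} = i),
    # e.g. the product of the weighted ground transition formulas of the MLN.
    phat = np.array([[3.0, 1.0],
                     [0.5, 2.0]])

    # Slice normalization: each row becomes P(Y_t | Y_{t-1} = i).
    P = phat / phat.sum(axis=1, keepdims=True)

    def particle_step(particles):
        """Advance each particle s_{t-1} -> s_t by sampling the normalized transition."""
        return np.array([rng.choice(len(P), p=P[s]) for s in particles])

    particles = np.zeros(1000, dtype=int)           # all particles start in state 0
    for _ in range(20):
        particles = particle_step(particles)
    print(np.bincount(particles) / len(particles))  # empirical state distribution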
To learn the parameters of the model we will maximize the conditional log-likelihood (L) of the data. We use a modified version of a hill climbing algorithm. The modification is needed because in our case L is not necessarily concave. We partition the weights (parameters) of our model based on whether they belong to the transition or to the emission part of the model. The gradient of L for a data sequence d = y_1, x_1, . . . , y_t, x_t w.r.t. an emission parameter w_e (to which feature n_e belongs) is:

∂L_d/∂w_e = Σ_{i=1}^t n_e(y_i, x_i) − E_{Pr(Y|X=x)} [ Σ_{i=1}^t n_e(Y_i, x_i) ] ,   (7)

which is analogous to what one would expect for CRFs. However, for a transition parameter w_tr (belonging to feature n_tr) we get something different:

∂L_d/∂w_tr = Σ_{i=1}^{t−1} n_tr(y_{i+1}, y_i) − Σ_{i=1}^{t−1} E_{P(Y_{i+1}|y_i)} [ n_tr(Y_{i+1}, Y_i = y_i) ]
  − E_{Pr(Y|X=x)} [ Σ_{i=1}^{t−1} n_tr(Y_{i+1}, Y_i) − Σ_{i=1}^{t−1} E_{P(Ỹ_{i+1}|Y_i)} [ n_tr(Ỹ_{i+1}, Y_i) ] ] .   (8)

(Note that L_d is concave w.r.t. the emission parameters, i.e., when the transition parameters are kept fixed; allowing the transition parameters to vary makes L_d no longer concave.)

Table 1: Formulas in the knowledge base

friendships reflect people's similarity in smoking habits:
  Smokes(p_1, t) ∧ ¬Smokes(p_2, t) ∧ (p_1 ≠ p_2) ⇒ ¬Friends(p_1, p_2, t)
  Smokes(p_1, t) ∧ Smokes(p_2, t) ∧ (p_1 ≠ p_2) ⇒ Friends(p_1, p_2, t)
  ¬Smokes(p_1, t) ∧ ¬Smokes(p_2, t) ∧ (p_1 ≠ p_2) ⇒ Friends(p_1, p_2, t)
symmetry and reflexivity of friendship:
  ¬Friends(p_1, p_2, t) ⇒ ¬Friends(p_2, p_1, t)
  Friends(p_1, p_2, t) ⇒ Friends(p_2, p_1, t)
  Friends(p, p, t)
persistence of smoking:
  Smokes(p, t) ⇒ Smokes(p, t + 1)
  ¬Smokes(p, t) ⇒ ¬Smokes(p, t + 1)
people with different smoking habits hang out separately:
  Hangout(p_1, g_1, t) ∧ Hangout(p_2, g_2, t) ∧ Smokes(p_1, t) ∧ (p_1 ≠ p_2) ∧ (g_1 ≠ g_2) ⇒ ¬Smokes(p_2, t)
  Hangout(p_1, g_1, t) ∧ Hangout(p_2, g_2, t) ∧ ¬Smokes(p_1, t) ∧ (p_1 ≠ p_2) ∧ (g_1 ≠ g_2) ⇒ Smokes(p_2, t)

In (8) the first two and the last two terms can be grouped together. The first group represents the gradient in the case of uninformative observations, i.e., when the model simplifies to a Markov chain with a compactly represented transition probability distribution. The second group is the expected value of the expression in the first group. The first three terms correspond to the gradient of a concave function, while the fourth term corresponds to the gradient of a convex function, so the function as a whole is not guaranteed to be maximized by convex optimization techniques alone. Therefore, we chose a heuristic for our optimization algorithm which gradually increases the effect of the second group in the gradient. More precisely, we always compute the gradient w.r.t. w_e according to (7), but w.r.t. w_tr we use:

∂L_d/∂w_tr = Σ_{i=1}^{t−1} n_tr(y_{i+1}, y_i) − Σ_{i=1}^{t−1} E_{P(Y_{i+1}|y_i)} [ n_tr(Y_{i+1}, y_i) ]
  − λ E_{Pr(Y|X=x)} [ Σ_{i=1}^{t−1} n_tr(Y_{i+1}, Y_i) − Σ_{i=1}^{t−1} E_{P(Ỹ_{i+1}|Y_i)} [ n_tr(Ỹ_{i+1}, Y_i) ] ] ,   (9)

where λ is kept at the value of 0 until convergence, and then gradually increased from 0 to 1 to converge to the nearest local optimum.
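To make the λ-schedule of (9) concrete, here is a schematic weight-learning loop. The gradient helpers are zero-returning stubs standing in for the sums and expectations of (7) and (9) (in practice those expectations would be approximated with particles/MC-SAT samples, as in Sec. 5); only the control flow of the heuristic is meant literally.

    import numpy as np

    def grad_emission(w_e, w_tr, data):
        # Stub for Eq. (7): empirical feature counts minus model expectations.
        return np.zeros_like(w_e)

    def grad_transition(w_e, w_tr, data, lam):
        # Stub for Eq. (9); lam scales the outer expectation (the non-concave part).
        return np.zeros_like(w_tr)

    def learn(w_e, w_tr, data, lr=0.01, tol=1e-4):
        """Hill climbing with the lambda heuristic: lam = 0 until convergence,
        then lam is ramped from 0 to 1 to reach the nearest local optimum."""
        for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
            while True:
                g_e = grad_emission(w_e, w_tr, data)
                g_tr = grad_transition(w_e, w_tr, data, lam)
                w_e, w_tr = w_e + lr * g_e, w_tr + lr * g_tr
                if np.linalg.norm(np.concatenate([g_e, g_tr])) < tol:
                    break
        return w_e, w_tr

    print(learn(np.zeros(3), np.zeros(2), data=None))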
In Section 5 we experimentally demonstrate that this heuristic provides reasonably good results; hence we did not turn to more sophisticated algorithms. The rationale behind our heuristic is that if P̂(Y_i = y_i, X_i = x_i) had truly no information content, then for λ = 0 we would find the global optimum, and as we increase λ we take into account, with increasing weight, the fact that the observations are correlated with the hidden variables.

5 Experiments

For our experiments we extended the Probabilistic Consistency Engine (PCE) [3], a Markov logic implementation that has been used effectively in different problem domains. For training, we used 10000 samples for the unrolled CRF, and 100 particles and 100 samples for approximating the conditional expectations in (9) for the SN-DMLN, to estimate the gradients. For inference we used 10000 samples for the CRF and 10000 particles for the mixed model. The sampling algorithm we relied on was MC-SAT [15]. Our example training data set was a modified version of the dynamic social network example [7, 2]. The hidden predicates in our knowledge base were Smokes(person, time) and Friends(person_1, person_2, time), and the observable was Hangout(person, group, time). The goal of inference was to predict which people could potentially be friends, based on the similarity in their smoking habits, a similarity which in turn could be inferred from the groups the individuals hang out in. We generated training and test data as follows: there were two groups g_1, g_2, one for smokers and one for non-smokers. Initially 2 people were randomly chosen to be smokers and 2 to be non-smokers. People with the same smoking habits can become friends at any time step with probability 1 − 0.05ε, and a smoker and a non-smoker can become friends with probability 0.05ε. Every 5th time step (starting with t = 0) people hang out in groups, and for each person the probability of joining one of the groups is 1 − 0.05ε. With probability 1 − 0.05ε everyone spends time with the group reflecting their smoking habits, and with probability 0.05ε they go to hang out with the other group. The rest of the days people do not hang out. The smoking habits persist, i.e., a smoker stays a smoker and a non-smoker stays a non-smoker at the next time step with probability 1 − 0.05ε (a sketch of this generator is given below). In our two configurations we had ε = 0 (deterministic case) and ε = 1 (non-deterministic case). The formulas of the knowledge base, whose weights we learned for both the SN-DMLN and the unrolled CRF models, are listed in Table 1. We used chains of length 5, 10, 20 and 40 as training data, respectively. For each chain length we had 40, 20, 10 and 5 examples, respectively, both for training and for testing. In our experiments we compared three types of inference, and measured the prediction quality for the hidden predicate Friends by assigning true to every ground atom whose marginal probability was greater than 0.55, and false if its probability was less than 0.45; otherwise we counted it as a misclassification.

Table 2: Accuracy and F-score results when models were trained and tested on chains of the same length (the MC-SAT columns correspond to the fully unrolled CRF, called UNR below)

ε = 0:
length | accuracy SN | accuracy MAR | accuracy MC-SAT | f1 SN | f1 MAR | f1 MC-SAT
5      | 1.0         | 0.40         | 1.0             | 1.0   | 0.14   | 1.0
10     | 1.0         | 0.40         | 0.97            | 1.0   | 0.14   | 0.95
20     | 1.0         | 0.40         | 0.67            | 1.0   | 0.14   | 0.49
40     | 1.0         | 0.85         | 0.60            | 1.0   | 0.72   | 0.43

ε = 1:
length | accuracy SN | accuracy MAR | accuracy MC-SAT | f1 SN | f1 MAR | f1 MC-SAT
5      | 0.84        | 0.36         | 0.81            | 0.75  | 0.10   | 0.69
10     | 0.84        | 0.36         | 0.77            | 0.74  | 0.11   | 0.61
20     | 0.92        | 0.55         | 0.66            | 0.85  | 0.32   | 0.47
40     | 0.88        | 0.73         | 0.59            | 0.78  | 0.55   | 0.42

Figure 1: F-score of models trained and tested on the same length of data; (a) ε = 0, (b) ε = 1.
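A minimal sketch of the data-generating procedure described above. Predicate and group names follow Table 1, but the flat representation (lists of ground facts) and the exact friendship-resampling scheme are simplifying choices of ours, not a specification of the original generator.

    import random

    def generate(T, eps, n_people=4, seed=0):
        rng = random.Random(seed)
        people = list(range(n_people))
        smokes = {p: p < 2 for p in people}             # 2 smokers, 2 non-smokers
        friends, hangout = [], []
        for t in range(T):
            for p in people:
                for q in people:
                    if p < q:
                        same = smokes[p] == smokes[q]
                        pr = 1 - 0.05 * eps if same else 0.05 * eps
                        if rng.random() < pr:
                            friends.append((p, q, t))
            if t % 5 == 0:                               # people hang out every 5th step
                for p in people:
                    if rng.random() < 1 - 0.05 * eps:    # joins some group at all
                        own = 0 if smokes[p] else 1      # g1 = smokers, g2 = non-smokers
                        g = own if rng.random() < 1 - 0.05 * eps else 1 - own
                        hangout.append((p, g, t))
            for p in people:                             # smoking habit persists
                if rng.random() >= 1 - 0.05 * eps:
                    smokes[p] = not smokes[p]
        return friends, hangout

    friends, hangout = generate(T=10, eps=1)
    print(len(friends), "Friends facts,", len(hangout), "Hangout facts")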
Prediction of Smokes was impossible in the generated data set, because the data generation was symmetric w.r.t to smoking and not smoking, and from the observations we could only tell that certain pairs of people have similar or different smoking habits, but not who smokes and who does not. The three methods we compared were (i) particle filtering in the SN-DMLN model (SN), (ii) the approximate online inference algorithm of [2], which projects the inferred distribution of the random variables at the previous slice to the product of their marginals, and incorporates this information into a two slice MLN to infer the probabilities at the next slice (we re-implemented the algorithm in PCE) (MAR), and (iii) using a general inference algorithm (MC-SAT [15]) for a CRF which is always completely unrolled in every time step (UNR). In UNR and MAR the same CRF models were used. The training of the SN-DMLN model took approximately for 120 minutes for all the test cases, while for the CRF model, it took 120, 145, 175 and 240 minutes respectively. The inference over the entire test set, took approximately 6 minutes for SN and MAR in every test case, while UNR required 5, 8, 12 and 40 minutes for the different test cases. The accuracy and F-scores for the different test cases are summarized in Table 2 and the F-scores are plotted in Fig. 1. SN outperforms MAR, because as we see that in the knowledge base, MAR can only conclude that people have the same or different smoking habits on the days when people hang out (every 5th time step), and the marginal distributions of Smokes do not carry enough information about which pair of people have different smoking habits, hence the quality of MAR?s prediction decreases on days when people do not hang out. The performance of SN and MAR stays the same as we increase the length of the chain while the performance of UNR degrades. This is most pronounced in the deterministic case (? = 0). This can be explained by that MC-SAT requires more sampling steps to maintain the same performance as the chain length increases. To demonstrate that if we use the same number of particles in SN as number of samples in UNR, the performance of SN stays approximately the same while the performance of UNR degrades over time, we trained both the CRF and SN-DMLN on length 5 chains where both SN and UNR were performing equally well and used test sets of different lengths up to length 150. F-scores are plotted in Fig. 2. We see from Fig. 2 that SN outperforms both UNR and MAR as the chain length increases. Moreover, UNR?s performance is clearly decreasing as the length of the chain increases. 6 Conclusion In this paper, we explored the theoretical and practical questions of unrolling a sequential Markov logic knowledge base into different probabilistic models. The theoretical issues arising in a CRF7 (a) ? = 0 (b) ? = 1 Figure 2: F-score of models trained and tested on different length of data based MLN unrolling are a warning that unexpected results may occur if the observations are weakly correlated with the hidden variables. We gave a qualitative justification why this phenomenon is more of a theoretical concern in domains lacking deterministic constraints. We demonstrated that the CRF based unrolling can be outperformed by a model that mixes directed and undirected components (the proposed model does not suffer from any of the theoretical weaknesses, nor from the label-bias problem). 
From a more practical point of view, we showed that our proposed model provides computational savings when the data has to be processed in a sequential manner. These savings are due to the fact that we do not have to unroll a new CRF at every time step, or estimate a partition function responsible for normalizing the product of clique potentials appearing in two consecutive slices. The previously used approximate inference methods in dynamic MLNs either relied on belief propagation or assumed that approximating the distribution at every time step by the product of the marginals would not cause any error. It is important to note that, although in this paper we focused on marginal inference, finding the most likely state sequence could be done using the generated particles. Although the conditional log-likelihood of the training data in our model may be non-concave, so that hill climbing based approaches could fail to settle in a global maximum, we proposed a heuristic for weight learning and demonstrated that it could train our model so that it performs as well as conditional random fields. Although training the mixed model might have a higher computational cost than training a conditional random field, this cost is amortized over time, since in applications inference is performed many times, while weight learning is done only once. Designing more scalable weight learning algorithms is among our future goals.

7 Acknowledgments

We thank Daniel Gildea for his insightful comments. This research was supported by grants from ARO (W911NF-08-1-0242), ONR (N00014-11-10417), NSF (IIS-1012017), DOD (N00014-12-C-0263), and a gift from Intel.

References

[1] Pedro Domingos and Daniel Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2009.
[2] Thomas Geier and Susanne Biundo. Approximate online inference for dynamic Markov logic networks. In Tools with Artificial Intelligence (ICTAI), 2011 23rd IEEE International Conference on, pages 764–768, 2011.
[3] Shalini Ghosh, Natarajan Shankar, and Sam Owre. Machine reading using Markov logic networks for collective probabilistic inference. In Proceedings of ECML-CoLISD, 2011.
[4] Vibhav Gogate and Rina Dechter. SampleSearch: Importance sampling in presence of determinism. Artif. Intell., 175(2):694–729, 2011.
[5] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[6] Dominik Jain, Andreas Barthels, and Michael Beetz. Adaptive Markov logic networks: Learning statistical relational models with dynamic parameters. In 19th European Conference on Artificial Intelligence (ECAI), pages 937–942, 2010.
[7] K. Kersting, B. Ahmadi, and S. Natarajan. Counting belief propagation. In A. Ng and J. Bilmes, editors, Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI-09), Montreal, Canada, June 18–21, 2009.
[8] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[9] John Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282–289. Morgan Kaufmann, 2001.
[10] Steffen Lauritzen and Thomas S. Richardson. Chain graph models and their causal interpretations. Journal of the Royal Statistical Society, Series B, 64:321–361, 2001.
[11] B. Limketkai, D. Fox, and Lin Liao. CRF-filters: Discriminative particle filters for sequential state estimation. In Robotics and Automation, 2007 IEEE International Conference on, pages 3142–3147, 2007.
[12] Andrew McCallum, Dayne Freitag, and Fernando C. N. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, pages 591–598, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.
[13] Kevin Patrick Murphy. Dynamic Bayesian networks: Representation, inference and learning. PhD thesis, 2002. AAI3082340.
[14] Aniruddh Nath and Pedro Domingos. Efficient belief propagation for utility maximization and repeated inference, 2010.
[15] Hoifung Poon and Pedro Domingos. Sound and efficient inference with probabilistic and deterministic dependencies. In Proceedings of the 21st National Conference on Artificial Intelligence - Volume 1, AAAI'06, pages 458–463. AAAI Press, 2006.
[16] G. Potamianos and J. Goutsias. Stochastic approximation algorithms for partition function estimation of Gibbs random fields. IEEE Transactions on Information Theory, 43(6):1948–1965, 1997.
[17] Adam Sadilek and Henry Kautz. Recognizing multi-agent activities from GPS data. In Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.
[18] R. Salakhutdinov. Learning and evaluating Boltzmann machines. Technical Report UTML TR 2008-002, Department of Computer Science, University of Toronto, June 2008.
[19] Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. J. Mach. Learn. Res., 8:693–723, May 2007.
Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models

Jeff Beck, Department of Brain and Cognitive Sciences, University of Rochester, [email protected]
Katherine Heller, Department of Statistical Science, Duke University, [email protected]
Alexandre Pouget, Department of Neuroscience, University of Geneva, [email protected]

Abstract

Recent experiments have demonstrated that humans and animals typically reason probabilistically about their environment. This ability requires a neural code that represents probability distributions and neural circuits that are capable of implementing the operations of probabilistic inference. The proposed probabilistic population coding (PPC) framework provides a statistically efficient neural representation of probability distributions that is both broadly consistent with physiological measurements and capable of implementing some of the basic operations of probabilistic inference in a biologically plausible way. However, these experiments and the corresponding neural models have largely focused on simple (tractable) probabilistic computations such as cue combination, coordinate transformations, and decision making. As a result it remains unclear how to generalize this framework to more complex probabilistic computations. Here we address this shortcoming by showing that a very general approximate inference algorithm known as Variational Bayesian Expectation Maximization can be naturally implemented within the linear PPC framework. We apply this approach to a generic problem faced by any given layer of cortex, namely the identification of latent causes of complex mixtures of spikes. We identify a formal equivalence between this spike pattern demixing problem and topic models used for document classification, in particular Latent Dirichlet Allocation (LDA). We then construct a neural network implementation of variational inference and learning for LDA that utilizes a linear PPC. This network relies critically on two non-linear operations: divisive normalization and super-linear facilitation, both of which are ubiquitously observed in neural circuits. We also demonstrate how online learning can be achieved using a variation of Hebb's rule and describe an extension of this work which allows us to deal with time varying and correlated latent causes.

1 Introduction to Probabilistic Inference in Cortex

Probabilistic (Bayesian) reasoning provides a coherent and, in many ways, optimal framework for dealing with complex problems in an uncertain world. It is, therefore, somewhat reassuring that behavioural experiments reliably demonstrate that humans and animals behave in a manner consistent with optimal probabilistic reasoning when performing a wide variety of perceptual [1, 2, 3], motor [4, 5, 6], and cognitive tasks [7]. This remarkable ability requires a neural code that represents probability distribution functions of task relevant stimuli rather than just single values. While there are many ways to represent functions, Bayes rule tells us that when it comes to probability distribution functions, there is only one statistically optimal way to do it. More precisely, Bayes rule states that any pattern of activity, r, that efficiently represents a probability distribution over some task relevant quantity s, must satisfy the relationship
p(s|r) ∝ p(r|s)p(s), where p(r|s) is the stimulus conditioned likelihood function that specifies the form of neural variability, p(s) gives the prior belief regarding the stimulus, and p(s|r) gives the posterior distribution over values of the stimulus, s, given the representation r. Of course, it is unlikely that the nervous system consistently achieves this level of optimality. None-the-less, Bayes rule suggests the existence of a link between neural variability as characterized by the likelihood function p(r|s) and the state of belief of a mature statistical learning machine such as the brain. The so-called Probabilistic Population Coding (or PPC) framework [8, 9, 10] takes this link seriously by proposing that the function encoded by a pattern of neural activity r is, in fact, the likelihood function p(r|s). When this is the case, the precise form of the neural variability informs the nature of the neural code. For example, the exponential family of statistical models with linear sufficient statistics has been shown to be flexible enough to model the first and second order statistics of in vivo recordings in awake behaving monkeys [9, 11, 12] and anesthetized cats [13]. When the likelihood function is modeled in this way, the log posterior probability over the stimulus is linearly encoded by neural activity, i.e.

log p(s|r) = h(s) · r − log Z(r)   (1)

Here, the stimulus dependent kernel, h(s), is a vector of functions of s, the dot represents a standard dot product, and Z(r) is the partition function which serves to normalize the posterior. This log linear form for a posterior distribution is highly computationally convenient and allows for evidence integration to be implemented via linear operations on neural activity [14, 8]. Proponents of this kind of linear PPC have demonstrated how to build biologically plausible neural networks capable of implementing the operations of probabilistic inference that are needed to optimally perform the behavioural tasks listed above. This includes linear PPC implementations of cue combination [8], evidence integration over time, maximum likelihood and maximum a posteriori estimation [9], coordinate transformation/auditory localization [10], object tracking/Kalman filtering [10], explaining away [10], and visual search [15]. Moreover, each of these neural computations has required only a single recurrently connected layer of neurons that is capable of just two non-linear operations: coincidence detection and divisive normalization, both of which are widely observed in cortex [16, 17]. Unfortunately, this research program has been a piecemeal effort that has largely proceeded by building neural networks designed to deal with particular problems. As a result, there have been no proposals for a general principle by which neural network implementations of linear PPCs might be generated and no suggestions regarding how to deal with complex (intractable) problems of probabilistic inference. In this work, we will partially address this shortcoming by showing that the Variational Bayesian Expectation Maximization (VBEM) algorithm provides a general scheme for approximate inference and learning with linear PPCs. In section 2, we briefly review the VBEM algorithm and show how it naturally leads to a linear PPC representation of the posterior as well as constraints on the neural network dynamics which build that PPC representation. Because this section describes the VB-PPC approach rather abstractly, the remainder of the paper is dedicated to concrete applications.
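Eq. (1) is easy to exercise numerically: given a kernel h(s) evaluated on a grid of stimulus values and a response vector r, the posterior is obtained by exponentiating h(s)·r and normalizing over the grid. The cosine tuning curves and Poisson response below are made-up toy values, not fitted physiology.

    import numpy as np

    s_grid = np.linspace(-np.pi, np.pi, 101)        # candidate stimulus values
    prefs = np.linspace(-np.pi, np.pi, 12)          # preferred stimuli of 12 neurons
    H = np.cos(s_grid[:, None] - prefs[None, :])    # toy kernel h(s), one row per s
    r = np.random.default_rng(1).poisson(5 * np.exp(np.cos(0.3 - prefs)))  # toy spikes

    log_post = H @ r                                # h(s) . r, up to log Z(r)
    log_post -= log_post.max()                      # numerical stability
    post = np.exp(log_post)
    post /= post.sum()                              # dividing by Z(r): normalization

    print("posterior mode estimate:", s_grid[np.argmax(post)])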
As a motivating example, we consider the problem of inferring the concentrations of odors in an olfactory scene from a complex pattern of spikes in a population of olfactory receptor neurons (ORNs). In section 3, we argue that this requires solving a spike pattern demixing problem which is indicative of the generic problem faced by many layers of cortex. We then show that this demixing problem is equivalent to the problem addressed by a class of models for text documents known as probabilistic topic models, in particular Latent Dirichlet Allocation or LDA [18]. In section 4, we apply the VB-PPC approach to build a neural network implementation of probabilistic inference and learning for LDA. This derivation shows that causal inference with linear PPCs also critically relies on divisive normalization. This result suggests that this particular non-linearity may be involved in very general and fundamental probabilistic computation, rather than simply playing a role in gain modulation. In this section, we also show how this formulation allows for a probabilistic treatment of learning and show that a simple variation of Hebb's rule can implement Bayesian learning in neural circuits. We conclude this work by generalizing this approach to time varying inputs by introducing the Dynamic Document Model (DDM), which can infer short term fluctuations in the concentrations of individual topics/odors and can be used to model foraging and other tracking tasks.

2 Variational Bayesian Inference with linear Probabilistic Population Codes

Variational Bayesian (VB) inference refers to a class of deterministic methods for approximating the intractable integrals which arise in the context of probabilistic reasoning. Properly implemented it can result in a fast alternative to sampling based methods of inference such as MCMC [19] sampling. Generically, the goal of any Bayesian inference algorithm is to infer a posterior distribution over behaviourally relevant latent variables Z given observations X and a generative model which specifies the joint distribution p(X, θ, Z). This task is confounded by the fact that the generative model includes latent parameters θ which must be marginalized out, i.e. we wish to compute

p(Z|X) ∝ ∫ p(X, θ, Z) dθ   (2)

When the number of latent parameters is large this integral can be quite unwieldy. The VB algorithms simplify this marginalization by approximating the complex joint distribution over behaviourally relevant latents and parameters, p(θ, Z|X), with a distribution q(θ, Z) for which integrals of this form are easier to deal with in some sense. There is some art to choosing the particular form for the approximating distribution to make the above integral tractable; however, a factorized approximation is common, i.e. q(θ, Z) = q_θ(θ) q_Z(Z). Regardless, for any given observation X, the approximate posterior is found by minimizing the Kullback-Leibler divergence between q(θ, Z) and p(θ, Z|X). When a factorized posterior is assumed, the Variational Bayesian Expectation Maximization (VBEM) algorithm finds a local minimum of the KL divergence by iteratively updating q_θ(θ) and q_Z(Z) according to the scheme

log q_θ^n(θ) ~ ⟨log p(X, θ, Z)⟩_{q_Z^n(Z)}   and   log q_Z^{n+1}(Z) ~ ⟨log p(X, θ, Z)⟩_{q_θ^n(θ)}   (3)

Here the brackets indicate an expected value taken with respect to the subscripted probability distribution function and the tilde indicates equality up to a constant which is independent of θ and Z.
The key property to note here is that the approximate posterior which results from this procedure is in an exponential family form and is therefore representable by a linear PPC (Eq. 1). This feature allows for the straightforward construction of networks which implement the VBEM algorithm with linear PPCs in the following way. If r_θ^n and r_Z^n are patterns of activity that use a linear PPC representation of the relevant posteriors, then

log q_θ^n(θ) ~ h_θ(θ) · r_θ^n   and   log q_Z^{n+1}(Z) ~ h_Z(Z) · r_Z^{n+1} .   (4)

Here the stimulus dependent kernels h_Z(Z) and h_θ(θ) are chosen so that their outer product results in a basis that spans the function space on Z × θ given by log p(X, θ, Z) for every X. This choice guarantees that there exist functions f_θ(X, r_Z^n) and f_Z(X, r_θ^n) such that

r_θ^n = f_θ(X, r_Z^n)   and   r_Z^{n+1} = f_Z(X, r_θ^n)   (5)

satisfy Eq. 3. When this is the case, simply iterating the discrete dynamical system described by Eq. 5 until convergence will find the VBEM approximation to the posterior. This is one way to build a neural network implementation of the VB algorithm. However, it is not the only way. In general, any dynamical system which has stable fixed points in common with Eq. 5 can also be said to implement the VBEM algorithm. In the example below we will take advantage of this flexibility in order to build biologically plausible neural network implementations.

Figure 1: (Left) Each cause (e.g. coffee) in isolation results in a pattern of neural activity (top). When multiple causes contribute to a scene this results in an overall pattern of neural activity which is a mixture of these patterns weighted by the intensities (bottom). (Right) The resulting pattern can be represented by a raster, where each spike is colored by its corresponding latent cause. (Panel labels: Single Odor Response; Response to Mixture of Odors; Cause Intensity.)

3 Probabilistic Topic Models for Spike Train Demixing

Consider the problem of odor identification depicted in Fig. 1. A typical mammalian olfactory system consists of a few hundred different types of olfactory receptor neurons (ORNs), each of which responds to a wide range of volatile chemicals. This results in a highly distributed code for each odor. Since a typical olfactory scene consists of many different odors at different concentrations, the pattern of ORN spike trains represents a complex mixture. Described in this way, it is easy to see that the problem faced by early olfactory cortex can be described as the task of demixing spike trains to infer latent causes (odor intensities). In many ways this olfactory problem is a generic problem faced by each cortical layer as it tries to make sense of the activity of the neurons in the layer below. The input patterns of activity consist of spikes (or spike counts) labeled by the axons which deliver them and summarized by a histogram which indicates how many spikes come from each input neuron. Of course, just because a spike came from a particular neuron does not mean that it had a particular cause, just as any particular ORN spike could have been caused by any one of a large number of volatile chemicals. Like olfactory codes, cortical codes are often distributed and multiple latent causes can be present at the same time. Regardless, this spike or histogram demixing problem is formally equivalent to a class of demixing problems which arise in the context of probabilistic topic models used for document modeling.
A simple but successful example of this kind of topic model is called Latent Dirichlet Allocation (LDA) [18]. LDA assumes that word order in documents is irrelevant and, therefore, models documents as histograms of word counts. It also assumes that there are K topics and that each of these topics appears in different proportions in each document, e.g. 80% of the words in a document might be concerned with coffee and 20% with strawberries. Words from a given topic are themselves drawn from a distribution over words associated with that topic, e.g. when talking about coffee you have a 5% chance of using the word 'bitter'. The goal of LDA is to infer both the distribution over topics discussed in each document and the distribution of words associated with each topic. We can map the generative model for LDA onto the task of spike demixing in cortex by letting topics become latent causes or odors, words become neurons, word occurrences become spikes, word distributions associated with each topic become patterns of neural activity associated with each cause, and different documents become the observed patterns of neural activity on different trials. This equivalence is made explicit in Fig. 2, which describes the standard generative model for LDA applied to documents on the left and mixtures of spikes on the right.

Figure 2: (Left) The LDA generative model in the context of document modeling. (Right) The corresponding LDA generative model mapped onto the problem of spike demixing, with text related attributes replaced by neural attributes.

1. For each topic k = 1, . . . , K,
   (a) Distribution over words: φ_k ∼ Dirichlet(β_0)
2. For document d = 1, . . . , D,
   (a) Distribution over topics: θ_d ∼ Dirichlet(α_0)
   (b) For word m = 1, . . . , Ω_d,
       i. Topic assignment: z_{d,m} ∼ Multinomial(θ_d)
       ii. Word assignment: ω_{d,m} ∼ Multinomial(φ_{z_{d,m}})

1. For each latent cause k = 1, . . . , K,
   (a) Pattern of neural activity: φ_k ∼ Dirichlet(β_0)
2. For scene d = 1, . . . , D,
   (a) Relative intensity of each cause: θ_d ∼ Dirichlet(α_0)
   (b) For spike m = 1, . . . , Ω_d,
       i. Cause assignment: z_{d,m} ∼ Multinomial(θ_d)
       ii. Neuron assignment: ω_{d,m} ∼ Multinomial(φ_{z_{d,m}})
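For concreteness, here is a small sampler for the spike-side generative model of Fig. 2 (symbols as above; the dimensions, hyperparameters and fixed spike count per scene are arbitrary toy choices):

    import numpy as np

    rng = np.random.default_rng(0)
    K, J, D = 3, 20, 5          # causes (topics), neurons (words), scenes (documents)
    beta0, alpha0, n_spikes = 0.5, 1.0, 200

    phi = rng.dirichlet(beta0 * np.ones(J), size=K)   # phi_k: activity pattern per cause
    scenes = []
    for d in range(D):
        theta = rng.dirichlet(alpha0 * np.ones(K))    # theta_d: cause intensities
        z = rng.choice(K, size=n_spikes, p=theta)     # cause assignment per spike
        omega = np.array([rng.choice(J, p=phi[k]) for k in z])  # neuron assignment
        scenes.append(np.bincount(omega, minlength=J))          # histogram R_{d,j}

    print(np.array(scenes))                           # D x J spike-count matrix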
See supplement for details. Following the prescription laid out in section 2, we approximate the posterior over latent variables given a set of input patterns, Rd , d = 1, . . . , D, with a factorized distribution of the form, qN (N)q  c (c)q? (?). This results in marginal posterior distributions q (?:,k |?:,k ), ?1 q cd,k |?d,k , C + 1 ), and q (Nd,j,: | log pd,j,: , Rd,i ) which are Dirichlet, Gamma, and Multinomial respectively. Here, the parameters ?:,k , ?d,k , and log pd,j,: are the natural parameters of these distributions. The VBEM update algorithm yields update rules for these parameters which are summarized in Fig. 3 Algorithm1. Algorithm 1: Batch VB updates 1: while ?j,k not converged do 2: for d = 1, ? ? ? , D do 3: while pd,j,k , ?d,kP not converged do 4: ?d,k ? ?0 + j Rd,j pd,j,k 5: pd,j,k ? Algorithm 2: Online VB updates 1: for d = 1, ? ? ? , D do 2: reinitialize pj,k , ?k ?j, k 3: while pj,k , ?k not P converged do 4: ?k ? ?0 + j Rd,j pj,k 5: pj,k ? exp (?(?j,k )??(? ?k )) exp ?(?k ) P ?i )) exp ?(?i ) i exp (?(?j,i )??(? exp (?(?j,k )??(? ?k )) exp ?(?d,k ) P ?i )) exp ?(?d,i ) i exp (?(?j,i )??(? end while ?j,k ? (1 ? dt)?j,k + dt(? 0 + Rd,j pj,k ) 8: end for 6: end while 7: end for P 8: ?j,k = ? 0 + d Rd,j pd,j,k 9: end while 6: 7: P Figure 3: Here ??k = j ?j,k and ?(x) is the digamma function so that exp ?(x) is a smoothed threshold linear function. Before we move on to the neural network implementation, note that this standard formulation of variational inference for LDA utilizes a batch learning scheme that is not biologically plausible. Fortunately, an online version of this variational algorithm was recently proposed and shown to give 5 superior results when compared to the batch learning algorithm[21]. This algorithm replaces the sum over d in update equation for ?j,k with an incremental update based upon only the most recently observed pattern of spikes. See Fig. 3 Algorithm 2. 4.1 Neural Network Implementation Recall that the goal was to build a neural network that implements the VBEM algorithm for the underlying latent causes of a mixture of spikes using a neural code that represents the posterior distribution via a linear PPC. A linear PPC represents the natural parameters of a posterior distribution via a linear operation on neural activity. Since the primary quantity of interest here is the posterior distribution over odor concentrations, qc (c|?), this means that we need a pattern of activity r? which is linearly related to the ?k ?s in the equations above. One way to accomplish this is to simply assume that the firing rates of output neurons are equal to the positive valued ?k parameters. Fig. 4 depicts the overall network architecture. Input patterns of activity, R, are transmitted to the synapses of a population of output neurons which represent the ?k ?s. The output activity is pooled to ? j , given the output layer?s form an un-normalized prediction of the activity of each input neuron, R current state of belief about the latent causes of the Rj . The activity at each synapse targeted by input neuron j is then inhibited divisively by this prediction. This results in a dendrite that reports to the ?j,k , which represents the fraction of unexplained spikes from input neuron j that soma a quantity, N could be explained by latent cause k. A continuous time dynamical system with this feature and the property that it shares its fixed points with the LDA algorithm is given by d ? Nj,k dt d ?k dt ?j N ?j,k = wj,k Rj ? R = exp (? (? ?k )) (?0 ? ?k ) + exp (? 
(?k )) (7) X ?j,k N (8) i ? j = P wj,k exp (? (?k )), and wj,k = exp (? (?j,k )). Note that, despite its form, it is where R k Eq. 7 which implements the required divisive normalization operation since, in the steady state, ?j,k = wj,k Rj /R ?j . N Regardless, this network has a variety of interesting properties that align well with biology. It predicts that a balance of excitation and inhibition is maintained in the dendrites via divisive normalization and that the role of inhibitory neurons is to predict the input spikes which target individual dendrites. It also predicts superlinear facilitation. Specifically, the final term on the right of Eq. 8 indicates that more active cells will be more sensitive to their dendritic inputs. Alternatively, this could be implemented via recurrent excitation at the population level. In either case, this is the mechanism by which the network implements a sparse prior on topic concentrations and stands in stark contrast to the winner take all mechanisms which rely on competitive mutual inhibition mechanisms. Additionally, the ??j in Eq. 8 represents a cell wide ?leak? parameter that indicates that the total leak should be roughly proportional to the sum total weight of the synapses which drive the neuron. This predicts that cells that are highly sensitive to input should also decay back to baseline more quickly. This implementation also predicts Hebbian learning of synaptic weights. To observe this fact, note that the online update rule for the ?j,k parameters can be implemented by simply correlating the activity at ?j,k with activity at the soma ?j via the equation: each synapse, N ?L d ?j,k exp ? (?k ) wj,k = exp (? (? ?k )) (?0 ? 1/2 ? wj,k ) + N dt (9) where ?L is a long time constant for learning and we have used the fact that exp (? (?jk )) ? ?jk ?1/2 for x > 1. For a detailed derivation see the supplementary material. 5 Dynamic Document Model LDA is a rather simple generative model that makes several unrealistic assumptions about mixtures of sensory and cortical spikes. In particular, it assumes both that there are no correlations between the 6 Targeted Divisive Normalization Targeted Divisive Normalization ?j Ri Input Neurons Recurrent Connections ? ? -1 -1 ? ?j Nij Ri Synapses Output Neurons Figure 4: The LDA network model. Dendritically targeted inhibition is pooled from the activity of all neurons in the output layer and acts divisively. ? jj' Nij Input Neurons Synapses Output Neurons Figure 5: DDM network model also includes recurrent connections which target the soma with both a linear excitatory signal and an inhibitory signal that also takes the form of a divisive normalization. intensities of latent causes and that there are no correlations between the intensities of latent causes in temporally adjacent trials or scenes. This makes LDA a rather poor computational model for a task like olfactory foraging which requires the animal to track the rise a fall of odor intensities as it navigates its environment. We can model this more complicated task by replacing the static cause or odor intensity parameters with dynamic odor intensity parameters whose behavior is governed by an exponentiated Ornstein-Uhlenbeck process with drift and diffusion matrices given by (? and ?D ). We call this variant of LDA the Dynamic Document Model (DDM) as it could be used to model smooth changes in the distribution of topics over the course of a single document. 5.1 DDM Model Thus the generative model for the DDM is as follows: 1. For latent cause k = 1, . . . 
5 Dynamic Document Model

LDA is a rather simple generative model that makes several unrealistic assumptions about mixtures of sensory and cortical spikes. In particular, it assumes both that there are no correlations between the intensities of latent causes and that there are no correlations between the intensities of latent causes in temporally adjacent trials or scenes. This makes LDA a rather poor computational model for a task like olfactory foraging, which requires the animal to track the rise and fall of odor intensities as it navigates its environment. We can model this more complicated task by replacing the static cause (odor intensity) parameters with dynamic ones whose behavior is governed by an exponentiated Ornstein-Uhlenbeck process with drift and diffusion matrices (Λ, Σ_D). We call this variant of LDA the Dynamic Document Model (DDM), as it could be used to model smooth changes in the distribution of topics over the course of a single document.

5.1 DDM Model

Thus the generative model for the DDM is as follows:

1. For latent cause k = 1, ..., K:
   (a) Cause distribution over spikes Φ_k ~ Dirichlet(α_0)
2. For scene t = 1, ..., T:
   (a) Log intensity of causes c(t) ~ Normal(Λ c(t−1), Σ_D)
   (b) Number of spikes in neuron j resulting from cause k: N_{j,k}(t) ~ Poisson(Φ_{j,k} exp c_k(t))
   (c) Number of spikes in neuron j: R_j(t) = Σ_k N_{j,k}(t)

This model bears many similarities to the correlated and dynamic topic models [22], but models dynamics over a short time scale, where the dynamic relationship (Λ, Σ_D) is important.
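As a concrete reference for the generative process in Section 5.1, the following sketch (our own simplification) samples spike counts from the DDM, assuming a scalar drift Λ and isotropic diffusion σ_D in place of the full matrices.

import numpy as np

def sample_ddm(T, J, K, alpha0=0.1, drift=0.95, sigma_D=0.1, seed=0):
    """Draw T scenes of spike counts over J neurons from K latent causes."""
    rng = np.random.default_rng(seed)
    Phi = rng.dirichlet(np.full(J, alpha0), size=K)  # step 1a: cause -> spike distributions
    c = np.zeros(K)                                  # log intensities of causes
    R = np.zeros((T, J), dtype=int)
    for t in range(T):
        c = drift * c + sigma_D * rng.standard_normal(K)  # step 2a, discretized OU process
        N = rng.poisson(Phi * np.exp(c)[:, None])         # step 2b: N_{j,k}(t)
        R[t] = N.sum(axis=0)                              # step 2c: R_j(t)
    return R, Phi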
5.2 Network Implementation

Once again the quantity of interest is the current distribution of latent causes, p(c(t) | R(τ), τ = 0..T). If no spikes occur, then no evidence is presented and posterior inference over c(t) is simply given by an undriven Kalman filter with parameters (Λ, Σ_D). A recurrent neural network that uses a linear PPC to encode a posterior evolving according to a Kalman filter has the property that neural responses are linearly related to the inverse covariance matrix of the posterior, as well as to that inverse covariance matrix times the posterior mean. In the absence of evidence, it is easy to show that these quantities must evolve according to recurrent dynamics which implement divisive normalization [10]. Thus, the patterns of neural activity which linearly encode them must do so as well. When a new spike arrives, optimal inference is no longer possible and a variational approximation must be used. As shown in the supplement, this variational approximation is similar to the one used for LDA. As a result, a network that can divisively inhibit its synapses is able to implement approximate Bayesian inference. Curiously, this implies that the addition of spatial and temporal correlations to the latent causes adds very little complexity to the VB-PPC network implementation of probabilistic inference. All that is required is an additional inhibitory population which targets the somata in the output population. See Fig. 5.

Figure 6: (Left) Neural network approximation to the natural parameters of the posterior distribution over topics (the λ's) as a function of the VBEM estimate of those same parameters, for a variety of "documents". (Center) Same as left, but for the natural parameters of the DDM (i.e., the entries of Σ^{-1}(t) and Σ^{-1}μ(t)) of the distribution over log topic intensities. (Right) Three example traces of cause intensity in the DDM; black shows the true concentration, and blue and red (indistinguishable) show the MAP estimates of the network and VBEM algorithms.

6 Experimental Results

We compared the PPC neural network implementations of variational inference with the standard VBEM algorithm. This comparison is necessary because the two algorithms are not guaranteed to converge to the same solution: we only required that the neural network dynamics have the same fixed points as the standard VBEM algorithm, so the two can converge to different local minima of the KL divergence.

For the network implementation of LDA we find good agreement between the neural network and VBEM estimates of the natural parameters of the posterior. See Fig. 6 (left), which shows the two algorithms' estimates of the shape parameter of the posterior distribution over topic (odor) concentrations (a quantity proportional to the expected concentration). This agreement, however, is not perfect, especially when posterior predicted concentrations are low. In part, this is because we are presenting the network with difficult inference problems for which the true posterior distribution over topics (odors) is highly correlated and multimodal; as a result, the objective function (KL divergence) is littered with local minima. Additionally, the discrete iterations of the VBEM algorithm can take very large steps in the space of natural parameters, while the neural network implementation cannot. In contrast, the network implementation of the DDM is in much better agreement with the VBEM estimation; see Fig. 6 (right). This is because the smooth temporal dynamics of the topics eliminate the need for the VBEM algorithm to take large steps, so the smooth network dynamics can accurately track its output. For simulation details please see the supplement.

7 Discussion and Conclusion

In this work we presented a general framework for inference and learning with linear probabilistic population codes. This framework takes advantage of the fact that the variational Bayesian expectation maximization algorithm generates approximate posterior distributions which are in exponential-family form. This is precisely the form needed to make probability distributions representable by a linear PPC. We then outlined a general means by which one can build a neural network implementation of the VB algorithm using this kind of neural code. We applied this VB-PPC framework to generate a biologically plausible neural network for spike train demixing. We chose this problem because it has many of the features of the canonical problem faced by nearly every layer of cortex, i.e., that of inferring the latent causes of complex mixtures of spike trains in the layer below. Curiously, this very complicated problem of probabilistic inference and learning ended up having a remarkably simple network solution, requiring only that neurons be capable of implementing divisive normalization via dendritically targeted inhibition and superlinear facilitation. Moreover, we showed that extending this approach to the more complex dynamic case, in which latent causes change in intensity over time, does not substantially increase the complexity of the neural circuit. Finally, we note that while we used a rate coding scheme for our linear PPC, the basic equations would still apply to any spike-based log-probability code, such as that considered by Boerlin and Deneve [23].

References

[1] Daniel Kersten, Pascal Mamassian, and Alan Yuille. Object perception as Bayesian inference. Annual Review of Psychology, 55:271-304, 2004.
[2] Marc O. Ernst and Martin S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870):429-433, 2002.
[3] Yair Weiss, Eero P. Simoncelli, and Edward H. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598-604, 2002.
[4] P. N. Sabes. The planning and control of reaching movements. Current Opinion in Neurobiology, 10(6):740-746, 2000.
[5] Konrad P. Körding and Daniel M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(6971):244-247, 2004.
[6] Emanuel Todorov. Optimality principles in sensorimotor control. Nature Neuroscience, 7(9):907-915, 2004.
[7] Ernő Téglás, Edward Vul, Vittorio Girotto, Michel Gonzalez, Joshua B. Tenenbaum, and Luca L. Bonatti. Pure reasoning in 12-month-old infants as probabilistic inference. Science, 332(6033):1054-1059, 2011.
[8] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 2006.
[9] Jeffrey M. Beck, Wei Ji Ma, Roozbeh Kiani, Tim Hanks, Anne K. Churchland, Jamie Roitman, Michael N. Shadlen, Peter E. Latham, and Alexandre Pouget. Probabilistic population codes for Bayesian decision making. Neuron, 60(6):1142-1152, 2008.
[10] J. M. Beck, P. E. Latham, and A. Pouget. Marginalization in neural circuits with divisive normalization. Journal of Neuroscience, 31(43):15310-15319, 2011.
[11] Tianming Yang and Michael N. Shadlen. Probabilistic reasoning by neurons. Nature, 447(7148):1075-1080, 2007.
[12] R. H. S. Carpenter and M. L. L. Williams. Neural computation of log likelihood in control of saccadic eye movements. Nature, 1995.
[13] Arnulf B. A. Graf, Adam Kohn, Mehrdad Jazayeri, and J. Anthony Movshon. Decoding the activity of neuronal populations in macaque primary visual cortex. Nature Neuroscience, 14(2):239-245, 2011.
[14] H. B. Barlow. Pattern recognition and the responses of sensory neurons. Annals of the New York Academy of Sciences, 1969.
[15] Wei Ji Ma, Vidhya Navalpakkam, Jeffrey M. Beck, Ronald van den Berg, and Alexandre Pouget. Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 2011.
[16] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9, 1992.
[17] M. Carandini, D. J. Heeger, and J. A. Movshon. Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17(21):8621-8644, 1997.
[18] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[19] M. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Unit, UCL, 2003.
[20] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788-791, 1999.
[21] M. Hoffman, D. Blei, and F. Bach. Online learning for latent Dirichlet allocation. In NIPS, 2010.
[22] D. Blei and J. Lafferty. Dynamic topic models. In ICML, 2006.
[23] M. Boerlin and S. Deneve. Spike-based population coding and working memory. PLoS Computational Biology, 2011.
Learned Prioritization for Trading Off Accuracy and Speed*

Jiarong Jiang‡, Adam Teichert†, Hal Daumé III‡, Jason Eisner†
† Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218; {teichert,eisner}@jhu.edu
‡ Department of Computer Science, University of Maryland, College Park, MD 20742; {jiarong,hal}@umiacs.umd.edu

Abstract

Users want inference to be both fast and accurate, but quality often comes at the cost of speed. The field has experimented with approximate inference algorithms that make different speed-accuracy tradeoffs (for particular problems and datasets). We aim to explore this space automatically, focusing here on the case of agenda-based syntactic parsing [12]. Unfortunately, off-the-shelf reinforcement learning techniques fail to learn good policies: the state space is simply too large to explore naively. An attempt to counteract this by applying imitation learning algorithms also fails: the "teacher" follows a far better policy than anything in our learner's policy space, free of the speed-accuracy tradeoff that arises when oracle information is unavailable, and thus largely insensitive to the known reward function. We propose a hybrid reinforcement/apprenticeship learning algorithm that learns to speed up an initial policy, trading off accuracy for speed according to various settings of a speed term in the loss function.

1 Introduction

The nominal goal of predictive inference is to achieve high accuracy. Unfortunately, high accuracy often comes at the price of slow computation. In practice one wants a "reasonable" tradeoff between accuracy and speed, but the definition of "reasonable" varies with the application. Our goal is to optimize a system with respect to a user-specified speed/accuracy tradeoff, on a user-specified data distribution. We formalize our problem in terms of learning priority functions for generic inference algorithms (Section 2).

Much research in natural language processing (NLP) has been dedicated to finding speedups for exact or approximate computation in a wide range of inference problems, including sequence tagging, constituent parsing, dependency parsing, and machine translation. Many of the speedup strategies in the literature can be expressed as pruning or prioritization heuristics. Prioritization heuristics govern the order in which search actions are taken, while pruning heuristics explicitly dictate whether particular actions should be taken at all. Examples of prioritization include A* [13] and Hierarchical A* [19] heuristics, which, in the case of agenda-based parsing, prioritize parse actions so as to reduce work while maintaining the guarantee that the most likely parse is found. Alternatively, coarse-to-fine pruning [21], classifier-based pruning [23, 22], beam-width prediction [3], etc., can result in even faster inference if a small amount of search error can be tolerated. Unfortunately, deciding which techniques to use for a specific setting can be difficult: it is impractical to "try everything." In the same way that statistical learning has dramatically improved the accuracy of NLP applications, we seek to develop statistical learning technology that can dramatically improve their speed while maintaining tolerable accuracy. By combining reinforcement learning and imitation learning methods, we develop an algorithm that can successfully learn such a tradeoff in the context of constituency parsing.
Although this paper focuses on parsing, we expect the approach to transfer to prioritization in other agenda-based algorithms, such as machine translation and residual belief propagation. We give a broader discussion of this setting in [8].

* This material is based upon work supported by the National Science Foundation under Grant No. 0964681.

2 Priority-based Inference

Inference algorithms in NLP (e.g., parsers, taggers, or translation systems), as well as more broadly in artificial intelligence (e.g., planners), often rely on prioritized exploration. For concreteness, we describe inference in the context of parsing, though it is well known that this setting captures all the essential structure of a much larger family of "deductive inference" problems [12, 9].

2.1 Prioritized Parsing

Given a probabilistic context-free grammar, one approach to inferring the best parse tree for a given sentence is to build the tree from the bottom up by dynamic programming, as in CKY [29]. When a prospective constituent such as "NP from 3 to 8" is built, its Viterbi inside score is the log-probability of the best known subparse that matches that description.¹ A standard extension of the CKY algorithm [12] uses an agenda, a priority queue of constituents built so far, to decide which constituent is most promising to extend next, as detailed in Section 2.2 below. The success of the inference algorithm in terms of speed and accuracy hinges on its ability to prioritize "good" actions before "bad" actions. In our context, a constituent is "good" if it somehow leads to a high-accuracy solution, quickly.

Running Example 1. Either CKY or an agenda-based parser that prioritizes by Viterbi inside score will find the highest-scoring parse. This achieves a percentage accuracy of 93.3, given the very large grammar and experimental conditions described in Section 6. However, the agenda-based parser is over an order of magnitude faster than CKY (wall-clock time) because it stops as soon as it finds a parse, without building further constituents. With mild pruning according to Viterbi inside score, the accuracy remains 93.3 and the speed triples. With more aggressive pruning, the accuracy drops to 92.0 and the speed triples again.

Our goal is to learn a prioritization function that satisfies this condition. To operationalize this approach, we need to define the test-time objective function we wish to optimize; we choose a simple linear interpolation of accuracy and speed:

  quality = accuracy − λ · time   (1)

where we choose a λ that reflects our true preferences. The goal of λ is to encode "how much more time am I willing to spend to achieve an additional unit of accuracy?" In this paper, we consider a very simple notion of time: the number of constituents popped from or pushed onto the agenda during inference, halting inference as soon as the parser pops its first complete parse.

When considering how to optimize the expectation of Eq (1) over test data, several challenges present themselves. First, this is a sequential decision process: the parsing decisions made at a given time may affect both the availability and the goodness of future decisions. Second, the parser's total runtime and accuracy on a sentence are unknown until parsing is complete, making this an instance of delayed reward. These considerations lead us to formulate the problem as a Markov Decision Process (MDP), a well-studied model of decision processes.

2.2 Inference as a Markov Decision Process

A Markov Decision Process (MDP) is a formalization of a memoryless search process. An MDP consists of a state space S, an action space A, and a transition function T. An agent in an MDP observes the current state s ∈ S and chooses an action a ∈ A. The environment responds by transitioning to a state s' ∈ S, sampled from the transition distribution T(s' | s, a). The agent then observes its new state and chooses a new action. An agent's policy π describes how the (memoryless) agent chooses an action based on its current state, where π is either a deterministic function of the state (i.e., π(s) ↦ a) or a stochastic distribution over actions (i.e., π(a | s)).

For parsing, the state is the full current chart and agenda (and is astronomically large: roughly 10^17 states for average sentences). The agent controls which item (constituent) to "pop" from the agenda. The initial state has an agenda consisting of all single-word constituents and an empty chart of previously popped constituents. Possible actions correspond to items currently on the agenda. When the agent chooses to pop item y, the environment deterministically adds y to the chart, combines y as licensed by the grammar with adjacent items z in the chart, and places each resulting new item x

¹ E.g., the maximum log-probability of generating some tree whose fringe is the substring spanning words (3,8], given that NP (noun phrase) is the root nonterminal. This is the total log-probability of rules in the tree.
2.2 Inference as a Markov Decision Process A Markov Decision Process (MDP) is a formalization of a memoryless search process. An MDP consists of a state space S, an action space A, and a transition function T . An agent in an MDP observes the current state s ? S and chooses an action a ? A. The environment responds by transitioning to a state s0 ? S, sampled from the transition distribution T (s0 | s, a). The agent then observes its new state and chooses a new action. An agent?s policy ? describes how the (memoryless) agent chooses an action based on its current state, where ? is either a deterministic function of the state (i.e., ?(s) 7? a) or a stochastic distribution over actions (i.e., ?(a | s)). For parsing, the state is the full current chart and agenda (and is astronomically large: roughly 1017 states for average sentences). The agent controls which item (constituent) to ?pop? from the agenda. The initial state has an agenda consisting of all single-word constituents, and an empty chart of previously popped constituents. Possible actions correspond to items currently on the agenda. When the agent chooses to pop item y, the environment deterministically adds y to the chart, combines y as licensed by the grammar with adjacent items z in the chart, and places each resulting new item x 1 E.g., the maximum log-probability of generating some tree whose fringe is the substring spanning words (3,8], given that NP (noun phrase) is the root nonterminal. This is the total log-probability of rules in the tree. 2 on the agenda. (Duplicates in the chart or agenda are merged: the one of highest Viterbi inside score is kept.) The only stochasticity is the initial draw of a new sentence to be parsed. We are interested in learning a deterministic policy that always pops the highest-priority available action. Thus, learning a policy corresponds to learning a priority function. We define the priority of action a in state s as the dot product of a feature vector ?(a, s) with the weight vector ?; our features are described in Section 2.3. Formally, our policy is ?? (s) = arg max ? ? ?(a, s) (2) a An admissible policy in the sense of A? search [13] would guarantee that we always return the parse of highest Viterbi inside score?but we do not require this, instead aiming to optimize Eq (1). 2.3 Features for Prioritized Parsing We use the following simple features to prioritize a possible constituent. (1) Viterbi inside score; (2) constituent touches start of sentence; (3) constituent touches end of sentence; (4) constituent length; length (5) constituent sentence length ; (6) log p(constituent label | prev. word POS tag) and log p(constituent label | next word POS tag), where the part-of-speech (POS) tag of w is taken to be arg maxt p(w | t) under the grammar; (7) 12 features indicating whether the constituent?s {preceding, following, initial} word starts with an {uppercase, lowercase, number, symbol} character; (8) the 5 most positive and 5 most negative punctuation features from [14], which consider the placement of punctuation marks within the constituent. The log-probability features (1), (6) are inspired by work on figures of merit for agenda-based parsing [4], while case and punctuation patterns (7), (8) are inspired by structure-free parsing [14]. 3 Reinforcement Learning Reinforcement learning (RL) provides a generic solution to solving learning problems with delayed reward [25]. 
3 Reinforcement Learning

Reinforcement learning (RL) provides a generic solution to learning problems with delayed reward [25]. The reward function takes a state of the world s and an agent's chosen action a and returns a real value r indicating the "immediate reward" the agent receives for taking that action. In general the reward function may be stochastic, but in our case it is deterministic: r(s, a) ∈ R. The reward function we consider is:

  r(s, a) = acc(a) − λ · time(s)  if a is a full parse tree;  0 otherwise.   (3)

Here, acc(a) measures the accuracy of the full parse tree popped by action a (against a gold standard) and time(s) is a user-defined measure of time. In words, when the parser completes parsing, it receives the reward given by Eq (1); at all other times, it receives no reward.

3.1 Boltzmann Exploration

At test time, the transition between states is deterministic: our policy always chooses the action a that has highest priority in the current state s. However, during training, we promote exploration of policy space by running with stochastic policies π_θ(a | s). Thus, there is some chance of popping a lower-priority action, to find out whether it is useful and should be given higher priority. In particular, we use Boltzmann exploration to construct a stochastic policy with a Gibbs distribution:

  π_θ(a | s) = (1 / Z(s)) exp( θ · φ(a, s) / temp ),  with Z(s) the appropriate normalizing constant.   (4)

That is, the log-likelihood of action a at state s is an affine function of its priority. The temperature temp controls the amount of exploration. As temp → 0, π_θ approaches the deterministic policy in Eq (2); as temp → ∞, π_θ approaches the uniform distribution over available actions. During training, temp can be decreased to shift from exploration to exploitation.

A trajectory τ is the complete sequence of state/action/reward triples from parsing a single sentence. As is common, we denote τ = ⟨s_0, a_0, r_0, s_1, a_1, r_1, ..., s_T, a_T, r_T⟩, where s_0 is the starting state; a_t is chosen by the agent from π_θ(a_t | s_t); r_t = r(s_t, a_t); and s_{t+1} is drawn by the environment from T(s_{t+1} | s_t, a_t), deterministically in our case. At a given temperature, the weight vector θ gives rise to a distribution over trajectories and hence to an expected total reward:

  R = E_{τ∼π_θ}[R(τ)] = E_{τ∼π_θ}[ Σ_{t=0}^{T} r_t ]   (5)

where τ is a random trajectory chosen by policy π_θ, and r_t is the reward at step t of τ.
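A minimal sketch of sampling one action from the Boltzmann policy of Eq (4) follows (ours); `phi` is a hypothetical feature function, and the max-subtraction is the usual numerical stabilization of the softmax.

import numpy as np

def boltzmann_sample(agenda_items, theta, phi, temp, rng):
    """Sample an agenda item with probability proportional to exp(priority / temp)."""
    scores = np.array([theta @ phi(a) for a in agenda_items]) / temp
    z = np.exp(scores - scores.max())     # stabilized Gibbs weights, Eq. (4)
    p = z / z.sum()
    return agenda_items[rng.choice(len(agenda_items), p=p)]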
This achieves an accuracy of 93.3 (percent recall) and speed of 1.5 mpops (million pops) on training data. Unfortunately, running policy gradient from this starting point degrades speed and accuracy. Training is not practically feasible: even the first pass over 100 training sentences (sampling 5 trajectories per sentence) takes over a day. 3.3 Analysis One might wonder why policy gradient performed so poorly on this problem. One hypothesis is that it is the fault of stochastic gradient descent: the optimization problem was too hard or our step sizes were chosen poorly. To address this, we attempted an experiment where we added a ?cheating? feature to the model, which had a value of one for constituents that should be in the final parse, and zero otherwise. Under almost every condition, policy gradient was able to learn a near-optimal policy by placing high weight on this cheating feature. An alternative hypothesis is overfitting to the training data. However, we were unable to achieve significantly higher accuracy even when evaluating on our training data?indeed, even for a single train/test sentence. The main difficulty with policy gradient is credit assignment: it has no way to determine which actions were ?responsible? for a trajectory?s reward. Without causal reasoning, we need to sample many trajectories in order to distinguish which actions are reliably associated with higher-reward. This is a significant problem for us, since the average trajectory length of an A?0 parser on a 15 word sentence is about 30,000 steps, only about 40 of which (less than 0.15%) are actually needed to successfully complete the parse optimally. 3.4 Reward Shaping A classic approach to attenuating the credit assignment problem when one has some knowledge about the domain is reward shaping [10]. The goal of reward shaping is to heuristically associate portions of the total reward with specific time steps, and to favor actions that are observed to be soon followed by a reward, on the assumption that they caused that reward. If speed is measured by the number of popped items and accuracy is measured by labeled constituent recall of the first-popped complete parse (compared to the gold-standard parse), one natural way to shape rewards is to give an immediate penalty for the time incurred in performing the action while giving an immediate positive reward for actions that build constituents of the gold parse. Since only some of the correct constituents built may actually make it into the returned tree, we can correct for having ?incorrectly? rewarded the others by penalizing the final action. Thus, the shaped reward: 4 ? ? 1 ? ?(s, a) ? ? 1?? r?(s, a) = ? ?? if a pops a complete parse (causing the parser to halt and return a) if a pops a labeled constituent that appears in the gold parse otherwise (8) ? is from Eq (1), penalizing the runtime of each step. 1 rewards a correct constituent. The correction ?(s, a) is the number of correct constituents popped into the chart of s that were not in the first-popped parse a. It is easy to see that for any trajectory ending in a complete parse, the total shaped and unshaped rewards along a trajectory are equal (i.e. r(? ) = r?(? )). We now modify the total reward to use temporal discounting. Let 0 ? ? ? 1 be a discount factor. When rewards are discounted over time, the policy gradient becomes " ? ? (? )] = E? ?? E? ??? [R ? T X T X t=0 t0 =t ? t0 ?t ! r?t0 # ?? log ?? (at | st ) (9) where r?t0 = r?(st0 , at0 ). When ? 
= 1, the gradient of the above turns out to be equivalent to Eq (6) [20, section 3.1], and therefore following the gradient is equivalent to policy gradient. When ? = 0, the parser gets only immediate reward?and in general, a small ? assigns the credit for a local reward r?t0 mainly to actions at at closely preceding times. This gradient step can now achieve some credit assignment. If an action is on a good trajectory but occurs after most of the useful actions (pops of correct constituents), then it does not receive credit for those previously occurring actions. However, if it occurs before useful actions, it still does receive credit because we do not know (without additional simulation) whether it was a necessary step toward those actions. Running Example 3. Reward shaping helps significantly, but not enough to be competitive. As the parser speeds up, training is about 10 times faster than before. The best setting (? = 0, ? = 10?6 ) achieves an accuracy in the mid-70?s with only about 0.2 mpops. No settings were able to achieve higher accuracy. 4 Apprenticeship Learning In reinforcement learning, an agent interacts with an environment and attempts to learn to maximize its reward by repeating actions that led to high reward in the past. In apprenticeship learning, we assume access to a collection of trajectories taken by an optimal policy and attempt to learn to mimic those trajectories. The learner?s only goal is to behave like the teacher at every step: it does not have any notion of reward. In contrast, the related task of inverse reinforcement learning/optimal control [17, 11] attempts to infer a reward function from the teacher?s optimal behavior. Many algorithms exist for apprenticeship learning. Some of them work by first executing inverse reinforcement learning [11, 17] to induce a reward function and then feeding this reward function into an off-the-shelf reinforcement learning algorithm like policy gradient to learn an approximately optimal agent [1]. Alternatively, one can directly learn to mimic an optimal demonstrator, without going through the side task of trying to induce its reward function [7, 24]. 4.1 Oracle Actions With a teacher to help guide the learning process, we would like to explore more intelligently than Boltzmann exploration, in particular, focusing on high-reward regions of policy space. We introduce oracle actions as a guidance for areas to explore. Ideally, oracle actions should lead to a maximum-reward tree. In training, we will identify oracle actions to be those that build items in the maximum likelihood parse consistent with the gold parse. When multiple oracle actions are available on the agenda, we will break ties according to the priority assigned by the current policy (i.e., choose the oracle action that it currently likes best). 4.2 Apprenticeship Learning via Classification Given a notion of oracle actions, a straightforward approach to policy learning is to simply train a classifier to follow the oracle?a popular approach in incremental parsing [6, 5]. Indeed, this serves as the initial iteration of the state-of-the-art apprenticeship learning algorithm, DAGGER [24]. We train a classifier as follows. Trajectories are generated by following oracle actions, breaking ties using the initial policy (Viterbi inside score) when multiple oracle actions are available. These trajectories are incredibly 5 short (roughly double the number of words in the sentence). 
4 Apprenticeship Learning

In reinforcement learning, an agent interacts with an environment and attempts to learn to maximize its reward by repeating actions that led to high reward in the past. In apprenticeship learning, we assume access to a collection of trajectories taken by an optimal policy and attempt to learn to mimic them. The learner's only goal is to behave like the teacher at every step: it does not have any notion of reward. In contrast, the related task of inverse reinforcement learning / optimal control [17, 11] attempts to infer a reward function from the teacher's optimal behavior. Many algorithms exist for apprenticeship learning. Some work by first executing inverse reinforcement learning [11, 17] to induce a reward function and then feeding this reward function into an off-the-shelf reinforcement learning algorithm like policy gradient to learn an approximately optimal agent [1]. Alternatively, one can directly learn to mimic an optimal demonstrator, without going through the side task of inducing its reward function [7, 24].

4.1 Oracle Actions

With a teacher to help guide the learning process, we would like to explore more intelligently than Boltzmann exploration does, focusing on high-reward regions of policy space. We introduce oracle actions as guidance for areas to explore. Ideally, oracle actions should lead to a maximum-reward tree. In training, we identify oracle actions as those that build items in the maximum-likelihood parse consistent with the gold parse. When multiple oracle actions are available on the agenda, we break ties according to the priority assigned by the current policy (i.e., we choose the oracle action that it currently likes best).

4.2 Apprenticeship Learning via Classification

Given a notion of oracle actions, a straightforward approach to policy learning is to simply train a classifier to follow the oracle, a popular approach in incremental parsing [6, 5]. Indeed, this serves as the initial iteration of the state-of-the-art apprenticeship learning algorithm DAGGER [24]. We train a classifier as follows. Trajectories are generated by following oracle actions, breaking ties using the initial policy (Viterbi inside score) when multiple oracle actions are available. These trajectories are incredibly short (roughly double the number of words in the sentence). At each step (s_t, a_t) in the trajectory, a classification example is generated, where the action taken by the oracle (a_t) is the correct class and all other available actions are incorrect. The classifier that we train on these examples is a maximum-entropy classifier, so it has exactly the same form as the Boltzmann exploration model (Eq (4)) but without the temperature control. In fact, the gradient for this classifier (Eq (10)) is nearly identical to the policy gradient (Eq (6)), except that τ is distributed differently and the total reward R(τ) does not appear: instead of mimicking high-reward trajectories, we now try to mimic oracle trajectories.

  E_{τ∼π*}[ Σ_{t=0}^{T} ( φ(a_t, s_t) − Σ_{a'∈A} π_θ(a' | s_t) φ(a', s_t) ) ]   (10)

where π* denotes the oracle policy, so a_t is the oracle action. The potential benefit of the classifier-based approach over policy gradient with shaped rewards is improved credit assignment. In policy gradient with reward shaping, an action gets credit for all future reward (though no past reward). In the classifier-based approach, it gets credit for exactly whether or not it builds an item that is in the true parse.

Running Example 4. The classifier-based approach performs only marginally better than policy gradient with shaped rewards. The best accuracy we can obtain is 76.5 with 0.19 mpops.

To execute the DAGGER algorithm, we would continue in the next iteration by following the trajectories learned by the classifier and generating new classification examples at those states. Unfortunately, this is not computationally feasible due to the poor quality of the policy learned in the first iteration: attempting to follow the learned policy essentially tries to build all possible constituents licensed by the grammar, which can be prohibitively expensive. We remedy this in Section 5.
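A sketch (ours) of one gradient step for this maximum-entropy classifier, i.e., the sampled form of Eq (10): it is the policy-gradient update with the reward factor and temperature removed and states drawn from oracle trajectories.

import numpy as np

def classifier_step(theta, oracle_states, lr):
    """oracle_states: (phi_star, phi_all) pairs recorded along oracle trajectories."""
    grad = np.zeros_like(theta)
    for phi_star, phi_all in oracle_states:
        s = phi_all @ theta
        pi = np.exp(s - s.max())
        pi /= pi.sum()                      # max-ent model over the agenda's actions
        grad += phi_star - pi @ phi_all     # Eq. (10): Eq. (7) without reward or temp
    return theta + lr * grad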
4.3 What's Wrong With Apprenticeship Learning

An obvious practical issue with the classifier-based approach is that it trains the classifier only on states visited by the oracle. This leads to the well-known problem that it is unable to learn to recover from past errors [2, 28, 7, 24]. Even though our current feature set depends only on the action and not on the state, making action scores independent of the current state, there is still an issue, since the set of actions to choose from does depend on the state. That is, the classifier is trained to discriminate only among the small set of agenda items available on the oracle trajectory (which are always combinations of correct constituents), but the action sets the parser faces at test time are much larger and more diverse.

An additional objection to classifiers is that not all errors are created equal. Some incorrect actions are more expensive than others, if they create constituents that can be combined in many locally attractive ways and hence slow the parser down or cause errors. Our classification problem does not distinguish among incorrect actions. The SEARN algorithm [7] would distinguish them by explicitly evaluating the future reward of each possible action (instead of using a teacher) and incorporating this into the classification problem. But explicit evaluation is computationally infeasible in our setting: at each time step, it must roll out a full future trajectory for each possible action on the agenda. Policy gradient provides another approach, observing which actions are good or bad across many random trajectories, but recall that we found it impractical as well. We do not further address this problem in this paper, but in [8] we suggested explicit causality analysis.

A final issue has to do with the nature of the oracle. Recall that the oracle is supposed to choose optimal actions for the given reward, and that our oracle always picks correct constituents. There seems to be a contradiction here: our oracle action selector ignores λ, the tradeoff between accuracy and speed, and focuses only on accuracy. This happens because, for any reasonable setting of λ, the optimal thing to do is always to just build the correct tree without building any extra constituents. Only for very large values of λ is it optimal to do anything else, and for such values of λ, the learned model will have hugely negative reward. This means that under the apprenticeship learning setting, we are never actually going to learn to trade off accuracy and speed: as far as the oracle is concerned, you can have both! The tradeoff only appears because our model cannot come remotely close to mimicking the oracle.

5 Oracle-Infused Policy Gradient

The failure of both standard reinforcement learning algorithms and standard apprenticeship learning algorithms on our problem leads us to develop a new approach. We start with the policy gradient algorithm (Section 3.2) and use ideas from apprenticeship learning to improve it. Our formulation preserves the reinforcement learning flavor of our overall setting, which involves delayed reward for a known reward function. Our approach is specifically designed for the non-deterministic nature of the agenda-based parsing setting [8]: once some action a becomes available (appears on the agenda), it never goes away until it is taken. This makes the notion of "interleaving" oracle actions with policy actions both feasible and sensible. Like policy gradient, we draw trajectories from a policy and take gradient steps that favor actions with high reward under reward shaping. Like SEARN and DAGGER, we begin by exploring the space around the optimal policy and slowly explore outward from there.

To achieve this, we define the notion of an oracle-infused policy. Let π be an arbitrary policy and let β ∈ [0, 1]. We define the oracle-infused policy π_β^+ as follows:

  π_β^+(a | s) = β π*(a | s) + (1 − β) π(a | s)   (11)

In other words, when choosing an action, π_β^+ explores the policy space with probability 1 − β (according to its current model), but with probability β we force it to take an oracle action. Our algorithm takes policy gradient steps with reward shaping (Eqs (9) and (7)), but with respect to trajectories drawn from π_β^+ rather than π. If β = 0, it reduces to policy gradient, with reward shaping if γ < 1 and immediate reward if γ = 0. For β = 1, the γ = 0 case reduces to the classifier-based approach with π* (which in turn breaks ties by choosing the best action under π). Similar to DAGGER and SEARN, we do not stay at β = 1, but wean the learner off the oracle supervision as it starts to find a good policy π that imitates the classifier reasonably well. We use β = 0.8^epoch, where epoch is the total number of passes made through the training set at that point (so β = 0.8^0 = 1 on the initial pass). Over time, β → 0, so that eventually we are training the policy to do well on the same distribution of states that it will pass through at test time (as in policy gradient). With intermediate values of β (and γ < 1), an iteration behaves similarly to an iteration of SEARN, except that it "rolls out" the consequences of an action chosen randomly from (11) instead of evaluating all possible actions in parallel.
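Drawing an action from π_β^+ is straightforward; the sketch below (ours) mixes an oracle choice, with model-based tie-breaking as in Section 4.1, into the Boltzmann policy of Eq (4). The β = 0.8^epoch schedule from the text would be applied by the caller.

import numpy as np

def oracle_infused_action(agenda_items, oracle_items, theta, phi, temp, beta, rng):
    """One draw from Eq. (11); `phi` is a hypothetical feature function."""
    if oracle_items and rng.random() < beta:
        return max(oracle_items, key=lambda a: theta @ phi(a))  # model breaks oracle ties
    scores = np.array([theta @ phi(a) for a in agenda_items]) / temp
    z = np.exp(scores - scores.max())
    return agenda_items[rng.choice(len(agenda_items), p=z / z.sum())]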
the consequences of an action chosen randomly from (11) instead of evaluating all possible actions in parallel. Running Example 5. Oracle-infusion gives a competitive speed and accuracy tradeoff. A typical result is 91.2 with 0.68 mpops. 6 Experiments All of our experiments (including those discussed earlier) are based on the Wall Street Journal portion of the Penn Treebank [15]. We use a probabilistic context-free grammar with 370,396 rules?enough to make the baseline system accurate but slow. We obtained it as a latent-variable grammar [16] using 5 split-merge iterations [21] on sections 2?20 of the Treebank, reserving section 22 for learning the parameters of our policy. All approaches to trading off speed and accuracy are trained on section 22; in particular, for the running example and Section 6.2, the same 100 sentences of at most 15 words from that section were used for training and test. We measure accuracy in terms of labeled recall (including preterminals) and measure speed in terms of the number of pops from on the agenda. The limitation to relatively short sentences is purely for improved efficiency at training time. 6.1 Baseline Approaches Our baseline approaches trade off speed and accuracy not by learning to prioritize, but by varying the pruning level ?. A constituent is pruned if its Viterbi inside score is more than ? worse than that of some other constituent that covers the same substring. Our baselines are: (HA? ) a Hierarchical A? parser [18] with the same pruning threshold at each hierarchy level; (A?0 ) an A? parser with a 0 heuristic function plus pruning; (IDA?0 ) an iterative deepening A? algorithm, on which a failure to find any parse causes us to increase ? and try again with less aggressive pruning (note that this is not the traditional meaning of IDA*); and (CTF) the default coarse-to-fine parser in the Berkeley parser [21]. Several of these algorithms can make multiple passes, in which case the runtime (number of pops) is assessed cumulatively. 6.2 Learned Prioritization Approaches Model # of pops Recall F1 We explored four variants of our oracle-infused polA?0 (no pruning) 1496080 93.34 93.19 icy gradient with with ? = 10?6 . Figure 1 shows D686641 56.35 58.74 the result on the 100 training sentences. The ?-? tests I187403 76.48 76.92 are the degenerate case of ? = 1, or apprenticeship D+ 1275292 84.17 83.38 learning (section 4.2), while the ?+? tests use ? = I+ 682540 91.16 91.33 0.8epoch as recommended in section 5. Temperature matters for the ?+? tests and we use temp = 1. We Figure 1: Performance on 100 sentences. performed stochastic gradient descent for 25 passes over the data, sampling 5 trajectories in a row for each sentence (when ? < 1 so that trajectories are random). We can see that the classifier-based approaches ?-? perform poorly: when training trajectories consist of only oracle actions, learning is severely biased. Yet we saw in section 3.2 that without any help from the oracle actions, we suffer from such large variance in the training trajectories that performance degrades rapidly and learning does not converge even after days of training. Our ?oracle-infused? compromise ?+? uses some oracle actions: after several passes through the data, the parser learns to make good decisions without help from the oracle. 
Figure 2: Pareto frontiers: our I+ parser at different values of λ, plotted against the baselines (A*_0, IDA*_0, CTF, HA*) at different pruning levels; the axes are labeled recall versus number of pops.

The other axis of variation is that the "D" tests (delayed reward) use γ = 1, while the "I" tests (immediate reward) use γ = 0. Note that I+ attempts a form of credit assignment and works better than D+.² We were not able to get better results with intermediate values of γ, presumably because this crudely assigns credit for a reward (correct constituent) to the actions that closely preceded it, whereas in our agenda-based parser the causes of the reward (pops of correct subconstituents) may have happened much earlier [8].

6.3 Pareto Frontier

Our final evaluation is on the held-out test set (length-limited sentences from Section 23). A 5-split grammar trained on sections 2-21 is used. Given our previous results in Figure 1, we consider only the I+ model: immediate reward with oracle infusion. To investigate trading off speed and accuracy, we learn and then evaluate a policy for each of several settings of the tradeoff parameter λ. We train our policy using sentences of at most 15 words from Section 22 and evaluate the learned policy on the held-out data (from Section 23). We measure accuracy as labeled constituent recall and evaluate speed in terms of the number of pops (or pushes) performed on the agenda. Figure 2 shows the baselines at different pruning thresholds as well as the performance of our policies trained using I+ for λ ∈ {10^-3, 10^-4, ..., 10^-8}, using agenda pops as the measure of time. I+ is about 3 times as fast as unpruned A*_0 at the cost of about a 1% drop in accuracy (F-score from 94.58 to 93.56). Thus, I+ achieves the same accuracy as the pruned version of A*_0 while still being twice as fast. I+ also improves upon HA* and IDA*_0 with respect to speed, at 60% of the pops. I+ always does better than the coarse-to-fine parser (CTF) in terms of both speed and accuracy, though using the number of agenda pops as our measure of speed puts both of our hierarchical baselines at a disadvantage. We also ran experiments using the number of agenda pushes as a more accurate measure of time, again sweeping over settings of λ. Since our reward shaping was crafted with agenda pops in mind, it is perhaps not surprising that learning performs relatively poorly in this setting. Still, we do manage to learn to trade off speed and accuracy: with a 1% drop in recall (F-score from 94.58 to 93.54), we speed up over A*_0 by a factor of 4 (from around 8 billion pushes to 2 billion). Note that known pruning methods could also be employed in conjunction with learned prioritization.
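Tracing such a frontier amounts to retraining and re-evaluating at each tradeoff setting; a trivial sketch (ours), with `train` and `evaluate` as hypothetical hooks:

def pareto_sweep(train, evaluate, lambdas=(1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8)):
    """Train one I+ policy per tradeoff setting and record its (pops, recall)."""
    frontier = []
    for lam in lambdas:
        theta = train(lam)             # e.g., oracle-infused policy gradient at this lambda
        pops, recall = evaluate(theta)
        frontier.append((lam, pops, recall))
    return frontier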
7 Conclusions and Future Work

In this paper, we considered the application of both reinforcement learning and apprenticeship learning to prioritizing search in a way that is sensitive to a user-defined tradeoff between speed and accuracy. We found that a novel oracle-infused variant of the policy gradient algorithm for reinforcement learning is effective for learning a fast and accurate parser with only a simple set of features. In addition, we uncovered many properties of this problem that separate it from more standard learning scenarios, and designed experiments to determine the reasons off-the-shelf learning algorithms fail. An important avenue for future work is to consider better credit assignment. We are also very interested in designing richer feature sets, including "dynamic" features that depend on both the action and the state of the chart and agenda. One role for dynamic features is to decide when to halt: the parser might decide to continue working past the first complete parse, or to give up (returning a partial or default parse) before any complete parse is found.

² The D- and I- approaches are quite similar to each other. Both train on oracle trajectories where all actions receive a reward of 1 − λ, and simply try to make these oracle actions probable. However, D- trains more aggressively on long trajectories, since (9) implies that it weights a given training action by T − t + 1, the number of future actions on that trajectory. The difference between D+ and I+ is more interesting, because the trajectory includes non-oracle actions as well.

References

[1] Pieter Abbeel and Andrew Ng. Apprenticeship learning via inverse reinforcement learning. In ICML, 2004.
[2] J. Andrew Bagnell. Robust supervised learning. In AAAI, 2005.
[3] Nathan Bodenstab, Aaron Dunlop, Keith Hall, and Brian Roark. Beam-width prediction for efficient CYK parsing. In ACL, 2011.
[4] Sharon A. Caraballo and Eugene Charniak. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, 24(2):275-298, 1998.
[5] Eugene Charniak. Top-down nearly-context-sensitive parsing. In EMNLP, 2010.
[6] Michael Collins and Brian Roark. Incremental parsing with the perceptron algorithm. In ACL, 2004.
[7] Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 75(3):297-325, 2009.
[8] Jason Eisner and Hal Daumé III. Learning speed-accuracy tradeoffs in nondeterministic inference algorithms. In COST: NIPS Workshop on Computational Trade-offs in Statistical Learning, 2011.
[9] Joshua Goodman. Semiring parsing. Computational Linguistics, 25(4):573-605, December 1999.
[10] V. Gullapalli and A. G. Barto. Shaping as a method for accelerating reinforcement learning. In Proceedings of the IEEE International Symposium on Intelligent Control, 1992.
[11] R. Kalman. Contributions to the theory of optimal control. Bol. Soc. Mat. Mexicana, 5:558-563, 1968.
[12] Martin Kay. Algorithm schemata and data structures in syntactic processing. In B. J. Grosz, K. Sparck Jones, and B. L. Webber, editors, Readings in Natural Language Processing, pages 35-70. Kaufmann, 1986. First published (1980) as Xerox PARC TR CSL-80-12.
[13] Dan Klein and Chris Manning. A* parsing: Fast exact Viterbi parse selection. In NAACL/HLT, 2003.
[14] Percy Liang, Hal Daumé III, and Dan Klein. Structure compilation: Trading structure for features. In ICML, Helsinki, Finland, 2008.
[15] M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):330, 1993.
[16] Takuya Matsuzaki, Yusuke Miyao, and Junichi Tsujii. Probabilistic CFG with latent annotations. In ACL, 2005.
[17] Andrew Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In ICML, 2000.
[18] A. Pauls and D. Klein. Hierarchical search for parsing. In NAACL/HLT, pages 557-565. Association for Computational Linguistics, 2009.
[19] A. Pauls and D. Klein. Hierarchical A* parsing with bridge outside scores. In ACL, pages 348-352. Association for Computational Linguistics, 2010.
[20] Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4), 2008.
[21] S. Petrov and D. Klein. Improved inference for unlexicalized parsing. In NAACL/HLT, pages 404-411, 2007.
[22] B. Roark, K. Hollingshead, and N. Bodenstab. Finite-state chart constraints for reduced complexity context-free parsing pipelines. Computational Linguistics, Early Access:1-35, 2012.
[23] Brian Roark and Kristy Hollingshead. Classifying chart cells for quadratic complexity context-free inference. In COLING, pages 745-752, Manchester, UK, August 2008.
[24] Stephane Ross, Geoff J. Gordon, and J. Andrew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
[25] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[26] Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pages 1057-1063. MIT Press, 2000.
[27] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(23), 1992.
[28] Yuehua Xu and Alan Fern. On learning linear ranking functions for beam search. In ICML, pages 1047-1054, 2007.
[29] D. H. Younger. Recognition and parsing of context-free languages in time n³. Information and Control, 10(2):189-208, February 1967.
Putting Bayes to sleep

Wouter M. Koolen*   Dmitry Adamskiy†   Manfred K. Warmuth‡

* Supported by NWO Rubicon grant 680-50-1010.
† Supported by the Veterinary Laboratories Agency of the Department for Environment, Food and Rural Affairs.
‡ Supported by NSF grant IIS-0917397.

Abstract

We consider sequential prediction algorithms that are given the predictions from a set of models as inputs. If the nature of the data is changing over time in that different models predict well on different segments of the data, then adaptivity is typically achieved by mixing into the weights in each round a bit of the initial prior (kind of like a weak restart). However, what if the favored models in each segment are from a small subset, i.e. the data is likely to be predicted well by models that predicted well before? Curiously, fitting such "sparse composite models" is achieved by mixing in a bit of all the past posteriors. This self-referential updating method is rather peculiar, but it is efficient and gives superior performance on many natural data sets. Also it is important because it introduces a long-term memory: any model that has done well in the past can be recovered quickly. While Bayesian interpretations can be found for mixing in a bit of the initial prior, no Bayesian interpretation is known for mixing in past posteriors. We build atop the "specialist" framework from the online learning literature to give the Mixing Past Posteriors update a proper Bayesian foundation. We apply our method to a well-studied multitask learning problem and obtain a new intriguing efficient update that achieves a significantly better bound.

1 Introduction

We consider sequential prediction of outcomes $y_1, y_2, \ldots$ using a set of models $m = 1, \ldots, M$ for this task. In practice $m$ could range over a mix of human experts, parametric models, or even complex machine learning algorithms. In any case we denote the prediction of model $m$ for outcome $y_t$ given past observations $y_{<t} = (y_1, \ldots, y_{t-1})$ by $P(y_t \mid y_{<t}, m)$. The goal is to design a computationally efficient predictor $P(y_t \mid y_{<t})$ that maximally leverages the predictive power of these models as measured in log loss. The yardstick in this paper is a notion of regret defined w.r.t. a given comparator class of models or composite models: it is the additional loss of the predictor over the best comparator. For example if the comparator class is the set of base models $m = 1, \ldots, M$, then the regret for a sequence of $T$ outcomes $y_{\le T} = (y_1, \ldots, y_T)$ is
$$R \;:=\; \sum_{t=1}^{T} -\ln P(y_t \mid y_{<t}) \;-\; \min_{m=1,\ldots,M} \sum_{t=1}^{T} -\ln P(y_t \mid y_{<t}, m).$$
The Bayesian predictor (detailed below) with uniform model prior has regret at most $\ln M$ for all $T$.

Typically the nature of the data is changing with time: in an initial segment one model predicts well, followed by a second segment in which another model has small loss and so forth. For this scenario the natural comparator class is the set of partition models which divide the sequence of $T$ outcomes into $B$ segments and specify the model that predicts in each segment. By running Bayes on all exponentially many partition models comprising the comparator class, we can guarantee regret $\ln\binom{T-1}{B-1} + B \ln M$, which is optimal. The goal then is to find efficient algorithms with approximately the same guarantee as full Bayes. In this case this is achieved by the Fixed Share [HW98] predictor.
It assigns a certain prior to all partition models for which the exponentially many posterior weights collapse to $M$ posterior weights that can be maintained efficiently. Modifications of this algorithm achieve essentially the same bound for all $T$, $B$ and $M$ simultaneously [VW98, KdR08].

In an open problem Yoav Freund [BW02] asked whether there are algorithms that have small regret against sparse partition models where the base models allocated to the segments are from a small subset of $N$ of the $M$ models. The Bayes algorithm when run on all such partition models achieves regret $\ln\binom{M}{N} + \ln\binom{T-1}{B-1} + B \ln N$, but contrary to the non-sparse case, emulating this algorithm is NP-hard. However in a breakthrough paper, Bousquet and Warmuth in 2001 [BW02] gave the efficient MPP algorithm with only a slightly weaker regret bound. Like Fixed Share, MPP maintains $M$ "posterior" weights, but it instead mixes in a bit of all past posteriors in each update. This causes weights of previously good models to "glow" a little bit, even if they perform badly locally. When the data later favors one of those good models, its weight is pulled up quickly. However the term "posterior" is a misnomer because no Bayesian interpretation for this curious self-referential update was known. Understanding the MPP update is a very important problem because in many practical applications [HLSS00, GWBA02]¹ it significantly outperforms Fixed Share.

Our main philosophical contribution is finding a Bayesian interpretation for MPP. We employ the specialist framework from online learning [FSSW97, CV09, CKZV10]. So-called specialist models are either awake or asleep. When they are awake, they predict as usual. However when they are asleep, they "go with the rest", i.e. they predict with the combined prediction of all awake models.

Instead of fully coordinated partition models, we construct partition specialists consisting of a base model and a set of segments where this base model is awake. The figure to the right shows how a comparator partition model is assembled from partition specialists. [Figure: (a) a comparator partition model over $T$ outcomes, giving the segmentation and model assignment; (b) its decomposition into 3 partition specialists, asleep at the shaded times.] We can emulate Bayes on all partition specialists; NP-completeness is avoided by forgoing a-priori segment synchronization. By carefully choosing the prior, the exponentially many posterior weights collapse to the small number of weights used by the efficient MPP algorithm. Our analysis technique magically aggregates the contribution of the $N$ partition specialists that constitute the comparator partition, showing that we achieve regret close to the regret of Bayes when run on all full partition models. Actually our new insights into the nature of MPP result in slightly improved regret bounds.

We then apply our methods to an online multitask learning problem where a small subset of models from a big set solve a large number of tasks. Again simulating Bayes on all sparse assignments of models to tasks is NP-hard. We split an assignment into subset specialists that assign a single base model to a subset of tasks. With the right prior, Bayes on these subset specialists again gently collapses to an efficient algorithm with a regret bound not much larger than Bayes on all assignments. This considerably improves the previous regret bound of [ABR07]. Our algorithm simply maintains one weight per model/task pair and does not rely on sampling (often used for multitask learning).
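To preview the difference in mechanics, the following schematic numpy sketch (our illustration, with a generic mixing distribution rather than the tuned schemes analyzed later) contrasts the two updates: Fixed Share folds a bit of the initial prior back in each round, whereas MPP folds in a bit of every past posterior.

```python
import numpy as np

def fixed_share_step(w_post, alpha):
    """Fixed Share: mix a bit of the uniform initial prior into the posterior."""
    M = len(w_post)
    return (1 - alpha) * w_post + alpha / M

def mpp_step(past_posteriors, gamma):
    """Mixing Past Posteriors: the new predictive weights are a gamma-mixture
    of ALL stored past posteriors (past_posteriors[s] is the posterior after
    round s, and gamma is a distribution over those rounds)."""
    return sum(g * v for g, v in zip(gamma, past_posteriors))
```

The long-term memory described above comes from `mpp_step`: a model whose old posteriors were large keeps a residual weight in every future round, so it can be pulled back up quickly.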
Why is this line of research important? We found a new intuitive Bayesian method to quickly recover information that was learned before, allowing us to exploit sparse composite models. Moreover, it expressly avoids computational hardness by splitting composite models into smaller constituent "specialists" that are asleep in time steps outside their jurisdiction. This method clearly beats Fixed Share when few base models constitute a partition, i.e. the composite models are sparse. We expect this methodology to become a main tool for making Bayesian prediction adapt to sparse models. The goal is to develop general tools for adding this type of adaptivity to existing Bayesian models without losing efficiency. It also lets us look again at the updates used in Nature in a new light, where species/genes cannot dare adapt too quickly to the current environment and must guard themselves against an environment that changes or fluctuates at a large scale. Surprisingly these types of updates might now be amenable to a Bayesian analysis. For example, it might be possible to interpret sex and the double stranded recessive/dominant gene device employed by Nature as a Bayesian update of genes that are either awake or asleep.

¹ The experiments reported in [HLSS00] are based on precursors of MPP. However MPP outperforms these algorithms in later experiments we have done on natural data for the same problem (not shown).

2 Bayes and Specialists

We consider sequential prediction of outcomes $y_1, y_2, \ldots$ from a finite alphabet. Assume that we have access to a collection of models $m = 1, \ldots, M$ with data likelihoods $P(y_1, y_2, \ldots \mid m)$. We then design a prior $P(m)$ with roughly two goals in mind: the Bayes algorithm should "collapse" (become efficient) and have a good regret bound. After observing past outcomes $y_{<t} := (y_1, \ldots, y_{t-1})$, the next outcome $y_t$ is predicted by the predictive distribution $P(y_t \mid y_{<t})$, which averages the model predictions $P(y_t \mid y_{<t}, m)$ according to the posterior distribution $P(m \mid y_{<t})$:
$$P(y_t \mid y_{<t}) \;=\; \sum_{m=1}^{M} P(y_t \mid y_{<t}, m)\, P(m \mid y_{<t}), \qquad \text{where} \quad P(m \mid y_{<t}) \;=\; \frac{P(y_{<t} \mid m)\, P(m)}{P(y_{<t})}.$$
The latter is conveniently updated step-wise: $P(m \mid y_t, y_{<t}) = P(y_t \mid y_{<t}, m)\, P(m \mid y_{<t}) / P(y_t \mid y_{<t})$.

The log loss of the Bayesian predictor on data $y_{\le T} := (y_1, \ldots, y_T)$ is the cumulative loss of the predictive distributions, and this readily relates to the cumulative loss of any model $\hat m$:
$$\underbrace{-\ln P(y_{\le T})}_{\sum_{t=1}^{T} -\ln P(y_t \mid y_{<t})} \;=\; -\ln\Big(\sum_{m=1}^{M} P(y_{\le T} \mid m)\, P(m)\Big) \;\le\; \underbrace{-\ln P(y_{\le T} \mid \hat m)}_{\sum_{t=1}^{T} -\ln P(y_t \mid y_{<t}, \hat m)} \;-\; \ln P(\hat m).$$
That is, the additional loss (or regret) of Bayes w.r.t. model $\hat m$ is at most $-\ln P(\hat m)$. The uniform prior $P(m) = 1/M$ ensures regret at most $\ln M$ w.r.t. any model $\hat m$. This is a so-called individual sequence result, because no probabilistic assumptions were made on the data. Our main results will make essential use of the following fancier weighted notion of regret. Here $U(m)$ is any distribution on the models and $\triangle(U(m) \,\|\, P(m))$ denotes the relative entropy $\sum_{m=1}^{M} U(m) \ln \frac{U(m)}{P(m)}$ between the distributions $U(m)$ and $P(m)$:
$$\sum_{m=1}^{M} U(m) \Big( {-\ln P(y_{\le T})} - \big({-\ln P(y_{\le T} \mid m)}\big) \Big) \;=\; \triangle\big(U(m) \,\|\, P(m)\big) \;-\; \triangle\big(U(m) \,\|\, P(m \mid y_{\le T})\big). \tag{1}$$
By dropping the subtracted positive term we get an upper bound. The previous regret bound is now the special case when $U$ is concentrated on model $\hat m$. However when multiple models are good we achieve tighter regret bounds by letting $U$ be the uniform distribution on all of them.
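To make the collapse from formulas to computation concrete, here is a minimal numpy sketch of the Bayesian predictor just described. It assumes the models report, each round, the probability they assigned to the outcome that actually occurred; the function name and array layout are ours, not from the paper.

```python
import numpy as np

def bayes_mixture(expert_probs, prior=None):
    """Sequential Bayes mixture over M models.

    expert_probs: array of shape (T, M); entry (t, m) is P(y_t | y_{<t}, m),
    the probability model m assigned to the outcome observed at time t.
    Returns the total log loss of the mixture and the final posterior.
    """
    T, M = expert_probs.shape
    w = np.full(M, 1.0 / M) if prior is None else prior.copy()  # P(m | y_{<t})
    total_loss = 0.0
    for t in range(T):
        pred = w @ expert_probs[t]       # P(y_t | y_{<t}), the mixture prediction
        total_loss += -np.log(pred)      # log loss of the Bayes predictor
        w = w * expert_probs[t] / pred   # step-wise posterior update (Bayes rule)
    return total_loss, w
```

By the bound above, with the uniform prior `total_loss` never exceeds the cumulative log loss of the best single model by more than $\ln M$.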
Specialists  We now consider a complication of the prediction task, which was introduced in the online learning literature under the name specialists [FSSW97]. The Bayesian algorithm, adapted to this task, will serve as the foundation of our main results. The idea is that in practice the predictions $P(y_t \mid y_{<t}, m)$ of some models may be unavailable. Human forecasters may be specialized, unreachable or too expensive, algorithms may run out of memory or simply take too long. We call models that may possibly abstain from prediction specialists. The question is how to produce quality predictions from the predictions that are available. We will denote by $W_t$ the set of specialists whose predictions are available at time $t$, and call them awake and the others asleep.

The crucial idea, introduced in [CV09], is to assign to the sleeping specialists the prediction $P(y_t \mid y_{<t})$. But wait! That prediction $P(y_t \mid y_{<t})$ is defined to average all model predictions, including those of the sleeping specialists, which we just defined to be $P(y_t \mid y_{<t})$:
$$P(y_t \mid y_{<t}) \;=\; \sum_{m \in W_t} P(y_t \mid y_{<t}, m)\, P(m \mid y_{<t}) \;+\; \sum_{m \notin W_t} P(y_t \mid y_{<t})\, P(m \mid y_{<t}).$$
Although this equation is self-referential, it does have a unique solution, namely
$$P(y_t \mid y_{<t}) \;:=\; \frac{\sum_{m \in W_t} P(y_t \mid y_{<t}, m)\, P(m \mid y_{<t})}{P(W_t \mid y_{<t})}.$$
Thus the sleeping specialists are assigned the average prediction of the awake ones. This completes them to full models to which we can apply the unaltered Bayesian method as before. At first this may seem like a kludge, but actually this phenomenon arises naturally wherever concentrations are manipulated. For example, in a democracy abstaining essentially endorses the vote of the participating voters, or in Nature unexpressed genes reproduce at rates determined by the active genes of the organism. The effect of abstaining on the update of the posterior weights is also intuitive: weights of asleep specialists are unaffected, whereas weights of awake models are updated with Bayes' rule and then renormalised to the original weight of the awake set:
$$P(m \mid y_{\le t}) \;=\; \begin{cases} \dfrac{P(y_t \mid y_{<t}, m)\, P(m \mid y_{<t})}{P(y_t \mid y_{<t})} \;=\; \dfrac{P(y_t \mid y_{<t}, m)\, P(m \mid y_{<t})}{\sum_{m' \in W_t} P(y_t \mid y_{<t}, m')\, P(m' \mid y_{<t})}\; P(W_t \mid y_{<t}) & \text{if } m \in W_t, \\[2ex] P(m \mid y_{<t}) & \text{if } m \notin W_t. \end{cases} \tag{2}$$
To obtain regret bounds in the specialist setting, we use the fact that sleeping specialists $m \notin W_t$ are defined to predict $P(y_t \mid y_{<t}, m) := P(y_t \mid y_{<t})$ like the Bayesian aggregate. Now (1) becomes:

Theorem 1 ([FSSW97, Theorem 1]). Let $U(m)$ be any distribution on a set of specialists with wake sets $W_1, W_2, \ldots$ Then for any $T$, Bayes guarantees
$$\sum_{m=1}^{M} U(m) \Bigg( \sum_{t \le T:\ m \in W_t} -\ln P(y_t \mid y_{<t}) \;-\; \sum_{t \le T:\ m \in W_t} -\ln P(y_t \mid y_{<t}, m) \Bigg) \;\le\; \triangle\big(U(m) \,\|\, P(m)\big).$$

3 Sparse partition learning

We design efficient predictors with small regret compared to the best sparse partition model. We do this by constructing partition specialists from the input models and obtain a proper Bayesian predictor by averaging their predictions. We consider two priors. With the first prior we obtain the Mixing Past Posteriors (MPP) algorithm, giving it a Bayesian interpretation and slightly improving its regret bound. We then develop a new Markov chain prior. Bayes with this prior collapses to an efficient algorithm for which we prove the best known regret bound compared to sparse partitions.
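A minimal sketch of update (2), assuming the weights and per-round probabilities are stored in numpy arrays (the names are ours):

```python
import numpy as np

def specialist_step(w, probs, awake):
    """One round of Bayes over specialists, following update (2).

    w:     posterior weights P(m | y_{<t}) over all specialists.
    probs: length-M array; probs[m] = P(y_t | y_{<t}, m), read only for awake m.
    awake: boolean mask of the wake set W_t.
    Returns the prediction P(y_t | y_{<t}) and the updated weights.
    """
    w_awake = w[awake]
    pred = (w_awake @ probs[awake]) / w_awake.sum()   # average of awake predictions
    w_new = w.copy()                                  # asleep weights unchanged
    updated = w_awake * probs[awake]                  # Bayes update of awake weights...
    w_new[awake] = updated / updated.sum() * w_awake.sum()  # ...renormalised to old awake mass
    return pred, w_new
```

One can check that the total weight of the awake set is conserved, which is exactly why the asleep specialists are unaffected by the round.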
Construction  Each partition specialist $(\sigma, m)$ is parameterized by a model index $m$ and a circadian (wake/sleep pattern) $\sigma = (\sigma_1, \sigma_2, \ldots)$ with $\sigma_t \in \{\mathrm{w}, \mathrm{s}\}$. We use infinite circadians in order to obtain algorithms that do not depend on a time horizon. The wake set $W_t$ includes all partition specialists that are awake at time $t$, i.e. $W_t := \{(\sigma, m) \mid \sigma_t = \mathrm{w}\}$. An awake specialist $(\sigma, m)$ in $W_t$ predicts as the base model $m$, i.e. $\mathbb{P}(y_t \mid y_{<t}, (\sigma, m)) := P(y_t \mid y_{<t}, m)$. The Bayesian joint distribution $\mathbb{P}$ is completed² by choosing a prior on partition specialists. In this paper we enforce the independence $\mathbb{P}(\sigma, m) := \mathbb{P}(\sigma)\mathbb{P}(m)$ and define $\mathbb{P}(m) := 1/M$ uniform on the base models. We now can apply Theorem 1 to bound the regret w.r.t. any partition model with time horizon $T$ by decomposing it into $N$ partition specialists $(\hat\sigma^1_{\le T}, \hat m_1), \ldots, (\hat\sigma^N_{\le T}, \hat m_N)$ and choosing $U(\cdot) = 1/N$ uniform on these specialists:
$$R \;\le\; N \ln\frac{M}{N} + \sum_{n=1}^{N} -\ln \mathbb{P}(\hat\sigma^n_{\le T}). \tag{3}$$
The overhead $N \ln\frac{M}{N}$ of selecting $N$ reference models from the pool of size $M$ closely approximates the information-theoretic ideal $\ln\binom{M}{N}$. This improves previous regret bounds [BW02, ABR07, CBGLS12] by an additive $N \ln N$. Next we consider two choices for $\mathbb{P}(\sigma)$: one for which we retrieve MPP, and a natural one which leads to efficient algorithms and sharper bounds.

3.1 A circadian prior equivalent to Mixing Past Posteriors

The Mixing Past Posteriors algorithm is parameterized by a so-called mixing scheme, which is a sequence $\gamma_1, \gamma_2, \ldots$ of distributions, each $\gamma_t$ with support $\{0, \ldots, t-1\}$. MPP predicts outcome $y_t$ with $\mathrm{Pred}_t(y_t) := \sum_{m=1}^{M} P(y_t \mid y_{<t}, m)\, v_t(m)$, i.e. by averaging the model predictions with weights $v_t(m)$ defined recursively by
$$v_t(m) := \sum_{s=0}^{t-1} \hat v_{s+1}(m)\, \gamma_t(s), \qquad \text{where} \quad \hat v_1(m) := \frac{1}{M} \quad \text{and} \quad \hat v_{t+1}(m) := \frac{P(y_t \mid y_{<t}, m)\, v_t(m)}{\mathrm{Pred}_t(y_t)}.$$

² From here on we use the symbol $\mathbb{P}$ for the Bayesian joint to avoid a fundamental ambiguity: $\mathbb{P}(y_t \mid y_{<t}, m)$ does not equal the prediction $P(y_t \mid y_{<t}, m)$ of the input model $m$, since it averages over both asleep and awake specialists $(\sigma, m)$. The predictions of base models are now recovered as $\mathbb{P}(y_t \mid y_{<t}, W_t, m) = P(y_t \mid y_{<t}, m)$.

The auxiliary distribution $\hat v_{t+1}(m)$ is formally the (incremental) posterior from prior $v_t(m)$. The predictive weights $v_t(m)$ are then the pre-specified $\gamma_t$-mixture of all such past posteriors. To make the Bayesian predictor equal to MPP, we define from the MPP mixing scheme a circadian prior measure $\mathbb{P}(\sigma)$ that puts mass only on sequences with a finite nonzero number of w's, by
$$\mathbb{P}(\sigma) := \frac{1}{s_J(s_J + 1)} \prod_{j=1}^{J} \gamma_{s_j}(s_{j-1}), \qquad \text{where } s_1 < \cdots < s_J \text{ are the indices of the w's in } \sigma \text{ and } s_0 = 0. \tag{4}$$
We built the independence $m \perp \sigma$ into the prior $\mathbb{P}(\sigma, m)$, and (4) ensures $\sigma_{<t} \perp \sigma_{>t} \mid \sigma_t = \mathrm{w}$ for all $t$. Since the outcomes $y_{\le t}$ are a stochastic function of $m$ and $\sigma_{\le t}$, the Bayesian joint satisfies
$$y_{\le t}, m \;\perp\; \sigma_{>t} \;\mid\; \sigma_t = \mathrm{w} \qquad \text{for all } t. \tag{5}$$

Theorem 2. Let $\mathrm{Pred}_t(y_t)$ be the prediction of MPP for some mixing scheme $\gamma_1, \gamma_2, \ldots$ Let $\mathbb{P}(y_t \mid y_{<t})$ be the prediction of Bayes with prior (4). Then for all outcomes $y_{\le t}$,
$$\mathrm{Pred}_t(y_t) = \mathbb{P}(y_t \mid y_{<t}).$$

Proof. Partition the event $W_t = \{\sigma_t = \mathrm{w}\}$ into $Z_q^t := \{\sigma_t = \sigma_q = \mathrm{w} \text{ and } \sigma_r = \mathrm{s} \text{ for all } q < r < t\}$ for all $0 \le q < t$, with the convention that $\sigma_0 = \mathrm{w}$. We first establish that the Bayesian joint with prior (4) satisfies $y_{\le t} \perp W_t$ for all $t$. Namely, by induction on $t$, for all $q < t$
$$\mathbb{P}(y_{<t} \mid Z_q^t) \;\overset{(5)}{=}\; \mathbb{P}(y_{<t} \mid y_{\le q})\, \mathbb{P}(y_{\le q} \mid Z_q^t) \;=\; \mathbb{P}(y_{<t} \mid y_{\le q})\, \mathbb{P}(y_{\le q} \mid W_q) \;\overset{\text{induction}}{=}\; \mathbb{P}(y_{<t}),$$
and therefore $\mathbb{P}(y_{\le t} \mid W_t) = \mathbb{P}(y_t \mid y_{<t}) \sum_{q=0}^{t-1} \mathbb{P}(y_{<t} \mid Z_q^t)\, \mathbb{P}(Z_q^t \mid W_t) = \mathbb{P}(y_{\le t})$, i.e. $y_{\le t} \perp W_t$.
The theorem will be implied by the stronger claim $v_t(m) = \mathbb{P}(m \mid y_{<t}, W_t)$, which we again prove by induction on $t$. The case $t = 1$ is trivial. For $t > 1$, we expand the right-hand side, apply (5), use the independence we just proved, and the fact that asleep specialists predict with the rest:
$$\mathbb{P}(m \mid y_{<t}, W_t) \;=\; \sum_{q=0}^{t-1} \mathbb{P}(m \mid y_{\le q}, W_q)\, \mathbb{P}(Z_q^t \mid W_t) \;=\; \sum_{q=0}^{t-1} \frac{P(y_q \mid y_{<q}, m)\, \mathbb{P}(m \mid y_{<q}, W_q)}{\mathbb{P}(y_q \mid y_{<q})}\, \mathbb{P}(Z_q^t \mid W_t),$$
where the factors involving the outcomes after time $q$ cancel. By (4), $\mathbb{P}(Z_q^t \mid W_t) = \gamma_t(q)$, and the proof is completed by applying the induction hypothesis.

The proof of the theorem provides a Bayesian interpretation of all the MPP weights: $v_t(m) = \mathbb{P}(m \mid y_{<t}, W_t)$ is the predictive distribution, $\hat v_{t+1}(m) = \mathbb{P}(m \mid y_{\le t}, W_t)$ is the posterior, and $\gamma_t(q) = \mathbb{P}(Z_q^t \mid W_t)$ is the conditional probability of the previous awake time.

3.2 A simple Markov chain circadian prior

In the previous section we recovered circadian priors corresponding to the MPP mixing schemes. Here we design priors afresh from first principles. Our goal is efficiency and good regret bounds. A simple and intuitive choice for prior $\mathbb{P}(\sigma)$ is a Markov chain on states $\{\mathrm{w}, \mathrm{s}\}$ with initial distribution $\theta(\cdot)$ and transition probabilities $\theta(\cdot \mid \mathrm{w})$ and $\theta(\cdot \mid \mathrm{s})$, that is
$$\mathbb{P}(\sigma_{\le t}) := \theta(\sigma_1) \prod_{s=2}^{t} \theta(\sigma_s \mid \sigma_{s-1}). \tag{6}$$
By choosing low transition probabilities we obtain a prior that favors temporal locality in that it allocates high probability to circadians that are awake and asleep in contiguous segments. Thus if a good sparse partition model exists for the data, our algorithm will pick up on this and predict well. The resulting Bayesian strategy (aggregating infinitely many specialists) can be executed efficiently.

Theorem 3. The prediction $\mathbb{P}(y_t \mid y_{<t})$ of Bayes with Markov prior (6) equals the prediction $\mathrm{Pred}_t(y_t)$ of Algorithm 1, which can be computed in $O(M)$ time per outcome using $O(M)$ space.

Proof. We prove by induction on $t$ that $v_t(b, m) = \mathbb{P}(\sigma_t = b, m \mid y_{<t})$ for each model $m$ and $b \in \{\mathrm{w}, \mathrm{s}\}$. The base case $t = 1$ is automatic. For the induction step we expand
$$\mathbb{P}(\sigma_{t+1} = b, m \mid y_{\le t}) \;\overset{(6)}{=}\; \theta(b \mid \mathrm{w})\, \mathbb{P}(\sigma_t = \mathrm{w}, m \mid y_{\le t}) + \theta(b \mid \mathrm{s})\, \mathbb{P}(\sigma_t = \mathrm{s}, m \mid y_{\le t})$$
$$\overset{(2)}{=}\; \theta(b \mid \mathrm{w})\, \frac{\mathbb{P}(\sigma_t = \mathrm{w}, m \mid y_{<t})\, P(y_t \mid y_{<t}, m)}{\sum_{i=1}^{M} \mathbb{P}(i \mid \sigma_t = \mathrm{w}, y_{<t})\, P(y_t \mid y_{<t}, i)} + \theta(b \mid \mathrm{s})\, \mathbb{P}(\sigma_t = \mathrm{s}, m \mid y_{<t}).$$
By applying the induction hypothesis we obtain the update rule for $v_{t+1}(b, m)$.

Algorithm 1  Bayes with Markov circadian prior (6) (for Freund's problem)
  Input: distributions $\theta(\cdot)$, $\theta(\cdot \mid \mathrm{w})$ and $\theta(\cdot \mid \mathrm{s})$ on $\{\mathrm{w}, \mathrm{s}\}$.
  Initialize $v_1(b, m) := \theta(b)/M$ for each model $m$ and $b \in \{\mathrm{w}, \mathrm{s}\}$.
  for $t = 1, 2, \ldots$ do
    Receive the prediction $P(y_t \mid y_{<t}, m)$ of each model $m$.
    Predict with $\mathrm{Pred}_t(y_t) := \sum_{m=1}^{M} P(y_t \mid y_{<t}, m)\, v_t(m \mid \mathrm{w})$, where $v_t(m \mid \mathrm{w}) = \frac{v_t(\mathrm{w}, m)}{\sum_{m'=1}^{M} v_t(\mathrm{w}, m')}$.
    Observe outcome $y_t$ and suffer loss $-\ln \mathrm{Pred}_t(y_t)$.
    Update $v_{t+1}(b, m) := \theta(b \mid \mathrm{w})\, \frac{P(y_t \mid y_{<t}, m)}{\mathrm{Pred}_t(y_t)}\, v_t(\mathrm{w}, m) + \theta(b \mid \mathrm{s})\, v_t(\mathrm{s}, m)$.
  end for
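For concreteness, a direct numpy transcription of Algorithm 1 might look as follows. As in the sketches above, it assumes `expert_probs[t, m]` holds $P(y_t \mid y_{<t}, m)$ for the realized outcome; the dictionary encoding of $\theta$ is our own representation.

```python
import numpy as np

def markov_circadian_bayes(expert_probs, theta_init, theta_w, theta_s):
    """Algorithm 1: Bayes with the Markov circadian prior (6).

    expert_probs: (T, M) array of model probabilities for the realized outcomes.
    theta_init:   dict over {'w', 's'}; the initial distribution theta(.).
    theta_w:      dict; theta_w[b] = theta(b | w), transitions out of wake.
    theta_s:      dict; theta_s[b] = theta(b | s), transitions out of sleep.
    Returns the per-round mixture predictions Pred_t(y_t).
    """
    T, M = expert_probs.shape
    v = {b: np.full(M, theta_init[b] / M) for b in ('w', 's')}  # v_1(b, m)
    preds = np.empty(T)
    for t in range(T):
        w_mass = v['w'].sum()
        pred = (v['w'] / w_mass) @ expert_probs[t]  # sum_m P(y_t|m) v_t(m|w)
        preds[t] = pred
        bayes = v['w'] * expert_probs[t] / pred     # Bayes-updated awake weights
        # v_{t+1}(b, m) = theta(b|w) * bayes + theta(b|s) * v_t(s, m)
        v = {b: theta_w[b] * bayes + theta_s[b] * v['s'] for b in ('w', 's')}
    return preds
```

With the tuning of Theorem 4 below, one would pass `theta_init = {'w': 1/N, 's': 1 - 1/N}` and the stated transition probabilities.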
The previous theorem establishes that we can predict fast. Next we show that we predict well.

Theorem 4. Let $\hat m_1, \ldots, \hat m_T$ be an $N$-sparse assignment of $M$ models to $T$ times with $B$ segments. The regret of Bayes (Algorithm 1) with tuning $\theta(\mathrm{w}) = 1/N$, $\theta(\mathrm{s} \mid \mathrm{w}) = \frac{B-1}{T-1}$ and $\theta(\mathrm{w} \mid \mathrm{s}) = \frac{B-1}{(N-1)(T-1)}$ is at most
$$R \;\le\; N \ln\frac{M}{N} + N\, H\!\Big(\frac{1}{N}\Big) + (T-1)\, H\!\Big(\frac{B-1}{T-1}\Big) + (N-1)(T-1)\, H\!\Big(\frac{B-1}{(N-1)(T-1)}\Big),$$
where $H(p) := -p \ln p - (1-p) \ln(1-p)$ is the binary entropy function.

Proof. Without loss of generality assume $\hat m_t \in \{1, \ldots, N\}$. For each reference model $n$ pick the circadian $\hat\sigma^n_{\le T}$ with $\hat\sigma^n_t = \mathrm{w}$ iff $\hat m_t = n$. Expanding the definition of the prior (6) we find
$$\prod_{n=1}^{N} \mathbb{P}(\hat\sigma^n_{\le T}) \;=\; \theta(\mathrm{w})\, \theta(\mathrm{s})^{N-1}\, \theta(\mathrm{s} \mid \mathrm{s})^{(N-1)(T-1)-(B-1)}\, \theta(\mathrm{w} \mid \mathrm{w})^{T-B}\, \theta(\mathrm{w} \mid \mathrm{s})^{B-1}\, \theta(\mathrm{s} \mid \mathrm{w})^{B-1},$$
which is in fact maximized by the proposed tuning. The theorem follows from (3).

The information-theoretic ideal regret is $\ln\binom{M}{N} + \ln\binom{T-1}{B-1} + B \ln N$. Theorem 4 is very close to this except for a factor of 2 in front of the middle term; since $n\, H(k/n) \le k \ln(n/k) + k$ we have
$$R \;\le\; N \ln\frac{M}{N} + 2(B-1)\ln\frac{T-1}{B-1} + B \ln N + 2B.$$
The origin of this factor remained a mystery in [BW02], but becomes clear in our analysis: it is the price of coordination between the specialists that constitute the best partition model. To see this, let us regard a circadian as a sequence of wake/sleep transition times. With this viewpoint, (3) bounds the regret by summing the prior costs of all the reference wake/sleep transition times. This means that we incur overhead at each segment boundary of the comparator twice: once as the sleep time of the preceding model, and once more as the wake time of the subsequent model.

In practice the comparator parameters $T$, $N$ and $B$ are unknown. This can be addressed by standard orthogonal techniques. Of particular interest is the method inspired by [SM99, KdR08, Koo11] of changing the Markov transition probabilities as a function of time. It can be shown that by setting $\theta(\mathrm{w}) = 1/2$ and increasing $\theta(\mathrm{w} \mid \mathrm{w})$ and $\theta(\mathrm{s} \mid \mathrm{s})$ as $\exp\big({-\frac{1}{t \ln^2(t+1)}}\big)$, we keep the update time and space of the algorithm at $O(M)$ and guarantee regret bounded for all $T$, $N$ and $B$ as
$$R \;\le\; N \ln\frac{M}{N} + 2N + 2(B-1)\ln T + 4(B-1)\ln\ln(T+1).$$
At no computational overhead, this bound is remarkably close to the fully tuned bound of the theorem above, especially when the number of segments $B$ is modest as a function of $T$.

4 Sparse multitask learning

We transition to an extension of the sequential prediction setup called online multitask learning [ABR07, RAB07, ARB08, LPS09, CCBG10, SRDV11]. The new ingredient is that before predicting outcome $y_t$ we are given its task number $\kappa_t \in \{1, \ldots, K\}$. The goal is to exploit similarities between tasks. As before, we have access to $M$ models that each issue a prediction each round. If a single model predicts well on several tasks we want to figure this out quickly and exploit it. Simply ignoring the task number would not result in an adaptive algorithm. Applying a separate Bayesian predictor to each task independently would not result in any inter-task synergy. Nevertheless, it would guarantee regret at most $K \ln M$ overall. Now suppose each task is predicted well by some model from a small subset of models of size $N \ll M$. Running Bayes on all $N$-sparse allocations would achieve regret $\ln\binom{M}{N} + K \ln N$. However, emulating Bayes in this case is NP-hard [RAB07]. The goal is to design efficient algorithms with approximately the same regret bound. In [ABR07] this multitask problem is reduced to MPP, giving regret bound $N \ln\frac{M}{N} + B \ln N$. Here $B$ is the number of same-task segments in the task sequence $\kappa_{\le T}$. When all outcomes with the same task number are consecutive, i.e. $B = K$, then the desired bound is achieved. However the tasks may be interleaved, making the number of segments $B$ much larger than $K$. We now eliminate the dependence on $B$, i.e. we solve a key open problem of [ABR07]. We apply the method of specialists to multitask learning, and obtain regret bounds close to the information-theoretic ideal, which in particular do not depend on the task segment count $B$ at all.
Construction  We create a subset specialist $(S, m)$ for each base model index $m$ and subset of tasks $S \subseteq \{1, \ldots, K\}$. At time $t$, specialists with the current task $\kappa_t$ in their set $S$ are awake, i.e. $W_t := \{(S, m) \mid \kappa_t \in S\}$, and issue the prediction $\mathbb{P}(y_t \mid y_{<t}, S, m) := P(y_t \mid y_{<t}, m)$ of model $m$. We assign to subset specialist $(S, m)$ prior probability $\mathbb{P}(S, m) := \mathbb{P}(S)\mathbb{P}(m)$, where $\mathbb{P}(m) := 1/M$ is uniform, and $\mathbb{P}(S)$ includes each task independently with some fixed bias $\theta(\mathrm{w})$:
$$\mathbb{P}(S) := \theta(\mathrm{w})^{|S|}\, \theta(\mathrm{s})^{K - |S|}. \tag{7}$$
This construction has the property that the product of prior weights of two loners $(\{\kappa_1\}, \hat m)$ and $(\{\kappa_2\}, \hat m)$ is dramatically lower than the prior weight of the single pair specialist $(\{\kappa_1, \kappa_2\}, \hat m)$, especially so when the number of models $M$ is large or when we consider larger task clusters. By strongly favoring it in the prior, any inter-task similarity present will be picked up fast. The resulting Bayesian strategy involving $M 2^K$ subset specialists can be implemented efficiently.

Theorem 5. The predictions $\mathbb{P}(y_t \mid y_{<t})$ of Bayes with the set prior (7) equal the predictions $\mathrm{Pred}_t(y_t)$ of Algorithm 2. They can be computed in $O(M)$ time per outcome using $O(KM)$ storage.

Of particular interest is Algorithm 2's update rule for $f^{\kappa}_{t+1}(m)$. This would be a regular Bayesian posterior calculation if $v_t(m)$ in $\mathrm{Pred}_t(y_t)$ were replaced by $f^{\kappa}_t(m)$. In fact, $v_t(m)$ is the communication channel by which knowledge about the performance of model $m$ in other tasks is received.

Proof. The resource analysis follows from inspection, noting that the update is fast because only the weights $f^{\kappa}_t(m)$ associated to the current task $\kappa$ are changed. We prove by induction on $t$ that $\mathbb{P}(m \mid y_{<t}, W_t) = v_t(m)$. In the base case $t = 1$ both equal $1/M$. For the induction step we expand $\mathbb{P}(m \mid y_{\le t}, W_{t+1})$, which is by definition proportional to
$$\frac{1}{M} \sum_{S \ni \kappa_{t+1}} \theta(\mathrm{w})^{|S|}\, \theta(\mathrm{s})^{K-|S|} \Bigg( \prod_{q \le t:\ \kappa_q \in S} P(y_q \mid y_{<q}, m) \Bigg) \Bigg( \prod_{q \le t:\ \kappa_q \notin S} \mathbb{P}(y_q \mid y_{<q}) \Bigg). \tag{8}$$
The product form of both set prior and likelihood allows us to factor this exponential sum of products into a product of binary sums. It follows from the induction hypothesis that
$$f_t^k(m) \;=\; \frac{\theta(\mathrm{w})}{\theta(\mathrm{s})} \prod_{q \le t:\ \kappa_q = k} \frac{P(y_q \mid y_{<q}, m)}{\mathbb{P}(y_q \mid y_{<q})}.$$
Then we can divide (8) by $\mathbb{P}(y_{\le t})\, \theta(\mathrm{s})^K$ and reorganize to
$$\mathbb{P}(m \mid y_{\le t}, W_{t+1}) \;\propto\; \frac{1}{M}\, f_t^{\kappa_{t+1}}(m) \prod_{k \ne \kappa_{t+1}} \big(f_t^k(m) + 1\big) \;=\; \frac{1}{M}\, \frac{f_t^{\kappa_{t+1}}(m)}{f_t^{\kappa_{t+1}}(m) + 1} \prod_{k=1}^{K} \big(f_t^k(m) + 1\big).$$
Since the algorithm maintains $\pi_t(m) = \prod_{k=1}^{K} \big(f_t^k(m) + 1\big)$, this is proportional to $v_{t+1}(m)$.

Algorithm 2  Bayes with set prior (7) (for online multitask learning)
  Input: number of tasks $K \ge 2$, distribution $\theta(\cdot)$ on $\{\mathrm{w}, \mathrm{s}\}$.
  Initialize $f_1^k(m) := \frac{\theta(\mathrm{w})}{\theta(\mathrm{s})}$ for each task $k$, and $\pi_1(m) := \prod_{k=1}^{K} \big(f_1^k(m) + 1\big)$.
  for $t = 1, 2, \ldots$ do
    Observe task index $\kappa = \kappa_t$.
    Compute the auxiliary $v_t(m) := \dfrac{f_t^{\kappa}(m)\, \pi_t(m) / \big(f_t^{\kappa}(m) + 1\big)}{\sum_{i=1}^{M} f_t^{\kappa}(i)\, \pi_t(i) / \big(f_t^{\kappa}(i) + 1\big)}$.
    Receive the prediction $P(y_t \mid y_{<t}, m)$ of each model $m$.
    Issue prediction $\mathrm{Pred}_t(y_t) := \sum_{m=1}^{M} P(y_t \mid y_{<t}, m)\, v_t(m)$.
    Observe outcome $y_t$ and suffer loss $-\ln \mathrm{Pred}_t(y_t)$.
    Update $f_{t+1}^{\kappa}(m) := \dfrac{P(y_t \mid y_{<t}, m)}{\mathrm{Pred}_t(y_t)}\, f_t^{\kappa}(m)$, and keep $f_{t+1}^k(m) := f_t^k(m)$ for all $k \ne \kappa$.
    Update $\pi_{t+1}(m) := \dfrac{f_{t+1}^{\kappa}(m) + 1}{f_t^{\kappa}(m) + 1}\, \pi_t(m)$.
  end for
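A numpy transcription of Algorithm 2 is again short; the array layout and function name are our own choices.

```python
import numpy as np

def multitask_bayes(expert_probs, tasks, K, theta_w):
    """Algorithm 2: Bayes with the set prior (7) for online multitask learning.

    expert_probs: (T, M) array of model probabilities for the realized outcomes.
    tasks:        length-T array of task indices in {0, ..., K-1}.
    theta_w:      inclusion rate theta(w); Theorem 6 below tunes this to 1/N.
    Returns the per-round mixture predictions Pred_t(y_t).
    """
    T, M = expert_probs.shape
    f = np.full((K, M), theta_w / (1.0 - theta_w))  # f_1^k(m) = theta(w)/theta(s)
    pi = np.prod(f + 1.0, axis=0)                   # pi_1(m) = prod_k (f^k(m) + 1)
    preds = np.empty(T)
    for t in range(T):
        k = tasks[t]
        score = f[k] * pi / (f[k] + 1.0)            # unnormalized v_t(m)
        v = score / score.sum()
        pred = v @ expert_probs[t]
        preds[t] = pred
        f_new = f[k] * expert_probs[t] / pred       # Bayes update, current task only
        pi *= (f_new + 1.0) / (f[k] + 1.0)          # maintain the product incrementally
        f[k] = f_new
    return preds
```

Note how `v` couples the tasks: a model whose $f$-weights are large on other tasks has a large product $\pi_t(m)$, so it starts with a head start on a fresh task.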
The Bayesian strategy is hence emulated fast by Algorithm 2. We now show it predicts well.

Theorem 6. Let $\hat m_1, \ldots, \hat m_K$ be an $N$-sparse allocation of $M$ models to $K$ tasks. With tuned inclusion rate $\theta(\mathrm{w}) = 1/N$, the regret of Bayes (Algorithm 2) is bounded by
$$R \;\le\; N \ln(M/N) + K N\, H(1/N).$$

Proof. Without loss of generality assume that $\hat m_k \in \{1, \ldots, N\}$. Let $S_n := \{1 \le k \le K \mid \hat m_k = n\}$. The sets $S_n$ for $n = 1, \ldots, N$ form a partition of the $K$ tasks. By (7), $\prod_{n=1}^{N} \mathbb{P}(S_n) = \theta(\mathrm{w})^{K}\, \theta(\mathrm{s})^{(N-1)K}$, which is maximized by the proposed tuning. The theorem now follows from (3).

We achieve the desired goal since $K N\, H(1/N) \approx K \ln N$. In practice $N$ is of course unavailable for tuning, and we may tune $\theta(\mathrm{w}) = 1/K$ pessimistically to get $K \ln K + N$ instead for all $N$ simultaneously. Or alternatively, we may sacrifice some time efficiency to externally mix over all $M$ possible values with decreasing prior, increasing the tuned regret by just $\ln N + O(\ln\ln N)$. If in addition the number of tasks is unknown or unbounded, we may (as done in Section 3.2) decrease the membership rate $\theta(\mathrm{w})$ with each new task encountered and guarantee regret
$$R \;\le\; N \ln(M/N) + K \ln K + 4N + 2K \ln\ln K,$$
where now $K$ is the number of tasks actually received.

5 Discussion

We showed that Mixing Past Posteriors is not just a heuristic with an unusual regret bound: we gave it a full Bayesian interpretation using specialist models. We then applied our method to a multitask problem. Again an unusual algorithm resulted that exploits sparsity by pulling up the weights of models that have done well before in other tasks. In other words, if all tasks are well predicted by a small subset of base models, then this algorithm improves its prior over models as it learns from previous tasks. Both algorithms closely circumvent NP-hardness.

The deep question is whether some of the common updates used in Nature can be brought into the Bayesian fold using the specialist mechanism. There are a large number of more immediate technical open problems (we only discuss a few). We presented our results using probabilities and log loss. However the bounds should easily carry over to the typical pseudo-likelihoods employed in online learning in connection with other loss functions. Next, it would be worthwhile to investigate for which infinite sets of models we can still employ our updates implicitly. It was already shown in [KvE10, Koo11] that MPP can be efficiently emulated on all Bernoulli models. However, what about Gaussians, exponential families in general, or even linear regression? Finally, is there a Bayesian method for modeling concurrent multitasking, i.e. can the Bayesian analysis be generalized to the case where a small subset of models solve many tasks in parallel?

References

[ABR07] Jacob Duncan Abernethy, Peter Bartlett, and Alexander Rakhlin. Multitask learning with expert advice. Technical report, University of California at Berkeley, January 2007.
[ARB08] Alekh Agarwal, Alexander Rakhlin, and Peter Bartlett. Matrix regularization techniques for online multitask learning, October 2008.
[BW02] Olivier Bousquet and Manfred K. Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3:363–396, 2002.
[CBGLS12] Nicolò Cesa-Bianchi, Pierre Gaillard, Gábor Lugosi, and Gilles Stoltz. A new look at shifting regret. CoRR, abs/1202.3323, 2012.
[CCBG10] Giovanni Cavallanti, Nicolò Cesa-Bianchi, and Claudio Gentile. Linear algorithms for online multitask classification. J. Mach. Learn. Res., 11:2901–2934, December 2010.
[CKZV10] Alexey Chernov, Yuri Kalnishkan, Fedor Zhdanov, and Vladimir Vovk. Supermartingales in prediction with expert advice. Theor. Comput. Sci., 411(29-30):2647–2669, June 2010.
[CV09] Alexey Chernov and Vladimir Vovk. Prediction with expert evaluators' advice. In Proceedings of the 20th International Conference on Algorithmic Learning Theory, ALT'09, pages 8–22, Berlin, Heidelberg, 2009. Springer-Verlag.
[FSSW97] Y. Freund, R. E. Schapire, Y. Singer, and M. K. Warmuth. Using and combining predictors that specialize. In Proc. 29th Annual ACM Symposium on Theory of Computing, pages 334–343. ACM, 1997.
[GWBA02] Robert B. Gramacy, Manfred K. Warmuth, Scott A. Brandt, and Ismail Ari. Adaptive caching by refetching. In Suzanna Becker, Sebastian Thrun, and Klaus Obermayer, editors, NIPS, pages 1465–1472. MIT Press, 2002.
[HLSS00] David P. Helmbold, Darrell D. E. Long, Tracey L. Sconyers, and Bruce Sherrod. Adaptive disk spin-down for mobile computers. ACM/Baltzer Mobile Networks and Applications (MONET), pages 285–297, 2000.
[HW98] Mark Herbster and Manfred K. Warmuth. Tracking the best expert. Machine Learning, 32:151–178, 1998.
[KdR08] Wouter M. Koolen and Steven de Rooij. Combining expert advice efficiently. In Rocco Servedio and Tong Zang, editors, Proceedings of the 21st Annual Conference on Learning Theory (COLT 2008), pages 275–286, June 2008.
[Koo11] Wouter M. Koolen. Combining Strategies Efficiently: High-quality Decisions from Conflicting Advice. PhD thesis, Institute of Logic, Language and Computation (ILLC), University of Amsterdam, January 2011.
[KvE10] Wouter M. Koolen and Tim van Erven. Freezing and sleeping: Tracking experts that learn by evolving past posteriors. CoRR, abs/1008.4654, 2010.
[LPS09] Gábor Lugosi, Omiros Papaspiliopoulos, and Gilles Stoltz. Online multi-task learning with hard constraints. In COLT, 2009.
[RAB07] Alexander Rakhlin, Jacob Abernethy, and Peter L. Bartlett. Online discovery of similarity mappings. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pages 767–774, New York, NY, USA, 2007. ACM.
[SM99] Gil I. Shamir and Neri Merhav. Low complexity sequential lossless coding for piecewise stationary memoryless sources. IEEE Trans. Info. Theory, 45:1498–1519, 1999.
[SRDV11] Avishek Saha, Piyush Rai, Hal Daumé III, and Suresh Venkatasubramanian. Online learning of multiple tasks and their relationships. In AISTATS, Ft. Lauderdale, Florida, 2011.
[VW98] Paul A. J. Volf and Frans M. J. Willems. Switching between two universal source coding algorithms. In Proceedings of the Data Compression Conference, Snowbird, Utah, pages 491–500, 1998.
A Polynomial-time Form of Robust Regression

Yaoliang Yu, Özlem Aslan and Dale Schuurmans
Department of Computing Science, University of Alberta, Edmonton AB T6G 2E8, Canada
{yaoliang,ozlem,dale}@cs.ualberta.ca

Abstract

Despite the variety of robust regression methods that have been developed, current regression formulations are either NP-hard, or allow unbounded response to even a single leverage point. We present a general formulation for robust regression (Variational M-estimation) that unifies a number of robust regression methods while allowing a tractable approximation strategy. We develop an estimator that requires only polynomial-time, while achieving certain robustness and consistency guarantees. An experimental evaluation demonstrates the effectiveness of the new estimation approach compared to standard methods.

1 Introduction

It is well known that outliers have a detrimental effect on standard regression estimators. Even a single erroneous observation can arbitrarily affect the estimates produced by methods such as least squares. Unfortunately, outliers are prevalent in modern data analysis, as large data sets are automatically gathered without the benefit of manual oversight. Thus the need for regression estimators that are both scalable and robust is increasing.

Although the field of robust regression is well established, it has not considered computational complexity analysis to be one of its central concerns. Consequently, none of the standard regression estimators in the literature are both robust and tractable, even in a weak sense: it has been shown that standard robust regression formulations with non-zero breakdown are NP-hard [1, 2], while any estimator based on minimizing a convex loss cannot guarantee bounded response to even a single leverage point [3] (definitions given below). Surprisingly, there remain no standard regression formulations that guarantee both polynomial run-time with bounded response to even single outliers.

It is important to note that robustness and tractability can be achieved under restricted conditions. For example, if the domain is bounded, then any estimator based on minimizing a convex and Lipschitz-continuous loss achieves high breakdown [4]. Such results have been extended to kernel-based regression under the analogous assumption of a bounded kernel [5, 6]. Unfortunately, these results can no longer hold when the domain or kernel is unbounded: in such a case arbitrary leverage can occur [4, 7] and no (non-constant) convex loss, even Lipschitz-continuous, can ensure robustness against even a single outlier [3]. Our main motivation therefore is to extend these existing results to the case of an unbounded domain. Unfortunately, the inapplicability of convex losses in this situation means that computational tractability becomes a major challenge, and new computational strategies are required to achieve tractable robust estimators.

The main contribution of this paper is to develop a new robust regression strategy that can guarantee both polynomial run-time and bounded response to individual outliers, including leverage points. Although such an achievement is modest, it is based on two developments of interest. The first is a general formulation of adaptive M-estimation, Variational M-estimation, that unifies a number of robust regression formulations, including convex and bounded M-estimators with certain subset-selection estimators such as Least Trimmed Loss [7].
By incorporating Tikhonov regularization, these estimators can be extended to reproducing kernel Hilbert spaces (RKHSs). The second development is a convex relaxation scheme that ensures bounded outlier influence on the final estimator. The overall estimation procedure is guaranteed to be tractable, robust to single outliers with unbounded leverage, and consistent under non-trivial conditions. An experimental evaluation of the proposed estimator demonstrates effective performance compared to standard robust estimators.

The closest previous works are [8], which formulated variational representations of certain robust losses, and [9], which formulated a convex relaxation of bounded loss minimization. Unfortunately, [8] did not offer a general characterization, while [9] did not prove that their final estimator was robust, nor was any form of consistency established. The formulation we present in this paper generalizes [8], while the convex relaxation scheme we propose is simpler and tighter than [9]; we are thus able to establish non-trivial forms of both robustness and consistency while maintaining tractability.

There are many other notions of "robust" estimation in the machine learning literature that do not correspond to the specific notion being addressed in this paper. Work on "robust optimization" [10–12], for example, considers minimizing the worst case loss achieved given bounds on the maximum data deviation that will be considered. Such results are not relevant to the present investigation because we explicitly do not bound the magnitude of the outliers. Another notion of robustness is algorithmic stability under leave-one-out perturbation [13], which analyzes specific learning procedures rather than describing how a stable algorithm might be generally achieved.

2 Preliminaries

We start by considering the standard linear regression model
$$y = x^\top \theta^* + u, \tag{1}$$
where $x$ is an $\mathbb{R}^p$-valued random variable, $u$ is a real-valued random noise term, and $\theta^* \in \Theta \subseteq \mathbb{R}^p$ is an unknown deterministic parameter vector. Assume we are given a sample of $n$ independent identically distributed (i.i.d.) observations represented by an $n \times p$ matrix $X$ and an $n \times 1$ vector $y$, where each row $X_{i:}$ is drawn from some unknown marginal probability measure $P_x$, and $y_i$ are generated according to (1). Our task is to estimate the unknown deterministic parameter $\theta^*$. Clearly, this is a well-studied problem in statistics and machine learning. If the noise distribution has a known density $p(\cdot)$, then a standard estimator is given by maximum likelihood
$$\hat\theta_{\mathrm{ML}} \in \arg\min_{\theta \in \Theta} \frac{1}{n} \sum_{i=1}^{n} -\log p(y_i - X_{i:}\theta) \;=\; \arg\min_{\theta \in \Theta} \frac{1}{n} \sum_{i=1}^{n} -\log p(r_i), \tag{2}$$
where $r_i = y_i - X_{i:}\theta$ is the $i$th residual. When the noise distribution is unknown, one can replace the negative log-likelihood with a loss function $\rho(\cdot)$ and use the estimator
$$\hat\theta_{\mathrm{M}} \in \arg\min_{\theta \in \Theta} \frac{1}{n} \mathbf{1}^\top \rho(y - X\theta), \tag{3}$$
where $\rho(r)$ denotes the vector of losses obtained by applying the loss componentwise to each residual, hence $\mathbf{1}^\top \rho(r) = \sum_{i=1}^{n} \rho(r_i)$. Such a procedure is known as M-estimation in the robust statistics literature, and empirical risk minimization in the machine learning literature.¹ Although uncommon in robust regression, it is conventional in machine learning to include a regularizer. In particular we will use Tikhonov ("ridge") regularization by adding a squared penalty
$$\hat\theta_{\mathrm{MR}} \in \arg\min_{\theta \in \Theta} \frac{1}{n} \mathbf{1}^\top \rho(y - X\theta) + \frac{\lambda}{2} \|\theta\|_2^2 \qquad \text{for } \lambda \ge 0. \tag{4}$$
The significance of Tikhonov regularization is that it ensures $\hat\theta_{\mathrm{MR}} = X^\top \alpha$ for some $\alpha \in \mathbb{R}^n$ [14].

¹ Generally one has to introduce an additional scale parameter $\sigma$ and allow rescaling of the residuals via $r_i/\sigma$, to preserve parameter equivariance [3, 4]. However, we will initially assume a known scale.
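Before moving to the RKHS setting, here is a small sketch of how an estimator of the form (4) is typically computed when $\rho$ is convex. We use Huber's loss and iteratively reweighted least squares (IRLS), a standard device rather than a construction from this paper; the function name and defaults are illustrative.

```python
import numpy as np

def huber_m_ridge(X, y, lam=0.1, delta=1.0, iters=50):
    """Tikhonov-regularized M-estimation (4) with Huber's loss via IRLS.

    Minimizes (1/n) sum_i rho(y_i - X_{i:} theta) + (lam/2) ||theta||^2,
    where rho is the Huber loss with threshold delta.
    """
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(iters):
        r = y - X @ theta
        # Huber weights psi(r)/r: quadratic region gets 1, tails get delta/|r|.
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.maximum(np.abs(r), 1e-12))
        W = np.diag(w / n)
        # Weighted ridge normal equations: (X'WX + lam I) theta = X'W y.
        theta = np.linalg.solve(X.T @ W @ X + lam * np.eye(p), X.T @ W @ y)
    return theta
```

Since the Huber objective is convex, each IRLS step is a simple $p \times p$ solve and the iteration converges to the global minimizer; this is the kind of tractability that, as the next sections explain, convex losses buy at the price of robustness.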
More generally, under Tikhonov regularization, the regression problem can be conveniently expressed in a reproducing kernel Hilbert space (RKHS). If we let H denote the RKHS corresponding to positive semidefinite kernel ? : X ? X ? R, then f (x) = h?(x, ?), f iH for any f ? H by the reproducing property [14, 15]. We consider the generalized regression model y = f ? (x) + u (5) ? where x is an X -valued random variable, u is a real-valued random noise term as above, and f ? H is an unknown deterministic function. Given a sample of n i.i.d. observations (x1 , y1 ), ..., (xn , yn ), 1 Generally one has to introduce an additional scale parameter ? and allow rescaling of the residuals via ri /?, to preserve parameter equivariance [3, 4]. However, we will initially assume a known scale. 2 where each xi is drawn from some unknown marginal probability measure Px , and yi are generated according to (5),2 the task is then to estimate the unknown deterministic function f ? ? H. To do so we can express the estimator (4) more generally as Pn f?MR ? arg min 1 ?(yi ? f (xi )) + ? kf k2 . (6) f ?H n i=1 2 H Pn By the representer theorem [14], the solution to (6) can be expressed by f?MR (x) = i=1 ?i ?(xi , x) for some ? ? Rn , and therefore (6) can be recovered by solving the finite dimensional problem ? MR ? arg min n1 1T ?(y ? K?) + ?2 ?T K? such that Kij = ?(xi , xj ). (7) ? ? Our interest is understanding the tractability, robustness and consistency aspects of such estimators. Consistency: Much is known about the consistency properties of estimators expressed as regularized empirical risk minimizers. For example, the ML-estimator (2) and the M -estimator (3) are both known to be parameter consistent under general conditions [16].3 The regularized M -estimator in RKHSs (6), is loss consistent under some general assumptions on the kernel, loss and training distribution.4 Furthermore, a weak form of f -consistency has also established in [6]. For bounded kernel and bounded Lipschitz losses, one can similarly prove the loss consistency of the regularized M -estimator (6) (in RKHS). See Appendix C.1 of the supplement for more discussion. Generally speaking, any estimator that can be expressed as a regularized empirical loss minimization is consistent under ?reasonable? conditions. That is, one can consider regularized loss minimization to be a (generally) sound principle for formulating regression estimators, at least from the perspective of consistency. However, this is no longer the case when we consider robustness and tractability; here sharp distinctions begin to arise within this class of estimators. Robustness: Although robustness is an intuitive notion, it has not been given a unique technical definition in the literature. Several definitions have been proposed, with distinct advantages and disadvantages [4]. Some standard definitions consider the asymptotic invariance of estimators to an infinitesimal but arbitrary perturbation of the underlying distribution, e.g. the influence function [4, 17]. Although these analyses can be useful, we will focus on finite sample notions of robustness since these are most related to concerns of computational tractability. In particular, we focus on the following definition related to the finite sample breakdown point [18, 19]. Definition 1 (Bounded Response). Assuming the parameter set ? is metrizable, an estimator has bounded response if for any finite data sample its output remains in a bounded interior subset of the closed parameter set ? 
(or respectively H), no matter how a single observation pair is perturbed. This is a much weaker definition than having a non-zero breakdown point: a breakdown of  requires that bounded response be guaranteed when any  fraction of the data is perturbed arbitrarily. Bounded response is obviously a far more modest requirement. However, importantly, the definition of bounded response allows the possibility of arbitrary leverage; that is, no bound is imposed on the magnitude of a perturbed input (i.e. kx1 k ? ? or ?(x1 , x1 ) ? ?). Surprisingly, we find that even such a weak robustness property is difficult to achieve while retaining computational tractability. Computational Dilemma: The goals of robustness and computational tractability raise a dilemma: it is easy to achieve robustness (i.e. bounded response) or tractability (i.e. polynomial run-time) in a consistent estimator, but apparently not both. Consider, for example, using a convex loss function. These are the best known class of functions that admit computationally efficient polynomial-time minimization [20] (see also [21] ). It is sufficient that the objective be polynomial-time evaluable, along with its first and second derivatives, 2 We are obviously assuming X is equipped with an appropriate ?-algebra, and R with the standard Borel ?-algebra, such that the joint distribution P over X ? R is well defined and ?(?, ?) is measurable. P 3 T T In particular, let Mn (?) = n1 n i=1 ?(yi ? xi ?), let M (?) = E(?(y1 ? x1 ?)), and equip the parameter (n) space ? with the uniform metric k ? k? . Then ??M ? ? ? , provided kMn ? M k? ? 0 in outer probability (adopted to avoid measurability issues) and M (? ? ) > sup??G M (?) for every open set G that contains ? ? . The latter assumption is satisfied in particular when M : ? 7? R is upper semicontinuous with a unique maximum at ? ? . It is also possible to derive asymptotic convergence rates for general M -estimators [16]. P 4 ? ? Specifically, let ?? = inf f ?H E[?(y1 ? f (x1 ))]. Then [6] showed that n1 n i=1 ?(yi ? fMR (xi )) ? ? 2 provided the regularization constant ?n ? 0 and ?n n ? ?, the loss ? is convex and Lipschitz-continuous, and the RKHS H (induced by some bounded measurable kernel ?) is separable and dense in L1 (P) (the space of P-integrable functions) for all distributions P on X . Also, Y ? R is required to be closed where y ? Y. 3 and that the objective be self-concordant [20].5 Since a Tikhonov regularizer is automatically selfconcordant, the minimization problems outlined above can all be solved in polynomial time with Newton-type algorithms, provided ?(r), ?0 (r), and ?00 (r) can all be evaluated in polynomial time for a self-concordant ? [22, Ch.9]. Standard loss functions, such as squared error or Huber?s loss satisfy these conditions, hence the corresponding estimators are polynomial-time. Unfortunately, loss minimization with a (non-constant) convex loss yields unbounded response to even a single outlier [3, Ch.5]. We extend this result to also account for regularization and RKHSs. Theorem 1. Empirical risk minimization based on a (non-constant) convex loss cannot have bounded response if the domain (or kernel) is unbounded, even under Tikhonov regularization. (Proof given in Appendix B of the supplement.) By contrast, consider the case of a (non-constant) bounded loss function.6 Bounded loss functions are a common choice in robust regression because they not only ensure bounded response, trivially, they can also ensure a high breakdown point of (n ? p)/(2n) [3, Ch.5]. 
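The contrast between the two regimes is easy to see numerically. The sketch below, with illustrative data and constants of our own choosing (not from the paper), fits a one-parameter regression containing a single gross, high-leverage outlier: the convex (squared) loss is pulled far from the true slope, while IRLS on the bounded Geman-McClure loss effectively ignores the outlier, at the price of non-convexity, so its answer depends on initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0.5, 1.5, n)
y = 2.0 * x + 0.1 * rng.standard_normal(n)   # true slope is 2

def ridge_fit(x, y, lam=1e-3):
    # Convex squared loss with Tikhonov regularization: closed form.
    return (x @ y) / (x @ x + lam * len(x))

def gm_fit(x, y, lam=1e-3, theta0=0.0, n_iter=500):
    # Bounded Geman-McClure loss rho(r) = r^2/(1+r^2), minimized by IRLS.
    # Non-convex, so the result depends on the starting point theta0.
    theta = theta0
    for _ in range(n_iter):
        r = y - theta * x
        w = 1.0 / (1.0 + r ** 2) ** 2        # IRLS weight rho'(r)/(2r)
        theta = (w * x) @ y / ((w * x) @ x + lam * len(x))
    return theta

xc, yc = x.copy(), y.copy()
xc[0], yc[0] = 1e4, -1e6                     # one outlier with huge leverage
print("clean ridge:       ", ridge_fit(x, y))    # about 2.0
print("contaminated ridge:", ridge_fit(xc, yc))  # dragged to roughly -100
print("contaminated GM:   ", gm_fit(xc, yc))     # stays near 2.0
```

Restarting gm_fit from a bad initializer (for instance, the contaminated ridge solution) can converge to a poor local minimum that fits the outlier instead: this is the computational side of the dilemma.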
Unfortunately, estimators based on bounded losses are inherently intractable.

Theorem 2. Bounded (non-constant) loss minimization is NP-hard. (Proof given in Appendix E.)

These difficulties with empirical risk minimization have led the field of robust statistics to develop a variety of alternative estimators [4, Ch.7]. For example, [7] recommends subset-selection based regression estimators, such as Least Trimmed Loss:

θ̂_LTL ∈ arg min_{θ∈Θ} Σ_{i=1}^{n₀} ρ(r_{[i]}).    (8)

Here r_{[i]} denotes the sorted residuals r_{[1]} ≤ ··· ≤ r_{[n]}, and n₀ < n is the number of terms to consider. Traditionally ρ(r) = r² is used. These estimators are known to have high breakdown [7] (when n₀ approaches n/2, the breakdown of (8) approaches 1/2 [7]), and obviously demonstrate bounded response to single outliers. Unfortunately, (8) is NP-hard [1].

3 Variational M-estimation

To address the dilemma, we first adopt a general form of adaptive M-estimation that offers flexibility while admitting a tractable approximation strategy. The key construction is a variational representation of M-estimation that can express a number of standard robust (and non-robust) methods in a common framework. In particular, consider the following adaptive form of loss function:

ρ(r) = min_{0≤η≤1} η ℓ(r) + ψ(η),    (9)

where r is a residual value, ℓ is a closed convex base loss, η is an adaptive weight on the base loss, and ψ is a convex auxiliary function. The weight can choose to ignore the base loss if ℓ(r) is large, but this is balanced against a prior penalty ψ(η). Different choices of base loss and auxiliary function will yield different results, and one can represent a wide variety of loss functions ρ in this way [8]. For example, any convex loss ρ can be trivially represented in the form (9) by setting ℓ = ρ and ψ(η) = δ_{1}(η), where δ_C(·) denotes the indicator for the point set C; i.e., δ_C(η) = 0 if η ∈ C, otherwise δ_C(η) = ∞. Bounded loss functions can also be represented in this way, for example:

(Geman-McClure) [8]:  ρ(r) = r²/(1+r²),  with ℓ(r) = r² and ψ(η) = (√η − 1)²    (10)
(Geman-Reynolds) [8]:  ρ(r) = |r|/(1+|r|),  with ℓ(r) = |r| and ψ(η) = (√η − 1)²    (11)
(LeClerc) [8]:  ρ(r) = 1 − exp(−ℓ(r)),  with ℓ convex and ψ(η) = η log η − η + 1    (12)
(Clipped loss) [9]:  ρ(r) = min(1, ℓ(r)),  with ℓ convex and ψ(η) = 1 − η.    (13)

Appendix D in the supplement demonstrates how one can represent general functions ρ in the form (9), not just specific examples, significantly extending [8] with a general characterization. (Recall that a function f is self-concordant if |f′′′(r)| ≤ 2 f′′(r)^{3/2}; see e.g. [22, Ch.9]; and note that a bounded function obviously cannot be convex over an unbounded domain unless it is constant.)

Therefore, all of the previous forms of regularized empirical risk minimization, whether with a convex or bounded loss ρ, can be easily expressed using only convex base losses ℓ and convex auxiliary functions ψ, as follows:

θ̂_VM ∈ arg min_{θ∈Θ} min_{0≤η≤1} η^T ℓ(y − Xθ) + 1^T ψ(η) + (λ/2) ‖η‖₁ ‖θ‖₂²    (14)
f̂_VM ∈ arg min_{f∈H} min_{0≤η≤1} Σ_{i=1}^n {η_i ℓ(y_i − f(x_i)) + ψ(η_i)} + (λ/2) ‖η‖₁ ‖f‖²_H    (15)
α̂_VM ∈ arg min_α min_{0≤η≤1} η^T ℓ(y − Kα) + 1^T ψ(η) + (λ/2) ‖η‖₁ α^T Kα.    (16)

Note that we have scaled the regularizer by ‖η‖₁, which increases robustness by encouraging the η weights to prefer small values (but adaptively increase on indices with small loss). This particular form of regularization has two advantages: (i) it is a smooth function of η on 0 ≤ η ≤ 1 (since ‖η‖₁ = 1^T η in this case), and (ii) it enables a tight convex approximation strategy, as we will see below.
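As a quick numerical sanity check (ours, not from the paper) that (9) really recovers a bounded loss, the snippet below verifies the Geman-McClure identity (10) by brute-force minimization over η on a grid. The inner minimizer works out to η* = 1/(1+r²)², so large residuals are automatically down-weighted.

```python
import numpy as np

def rho_gm(r):
    # Bounded Geman-McClure loss: the closed form on the left of (10).
    return r ** 2 / (1.0 + r ** 2)

def rho_variational(r, grid=np.linspace(0.0, 1.0, 100001)):
    # min over eta in [0,1] of  eta * l(r) + psi(eta)
    # with l(r) = r^2 and psi(eta) = (sqrt(eta) - 1)^2, as in (9)-(10).
    return np.min(grid * r ** 2 + (np.sqrt(grid) - 1.0) ** 2)

for r in [0.0, 0.5, 1.0, 3.0, 10.0]:
    print(f"r={r:5.1f}  closed form {rho_gm(r):.6f}  variational {rho_variational(r):.6f}")
    # The two columns agree; the minimizing eta* = 1/(1+r^2)^2 shrinks to 0
    # as |r| grows, which is how the representation bounds the loss.
```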
Note that other forms of robust regression can be expressed in a similar framework. For example, generalized M-estimation (GM-estimation) can be formulated simply by forcing each ?i to take on a specific value determined by kxi k or ri [7], ignoring the auxilary function ?. Least Trimmed Loss (8) can be expressed in the form (9) provided only that we add a shared constraint over ?: ??LT L ? arg min min ??? 0???1:1T ?=n0 ? T `(r) + ?(?) (17) where ?(?i ) = 1 ? ?i and n0 < n specifies the number of terms to consider in the sum of losses. Since ? ? {0, 1}n at a solution (see e.g. [9]), (17) is equivalent to (8) if ? is the clipped loss (13). These formulations are all convex in the parameters given the auxiliary weights, and vice versa. However, they are not jointly convex in the optimization variables (i.e. in ? and ?, or in ? and ?). Therefore, one is not assured that the problems (14)?(16) have only global minima; in fact local minima exist and global minima cannot be easily found (or even verified). 4 Computationally Efficient Approximation We present a general approximation strategy for the variational regression estimators above that can guarantee polynomial run-time while ensuring certain robustness and consistency properties. The approximation is significantly tighter than the existing work [9], which allows us to achieve stronger guarantees while providing better empirical performance. In developing our estimator we follow standard methodology from combinatorial optimization: given an intractable optimization problem, first formulate a (hopefully tight) convex relaxation that provides a lower bound on the objective, then round the relaxed minimizer back to the feasible space, hopefully verifying that the rounded solution preserves desirable properties, and finally re-optimize the rounded solution to refine the result; see e.g. [23]. To maintain generality, we formulate the approximate estimator in the RKHS setting. Consider (16). Although the problem is obviously convex in ? given ?, and vice versa, it is not jointly convex (recall the assumption that ` and ? are both convex functions). This suggests that an obvious computational strategy for computing the estimator (16) is to alternate between ? and ? optimizations (or use heuristic methods [2]), but this cannot guarantee anything other than local solutions (and thus may not even achieve any of the desired theoretical properties associated with the estimator). Reformulation: We first need to reformulate the problem to allow a tight relaxation. Let ?(?) denote putting a vector ? on the main diagonal of a square matrix, and let ? denote componentwise multiplication. Since ` is closed and convex by assumption, we know that `(r) = sup? ?r ? ?`? (?), where `? is the Fenchel conjugate of ` [22]. This allows (16) to be reformulated as follows. Lemma 1. min min ? T `(y ? K?) + 1T ?(?) + ?2 k?k1 ?T K? (18) 0???1 ?  1 T T = min sup 1T ?(?) ? ? T (`? (?) ? ?(y)?) ? 2? ? K ? (?k?k?1 (19) 1 ? ) ?, 0???1 ? where the function evaluations are componentwise. (Proof given in Appendix A of the supplement.) Although no relaxation has been introduced, the new form (25) has a more convenient structure. 5 T Relaxation: Let N = ?k?k?1 1 ? and note that, since 0 ? ? ? 1, N must satisfy a number of useful properties. We can summarize these by formulating a constraint set N ? N? given by: N? = {N : N < 0, N 1 = ?, rank(N ) = 1} (20) M? = {M : M < 0, M 1 = ?, tr(M ) ? 1}. (21) Unfortunately, the set N? is not convex because of the rank constraint. 
However, relaxing this constraint leads to a set M? ? N? which preserves much of the key structure, as we verify below. ? 1 (22) Lemma 2. (25) = min min sup 1T ?(?) ? ? T (` (?) ? ?(y)?) ? 2? ? T (K ? N ) ? 0???1 N ?N? ? min ? min sup 1T ?(?) ? ? T (`? (?) ? ?(y)?) ? 0???1 M ?M? ? 1 T 2? ? (K ? M ) ?. (23) using the fact that N? ? M? . (Proof given in Appendix A of the supplement.) Crucially, the constraint set {(?, M ) : 0 ? ? ? 1, M ? M? } is jointly convex in ? and M , thus (35) is a convex-concave min-max problem. To see why, note that the inner objective function is jointly convex in ? and M , and concave in ?. Since a pointwise maximum of convex functions is convex, the problem is convex in (?, M ) [22, Ch.3]. We conclude that all local minima in (?, M ) are global. Therefore, (35) provides the foundation for an efficiently solvable relaxation. Rounding: Unfortunately the solution to M in (35) does not allow direct recovery of an estimator ? achieving the same objective value in (24), unless M satisfies rank(M ) = 1. In general we first need to round M to a rank 1 solution. Fortunately, a trivial rounding procedure is available: we simply use ? (ignoring M ) and re-solve for ? in (24). This is equivalent to replacing M with the ? = ?k?k?1 ? T ? N? , which restores feasibility in the original problem. Of course, rank 1 matrix N 1 such a rounding step will generally increase the objective value. Reoptimization: Finally, the rounded solution can be locally improved by alternating between ? ? and ? updates in (24) (or using any other local optimization method), yielding the final estimate ?. 5 Properties Although a tight a priori bound on the size of the optimality gap is difficult to achieve, a rigorous bound on the optimality gap can be recovered post hoc once the re-optimized estimator is computed. Let R0 denote the minimum value of (24) (not efficiently computable); let R1 denote the minimum value of (35) (the relaxed solution); let R2 denote the value of (24) achieved by freezing ? from the relaxed solution but re-optimizing ? (the rounded solution); and finally let R3 denote the value of (24) achieved by re-optimizing ? and ? from the rounded solution (the re-optimized solution). Clearly we have the relationships R1 ? R0 ? R3 ? R2 . An upper bound on the relative optimality gap of the final solution (R3 ) can be determined by (R3 ? R0 )/R3 ? (R3 ? R1 )/R3 , since R1 and R3 are both known quantities. Tractability: Under mild assumptions on ` and ?, computation of the approximate estimator (solving the relaxed problem, rounding, then re-optimizing) admits a polynomial-time solution; see Appendix E in the supplement. (Appendix E also provides details for an efficient implementation for solving (35).) Once ? is recovered from the relaxed solution, the subsequent optimizations of (24) can be solved efficiently under weak assumptions about ` and ?; namely that they both satisfy the self-concordance and polynomial-time computation properties discussed in Section 2. Robustness: Despite the approximation, the relaxation remains sufficiently tight to preserve some of the robustness properties of bounded loss minimization. To establish the robustness (and consistency) properties, we will need to make use of a specific technical definition of outliers and inliers. Definition 2 (Outliers and Inliers). For an L-Lipschitz loss `, an outlier is a point (xi , yi ) that satisfies `(yi ) > L2 Kii /(2?) ? ? 0 (0), while an inlier satisfies `(yi ) + L2 Kii /(2?) < ?? 0 (1). Theorem 3. Assume the loss ? 
is bounded and has a variational representation (9) such that ℓ is Lipschitz-continuous and ψ′ is bounded. Also assume there is at least one (unperturbed) inlier, and consider the perturbation of a single data point (y₁, x₁). Under the following conditions, the rounded (re-optimized) estimator maintains bounded response: (i) if either y₁ remains bounded, or κ(x₁, x₁) remains bounded; (ii) if |y₁| → ∞, κ(x₁, x₁) → ∞ and ℓ(y₁)/κ(x₁, x₁) → ∞. (Proof given in Appendix B of the supplement.)

Note that the latter condition causes any convex loss ℓ to demonstrate unbounded response (see proof of Theorem 5 in Appendix B). Therefore, the approximate estimator is strictly more robust (in terms of bounded response) than regularized empirical risk minimization with a convex loss ℓ.

Consistency: Finally, we can establish consistency of the approximate estimator in a limited albeit non-trivial setting, although we have yet to establish it generally.

Theorem 4. Assume ℓ is Lipschitz-continuous and ψ(η) = 1 − η. Assume that the data is generated from a mixture of inliers and outliers, where P(inlier) > P(outlier). Then the estimate θ̂ produced by the rounded (re-optimized) method is loss consistent. (Proof given in Appendix C.2.)

6 Experimental Evaluation

We conducted a set of experiments to evaluate the effectiveness of the proposed method compared to standard methods from the literature. Our experimental evaluation was conducted in two parts: first a synthetic experiment where we could control data generation, then an experiment on real data.

The first synthetic experiment was conducted as follows. A target weight vector θ was drawn from N(0, I), with X_{i:} sampled uniformly from [0,1]^m, m = 5, and outputs y_i computed as y_i = X_{i:}θ + ε_i, ε_i ∼ N(0, (1/2)²). We then seeded the data set with outliers by randomly re-sampling each y_i and X_{i:} from N(0, 10⁸) and N(0, 10⁴) respectively, governed by an outlier probability p. Then we randomly sampled 100 points as the training set, and another 10000 samples were used for testing. We implemented the proposed method with two different base losses, L2 and L1, referring to these as CvxBndL2 and CvxBndL1. We compared to standard L2 and L1 loss minimization, as well as minimizing the Huber minimax loss (Huber) [4]. We also considered standard methods from the robust statistics literature, including the least trimmed squares method (LTS) [7, 24], and bounded loss minimization based on the Geman-McClure loss (GemMc) [8]. Finally, we also compared to the alternating minimization strategies outlined at the end of Section 3 (AltBndL2 and AltBndL1 for the L2 and L1 losses respectively), and implemented the strategy described in [9]. We added Tikhonov regularization to each method, and the regularization parameter λ was selected (optimally for each method) on a separate validation set. Note that LTS has an extra parameter n₀, the number of inliers; the ideal setting n₀ = (1 − p)n was granted to LTS. We also tried 30 random restarts for LTS and picked the best result. All experiments were repeated 10 times, and the average root mean squared errors (RMSE) (with standard deviations) on the clean test data are reported in Table 1.

Table 1: RMSE on clean test data for an artificial data set with 5 features and 100 training points, with outlier probability p, and 10000 test data points. Results are averaged over 10 repetitions. Standard deviations are given in parentheses.

Method      p = 0.4         p = 0.2         p = 0.0
L2          43.5  (13)      57.6  (21.21)   0.52 (0.01)
L1          4.89  (2.81)    3.6   (2.04)    0.52 (0.01)
Huber       4.89  (2.81)    3.62  (2.02)    0.52 (0.01)
LTS         6.72  (7.37)    8.65  (14.11)   0.52 (0.01)
GemMc       0.53  (0.03)    0.52  (0.02)    0.52 (0.01)
[9]         0.52  (0.01)    0.52  (0.01)    0.52 (0.01)
AltBndL2    0.52  (0.01)    0.52  (0.01)    0.52 (0.02)
AltBndL1    0.73  (0.12)    0.74  (0.16)    0.52 (0.01)
CvxBndL2    0.52  (0.01)    0.52  (0.01)    0.52 (0.01)
CvxBndL1    0.53  (0.02)    0.55  (0.05)    0.52 (0.01)

For p = 0 (i.e. no outliers), all methods perform well; their RMSEs are close to optimal (1/2, the standard deviation of ε_i). However, when outliers start to appear, the result of least squares is significantly skewed, while the results of classic robust statistics methods, Huber, L1 and LTS, indeed turn out to be more robust than least squares but are nevertheless still affected significantly. Both implementations of the new method perform comparably to the non-convex Geman-McClure loss while substantially improving on the alternating strategy under the L1 loss. Note that the latter improvement clearly demonstrates that alternating can be trapped in poor local minima. The proposal from [9] was not effective in this setting (which differs from the one investigated there).

Next, we conducted an experiment on four real datasets taken from the StatLib repository⁹ and DELVE.¹⁰ For each data set, we randomly selected 108 points as the training set, and another random 1000 points as the test set. Here the regularization constant is tuned by 10-fold cross-validation. To seed outliers, 5% of the training set are randomly chosen and their X and y values are multiplied by 100 and 10000, respectively. All of these data sets have 8 features, except pumadyn, which has 32 features. We also estimated the scale factor on the training set by the mean absolute deviation method, a common method in robust statistics [3]. Again, the ideal parameter n₀ = (1 − 5%)n is granted to LTS and 30 random restarts are performed.

Table 2: RMSE on clean test data for 108 training data points and 1000 test data points, with 10 repeats. Standard deviations are shown in parentheses. The mean gap values of CvxBndL2 and CvxBndL1, Gap(Cvx2) and Gap(Cvx1) respectively, are given in the last two rows.

Method      cal-housing      abalone         pumadyn         bank-8fh
L2          1185 (124.59)    7.93  (0.67)    1.24  (0.42)    18.21 (6.57)
L1          1303 (244.85)    7.30  (0.40)    1.29  (0.42)    6.54  (3.09)
Huber       1221 (119.18)    7.73  (0.49)    1.24  (0.42)    7.37  (3.18)
LTS         533  (398.92)    755.1 (126)     0.32  (0.41)    10.96 (6.67)
GemMc       28   (88.45)     2.30  (0.01)    0.12  (0.12)    0.93  (0.80)
[9]         967  (522.40)    8.39  (0.54)    0.81  (0.77)    3.91  (6.18)
AltBndL2    967  (522.40)    8.39  (0.54)    0.81  (0.77)    7.74  (9.40)
AltBndL1    1005 (603.00)    7.30  (0.40)    1.29  (0.42)    1.61  (2.51)
CvxBndL2    9    (0.64)      7.60  (0.86)    0.07  (0.07)    0.20  (0.05)
CvxBndL1    8    (0.28)      2.98  (0.08)    0.08  (0.07)    0.10  (0.07)
Gap(Cvx2)   2e-12 (3e-12)    3e-9  (4e-9)    0.025 (0.052)   0.001 (0.003)
Gap(Cvx1)   0.005 (0.01)     0.001 (0.001)   0.267 (0.269)   0.011 (0.028)

The RMSE on the test set for all methods is reported in Table 2. It is clear that all methods based on convex losses (L2, L1, Huber) suffer significantly from the added outliers. The method proposed in this paper consistently outperforms all other methods by a noticeable margin, except on the abalone data set, where GemMc performs slightly better.¹¹ Again, we observe evidence that the alternating strategy can be trapped in poor local minima, while the method from [9] was less effective. We also measured the relative optimality gaps for the approximate CvxBnd procedures.
The gaps were quite small in most cases (the gaps were very close to zero in the synthetic case, and so are not shown), demonstrating the tightness of the proposed approximation scheme. 7 Conclusion We have developed a new robust regression method that can guarantee a form of robustness (bounded response) while ensuring tractability (polynomial run-time). The estimator has been proved consistent under some restrictive but non-trivial conditions, although we have not established general consistency. Nevertheless, an empirical evaluation reveals that the method meets or surpasses the generalization ability of state-of-the-art robust regression methods in experimental studies. Although the method is more computationally involved than standard approaches, it achieves reasonable scalability in real problems. We are investigating whether the proposed estimator achieves stronger robustness properties, such as high breakdown or bounded influence. It would be interesting to extend the approach to also estimate scale in a robust and tractable manner. Finally, we continue to investigate whether other techniques from the robust statistics and machine learning literatures can be incorporated in the general framework while preserving desired properties. Acknowledgements Research supported by AICML and NSERC. 9 http://lib.stat.cmu.edu/datasets/ http://www.cs.utoronto.ca/ delve/data/summaryTable.html 11 Note that we obtain different results than [9] arising from a very different outlier process. 10 8 References [1] T. Bernholt. Robust estimators are hard to compute. Technical Report 52/2005, SFB475, U. Dortmund, 2005. [2] R. Nunkesser and O. Morell. An evolutionary algorithm for robust regression. Computational Statistics and Data Analysis, 54:3242?3248, 2010. [3] R. Maronna, R. Martin, and V. Yohai. Robust Statistics: Theory and Methods. Wiley, 2006. [4] P. Huber and E. Ronchetti. Robust Statistics. Wiley, 2nd edition, 2009. [5] A. Christmann and I. Steinwart. Consistency and robustness of kernel-based regression in convex risk minimization. Bernoulli, 13(3):799?819, 2007. [6] A. Christmann, A. Van Messem, and I. Steinwart. On consistency and robustness properties of support vector machines for heavy-tailed distributions. Statistics and Its Interface, 2:311?327, 2009. [7] P. Rousseeuw and A. Leroy. Robust Regression and Outlier Detection. Wiley, 1987. [8] M. Black and A. Rangarajan. On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. International Journal of Computer Vision, 19(1): 57?91, 1996. [9] Y. Yu, M. Yang, L. Xu, M. White, and D. Schuurmans. Relaxed clipping: A global training method for robust regression and classification. In Advances in Neural Information Processings Systems (NIPS), 2010. [10] A. Bental, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, October 2009. [11] H. Xu, C. Caramanis, and S. Mannor. Robust regression and Lasso. In Advances in Neural Information Processing Systems (NIPS), volume 21, pages 1801?1808, 2008. [12] H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10:1485?1510, 2009. [13] S. Mukherjee, P. Niyogi, T. Poggio, and R. Rifkin. Learning theory: Stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Advances in Computational Mathematics, 25(1-3):161?193, 2006. [14] G. Kimeldorf and G. 
Wahba. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Annals of Mathematical Statistics, 41(2):495?502, 1970. [15] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008. [16] Aad W. van der Vaart and Jon A. Wellner. Weak Convergence and Empirical Processes. Springer, 1996. [17] F. Hampel, E. Ronchetti, P. Rousseeuw, and W. Stahel. Robust Statistics: The Approach Based on Influence Functions. Wiley, 1986. [18] D. Donoho and P. Huber. The notion of breakdown point. In A Festschrift for Erich L. Lehmann, pages 157?184. Wadsworth, 1983. [19] P. Davies and U. Gather. The breakdown point?examples and counterexamples. REVSTAT Statistical Journal, 5(1):1?17, 2007. [20] Y. Nesterov and A. Nemiroviskii. Interior-point Polynomial Methods in Convex Programming. SIAM, 1994. [21] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, 2003. [22] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge U. Press, 2004. [23] J. Peng and Y. Wei. Approximating k-means-type clustering via semidefinite programming. SIAM Journal on Optimization, 18(1):186?205, 2007. [24] P. Rousseeuw and K. Van Driessen. Computing LTS regression for large data sets. Data Mining and Knowledge Discovery, 12(1):29?45, 2006. [25] R. Horn and C. Johnson. Matrix Analysis. Cambridge, 1985. 9
Multi-Task Averaging Sergey Feldman, Maya R. Gupta, and Bela A. Frigyik Department of Electrical Engineering University of Washington Seattle, WA 98103 Abstract We present a multi-task learning approach to jointly estimate the means of multiple independent data sets. The proposed multi-task averaging (MTA) algorithm results in a convex combination of the single-task averages. We derive the optimal amount of regularization, and show that it can be effectively estimated. Simulations and real data experiments demonstrate that MTA outperforms both maximum likelihood and James-Stein estimators, and that our approach to estimating the amount of regularization rivals cross-validation in performance but is more computationally efficient. 1 Introduction The motivating hypothesis behind multi-task learning (MTL) algorithms is that leveraging data from related tasks can yield superior performance over learning from each task independently. Early evidence for this hypothesis is Stein?s work on the estimation of the means of T distributions (tasks) [1]. Stein showed that it is better (in a summed squared error sense) to estimate each of the means of T Gaussian random variables using data sampled from all of them, even if they are independent and have different means. That is, it is beneficial to consider samples from seemingly unrelated distributions in the estimation of the tth mean. This surprising result is often referred to as Stein?s paradox [2]. Estimating means is perhaps the most common of all estimation tasks, and often multiple means need to be estimated. In this paper we consider a multi-task regularization approach to the problem of estimating multiple means that we call multi-task averaging (MTA). We show that MTA has provably nice theoretical properties, is effective in practice, and is computationally efficient. We define the MTA objective in Section 2, and review related work in Section 3. We present some key properties of MTA in Section 4 (proofs are omitted due to space constraints). In particular, we state the optimal amount of regularization to be used, and show that this optimal amount can be effectively estimated. Simulations in Section 5 verify the advantage of MTA over standard sample means and James-Stein estimation if the true means are close compared to the sample variance. In Section 6.1, two experiments estimating expected sales show that MTA can reduce real errors by over 30% compared to the sample mean. MTA can be used anywhere multiple averages are needed; we demonstrate this by applying it fruitfully to the averaging step of kernel density estimation in Section 6.1. 2 Multi-Task Averaging Consider the T -task problem of estimating the means of T random variables that have finite mean t and variance. Let {Yti }N i=1 be Nt independent and identically-distributed random samples for t = 1, . . . , T . The MTA objective and many of the results in this paper generalize trivially to samples that are vectors rather than scalars, but for notational simplicity we restrict our focus to scalar samples Yti ? R. Key notation is given in Table 1. 1 Table 1: Key Notation T Nt Yti ? R Y?t ? R Yt? ? R ?t2 ? A ? RT ?T L=D?A number of tasks number of samples for tth task ith random sample from Ptth task tth sample average N1t i Yti MTA estimate of tth mean variance of the tth task ?2 diagonal covariance matrix of Y? with ?tt = Ntt pairwise task similarity matrix PT graph Laplacian of A, with diagonal D s.t. Dtt = r=1 Atr In addition, assume that the T ? 
T matrix A describes the relatedness or similarity of any pair of the T tasks, with A_tt = 0 for all t without loss of generality (because the diagonal self-similarity terms are canceled in the objective below). The proposed MTA objective is

{Y_t*}_{t=1}^T = arg min_{{Ỹ_t}_{t=1}^T} (1/T) Σ_{t=1}^T Σ_{i=1}^{N_t} (Y_{ti} − Ỹ_t)²/σ_t² + (γ/T²) Σ_{r=1}^T Σ_{s=1}^T A_rs (Ỹ_r − Ỹ_s)².    (1)

The first term minimizes the sum of the empirical losses, and the second term jointly regularizes the estimates by regularizing their pairwise differences. The regularization parameter γ balances the empirical risk and the multi-task regularizer. Note that if γ = 0, then (1) decomposes into T separate minimization problems, producing the sample averages Ȳ_t. The normalization of each error term in (1) by its task-specific variance σ_t² (which may be estimated) scales the T empirical loss terms relative to the variance of their distribution; this ensures that high-variance tasks do not disproportionately dominate the loss term.

A more general formulation of MTA is

{Y_t*}_{t=1}^T = arg min_{{Ỹ_t}_{t=1}^T} (1/T) Σ_{t=1}^T Σ_{i=1}^{N_t} L(Y_{ti}, Ỹ_t) + γ J({Ỹ_t}_{t=1}^T),

where L is some loss function and J is a regularization function. If L is chosen to be any Bregman loss, then setting γ = 0 will produce the T sample averages [3]. For the analysis and experiments in this paper, we restrict our focus to the tractable squared-error formulation given in (1).

The task similarity matrix A can be specified as side information (e.g. from a domain expert), or set in an optimal fashion. In Section 4 we derive two optimal choices of A for the T = 2 case: the A that minimizes expected squared error, and a minimax A. We use the T = 2 analysis to propose practical estimators of A for any number of tasks.

3 Related Work

MTA is an approach to the problem of estimating T means. We are not aware of other work in the multi-task literature that addresses this problem; most MTL methods are designed for regression, classification, or feature selection, e.g. [4, 5, 6]. The most closely related work is Stein estimation, an empirical Bayes strategy for estimating multiple means simultaneously [7, 8, 2, 9]. James and Stein [7] showed that the maximum likelihood estimate of the tth mean μ_t can be dominated by a shrinkage estimate given Gaussian assumptions. There have been a number of extensions to the original James-Stein estimator. We compare to the positive-part residual James-Stein estimator for multiple data points per task and independent unequal variances [8, 10], such that the estimated mean for the tth task is

ξ + ( 1 − (T − 3)/((Ȳ − ξ)^T Σ⁻¹ (Ȳ − ξ)) )₊ (Ȳ_t − ξ),    (2)

where (x)₊ = max(0, x), Σ is the diagonal matrix of the estimated variances of each sample mean with Σ_tt = σ_t²/N_t, and the estimate is shrunk towards ξ, which is usually set to be the mean of the sample means (other choices are sometimes used): ξ = (1/T) Σ_t Ȳ_t. Bock's formulation of (2) uses the effective dimension (defined as the ratio of the trace of Σ to the maximum eigenvalue of Σ) rather than the T in the numerator of (2) [8, 7, 10]. In preliminary practical experiments where Σ must be estimated from the data, we found that using the effective dimension significantly crippled the performance of the James-Stein estimator. We hypothesize that this is due to the high variance of the estimate of the maximum eigenvalue of Σ. MTA can be interpreted as estimating means of T Gaussians with an intrinsic Gaussian Markov random field prior [11].
Unlike most work in graphical models, we do not assume any variables are conditionally independent, and generally have non-sparse inverse covariance. A key issue for MTA and many other multi-task learning methods is how to estimate the similarity (or task relatedness) between tasks and/or samples if it is not provided. A common approach is to estimate the similarity matrix jointly with the task parameters [12, 13, 5, 14, 15]. For example, Zhang and Yeung [15] assumed that there exists a covariance matrix for the task relatedness, and proposed a convex optimization approach to estimate the task covariance matrix and the task parameters in a joint, alternating way. Applying such joint and alternating approaches to the MTA objective (1) leads to a degenerate solution with zero similarity. However, the simplicity of MTA enables us to specify the optimal task similarity matrix for T = 2 (see Sec. 4), which we generalize to obtain an estimator for the general multi-task case.

4 MTA Theory

For symmetric A with non-negative components¹, the MTA objective given in (1) is continuous, differentiable, and convex. It is straightforward to show that (1) has the closed-form solution

Y* = (I + (γ/T) Σ L)⁻¹ Ȳ,    (3)

where Ȳ is the vector of sample averages with tth entry Ȳ_t = (1/N_t) Σ_{i=1}^{N_t} Y_{ti}, L = D − A is the graph Laplacian of A, and Σ is defined as before. With non-negative A and γ, the matrix inverse in (3) can be shown to always exist using the Gershgorin Circle Theorem [16]. Note that the (r, s)th entry of (γ/T)ΣL goes to 0 as N_t approaches infinity, and since matrix inversion is a continuous operation, (I + (γ/T)ΣL)⁻¹ → I in the norm. By the law of large numbers one can conclude that Y* asymptotically approaches the true means.

¹ If an asymmetric A is provided, using it with MTA is equivalent to using the symmetric (A^T + A)/2.

4.1 Convexity of MTA Solution

From inspection of (3), it is clear that each of the elements of Y* is a linear combination of the sample averages Ȳ. However, a stronger statement can be made:

Theorem: If γ ≥ 0, 0 ≤ A_rs < ∞ for all r, s, and 0 < σ_t²/N_t < ∞ for all t, then the MTA estimates {Y_t*} given in (3) are a convex combination of the task sample averages {Ȳ_t}.

Proof Sketch: The theorem requires showing that the matrix W = (I + (γ/T)ΣL)⁻¹ exists and is right-stochastic. Using the Gershgorin Circle Theorem [16], we can show that the real part of every eigenvalue of W⁻¹ is positive. The matrix W⁻¹ is a Z-matrix [17], and if the real part of each of the eigenvalues of a Z-matrix is positive, then its inverse has all non-negative entries (see Chapter 6, Theorem 2.3, G20, and N38, [17]). Finally, to prove that W has rows that sum to 1, first note that by definition the rows of the graph Laplacian L sum to zero. Thus (I + (γ/T)ΣL) 1 = 1, and because we established invertibility, this implies the desired right-stochasticity: 1 = (I + (γ/T)ΣL)⁻¹ 1.

4.2 Optimal A for the Two Task Case

In this section we analyze the T = 2 task case, with N₁ and N₂ samples for tasks 1 and 2 respectively. Suppose {Y_{1i}} are iid (independently and identically distributed) with finite mean μ₁ and finite variance σ₁², and {Y_{2i}} are iid with finite mean μ₂ = μ₁ + Δ and finite variance σ₂². Let the task-relatedness matrix be A = [0 a; a 0], and without loss of generality, fix γ = 1. Then the closed-form solution (3) can be simplified:

Y₁* = [(T + (σ₂²/N₂)a) / (T + (σ₁²/N₁)a + (σ₂²/N₂)a)] Ȳ₁ + [((σ₁²/N₁)a) / (T + (σ₁²/N₁)a + (σ₂²/N₂)a)] Ȳ₂.    (4)

It is straightforward to derive the mean squared error of Y₁*:

MSE[Y₁*] = [ (σ₁²/N₁)(T² + 2T(σ₂²/N₂)a + (σ₂⁴/N₂²)a² + (σ₁²σ₂²/(N₁N₂))a²) + Δ²(σ₁⁴/N₁²)a² ] / (T + (σ₁²/N₁)a + (σ₂²/N₂)a)².    (5)

Comparing to the MSE of the sample average, one obtains the following relationship:

MSE[Y₁*] < MSE[Ȳ₁]  if  Δ² − σ₁²/N₁ − σ₂²/N₂ < 4/a.    (6)

Thus the MTA estimate of the first mean has lower MSE if the squared mean-separation Δ² is small compared to the variances of the sample averages. Note that as a approaches 0 from above, the RHS of (6) approaches infinity, which means that a small amount of regularization can be helpful even when the difference between the task means Δ is large. Summarizing, if the two task means are close relative to each task's sample variance, MTA will help. The risk is the sum of the mean squared errors, MSE[Y₁*] + MSE[Y₂*], which is a convex, continuous, and differentiable function of a; therefore the first derivative can be used to specify the optimal value a* when all the other variables are fixed. Minimizing MSE[Y₁*] + MSE[Y₂*] w.r.t. a, one obtains the solution

a* = 2/Δ²,    (7)

which is always non-negative. Analysis of the second derivative shows that this minimizer always holds for the cases of interest (that is, for N₁, N₂ ≥ 1). In the limit case, when the difference in the task means Δ goes to zero (while the σ_t² stay constant), the optimal task-relatedness a* goes to infinity, and the weights in (4) on Ȳ₁ and Ȳ₂ become 1/2 each.

4.3 Estimating A from Data

Based on our analysis of the optimal A for the two-task case, we propose two methods to estimate A from data for arbitrary T. The first method is designed to minimize the approximate risk using a constant similarity matrix. The second method provides a minimax estimator. With both methods we can use the Sherman-Morrison formula to avoid taking the matrix inverse in (3), and the computation of Y* is O(T).

4.3.1 Constant MTA

Recalling that E[Ȳ Ȳ^T] = μμ^T + Σ, the risk of an estimator Ŷ = WȲ of the unknown parameter vector μ for the squared loss is the sum of the mean squared errors:

R(μ, WȲ) = E[(WȲ − μ)^T (WȲ − μ)] = tr(W Σ W^T) + μ^T (I − W)^T (I − W) μ.    (8)

One approach to generalizing the results of Section 4.2 to arbitrary T is to try to find a symmetric, non-negative matrix A such that the (convex, differentiable) risk R(μ, WȲ) is minimized for W = (I + (γ/T)ΣL)⁻¹ (recall L is the graph Laplacian of A). The problem with this approach is two-fold: (i) the solution is not analytically tractable for T > 2, and (ii) an arbitrary A has T(T − 1) degrees of freedom, which is considerably more than the number of means we are trying to estimate in
produces what we refer to as the constant MTA similarity and an estimated covariance matrix ? estimate  ?1 1? T ? ? Y = I + ?L(? a 11 ) Y? . (9) T Note that we made the assumption that the entries of ? were the same in order to be able to derive ? used with a the constant similarity a? , but we do not need nor suggest that assumption on the ? ?? in (9). 4.4 Minimax MTA Bock?s James-Stein estimator is minimax in that it minimizes the worst-case loss, not necessarily the expected loss [10]. This leads to a more conservative use of regularization. In this section, we derive a minimax version of MTA, that prescribes less regularization than the constant MTA. Formally, an estimator Y M of ? is called minimax if it minimizes the maximum risk: inf sup R(?, Y? ) = sup R(?, Y M ). Y? ? ? First, we will specify minimax MTA for the T = 2 case. To find a minimax estimator Y M it is sufficient to show that (i) Y M is a Bayes estimator w.r.t. the least favorable prior (LFP) and (ii) it has constant risk [10]. To find a LFP, we first need to specify a constraint set for ?t ; we use an interval: ?t ? [bl , bu ], for all t, where bl ? R and bu ? R. With this constraint set the minimax estimator is:  ?1 2 T M Y = I+ ?L(11 ) Y? , (10) T (bu ? bl )2 which reduces to (7) when T = 2. This minimax analysis is only valid for the case when T = 2, but we found that good practical results for larger T using (10) with the data-dependent interval ?bl = mint y?t and ?bu = maxt y?t . 5 Simulations We first illustrate the performance of the proposed MTA using Gaussian and uniform simulations so that comparisons to ground truth can be made. Simulation parameters are given in the table in Figure 1, and were set so that the variances of the distribution of the true means were the same in both types of simulations. Simulation results are reported in Figure 1 for different values of ??2 , which determines the variance of the distribution over the means. We compared constant MTA and minimax MTA to single-task sample averages and to the JamesStein estimator given in (2). We also compared to a randomized 5-fold 50/50 cross-validated (CV) version of constant MTA, and minimax MTA, and the James-Stein estimator (which is simply a convex regularization towards the average of the sample means: ?? yt +(1??)y?.). For the cross-validated versions, we randomly subsampled Nt /2 samples and chose the value of ? for constant/minimax 5 Gaussian Simulations ?t ? N (0, ??2 ) ?t2 ? Gamma(0.9, 1.0) + 0.1 Nt ? U {2, . . . , 100} yti ? N (?t , ?t2 ) Uniform Simulations q q ?t ? U (? 3??2 , 3??2 ) ?t2 ? U (0.1, 2.0) Nt ? U {2, . . .p , 100} p yti ? U [?t ? 3?t2 , ?t + 3?t2 ] T=2 T=2 10 % change vs. single?task % change vs. single?task 10 0 ?10 ?20 ?30 ?40 ?50 0 0.5 Single?Task James?Stein MTA, constant MTA, minimax James?Stein (CV) MTA, constant (CV) MTA, minimax (CV) 1 1.5 2 2.5 3 2 ?? (variance of the means) 0 ?10 ?20 ?30 ?40 ?50 0 0.5 Single?Task James?Stein MTA, constant MTA, minimax James?Stein (CV) MTA, constant (CV) MTA, minimax (CV) 1 1.5 2 2.5 3 2 ?? (variance of the means) T=5 T=5 10 % change vs. single?task % change vs. single?task 10 0 ?10 ?20 ?30 ?40 ?50 0 0.5 1 1.5 2 2.5 ?2 (variance of the means) 0 ?10 ?20 ?30 ?40 ?50 0 3 0.5 ? T = 25 T = 25 10 % change vs. single?task % change vs. single?task 3 ? 10 0 ?10 ?20 ?30 ?40 ?50 0 1 1.5 2 2.5 ?2 (variance of the means) 0.5 1 1.5 2 2.5 ?2? (variance of the means) 0 ?10 ?20 ?30 ?40 ?50 0 3 0.5 1 1.5 2 2.5 ?2? 
(variance of the means) 3 Figure 1: Average (over 10000 random draws) percent change in risk vs. single-task. Lower is better. MTA or ? for James-Stein that resulted in the lowest average left-out risk compared to the sample mean estimated with all Nt samples. In the optimal versions of constant/minimax MTA, ? was set to 1, as this was the case during derivation. We used the following parameters for CV: ? ? {2?5 , 2?4 , . . . , 25 } for the MTA estimators and a ? comparable set of ? spanning (0, 1) by the transformation ? = ?+1 . Even when cross-validating, an advantage of using the proposed constant MTA or minimax MTA is that these estimators provide ? a data-adaptive scale for ?, where ? = 1 sets the regularization parameter to be aT or T (bu1?bl )2 , respectively. Some observations from Figure 1: further to the right on the x-axis, the means are more likely to be further apart, and multi-task approaches help less on average. For T = 2, the James-Stein estimator reduces to the single-task estimator, and is of no help. The MTA estimators provide a gain while 6 ??2 < 1 but deteriorates quickly thereafter. For T = 5, constant MTA dominates in the Gaussian case, but in the uniform case does worse than single-task when the means are far apart. Note that for all T > 2 minimax MTA almost always outperforms James-Stein and always outperforms singletask, which is to be expected as it was designed conservatively. For T = 25, we see the trend that all estimators benefit from an increase in the number of tasks. For constant MTA, cross-validation is always worse than the estimated optimal regularization. Since both constant MTA and minimax MTA use a similarity matrix of all ones scaled by a constant, crossvalidating over a set of possible ? may result in nearly identical performance, and this can be seen in the Figure (i.e. the green and blue dotted lines are superimposed). To conclude, when the tasks are close to each other compared to their variances, constant MTA is the best estimator to use by a wide margin. When the tasks are farther apart, minimax MTA will provide a win over both James-Stein and maximum likelihood. 6 Applications We present two applications with real data. The first application parallels the simulations, estimating expected values of sales of related products. The second application uses MTA for multi-task kernel density estimation, highlighting the applicability of MTA to any algorithm that uses sample averages. 6.1 Application: Estimating Product Sales We consider two multi-task problems using sales data over a certain time period supplied by Artifact Puzzles, a company that sells jigsaw puzzles online. For both problems, we model the given samples as being drawn iid from each task. The first problem estimates the impact of a particular puzzle on repeat business: ?Estimate how much a random customer will spend on an order on average, if on their last order they purchased the tth puzzle, for each of T = 77 puzzles.? The samples were the amounts different customers had spent on orders after buying each of the t puzzles, and ranged from 480 down to 0 for customers that had not re-ordered. The number of samples for each puzzle ranged from Nt = 8 to Nt = 348. The second problem estimates the expected order size of a particular customer: ?Estimate how much the tth customer will spend on a order on average, for each of the T = 477 customers that ordered at least twice during the data timeframe.? The samples were the order amounts for each of the T customers. 
6 Applications

We present two applications with real data. The first application parallels the simulations, estimating expected values of sales of related products. The second application uses MTA for multi-task kernel density estimation, highlighting the applicability of MTA to any algorithm that uses sample averages.

6.1 Application: Estimating Product Sales

We consider two multi-task problems using sales data over a certain time period supplied by Artifact Puzzles, a company that sells jigsaw puzzles online. For both problems, we model the given samples as being drawn iid from each task. The first problem estimates the impact of a particular puzzle on repeat business: "Estimate how much a random customer will spend on an order on average, if on their last order they purchased the tth puzzle, for each of T = 77 puzzles." The samples were the amounts different customers had spent on orders after buying each of the t puzzles, and ranged from 480 down to 0 for customers that had not re-ordered. The number of samples for each puzzle ranged from N_t = 8 to N_t = 348. The second problem estimates the expected order size of a particular customer: "Estimate how much the tth customer will spend on an order on average, for each of the T = 477 customers that ordered at least twice during the data timeframe." The samples were the order amounts for each of the T customers. Order amounts varied from 15 to 480. The number of samples for each customer ranged from N_t = 2 to N_t = 17.

There is no ground truth. As a metric to compare the estimates, we treat each task's sample average computed from all of the samples as the ground truth, and compare to estimates computed from a uniformly randomly chosen 50% of the samples. Results in Table 2 are averaged over 1000 random draws of the 50% used for estimation. We used 5-fold cross-validation with the same parameter choices as in the simulations section.

Table 2: Percent change in average risk (for puzzle and buyer data, lower is better), and mean reciprocal rank (for terrorist data, higher is better).

    Estimator            Puzzles (T = 77)   Customers (T = 477)   Suicide Bombings (T = 7)
    Pooled Across Tasks  181.67%            109.21%               0.13
    James-Stein          -6.87%             -14.04%               0.15
    James-Stein (CV)     -21.18%            -31.01%               0.15
    Constant MTA         -17.48%            -32.29%               0.19
    Constant MTA (CV)    -21.65%            -30.89%               0.19
    Minimax MTA          -8.41%             -2.96%                0.19
    Minimax MTA (CV)     -19.83%            -25.04%               0.19
    Expert MTA           -                  -                     0.19
    Expert MTA (CV)      -                  -                     0.19

6.2 Density Estimation for Terrorism Risk Assessment

MTA can be used whenever multiple averages are taken. In this section we present multi-task kernel density estimation as an application of MTA. Recall that for standard single-task kernel density estimation (KDE) [18], a set of random samples x_i ∈ R^d, i ∈ {1, ..., N}, are assumed to be iid from an unknown distribution p_X, and the problem is to estimate the density for a query sample z ∈ R^d. Given a kernel function K(x_i, x_j), the un-normalized single-task KDE estimate is p̂(z) = (1/N) Σ_{i=1}^{N} K(x_i, z), which is just a sample average. When multiple kernel densities {p_t(z)}_{t=1}^{T} are estimated for the same domain, we replace the multiple sample averages with MTA estimates, which we refer to as multi-task kernel density estimation (MT-KDE).

We compared KDE and MT-KDE on a problem of estimating the probability of terrorist events in Jerusalem using the Naval Research Laboratory's Adversarial Modeling and Exploitation Database (NRL AMX-DB). The NRL AMX-DB combined multiple open primary sources2 to create a rich representation of the geospatial features of urban Jerusalem and the surrounding region, and accurately geocoded locations of terrorist attacks. Density estimation models are used to analyze the behavior of such violent agents, and to allocate security and medical resources. In related work, [19] also used a Gaussian kernel density estimate to assess risk from past terrorism events.

The goal in this application is to estimate a risk density for 40,000 geographical locations (samples) in a 20 km x 20 km area of interest in Jerusalem. Each geographical location is represented by a d = 76-dimensional feature vector. Each of the 76 features is the distance in kilometers to the nearest instance of some geographic location of interest, such as the nearest market or bus stop. Locations of past events are known for 17 suicide bombings. All the events are attributed to one of seven terrorist groups. The density estimates for these seven groups are expected to be related, and are treated as T = 7 tasks. The kernel K was taken to be a Gaussian kernel with identity covariance. In addition to constant A and minimax A, we also obtained a side-information A from terrorism expert Mohammed M. Hafez of the Naval Postgraduate School; he assessed the similarity between the seven groups during the Second Intifada (the time period of the data), providing similarities between 0 and 1.
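A minimal sketch of MT-KDE as described above: for each query point, the T per-task kernel averages are replaced by their MTA estimate. It reuses mta_estimate from the earlier sketch; the helper names are ours, and the similarity matrix A is left as an input (constant, minimax, or expert-provided):

    import numpy as np

    def gaussian_kernel(X, z):
        # Gaussian kernel with identity covariance, as in Section 6.2.
        return np.exp(-0.5 * np.sum((X - z) ** 2, axis=1))

    def mt_kde(task_samples, z, A, gamma=1.0):
        # task_samples: list of (N_t, d) arrays, one per task.
        # Un-normalized per-task KDE values at z are sample averages ...
        kde_vals = np.array([gaussian_kernel(X, z).mean() for X in task_samples])
        # ... which MT-KDE replaces with their joint MTA estimate.
        return mta_estimate(kde_vals, A, gamma)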
We used leave-one-out cross validation to assess KDE and MT-KDE for this problem, as follows. After computing the KDE and MT-KDE density estimates using all but one of the training examples {x_ti} for each task, we sort the resulting 40,000 estimated probabilities for each of the seven tasks, and extract the rank of the left-out known event. The mean reciprocal rank (MRR) metric is reported in Table 2. Ideally, the MRR of the left-out events would be as close to 1 as possible, indicating that the location of the left-out event is at high risk. The results show that the MRR for MT-KDE is higher than or equal to that for KDE for both problems; there are, however, too few samples to verify the statistical significance of these results.

7 Summary

Though perhaps unintuitive, we showed that, both in theory and in practice, estimating multiple unrelated means using an MTL approach can improve the overall risk, even more so than James-Stein estimation. Averaging is common, and MTA has potentially broad applicability as a subcomponent in many algorithms, such as k-means clustering, kernel density estimation, or non-local means denoising.

Acknowledgments

We thank Peter Sadowski, Mohammed Hafez, Carol Chang, Brian Sandberg and Ruth Wilis for help with preliminary experiments and access to the terrorist dataset.

2 Primary sources included the NRL Israel Suicide Terrorism Database (ISD) cross-referenced with open sources (including the Israel Ministry of Foreign Affairs, BBC, CPOST, Daily Telegraph, Associated Press, Ha'aretz Daily, Jerusalem Post, Israel National News), as well as the University of New Haven Institute for the Study of Violent Groups, the University of Maryland Global Terrorism Database, and the National Counter Terrorism Center Worldwide Incident Tracking System.

References

[1] C. Stein, "Inadmissibility of the usual estimator for the mean of a multivariate distribution," Proc. Third Berkeley Symposium on Mathematical Statistics and Probability, pp. 197-206, 1956.
[2] B. Efron and C. N. Morris, "Stein's paradox in statistics," Scientific American, vol. 236, no. 5, pp. 119-127, 1977.
[3] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, "Clustering with Bregman divergences," Journal of Machine Learning Research, vol. 6, pp. 1705-1749, December 2005.
[4] C. A. Micchelli and M. Pontil, "Kernels for multi-task learning," in Advances in Neural Information Processing Systems (NIPS), 2004.
[5] E. V. Bonilla, K. M. A. Chai, and C. K. I. Williams, "Multi-task Gaussian process prediction," in Advances in Neural Information Processing Systems (NIPS). MIT Press, 2008.
[6] A. Argyriou, T. Evgeniou, and M. Pontil, "Convex multi-task feature learning," Machine Learning, vol. 73, no. 3, pp. 243-272, 2008.
[7] W. James and C. Stein, "Estimation with quadratic loss," Proc. Fourth Berkeley Symposium on Mathematical Statistics and Probability, pp. 361-379, 1961.
[8] M. E. Bock, "Minimax estimators of the mean of a multivariate normal distribution," The Annals of Statistics, vol. 3, no. 1, 1975.
[9] G. Casella, "An introduction to empirical Bayes data analysis," The American Statistician, pp. 83-87, 1985.
[10] E. L. Lehmann and G. Casella, Theory of Point Estimation. New York: Springer, 1998.
[11] H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications, ser. Monographs on Statistics and Applied Probability. London: Chapman & Hall, 2005, vol. 104.
[12] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying, "A spectral regularization framework for multi-task structure learning," in Advances in Neural Information Processing Systems (NIPS), 2007.
[13] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram, "Multi-task learning for classification with Dirichlet process priors," Journal of Machine Learning Research, vol. 8, pp. 35-63, 2007.
[14] L. Jacob, F. Bach, and J.-P. Vert, "Clustered multi-task learning: A convex formulation," in Advances in Neural Information Processing Systems (NIPS), 2008, pp. 745-752.
[15] Y. Zhang and D.-Y. Yeung, "A convex formulation for learning task relationships," in Proc. of the 26th Conference on Uncertainty in Artificial Intelligence (UAI), 2010.
[16] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, 1990, corrected reprint of the 1985 original.
[17] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences. Academic Press, 1979.
[18] B. W. Silverman, Density Estimation for Statistics and Data Analysis. New York: Chapman and Hall, 1986.
[19] D. Brown, J. Dalton, and H. Hoyle, "Spatial forecast methods for terrorist events in urban environments," Lecture Notes in Computer Science, vol. 3073, pp. 426-435, 2004.
Shooting Craps in Search of an Optimal Strategy for Training Connectionist Pattern Classifiers

J. B. Hampshire II and B. V. K. Vijaya Kumar
Department of Electrical & Computer Engineering
Carnegie Mellon University
Pittsburgh, PA 15213-3890
[email protected] and [email protected]

Abstract

We compare two strategies for training connectionist (as well as non-connectionist) models for statistical pattern recognition. The probabilistic strategy is based on the notion that Bayesian discrimination (i.e., optimal classification) is achieved when the classifier learns the a posteriori class distributions of the random feature vector. The differential strategy is based on the notion that the identity of the largest class a posteriori probability of the feature vector is all that is needed to achieve Bayesian discrimination. Each strategy is directly linked to a family of objective functions that can be used in the supervised training procedure. We prove that the probabilistic strategy (linked with error-measure objective functions such as mean-squared-error and cross-entropy, typically used to train classifiers) necessarily requires larger training sets and more complex classifier architectures than those needed to approximate the Bayesian discriminant function. In contrast, we prove that the differential strategy (linked with classification figure-of-merit objective functions, CFMmono [3]) requires the minimum classifier functional complexity and the fewest training examples necessary to approximate the Bayesian discriminant function with specified precision (measured in probability of error). We present our proofs in the context of a game of chance in which an unfair C-sided die is tossed repeatedly. We show that this rigged game of dice is a paradigm at the root of all statistical pattern recognition tasks, and demonstrate how a simple extension of the concept leads us to a general information-theoretic model of sample complexity for statistical pattern recognition.

1 Introduction

Recent work on creating a connectionist pattern classifier that generalizes well to novel test data has focussed on the process of finding the network architecture with the minimum functional complexity necessary to model the training data accurately (see, for example, the works of Baum, Cover, Haussler, and Vapnik). Meanwhile, relatively little attention has been paid to the effect on generalization of the objective function used to train the classifier. In fact, the choice of objective function used to train the classifier is tantamount to a choice of training strategy, as described in the abstract [2, 3].

We formulate the proofs outlined in the abstract in the context of a rigged game of dice in which an unfair C-sided die is tossed repeatedly. Each face of the die has some probability of turning up. We assume that one face is always more likely than all the others. As a result, all the probabilities may be different, but at most C - 1 of them can be identical. The objective of the game is to identify the most likely die face with specified high confidence. The relationship between this rigged dice paradigm and statistical pattern recognition becomes clear if one realizes that a single unfair die is analogous to a specific point on the domain of the random feature vector being classified. Just as there are specific class probabilities associated with each point in feature vector space, each die has specific probabilities associated with each of its faces.
The number of faces on the die equals the number of classes associated with the analogous point in feature vector space. Identifying the most likely die face is equivalent to identifying the maximum class a posteriori probability for the analogous point in feature vector space, which is the requirement for Bayesian discrimination. We formulate our proofs for the case of a single die, and conclude by showing how a simple extension of the mathematics leads to general expressions for pattern recognition involving both discrete and continuous random feature vectors.

Authors' Note: In the interest of brevity, our proofs are posed as answers to questions that pertain to the rigged game of dice. It is hoped that the reader will find the relevance of each question/answer to statistical pattern recognition clear. Owing to page limitations, we cannot provide our proofs in full detail; the reader seeking such detail should refer to [1]. Definitions of symbols used in the following proofs are given in Table 1.

1.1 A Fixed-Point Representation

The Mq-bit approximation q_M[x] to the real number x ∈ (−1, 1] is of the form

    MSB (most significant bit)   = sign[x]
    MSB − 1                      = 2^{−1}
    ...
    LSB (least significant bit)  = 2^{−(Mq−1)}    (1)

with the specific value defined as the mid-point of the 2^{−(Mq−1)}-wide interval in which x is located:

    q_M[x] ≜ sign[x] · ( ⌊|x| · 2^{Mq−1}⌋ · 2^{−(Mq−1)} + 2^{−Mq} ),   |x| < 1,
    q_M[x] ≜ sign[x] · ( 1 − 2^{−Mq} ),   |x| = 1.    (2)

The lower and upper bounds on the quantization interval are

    L_Mq[x] < x < U_Mq[x],    (3)

where

    L_Mq[x] ≜ q_M[x] − 2^{−Mq}    (4)

and

    U_Mq[x] ≜ q_M[x] + 2^{−Mq}.    (5)

Table 1: Definitions of symbols used to describe die faces, probabilities, probabilistic differences, and associated estimates.

    ω_rj      The true jth most likely die face (ω̂_rj is the estimated jth most likely face).
    P(ω_rj)   The probability of the true jth most likely die face.
    k_rj      The number of occurrences of the true jth most likely die face.
    P̂(ω_rj)   An empirical estimate of the probability of the true jth most likely die face:
              P̂(ω_rj) = k_rj / n (note n denotes the sample size).
    Δ_ri      The probabilistic difference involving the true rankings and probabilities of the
              C die faces: Δ_ri = P(ω_ri) − sup_{j≠i} P(ω_rj).
    Δ̂_ri      The probabilistic difference involving the true rankings but empirically estimated
              probabilities of the C die faces: Δ̂_ri = P̂(ω_ri) − sup_{j≠i} P̂(ω_rj) = k_ri/n − sup_{j≠i} k_rj/n.

The fixed-point representation described by (1)-(5) differs from standard fixed-point representations in its choice of quantization interval. The choice of (2)-(5) represents zero as a negative (more precisely, a non-positive) finite-precision number. See [1] for the motivation of this format choice.

1.2 A Mathematical Comparison of the Probabilistic and Differential Strategies

The probabilistic strategy for identifying the most likely face on a die with C faces involves estimating the C face probabilities. In order for us to distinguish P(ω_r1) from P(ω_r2), we must choose Mq (i.e., the number of bits in our fixed-point representation of the estimated probabilities) such that

    q_M[P̂(ω_r1)] ≠ q_M[P̂(ω_r2)].    (6)

The distinction between the differential and probabilistic strategies is made more clear if one considers the way in which the Mq-bit approximation of Δ_r1 is computed from a random sample containing k_r1 occurrences of die face ω_r1 and k_r2 occurrences of die face ω_r2. For the differential strategy,

    Δ̂_r1 (differential) = q_M[ (k_r1 − k_r2) / n ],    (7)

and for the probabilistic strategy,

    Δ̂_r1 (probabilistic) = q_M[ k_r1 / n ] − q_M[ k_r2 / n ],    (8)

where

    Δ_i ≜ P(ω_i) − sup_{j≠i} P(ω_j),   i = 1, 2, ..., C.    (9)
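As a sanity check on (1)-(2) and (7)-(8), here is a small Python sketch of the mid-point quantizer and of the two ways of forming the Mq-bit estimate of Δ_r1; the function names are ours:

    import math

    def q(x, Mq):
        # Mid-point Mq-bit quantizer of Eqn. (2) for x in (-1, 1].
        # sign[0] is taken as -1 so that zero is represented as a
        # non-positive number, as the text specifies.
        s = -1.0 if x <= 0 else 1.0
        if abs(x) == 1.0:
            return s * (1.0 - 2.0 ** (-Mq))
        return s * (math.floor(abs(x) * 2 ** (Mq - 1)) * 2.0 ** (-(Mq - 1))
                    + 2.0 ** (-Mq))

    def delta_differential(k_r1, k_r2, n, Mq):
        # Differential strategy, Eqn. (7): quantize the difference once.
        return q((k_r1 - k_r2) / n, Mq)

    def delta_probabilistic(k_r1, k_r2, n, Mq):
        # Probabilistic strategy, Eqn. (8): quantize each probability,
        # then subtract.
        return q(k_r1 / n, Mq) - q(k_r2 / n, Mq)

With Mq = 1, q maps any positive argument to 1/2, so the differential estimate already has the correct sign whenever k_r1 > k_r2, while the probabilistic estimate collapses to 1/2 - 1/2 = 0, in line with the discussion below.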
Note that when i = r1,

    Δ_r1 = P(ω_r1) − P(ω_r2),    (10)

and when i ≠ r1,

    Δ_ri = P(ω_ri) − P(ω_r1).    (11)

Note also that

    Δ_r1 > 0 > Δ_ri   ∀ i ≠ r1.    (12)

Since

    Σ_{i=3}^{C} Δ_ri = Σ_{j=3}^{C} P(ω_rj) − (C − 2) P(ω_r1),    (13)

we can show that the C differences of (9) yield the C probabilities by

    P(ω_r1) = (1/C) [ 1 − Σ_{i=2}^{C} Δ_ri ],
    P(ω_rj) = Δ_rj + P(ω_r1)   ∀ j > 1.    (14)

Thus, estimating the C differences of (9) is equivalent to estimating the C probabilities P(ω_1), P(ω_2), ..., P(ω_C).

Clearly, the sign of Δ̂_r1 in (7) is modeled correctly (i.e., the differential Δ̂_r1 can correctly identify the most likely face) when Mq = 1, while this is typically not the case for the probabilistic Δ̂_r1 in (8). In the latter case, the probabilistic Δ̂_r1 is zero when Mq = 1 because q_M[P̂(ω_r1)] and q_M[P̂(ω_r2)] are indistinguishable for Mq below some minimal value implied by (6). That minimal value of Mq can be found by recognizing that the number of bits necessary for (6) to hold for asymptotically large n (i.e., for the quantized difference in (8) to exceed one LSB) is

    Mq_min = 1 + ⌈−log2 Δ_r1⌉          (sign bit plus magnitude bits)  if −log2 P(ω_rj) ∈ Z+ for j ∈ {1, 2},
    Mq_min = 1 + ( ⌈−log2 Δ_r1⌉ + 1 )  (sign bit plus magnitude bits)  otherwise,    (15)

where Z+ represents the set of all positive integers. Note that the conditional nature of Mq_min in (15) prevents the case in which lim_{ε→0} P(ω_r1) − ε = L_Mq[P(ω_r1)] or P(ω_r2) = U_Mq[P(ω_r2)]; either case would require an infinitely large sample size before the variance of the corresponding estimated probability became small enough to distinguish q_M[P̂(ω_r1)] from q_M[P̂(ω_r2)]. The sign bit in (15) is not required to estimate the probabilities themselves in (8), but it is necessary to compute the difference between the two probabilities in that equation, this difference being the ultimate computation by which we choose the most likely die face.

1.3 The Sample Complexity Product

We introduce the sample complexity product (SCP) as a measure of both the number of samples and the functional complexity (measured in bits) required to identify the most likely face of an unfair die with specified probability:

    SCP ≜ n · Mq   s.t.   P(most likely face correctly ID'd) ≥ α.    (16)

2 A Comparison of the Sample Complexity Requirements for the Probabilistic and Differential Strategies

Axiom 1. We view the number of bits Mq in the finite-precision approximation q_M[x] to the real number x ∈ (−1, 1] as a measure of the approximation's functional complexity. That is, the functional complexity of an approximation is the number of bits with which it represents a real number on (−1, 1].

Assumption 1. If P(ω_r1) > P(ω_r2), then P(ω_r1) will be greater than P(ω_rj) ∀ j > 2 (see [1] for an analysis of cases in which this assumption is invalid).

Question: What is the probability that the most likely face of an unfair die will be empirically identifiable after n tosses?

Answer for the probabilistic strategy:

    P( q_M[P̂(ω_r1)] > q_M[P̂(ω_rj)], ∀ j > 1 )
        = n! Σ_{k_r1 = Λ1}^{Υ1} ( P(ω_r1)^{k_r1} / k_r1! )
              [ Σ_{k_r2 = Λ2}^{Υ2} P(ω_r2)^{k_r2} ( 1 − P(ω_r1) − P(ω_r2) )^{n − k_r1 − k_r2}
                / ( k_r2! (n − k_r1 − k_r2)! ) ],    (17)

where

    Λ1 = max( B + 1, ⌈(n − k_r1)/(C − 1)⌉ + 1 )  ∀ C > 2,   Υ1 = n,
    Λ2 = 0,   Υ2 = min( B, n − k_r1 ),
    B ∈ {B_Mq},   B = k_{U_Mq}[P(ω_r2)] = k_{L_Mq}[P(ω_r1)] − 1.    (18)

There is a simple recursion in [1] by which every possible boundary for Mq-bit quantization leads to itself and two additional boundaries in the set {B_Mq} for (Mq + 1)-bit quantization.
Answer for the differential strategy:

    P( L_Mq[Δ_r1] < Δ̂_r1 < U_Mq[Δ_r1] )
        = n! Σ_{k_r1 = Λ1}^{Υ1} ( P(ω_r1)^{k_r1} / k_r1! )
              [ Σ_{k_r2 = Λ2}^{Υ2} P(ω_r2)^{k_r2} ( 1 − P(ω_r1) − P(ω_r2) )^{n − k_r1 − k_r2}
                / ( k_r2! (n − k_r1 − k_r2)! ) ],    (19)

where

    Λ1 = max( k_{L_Mq}[Δ_r1], ⌈(n − k_r1)/(C − 1)⌉ + 1 )  ∀ C > 2,   Υ1 = n,
    Λ2 = max( 0, k_r1 − k_{U_Mq}[Δ_r1] ),   Υ2 = min( k_r1 − k_{L_Mq}[Δ_r1], n − k_r1 ).    (20)

Since the multinomial terms are non-negative, it should be clear from a comparison of (17)-(18) and (19)-(20) that P( L_Mq[Δ_r1] < Δ̂_r1 < U_Mq[Δ_r1] ) is largest (and larger than any possible P( q_M[P̂(ω_r1)] > q_M[P̂(ω_rj)], ∀ j > 1 )) for a given sample size n when the differential strategy is employed with Mq = 1, such that L_Mq[Δ_r1] = 0 and U_Mq[Δ_r1] = 1 (i.e., k_{L_Mq}[Δ_r1] = 1 and k_{U_Mq}[Δ_r1] = n). The converse is also true, to wit:

Theorem 1. For a fixed value of n in (19), the 1-bit approximation to Δ_r1 yields the highest probability of identifying the most likely die face ω_r1.

It can be shown that Theorem 1 does not depend on the validity of Assumption 1 [1]. Given Axiom 1, the following corollary to Theorem 1 holds:

Corollary 1. The differential strategy's minimum-complexity 1-bit approximation of Δ_r1 yields the highest probability of identifying the most likely die face ω_r1 for a given number of tosses n.

Corollary 2. The differential strategy's minimum-complexity 1-bit approximation of Δ_r1 requires the smallest sample size necessary (n_min) to identify P(ω_r1), and thereby the most likely die face ω_r1, correctly with specified confidence.

Thus, the differential strategy requires the minimum SCP necessary to identify the most likely die face with specified confidence.

2.1 Theoretical Predictions versus Empirical Results

Figures 1 and 2 compare theoretical predictions of the number of samples n and the number of bits Mq necessary to identify the most likely face of a particular die versus the actual requirements obtained from 1000 games (3000 tosses of the die in each game). The die has five faces with probabilities P(ω_r1) = 0.37, P(ω_r2) = 0.28, P(ω_r3) = 0.2, P(ω_r4) = 0.1, and P(ω_r5) = 0.05. The theoretical predictions for Mq and n (arrows with boxed labels based on iterative searches employing equations (17) and (19)) that would with 0.95 confidence correctly identify the most likely die face ω_r1 are shown to correspond with the empirical results: in Figure 1 the empirical 0.95 confidence interval is marked by the lower bound of the dark gray and the upper bound of the light gray; in Figure 2 the empirical 0.95 confidence interval is marked by the lower bound of the P(ω_r1) distribution and the upper bound of the P(ω_r2) distribution.

[Figure 1: Theoretical predictions of the number of tosses needed to identify the most likely face ω_r1 with 95% confidence (Die 1): differential strategy prediction superimposed on empirical results of 1000 games (3000 tosses each).]

[Figure 2: Theoretical predictions of the number of tosses needed to identify the most likely face ω_r1 with 95% confidence (Die 1): probabilistic strategy prediction superimposed on empirical results of 1000 games (3000 tosses each).]

These figures illustrate that the differential strategy's minimum SCP is 227 (n = 227, Mq = 1) while the minimum SCP for the probabilistic strategy is 2720 (n = 544, Mq = 5). A complete tabulation of SCP as a function of P(ω_r1), P(ω_r2), and the worst-case choice for C (the number of classes/die faces) is given in [1]. A Monte Carlo version of this experiment is sketched below.
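The empirical side of Figures 1 and 2 is easy to approximate by simulation. The following hedged sketch plays the rigged-dice game many times and records, for each strategy at a given (n, Mq), how often the most likely face is correctly identified; it reuses q, delta_differential, and delta_probabilistic from the earlier sketch, and the tie-breaking convention is ours:

    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([0.37, 0.28, 0.2, 0.1, 0.05])  # Die 1 from Section 2.1

    def fraction_identified(n, Mq, strategy, games=1000):
        hits = 0
        for _ in range(games):
            counts = rng.multinomial(n, P)
            top = counts.argmax()
            k1 = counts[top]
            k2 = np.sort(counts)[-2]              # runner-up count
            if strategy == "differential":
                est = delta_differential(k1, k2, n, Mq)
            else:
                est = delta_probabilistic(k1, k2, n, Mq)
            # Correct identification: the empirically most frequent face is
            # the truly most likely one, and the quantized estimate
            # distinguishes it from the runner-up.
            hits += (top == 0) and (est > 0)
        return hits / games

    # E.g., fraction_identified(227, 1, "differential") should be near 0.95,
    # while the probabilistic strategy needs roughly n = 544 and Mq = 5.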
3 Conclusion

The sample complexity product (SCP) notion of functional complexity set forth herein is closely aligned with the complexity measures of Kolmogorov and Rissanen [4, 6]. We have used it to prove that the differential strategy for learning the Bayesian discriminant function is optimal in terms of its minimum requirements for classifier functional complexity and number of training examples when the classification task is identifying the most likely face of an unfair die. It is relatively straightforward to extend Theorem 1 and its corollaries to the general pattern recognition case in order to show that the expected SCP for the 1-bit differential strategy,

    E[SCP]_differential = ∫_x n_min[ P(ω_r1 | x), P(ω_r2 | x) ] · Mq_min[ P(ω_r1 | x), P(ω_r2 | x) ] p(x) dx,
    with Mq_min = 1,    (21)

(or the discrete random vector analog of this equation) is minimal [1]. This is because n_min is, by Corollary 2, the smallest sample size necessary to distinguish any and all P(ω_r1) from lesser P(ω_r2). The resulting analysis confirms that the classifier trained with the differential strategy for statistical pattern recognition (i.e., using a CFMmono objective function) has the highest probability of learning the Bayesian discriminant function when the functional capacity of the classifier and the available training data are both limited.

The relevance of this work to the process of designing and training robust connectionist pattern classifiers is evident if one considers the practical meaning of the terms n_min[P(ω_r1 | x), P(ω_r2 | x)] and Mq_min[P(ω_r1 | x), P(ω_r2 | x)] in the sample complexity product of (21). Given one's choice of connectionist model to employ as a classifier, the Mq_min term dictates the minimum necessary connectivity of that model. For example, (21) can be used to prove that a partially connected radial basis function (RBF) network with trainable variance parameters and three hidden-layer "nodes" has the minimum Mq necessary for Bayesian discrimination in the 3-class task described by [5]. However, because both SCP terms are functions of the probabilistic nature of the random feature vector being classified and the learning strategy employed, that minimal RBF architecture will only yield Bayesian discrimination if trained using the differential strategy. The probabilistic strategy requires significantly more functional complexity in the RBF in order to meet the requirements of the probabilistic strategy's SCP [1]. Philosophical arguments regarding the use of the differential strategy in lieu of the more traditional probabilistic strategy are discussed at length in [1].

Acknowledgement

This research was funded by the Air Force Office of Scientific Research under grant AFOSR-89-0551. We gratefully acknowledge their support.

References

[1] J. B. Hampshire II. A Differential Theory of Statistical Pattern Recognition. PhD thesis, Carnegie Mellon University, Department of Electrical & Computer Engineering, Hammerschlag Hall, Pittsburgh, PA 15213-3890, 1992. Manuscript in progress.
[2] J. B. Hampshire II and B. A. Pearlmutter. Equivalence Proofs for Multi-Layer Perceptron Classifiers and the Bayesian Discriminant Function. In Touretzky, Elman, Sejnowski, and Hinton, editors, Proceedings of the 1990 Connectionist Models Summer School, pages 159-172, San Mateo, CA, 1991. Morgan Kaufmann.
[3] J. B. Hampshire II and A. H. Waibel. A Novel Objective Function for Improved Phoneme Recognition Using Time-Delay Neural Networks. IEEE Transactions on Neural Networks, 1(2):216-228, June 1990.
A revised and extended version of work first presented at the 1989 International Joint Conference on Neural Networks, vol. I, pp. 235-241.
[4] A. N. Kolmogorov. Three Approaches to the Quantitative Definition of Information. Problems of Information Transmission, 1(1):1-7, Jan.-Mar. 1965. Faraday Press translation of Problemy Peredachi Informatsii.
[5] M. D. Richard and R. P. Lippmann. Neural Network Classifiers Estimate Bayesian a posteriori Probabilities. Neural Computation, 3(4):461-483, 1991.
[6] J. Rissanen. Modeling by shortest data description. Automatica, 14:465-471, 1978.
Semi-supervised Eigenvectors for Locally-biased Learning

Michael W. Mahoney
Department of Mathematics
Stanford University
Stanford, CA 94305
[email protected]

Toke Jansen Hansen
Section for Cognitive Systems
DTU Informatics
Technical University of Denmark
[email protected]

Abstract

In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks "nearby" that pre-specified target region. Locally-biased problems of this sort are particularly challenging for popular eigenvector-based machine learning and data analysis tools. At root, the reason is that eigenvectors are inherently global quantities. In this paper, we address this issue by providing a methodology to construct semi-supervised eigenvectors of a graph Laplacian, and we illustrate how these locally-biased eigenvectors can be used to perform locally-biased machine learning. These semi-supervised eigenvectors capture successively-orthogonalized directions of maximum variance, conditioned on being well-correlated with an input seed set of nodes that is assumed to be provided in a semi-supervised manner. We also provide several empirical examples demonstrating how these semi-supervised eigenvectors can be used to perform locally-biased learning.

1 Introduction

We consider the problem of finding a set of locally-biased vectors that inherit many of the "nice" properties that the leading nontrivial global eigenvectors of a graph Laplacian have (for example, they capture "slowly varying" modes in the data, they are fairly efficiently computable, and they can be used for common machine learning and data analysis tasks such as kernel-based and semi-supervised learning), so that we can perform what we will call locally-biased machine learning in a principled manner.

By locally-biased machine learning, we mean that we have a very large data set, e.g., represented as a graph, and that we have information, e.g., given in a semi-supervised manner, that certain "regions" of the data graph are of particular interest. In this case, we may want to focus predominantly on those regions and perform data analysis and machine learning, e.g., classification, clustering, ranking, etc., that is "biased toward" those pre-specified regions. Examples of this include the following.

- Locally-biased community identification. In social and information network analysis, one might have a small "seed set" of nodes that belong to a cluster or community of interest [2, 13]; in this case, one might want to perform link or edge prediction, or one might want to "refine" the seed set in order to find other nearby members.

- Locally-biased image segmentation. In computer vision, one might have a large corpus of images along with a "ground truth" set of pixels as provided by a face detection algorithm [7, 14, 15]; in this case, one might want to segment entire heads from the background for all the images in the corpus in an automated manner.
These examples present considerable challenges for spectral techniques and traditional eigenvectorbased methods. At root, the reason is that eigenvectors are inherently global quantities, thus limiting their applicability in situations where one is interested in very local properties of the data. In this paper, we provide a methodology to construct what we will call semi-supervised eigenvectors of a graph Laplacian; and we illustrate how these locally-biased eigenvectors inherit many of the properties that make the leading nontrivial global eigenvectors of the graph Laplacian so useful in applications. To achieve this, we will formulate an optimization ansatz that is a variant of the usual global spectral graph partitioning optimization problem that includes a natural locality constraint as well as an orthogonality constraint, and we will iteratively solve this problem. In more detail, assume that we are given as input a (possibly weighted) data graph G = (V, E), an indicator vector s of a small ?seed set? of nodes, a correlation parameter ? ? [0, 1], and a positive integer k. Then, informally, we would like to construct k vectors that satisfy the following bicriteria: first, each of these k vectors is well-correlated with the input seed set; and second, those k vectors describe successively-orthogonalized directions of maximum variance, in a manner analogous to the leading k nontrivial global eigenvectors of the graph Laplacian. (We emphasize that the seed set s of nodes, the integer k, and the correlation parameter ? are part of the input; and thus they should be thought of as being available in a semi-supervised manner.) Somewhat more formally, our main algorithm, Algorithm 1 in Section 3, returns as output k semi-supervised eigenvectors; each of these is the solution to an optimization problem of the form of G ENERALIZED L OCAL S PECTRAL in Figure 1, and thus each ?captures? (say) ?/k of the correlation with the seed set. Our main theoretical result states that these vectors define successively-orthogonalized directions of maximum variance, conditioned on being ?/k-well-correlated with an input seed set s; and that each of these k semisupervised eigenvectors can be computed quickly as the solution to a system of linear equations. From a technical perspective, the work most closely related to ours is that of Mahoney et al. [14]. The original algorithm of Mahoney et al. [14] introduced a methodology to construct a locally-biased version of the leading nontrivial eigenvector of a graph Laplacian and showed (theoretically and empirically in a social network analysis application) that the resulting vector could be used to partition a graph in a locally-biased manner. From this perspective, our extension incorporates a natural orthogonality constraint that successive vectors need to be orthogonal to previous vectors. Subsequent to the work of [14], [15] applied the algorithm of [14] to the problem of finding locally-biased cuts in a computer vision application. Similar ideas have also been applied somewhat differently. For example, [2] use locally-biased random walks, e.g., short random walks starting from a small seed set of nodes, to find clusters and communities in graphs arising in Internet advertising applications; [13] used locally-biased random walks to characterize the local and global clustering structure of a wide range of social and information networks; [11] developed the Spectral Graph Transducer (SGT), that performs transductive learning via spectral graph partitioning. 
The objectives in both [11] and [14] are constrained eigenvalue problems that can be solved by finding the smallest eigenvalue of an asymmetric generalized eigenvalue problem, but in practice this procedure can be highly unstable [8]. The SGT reduces the instabilities by performing all calculations in a subspace spanned by the d smallest eigenvectors of the graph Laplacian, whereas [14] perform a binary search, exploiting the monotonic relationship between a control parameter and the corresponding Lagrange multiplier.

In parallel, [3] and a large body of subsequent work including [6] used eigenvectors of the graph Laplacian to perform dimensionality reduction and data representation, in unsupervised and semi-supervised settings. Many of these methods have a natural interpretation in terms of kernel-based learning [18]. Many of these diffusion-based spectral methods also have a natural interpretation in terms of spectral ranking [21]. "Topic sensitive" and "personalized" versions of these spectral ranking methods have also been studied [9, 10]; and these were the motivation for diffusion-based methods to find locally-biased clusters in large graphs [19, 1, 14]. Our optimization ansatz is a generalization of the linear equation formulation of the PageRank procedure [17, 14, 21], and the solution involves Laplacian-based linear equation solving, which has been suggested as a primitive of more general interest in large-scale data analysis [20]. Finally, the form of our optimization problem has similarities to other work in computer vision applications: e.g., [23] and [7] find good conductance clusters subject to a set of linear constraints.
The first nontrivial eigenvector v2 is the solution to the problem G LOBAL S PECTRAL that is presented on the left of Figure 1. Equivalently, although G LOBAL S PEC TRAL is a non-convex optimization problem, strong duality holds for it and it?s solution may be computed as v2 , the leading nontrivial generalized eigenvector of LG . The next eigenvector v3 is the solution to G LOBAL S PECTRAL, augmented with the constraint that xT DG v2 = 0; and in general the tth generalized eigenvector of LG is the solution to G LOBAL S PECTRAL, augmented with the constraints that xT DG vi = 0, for i ? {2, . . . , t ? 1}. Clearly, this set of constraints and the constraint xT DG 1 = 0 can be written as xT DG Q = 0, where 0 is a (t ? 1)-dimensional all-zeros vector, and where Q is an n ? (t ? 1) orthogonal matrix whose ith column equals vi (where v1 = 1, the all-ones vector, is the first column of Q). Also presented in Figure 1 is L OCAL S PECTRAL, which includes a constraint requiring the solution to be well-correlated with an input seed set. This L OCAL S PECTRAL optimization problem was introduced in [14], where it was shown that the solution to L OCAL S PECTRAL may be interpreted as a locally-biased version of the second eigenvector of the Laplacian. In particular, although L OCAL S PECTRAL is not convex, it?s solution can be computed efficiently as the solution to a set of linear equations that generalize the popular Personalized PageRank procedure; in addition, by performing a sweep cut and appealing to a variant of Cheeger?s inequality, this locally-biased eigenvector can be used to perform locally-biased spectral graph partitioning [14]. 3.2 Our Main Algorithm We will formulate the problem of computing semi-supervised vectors in terms of a primitive optimization problem of independent interest. Consider the G ENERALIZED L OCAL S PECTRAL optimization problem, as shown in Figure 1. For this problem, we are given a graph G = (V, E), with associated Laplacian matrix LG and diagonal degree matrix DG ; an indicator vector s of a small 3 G LOBAL S PECTRAL T minimize x LG x s.t xT DG x = 1 L OCAL S PECTRAL T minimize x LG x s.t xT DG x = 1 xT DG 1 = 0 xT D G 1 = 0 ? xT D G s ? ? G ENERALIZED L OCAL S PECTRAL minimize xT LG x s.t xT DG x = 1 xT DG Q = 0 ? xT DG s ? ? Figure 1: Left: The usual G LOBAL S PECTRAL partitioning optimization problem; the vector achieving the optimal solution is v2 , the leading nontrivial generalized eigenvector of LG with respect to DG . Middle: The L OCAL S PECTRAL optimization problem, which was originally introduced in [14]; for ? = 0, this coincides with the usual global spectral objective, while for ? > 0, this produces solutions that are biased toward the seed vector s. Right: The G ENERALIZED L OCAL S PECTRAL optimization problem we introduce that includes both the locality constraint and a more general orthogonality constraint. Our main algorithm for computing semi-supervised eigenvectors will iteratively compute the solution to G ENERALIZED L OCAL S PECTRAL for a sequence of Q matrices. In all three cases, the optimization variable is x ? Rn . ?seed set? of nodes; a correlation parameter ? ? [0, 1]; and an n?? constraint matrix Q that may be assumed to be an orthogonal matrix. We will assume (without loss of generality) that s is properly normalized and orthogonalized so that sT DG s = 1 and sT DG 1 = 0. 
While s can be a general unit vector orthogonal to 1, it may be helpful to think of s as the indicator vector of one or more vertices in V , corresponding to the target region of the graph. In words, the problem G ENERALIZED L OCAL S PECTRAL asks us to find a vector x ? Rn that minimizes the variance xT LG x subject ? to several constraints: that x is unit length; that x is orthogonal to the span of Q; and that x is ?-well-correlated with the input seed set vector s. In our application of G ENERALIZED L OCAL S PECTRAL to the computation of semi-supervised eigenvectors, we will iteratively compute the solution to G ENERALIZED L OCAL S PECTRAL, updating Q to contain the already-computed semi-supervised eigenvectors. That is, to compute the first semi-supervised eigenvector, we let Q = 1, i.e., the n-dimensional all-ones vector, which is the trivial eigenvector of LG , in which case Q is an n ? 1 matrix; and to compute each subsequent semi-supervised eigenvector, we let the columns of Q consist of 1 and the other semi-supervised eigenvectors found in each of the previous iterations. To show that G ENERALIZED L OCAL S PECTRAL is efficiently-solvable, note that it is a quadratic program with only one quadratic constraint and one linear equality constraint. In order to remove the equality constraint, which will simplify the problem, let?s change variables by defining the n?(n??) matrix F as {x : QT DG x = 0} = {x : x = F y}. That is, F is a span for the null space of QT ; and we will take F to be an orthogonal matrix. Then, with respect to the y variable, G ENERALIZED L OCAL S PECTRAL becomes minimize y T F T LG F y y (1) subject to y T F T DG F y = 1, ? y T F T DG s ? ?. In terms of the variable x, the solution to this optimization problem is of the form + x? = cF F T (LG ? ?DG ) F F T DG s + = c F F T (LG ? ?DG ) F F T DG s, (2) ? for a normalization constant c ? (0, ?) and for some ? that depends on ?. The second line follows from the first since F is an n ? (n ? ?) orthogonal matrix. This so-called ?S-procedure? is described in greater detail in Chapter 5 and Appendix B of [4]. The significance of this is that, although it is a non-convex optimization problem, the G ENERALIZED L OCAL S PECTRAL problem can be solved by solving a linear equation, in the form given in Eqn. (2). Returning to our problem of computing semi-supervised eigenvectors, recall that, in addition to the input for the G ENERALIZED L OCAL S PECTRAL problem, we need to specify a positive integer k that indicates the number of vectors to be computed. In the simplest case, we would assume that 4 we would like the correlation p to be ?evenly distributed? across all k vectors, in which case we will require that each vector is ?/k-well-correlated with the input seed set vector s; but this assumption can easily be relaxed, and thus Algorithm 1 is formulated more generally as taking a k-dimensional vector ? = [?1 , . . . , ?k ]T of correlation coefficients as input. To compute the first semi-supervised eigenvector, we will let Q = 1, the all-ones vector, in which case the first nontrivial semi-supervised eigenvector is + x?1 = c (LG ? ?1 DG ) DG s, (3) where ?1 is chosen to saturate the part of the correlation constraint along the first direction. (Note that the projections F F T from Eqn. (2) are not present in Eqn. (3) since by design sT DG 1 = 0.) 
That is, to find the correct setting of ?1 , it suffices to perform a binary search over the possible values of ?1 in the interval (?vol(G), ?2 (G)) until the correlation constraint is satisfied, that is, until (sT DG x)2 is sufficiently close to ?21 , see [8, 14]. To compute subsequent semi-supervised eigenvectors, i.e., at steps t = 2, . . . , k if one ultimately wants a total of k semi-supervised eigenvectors, then one lets Q be the n ? (t ? 1) matrix with first column equal to 1 and with j th column, for i = 2, . . . , t ? 1, equal to x?j?1 (where we emphasize that x?j?1 is a vector not an element of a vector). That is, Q is of the form Q = [1, x?1 , . . . , x?t?1 ], where x?i are successive semi-supervised eigenvectors, and the projection matrix F F T is of the form F F T = I ? DG Q(QT DG DG Q)?1 QT DG , due to the degree-weighted inner norm. Then, by Eqn. (2), the tth semi-supervised eigenvector takes the form + x?t = c F F T (LG ? ?t DG )F F T DG s. (4) Algorithm 1 Semi-supervised eigenvectors Input: LG , DG , s, ? = [?1 , . . . , ?k ]T ,  Require: sT DG 1 = 0, sT DG s = 1, ?T 1 ? 1 1: Q = [1] 2: for t = 1 to k do 3: F F T ? I ? DG Q(QT DG DG Q)?1 QT DG 4: > ? ?2 where F F T LG F F T v2 = ?2 F F T DG F F T v2 5: ? ? ?vol(G) 6: repeat 7: ?t ? (? + >)/2 (Binary search over ?t ) 8: xt ? (F F T (LG ? ?t DG )F F T )+ F F T DG s 9: Normalize xt such that xTt DG xt = 1 10: if (xTt DG s)2 > ?t then ? ? ?t else > ? ?t end if 11: until k(xTt DG s)2 ? ?t k ?  or k(? + >)/2 ? ?t k ?  12: Augment Q with x?t by letting Q = [Q, x?t ]. 13: end for In more detail, Algorithm 1 presents pseudo-code for our main algorithm for computing semisupervised eigenvectors. Several things should be noted about our implementation. First, note that we implicitly compute the projection matrix F F T . Second, a na??ve approach to Eqn. (2) does not immediately lead to an efficient solution, since DG s will not be in the span of (F F T (LG ? ?DG )F F T ), thus leading to a large residual. By changing variables so that x = F F T y, the solution becomes x? ? F F T (F F T (LG ? ?DG )F F T )+ F F T DG s. Since F F T is a projection matrix, this expression is equivalent to x? ? (F F T (LG ? ?DG )F F T )+ F F T DG s. Third, we exploit that F F T (LG ? ?i DG )F F T is an SPSD matrix, and we apply the conjugate gradient method, rather than computing the explicit pseudoinverse. That is, in the implementation we never represent the dense matrix F F T , but instead we treat it as an operator and we simply evaluate the result of applying a vector to it on either side. Fourth, we use that ?2 can never decrease (here we refer to ?2 as the smallest non-zero eigenvalue of the modified matrix), so we only recalculate the upper bound for the binary search when an iteration saturates without satisfying k(xTt DG s)2 ? ?t k ? . In case of saturation one can for instance recalculate ?2 iteratively by using the inverse iteration T T + T T k method, v2k+1 ? (F F T LG F F T ? ?est 2 F F DG F F ) F F DG F F v2 , and normalizing such k+1 T k+1 that (v2 ) v2 = 1. 5 4 Illustrative Empirical Results In this section, we will provide a detailed empirical evaluation of our method of semi-supervised eigenvectors and how they can be used for locally-biased machine learning. Our goal will be twofold: first, to illustrate how the ?knobs? of our method work; and second, to illustrate the usefulness of the method in a real application. To do so, we will consider: ? Toy data. 
4 Illustrative Empirical Results

In this section, we will provide a detailed empirical evaluation of our method of semi-supervised eigenvectors and how they can be used for locally-biased machine learning. Our goal will be twofold: first, to illustrate how the "knobs" of our method work; and second, to illustrate the usefulness of the method in a real application. To do so, we will consider:

• Toy data. In Section 4.1, we will consider one-dimensional examples of the popular "small world" model [22]. This is a parameterized family of models that interpolates between low-dimensional grids and random graphs; and, as such, it will allow us to illustrate the behavior of our method and its various parameters in a controlled setting.

• Handwritten image data. In Section 4.2, we will consider the data from the MNIST digit data set [12]. These data have been widely studied in machine learning and related areas and they have substantial "local heterogeneity"; thus these data will allow us to illustrate how our method may be used to perform locally-biased versions of common machine learning tasks such as smoothing, clustering, and kernel construction.

4.1 Small-world Data

To illustrate how the "knobs" of our method work, and in particular how κ and γ interplay, we consider data constructed from the so-called small-world model. To demonstrate how semi-supervised eigenvectors can focus on specific target regions of a data graph to capture the slowest modes of local variation, we plot semi-supervised eigenvectors around illustrations of (non-rewired and rewired) realizations of the small-world graph; see Figure 2.

[Figure 2 graphic, four panels:
(a) Global eigenvectors: p = 0; λ_2 = 0.000011, λ_3 = 0.000011, λ_4 = 0.000046, λ_5 = 0.000046.
(b) Global eigenvectors: p = 0.01; λ_2 = 0.000149, λ_3 = 0.000274, λ_4 = 0.000315, λ_5 = 0.000489.
(c) Semi-supervised eigenvectors: p = 0.01, κ = 0.005; γ_1 = 0.000047, γ_2 = 0.000052, γ_3 = −0.000000, γ_4 = −0.000000.
(d) Semi-supervised eigenvectors: p = 0.01, κ = 0.05; γ_1 = −0.004367, γ_2 = −0.001778, γ_3 = −0.001665, γ_4 = −0.000822.]

Figure 2: In each case (a-d), the data consist of 3600 nodes, each connected to its 8 nearest neighbors. In the center of each subfigure, we show the nodes (blue) and edges (black and light gray are the local edges, and blue are the randomly-rewired edges). In each subfigure, we wrap a plot (black x-axis and gray background) visualizing the 4 smallest semi-supervised eigenvectors, allowing us to see the effect of random edges (different values of rewiring probability p) and the degree of localization (different values of κ). Eigenvectors are color coded as blue, red, yellow, and green, starting with the one having the smallest eigenvalue. See the main text for more details.
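Before walking through the panels, note that a comparable toy setup can be reproduced in a few lines. This is a hedged sketch: we assume the rewiring convention of networkx's generator is close enough to the model used here, and the parameter values simply mirror the figure:

    import numpy as np
    import networkx as nx
    from scipy.linalg import eigh

    # Ring of 3600 nodes, each joined to its 8 nearest neighbors,
    # with rewiring probability p as in Figure 2(b).
    G = nx.watts_strogatz_graph(n=3600, k=8, p=0.01, seed=0)
    A = nx.to_numpy_array(G)
    D = np.diag(A.sum(axis=1))
    L = D - A                      # combinatorial graph Laplacian

    # Global eigenvectors solve L v = lambda D v; skipping the trivial
    # all-ones vector, columns 1..4 are the slowest modes of variation.
    lam, V = eigh(L, D)
    global_modes = V[:, 1:5]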
In Figure 2.a, we show a graph with no randomly-rewired edges (p = 0) and a locality parameter κ such that the global eigenvectors are obtained. This yields a symmetric graph with eigenvectors corresponding to orthogonal sinusoids, i.e., for all eigenvectors except the all-ones one with eigenvalue 0, the algebraic multiplicity is 2: the first two capture the slowest mode of variation and correspond to a sine and cosine with equal random phase shift (rotational ambiguity). In Figure 2.b, random edges have been added with probability p = 0.01 and the locality parameter κ is still chosen such that the global eigenvectors of the rewired graph are obtained. In particular, note the small kinks in the eigenvectors at the locations of the randomly added edges. Since the graph is no longer symmetric, all of the visualized eigenvectors have algebraic multiplicity 1. Moreover, note the slow mode of variation in the interval at the top left; a normalized cut based on the leading global eigenvector would extract this region, since the remainder of the ring is more well-connected due to the degree of rewiring.

In Figure 2.c, we see the same graph realization as in Figure 2.b, except that the semi-supervised eigenvectors have a seed node at the top of the circle and the correlation parameter κ_t = 0.005. Note that, like the global eigenvectors, the local approach produces modes of increasing variation. In addition, note that the neighborhood around "11 o'clock" contains more mass, when compared with Figure 2.b; the reason for this is that this region is well-connected with the seed via a randomly added edge. Above the visualization we also show the γ_t that saturates κ_t, i.e., γ_t is the Lagrange multiplier that defines the effective correlation κ_t. Not shown is that if we kept reducing κ, then γ_t would tend towards λ_{t+1}, and the respective semi-supervised eigenvector would tend towards the global eigenvector. Finally, in Figure 2.d, the desired correlation is increased to κ = 0.05 (thus decreasing the value of γ_t), making the different modes of variation more localized in the neighborhood of the seed. It should be clear that, in addition to being determined by the locality parameter, we can think of κ as a regularizer biasing the global eigenvectors towards the region near the seed set.

4.2 MNIST Digit Data

We now demonstrate the semi-supervised eigenvectors as a feature extraction preprocessing step in a machine learning setting. We consider the well-studied MNIST dataset containing 60000 training digits and 10000 test digits ranging from 0 to 9. We construct the complete 70000 × 70000 k-NN graph with k = 10 and with edge weights given by w_ij = exp(−(4/σ_i²) ‖x_i − x_j‖²), where σ_i is the Euclidean distance from x_i to its nearest neighbor, and we define the graph Laplacian in the usual way. We evaluate the semi-supervised eigenvectors in a transductive learning setting by disregarding the majority of labels in the entire training data. We then use a few samples from each class to seed our semi-supervised eigenvectors, and a few others to train a downstream classification algorithm. Here we choose to apply the SGT of [11], for two main reasons. First, the transductive classifier is inherently designed to work on a subset of global eigenvectors of the graph Laplacian, making it ideal for validating that the localized basis constructed by our semi-supervised eigenvectors can be more informative when we are only interested in the "local heterogeneity" near a seed set. Second, using the SGT based on global eigenvectors is a good point of comparison, because we are only interested in the effect of our subspace representation. (If we used one type of classifier in the local setting, and another in the global, the classification accuracy that we measure would obviously be biased.) As in [11], we normalize the spectrum of both global and semi-supervised eigenvectors by replacing the eigenvalues with some monotonically increasing function. We use λ_i = i²/k², i.e., focusing on ranking among smallest cuts; see [5]. Furthermore, we fix the regularization parameter of the SGT to c = 3200, and for simplicity we fix γ = 0 for all semi-supervised eigenvectors, implicitly defining the effective κ = [κ_1, ..., κ_k]^T. Clearly, other correlation distributions and values of κ may yield subspaces with even better discriminative properties.¹

¹A thorough analysis regarding the importance of this parameter will appear in the journal version.
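The graph construction just described can be sketched as follows, assuming the self-tuned bandwidth is exactly the stated σ_i; the helper name and the max-symmetrization choice are our own, not taken from the paper:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from scipy.sparse import csr_matrix, diags

    def knn_gaussian_laplacian(X, k=10):
        # k-NN graph with self-tuned Gaussian weights
        # w_ij = exp(-(4 / sigma_i^2) * ||x_i - x_j||^2),
        # sigma_i = distance from x_i to its nearest neighbor.
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        dist, idx = nn.kneighbors(X)      # column 0 is the point itself
        sigma = dist[:, 1]                # distance to the nearest neighbor
        n = X.shape[0]
        rows = np.repeat(np.arange(n), k)
        cols = idx[:, 1:].ravel()
        w = np.exp(-(4.0 / sigma[rows] ** 2) * dist[:, 1:].ravel() ** 2)
        W = csr_matrix((w, (rows, cols)), shape=(n, n))
        W = W.maximum(W.T)                # symmetrize the directed k-NN graph
        D = diags(np.asarray(W.sum(axis=1)).ravel())
        return D - W                      # graph Laplacian L = D - W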
Table 1: Classification error for the SGT based on semi-supervised and global eigenvectors, respectively. The first column from the left encodes the configuration, e.g., 1:10 is interpreted as 1 seed and 10 training samples from each class (a total of 22 samples; for the global approach these are all used for training). When the seed is well determined and the number of training samples moderate (50:500), a single semi-supervised eigenvector is sufficient, whereas for less data we benefit from using multiple semi-supervised eigenvectors. All experiments have been repeated 10 times.

                    #Semi-supervised eigenvectors for SGT    #Global eigenvectors for SGT
    Labeled points   1     2     4     6     8     10         1     5     10    15    20    25
    1:1             0.39  0.39  0.38  0.38  0.38  0.36       0.50  0.48  0.36  0.27  0.27  0.19
    1:10            0.30  0.31  0.25  0.23  0.19  0.15       0.49  0.36  0.09  0.08  0.06  0.06
    5:50            0.12  0.15  0.09  0.08  0.07  0.06       0.49  0.09  0.08  0.07  0.05  0.04
    10:100          0.09  0.10  0.07  0.06  0.05  0.05       0.49  0.08  0.07  0.06  0.04  0.04
    50:500          0.03  0.03  0.03  0.03  0.03  0.03       0.49  0.10  0.07  0.06  0.04  0.04

Here, we consider the task of discriminating between fours and nines, as these two classes tend to overlap more than other combinations. (A closed four usually resembles a nine more than an "open" four does.) Hence, we expect localization on low-order global eigenvectors, meaning that class separation will not be evident in the leading global eigenvector, but instead will be "buried" further down the spectrum. Thus, this will illustrate how semi-supervised eigenvectors can represent relevant heterogeneities in a local subspace of low dimensionality. Table 1 summarizes our classification results based on semi-supervised and global eigenvectors, respectively. Finally, Figures 3 and 4 illustrate two realizations for the 1:10 configuration, where the training samples are fixed, but where we vary the seed nodes, to demonstrate the influence of the seed. See the captions of these figures for further details.

[Figure 3 graphic: seed sets s+ and s−, training sets l+ and l−, pairwise plots ("1 vs. 2" through "4 vs. 5") of the test data spanned by the first 5 semi-supervised eigenvectors, and curves of classification error and unexplained correlation versus the number of semi-supervised eigenvectors (1-15).]

Figure 3: Left: Shows a subset of the classification results for the SGT based on 5 semi-supervised eigenvectors seeded in s+ and s−, and trained using samples l+ and l−. Misclassifications are marked with black frames. Right: Visualizes all test data spanned by the first 5 semi-supervised eigenvectors, by plotting each component as a function of the others. Red (blue) points correspond to 4 (9), whereas green points correspond to the remaining digits. As the seed nodes are good representatives, we note that the eigenvectors provide a good class separation. We also plot the error as a function of local dimensionality, as well as the unexplained correlation, i.e., the initial components explain the majority of the correlation with the seed (effect of γ = 0). The particular realization based on the leading 5 semi-supervised eigenvectors yields an error of ≈ 0.03 (dashed circle).

[Figure 4 graphic: same layout as Figure 3, for a different choice of seed nodes.]
Figure 4: See the general description in Figure 3. Here we illustrate an instance where s+ shares many similarities with s−, i.e., s+ is on the boundary of the two classes. This particular realization achieves a classification error of ≈ 0.30 (dashed circle). In this constellation we first discover localization on low-order semi-supervised eigenvectors (≈ 12 eigenvectors), which is comparable to the error based on global eigenvectors (see Table 1); i.e., further down the spectrum we recover from the bad seed and pick up the relevant mode of variation.

In summary: We introduced the concept of semi-supervised eigenvectors that are biased towards local regions of interest in a large data graph. We demonstrated the feasibility on a well-studied dataset and found that our approach leads to more compact subspace representations by extracting desired local heterogeneities. Moreover, the algorithm is scalable, as the eigenvectors are computed by the solution to a sparse system of linear equations, preserving the low O(m) space complexity. Finally, we foresee that the approach will prove useful in a wide range of data analysis fields, due to the algorithm's speed, simplicity, and stability.

References

[1] R. Andersen, F. R. K. Chung, and K. Lang. Local graph partitioning using PageRank vectors. In FOCS '06: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 475-486, 2006.
[2] R. Andersen and K. Lang. Communities from seed sets. In WWW '06: Proceedings of the 15th International Conference on World Wide Web, pages 223-232, 2006.
[3] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, 2003.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
[5] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In Becker, editor, NIPS 2002, volume 15, pages 585-592, Cambridge, MA, USA, 2003.
[6] R. R. Coifman, S. Lafon, A. B. Lee, M. Maggioni, B. Nadler, F. Warner, and S. W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition in data: Diffusion maps. Proc. Natl. Acad. Sci. USA, 102(21):7426-7431, 2005.
[7] A. P. Eriksson, C. Olsson, and F. Kahl. Normalized cuts revisited: A reformulation for segmentation with linear grouping constraints. In Proceedings of the 11th International Conference on Computer Vision, pages 1-8, 2007.
[8] W. Gander, G. H. Golub, and U. von Matt. A constrained eigenvalue problem. Linear Algebra and its Applications, 114/115:815-839, 1989.
[9] T. H. Haveliwala. Topic-sensitive PageRank: A context-sensitive ranking algorithm for web search. IEEE Transactions on Knowledge and Data Engineering, 15(4):784-796, 2003.
[10] G. Jeh and J. Widom. Scaling personalized web search. In WWW '03: Proceedings of the 12th International Conference on World Wide Web, pages 271-279, 2003.
[11] T. Joachims. Transductive learning via spectral graph partitioning. In Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003), 2003.
[12] Y. LeCun and C. Cortes. The MNIST database of handwritten digits.
[13] J. Leskovec, K. J. Lang, A. Dasgupta, and M. W. Mahoney. Statistical properties of community structure in large social and information networks. In WWW '08: Proceedings of the 17th International Conference on World Wide Web, pages 695-704, 2008.
[14] M. W. Mahoney, L. Orecchia, and N. K. Vishnoi. A local spectral method for graphs: with applications to improving graph partitions and exploring data graphs locally. Technical report, 2009. Preprint: arXiv:0912.0681.
[15] S. Maji, N. K. Vishnoi, and J. Malik. Biased normalized cuts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2057-2064, 2011.
[16] K. A. Norman, S. M. Polyn, G. J. Detre, and J. V. Haxby. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9):424-430, 2006.
[17] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
[18] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
[19] D. A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In STOC '04: Proceedings of the 36th Annual ACM Symposium on Theory of Computing, pages 81-90, 2004.
[20] S.-H. Teng. The Laplacian paradigm: Emerging algorithms for massive graphs. In Proceedings of the 7th Annual Conference on Theory and Applications of Models of Computation, pages 2-14, 2010.
[21] S. Vigna. Spectral ranking. Technical report, 2009. Preprint: arXiv:0912.0238.
[22] D. J. Watts and S. H. Strogatz. Collective dynamics of small-world networks. Nature, 393:440-442, 1998.
[23] S. X. Yu and J. Shi. Grouping with bias. In Advances in Neural Information Processing Systems 14: Proceedings of the 2001 Conference, pages 1327-1334, 2002.
Feature-aware Label Space Dimension Reduction for Multi-label Classification

Hsuan-Tien Lin
Department of Computer Science & Information Engineering, National Taiwan University
[email protected]

Yao-Nan Chen
Department of Computer Science & Information Engineering, National Taiwan University
[email protected]

Abstract

Label space dimension reduction (LSDR) is an efficient and effective paradigm for multi-label classification with many classes. Existing approaches to LSDR, such as compressive sensing and principal label space transformation, exploit only the label part of the dataset, but not the feature part. In this paper, we propose a novel approach to LSDR that considers both the label and the feature parts. The approach, called conditional principal label space transformation, is based on minimizing an upper bound of the popular Hamming loss. The minimization step of the approach can be carried out efficiently by a simple use of singular value decomposition. In addition, the approach can be extended to a kernelized version that allows the use of sophisticated feature combinations to assist LSDR. The experimental results verify that the proposed approach is more effective than existing approaches to LSDR across many real-world datasets.

1 Introduction

The multi-label classification problem is an extension of the traditional multiclass classification problem. In contrast to the multiclass problem, which associates only a single label with each instance, the multi-label classification problem allows multiple labels for each instance. General solutions to this problem meet the demands of many real-world applications for classifying instances into multiple concepts, including categorization of text [1], scenes [2], genes [3] and so on. Given the wide range of such applications, the multi-label classification problem has been attracting much attention from researchers in machine learning [4, 5, 6].

Label space dimension reduction (LSDR) is a new paradigm in multi-label classification [4, 5]. By viewing the set of multiple labels as a high-dimensional vector in some label space, LSDR approaches use certain assumed or observed properties of the vectors to "compress" them. The compression step transforms the original multi-label classification problem (with many labels) into a small number of learning tasks. If the compression step, de-compression step, and learning steps can be efficient and effective, LSDR approaches can be useful for multi-label classification because of the appropriate use of joint information within the labels [5]. For instance, a representative LSDR approach is the principal label space transformation [PLST; 5]. PLST takes advantage of the key linear correlations between labels to build a small number of regression tasks.

LSDR approaches are homologous to feature space dimension reduction (FSDR) approaches and share similar advantages: saving computational power and storage without much loss of prediction accuracy, and improving performance by removing irrelevant, redundant, or noisy information [7]. There are two types of FSDR approaches: unsupervised and supervised. Unsupervised FSDR considers only feature information during reduction, while supervised FSDR considers the additional label information. A typical instance of unsupervised FSDR is principal component analysis [PCA; 8]. PCA transforms the features into a small number of uncorrelated variables.
On the other hand, the supervised FSDR approaches include supervised principal component analysis [9], sliced inverse regression [10], and kernel dimension reduction [11]. In particular, for multi-label classification, a leading supervised FSDR approach is canonical correlation analysis [CCA; 6, 12], which is based on linear projections in both the feature space and the label space. In general, well-tuned supervised FSDR approaches can perform better than unsupervised ones because of the additional label information.

PLST can be viewed as the counterpart of PCA in the label space [5] and is feature-unaware. That is, it considers only the label information during reduction. Motivated by the superiority of supervised FSDR over unsupervised approaches, we are interested in studying feature-aware LSDR: LSDR that considers feature information. In this paper, we propose a novel feature-aware LSDR approach, conditional principal label space transformation (CPLST). CPLST combines the concepts of PLST (LSDR) and CCA (supervised FSDR) and can improve PLST through the addition of feature information. We derive CPLST by minimizing an upper bound of the popular Hamming loss and show that CPLST can be accomplished by a simple use of singular value decomposition. Moreover, CPLST can be flexibly extended by the kernel trick with suitable regularization, thereby allowing the use of sophisticated feature information to assist LSDR. The experimental results on real-world datasets confirm that CPLST can reduce the number of learning tasks without loss of prediction performance. In particular, CPLST is usually better than PLST and other related LSDR approaches.

The rest of this paper is organized as follows. In Section 2, we define the multi-label classification problem and review related works. Then, in Section 3, we derive the proposed CPLST approach. Finally, we present the experimental results in Section 4 and conclude our study in Section 5.

2 Label Space Dimension Reduction

The multi-label classification problem aims at finding a classifier from the input vector x to a label set Y, where x ∈ R^d, Y ⊆ {1, 2, ..., K}, and K is the number of classes. The label set Y is often conveniently represented as a label vector y ∈ {0, 1}^K, where y[k] = 1 if and only if k ∈ Y. Given a dataset D = {(x_n, y_n)}_{n=1}^N, which contains N training examples (x_n, y_n), the multi-label classification algorithm uses D to find a classifier h : X → 2^{1,2,...,K}, anticipating that h predicts y well on any future (unseen) test example (x, y).

There are many existing algorithms for solving multi-label classification problems. The simplest and most intuitive one is binary relevance [BR; 13]. BR decomposes the original dataset D into K binary classification datasets, D_k = {(x_n, y_n[k])}_{n=1}^N, and learns K independent binary classifiers, each of which is learned from D_k and is responsible for predicting whether the label set Y includes label k. When K is small, BR is an efficient and effective baseline algorithm for multi-label classification. However, when K is large, the algorithm can be costly in training, prediction, and storage.
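For reference, a minimal sketch of BR with an illustrative base learner (logistic regression is our own choice here, not prescribed by the paper; constant label columns would need special handling):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def binary_relevance_fit(X, Y):
        # One independent binary classifier per label k, trained on D_k.
        return [LogisticRegression(max_iter=1000).fit(X, Y[:, k])
                for k in range(Y.shape[1])]

    def binary_relevance_predict(models, X):
        # Stack the K per-label predictions into label vectors.
        return np.column_stack([m.predict(X) for m in models])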
Facing the above challenges, LSDR (label space dimension reduction) offers a potential solution to these issues by compressing the K-dimensional label space before learning. LSDR transforms D into M datasets D_m = {(x_n, t_n[m])}_{n=1}^N, m = 1, 2, ..., M, with M ≪ K, such that the multi-label classification problem can be tackled efficiently without significant loss of prediction performance. In particular, LSDR involves solving, predicting with, and storing the models for only M, instead of K, learning tasks.

For instance, compressive sensing [CS; 4], a precursor of LSDR, is based on the assumption that the label set vector y is sparse (i.e., contains few ones), and "compresses" y to a shorter code vector t by projecting y on M random directions v_1, ..., v_M, where M ≪ K can be determined according to the assumed sparsity level. CS transforms the original multi-label classification problem into M regression tasks with D_m = {(x_n, t_n[m])}_{n=1}^N, where t_n[m] = v_m^T y_n. After obtaining a multi-output regressor r(x) for predicting the code vector t, CS decodes r(x) to the optimal label set vector by solving an optimization problem for each input instance x under the sparsity assumption, which can be time-consuming.

2.1 Principal Label Space Transformation

Principal label space transformation [PLST; 5] is another approach to LSDR. PLST first shifts each label set vector y to z = y − ȳ, where ȳ = (1/N) Σ_{n=1}^N y_n is the estimated mean of the label set vectors. Then, PLST takes a matrix V that linearly maps z to the code vector t by t = Vz. Unlike CS, however, PLST takes the principal directions v_m (to be introduced next) rather than random ones, and does not need to solve an optimization problem during decoding.

In particular, PLST considers only a matrix V with orthogonal rows, and decodes r(x) to the predicted labels by h(x) = round(V^T r(x) + ȳ), which is called round-based decoding. Tai and Lin [5] prove that when using round-based decoding and a linear transformation V that contains orthogonal rows, the common Hamming loss for evaluating multi-label classifiers [14] is bounded by

    Training Hamming Loss ≤ c ( ‖r(X) − ZV^T‖_F² + ‖Z − ZV^T V‖_F² ),    (1)

where r(X) contains r(x_n) as rows, Z contains z_n^T as rows, and c is a constant that depends on K and N. The matrix ZV^T then contains the code vectors t_n^T as rows. The bound can be divided into two parts. The first part is ‖r(X) − ZV^T‖_F², which represents the prediction error from the regressor r(x_n) to the desired code vectors t_n. The second part is ‖Z − ZV^T V‖_F², which stands for the encoding error for projecting z_n onto the closest vector in span{v_1, ..., v_M}, which is V^T t_n. PLST is derived by minimizing the encoding error [5]; it finds the optimal M by K matrix V by applying the singular value decomposition to Z and taking the M right-singular vectors v_m that correspond to the M largest singular values. The M right-singular vectors are called the principal directions for representing z_n.

PLST can be viewed as a linear case of the kernel dependency estimation (KDE) algorithm [15]. Nevertheless, the general nonlinear KDE must solve a computationally expensive pre-image problem for each test input x during the prediction phase. The linearity of PLST avoids the pre-image problem and enjoys efficient round-based decoding. In this paper, we will focus on the linear case in order to design efficient algorithms for LSDR during both the training and prediction phases.
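A minimal sketch of this procedure (our own naming; the decoder clips to {0, 1} for safety, which the bound itself does not require):

    import numpy as np

    def plst_fit(Y, M):
        # Top-M right-singular vectors of the mean-shifted label matrix Z
        # give the projection V (M x K); codes are t_n = V z_n.
        y_bar = Y.mean(axis=0)
        Z = Y - y_bar
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        V = Vt[:M]                 # principal directions as rows
        return V, Z @ V.T, y_bar

    def plst_decode(R, V, y_bar):
        # Round-based decoding: h(x) = round(V^T r(x) + y_bar).
        return np.rint(R @ V + y_bar).clip(0, 1).astype(int)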
2.2 Canonical Correlation Analysis

A related technique that we will consider in this paper is canonical correlation analysis [CCA; 6], a well-known statistical technique for analyzing the linear relationship between two multidimensional variables. Traditionally, CCA is regarded as an FSDR approach in multi-label classification [12]. In this subsection, we discuss whether CCA can also be viewed as an LSDR approach.

Formally, given an N by d matrix X with the n-th row being x_n^T (assumed to be zero mean) as well as an N by K matrix Z with the n-th row being z_n^T (assumed to be zero mean), CCA aims at finding two lists of basis vectors, (w_x^(1), w_x^(2), ...) and (w_z^(1), w_z^(2), ...), such that the correlation coefficient between the canonical variables c_x^(i) = X w_x^(i) and c_z^(i) = Z w_z^(i) is maximized, under the constraint that c_x^(i) is uncorrelated with all other c_x^(j) and c_z^(j) for 1 ≤ j < i. Kettenring [16] showed that CCA is equivalent to simultaneously solving the following constrained optimization problem:

    min_{W_x, W_z}  ‖X W_x^T − Z W_z^T‖_F²
    subject to      W_x X^T X W_x^T = W_z Z^T Z W_z^T = I,               (2)

where W_x is the matrix with i-th row (w_x^(i))^T, and W_z is the matrix with i-th row (w_z^(i))^T.

When CCA is considered in the context of multi-label classification, X is the matrix that contains the mean-shifted x_n^T as rows and Z is the shifted label matrix that contains the mean-shifted y_n^T as rows. Traditionally, CCA is used as a supervised FSDR approach that discards W_z and uses only W_x to project features onto a lower-dimensional space before learning with binary relevance [12, 17]. On the other hand, due to the symmetry between X and Z, we can also view CCA as an approach to feature-aware LSDR. In particular, CCA is equivalent to first seeking projection directions W_z for Z, and then performing a multi-output linear regression from x_n to W_z z_n, under the constraint W_x X^T X W_x^T = I, to obtain W_x. However, it has not been seriously studied how to use CCA for LSDR because W_z does not contain orthogonal rows. That is, unlike with PLST, round-based decoding cannot be used, and designing a suitable decoding scheme for CCA remains an ongoing research issue [18].

3 Proposed Algorithm

Inspired by CCA, we first design a variant that involves an appropriate decoding step. As suggested in Section 2.2, CCA is equivalent to finding a projection that minimizes the squared prediction error under the constraints W_x X^T X W_x^T = W_z Z^T Z W_z^T = I. If we drop the constraint on W_x in order to further decrease the squared prediction error, and change W_z Z^T Z W_z^T = I to W_z W_z^T = I in order to enable round-based decoding, we obtain

    min_{W_x, W_z}  ‖X W_x^T − Z W_z^T‖_F²   subject to   W_z W_z^T = I.   (3)

Problem (3) preserves the original objective function of CCA and specifies that W_z must contain orthogonal rows for applying round-based decoding. We call this algorithm orthogonally constrained CCA (OCCA). Then, using the Hamming loss bound (1) with V = W_z and r(X) = X W_x^T, OCCA minimizes ‖r(X) − Z W_z^T‖_F in (1), with the hope that the Hamming loss is also minimized. In other words, OCCA is employed to find the orthogonal directions V that are "easy to learn" (of low prediction error) in terms of linear regression.

For every fixed W_z = V in (3), the optimization problem for W_x is simply a linear regression from X to ZV^T. Then, the optimal W_x can be computed by the closed-form solution W_x^T = X^† ZV^T, where X^† is the pseudo-inverse of X. When the optimal W_x is inserted back into (3), the optimization problem becomes min_{VV^T = I} ‖X X^† ZV^T − ZV^T‖_F², which is equivalent to

    min_{VV^T = I}  tr( V Z^T (I − H) Z V^T ).                           (4)

The matrix H = X X^† is called the hat matrix for linear regression [19]. Similar to PLST, by the Eckart-Young theorem [20], we can solve problem (4) by taking the eigenvectors that correspond to the largest eigenvalues of Z^T (H − I) Z.
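A sketch of OCCA along these lines follows; the explicit hat matrix is only sensible for modest N, and the helper name is ours (CPLST, in the next subsection, differs only in the matrix being eigendecomposed):

    import numpy as np

    def occa_fit(X, Y, M):
        # Rows of V are the eigenvectors of Z^T (H - I) Z with the largest
        # eigenvalues, where H = X X^+ (problem (4)).
        y_bar = Y.mean(axis=0)
        Z = Y - y_bar
        H = X @ np.linalg.pinv(X)
        S = Z.T @ (H - np.eye(X.shape[0])) @ Z
        S = (S + S.T) / 2.0                  # symmetrize against round-off
        vals, vecs = np.linalg.eigh(S)       # ascending eigenvalues
        V = vecs[:, ::-1][:, :M].T           # top-M eigenvectors as rows
        return V, Z @ V.T, y_bar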
3.1 Conditional Principal Label Space Transformation

From the previous discussions, OCCA captures the input-output relation to minimize the prediction error in bound (1) with the "easy" directions. In contrast, PLST minimizes the encoding error in bound (1) with the "principal" directions. Now, we combine the benefits of the two algorithms, and minimize the two error terms simultaneously with "conditional principal" directions.

We begin by continuing our derivation of OCCA, which obtains r(x) by a linear regression from X to ZV^T. If we minimize both terms in (1) together with such a linear regression, the optimization problem becomes

    min_{W, VV^T = I}  c ( ‖XW − ZV^T‖_F² + ‖Z − ZV^T V‖_F² )
    ≡ min_{VV^T = I}  tr( V Z^T (I − H) Z V^T ) − tr( V^T V Z^T Z ) − tr( Z^T Z V^T V ) + tr( V^T V Z^T Z V^T V )    (5)
    ≡ max_{VV^T = I}  tr( V Z^T H Z V^T ).                               (6)

Problem (6) is derived by a cyclic permutation to eliminate a pair of V and V^T and combine the last three terms of (5). The problem can again be solved by taking the eigenvectors with the largest eigenvalues of Z^T H Z as the rows of V. Such a matrix V minimizes the prediction error term and the encoding error term simultaneously. The resulting algorithm is called conditional principal label space transformation (CPLST), as shown in Algorithm 1.

Algorithm 1 Conditional Principal Label Space Transformation
 1: Let Z = [z_1 ... z_N]^T with z_n = y_n − ȳ.
 2: Perform SVD on Z^T H Z to obtain Z^T H Z = A Σ B with σ_1 ≥ σ_2 ≥ ···. Let V_M contain the top M rows of B.
 3: Encode {(x_n, y_n)}_{n=1}^N to {(x_n, t_n)}_{n=1}^N, where t_n = V_M z_n.
 4: Learn a multi-dimensional regressor r(x) from {(x_n, t_n)}_{n=1}^N.
 5: Predict the label set of an instance x by h(x) = round(V_M^T r(x) + ȳ).

CPLST balances the prediction error with the encoding error and is closely related to bound (1). Moreover, in contrast with PLST, which uses the key unconditional correlations, CPLST is feature-aware and allows the capture of conditional correlations [14]. We summarize the three algorithms in Table 1, and we will compare them empirically in Section 4.

Table 1: Summary of three LSDR algorithms

    Algorithm   Matrix for SVD   LSDR              Relation to bound (1)
    PLST        Z^T Z            feature-unaware   minimizes the encoding error
    OCCA        Z^T (H − I) Z    feature-aware     minimizes the prediction error
    CPLST       Z^T H Z          feature-aware     minimizes both

The three algorithms are similar. They all operate with an SVD (or eigenvalue decomposition) on a K by K matrix. PLST focuses on the encoding error and does not consider the features during LSDR, i.e., it is feature-unaware. On the other hand, CPLST and OCCA are feature-aware approaches, which consider features during LSDR. When using linear regression as the multi-output regressor, CPLST simultaneously minimizes the two terms in bound (1), while OCCA minimizes only one term of the bound. In contrast to PLST, the two feature-aware approaches OCCA and CPLST must calculate the matrix H and are thus slower than PLST if the dimension d of the input space is large.
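A minimal end-to-end sketch of Algorithm 1, with ordinary least squares as the step-4 regressor (an illustrative choice; the paper's experiments also use other learners):

    import numpy as np

    def cplst_train_predict(X, Y, Xtest, M):
        # Sketch of Algorithm 1; the explicit hat matrix H = X X^+ is
        # only sensible for modest N and d.
        y_bar = Y.mean(axis=0)
        Z = Y - y_bar
        H = X @ np.linalg.pinv(X)
        S = Z.T @ H @ Z
        S = (S + S.T) / 2.0
        _, vecs = np.linalg.eigh(S)
        V = vecs[:, ::-1][:, :M].T                  # step 2: rows of V
        T = Z @ V.T                                 # step 3: codes t_n = V z_n
        W, *_ = np.linalg.lstsq(X, T, rcond=None)   # step 4: linear regression
        R = Xtest @ W                               # predicted codes r(x)
        return np.rint(R @ V + y_bar).clip(0, 1).astype(int)  # step 5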
3.2 Kernelization and Regularization

Kernelization (extending a linear model to a nonlinear one using the kernel trick [21]) and regularization are two important techniques in machine learning. The former expands the power of linear models, while the latter regularizes the complexity of the learning model. In this subsection, we show that kernelization and regularization can be applied to CPLST (and OCCA).

In Section 3.1, we derived CPLST by using linear regression as the underlying multi-output regression method. Next, we replace linear regression by its kernelized form with ℓ2 regularization, kernel ridge regression [22], as the underlying regression algorithm. Kernel ridge regression considers a feature mapping φ : X → F before performing regularized linear regression. According to φ, the kernel function k(x, x') = φ(x)^T φ(x') is defined as the inner product in the space F. When applying kernel ridge regression with a regularization parameter λ to map from X to ZV^T, if φ(x) can be explicitly computed, it is known that the closed-form solution is [22]

    W = Φ^T (λI + ΦΦ^T)^{-1} ZV^T = Φ^T (λI + K)^{-1} ZV^T,              (7)

where Φ is the matrix containing φ(x_n)^T as rows, and K is the matrix with K_ij = k(x_i, x_j) = φ(x_i)^T φ(x_j). That is, K = ΦΦ^T, and it is called the kernel matrix of X.

Now, we derive kernel-CPLST by inserting the optimal W into the Hamming loss bound (1). When substituting (7) into the loss bound (1) with r(X) = ΦW and letting Q = (λI + K)^{-1}, the minimization becomes

    min_{VV^T = I}  c ( ‖ΦΦ^T Q ZV^T − ZV^T‖_F² + ‖Z − ZV^T V‖_F² )
    ≡ min_{VV^T = I}  ‖K Q ZV^T − ZV^T‖_F² + ‖Z − ZV^T V‖_F²
    ≡ max_{VV^T = I}  tr( V Z^T (2KQ − QKKQ) Z V^T ).                    (8)

Notice that in equation (8), kernel-CPLST does not need to explicitly compute the matrix Φ; it only needs the kernel matrix K (which can be computed through the kernel function k). Therefore, a high- or even infinite-dimensional feature transform can be used to assist LSDR in kernel-CPLST through a suitable kernel function. Problem (8) can again be solved by considering the eigenvectors with the largest eigenvalues of Z^T (2KQ − QKKQ) Z as the rows of V.
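The kernelized fit can be sketched as follows, directly following equation (8); the function name is ours, and the symmetrization step is a numerical precaution rather than part of the derivation:

    import numpy as np

    def kernel_cplst_fit(K_mat, Y, M, lam=1.0):
        # Rows of V are the top-M eigenvectors of Z^T (2KQ - QKKQ) Z,
        # with Q = (lam*I + K)^{-1} as in Eqn. (8).
        N = K_mat.shape[0]
        y_bar = Y.mean(axis=0)
        Z = Y - y_bar
        Q = np.linalg.inv(lam * np.eye(N) + K_mat)
        KQ = K_mat @ Q
        S = Z.T @ (2 * KQ - Q @ K_mat @ K_mat @ Q) @ Z
        S = (S + S.T) / 2.0              # symmetrize against round-off
        vals, vecs = np.linalg.eigh(S)
        V = vecs[:, ::-1][:, :M].T
        return V, Z @ V.T, y_bar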
4 Experiment

In this section, we conduct experiments on eight real-world datasets, downloaded from Mulan [23], to validate the performance of CPLST and other LSDR approaches. Table 2 shows the number of labels of each dataset. Because kernel ridge regression, and hence kernel-CPLST, needs to invert an N by N matrix, we can only afford to conduct a fair comparison using mid-sized datasets. In each run of the experiment, we randomly sample 80% of the dataset for training and reserve the rest for testing. All the results are reported with the mean and the standard error over 100 different random runs.

Table 2: The number of labels of each dataset

    Dataset        bib.   cor.   emo.   enr.   gen.   med.   sce.   yea.
    # Labels (K)   159    374    6      53     27     45     6      14

We take PLST, OCCA, CPLST, and kernel-CPLST in our comparison. We do not include compressive sensing [13] in the comparison because earlier work [24] has shown that the algorithm is more sophisticated while being inferior to PLST. We conducted some side experiments on CCA [6] for LSDR (see Subsection 2.2) and found that it is at best comparable to OCCA. Given the space constraints, we decide to only report the results on OCCA. In addition to those LSDR approaches, we also consider a simple baseline approach [24], partial binary relevance (PBR). PBR randomly selects M labels from the original label set during training and only learns those M binary classifiers for prediction. For the other labels, PBR directly predicts −1 without any training, to match the sparsity assumption as exploited by compressive sensing [13].

4.1 Label Space Dimension Reduction with Linear Regression

In this subsection, we couple PBR, OCCA, PLST and CPLST with linear regression. The yeast dataset reveals clear differences between the four LSDR approaches and is hence taken for presentation here; similar differences have been observed on other datasets as well.

[Figure 1 graphic, four panels for the yeast dataset, each plotting PBR, OCCA, PLST, and CPLST against the number of dimensions: (a) Hamming loss, (b) encoding error ‖Z − ZV^T V‖², (c) prediction error ‖XW^T − ZV^T‖², (d) loss bound ‖XW^T − ZV^T‖² + ‖Z − ZV^T V‖².]

Figure 1: yeast: test results of LSDR algorithms when coupled with linear regression.

Figure 1(a) shows the test Hamming loss with respect to the possible M (labels) used. It is clear that CPLST is better than the other three approaches. PLST can reach similar performance to CPLST only at a larger M. The other two algorithms, OCCA and PBR, are both significantly worse than CPLST.

To understand the cause of the different performance, we plot the (test) encoding error ‖Z − ZV^T V‖_F², the prediction error ‖XW^T − ZV^T‖_F², and the loss bound (1) in Figure 1. Figure 1(b) shows the encoding error on the test set, which matches the design of PLST. Regardless of the approach used, the encoding error decreases to 0 when using all 14 dimensions, because the {v_m} can then span the whole label space. As expected, PLST achieves the lowest encoding error across every number of dimensions. CPLST partially minimizes the encoding error in its objective function, and hence also achieves a decent encoding error. On the other hand, OCCA is blind to, and hence worst at, the encoding error. In particular, its encoding error is even worse than that of the baseline PBR.

Figure 1(c) shows the prediction error ‖XW^T − ZV^T‖_F² on the test set, which matches the design of OCCA. First, OCCA indeed achieves the lowest prediction error across all numbers of dimensions. PLST, which is blind to the prediction error, reaches the highest prediction error, and is even worse than PBR. The results further reveal the trade-off between the encoding error and the prediction error: more efficient encodings of the label space are harder to predict. PLST takes the more efficient encoding to the extreme, and results in worse prediction error; OCCA, on the other hand, is better in terms of the prediction error, but leads to the least efficient encoding.

Figure 1(d) shows the scaled upper bound (1) of the Hamming loss, which equals the sum of the encoding error and the prediction error. CPLST is designed to knock down this bound, which explains its behavior in Figure 1(d) and echoes its superior performance in Figure 1(a). In fact, Figure 1(d) shows that the bound (1) is quite indicative of the performance differences in Figure 1(a). The results demonstrate that CPLST explores the trade-off between the encoding error and the prediction error in an optimal manner to reach the best performance for label space dimension reduction.
The results of PBR and OCCA are consistently inferior to PLST and CPLST across most of the datasets in our experiments [25] and are not reported here because of space constraints. The test Hamming loss achieved by PLST and CPLST on the other datasets, with different percentages of used labels, is reported in Table 3. In most datasets, CPLST is at least as effective as PLST; in bibtex, scene and yeast, CPLST performs significantly better than PLST. Note that in the medical and enron datasets, both PLST and CPLST overfit when using many dimensions. That is, the performance of both algorithms would be better when using fewer dimensions (than the full binary relevance, which is provably equivalent to either PLST or CPLST with M = K when using linear regression). These results demonstrate that LSDR approaches, like their feature space dimension reduction counterparts, can potentially help resolve the issue of overfitting.

Table 3: Test Hamming loss of PLST and CPLST with linear regression (those within one standard error of the lower one are in bold)

    Dataset   Algorithm   M = 20%K          40%               60%               80%               100%
    bibtex    PLST        0.0129 ± 0.0000   0.0125 ± 0.0000   0.0124 ± 0.0000   0.0123 ± 0.0000   0.0123 ± 0.0000
              CPLST       0.0127 ± 0.0000   0.0124 ± 0.0000   0.0123 ± 0.0000   0.0123 ± 0.0000   0.0123 ± 0.0000
    corel5k   PLST        0.0094 ± 0.0000   0.0094 ± 0.0000   0.0094 ± 0.0000   0.0094 ± 0.0000   0.0094 ± 0.0000
              CPLST       0.0094 ± 0.0000   0.0094 ± 0.0000   0.0094 ± 0.0000   0.0094 ± 0.0000   0.0094 ± 0.0000
    emotions  PLST        0.2207 ± 0.0020   0.2064 ± 0.0023   0.1982 ± 0.0022   0.2013 ± 0.0020   0.2040 ± 0.0022
              CPLST       0.2189 ± 0.0019   0.2059 ± 0.0022   0.1990 ± 0.0022   0.2015 ± 0.0021   0.2040 ± 0.0022
    enron     PLST        0.0728 ± 0.0004   0.0860 ± 0.0005   0.0946 ± 0.0006   0.1006 ± 0.0007   0.1028 ± 0.0007
              CPLST       0.0729 ± 0.0004   0.0864 ± 0.0005   0.0943 ± 0.0006   0.1006 ± 0.0007   0.1028 ± 0.0007
    genbase   PLST        0.0169 ± 0.0004   0.0040 ± 0.0002   0.0012 ± 0.0001   0.0009 ± 0.0001   0.0007 ± 0.0001
              CPLST       0.0168 ± 0.0004   0.0041 ± 0.0002   0.0012 ± 0.0001   0.0008 ± 0.0001   0.0007 ± 0.0001
    medical   PLST        0.0346 ± 0.0004   0.0407 ± 0.0005   0.0472 ± 0.0005   0.0490 ± 0.0005   0.0497 ± 0.0006
              CPLST       0.0346 ± 0.0004   0.0406 ± 0.0005   0.0471 ± 0.0005   0.0490 ± 0.0005   0.0497 ± 0.0006
    scene     PLST        0.1809 ± 0.0004   0.1718 ± 0.0006   0.1566 ± 0.0007   0.1321 ± 0.0008   0.1106 ± 0.0008
              CPLST       0.1744 ± 0.0004   0.1532 ± 0.0005   0.1349 ± 0.0005   0.1209 ± 0.0007   0.1106 ± 0.0008
    yeast     PLST        0.2150 ± 0.0008   0.2052 ± 0.0009   0.2033 ± 0.0009   0.2020 ± 0.0009   0.2022 ± 0.0009
              CPLST       0.2069 ± 0.0008   0.2041 ± 0.0009   0.2024 ± 0.0009   0.2020 ± 0.0009   0.2022 ± 0.0009

Table 4: Test Hamming loss of LSDR algorithms with M5P (those with the lowest mean are marked with *; those within one standard error of the lowest one are in bold)

    Dataset   Algorithm   M = 20%K           40%                60%                80%                100%
    bibtex    PLST        0.0130 ± 0.0001    0.0128 ± 0.0001*   0.0128 ± 0.0001    0.0127 ± 0.0001*   0.0127 ± 0.0001*
              CPLST       0.0129 ± 0.0001*   0.0128 ± 0.0001*   0.0127 ± 0.0001*   0.0127 ± 0.0001*   0.0127 ± 0.0001*
    corel5k   PLST        0.0094 ± 0.0000*   0.0094 ± 0.0000*   0.0094 ± 0.0000*   0.0094 ± 0.0000*   0.0094 ± 0.0000*
              CPLST       0.0094 ± 0.0000*   0.0094 ± 0.0000*   0.0094 ± 0.0000*   0.0094 ± 0.0000*   0.0094 ± 0.0000*
    emotions  PLST        0.2213 ± 0.0030    0.2109 ± 0.0030    0.2039 ± 0.0029    0.2051 ± 0.0029    0.2063 ± 0.0030
              CPLST       0.2209 ± 0.0031*   0.2085 ± 0.0032*   0.2004 ± 0.0031*   0.2020 ± 0.0031*   0.2046 ± 0.0031*
    enron     PLST        0.0490 ± 0.0002    0.0488 ± 0.0002*   0.0489 ± 0.0002*   0.0490 ± 0.0002*   0.0490 ± 0.0002*
              CPLST       0.0489 ± 0.0003*   0.0489 ± 0.0003    0.0490 ± 0.0003    0.0490 ± 0.0003*   0.0490 ± 0.0003*
    genbase   PLST        0.0215 ± 0.0004*   0.0202 ± 0.0004*   0.0195 ± 0.0003*   0.0194 ± 0.0003*   0.0194 ± 0.0003*
              CPLST       0.0215 ± 0.0004*   0.0202 ± 0.0004*   0.0195 ± 0.0003*   0.0195 ± 0.0003    0.0195 ± 0.0003
    medical   PLST        0.0127 ± 0.0002    0.0099 ± 0.0002*   0.0097 ± 0.0002    0.0097 ± 0.0002    0.0097 ± 0.0002
              CPLST       0.0126 ± 0.0002*   0.0099 ± 0.0002*   0.0096 ± 0.0002*   0.0096 ± 0.0002*   0.0096 ± 0.0002*
    scene     PLST        0.1802 ± 0.0005    0.1688 ± 0.0007    0.1540 ± 0.0008    0.1396 ± 0.0011    0.1281 ± 0.0008
              CPLST       0.1674 ± 0.0005    0.1538 ± 0.0006*   0.1428 ± 0.0007*   0.1289 ± 0.0007*   0.1268 ± 0.0008*
    yeast     PLST        0.2162 ± 0.0008    0.2082 ± 0.0009    0.2071 ± 0.0009    0.2064 ± 0.0009*   0.2067 ± 0.0009
              CPLST       0.2083 ± 0.0009*   0.2064 ± 0.0009*   0.2063 ± 0.0009*   0.2064 ± 0.0009*   0.2066 ± 0.0009*
4.2 Coupling Label Space Dimension Reduction with the M5P Decision Tree

CPLST is designed by assuming a specific regression method. Next, we demonstrate that the input-output relationship captured by CPLST is not restricted to coupling with linear regression, but can be effective for other regression methods in the learning stage (step 4 of Algorithm 1). We do so by coupling the LSDR approaches with the M5P decision tree [26], a nonlinear regression method. We take the implementation of M5P from WEKA [27] with the default parameter setting. The experimental results are shown in Table 4. The relations between PLST and CPLST when coupled with M5P are similar to the ones when coupled with linear regression. In particular, on the yeast, scene, and emotions datasets, CPLST outperforms PLST. The results demonstrate that the captured input-output relation is also effective for regression methods other than linear regression.

4.3 Label Space Dimension Reduction with Kernel Ridge Regression

In this subsection, we conduct experiments to demonstrate the performance of kernelization and regularization. For kernel-CPLST, we use the Gaussian kernel k(x_i, x_j) = exp(−γ ‖x_i − x_j‖²) during LSDR and take kernel ridge regression with the same kernel and the same regularization parameter as the underlying multi-output regression method. We also couple PLST with kernel ridge regression for a fair comparison. We select the Gaussian kernel parameter γ and the regularization parameter λ with a grid search on (log₂ γ, log₂ λ) using a 5-fold cross validation on the sum of the Hamming loss across all dimensions. The details of the grid search can be found in the Master's thesis of the first author [25]. When coupled with kernel ridge regression, the comparison between PLST and kernel-CPLST in terms of the Hamming loss is shown in Table 5.

Table 5: Test Hamming loss of LSDR algorithms with kernel ridge regression (those within one standard error of the lower one are in bold)

    Dataset   Algorithm      M = 20%K          40%               60%               80%               100%
    bibtex    PLST           0.0151 ± 0.0000   0.0151 ± 0.0000   0.0151 ± 0.0000   0.0151 ± 0.0000   0.0151 ± 0.0000
              kernel-CPLST   0.0127 ± 0.0000   0.0123 ± 0.0000   0.0121 ± 0.0000   0.0120 ± 0.0000   0.0120 ± 0.0000
    corel5k   PLST           0.0094 ± 0.0000   0.0094 ± 0.0000   0.0094 ± 0.0000   0.0094 ± 0.0000   0.0094 ± 0.0000
              kernel-CPLST   0.0092 ± 0.0000   0.0092 ± 0.0000   0.0092 ± 0.0000   0.0092 ± 0.0000   0.0092 ± 0.0000
    emotions  PLST           0.2218 ± 0.0020   0.2074 ± 0.0023   0.1983 ± 0.0026   0.2000 ± 0.0025   0.2002 ± 0.0025
              kernel-CPLST   0.2231 ± 0.0020   0.2071 ± 0.0024   0.1981 ± 0.0025   0.1973 ± 0.0027   0.1988 ± 0.0027
    enron     PLST           0.0460 ± 0.0002   0.0462 ± 0.0002   0.0466 ± 0.0002   0.0468 ± 0.0002   0.0469 ± 0.0002
              kernel-CPLST   0.0453 ± 0.0002   0.0454 ± 0.0002   0.0455 ± 0.0002   0.0455 ± 0.0002   0.0456 ± 0.0002
    genbase   PLST           0.0169 ± 0.0004   0.0039 ± 0.0002   0.0014 ± 0.0001   0.0010 ± 0.0001   0.0008 ± 0.0001
              kernel-CPLST   0.0170 ± 0.0004   0.0040 ± 0.0002   0.0013 ± 0.0001   0.0009 ± 0.0001   0.0008 ± 0.0001
    medical   PLST           0.0136 ± 0.0002   0.0106 ± 0.0002   0.0103 ± 0.0002   0.0102 ± 0.0002   0.0102 ± 0.0002
              kernel-CPLST   0.0131 ± 0.0002   0.0098 ± 0.0002   0.0096 ± 0.0002   0.0096 ± 0.0002   0.0096 ± 0.0002
    scene     PLST           0.1713 ± 0.0004   0.1468 ± 0.0006   0.1173 ± 0.0008   0.0932 ± 0.0011   0.0731 ± 0.0007
              kernel-CPLST   0.1733 ± 0.0004   0.1470 ± 0.0006   0.1179 ± 0.0007   0.0905 ± 0.0007   0.0717 ± 0.0007
    yeast     PLST           0.2030 ± 0.0008   0.1913 ± 0.0009   0.1892 ± 0.0009   0.1882 ± 0.0009   0.1881 ± 0.0009
              kernel-CPLST   0.2018 ± 0.0008   0.1904 ± 0.0009   0.1875 ± 0.0009   0.1869 ± 0.0009   0.1868 ± 0.0009
kernel-CPLST performs well for LSDR and outperforms the feature-unaware PLST in most cases. In particular, in five out of the eight datasets, kernel-CPLST is significantly better than PLST regardless of the number of dimensions used. In addition, in the medical and enron datasets, the overfitting problem is eliminated with regularization (and parameter selection), and hence kernel-CPLST not only performs better than PLST with kernel ridge regression, but is also better than the (unregularized) linear regression results in Table 3.

From the previous comparisons between CPLST and PLST, CPLST is at least as good as, and usually better than, PLST. The difference between CPLST and PLST is small but consistent, and does suggest that CPLST is a better choice for label space dimension reduction. The results provide practical insights on the two types of label correlation [14]: unconditional correlation (feature-unaware) and conditional correlation (feature-aware). The unconditional correlation, exploited by PLST and other LSDR algorithms, readily leads to promising performance in practice. On the other hand, there is room for some (albeit small) improvement when exploiting the conditional correlation properly, as CPLST does.

5 Conclusion

In this paper, we studied feature-aware label space dimension reduction (LSDR) approaches, which utilize the feature information during LSDR and can be viewed as the counterpart of supervised feature space dimension reduction. We proposed a novel feature-aware LSDR algorithm, conditional principal label space transformation (CPLST), which utilizes the key conditional correlations for dimension reduction. CPLST enjoys a theoretical guarantee in balancing between the prediction error and the encoding error when minimizing the Hamming loss bound. In addition, we extended CPLST to a kernelized version for capturing more sophisticated relations between features and labels. We conducted experiments comparing CPLST and its kernelized version with other LSDR approaches. The experimental results demonstrated that CPLST is the best among the LSDR approaches when coupled with linear regression or kernel ridge regression. In particular, CPLST is consistently better than its feature-unaware precursor, PLST. Moreover, the input-output relation captured by CPLST can be utilized by regression methods other than linear regression.

Acknowledgments

We thank the anonymous reviewers of the conference and members of the Computational Learning Laboratory at National Taiwan University for valuable suggestions. This work is partially supported by the National Science Council of Taiwan via grant NSC 101-2628-E-002-029-MY2.

References

[1] I. Katakis, G. Tsoumakas, and I. Vlahavas. Multilabel text classification for automated tag suggestion. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2008 Discovery Challenge, 2008.
[2] M. Boutell, J. Luo, X. Shen, and C. Brown. Learning multi-label scene classification. Pattern Recognition, 2004.
[3] A. Elisseeff and J. Weston. A kernel method for multi-labelled classification. In Advances in Neural Information Processing Systems 14, 2001.
[4] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In Advances in Neural Information Processing Systems 22, 2009.
[5] F. Tai and H.-T. Lin. Multi-label classification with principal label space transformation. Neural Computation, 2012.
[6] H. Hotelling. Relations between two sets of variates. Biometrika, 1936.
5 Conclusion

In this paper, we studied feature-aware label space dimension reduction (LSDR) approaches, which utilize the feature information during LSDR and can be viewed as the counterpart of supervised feature space dimension reduction. We proposed a novel feature-aware LSDR algorithm, conditional principal label space transformation (CPLST), which utilizes the key conditional correlations for dimension reduction. CPLST enjoys a theoretical guarantee in balancing between the prediction error and the encoding error when minimizing the Hamming loss bound. In addition, we extended CPLST to a kernelized version for capturing more sophisticated relations between features and labels. We conducted experiments comparing CPLST and its kernelized version with other LSDR approaches. The experimental results demonstrated that CPLST is the best among the LSDR approaches when coupled with linear regression or kernel ridge regression. In particular, CPLST is consistently better than its feature-unaware precursor, PLST. Moreover, the input-output relation captured by CPLST can be utilized by regression methods other than linear regression.

Acknowledgments

We thank the anonymous reviewers of the conference and members of the Computational Learning Laboratory at National Taiwan University for valuable suggestions. This work is partially supported by the National Science Council of Taiwan via grant NSC 101-2628-E-002-029-MY2.

References

[1] I. Katakis, G. Tsoumakas, and I. Vlahavas. Multilabel text classification for automated tag suggestion. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2008 Discovery Challenge, 2008.
[2] M. Boutell, J. Luo, X. Shen, and C. Brown. Learning multi-label scene classification. Pattern Recognition, 2004.
[3] A. Elisseeff and J. Weston. A kernel method for multi-labelled classification. In Advances in Neural Information Processing Systems 14, 2001.
[4] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In Advances in Neural Information Processing Systems 22, 2009.
[5] F. Tai and H.-T. Lin. Multi-label classification with principal label space transformation. Neural Computation, 2012.
[6] H. Hotelling. Relations between two sets of variates. Biometrika, 1936.
[7] M. Wall, A. Rechtsteiner, and L. Rocha. Singular value decomposition and principal component analysis. A Practical Approach to Microarray Data Analysis, 2003.
[8] I. Jolliffe. Principal Component Analysis. Springer, second edition, October 2002.
[9] E. Barshan, A. Ghodsi, Z. Azimifar, and M. Zolghadri Jahromi. Supervised principal component analysis: Visualization, classification and regression on subspaces and submanifolds. Pattern Recognition, 2011.
[10] K.-C. Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 1991.
[11] K. Fukumizu, F. Bach, and M. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 2004.
[12] L. Sun, S. Ji, and J. Ye. Canonical correlation analysis for multilabel classification: A least-squares formulation, extensions, and analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011.
[13] G. Tsoumakas, I. Katakis, and I. Vlahavas. Mining multi-label data. In Data Mining and Knowledge Discovery Handbook. Springer US, 2010.
[14] K. Dembczynski, W. Waegeman, W. Cheng, and E. Huellermeier. On label dependence and loss minimization in multi-label classification. Machine Learning, 2012.
[15] J. Weston, O. Chapelle, A. Elisseeff, B. Schoelkopf, and V. Vapnik. Kernel dependency estimation. In Advances in Neural Information Processing Systems 15, 2002.
[16] J. Kettenring. Canonical analysis of several sets of variables. Biometrika, 1971.
[17] S. Yu, K. Yu, V. Tresp, and H.-P. Kriegel. Multi-output regularized feature projection. IEEE Transactions on Knowledge and Data Engineering, 2006.
[18] Y. Zhang and J. Schneider. Multi-label output codes using canonical correlation analysis. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.
[19] D. Hoaglin and R. Welsch. The hat matrix in regression and ANOVA. The American Statistician, 1978.
[20] C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1936.
[21] B. Schoelkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, first edition, 2002.
[22] G. Saunders, A. Gammerman, and V. Vovk. Ridge regression learning algorithm in dual variables. In Proceedings of the Fifteenth International Conference on Machine Learning, 1998.
[23] G. Tsoumakas, E. Spyromitros-Xioufis, J. Vilcek, and I. Vlahavas. Mulan: A Java library for multi-label learning. Journal of Machine Learning Research, 2011.
[24] B. Datta. Numerical Linear Algebra and Applications, second edition. SIAM - Society for Industrial and Applied Mathematics, 2010.
[25] Y.-N. Chen. Feature-aware label space dimension reduction for multi-label classification problem. Master's thesis, National Taiwan University, 2012.
[26] Y. Wang and I. Witten. Induction of model trees for predicting continuous classes. In Poster Papers of the Ninth European Conference on Machine Learning, 1997.
[27] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. Witten. The WEKA data mining software: an update. SIGKDD Explorations Newsletter, 2009.
3D Object Detection and Viewpoint Estimation with a Deformable 3D Cuboid Model

Sven Dickinson, University of Toronto, [email protected]
Sanja Fidler, TTI Chicago, [email protected]
Raquel Urtasun, TTI Chicago, [email protected]

Abstract

This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the well-acclaimed deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. Our model reasons about face visibility patterns called aspects. We train the cuboid model jointly and discriminatively and share weights across all aspects to attain efficiency. Inference then entails sliding and rotating the box in 3D and scoring object hypotheses. While for inference we discretize the search space, the variables are continuous in our model. We demonstrate the effectiveness of our approach in indoor and outdoor scenarios, and show that our approach significantly outperforms the state-of-the-art in both 2D [1] and 3D object detection [2].

1 Introduction

Estimating semantic 3D information from monocular images is an important task in applications such as autonomous driving and personal robotics. Let's consider, for example, the case of an autonomous agent driving around a city. In order to properly react to dynamic situations, such an agent needs to reason about which objects are present in the scene, as well as their 3D location, orientation and 3D extent. Likewise, a home robot requires accurate 3D information in order to navigate in cluttered environments as well as grasp and manipulate objects.

While impressive performance has been achieved for instance-level 3D object recognition [3], category-level 3D object detection has proven to be a much harder task, due to intra-class variation as well as appearance variation due to viewpoint changes. The most common approach to 3D detection is to discretize the viewing sphere into bins and train a 2D detector for each viewpoint [4, 5, 1, 6]. However, these approaches output rather weak 3D information, where typically a 2D bounding box around the object is returned along with an estimated discretized viewpoint. In contrast, object-centered approaches represent and reason about objects using more sophisticated 3D models. The main idea is to index (or vote) into a parameterized pose space with local geometric [7] or appearance features that bear only weak viewpoint dependencies [8, 9, 10, 11]. The main advantage of this line of work is that it enables a continuous pose representation [10, 11, 12, 8], 3D bounding box prediction [8], and potentially requires fewer training examples due to its more compact visual representation.

[Figure 1: Left: our deformable 3D cuboid model. Right: the viewpoint angle θ.]

Unfortunately, these approaches work with weaker appearance models that cannot compete with current discriminative approaches [1, 6, 13]. Recently, Hedau et al. [2] proposed to extend the 2D HOG-based template detector of [14] to predict 3D cuboids.
However, since the model represents the object's appearance as a rigid template in 3D, its performance has been shown to be inferior to (2D) deformable part-based models (DPMs) [1]. In contrast, in this paper we extend DPM to reason in 3D. Our model represents an object class with a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box (see Fig. 1). Towards this goal, we introduce the notion of a stitching point, which enables the deformation between the faces and the cuboid to be encoded efficiently. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation due to viewpoint. We reason about different face visibility patterns called aspects [15]. We train the cuboid model jointly and discriminatively and share weights across all aspects to attain efficiency. In inference, our model outputs 2D along with oriented 3D bounding boxes around the objects. This enables the estimation of the object's viewpoint, which is a continuous variable in our representation. We demonstrate the effectiveness of our approach in indoor [2] and outdoor scenarios [16], and show that our approach significantly outperforms the state-of-the-art in both 2D [1] and 3D object detection [2].

2 Related work

The most common way to tackle 3D detection is to represent a 3D object by a collection of independent 2D appearance models [4, 5, 1, 6, 13], one for each viewpoint. Several authors augmented the multi-view representation with weak 3D information by linking the features or parts across views [17, 18, 19, 20, 21]. This allows for a dense representation of the viewing sphere by morphing related nearby views [12]. Since these methods usually require a significant amount of training data, renderings of synthetic CAD models have been used to supplement under-represented views or provide supervision for training object parts or object geometry [22, 13, 8].

Object-centered approaches represent object classes with a 3D model typically equipped with view-invariant geometry and appearance [7, 23, 24, 8, 9, 10, 11, 25]. While these types of models are attractive as they enable continuous viewpoint representations, their detection performance has typically been inferior to 2D deformable models. Deformable part-based models (DPMs) [1] are nowadays arguably the most successful approach to category-level 2D detection. Towards 3D, DPMs have been extended to reason about object viewpoint by training the mixture model with viewpoint supervision [6, 13]. Pepik et al. [13] took a step further by incorporating supervision also at the part level. Consistency was enforced by forcing the parts of different 2D viewpoint models to belong to the same set of 3D parts in the physical space. However, all these approaches base their representation in 2D and thus output only 2D bounding boxes along with a discretized viewpoint.

The closest work to ours is [2], which models an object with a rigid 3D cuboid, composed of independently trained faces without deformations or parts. Our model shares certain similarities with this work, but has a set of important differences. First, our model is hierarchical and deformable: we allow deformations of the faces, while the faces themselves are composed of deformable parts. We also explicitly reason about the visibility patterns of the cuboid model and train the model accordingly. Furthermore, all the parameters in our model are trained jointly using a latent SVM formulation.
These differences are important, as our approach outperforms [2] by a significant margin.

[Figure 2: Aspects, together with the range of θ that they cover, for (left) cars and (right) beds.]

Finally, in concurrent work, Xiang and Savarese [26] introduced a deformable 3D aspect model, where an object is represented as a set of planar parts in 3D. This model shares many similarities with our approach; however, unlike ours, it requires a collection of CAD models in training.

3 A Deformable 3D Cuboid Model

In this paper, we are interested in the problem of, given a single image, estimating the 3D location and orientation of the objects present in the scene. We parameterize the problem as the one of estimating a tight 3D bounding box around each object. Our 3D box is oriented, as we reason about the correspondences between the faces in the estimated bounding box and the faces of our model (i.e., which face is the top face, front face, etc.). Towards this goal, we represent an object class as a deformable 3D cuboid, which is composed of 6 deformable faces, i.e., their locations and scales can deviate from their anchors on the cuboid. The model for each cuboid face is a 2D template that represents the appearance of the object in view-rectified coordinates, i.e., where the face is frontal. Additionally, we augment each face with parts, and employ a deformation model between the locations of the parts and the anchor points on the face they belong to. We assume that any viewpoint of an object in the image domain can be modeled by rotating our cuboid in 3D, followed by perspective projection onto the image plane. Thus, inference involves sliding and rotating the deformable cuboid in 3D and scoring the hypotheses.

A necessary component of any 3D model is to properly reason about the face visibility of the object (in our case, the cuboid). Assuming a perspective camera, for any given viewpoint, at most 3 faces are visible in an image. Topologically different visibility patterns define different aspects [15] of the object. Note that a cuboid can have up to 26 aspects; however, not all necessarily occur for each object class. For example, for objects supported by the floor, the bottom face will never be visible. For cars, typically the top face is not visible either. Our model only reasons about the occurring aspects of the object class of interest, which we estimate from the training data. Note that the visibility, and thus the aspect, is a function of the 3D orientation and position of a cuboid hypothesis with respect to the camera. We define θ to be the angle between the outer normal to the front face of the cuboid hypothesis and the vector connecting the camera and the center of the 3D box. We refer the reader to Fig. 1 for a visualization. Assuming a camera overlooking the center of the cuboid, Fig. 2 shows the range of the cuboid orientation angle on the viewing sphere for which each aspect occurs in the datasets of [2, 16], which we employ for our experiments. Note, however, that in inference we do not assume that the object's center lies on the camera's principal axis.

In order to make the cuboid deformable, we introduce the notion of a stitching point, which is a point on the box that is common to all visible faces for a particular aspect. We incorporate a quadratic deformation cost between the locations of the faces and the stitching point to encourage the cuboid to be as rigid as possible.
We impose an additional deformation cost between the visible faces, ensuring that their sizes match when we stitch them into a cuboid hypothesis. Our model represents each aspect with its own set of weights. To reduce the computational complexity and impose regularization, we share the face and part templates across all aspects, as well as the deformations between them. However, the deformations between the faces and the cuboid are aspect specific, as they depend on the stitching point. We formally define the model by a (6 * (n + 1) + 1)-tuple ({(P_i, P_{i,1}, ..., P_{i,n})}_{i=1,...,6}, b), where P_i models the i-th face, P_{i,j} is a model for the j-th part belonging to face i, and b is a real-valued bias term. For ease of exposition, we assume each face to have the same number of parts, n; however, the framework is general and allows the numbers of parts to vary across faces.

[Figure 3: Dataset [2] statistics for training our cuboid model (left and middle) and DPM [1] (right).]

For each aspect a, we define each of its visible faces by a 3-tuple (F_i, r_{a,i}, d^stitch_{a,i}), where F_i is a filter for the i-th face, r_{a,i} is a two-dimensional vector specifying the position of the i-th face relative to the position of the stitching point in the rectified view, and d^stitch_{a,i} is a four-dimensional vector specifying coefficients of a quadratic function defining a deformation cost for each possible placement of the face relative to the position of the stitching point. Here, b_a is a bias term that is aspect specific and allows us to calibrate the scores across aspects with different numbers of visible faces. Note that F_i will be shared across aspects, and thus we omit the index a. The model representing each part is face-specific, and is defined by a 3-tuple (F_{i,j}, r_{i,j}, d_{i,j}), where F_{i,j} is a filter for the j-th part of the i-th face, r_{i,j} is a two-dimensional vector specifying an "anchor" position for part j relative to the root position of face i, and d_{i,j} is a four-dimensional vector specifying coefficients of a quadratic function defining a deformation cost for each possible placement of the part relative to the anchor position on the face. Note that the parts are defined relative to the face and are thus independent of the aspects. We thus share them across aspects.

The appearance templates as well as the deformation parameters in the model are defined for each face in a canonical view where that face is frontal. We thus score a face hypothesis in the rectified view that makes the hypothesis frontal. Each pair of parallel faces shares a homography, and thus at most three rectifications are needed for each viewpoint hypothesis θ. In indoor scenarios, we estimate the 3 orthogonal vanishing points and assume a Manhattan world. As a consequence, only 3 rectifications are necessary altogether. In the outdoor scenario, we assume that at least the vertical vanishing point is given, or equivalently, that the orientation (but not position) of the ground plane is known. As a consequence, we only need to search for a 1-D angle θ, i.e., the azimuth, in order to estimate the rotation of the 3D box.
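For concreteness, the parameterization just described can be organized as follows. This is a minimal sketch with hypothetical field names, not the authors' implementation; the shapes follow the dimensions stated above.

from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class PartModel:
    filt: np.ndarray     # F_{i,j}: HOG filter, scored at twice the face resolution
    anchor: np.ndarray   # r_{i,j}: 2-D anchor relative to the face root
    deform: np.ndarray   # d_{i,j}: 4 quadratic deformation coefficients

@dataclass
class FaceModel:
    filt: np.ndarray          # F_i: HOG filter in the rectified (frontal) view
    parts: List[PartModel]    # n parts, shared by every aspect

@dataclass
class CuboidModel:
    faces: List[FaceModel]      # the 6 shared face models
    visibility: np.ndarray      # V(i, a): (num_aspects, 6) boolean table
    stitch_offset: np.ndarray   # r_{a,i}: (num_aspects, 6, 2) face-to-stitching-point offsets
    stitch_deform: np.ndarray   # d^stitch_{a,i}: (num_aspects, 6, 4) quadratic coefficients
    aspect_bias: np.ndarray     # b_a: (num_aspects,) calibration biases

Only visibility, stitch_offset, stitch_deform and aspect_bias vary with the aspect a; the face and part filters are shared, which is what lets every training example contribute to every aspect.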
A sliding window approach is then used to score the cuboid hypotheses, by scoring the parts, faces and their deformations in their own rectified view, as well as the deformations of the faces with respect to the stitching point. Following 2D deformable part-based models [1], we use a pyramid of HOG features to describe each face-specific rectified view, H(i, θ), and score a template for a face as follows:

score(p_i, θ) = Σ_{u',v'} F_i(u', v') · H[u_i + u'; v_i + v'; i, θ]    (1)

where p_i = (u_i, v_i, l_i) specifies the position (u_i, v_i) and level l_i of the face filter in the face-rectified feature pyramid. We score each part p_{i,j} = (u_{i,j}, v_{i,j}, l_{i,j}) in a similar fashion, but the pyramid is indexed at twice the resolution of the face. We define the compatibility score between the parts and the corresponding face, denoted as p_i = {p_i, {p_{i,j}}_{j=1,...,n}}, as the sum over the part scores and their deformations with respect to the anchor positions on the face:

scoreparts(p_i, θ) = Σ_{j=1}^{n} ( score(p_{i,j}, θ) − d_{ij} · φ_d(p_i, p_{i,j}) )    (2)

We thus define the score of a 3D cuboid hypothesis to be the sum of the scores of each face and its parts, as well as the deformation of each face with respect to the stitching point and the deformation of the faces with respect to each other, as follows:

score(x, θ, s, p) = Σ_{i=1}^{6} V(i, a) ( score(p_i, θ) − d^stitch_{a,i} · φ^stitch_d(p_i, s, θ) )
                  − Σ_{i=1, i>ref}^{6} V(i, a) · d^face_{i,ref} φ^face_d(p_i, p_ref, θ)
                  + Σ_{i=1}^{6} V(i, a) · scoreparts(p_i, θ) + b_a
? , d6,n , dstitch , ? ? ? , dstitch , df1,2 , ? ? ? , df5,6 , ba ), (5) a,1 a,6 and the feature vector: ? 1 , i, ?), ? ? ? , H(p ? 1,1 , i, ?), ???d (p1 , p1,1 ), ? ? ? , ???d (p6 , p6,n ), ?(x, a(?, s), p) = H(p  ? ??stitch (p1 , s, ?), ? ? ? , ???stitch (p6 , s, ?), ???f ace (p1 , p2 ), ? ? ? , 1 d d d ? ?) = V (i, a) ? ?(i, ?). where ?? includes the visibility score in the feature vector, e.g., ?(i, Inference: Inference in this model can be done by computing fw (x) = max wa ? ?(x, a(?, s), p) ?,s,p This can be solved exactly via dynamic programming, where the score is first computed for each ?, i.e., maxs,p wa ? ?(x, a(?, s), p), and then a max is taken over the angles ?. We use a discretization of 20 deg for the angles. To get the score for each ?, we first compute the feature responses for the part and face templates (Eq. (1)) using a sliding window approach in the corresponding feature pyramids. As in [1], distance transforms are used to compute the deformation scores of the parts efficiently, that is, Eq. (2). The score for each face simply sums the response of the face template and the scores of the parts. We again use distance transforms to compute the deformation scores for each face and the stitching point, which is carried out in the rectified coordinates for each face. We then compute the deformation scores between the faces in Eq. (4), which can be performed efficiently due to the fact that sides of the same length along one dimension (horizontal or vertical) in the coordinates of face i will also be constant along the corresponding line when projected to the coordinate system of face j. Thus, computing the side length ratios of two faces is not quadratic in the number of pixels but only in the number of horizontal or vertical lines. Finally, we reproject the scores to the image coordinate system and sum them to get the score for each ?. 5 Hedau et al. [2] ours Detectors? performance DPM [1] 3D det. combined 54.2% 51.3% 59.6% 55.6% 59.4% 60.5% DPM [1] 60.0% Layout rescoring 3D det. combined 62.8% 64.6% 63.8% Table 1: Detection performance (measured in AP at 0.5 IOU overlap) for the bed dataset of [2] 3D measure DPM fit3D BBOX3D combined BBOX3D + layout comb. + layout 48.2% 53.9% 53.9% 57.8% 57.1% convex hull 16.3% 33.0% 34.4% 33.5% 33.6% face overlap Table 2: 3D detection performance in AP (50% IOU overlap of convex hulls and faces) bed: 3D perf.: conv hull overlap DPM (AP = 0.556) 3D BBOX (AP = 0.594) combined (AP = 0.605) 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 recall 1 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0 bed: 3D perf.: face overlap DPM fit3D (AP = 0.482) 3D BBOX (AP = 0.539) combined (AP = 0.539) precision precision precision bed: 2D Detection performance 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 recall 1 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0 DPM fit3D (AP = 0.163) 3D BBOX (AP = 0.330) combined (AP = 0.344) 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 recall Figure 5: Precision-recall curves for (left) 2D detection (middle) convex hull, (right) face overlap. Learning: Given a set of training samples D = (hx1 , y1 , bb1 i, ? ? ? hxN , yN , bbN i), where x is an image, yi ? {?1, 1}, and bb ? R8?2 are the eight coordinates of the 3D bounding box in the image, our goal is to learn the weights w = [wa1 , ? ? ? , waP ] for all P aspects in Eq. (5). To train our model using partially labeled data, we use a latent SVM formulation [1], however, frameworks such as latent structural SVMs [27] are also possible. 
Table 1: Detection performance (measured in AP at 0.5 IOU overlap) for the bed dataset of [2]:

                    Detector's performance          Layout rescoring
                    DPM [1]  3D det.  combined      DPM [1]  3D det.  combined
Hedau et al. [2]    54.2%    51.3%    59.6%         60.0%    62.8%    63.8%
ours                55.6%    59.4%    60.5%         -        64.6%    -

Table 2: 3D detection performance in AP (50% IOU overlap of convex hulls and faces):

3D measure      DPM fit3D   BBOX3D   combined   BBOX3D + layout   comb. + layout
convex hull     48.2%       53.9%    53.9%      57.8%             57.1%
face overlap    16.3%       33.0%    34.4%      33.5%             33.6%

[Figure 5: Precision-recall curves for (left) 2D detection (AP: DPM 0.556, 3D BBOX 0.594, combined 0.605), (middle) convex hull overlap (AP: DPM fit3D 0.482, 3D BBOX 0.539, combined 0.539), and (right) face overlap (AP: DPM fit3D 0.163, 3D BBOX 0.330, combined 0.344).]

Learning: Given a set of training samples D = (<x_1, y_1, bb_1>, ..., <x_N, y_N, bb_N>), where x is an image, y ∈ {−1, 1}, and bb ∈ R^{8x2} are the eight coordinates of the 3D bounding box in the image, our goal is to learn the weights w = [w_{a_1}, ..., w_{a_P}] for all P aspects in Eq. (5). To train our model using partially labeled data, we use a latent SVM formulation [1]; however, frameworks such as latent structural SVMs [27] are also possible.

To initialize the full model, we first learn a deformable face+parts model for each face independently, where the faces of the training examples are rectified to be frontal prior to training. We estimate the different aspects of our 3D model from the statistics of the training data, and compute for each training cuboid the relative positions r_{a,i} of face i and the stitching point in the rectified view of each face. We then perform joint training of the full model, treating the training cuboid and the stitching point as latent, however requiring that each face filter and the face annotation overlap more than 70%. Following [1], we utilize a stochastic gradient descent approach which alternates between solving for the latent variables and updating the weights w. Note that this algorithm is only guaranteed to converge to a local optimum, as the latent variables make the problem non-convex.

4 Experiments

We evaluate our approach on two datasets: the dataset of [2] and KITTI [16], an autonomous driving dataset. To our knowledge, these are the only datasets which have been labeled with 3D bounding boxes. We begin our experimentation with the indoor scenario [2]. The bedroom dataset contains 181 train and 128 test images. To enable a comparison with the DPM detector [1], we trained a model with 6 mixtures and 8 parts using the same training instances but employing 2D bounding boxes. Our 3D bed model was trained with two parts per face. Fig. 3 shows the statistics of the dataset in terms of the number of training examples for each aspect (where F-R-T denotes an aspect for which the front, right and top faces are visible), as well as per face. Note that the fact that the dataset is unbalanced (fewer examples for aspects with two faces) does not affect our approach too much, as only the face-to-stitching-point deformation parameters are aspect specific. As we share the weights among the aspects, the number of training instances for each face is significantly higher (Fig. 3, middle). We compare this to DPM in Fig. 3, right. Our method can better exploit the training data by factoring out the viewpoint dependence of the training examples.

We begin our quantitative evaluation by using our model to reason about 2D detection. The 2D bounding boxes for our model are computed by fitting a 2D box around the convex hull of the projection of the predicted 3D box (see the sketch below). We report average precision (AP), where we require that the output 2D boxes overlap with the ground-truth boxes at least 50% using the intersection-over-union (IOU) criterion. The precision-recall curves are shown in Fig. 5. We compare our approach to the deformable part model (DPM) [1] and the cuboid model of Hedau et al. [2]. As shown in Table 1, we outperform the cuboid model of [2] by 8.1% and DPM by 3.8%. This is notable, as to the best of our knowledge, this is the first time that a 3D approach outperforms the DPM.[1] Examples of detections of our model are shown in Fig. 6.

[Figure 6: Detection examples obtained with our model on the bed dataset [2].]
[Figure 7: Detections in 3D + layout.]
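The 2D boxes mentioned above can be obtained directly from the projected cuboid corners, since the convex hull of the projection is spanned by them. A minimal sketch, in which the 3 x 4 camera matrix P and the corner ordering are assumptions for illustration:

import numpy as np

def project_points(P, X):
    # P: 3x4 camera projection matrix; X: (n, 3) 3-D points.
    Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous coordinates
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]                 # perspective divide

def box2d_from_cuboid(P, corners3d):
    # corners3d: (8, 3) corners of the hypothesized 3-D box. The min/max of
    # the projected corners is exactly the box around their convex hull.
    pts = project_points(P, corners3d)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return x0, y0, x1, y1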
A standard way to improve a detector's performance has been to rescore object detections using contextual information [1]. Following [2], we use two types of context. We first combined our detector with the 2D-DPM [1] to see whether the two sources of information complement each other. The second type of context is at the scene level, where we exploit the fact that objects in indoor environments do not penetrate the walls and usually respect certain size ratios in 3D. We combine the 3D and 2D detectors using a two-step process, where first the 2D detector is run inside the bounding boxes produced by our cuboid model. A linear SVM that utilizes both scores as input is then employed to produce a score for the combined detection. While we observe a slight improvement in performance (1.1%), it seems that our cuboid model is already scoring the correct boxes well. This is in contrast to the cuboid model of [2], where the increase in performance is more significant due to the poorer accuracy of their 3D approach.

Following [2], we use an estimate of the room layout to rescore the object hypotheses at the scene level. We use the approach by Schwing et al. [28] to estimate the layout. To train the re-scoring classifier, we use the image-relative width and height features as in [1], footprint overlap between the 3D box and the floor as in [2], as well as 3D statistics such as the distance between the object 3D box and the wall relative to the room height, and the ratio between the object and room height in 3D. This further increases our performance by 5.2% (Table 1). Examples of 3D reconstructions of the room and our predicted 3D object hypotheses are shown in Fig. 7.

To evaluate the 3D performance of our detector, we use the convex hull overlap measure as introduced in [2]. Here, instead of computing the overlap between the predicted boxes, we require that the convex hulls of our 3D hypotheses projected to the image plane and the ground-truth annotations overlap at least 50% in the IOU measure. Table 2 reports the results and shows that only little is lost in performance due to a stricter overlap measure.

[Footnote 1: The numbers for our and [2]'s version of DPM slightly differ. The difference is likely due to how the negative examples are sampled during training (the dataset has a positive example in each training image).]

[Figure 8: KITTI: examples of car detections. (top) Ground truth, (bottom) our 3D detections, augmented with best-fitting CAD models to visualize inferred 3D box orientations.]

Since our model also predicts the locations of the dominant object faces (and thus the 3D object orientation), we would like to quantify its accuracy. We introduce an even stricter measure, where we require that the predicted cuboid faces also overlap with the faces of the ground-truth cuboids. In particular, a hypothesis is correct if the average of the overlaps between the top faces and the vertical faces exceeds 50% IOU. We compare the results of our approach to DPM [1]. Note, however, that [1] returns only 2D boxes, and hence a direct comparison is not possible. We thus augment the original DPM with 3D information in the following way. Since the three dominant orientations of the room, and thus the objects, are known (estimated via the vanishing points), we can find a 3D box whose projection best overlaps with the output of the 2D detector. This can be done by sliding a cuboid (whose dimensions match our cuboid model) in 3D to best fit the 2D bounding box. Our approach outperforms the 3D-augmented DPM by a significant margin of 16.7%. We attribute this to the fact that our cuboid is deformable and thus the faces localize more accurately on the faces of the object. We also conducted preliminary tests of our model on the autonomous driving dataset KITTI [16].
We trained our model with 8 aspects (estimated from the data) and 4 parts per face. An example of a learned aspect model is shown in Fig. 4. Note that the rectangular patches on the faces represent the parts, and color coding is used to depict the learned part and face deformation weights. We can observe that the model effectively and compactly factors out the appearance changes due to changes in viewpoint. Examples of detections are shown in Fig. 8. The top rows show ground-truth annotations, while the bottom rows depict our predicted 3D boxes. To showcase also the viewpoint prediction of our detector, we insert a CAD model inside each estimated 3D box, matching its orientation in 3D. In particular, for each detection we automatically chose a CAD model out of a collection of 80 models whose 3D bounding box best matches the dimensions of the predicted box. One can see that our 3D detector is able to predict the viewpoints of the objects well, as well as the type of car.

5 Conclusion

We proposed a novel approach to 3D object detection, which extends the well-acclaimed DPM to reason in 3D by means of a deformable 3D cuboid. Our cuboid allows for deformations at the face level via a stitching point, as well as deformations between the faces and the parts. We demonstrated the effectiveness of our approach in indoor and outdoor scenarios and showed that our approach outperforms [1] and [2] in terms of 2D and 3D estimation. In future work, we plan to reason jointly about the 3D scene layout and the objects in order to improve the performance in both tasks.

Acknowledgements. S.F. has been supported in part by DARPA, contract number W911NF-10-20060. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either express or implied, of the Army Research Laboratory or the U.S. Government.

References

[1] Felzenszwalb, P. F., Girshick, R. B., McAllester, D., and Ramanan, D. (2010) Object detection with discriminatively trained part based models. IEEE TPAMI, 32, 1627-1645.
[2] Hedau, V., Hoiem, D., and Forsyth, D. (2010) Thinking inside the box: Using appearance models and context based on room geometry. ECCV, vol. 6, pp. 224-237.
[3] Hinterstoisser, S., Lepetit, V., Ilic, S., Fua, P., and Navab, N. (2010) Dominant orientation templates for real-time detection of texture-less objects. CVPR.
[4] Schneiderman, H. and Kanade, T. (2000) A statistical method for 3d object detection applied to faces and cars. CVPR, pp. 1746-1759.
[5] Torralba, A., Murphy, K. P., and Freeman, W. T. (2007) Sharing visual features for multiclass and multiview object detection. IEEE TPAMI, 29, 854-869.
[6] Gu, C. and Ren, X. (2010) Discriminative mixture-of-templates for viewpoint classification. ECCV, pp. 408-421.
[7] Lowe, D. (1991) Fitting parameterized three-dimensional models to images. IEEE TPAMI, 13, 441-450.
[8] Liebelt, J., Schmid, C., and Schertler, K. (2008) Viewpoint-independent object class detection using 3d feature maps. CVPR.
[9] Yan, P., Khan, S. M., and Shah, M. (2007) 3d model based object class detection in an arbitrary view. ICCV.
[10] Glasner, D., Galun, M., Alpert, S., Basri, R., and Shakhnarovich, G. (2011) Viewpoint-aware object detection and pose estimation. ICCV.
[11] Savarese, S. and Fei-Fei, L. (2007) 3d generic object categorization, localization and pose estimation. ICCV.
[12] Su, H., Sun, M., Fei-Fei, L., and Savarese, S.
(2009) Learning a dense multi-view representation for detection, viewpoint classification and synthesis of object categories. ICCV.
[13] Pepik, B., Stark, M., Gehler, P., and Schiele, B. (2012) Teaching 3d geometry to deformable part models. In Belongie, S., Blake, A., Luo, J., and Yuille, A. (eds.), CVPR.
[14] Dalal, N. and Triggs, B. (2005) Histograms of oriented gradients for human detection. CVPR.
[15] Koenderink, J. and van Doorn, A. (1976) The singularities of the visual mappings. Bio. Cyber., 24, 51-59.
[16] Geiger, A., Lenz, P., and Urtasun, R. (2012) Are we ready for autonomous driving? CVPR.
[17] Kushal, A., Schmid, C., and Ponce, J. (2007) Flexible object models for category-level 3d object recognition. CVPR.
[18] Thomas, A., Ferrari, V., Leibe, B., Tuytelaars, T., Schiele, B., and Gool, L. V. (2006) Toward multi-view object class detection. CVPR.
[19] Hoiem, D., Rother, C., and Winn, J. (2007) 3d layoutcrf for multi-view object class recognition and segmentation. CVPR.
[20] Sun, M., Su, H., Savarese, S., and Fei-Fei, L. (2009) A multi-view probabilistic model for 3d object classes. CVPR.
[21] Payet, N. and Todorovic, S. (2011) Probabilistic pose recovery using learned hierarchical object models. ICCV.
[22] Stark, M., Goesele, M., and Schiele, B. (2010) Back to the future: Learning shape models from 3d cad data. British Machine Vision Conference.
[23] Brooks, R. A. (1983) Model-based three-dimensional interpretations of two-dimensional images. IEEE TPAMI, 5, 140-150.
[24] Dickinson, S. J., Pentland, A. P., and Rosenfeld, A. (1992) 3-d shape recovery using distributed aspect matching. IEEE TPAMI, 14, 174-198.
[25] Sun, M., Bradski, G., Xu, B.-X., and Savarese, S. (2010) Depth-encoded hough voting for coherent object detection, pose estimation, and shape recovery. ECCV.
[26] Xiang, Y. and Savarese, S. (2012) Estimating the aspect layout of object categories. CVPR.
[27] Yu, C.-N. and Joachims, T. (2009) Learning structural svms with latent variables. ICML.
[28] Schwing, A., Hazan, T., Pollefeys, M., and Urtasun, R. (2012) Efficient structured prediction for 3d indoor scene understanding. CVPR.
Distributed Non-Stochastic Experts

Varun Kanade*, UC Berkeley, [email protected]
Zhenming Liu**, Princeton University, [email protected]
Bozidar Radunovic, Microsoft Research, [email protected]

Abstract

We consider the online distributed non-stochastic experts problem, where the distributed system consists of one coordinator node that is connected to k sites, and the sites are required to communicate with each other via the coordinator. At each time-step t, one of the k site nodes has to pick an expert from the set {1, ..., n}, and the same site receives information about payoffs of all experts for that round. The goal of the distributed system is to minimize regret at time horizon T, while simultaneously keeping communication to a minimum. The two extreme solutions to this problem are: (i) Full communication: this essentially simulates the non-distributed setting to obtain the optimal O(√(log(n) T)) regret bound at the cost of T communication. (ii) No communication: each site runs an independent copy; the regret is O(√(log(n) kT)) and the communication is 0. This paper shows the difficulty of simultaneously achieving regret asymptotically better than √(kT) and communication better than T. We give a novel algorithm that for an oblivious adversary achieves a non-trivial trade-off: regret O(√(k^{5(1+ε)/6} T)) and communication O(T/k^ε), for any value of ε ∈ (0, 1/5). We also consider a variant of the model, where the coordinator picks the expert. In this model, we show that the label-efficient forecaster of Cesa-Bianchi et al. (2005) already gives us a strategy that is near optimal in the regret vs. communication trade-off.

1 Introduction

In this paper, we consider the well-studied non-stochastic expert problem in a distributed setting. In the standard (non-distributed) setting, there are a total of n experts available for the decision-maker to consult, and at each round t = 1, ..., T, she must choose to follow the advice of one of the experts, say a^t, from the set [n] = {1, ..., n}. At the end of the round, she observes a payoff vector p^t ∈ [0, 1]^n, where p^t[a] denotes the payoff that would have been received by following the advice of expert a. The payoff received by the decision-maker is p^t[a^t]. In the non-stochastic setting, an adversary decides the payoff vectors at any time step. At the end of the T rounds, the regret of the decision-maker is the difference between the payoff that she would have received using the single best expert at all times in hindsight and the payoff that she actually received, i.e.,

R = max_{a ∈ [n]} Σ_{t=1}^{T} p^t[a] − Σ_{t=1}^{T} p^t[a^t].

The goal here is to minimize her regret; this general problem in the non-stochastic setting captures several applications of interest, such as experiment design, online ad-selection, portfolio optimization, etc. (See [1, 2, 3, 4, 5] and references therein.)

[* This work was performed while the author was at Harvard University, supported in part by grant NSF-CCF-09-64401.]
[** This work was performed while the author was at Harvard University, supported in part by grants NSF-IIS-0964473 and NSF-CCF-0915922.]

Tight bounds on regret for the non-stochastic expert problem are obtained by the so-called follow-the-regularized-leader approaches; at time t, the decision-maker chooses a distribution, x_t, over the n experts. Here x_t minimizes the quantity −Σ_{s=1}^{t−1} p^s · x + r(x), where r is a regularizer. Common regularizers are the entropy function, which results in Hedge [1] or the exponentially weighted forecaster
(see chap. 2 in [2]), or, as we consider in this paper, r(x) = −ξ · x, where ξ is a random vector drawn uniformly from [0, ν]^n for a scale parameter ν, which gives the follow the perturbed leader (FPL) algorithm [6].

We consider the setting when the decision-maker is a distributed system, where several different nodes may select experts and/or observe payoffs at different time-steps. Such settings are common; e.g., internet search companies, such as Google or Bing, may use several nodes to answer search queries, and the performance is revealed by user clicks. From the point of view of making better predictions, it is useful to pool all available data. However, this may involve significant communication, which may be quite costly. Thus, the question of interest is studying the trade-off between the cost of communication and the cost of inaccuracy (because of not pooling together all data).

2 Models and Summary of Results

We consider a distributed computation model consisting of one central coordinator node connected to k site nodes. The site nodes must communicate with each other using the coordinator node. At each time step, the distributed system receives a query[1], which indicates that it must choose an expert to follow. At the end of the round, the distributed system observes the payoff vector. We consider two different models, described in detail below: the site prediction model, where one of the k sites receives a query at any given time-step, and the coordinator prediction model, where the query is always received at the coordinator node. In both models, the payoff vector, p^t, is always observed at one of the k site nodes. Thus, some communication is required to share the information about the payoff vectors among nodes. As we shall see, these two models yield different algorithms and performance bounds. All missing proofs are provided in the long version [7].

Goal: The algorithm implemented on the distributed system may use randomness, both to decide which expert to pick and to decide when to communicate with other nodes. We focus on simultaneously minimizing the expected regret and the expected communication used by the (distributed) algorithm. Recall that the expected regret is:

E[R] = E[ max_{a ∈ [n]} Σ_{t=1}^{T} p^t[a] − Σ_{t=1}^{T} p^t[a^t] ],    (1)

where the expectation is over the random choices made by the algorithm. The expected communication is simply the expected number (over the random choices) of messages sent in the system. As we show in this paper, this is a challenging problem, and to keep the analysis simple we focus on bounds in terms of the number of sites k and the time horizon T, which are often the most important scaling parameters. In particular, our algorithms are variants of follow the perturbed leader (FPL), and hence our bounds are not optimal in terms of the number of experts n. We believe that the dependence on the number of experts in our algorithms (upper bounds) can be strengthened using a different regularizer. Also, all our lower bounds are shown in terms of T and k, for n = 2. For larger n, using techniques similar to Thm. 3.6 in [2] should give the appropriate dependence on n.
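As a small illustration of the quantity in Eq. (1), the following computes the regret of one realized run; averaging it over the algorithm's random choices estimates E[R]. Names are ours.

import numpy as np

def empirical_regret(payoffs, actions):
    # payoffs: (T, n) array of p^t; actions: length-T sequence of a^t.
    # Best fixed expert in hindsight minus the payoff actually collected.
    payoffs = np.asarray(payoffs)
    best_fixed = payoffs.sum(axis=0).max()
    collected = payoffs[np.arange(len(actions)), actions].sum()
    return best_fixed - collected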
However, the oblivious adversary may know the description of the algorithm. An adaptive adversary is stronger: in addition to knowing the description of the algorithm, it can record all past actions of the algorithm and use these arbitrarily to decide future payoff vectors and site allocations.

Communication: We do not explicitly account for message sizes, since we are primarily concerned with scaling in terms of $T$ and $k$. We require that the message size not depend on $k$ or $T$, but only on the number of experts $n$. In other words, we assume that $n$ is substantially smaller than $T$ and $k$. All the messages used in our algorithms contain at most $n$ real numbers. As is standard in the distributed systems literature, we assume that communication delay is 0, i.e. the updates sent by any node are received by the recipients before any future query arrives. All our results still hold under the weaker assumption that the number of queries received by the distributed system during the time required to complete a broadcast is negligible compared to $k$.

(1) We do not use the word "query" in the sense of explicitly giving some information or context, but merely as an indication of the occurrence of an event that forces some site or coordinator to choose an expert.

We now describe the two models in greater detail, state our main results, and discuss related work.

1. Site Prediction Model: At each time step $t = 1, \ldots, T$, one of the $k$ sites, say $s^t$, receives a query and has to pick an expert $a^t$ from the set $[n] = \{1, \ldots, n\}$. The payoff vector $p^t \in [0, 1]^n$, where $p^t[i]$ is the payoff of the $i$-th expert, is revealed only to the site $s^t$, and the decision-maker (the distributed system) receives payoff $p^t[a^t]$, corresponding to the expert actually chosen. The site prediction model is commonly studied in distributed machine learning settings (see [8, 9, 10]). The payoff vectors $p^1, \ldots, p^T$, and also the choice of sites that receive the queries, $s^1, \ldots, s^T$, are decided by an adversary. There are two very simple algorithms in this model (sketched in code below):

(i) Full communication: The coordinator always maintains the current cumulative payoff vector $\sum_{\tau=1}^{t-1} p^\tau$. At time step $t$, site $s^t$ receives the current cumulative payoff vector from the coordinator, chooses an expert $a^t \in [n]$ using FPL, receives the payoff vector $p^t$, and sends $p^t$ to the coordinator, which updates its cumulative payoff vector. Note that the total communication is $2T$, and the system simulates (non-distributed) FPL to achieve the (optimal) regret guarantee $O(\sqrt{nT})$.

(ii) No communication: Each site maintains a cumulative payoff vector for the queries it receives, thus implementing $k$ independent versions of FPL. If the $i$-th site receives a total of $T_i$ queries ($\sum_{i=1}^{k} T_i = T$), the regret is bounded by $\sum_{i=1}^{k} O(\sqrt{nT_i}) = O(\sqrt{nkT})$ and the total communication is 0. This upper bound is actually tight in the event that there is 0 communication (see the accompanying long version [7]).
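Both baselines admit a direct simulation. The sketch below is our own illustrative code (not the authors' implementation); it reuses the uniform-perturbation rule from the FPL sketch above and tracks realized regret together with message counts in the site-prediction model.

    import random

    def _fpl(cum, eta):
        # Perturbed-leader choice on a cumulative payoff vector.
        scores = [c + random.uniform(0.0, eta) for c in cum]
        return scores.index(max(scores))

    def full_communication(payoffs, sites, eta):
        # The coordinator holds the global cumulative payoff vector; each round
        # costs 2 messages (vector down to the queried site, payoff back up).
        n = len(payoffs[0])
        cum, earned, msgs = [0.0] * n, 0.0, 0
        for p in payoffs:
            a = _fpl(cum, eta)          # the site plays FPL on global counts
            earned += p[a]
            cum = [c + x for c, x in zip(cum, p)]
            msgs += 2
        return max(cum) - earned, msgs

    def no_communication(payoffs, sites, eta):
        # Each site runs an independent FPL on the queries routed to it.
        n, k = len(payoffs[0]), max(sites) + 1
        local = [[0.0] * n for _ in range(k)]
        cum, earned = [0.0] * n, 0.0
        for p, s in zip(payoffs, sites):
            a = _fpl(local[s], eta)
            earned += p[a]
            local[s] = [c + x for c, x in zip(local[s], p)]
            cum = [c + x for c, x in zip(cum, p)]
        return max(cum) - earned, 0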
Simultaneously achieving regret asymptotically lower than $\sqrt{knT}$ using communication asymptotically lower than $T$ turns out to be a significantly challenging question. Our main positive result is the first distributed experts algorithm in the oblivious adversarial (non-stochastic) setting that uses sub-linear communication. Finding such an algorithm for an adaptive adversary is an interesting open problem.

Theorem 1. When $T \geq 2k^{2.3}$, there exists an algorithm for the distributed experts problem that, against an oblivious adversary, achieves regret $O(\log(n) \sqrt{k^{5(1+\epsilon)/6}\,T})$ and uses communication $O(T/k^{\epsilon})$, giving non-trivial guarantees in the range $\epsilon \in (0, 1/5)$.

2. Coordinator Prediction Model: At every time step, the query is received by the coordinator node, which chooses an expert $a^t \in [n]$. However, at the end of the round, one of the site nodes, say $s^t$, observes the payoff vector $p^t$. The payoff vectors $p^t$ and the choice of sites $s^t$ are decided by an adversary. This model is also a natural one and is explored in the distributed systems and streaming literature (see [11, 12, 13] and references therein).

The full communication protocol is equally applicable here, getting the optimal regret bound $O(\sqrt{nT})$ at the cost of substantial (essentially $T$) communication. But here we do not have any straightforward algorithm that achieves non-trivial regret without using any communication. This model is closely related to the label-efficient prediction problem (see Chapter 6.1-3 in [2]), where the decision-maker has a limited budget and has to spend part of its budget to observe any payoff information. The optimal strategy is to request payoff information randomly with probability $C/T$ at each time-step, where $C$ is the communication budget. We refer to this algorithm as LEF (label-efficient forecaster) [14].

Theorem 2. [14] (Informal) The LEF algorithm using FPL with communication budget $C$ achieves regret $O(T\sqrt{n/C})$ against both an adaptive and an oblivious adversary.

One crucial difference between this model and the label-efficient setting is that when communication does occur, the site can send cumulative payoff vectors comprising all previous updates to the coordinator, rather than just the latest one. The other difference is that, unlike in the label-efficient case, the sites know their local regrets and can use this knowledge to decide when to communicate. However, our lower bounds for natural types of algorithms show that these advantages probably do not help to obtain better guarantees.

Lower Bound Results: In the case of an adaptive adversary, we have an unconditional (for any type of algorithm) lower bound in both models:

Theorem 3. Let $n = 2$ be the number of experts. Then any (distributed) algorithm that achieves expected regret $o(\sqrt{kT})$ must use communication $(T/k)(1 - o(1))$.

The proof appears in [7]. Notice that in the coordinator prediction model, when $C = T/k$, this lower bound is matched by the upper bound of LEF. In the case of an oblivious adversary, our results are weaker, but we can show that certain natural types of algorithms are not directly applicable in this setting. The so-called regularized leader algorithms maintain a cumulative payoff vector $P^t$ and use only this vector and a regularizer to select an expert at time $t$. We consider two variants in the distributed setting:

(i) Distributed Counter Algorithms: Here the forecaster only uses $\tilde{P}^t$, an (approximate) version of the cumulative payoff vector $P^t$, but we make no assumptions on how the forecaster will use $\tilde{P}^t$. Such a $\tilde{P}^t$ can be maintained with sub-linear communication by applying techniques from the distributed systems literature [12].
(ii) Delayed Regularized Leader: Here the regularized leaders don't try to explicitly maintain an approximate version of the cumulative payoff vector. Instead, they may use an arbitrary communication protocol, but make predictions using the cumulative payoff vector (built from any past payoff vectors they could have received) and some regularizer.

We show in Section 3.2 that the distributed counter approach does not yield any non-trivial guarantee in the site-prediction model, even against an oblivious adversary. It is possible to show a similar lower bound in the coordinator prediction model, but it is omitted since it follows easily from the idea in the site-prediction model combined with an explicit communication lower bound given in [12]. Section 4 shows that the delayed regularized leader approach is ineffective even against an oblivious adversary in the coordinator prediction model, suggesting that the LEF algorithm is near optimal.

Related Work: Recently there has been significant interest in distributed online learning (see for example [8, 9, 10]). However, these works have focused mainly on stochastic optimization problems, so the techniques used, such as reducing variance through mini-batching, are not applicable to our setting. Questions such as network structure [9] and network delays [10] are interesting in our setting as well; however, at present our work focuses on establishing some non-trivial regret guarantees in the distributed online non-stochastic experts setting. Communication as a resource in distributed learning is also considered in [15, 16, 17]; however, that body of work seems applicable only to offline learning. Other related work includes distributed functional monitoring [11], in particular distributed counting [12, 13], and sketching [18]. Some of these techniques have been successfully applied to offline machine learning problems [19]. However, we are the first to analyze the performance-communication trade-off of an online learning algorithm in the standard distributed functional monitoring framework [11]. An application of a distributed counter to online Bayesian regression was proposed in Liu et al. [13]. Our lower bounds, discussed below, show that approximate distributed counter techniques do not directly yield non-trivial algorithms.

3 Site-prediction model

3.1 Upper Bounds

We describe our algorithm, which simultaneously achieves non-trivial bounds on expected regret and expected communication. We begin by making two assumptions that simplify the exposition. First, we assume that there are only 2 experts; the generalization from 2 experts to $n$ is easy, as discussed in Remark 1 at the end of this section. Second, we assume that there exists a global query counter, available to all sites and the coordinator, which keeps track of the total number of queries received across the $k$ sites. We discuss this assumption in Remark 2 at the end of the section. As is often the case in online algorithms, we assume that the time horizon $T$ is known; otherwise, the standard doubling trick may be employed. The notation used in this section is defined in Table 1.
Table 1: Notation used in Algorithm DFPL (Fig. 1) and in Section 3.1.

  $p^t$ : payoff vector at time-step $t$; $p^t \in [0, 1]^2$.
  $\ell$ : the length of the blocks into which the input is divided.
  $b$ : number of input blocks, $b = T/\ell$.
  $P^i$ : cumulative payoff vector within block $i$, $P^i = \sum_{t=(i-1)\ell+1}^{i\ell} p^t$.
  $Q^i$ : cumulative payoff vector until the end of block $i-1$, $Q^i = \sum_{j=1}^{i-1} P^j$.
  $M(v)$ : for a vector $v \in R^2$, $M(v) = 1$ if $v_1 > v_2$; $M(v) = 2$ otherwise.
  $FP_i(\eta)$ : random variable denoting the payoff obtained by playing FPL($\eta$) on block $i$.
  $FR_i^a(\eta)$ : random variable denoting the regret with respect to action $a$ of playing FPL($\eta$) on block $i$; $FR_i^a(\eta) = P^i[a] - FP_i(\eta)$.
  $FR_i(\eta)$ : random variable denoting the regret of playing FPL($\eta$) on the payoff vectors in block $i$; $FR_i(\eta) = \max_{a=1,2} P^i[a] - FP_i(\eta) = \max_{a=1,2} FR_i^a(\eta)$.

Figure 1: (a) DFPL: Distributed Follow the Perturbed Leader; (b) FPL: Follow the Perturbed Leader with parameter $\eta$ for 2 experts ($M(\cdot)$ is defined in Table 1, $r$ is a random vector).

  (a) DFPL($T$, $\ell$, $\eta$):
      set $b = T/\ell$; $\eta' = \sqrt{\ell}$; $q = 2\ell^3 T^2/\eta^5$
      for $i = 1, \ldots, b$:
        let $Y_i$ = Bernoulli($q$)
        if $Y_i = 1$ then            # step phase
          play FPL($\eta'$) for time-steps $(i-1)\ell+1, \ldots, i\ell$
        else                         # block phase
          $a_i = M(Q^i + r)$ where $r \in_R [0, \eta]^2$
          play $a_i$ for time-steps $(i-1)\ell+1, \ldots, i\ell$
        $P^i = \sum_{t=(i-1)\ell+1}^{i\ell} p^t$
        $Q^{i+1} = Q^i + P^i$

  (b) FPL($T$, $n = 2$, $\eta$):
      for $t = 1, \ldots, T$:
        $a^t = M(\sum_{\tau=1}^{t-1} p^\tau + r)$ where $r \in_R [0, \eta]^2$
        follow expert $a^t$ at time-step $t$
        observe payoff vector $p^t$

Algorithm Description: Our algorithm DFPL is described in Figure 1(a). We make use of the FPL algorithm, described in Figure 1(b), which takes as a parameter the amount of added noise $\eta$. DFPL treats the $T$ time steps as $b (= T/\ell)$ blocks, each of length $\ell$. At a high level, with probability $q$ the algorithm is in the step phase on any given block, running a copy of FPL (with noise parameter $\eta'$) across all time steps of the block and synchronizing after each time step. Otherwise it is in a block phase, running a copy of FPL (with noise parameter $\eta$) across blocks, with the same expert being followed for the entire block and synchronization after each block. This effectively makes $P^i$, the cumulative payoff over block $i$, the payoff vector for the block-level FPL. The block-level FPL has on average $(1-q)T/\ell$ total time steps.
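For concreteness, a minimal Python rendering of Fig. 1(a) for $n = 2$ experts follows. This is our own sketch under the same assumptions as the pseudocode ($T$ divisible by $\ell$, a known horizon); the step-phase probability q is clipped to [0, 1], since the analytic value can exceed 1 for some parameter settings.

    import math, random

    def M(v):
        # Leader of a length-2 vector (expert indices are 0-based here).
        return 0 if v[0] > v[1] else 1

    def dfpl(payoffs, ell, eta):
        # payoffs: list of T = b*ell payoff vectors for n = 2 experts.
        T = len(payoffs)
        b = T // ell
        eta_step = math.sqrt(ell)                        # eta' = sqrt(ell)
        q = min(1.0, 2.0 * ell ** 3 * T ** 2 / eta ** 5)
        Q = [0.0, 0.0]                                   # payoff over past blocks
        earned = 0.0
        for i in range(b):
            block = payoffs[i * ell:(i + 1) * ell]
            if random.random() < q:
                # step phase: run FPL(eta') step-by-step within the block
                cum = [0.0, 0.0]
                for p in block:
                    a = M([cum[j] + random.uniform(0.0, eta_step) for j in (0, 1)])
                    earned += p[a]
                    cum = [cum[j] + p[j] for j in (0, 1)]
            else:
                # block phase: one FPL(eta) decision for the whole block
                a = M([Q[j] + random.uniform(0.0, eta) for j in (0, 1)])
                earned += sum(p[a] for p in block)
            Q = [Q[j] + sum(p[j] for p in block) for j in (0, 1)]
        return max(Q) - earned                           # realized regret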
We begin by stating a (slightly stronger) guarantee for FPL.

Lemma 1. Consider the case $n = 2$. Let $p^1, \ldots, p^T \in [0, 1]^2$ be a sequence of payoff vectors such that $\max_t |p^t|_\infty \leq B$. Then FPL($\eta$) has the following guarantee on expected regret: $E[R] \leq (B/\eta) \sum_{t=1}^{T} |p^t[1] - p^t[2]| + \eta$.

The proof is a simple modification of the standard analysis [6] and is given in [7]. The rest of this section is devoted to the proof of Lemma 2.

Lemma 2. Consider the case $n = 2$. If $T > 2k^{2.3}$, Algorithm DFPL (Fig. 1), when run with parameters $\ell$, $T$, $\eta = \ell^{5/12} T^{1/2}$ and $b$, $\eta'$, $q$ as defined in Fig. 1, has expected regret $O(\sqrt{\ell^{5/6}\,T})$ and expected communication $O(Tk/\ell)$. In particular, for $\ell = k^{1+\epsilon}$ with $0 < \epsilon < 1/5$, the algorithm simultaneously achieves regret that is asymptotically lower than $\sqrt{kT}$ and communication that is asymptotically lower (3) than $T$.

(3) Note that here the asymptotics are in terms of both parameters $k$ and $T$. Obtaining communication of the form $T^{1-\epsilon}/f(k)$ with a regret bound better than $\sqrt{kT}$ seems to be a fairly difficult and interesting problem.

Since we are in the case of an oblivious adversary, we may assume that the payoff vectors $p^1, \ldots, p^T$ are fixed ahead of time. Without loss of generality, let expert 1 (out of $\{1, 2\}$) be the one with greater payoff in hindsight. Recall that $FR_i^1(\eta')$ denotes the random variable that is the regret of playing FPL($\eta'$) in a step phase on block $i$ with respect to the first expert. In particular, this will be negative if expert 2 is the best expert on block $i$, even though expert 1 is better globally. In fact, this is exactly what our algorithm exploits: it gains on regret in the communication-expensive step phase while saving on communication in the block phase.

The regret can be written as $R = \sum_{i=1}^{b} \left[ Y_i \cdot FR_i^1(\eta') + (1 - Y_i)(P^i[1] - P^i[a_i]) \right]$. Note that the random variables $Y_i$ are independent of the random variables $FR_i^1(\eta')$ and $a_i$. As $E[Y_i] = q$, we can bound the expected regret as follows:

$E[R] \leq q \sum_{i=1}^{b} E[FR_i^1(\eta')] + (1 - q) \sum_{i=1}^{b} E[P^i[1] - P^i[a_i]]$   (2)

We first analyze the second term of the above equation. This is just the regret of running FPL($\eta$) at the block level, with $T/\ell$ time steps. Using the fact that $\max_i |P^i|_\infty \leq \ell \max_t |p^t|_\infty \leq \ell$, Lemma 1 allows us to conclude that

$\sum_{i=1}^{b} E[P^i[1] - P^i[a_i]] \leq (\ell/\eta) \sum_{i=1}^{b} |P^i[1] - P^i[2]| + \eta$   (3)

Next, we analyze the first term of inequality (2). We chose $\eta' = \sqrt{\ell}$ (see Fig. 1), and the analysis of FPL guarantees that $E[FR_i(\eta')] \leq 2\sqrt{\ell}$, where $FR_i(\eta')$ denotes the actual regret of FPL($\eta'$), not the regret with respect to expert 1 (which is $FR_i^1(\eta')$). Now either $FR_i(\eta') = FR_i^1(\eta')$ (i.e. expert 1 was the better one on block $i$), in which case $E[FR_i^1(\eta')] \leq 2\sqrt{\ell}$; or $FR_i(\eta') = FR_i^2(\eta')$ (i.e. expert 2 was the better one on block $i$), in which case $E[FR_i^1(\eta')] \leq 2\sqrt{\ell} + P^i[1] - P^i[2]$. Note that in this expression $P^i[1] - P^i[2]$ is negative. Putting everything together, we can write $E[FR_i^1(\eta')] \leq 2\sqrt{\ell} - (P^i[2] - P^i[1])_+$, where $(x)_+ = x$ if $x \geq 0$ and 0 otherwise. Thus, we get the main equation for the regret:

$E[R] \leq 2qb\sqrt{\ell} \; - \; \underbrace{q \sum_{i=1}^{b} (P^i[2] - P^i[1])_+}_{\text{term 1}} \; + \; \underbrace{(\ell/\eta) \sum_{i=1}^{b} |P^i[1] - P^i[2]|}_{\text{term 2}} \; + \; \eta$   (4)

Note that the first (i.e. $2qb\sqrt{\ell}$) and last (i.e. $\eta$) terms of inequality (4) are $O(\sqrt{\ell^{5/6}\,T})$ for the setting of the parameters in Lemma 2. The strategy is to show that when "term 2" becomes large, "term 1" is also large in magnitude, but negative, compensating for the effect of "term 2". We consider a few cases:

Case 1: The best expert is identified quickly and not changed thereafter. Let $\tau$ denote the maximum index $i$ such that $Q^i[1] - Q^i[2] \leq \eta$. Note that after block $\tau$ is processed, the algorithm in the block phase will never follow expert 2. Suppose that $\tau \leq (\eta/\ell)^2$. The correct bound for "term 2" is then $(\ell/\eta) \sum_{i=1}^{\tau} |P^i[1] - P^i[2]| \leq \ell^2 \tau/\eta \leq \eta$, since $|P^i[1] - P^i[2]| \leq \ell$ for all $i$.

Case 2: The best expert may not be identified quickly, and furthermore $|P^i[1] - P^i[2]|$ is large often. In this case, although "term 2" may be large (when $|P^i[1] - P^i[2]|$ is large), this is compensated by the negative regret in "term 1" of expression (4). This is because if $|P^i[1] - P^i[2]|$ is large often, but the best expert is not identified quickly, there must be enough blocks on which $P^i[2] - P^i[1]$ is positive and large. Notice that here $\tau \geq (\eta/\ell)^2$. Define $\lambda = \eta^2/T$ and let $S = \{ i \leq \tau : |P^i[1] - P^i[2]| \geq \lambda \}$. Let $\alpha = |S|/\tau$. We show that $\sum_{i=1}^{\tau} (P^i[2] - P^i[1])_+ \geq (\alpha\tau\lambda)/2 - \eta$. To see this, consider $S_1 = \{ i \in S : P^i[1] > P^i[2] \}$ and $S_2 = S \setminus S_1$. First, observe that $\sum_{i \in S} |P^i[1] - P^i[2]| \geq \alpha\tau\lambda$. Then, if $\sum_{i \in S_2} (P^i[2] - P^i[1]) \geq (\alpha\tau\lambda)/2$, we are done. If not, $\sum_{i \in S_1} (P^i[1] - P^i[2]) \geq (\alpha\tau\lambda)/2$. Now notice that $\sum_{i=1}^{\tau} P^i[1] - P^i[2] \leq \eta$, hence it must be the case that $\sum_{i=1}^{\tau} (P^i[2] - P^i[1])_+ \geq (\alpha\tau\lambda)/2 - \eta$.
Now, for the value $q = 2\ell^3 T^2/\eta^5$, and if $\alpha \geq \eta^2/(T\ell)$, the negative contribution of "term 1" is at least $q\alpha\tau\lambda/2$, which is at least the maximum possible positive contribution of "term 2", namely $\ell^2\tau/\eta$. It is easy to see that these quantities are equal, and hence the total contribution of "term 1" and "term 2" together is at most $\eta$.

Case 3: $|P^i[1] - P^i[2]|$ is small most of the time. In this case the parameter $\eta$ is actually well-tuned (which was not the case when $|P^i[1] - P^i[2]| \approx \ell$) and gives a small overall regret (see Lemma 1). We have $\alpha < \eta^2/(T\ell)$. Note that $\alpha\ell \leq \lambda = \eta^2/T$ and that $\tau \leq T/\ell$. In this case "term 2" can be bounded easily as follows: $(\ell/\eta) \sum_{i=1}^{\tau} |P^i[1] - P^i[2]| \leq (\ell/\eta)(\alpha\tau\ell + (1-\alpha)\tau\lambda) \leq 2\eta$.

The above three cases exhaust all possibilities, and hence no matter what the nature of the payoff sequence, the expected regret of DFPL is bounded by $O(\eta)$, as required. The expected total communication is easily seen to be $O(qT + Tk/\ell)$: the $q(T/\ell)$ blocks on which step FPL is used contribute $O(\ell)$ communication each, and the $(1-q)(T/\ell)$ blocks where block FPL is used contribute $O(k)$ communication each.

Remark 1. Our algorithm can be generalized to $n$ experts by recursively dividing the set of experts in two and applying our algorithm to the two meta-experts, giving the result of Theorem 1. Details are provided in [7].

Remark 2. Instead of a global counter, it suffices for the coordinator to maintain an approximate counter and to notify all sites of the beginning and end of blocks by broadcast. This only adds $2k$ communication per block. See [7] for more details.

3.2 Lower Bounds

In this section we give a lower bound on distributed counter algorithms in the site prediction model. Distributed counters allow tight approximation guarantees: for an additive approximation factor $\gamma$, the communication required is only $O(T \log(T) \sqrt{k}/\gamma)$ [12]. We observe that the noise used by FPL is quite large, $O(\sqrt{T})$, and so it is tempting to find a suitable $\gamma$ and run FPL using approximate cumulative payoffs. We consider the class of algorithms such that: (i) whenever a site receives a query, it has an (approximate) cumulative payoff of each expert to additive accuracy $\gamma$, and any communication is used only to maintain such a counter; (ii) any site uses only the (approximate) cumulative payoffs and any local information it may have to choose an expert when queried. However, our negative result shows that even with a highly accurate counter, $\gamma = O(k)$, the non-stochasticity of the payoff sequence may cause any such algorithm to have $\Omega(\sqrt{kT})$ regret. Furthermore, we show that any distributed algorithm that implements (approximate) counters to additive error $k/10$ on all sites (4) uses communication at least $\Omega(T)$.

Theorem 4. At any time step $t$, suppose each site has an (approximate) cumulative payoff count $\tilde{P}^t[a]$ for every expert, such that $|P^t[a] - \tilde{P}^t[a]| \leq \gamma$. Then we have the following:
1. If $\gamma \geq k$, any algorithm that uses the approximate counts $\tilde{P}^t[a]$ and any local information at the site making the decision cannot achieve expected regret asymptotically better than $\sqrt{\gamma T}$.
2. Any protocol on the distributed system that guarantees that at each time step each site has a $\gamma = k/10$ approximate cumulative payoff with probability $\geq 1/2$ uses $\Omega(T)$ communication.

(4) The approximation guarantee is only required when a site receives a query and has to make a prediction.

4 Coordinator-prediction model

In the coordinator prediction model, as mentioned earlier, it is possible to use the label-efficient forecaster, LEF (Chap. 6 [2, 14]). Let $C$ be an upper bound on the total amount of communication we are allowed to use. The label-efficient predictor translates into the following simple protocol: whenever a site receives a payoff vector, it forwards that particular payoff to the coordinator with probability $p \approx C/T$. The coordinator always executes the exponentially weighted forecaster over the sampled subset of payoffs to make new decisions. Here, the expected regret is $O(T\sqrt{\log(n)/C})$.
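As an illustration, this protocol can be sketched as follows. This is our own code, not the authors': we use an FPL-style perturbed leader at the coordinator, as in Theorem 2, and rescale forwarded payoffs by 1/p so that the coordinator's counter is unbiased; the precise estimator in [14] may differ.

    import random

    def lef(payoffs, C, eta):
        # Coordinator-prediction sketch: each payoff vector observed at a site is
        # forwarded with probability p = C/T; the coordinator predicts by
        # perturbed leader on the (importance-weighted) sampled payoffs.
        T = len(payoffs)
        p_fwd = min(1.0, C / float(T))
        n = len(payoffs[0])
        sampled = [0.0] * n        # coordinator's estimate of cumulative payoffs
        cum, earned, msgs = [0.0] * n, 0.0, 0
        for p in payoffs:
            scores = [s + random.uniform(0.0, eta) for s in sampled]
            a = scores.index(max(scores))
            earned += p[a]
            cum = [c + x for c, x in zip(cum, p)]
            if random.random() < p_fwd:
                sampled = [s + x / p_fwd for s, x in zip(sampled, p)]
                msgs += 1
        return max(cum) - earned, msgs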
In other words, if our regret needs to be $O(\sqrt{T})$, the communication needs to be linear in $T$.

We observe that in principle there is a possibility of better algorithms in this setting, for mainly two reasons: (i) when the sites send payoff vectors to the coordinator, they can send cumulative payoffs rather than only the latest ones, thus giving more information; and (ii) the sites may decide when to communicate as a function of the payoff vectors, instead of just randomly. However, we present a lower bound showing that, for a natural family of algorithms, achieving regret $O(\sqrt{T})$ requires at least $\Omega(T^{1-\epsilon})$ communication for every $\epsilon > 0$, even when $k = 1$. The type of algorithms we consider may have an arbitrary communication protocol, but satisfies the following: (i) whenever a site communicates with the coordinator, the site reports its local cumulative payoff vector; (ii) when the coordinator makes a decision, it executes FPL($\sqrt{T}$) (follow the perturbed leader with noise $\sqrt{T}$) using the latest cumulative payoff vector. The proof of Theorem 5 appears in [7], and the results can be generalized to other regularizers.

Theorem 5. Consider the distributed non-stochastic experts problem in the coordinator prediction model. Any algorithm of the kind described above that achieves regret $O(\sqrt{T})$ must use $\Omega(T^{1-\epsilon})$ communication against an oblivious adversary, for every constant $\epsilon$.

5 Simulations

[Figure 2: (a) Cumulative regret on the MC sequences as a function of the chain correlation, for no-communication, mini-batch, all-communication, HYZ, and two DFPL configurations; (b) worst-case cumulative regret vs. worst-case communication cost for DFPL, mini-batch, and HYZ on the MC and zig-zag sequences.]

In this section, we describe simulation results comparing the efficacy of our algorithm DFPL with some other techniques. We compare DFPL against the simple algorithms (full communication and no communication) and two other algorithms, which we refer to as mini-batch and HYZ. In the mini-batch algorithm, the coordinator requests, randomly with some probability $p$ at any time step, all cumulative payoff vectors from all sites. It then broadcasts the sum (across all of the sites) back to the sites, so that all sites have the latest cumulative payoff vector. Whenever such a communication occurs, the cost is $2k$. We refer to this as mini-batch because it is similar in spirit to the mini-batch algorithms used in stochastic optimization problems. In the HYZ algorithm, we use the distributed counter technique of Huang et al. [12] to maintain the (approximate) cumulative payoff for each expert. Whenever a counter update occurs, the coordinator must broadcast to all nodes to make sure they have the most current update.

We consider two types of synthetic sequences (generator sketches follow below). The first is a zig-zag sequence, with $\tau$ being the length of one increase/decrease: for the first $\tau$ time steps the payoff vector is always $(1, 0)$ (expert 1 being better), then for the next $2\tau$ time steps the payoff vector is $(0, 1)$ (expert 2 is better), then again for the next $2\tau$ time-steps the payoff vector is $(1, 0)$, and so on. The zig-zag sequence is also the sequence used in the proof of the lower bound in Theorem 5. The second is a two-state Markov chain (MC) with states 1, 2 and $\Pr[1 \to 2] = \Pr[2 \to 1] = \frac{1}{2\tau}$. While in state 1, the payoff vector is $(1, 0)$, and while in state 2 it is $(0, 1)$.
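The two sequence families can be generated as follows; this is our own sketch (names are ours), producing payoff vectors for $n = 2$ experts.

    import random

    def zigzag(T, tau):
        # (1,0) for tau steps, then alternating runs of length 2*tau: (0,1), (1,0), ...
        seq, better, run = [], 0, tau
        while len(seq) < T:
            vec = [1.0, 0.0] if better == 0 else [0.0, 1.0]
            seq.extend([vec] * run)
            better, run = 1 - better, 2 * tau
        return seq[:T]

    def markov_chain(T, tau):
        # Two-state chain; switch states with probability 1/(2*tau) at each step.
        seq, state = [], 0
        for _ in range(T):
            seq.append([1.0, 0.0] if state == 0 else [0.0, 1.0])
            if random.random() < 1.0 / (2.0 * tau):
                state = 1 - state
        return seq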
In our simulations we use $T = 20000$ predictions and $k = 20$ sites. Fig. 2(a) shows the performance of the above algorithms on the MC sequences; the results are averaged across 100 runs, over both the randomness of the MC and of the algorithms. Fig. 2(b) shows the worst-case cumulative communication vs. worst-case cumulative regret trade-off for three algorithms, DFPL, mini-batch and HYZ, over all the described sequences. While in general it is hard to compare algorithms on non-stochastic inputs, our results confirm that for non-stochastic sequences inspired by the lower bounds in the paper, our algorithm DFPL outperforms the other related techniques.

References

[1] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, 1995.
[2] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[3] T. Cover. Universal portfolios. Mathematical Finance, 1:1-19, 1991.
[4] E. Hazan and S. Kale. On stochastic and worst-case models for investing. In NIPS, 2009.
[5] E. Hazan. The convex optimization approach to regret minimization. Optimization for Machine Learning, 2012.
[6] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291-307, 2005.
[7] V. Kanade, Z. Liu, and B. Radunović. Distributed non-stochastic experts. In arXiv, 2012.
[8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction. In ICML, 2011.
[9] J. Duchi, A. Agarwal, and M. Wainwright. Distributed dual averaging in networks. In NIPS, 2010.
[10] A. Agarwal and J. Duchi. Distributed delayed stochastic optimization. In NIPS, 2011.
[11] G. Cormode, S. Muthukrishnan, and K. Yi. Algorithms for distributed functional monitoring. ACM Transactions on Algorithms, 7, 2011.
[12] Z. Huang, K. Yi, and Q. Zhang. Randomized algorithms for tracking distributed count, frequencies and ranks. In PODS, 2012.
[13] Z. Liu, B. Radunović, and M. Vojnović. Continuous distributed counting for non-monotone streams. In PODS, 2012.
[14] N. Cesa-Bianchi, G. Lugosi, and G. Stoltz. Minimizing regret with label efficient prediction. In ISIT, 2005.
[15] M.-F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In COLT (to appear), 2012.
[16] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Protocols for learning classifiers on distributed data. In AISTATS, 2012.
[17] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Efficient protocols for distributed classification and optimization. In arXiv:1204.3523v1, 2012.
[18] G. Cormode, M. Garofalakis, P. Haas, and C. Jermaine. Synopses for Massive Data: Samples, Histograms, Wavelets, Sketches. Foundations and Trends in Databases, 2012.
[19] K. Clarkson, E. Hazan, and D. Woodruff. Sublinear optimization for machine learning. In FOCS, 2010.
On Triangular versus Edge Representations: Towards Scalable Modeling of Networks

Qirong Ho (School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213) qho@cs.cmu.edu
Junming Yin (School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213) junmingy@cs.cmu.edu
Eric P. Xing (School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213) epxing@cs.cmu.edu

Abstract

In this paper, we argue for representing networks as a bag of triangular motifs, particularly for important network problems that current model-based approaches handle poorly due to computational bottlenecks incurred by using edge representations. Such approaches require both 1-edges and 0-edges (missing edges) to be provided as input, and as a consequence, approximate inference algorithms for these models usually require $\Omega(N^2)$ time per iteration, precluding their application to larger real-world networks. In contrast, triangular modeling requires less computation, while providing equivalent or better inference quality. A triangular motif is a vertex triple containing 2 or 3 edges, and the number of such motifs is $O(\sum_i D_i^2)$ (where $D_i$ is the degree of vertex $i$), which is much smaller than $N^2$ for low-maximum-degree networks. Using this representation, we develop a novel mixed-membership network model and approximate inference algorithm suitable for large networks with low max-degree. For networks with high maximum degree, the triangular motifs can be naturally subsampled in a node-centric fashion, allowing for much faster inference at a small cost in accuracy. Empirically, we demonstrate that our approach, when compared to that of an edge-based model, has faster runtime and improved accuracy for mixed-membership community detection. We conclude with a large-scale demonstration on an $N \approx 280{,}000$-node network, which is infeasible for network models with $\Omega(N^2)$ inference cost.

1 Introduction

Network analysis methods such as MMSB [1], ERGMs [20], spectral clustering [17] and latent feature models [12] require the adjacency matrix $A$ of the network as input, reflecting the natural assumption that networks are best represented as a set of edges taking the values 0 (absent) or 1 (present). This assumption is intuitive, reasonable, and often necessary for some tasks, such as link prediction, but it comes at a cost (which is not always necessary, as we will discuss later) for other tasks, such as community detection in both the single-membership and admixture (mixed-membership) settings. The fundamental difference between link prediction and community detection is that the first is concerned with link outcomes on pairs of vertices, for which providing links as input is intuitive. However, the second task is about discovering the community memberships of individual vertices, and links are in fact no longer the only sensible representation. By representing the input network as a bag of triangular motifs, by which we mean vertex triples with 2 or 3 edges, one can design novel models for mixed-membership community detection that outperform models based on the adjacency matrix representation.

The main advantage of the bag-of-triangles representation lies in its huge reduction of computational cost for certain network analysis problems, with little or no loss of outcome quality. In the traditional edge representation, if $N$ is the number of vertices, then the adjacency matrix has size $\Theta(N^2)$; thus, any network analysis algorithm that touches every element must have $\Omega(N^2)$ runtime complexity.
For probabilistic network models, this statement applies to the cost of approximate inference.

[Figure 1: Four types of triangular motifs: (a) full-triangle; (b) 2-triangle; (c) 1-triangle; (d) empty-triangle. For mixed-membership community detection, we only focus on full-triangles and 2-triangles.]

For example, the Mixed Membership Stochastic Blockmodel (MMSB) [1] has $\Theta(N^2)$ latent variables, implying an inference cost of $\Omega(N^2)$ per iteration. Looking beyond, the popular $p^*$ or Exponential Random Graph models [20] are normally estimated via MCMC-MLE, which entails drawing network samples (each of size $\Theta(N^2)$) from some importance distribution. Finally, latent factor models such as [12] have only $\Theta(N)$ latent variables, but the Markov blanket of each variable depends on $\Theta(N)$ observed variables, resulting in $\Theta(N^2)$ computation per sweep over all variables. With an inference cost of $\Omega(N^2)$, even modestly large networks with only $\sim 10{,}000$ vertices are infeasible, to say nothing of modern social networks with millions of vertices or more.

On the other hand, it can be shown that the number of 2- and 3-edge triangular motifs is upper-bounded by $O(\sum_i D_i^2)$, where $D_i$ is the degree of vertex $i$. For networks with low maximum degree, this quantity is much smaller than $N^2$, allowing us to construct more parsimonious models with faster inference algorithms. Moreover, for networks with high maximum degree, one can subsample $O(N\nu^2)$ of these triangular motifs in a node-centric fashion, where $\nu$ is a user-chosen parameter. Specifically, we assign triangular motifs to nodes in a natural manner, and then subsample motifs only from nodes with too many of them. In contrast, MMSB and latent factor models rely on distributions over 0/1-edges (i.e. edge probabilities), and for real-world networks these distributions cannot be preserved with small (i.e. $o(N^2)$) sample sizes, because the 0-edges asymptotically outnumber the 1-edges.

As we will show, a triangular representation does not preserve all the information found in an edge representation. Nevertheless, we argue that one should represent complex data objects in a task-dependent manner, especially since computational cost is becoming a bottleneck for real-world problems like analyzing web-scale network data. The idea of transforming the input representation (e.g. from network to bag-of-triangles) for better task-specific performance is not new. A classic example is the bag-of-words representation of a document, in which the ordering of words is discarded. This representation has proven effective in natural language processing tasks such as topic modeling [2], even though it eliminates practically all grammatical information. Another example, from computer vision, is the use of superpixels to represent images [3, 4]. By grouping adjacent pixels into larger superpixels, one obtains a more compact image representation, in turn leading to faster and more meaningful algorithms.

When it comes to networks, triangular motifs (Figure 1) are already of significant interest in biology [13], social science [19, 9, 10, 16], and data mining [21, 18, 8]. In particular, 2- and 3-edge triangular motifs are central to the notion of transitivity in the social sciences: if we observe edges A-B and B-C, does A have an edge to C as well? Transitivity is of special importance, because high transitivity (i.e. we frequently observe the third edge A-C) intuitively leads to stronger clusters with more within-cluster edges.
In fact, the ratio of 3-edge triangles to connected vertex triples (i.e. 2- and 3-edge triangular motifs) is precisely the definition of the network clustering coefficient [16], which is a popular measure of cluster strength.

In the following sections, we begin by characterizing the triangular motifs, after which we develop a mixed-membership model and inference algorithm based on these motifs. Our model, which we call MMTM, the Mixed-Membership Triangular Model, performs mixed-membership community detection, assigning each vertex $i$ to a mixture of communities. This allows for better outlier detection and more informative visualization compared to single-membership modeling. In addition, mixed-membership modeling has two key advantages: first, MM models such as MMSB, Latent Dirichlet Allocation and our MMTM are easily modified for specialized tasks, as evidenced by the rich literature on topic models [2, 1, 14, 5]. Second, MM models over disparate data types (text, network, etc.) can be combined by fusing their latent spaces, resulting in a multi-view model; for example, [14, 5] model both text and network data from the same mixed-membership vectors. Thus, our MMTM can serve as a basic modeling component for massive real-world networks with copious side information.

After developing our model and inference algorithm, we present simulated experiments comparing them, on a variety of network types, to an adjacency-matrix-based model (MMSB) and its inference algorithm. These experiments show that triangular mixed-membership modeling results in both faster inference and more accurate mixed-membership recovery. We conclude by demonstrating our model and algorithm on a network with $N \approx 280{,}000$ nodes and $\approx 2{,}300{,}000$ edges, which is far too large for $\Omega(N^2)$ inference algorithms such as variational MMSB [1] and the Gibbs-sampling MMSB inference algorithm we developed for our experiments.

2 Triangular Motif Representation of a Network

In this work, we consider undirected networks over $N$ vertices, such as social networks. Most of the ideas presented here also generalize to directed networks, though the analysis is more involved, since directed networks can generate more motifs than undirected ones. To prevent confusion, we shall use the term "1-edge" to refer to edges that exist between two vertices, and the term "0-edge" to refer to missing edges. Now, define a triangular motif $E_{ijk}$ involving vertices $i < j < k$ to be the type of subgraph over these 3 vertices. There are 4 basic classes of triangular motifs (Figure 1), distinguished by their number of 1-edges: full-triangle $\Delta_3$ (three 1-edges), 2-triangle $\Delta_2$ (two 1-edges), 1-triangle $\Delta_1$ (one 1-edge), and empty-triangle $\Delta_0$ (no 1-edges). The total number of triangles, over all 4 classes, is $\Theta(N^3)$. However, our goal is not to account for all 4 classes; instead, we focus on $\Delta_3$ and $\Delta_2$ while ignoring $\Delta_1$ and $\Delta_0$. We have three primary motivations for this:

1. In the network literature, the most commonly studied "network motifs" [13], defined as patterns of significantly recurring inter-connections in complex networks, are the three-node connected subgraphs (namely $\Delta_3$ and $\Delta_2$) [13, 19, 9, 10, 16].

2. Since the full-triangle and 2-triangle classes are regarded as the basic structural elements of most networks [19, 13, 9, 10, 16], we naturally expect them to characterize most of the community structure in networks (cf. the network clustering coefficient, as explained in the introduction).
In particular, the $\Delta_3$ and $\Delta_2$ triangular motifs preserve almost all 1-edges from the original network: every 1-edge appears in some triangular motif $\Delta_2$, $\Delta_3$, except for isolated 1-edges (i.e. connected components of size 2), which are less interesting from a large-scale community detection perspective.

3. For real networks, which have far more 0- than 1-edges, focusing only on $\Delta_3$ and $\Delta_2$ greatly reduces the number of triangular motifs, via the following lemma:

Lemma 1. The total number of $\Delta_3$'s and $\Delta_2$'s is upper bounded by $\sum_i \frac{1}{2} D_i (D_i - 1) = O(\sum_i D_i^2)$, where $D_i$ is the degree of vertex $i$.

Proof. Let $N_i$ be the neighbor set of vertex $i$. For each vertex $i$, form the set $T_i$ of tuples $(i, j, k)$ where $j < k$ and $j, k \in N_i$, which represents the set of all pairs of neighbors of $i$. Because $j$ and $k$ are neighbors of $i$, for every tuple $(i, j, k) \in T_i$, $E_{ijk}$ is either a $\Delta_3$ or a $\Delta_2$. It is easy to see that each $\Delta_2$ is accounted for by exactly one $T_i$, where $i$ is the center vertex of the $\Delta_2$, and that each $\Delta_3$ is accounted for by three sets $T_i$, $T_j$ and $T_k$, one for each vertex in the full-triangle. Thus, $\sum_i |T_i| = \sum_i \frac{1}{2} D_i (D_i - 1)$ is an upper bound on the total number of $\Delta_3$'s and $\Delta_2$'s.

For networks with low maximum degree $D$, $O(\sum_i D_i^2) = O(ND^2)$ is typically much smaller than $\Theta(N^2)$, allowing triangular models to scale to larger networks than edge-based models. For networks with high maximum degree, we suggest the following node-centric subsampling procedure, which we call $\nu$-subsampling: for each vertex $i$ with degree $D_i > \nu$, for some threshold $\nu$, sample $\frac{1}{2}\nu(\nu - 1)$ triangles without replacement and uniformly at random from $T_i$; intuitively, this is similar to capping the network's maximum degree at $D_s = \nu$. A full-triangle $\Delta_3$ associated with vertices $i$, $j$ and $k$ appears in the final subsample if it has been subsampled from at least one of $T_i$, $T_j$ and $T_k$. To obtain the set of all subsampled triangles $\Delta_2$ and $\Delta_3$, we simply take the union of the subsampled triangles from each $T_i$, discarding the full-triangles duplicated across subsamples.

Although this node-centric subsampling does not preserve all properties of a network, such as the distribution of node degrees, it approximately preserves the local cluster properties of each vertex, thus capturing most of the community structure in networks. Specifically, the "local" clustering coefficient (LCC) of each vertex $i$, defined as the ratio of the number of $\Delta_3$'s touching $i$ to the number of $\Delta_3$'s and $\Delta_2$'s touching $i$, is well-preserved. This follows from subsampling the $\Delta_3$'s and $\Delta_2$'s at $i$ uniformly at random, though the LCC has a small upwards bias, since each $\Delta_3$ may also be sampled by the other two vertices $j$ and $k$. Hence, we expect community detection based on the subsampled triangles to be nearly as accurate as with the original set of triangles, which our experiments will show.

We note that other subsampling strategies [11, 22] preserve other network properties, such as degree distribution, diameter, and inter-node random walk times. In our triangular model, the main property of interest is the distribution over $\Delta_3$ and $\Delta_2$, analogous to how latent factor models and MMSB model distributions over 0- and 1-edges. Thus, subsampling strategies that preserve $\Delta_3/\Delta_2$ distributions (e.g. our $\nu$-subsampling) are appropriate for our model. In contrast, 0/1-edge subsampling for MMSB and latent factor models is difficult: most networks have $\Theta(N^2)$ 0-edges but only $o(N^2)$ 1-edges, thus sampling $o(N^2)$ 0/1-edges leads to high variance in their distribution.
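The constructions in Lemma 1 and the $\nu$-subsampling step translate directly into code. The sketch below is our own illustrative implementation (the adjacency structure is assumed to be a dict of neighbor sets): it enumerates each $T_i$, classifies each neighbor pair as a $\Delta_3$ or $\Delta_2$, and caps the pairs drawn from any vertex with degree above $\nu$.

    import random
    from itertools import combinations

    def triangle_motifs(adj, nu=None):
        # adj: dict vertex -> set of neighbors (undirected graph).
        # Returns (two_tris, full_tris): 2-triangles as (center, j, k) tuples,
        # and full-triangles as frozensets {i, j, k}, deduplicated across T_i's.
        two_tris, full_tris = [], set()
        for i, nbrs in adj.items():
            pairs = list(combinations(sorted(nbrs), 2))   # the set T_i
            if nu is not None and len(nbrs) > nu:
                cap = nu * (nu - 1) // 2                  # nu-subsampling
                pairs = random.sample(pairs, min(cap, len(pairs)))
            for j, k in pairs:
                if k in adj[j]:           # edge (j,k) present: full-triangle
                    full_tris.add(frozenset((i, j, k)))
                else:                     # 2-triangle centered at i
                    two_tris.append((i, j, k))
        return two_tris, full_tris

On this representation, the (global) clustering coefficient of Section 1 is simply 3 * len(full_tris) / (3 * len(full_tris) + len(two_tris)), since each full-triangle appears in three of the $T_i$'s.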
3 Mixed-Membership Triangular Model

Given a network, now represented by triangular motifs $\Delta_3$ and $\Delta_2$, our goal is to perform community detection for each network vertex $i$, in the same sense as an MMSB model would enable. Under an MMSB, each vertex $i$ is assigned to a mixture over communities, as opposed to traditional single-membership community detection, which assigns each vertex to exactly one community. By taking a mixed-membership approach, one gains many benefits over single-membership models, such as outlier detection, improved visualization, and better interpretability [2, 1].

Following a design principle similar to the one underlying the MMSB models, we now present a new mixed-membership network model built on the more parsimonious triangular representation. For each triplet of vertices $i, j, k \in \{1, \ldots, N\}$, $i < j < k$: if the subgraph on $i, j, k$ is a 2-triangle with $i$, $j$, or $k$ at the center, then let $E_{ijk} = 1$, 2 or 3 respectively, and if the subgraph is a full-triangle, then let $E_{ijk} = 4$. Whenever $i, j, k$ corresponds to a 1- or an empty-triangle, we do not model $E_{ijk}$. We assume $K$ latent communities, and that each vertex takes a distribution (i.e. mixed-membership) over them. The observed bag-of-triangles $\{E_{ijk}\}$ is generated according to (1) the distribution over community memberships at each vertex, and (2) a tensor of triangle generation probabilities, containing different triangle probabilities for different combinations of communities.

[Figure 2: Graphical model representation for MMTM, our mixed-membership model over triangular motifs.]

More specifically, each vertex $i$ is associated with a community mixed-membership vector $\theta_i \in \Delta^{K-1}$ restricted to the $(K-1)$-simplex $\Delta^{K-1}$. This mixed-membership vector $\theta_i$ is used to generate community indicators $s_{i,jk} \in \{1, \ldots, K\}$, each of which represents the community chosen by vertex $i$ when it is forming a triangle with vertices $j$ and $k$. The probability of observing a triangular motif $E_{ijk}$ depends on the community triplet $s_{i,jk}, s_{j,ik}, s_{k,ij}$ and on a tensor of multinomial parameters $B$. Let $x, y, z \in \{1, \ldots, K\}$ be the values of $s_{i,jk}, s_{j,ik}, s_{k,ij}$, and assume WLOG that $x < y < z$ (1). Then $B_{xyz} \in \Delta^3$ represents the probabilities of generating the 4 triangular motifs (2) among vertices $i$, $j$ and $k$. In detail, $B_{xyz,1}$ is the probability of the 2-triangle whose center vertex has community $x$, and analogously for $B_{xyz,2}$ and community $y$, and for $B_{xyz,3}$ and community $z$; $B_{xyz,4}$ is the probability of the full-triangle. The MMTM generative model is summarized below; see Figure 2 for a graphical model illustration, and the sampling sketch that follows the list.

- Triangle tensor: $B_{xyz} \sim \text{Dirichlet}(\lambda)$ for all $x, y, z \in \{1, \ldots, K\}$, where $x < y < z$.
- Community mixed-membership vectors: $\theta_i \sim \text{Dirichlet}(\alpha)$ for all $i \in \{1, \ldots, N\}$.
- For each triplet $(i, j, k)$ where $i < j < k$:
  - Community indices $s_{i,jk} \sim \text{Discrete}(\theta_i)$, $s_{j,ik} \sim \text{Discrete}(\theta_j)$, $s_{k,ij} \sim \text{Discrete}(\theta_k)$.
  - Generate the triangular motif $E_{ijk}$ based on $B_{xyz}$ and the ordered values of $s_{i,jk}, s_{j,ik}, s_{k,ij}$; see Table 1 for the exact conditional probabilities. There are 6 entries in Table 1, corresponding to the 6 possible orderings of $s_{i,jk}, s_{j,ik}, s_{k,ij}$.
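As a sanity check on the generative process, the following sketch (our own code, assuming numpy) samples motif types for a given set of eligible triplets. It handles the Table 1 permutation via community ranks, breaking ties arbitrarily rather than with the special-case treatment of footnote (1).

    import numpy as np

    def sample_mmtm(theta, B, triplets, seed=0):
        # theta: (N, K) mixed-membership vectors; B: dict mapping a sorted
        # community triple (x, y, z) to a length-4 motif distribution;
        # triplets: vertex triples (i, j, k), i < j < k, to be modeled.
        rng = np.random.default_rng(seed)
        K = theta.shape[1]
        E = {}
        for (i, j, k) in triplets:
            s = np.array([rng.choice(K, p=theta[v]) for v in (i, j, k)])
            x, y, z = np.sort(s)
            base = B[(x, y, z)]                   # (B_xyz,1, ..., B_xyz,4)
            rank = np.empty(3, dtype=int)
            rank[np.argsort(s)] = np.arange(3)    # rank of s_i, s_j, s_k
            # Motif type e = 1, 2, 3 has probability base[rank of its center];
            # the full-triangle (e = 4) has probability base[3], per Table 1.
            probs = [base[rank[0]], base[rank[1]], base[rank[2]], base[3]]
            E[(i, j, k)] = int(rng.choice(4, p=probs)) + 1   # codes 1..4
        return E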
4 Inference

We adopt a collapsed, blocked Gibbs sampling approach, in which $\theta$ and $B$ have been integrated out; thus, only the community indices $s$ need to be sampled. For each triplet $(i, j, k)$ where $i < j < k$,

$P(s_{i,jk}, s_{j,ik}, s_{k,ij} \mid s_{-ijk}, E, \alpha, \lambda) \propto P(E_{ijk} \mid E_{-ijk}, s, \lambda) \, P(s_{i,jk} \mid s_{i,-jk}, \alpha) \, P(s_{j,ik} \mid s_{j,-ik}, \alpha) \, P(s_{k,ij} \mid s_{k,-ij}, \alpha)$,

where $s_{-ijk}$ is the set of all community memberships except $s_{i,jk}, s_{j,ik}, s_{k,ij}$, and $s_{i,-jk}$ is the set of all community memberships of vertex $i$ except $s_{i,jk}$. The last three terms are predictive distributions of a multinomial-Dirichlet model, with the multinomial parameter $\theta$ marginalized out:

$P(s_{i,jk} \mid s_{i,-jk}, \alpha) = \frac{\#[s_{i,-jk} = s_{i,jk}] + \alpha}{\#[s_{i,-jk}] + K\alpha}$.

The first term is also a multinomial-Dirichlet predictive distribution (refer to the appendix for details). A sketch of the resulting blocked update appears after Table 1.

Table 1: Conditional probabilities of $E_{ijk}$ given $s_{i,jk}$, $s_{j,ik}$ and $s_{k,ij}$. We define $x, y, z$ to be the ordered (i.e. sorted) values of $s_{i,jk}, s_{j,ik}, s_{k,ij}$.

  $s_{i,jk} < s_{j,ik} < s_{k,ij}$ : $E_{ijk} \sim \text{Discrete}([B_{xyz,1}, B_{xyz,2}, B_{xyz,3}, B_{xyz,4}])$
  $s_{i,jk} < s_{k,ij} < s_{j,ik}$ : $E_{ijk} \sim \text{Discrete}([B_{xyz,1}, B_{xyz,3}, B_{xyz,2}, B_{xyz,4}])$
  $s_{j,ik} < s_{i,jk} < s_{k,ij}$ : $E_{ijk} \sim \text{Discrete}([B_{xyz,2}, B_{xyz,1}, B_{xyz,3}, B_{xyz,4}])$
  $s_{j,ik} < s_{k,ij} < s_{i,jk}$ : $E_{ijk} \sim \text{Discrete}([B_{xyz,3}, B_{xyz,1}, B_{xyz,2}, B_{xyz,4}])$
  $s_{k,ij} < s_{i,jk} < s_{j,ik}$ : $E_{ijk} \sim \text{Discrete}([B_{xyz,2}, B_{xyz,3}, B_{xyz,1}, B_{xyz,4}])$
  $s_{k,ij} < s_{j,ik} < s_{i,jk}$ : $E_{ijk} \sim \text{Discrete}([B_{xyz,3}, B_{xyz,2}, B_{xyz,1}, B_{xyz,4}])$

(1) The cases $x = y = z$, $x = y < z$ and $x < y = z$ require special treatment, due to the ambiguity caused by having identical communities. In the interest of keeping our discussion at a high level, we refer the reader to the appendix for these cases.

(2) It is possible to generate a set of triangles that does not correspond to a network, e.g. a 2-triangle centered on $i$ for $(i, j, k)$ followed by a full-triangle for $(j, k, \ell)$, which produces a mismatch on the edge $(j, k)$. This is a consequence of using a bag-of-triangles model, just as the bag-of-words model in Latent Dirichlet Allocation can generate sets of words that do not correspond to grammatical sentences. In practice, this is not an issue for either our model or LDA, as both models are used for mixed-membership recovery, rather than data simulation.
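A sketch of the blocked update for one triangle follows. This is our own code: `lik` stands in for the collapsed multinomial-Dirichlet term $P(E_{ijk} \mid E_{-ijk}, s, \lambda)$, whose count bookkeeping we omit, and the count vectors are assumed to exclude the triangle being resampled.

    def blocked_gibbs_weights(cnt_i, cnt_j, cnt_k, K, alpha, lik):
        # cnt_v[c]: number of other triangles in which vertex v chose community c.
        # Returns the normalized distribution over community triples (x, y, z).
        tot_i, tot_j, tot_k = sum(cnt_i), sum(cnt_j), sum(cnt_k)
        w = {}
        for x in range(K):
            for y in range(K):
                for z in range(K):
                    w[(x, y, z)] = (
                        (cnt_i[x] + alpha) / (tot_i + K * alpha) *
                        (cnt_j[y] + alpha) / (tot_j + K * alpha) *
                        (cnt_k[z] + alpha) / (tot_k + K * alpha) *
                        lik(x, y, z)
                    )
        total = sum(w.values())
        return {key: val / total for key, val in w.items()}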
5 Comparing Mixed-Membership Network Models on Synthetic Networks

For a mixed-membership network model to be useful, it must recover some meaningful notion of mixed community membership for each vertex. The precise definition of network community has been a subject of much debate, and various notions of community [1, 15, 17, 12, 6] have been proposed under different motivations. Our MMTM, too, conveys another notion of community, based on membership in full-triangles $\Delta_3$ and 2-triangles $\Delta_2$, which are key aspects of network clustering coefficients. In our simulations, we compare our MMTM against an adjacency-matrix-based model (MMSB), in terms of how well they recover mixed-memberships from networks generated under a range of assumptions. Note that some of these synthetic networks will not match the generative assumptions of either our model or MMSB; this is intentional, as we want to compare the performance of both models under model misspecification. We shall also demonstrate that MMTM leads to faster inference, particularly when $\nu$-subsampling triangles (as described in Section 2). Intuitively, we expect the mixed-membership recovery of our inference algorithm to depend on (a) the degree distribution of the network, and (b) the "degree limit" $\nu$ used in subsampling the network; performance should increase as the number of vertices $i$ having degree $D_i \leq \nu$ goes up. In particular, our experiments will demonstrate that subsampling yields good performance even when the network contains a few vertices with very large degree $D_i$ (a characteristic of many real-world networks).

Synthetic networks: We compared our MMTM to MMSB (3) [1] on multiple synthetic networks, evaluating them according to how well their inference algorithms recover the vertex mixed-membership vectors $\theta_i$. Each network was generated from $N = 4{,}000$ mixed-membership vectors $\theta_i$ of dimensionality $K = 5$ (i.e. 5 possible communities), according to one of several models:

1. The Mixed Membership Stochastic Blockmodel [1], an admixture generalization of the stochastic blockmodel. The probability of a link from $i$ to $j$ is $\theta_i^\top B \theta_j$ for some block matrix $B$, and we convert all directed edges into undirected edges. In our experiments, we use a $B$ with on-diagonal elements $B_{aa} = 1/80$ and off-diagonal elements $B_{ab} = 1/800$. Our values of $B$ are lower than typically seen in the literature, because they are intended to replicate the 1-edge density of real-world networks with size around $N = 4{,}000$.

2. A simplex latent position model, where the probability of a link between $i, j$ is $\rho(1 - \frac{1}{2}\|\theta_i - \theta_j\|_1)$ for some scaling parameter $\rho$. In other words, the closer $\theta_i$ and $\theta_j$ are, the higher the link probability. Note that $0 \leq \|\theta_i - \theta_j\|_1 \leq 2$, because $\theta_i$ and $\theta_j$ lie in the simplex. We choose $\rho = 1/40$, again to reproduce the 1-edge density seen in real networks.

3. A "biased" scale-free model that combines the preferential attachment model [7] with a mixed-membership model. Specifically, we generated $M = 60{,}000$ 1-edges as follows: (a) pick a vertex $i$ with probability proportional to its degree; (b) randomly pick a destination community $k$ from $\theta_i$; (c) find the set $V_k$ of all vertices $v$ such that $\theta_{vk}$ is the largest element of $\theta_v$ (i.e. the vertices that mostly belong to community $k$); (d) within $V_k$, pick the destination vertex $j$ with probability proportional to its degree. The resulting network exhibits both a block-diagonal structure and a power-law degree distribution. In contrast, the other two models have binomial (i.e. Gaussian-like) degree distributions.

(3) MMSB is applicable to both directed and undirected networks; our experiments use the latter.

Table 2: Number of edges, maximum degree, and number of 3- and 2-edge triangles $\Delta_3, \Delta_2$ for each $N = 4{,}000$ synthetic network, as well as the number of triangles when subsampling at various degree thresholds $\nu$. MMSB inference is linear in #0,1-edges, while our MMTM's inference is linear in #$\Delta_3, \Delta_2$.

  Network             #0,1-edges   #1-edges   max(D_i)   #Delta_3,Delta_2   nu=20     nu=15     nu=10     nu=5
  MMSB                7,998,000    55,696     51         1,541,085          749,018   418,764   179,841   39,996
  Latent position     7,998,000    56,077     51         1,562,710          746,979   418,448   179,757   39,988
  Biased scale-free   7,998,000    60,000     231        3,176,927          497,737   304,866   144,206   35,470
  Pure membership     7,998,000    55,651     44         1,533,365          746,796   418,222   179,693   39,986

To use these models, we must input mixed-memberships $\theta_i$. These were generated as follows (see the code sketch below):

1. Divide the $N = 4{,}000$ vertices into 5 groups of size 800. Assign each group a (different) dominant community $k \in \{1, \ldots, 5\}$.

2. Within each group: (a) pick 160 vertices to have mixed-membership in 3 communities: 0.8 in the dominant community $k$, and 0.1 in each of two other randomly chosen communities; (b) the remaining 640 vertices have mixed-membership in 2 communities: 0.8 in the dominant community $k$, and 0.2 in one other randomly chosen community.

In other words, every vertex has a dominant community, and one or two other minor communities. Using these $\theta_i$'s, we generated one synthetic network for each of the three models described. In addition, we generated a fourth "pure membership" network under the MMSB model, using pure $\theta_i$'s with full membership in the dominant community. This network represents the special case of single-community membership. Statistics for all 4 networks can be found in Table 2.
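For reproducibility, the membership construction can be written as the following sketch (our own code, assuming numpy; the group and vertex counts are those given above).

    import numpy as np

    def make_memberships(N=4000, K=5, seed=0):
        # 5 groups of N/K vertices; per group, the first 160 vertices get weights
        # (0.8, 0.1, 0.1) over the dominant and two random minor communities,
        # and the rest get (0.8, 0.2) over the dominant and one minor community.
        rng = np.random.default_rng(seed)
        theta = np.zeros((N, K))
        group = N // K
        for g in range(K):
            others = [c for c in range(K) if c != g]
            for idx in range(g * group, (g + 1) * group):
                theta[idx, g] = 0.8
                if idx - g * group < 160:
                    for c in rng.choice(others, size=2, replace=False):
                        theta[idx, c] = 0.1
                else:
                    theta[idx, rng.choice(others)] = 0.2
        return theta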
In addition, we generated a fourth "pure membership" network under the MMSB model, using pure θi's with full membership in the dominant community. This network represents the special case of single-community membership. Statistics for all 4 networks can be found in Table 2.

Inference and Evaluation. For our MMTM,4 we used our collapsed, blocked Gibbs sampler for inference. The hyperparameters were fixed at α, λ = 0.1 and K = 5, and we ran each experiment for 2,000 iterations. For evaluation, we estimated all θi's using the last sample, and scored the estimates according to Σi ‖θ̂i − θi‖2, the sum of ℓ2 distances of each estimate θ̂i from its true value θi. These results were taken under the most favorable permutation for the θ̂i's, in order to avoid the permutation non-identifiability issue (a sketch of this scoring metric is given after this passage). We repeated every experiment 5 times. To investigate the effect of δ-subsampling triangles (Section 2), we repeated every MMTM experiment under four different values of δ: 20, 15, 10 and 5. The triangles were subsampled prior to running the Gibbs sampler, and they remained fixed during inference.

With MMSB, we opted not to use the variational inference algorithm of [1], because we wanted our experiments to be, as far as possible, a comparison of models rather than inference techniques. To accomplish this, we derived a collapsed, blocked Gibbs sampler for the MMSB model, with added Beta hyperparameters λ1, λ2 on each element of the block matrix B. The mixed-membership vectors θi (πi in the original paper) and block matrix B were integrated out, and we Gibbs sampled each edge (i, j)'s associated community indicators zi→j, zi←j in a block fashion. Hence, this MMSB sampler uses the exact same techniques as our MMTM sampler, ensuring that we are comparing models rather than inference strategies. Furthermore, its per-iteration runtime is still Θ(N²), equal to the original MMSB variational algorithm. All experiments were conducted in exactly the same manner as with MMTM, with the MMSB hyperparameters fixed at α, λ1, λ2 = 0.1 and K = 5.

Results. Figure 3 plots the cumulative ℓ2 error for each experiment, as well as the time taken per trial. On all 4 networks, the full MMTM model performs better than MMSB, even on the MMSB-generated network! MMTM also requires less runtime for all but the biased scale-free network, which has a much larger maximum degree than the others (Table 2). Furthermore, δ-subsampling is effective: MMTM with δ = 20 runs faster than full MMTM, and still outperforms MMSB while approaching full MMTM in accuracy. The runtime benefit is most noticeable on the biased scale-free network, underscoring the need to subsample real-world networks with high maximum degree. We hypothesize that MMSB's poorer performance on networks of this size (N = 4,000) results from having Θ(N²) latent variables, while noting that the literature has only considered smaller N < 1,000 networks [1]. Compared to MMTM, having many latent variables not only increases runtime per iteration, but also the number of iterations required for convergence, since the latent variable state space grows exponentially with the number of latent variables. In support of this, we have observed that the MMSB sampler's complete log-likelihood fluctuates greatly across all 2,000 iterations; in contrast, the MMTM sampler plateaus within 500 iterations, and remains stable.

Footnote 4: As explained in Section 2, we first need to preprocess the network adjacency list into the Δ3, Δ2 triangle representation. The time required is linear in the number of Δ3, Δ2 triangles, and is insignificant compared to the actual cost of MMTM inference.
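For concreteness, the permutation-corrected score can be computed as follows; this is a minimal sketch of our own, assuming K is small enough to enumerate all K! label permutations (K = 5 gives only 120):

```python
import numpy as np
from itertools import permutations

def l2_error_best_perm(theta_hat, theta):
    """Cumulative l2 error under the most favorable column permutation,
    handling the permutation non-identifiability of community labels."""
    K = theta.shape[1]
    best = np.inf
    for perm in permutations(range(K)):
        err = np.linalg.norm(theta_hat[:, list(perm)] - theta, axis=1).sum()
        best = min(best, err)
    return best
```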
[Figure 3 appears here: two bar-chart panels, "Mixed-membership community recovery: Accuracy" (cumulative ℓ2 error) and "Mixed-membership community recovery: Total runtime" (seconds), with bars for MMSB, MMTM, and MMTM with δ = 20, 15, 10, 5 on each of the four synthetic networks (MMSB, Latent position, Biased scale-free, Pure membership).]

Figure 3: Mixed-membership community recovery task: cumulative ℓ2 errors and runtime per trial for MMSB, MMTM and MMTM with δ-subsampling, on N = 4,000 synthetic networks.

Scalability Experiments. Although the preceding N = 4,000 experiments appear fairly small, in actual fact they are close to the feasible limit for adjacency-matrix-based models like MMSB. To demonstrate this, we generated four networks with sizes N ∈ {1000, 4000, 10000, 40000} from the MMSB generative model. The generative parameters for the N = 4,000 network are identical to our earlier experiment, while the parameters for the other three network sizes were adjusted to maintain the same average degree.5 We then ran the MMSB, MMTM, and MMTM with δ-subsampling inference algorithms on all 4 networks, and plotted the average per-iteration runtime in Figure 4. The figure clearly exposes the scalability differences between MMSB and MMTM. The δ-subsampled MMTM experiments show linear runtime dependence on N, which is expected since the number of subsampled triangles is O(Nδ²); a sketch of this node-centric subsampling step follows below. The full MMTM experiment is also roughly linear, though we caution that this is not necessarily true for all networks, particularly high-maximum-degree ones such as scale-free networks. Conversely, MMSB shows a clear quadratic dependence on N. In fact, we had to omit the MMSB N = 40,000 experiment because the latent variables would not fit in memory, and even if they did, the extrapolated runtime would have been unreasonably long.
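The following sketch shows one way to realize node-centric δ-subsampling consistent with the O(Nδ²) triangle count above. The paper's exact scheme lives in its Section 2 (not reproduced here), so treat the details, including the function name and the handling of duplicated 3-triangles, as our assumptions:

```python
import numpy as np

def delta_subsample_triangles(adj, delta=20, seed=0):
    """For each vertex i with degree > delta, keep only a random size-delta
    subset of its neighbors when enumerating the 2- and 3-edge triangles
    centered at i. adj is a dict {vertex: set(neighbors)} of an undirected
    graph. Note: a 3-edge triangle can be emitted once per center vertex;
    deduplication (if desired) is omitted here."""
    rng = np.random.default_rng(seed)
    triangles = []
    for i, nbrs in adj.items():
        nbrs = list(nbrs)
        if len(nbrs) > delta:
            nbrs = list(rng.choice(nbrs, size=delta, replace=False))
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                j, k = nbrs[a], nbrs[b]
                kind = 3 if k in adj[j] else 2   # full vs. 2-edge triangle
                triangles.append((i, j, k, kind))
    return triangles
```

Each center contributes at most delta·(delta − 1)/2 triangles, which gives the O(Nδ²) bound regardless of the original degree distribution.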
6 A Larger Network Demonstration

The MMTM model with δ-subsampling scales to even larger networks than the ones we have been discussing. To demonstrate this, we ran the MMTM Gibbs sampler with δ = 20 on the SNAP Stanford Web Graph,6 containing N = 281,903 vertices (webpages), 2,312,497 1-edges, and approximately 4 billion 2- and 3-edge triangles Δ3, Δ2, which we reduced to 11,353,778 via δ = 20 subsampling. Note that the vast majority of triangles are associated with exceptionally high-degree vertices, which make up a small fraction of the network. By using δ-subsampling, we limited the number of triangles that come from such vertices, thus making the network feasible for MMTM. We ran the MMTM sampler with settings identical to our synthetic experiments: 2,000 sampling iterations, hyperparameters fixed to α, λ = 0.1. The experiment took 74 hours, and we observed log-likelihood convergence within 500 iterations.

The recovered mixed-membership vectors θi are visualized in Figure 5. A key challenge is that the θi exist in the 4-simplex, which is difficult to visualize in two dimensions. To overcome this, Figure 5 uses both position and color to communicate the values of θi. Every vertex i is displayed as a circle ci, whose size is proportional to the network degree of i. The position of ci is equal to a convex combination of the 5 pentagon corners' (x, y) coordinates, where the coordinates are weighted by the elements of θi. In particular, circles ci at the pentagon's corners represent single-membership θi's, while circles on the lines connecting the corners represent θi's with mixed-membership in 2 communities. All other circles represent θi's with mixed-membership in ≥ 3 communities. Furthermore, each circle ci's color is also a θi-weighted convex combination, this time of the RGB values of 5 colors: blue, green, red, cyan and purple. This use of color helps distinguish between vertices with 2 versus 3 or more communities: for example, even though the largest circle sits on the blue-red line (which initially suggests mixed-membership in 2 communities), its dark green color actually comes from mixed-membership in 3 communities: green, red and cyan. A sketch of this position-and-color mapping follows below.

Footnote 5: Note that the maximum degree still increases with N, because MMSB has a binomial degree distribution.
Footnote 6: Available at http://snap.stanford.edu/data/web-Stanford.html

[Figure 4 appears here: "Per-iteration runtime for MMSB and MMTM Gibbs samplers", plotting time per iteration (s) against number of vertices (1,000 to 40,000), with curves for MMSB, MMTM, and MMTM with δ = 20, 15, 10, 5.]

Figure 4: Per-iteration runtimes for MMSB, MMTM and MMTM with δ-subsampling, on synthetic networks with N ranging from 1,000 to 40,000, but with constant average degree.

Figure 5: N = 281,903 Stanford web graph, MMTM mixed-membership visualization.

Most high-degree vertices (large circles) are found at the pentagon's corners, leading to the intuitive conclusion that the five communities are centered on hub webpages with many links. Interestingly, the highest-degree vertices are all mixed-membership, suggesting that these webpages (which are mostly frontpages) lie on the boundaries between the communities. Finally, if we focus on the sets of vertices near each corner, we see that the green and red sets have distinct degree (i.e. circle size) distributions, suggesting that those communities may be functionally different from the other three.
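A minimal sketch of the layout just described, assuming a regular pentagon and the five named colors; the function name and the exact corner/color coordinates are our own choices:

```python
import numpy as np

def simplex_layout(theta, degrees):
    """Pentagon visualization: each vertex's 2-D position and RGB color are
    theta-weighted convex combinations of 5 corners / 5 reference colors,
    and circle size is proportional to network degree."""
    assert theta.shape[1] == 5
    angles = 2 * np.pi * np.arange(5) / 5 + np.pi / 2
    corners = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (5, 2)
    colors = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0],            # blue, green, red
                       [0, 1, 1], [0.5, 0, 0.5]])                  # cyan, purple
    pos = theta @ corners            # (N, 2) positions inside the pentagon
    rgb = theta @ colors             # (N, 3) blended colors
    size = degrees / degrees.max()   # relative circle sizes
    return pos, rgb, size
```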
7 Future Work and Conclusion

We have focused exclusively on triangular motifs because of their popularity in the literature, their relationship to community structure through the network clustering coefficient, and the ability to subsample them in a natural, node-centric fashion with minor impact on accuracy. However, the bag-of-network-motifs idea extends beyond triangles: one could easily consider subgraphs over 4 or more vertices, as in [13]. As with triangular motifs, it is algorithmically infeasible to consider all possible subgraphs; rather, we must focus our attention on a meaningful subset of them. Nevertheless, higher-order motifs could be more suited to particular tasks, thus meriting their investigation. In modeling terms, we have applied triangular motifs to a generative mixed-membership setting, which is suitable for visualization but not necessarily for attribute prediction. Recent developments in constrained learning of generative models [23, 24] have yielded significant improvements in predictive accuracy, and these techniques are also applicable to mixed-membership triangular modeling. Also, given how well δ = 20 subsampling works for MMTM at N = 4,000, the next step would be investigating how to adaptively choose δ as N increases, in order to achieve good performance.

To summarize, we have introduced the bag-of-triangles representation as a parsimonious alternative to the network adjacency matrix, and developed a model (MMTM) and inference algorithm for mixed-membership community detection in networks. Compared to mixed-membership models that use the adjacency matrix (exemplified by MMSB), our model features a much smaller latent variable space, leading to faster inference and better performance at mixed-membership recovery. When combined with triangle subsampling, our model and inference algorithm scale easily to networks with 100,000s of vertices, which are completely infeasible for Θ(N²) adjacency-matrix-based models: the adjacency matrix might not even fit in memory, to say nothing of runtime. As a final note, we speculate that the local nature of the triangles lends itself better to parallel inference than the adjacency matrix representation; it may be possible to find good "triangle separators", small subsets of triangles that divide the remaining triangles into large, non-vertex-overlapping subsets, which can then be inferred in parallel. This is similar to classical 1-edge separators that divide networks into non-overlapping subgraphs, which are unfortunately inapplicable to adjacency-matrix-based models, as they require separators over both the 0- and 1-edges. With triangle separators, we expect triangle models to scale to networks with millions of vertices and more.

Acknowledgments

This work was supported by AFOSR FA9550010247 and NIH 1R01GM093156 to Eric P. Xing. Qirong Ho is supported by an Agency for Science, Technology and Research, Singapore fellowship. Junming Yin is a Lane Fellow under the Ray and Stephanie Lane Center for Computational Biology.

References

[1] E.M. Airoldi, D.M. Blei, S.E. Fienberg, and E.P. Xing. Mixed membership stochastic blockmodels. The Journal of Machine Learning Research, 9:1981–2014, 2008.
[2] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.
[3] L. Cao and L. Fei-Fei. Spatially coherent latent topic model for concurrent segmentation and classification of objects and scenes. In ICCV 2007, pages 1–8. IEEE, 2007.
[4] B. Fulkerson, A. Vedaldi, and S. Soatto. Class segmentation and object localization with superpixel neighborhoods. In ICCV 2009, pages 670–677. IEEE, 2009.
[5] Q. Ho, J. Eisenstein, and E.P. Xing. Document hierarchies from text and links. In Proceedings of the 21st International Conference on World Wide Web, pages 739–748. ACM, 2012.
[6] Q. Ho, A. Parikh, L. Song, and E.P. Xing. Multiscale community blockmodel for network exploration. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 2011.
[7] M.J. Keeling and K.T.D. Eames. Networks and epidemic models. Journal of the Royal Society Interface, 2(4):295–307, 2005.
[8] R. Kondor, N. Shervashidze, and K.M. Borgwardt. The graphlet spectrum. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 529–536. ACM, 2009.
[9] D. Krackhardt and M. Handcock. Heider vs Simmel: Emergent features in dynamic structures. Statistical Network Analysis: Models, Issues, and New Directions, pages 14–27, 2007.
[10] J. Leskovec, L. Backstrom, R. Kumar, and A. Tomkins. Microscopic evolution of social networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 462–470. ACM, 2008.
[11] J. Leskovec and C. Faloutsos. Sampling from large graphs. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 631–636. ACM, 2006.
[12] K.T. Miller, T.L. Griffiths, and M.I. Jordan. Nonparametric latent feature models for link prediction. Advances in Neural Information Processing Systems (NIPS), pages 1276–1284, 2009.
[13] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon. Network motifs: Simple building blocks of complex networks. Science, 298(5594):824–827, 2002.
[14] R.M. Nallapati, A. Ahmed, E.P. Xing, and W.W. Cohen. Joint latent topic models for text and citations. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 542–550. ACM, 2008.
[15] M.E.J. Newman. Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23):8577–8582, 2006.
[16] M.E.J. Newman and J. Park. Why social networks are different from other types of networks. arXiv preprint cond-mat/0305612, 2003.
[17] A.Y. Ng, M.I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[18] N. Shervashidze, S.V.N. Vishwanathan, T. Petri, K. Mehlhorn, and K. Borgwardt. Efficient graphlet kernels for large graph comparison. In Proceedings of the International Workshop on Artificial Intelligence and Statistics. Society for Artificial Intelligence and Statistics, 2009.
[19] G. Simmel and K.H. Wolff. The Sociology of Georg Simmel. Free Press, 1950.
[20] T.A.B. Snijders. Markov chain Monte Carlo estimation of exponential random graph models. Journal of Social Structure, 3(2):1–40, 2002.
[21] C.E. Tsourakakis. Fast counting of triangles in large real networks without counting: Algorithms and laws. In Data Mining, 2008. ICDM '08. Eighth IEEE International Conference on, pages 608–617. IEEE, 2008.
[22] A. Vattani, D. Chakrabarti, and M. Gurevich. Preserving personalized PageRank in subgraphs. In ICML 2011, 2011.
[23] J. Zhu, A. Ahmed, and E.P. Xing. MedLDA: maximum margin supervised topic models for regression and classification. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1257–1264. ACM, 2009.
[24] J. Zhu, N. Chen, and E.P. Xing. Infinite latent SVM for classification and multi-task learning. Advances in Neural Information Processing Systems, 25.
Near-optimal Differentially Private Principal Components

Kamalika Chaudhuri, UC San Diego, [email protected]
Anand D. Sarwate, TTI-Chicago, [email protected]
Kaushik Sinha, UC San Diego, [email protected]

Abstract

Principal components analysis (PCA) is a standard tool for identifying good low-dimensional approximations to data sets in high dimension. Many current data sets of interest contain private or sensitive information about individuals. Algorithms which operate on such data should be sensitive to the privacy risks in publishing their outputs. Differential privacy is a framework for developing tradeoffs between privacy and the utility of these outputs. In this paper we investigate the theory and empirical performance of differentially private approximations to PCA and propose a new method which explicitly optimizes the utility of the output. We demonstrate that on real data, there is a large performance gap between the existing method and our method. We show that the sample complexity for the two procedures differs in the scaling with the data dimension, and that our method is nearly optimal in terms of this scaling.

1 Introduction

Dimensionality reduction is a fundamental tool for understanding complex data sets that arise in contemporary machine learning and data mining applications. Even though a single data point can be represented by hundreds or even thousands of features, the phenomena of interest are often intrinsically low-dimensional. By reducing the "extrinsic" dimension of the data to its "intrinsic" dimension, analysts can discover important structural relationships between features, more efficiently use the transformed data for learning tasks such as classification or regression, and greatly reduce the space required to store the data. One of the oldest and most classical methods for dimensionality reduction is principal components analysis (PCA), which computes a low-rank approximation to the second moment matrix of a set of points in R^d. The rank k of the approximation is chosen to be the intrinsic dimension of the data. We view this procedure as specifying a k-dimensional subspace of R^d.

Much of today's machine learning is performed on the vast amounts of personal information collected by private companies and government agencies about individuals, such as customers, users, and subjects. These datasets contain sensitive information about individuals and typically involve a large number of features. It is therefore important to design machine-learning algorithms which discover important structural relationships in the data while taking into account its sensitive nature. We study approximations to PCA which guarantee differential privacy, a cryptographically motivated definition of privacy [9] that has gained significant attention over the past few years in the machine-learning and data-mining communities [19, 21, 20, 10, 23]. Differential privacy measures privacy risk by a parameter α that bounds the log-likelihood ratio of the output of a (private) algorithm under two databases differing in a single individual.

There are many general tools for providing differential privacy. The sensitivity method [9] computes the desired algorithm (PCA) on the data and then adds noise proportional to the maximum change that can be induced by changing a single point in the data set. The PCA algorithm is very sensitive in this sense because the top eigenvector can change by 90° by changing one point in the data set.
Relaxations such as smoothed sensitivity [24] are difficult to compute in this setting as well. The SULQ method of Blum et al. [2] adds noise to the second moment matrix and then runs PCA on the noisy matrix. As our experiments show, the amount of noise required is often quite severe, and SULQ seems impractical for data sets of moderate size.

The general SULQ method does not take into account the quality of approximation to the non-private PCA output. We address this by proposing a new method, PPCA, that is an instance of the exponential mechanism of McSherry and Talwar [22]. For any k < d, this differentially private method outputs a k-dimensional subspace; the output is biased towards subspaces which are close to the output of PCA. In our case, the method corresponds to sampling from the matrix Bingham distribution. We implement this method using a Markov chain Monte Carlo (MCMC) procedure due to Hoff [15] and show that it achieves significantly better empirical performance.

In order to understand the performance gap, we prove sample complexity bounds for the case k = 1 for SULQ and PPCA, as well as a general lower bound on the sample complexity for any differentially private algorithm. We show that (up to log factors) the sample complexity scales as Θ(d^{3/2}√(log d)) for SULQ and as O(d) for PPCA. Furthermore, any differentially private algorithm requires Ω(d) samples, showing that PPCA is nearly optimal in terms of sample complexity as a function of data dimension. These theoretical results suggest that our experiments exhibit the limit of how well differentially private algorithms can perform, and our experiments show that this gap should persist for general k.

There are several interesting open questions suggested by this work. One set of issues is computational. Differential privacy is a mathematical definition, but algorithms must be implemented using finite-precision machines. Privacy and computation interact in many places, including pseudorandomness, numerical stability, optimization, and in the MCMC procedure we use to implement PPCA; investigating the impact of approximate sampling is an avenue for future work. A second set of issues is theoretical: while the privacy guarantees of PPCA hold for all k, our theoretical analysis of sample complexity applies only to k = 1, in which the distance and angles between vectors are related. An interesting direction is to develop theoretical bounds for general k; challenges here are providing the right notion of approximation of PCA, and extending the theory using packings of Grassmann or Stiefel manifolds.

2 Preliminaries

The data given to our algorithm is a set of n vectors D = {x1, x2, . . . , xn}, where each xi corresponds to the private value of one individual, xi ∈ R^d, and ‖xi‖ ≤ 1 for all i. Let X = [x1, . . . , xn] be the matrix whose columns are the data vectors {xi}. Let A = (1/n) X Xᵀ denote the d × d second moment matrix of the data. The matrix A is positive semidefinite, and has Frobenius norm at most 1.

The problem of dimensionality reduction is to find a "good" low-rank approximation to A. A popular solution is to compute a rank-k matrix Â which minimizes the norm ‖A − Â‖F, where k is much lower than the data dimension d. The Schmidt approximation theorem [25] shows that the minimizer is given by the singular value decomposition, also known as the PCA algorithm in some areas of computer science.

Definition 1. Suppose A is a positive semidefinite matrix whose first k eigenvalues are distinct.
Let the eigenvalues of A be λ1(A) ≥ λ2(A) ≥ ··· ≥ λd(A) ≥ 0, and let Λ be a diagonal matrix with Λii = λi(A). The matrix A decomposes as

A = V Λ Vᵀ,  (1)

where V is an orthonormal matrix of eigenvectors. The top-k subspace of A is the matrix

Vk(A) = [v1 v2 ··· vk],  (2)

where vi is the i-th column of V in (1).

Given the top-k subspace and the eigenvalue matrix Λ, we can form an approximation A(k) = Vk(A) Λk Vk(A)ᵀ to A, where Λk contains the k largest eigenvalues in Λ. In the special case k = 1 we have A(1) = λ1(A) v1 v1ᵀ, where v1 is the eigenvector corresponding to λ1(A). We refer to v1 as the top eigenvector of the data. For a d × k matrix V̂ with orthonormal columns, the quality of V̂ in approximating A can be measured by

qF(V̂) = tr(V̂ᵀ A V̂).  (3)

The V̂ which maximizes qF(V̂) has columns equal to {vi : i ∈ [k]}, corresponding to the top k eigenvectors of A.

Our theoretical results apply to the special case k = 1. For these results, we measure the inner product between the output vector v̂1 and the true top eigenvector v1:

qA(v̂1) = |⟨v̂1, v1⟩|.  (4)

This is related to (3). If we write v̂1 in the basis spanned by {vi}, then

qF(v̂1) = λ1 qA(v̂1)² + Σ_{i=2}^{d} λi ⟨v̂1, vi⟩².

Our proof techniques use the geometric properties of qA(·).

Definition 2. A randomized algorithm A(·) is a (ρ, η)-close approximation to the top eigenvector if for all data sets D of n points,

P(qA(A(D)) ≥ ρ) ≥ 1 − η,  (5)

where the probability is taken over A(·).

We study approximations to A that preserve the privacy of the underlying data. The notion of privacy that we use is differential privacy, which quantifies the privacy guaranteed by a randomized algorithm applied to a data set D.

Definition 3. An algorithm A(B) taking values in a set T provides α-differential privacy if

sup_S sup_{D,D′} μ(S | B = D) / μ(S | B = D′) ≤ e^α,  (6)

where the first supremum is over all measurable S ⊆ T, the second is over all data sets D and D′ differing in a single entry, and μ(·|B) is the conditional distribution (measure) on T induced by the output A(B) given a data set B. The ratio is interpreted to be 1 whenever the numerator and denominator are both 0.

Definition 4. An algorithm A(B) taking values in a set T provides (α, δ)-differential privacy if

P(A(D) ∈ S) ≤ e^α P(A(D′) ∈ S) + δ,  (7)

for all measurable S ⊆ T and all data sets D and D′ differing in a single entry.

Here α and δ are privacy parameters, where low α and δ ensure more privacy. For more details about these definitions, see [9, 26, 8]. The second privacy guarantee is weaker; the parameter δ bounds the probability of failure, and is typically chosen to be quite small.

In this paper we are interested in proving results on the sample complexity of differentially private algorithms that approximate PCA. That is, for a given α and ρ, how large must the number of individuals n in the data set be such that the algorithm is α-differentially private and also a (ρ, η)-close approximation to PCA? It is well known that as the number of individuals n grows, it is easier to guarantee the same level of privacy with relatively less noise or perturbation, and therefore the utility of the approximation also improves. Our results characterize how privacy and utility scale with n and the tradeoff between them for fixed n.
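To fix intuition for these quantities, here is a small non-private sketch (our own illustration, with hypothetical function names) of the top-k subspace and the two utility measures defined in (2)-(4):

```python
import numpy as np

def top_k_subspace(X, k):
    """Top-k PCA subspace of A = (1/n) X X^T, for a d x n matrix X whose
    columns have norm at most 1."""
    d, n = X.shape
    A = (X @ X.T) / n
    eigvals, eigvecs = np.linalg.eigh(A)     # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]           # top-k eigenvectors, Eq. (2)

def q_F(V, A):
    """q_F(V) = tr(V^T A V) for a d x k orthonormal V, Eq. (3)."""
    return float(np.trace(V.T @ A @ V))

def q_A(v_hat, v1):
    """q_A = |<v_hat, v1>|, correlation with the true top eigenvector (k=1)."""
    return abs(float(np.dot(v_hat, v1)))
```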
Related Work. Differential privacy was proposed by Dwork et al. [9], and has spawned an extensive literature of general methods and applications [1, 21, 27, 6, 24, 3, 22, 10]. Differential privacy has been shown to have strong semantic guarantees [9, 17] and is resistant to many attacks [12] that succeed against some other definitions of privacy. There are several standard approaches for designing differentially-private data-mining algorithms, including input perturbation [2], output perturbation [9], the exponential mechanism [22], and objective perturbation [6]. To our knowledge, other than the SULQ method [2], which provides a general differentially-private input perturbation algorithm, this is the first work on differentially-private PCA. Independently, [14] consider the problem of differentially-private low-rank matrix reconstruction for applications to sparse matrices; provided certain coherence conditions hold, they provide an algorithm for constructing a rank-2k approximation B to a matrix A such that ‖A − B‖F is O(‖A − Ak‖) plus some additional terms which depend on d, k and n; here Ak is the best rank-k approximation to A. Because of their additional assumptions, their bounds are generally incomparable to ours, and our bounds are superior for dense matrices.

The data-mining community has also considered many different models for privacy-preserving computation; see Fung et al. for a survey with more references [11]. Many of the models used have been shown to be susceptible to composition attacks, when the adversary has some amount of prior knowledge [12]. An alternative line of privacy-preserving data-mining work [28] is in the Secure Multiparty Computation setting; one work [13] studies privacy-preserving singular value decomposition in this model. Finally, dimension reduction through random projection has been considered as a technique for sanitizing data prior to publication [18]; our work differs from this line of work in that we offer differential privacy guarantees, and we only release the PCA subspace, not actual data. Independently, Kapralov and Talwar [16] have proposed a dynamic programming algorithm for differentially private low-rank matrix approximation which involves sampling from a distribution induced by the exponential mechanism. The running time of their algorithm is O(d⁶), where d is the data dimension.

3 Algorithms and results

In this section we describe differentially private techniques for approximating (2). The first is a modified version of the SULQ method [2]. Our new algorithm for differentially-private PCA, PPCA, is an instantiation of the exponential mechanism due to McSherry and Talwar [22]. Both procedures provide differentially private approximations to the top-k subspace: SULQ provides (α, δ)-differential privacy and PPCA provides α-differential privacy.

Input perturbation. The only differentially-private approximation to PCA prior to this work is the SULQ method [2]. The SULQ method perturbs each entry of the empirical second moment matrix A to ensure differential privacy and releases the top k eigenvectors of this perturbed matrix. In particular, SULQ recommends adding a matrix N of i.i.d. Gaussian noise of variance 8d² log²(d/δ) / (n²α²) and applies the PCA algorithm to A + N. This guarantees a weaker privacy definition known as (α, δ)-differential privacy. One problem with this approach is that with probability 1 the matrix A + N is not symmetric, so the largest eigenvalue may not be real and the entries of the corresponding eigenvector may be complex.
Thus the SULQ algorithm is not a good candidate for practical privacy-preserving dimensionality reduction. However, a simple modification to the basic SULQ approach does guarantee (α, δ)-differential privacy. Instead of adding an asymmetric Gaussian matrix, the algorithm can add a symmetric matrix with i.i.d. Gaussian entries N. That is, for 1 ≤ i ≤ j ≤ d, the variable Nij is an independent Gaussian random variable with variance σ². Note that this matrix is symmetric but not necessarily positive semidefinite, so some eigenvalues may be negative but the eigenvectors are all real. A derivation for the noise variance is given in Theorem 1.

Algorithm 1: Algorithm MOD-SULQ (input perturbation)
inputs: d × n data matrix X, privacy parameter α, parameter δ
outputs: d × k matrix V̂k = [v̂1 v̂2 ··· v̂k] with orthonormal columns
1. Set A = (1/n) X Xᵀ.
2. Set σ = ((d + 1)/(nα)) √(2 log((d² + d)/(2δ√(2π)))) + 1/(n√α). Generate a d × d symmetric random matrix N whose entries are i.i.d. drawn from N(0, σ²).
3. Compute V̂k = Vk(A + N) according to (2).

Exponential mechanism. Our new method, PPCA, randomly samples a k-dimensional subspace from a distribution that ensures differential privacy and is biased towards high utility. The distribution from which our released subspace is sampled is known in the statistics literature as the matrix Bingham distribution [7], which we denote by BMFk(B). The algorithm is stated for general k < d, but our theoretical results focus on the special case k = 1, where we wish to release a one-dimensional approximation to the data covariance matrix. The matrix Bingham distribution takes values on the set of all k-dimensional subspaces of R^d and has a density equal to

f(V) = (1 / ₁F₁(½k, ½d, B)) exp(tr(Vᵀ B V)),  (8)

where V is a d × k matrix whose columns are orthonormal and ₁F₁(½k, ½d, B) is a confluent hypergeometric function [7, p. 33].

Algorithm 2: Algorithm PPCA (exponential mechanism)
inputs: d × n data matrix X, privacy parameter α, dimension k
outputs: d × k matrix V̂k = [v̂1 v̂2 ··· v̂k] with orthonormal columns
1. Set A = (1/n) X Xᵀ.
2. Sample V̂k ~ BMFk((nα/2) A).

By combining results on the exponential mechanism [22] with properties of the PCA algorithm, we can show that this procedure is differentially private. In many cases, sampling from the distribution specified by the exponential mechanism may be difficult computationally, especially for continuous-valued outputs. We implement PPCA using a recently-proposed Gibbs sampler due to Hoff [15]. Gibbs sampling is a popular Markov chain Monte Carlo (MCMC) technique in which samples are generated according to a Markov chain whose stationary distribution is the density in (8). Assessing the "burn-in time" and other factors for this procedure is an interesting question in its own right; further details are in Section E.3.
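As a concrete reference point, here is a minimal sketch of MOD-SULQ as we have reconstructed it from Algorithm 1. The exact constant inside σ was garbled in extraction, so treat that expression as an assumption; a real deployment would also need to account for finite-precision sampling.

```python
import numpy as np

def mod_sulq(X, k, alpha, delta, rng=None):
    """MOD-SULQ sketch: perturb A = (1/n) X X^T with a symmetric Gaussian
    matrix, then return the top-k eigenvectors of the perturbed matrix."""
    rng = rng if rng is not None else np.random.default_rng()
    d, n = X.shape
    A = (X @ X.T) / n
    # noise scale, as reconstructed from Algorithm 1 (constant is an assumption)
    sigma = ((d + 1) / (n * alpha)) * np.sqrt(
        2 * np.log((d * d + d) / (2 * delta * np.sqrt(2 * np.pi)))
    ) + 1.0 / (n * np.sqrt(alpha))
    # symmetric noise: draw the upper triangle i.i.d. N(0, sigma^2), mirror it
    N = np.triu(rng.normal(scale=sigma, size=(d, d)))
    N = N + np.triu(N, 1).T
    eigvals, eigvecs = np.linalg.eigh(A + N)   # real eigenvectors, by symmetry
    return eigvecs[:, ::-1][:, :k]
```

Note how the symmetrization step sidesteps the complex-eigenvector problem of the original SULQ perturbation while leaving the per-entry noise distribution Gaussian.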
Other approaches. There are other general algorithmic strategies for guaranteeing differential privacy. The sensitivity method [9] adds noise proportional to the maximum change that can be induced by changing a single point in the data set. Consider a data set D with m + 1 copies of a unit vector u and m copies of a unit vector u′ with u ⊥ u′, and let D′ have m copies of u and m + 1 copies of u′. Then v1(D) = u but v1(D′) = u′, so ‖v1(D) − v1(D′)‖ = √2. Thus the global sensitivity does not scale with the number of data points, so as n increases the variance of the noise required by the Laplace mechanism [9] will not decrease. An alternative to global sensitivity is smooth sensitivity [24]; except for special cases, such as the sample median, smooth sensitivity is difficult to compute for general functions. A third method for computing private, approximate solutions to high-dimensional optimization problems is objective perturbation [6]; to apply this method, we require the optimization problems to have certain properties (namely, strong convexity and bounded norms of gradients), which do not apply to PCA.

Main results. Our theoretical results are sample complexity bounds for PPCA and MOD-SULQ, as well as a general lower bound on the sample complexity for any α-differentially private algorithm. These results show that PPCA is nearly optimal in terms of the scaling of the sample complexity with respect to the data dimension d, privacy parameter α, and eigengap λ1 − λ2. We further show that MOD-SULQ requires more samples as a function of d, despite having a slightly weaker privacy guarantee. Proofs are deferred to the supplementary material. Even though both algorithms can output the top-k PCA subspace for general k ≤ d, we prove results for the case k = 1. Finding the scaling behavior of the sample complexity with k is an interesting open problem that we leave for future work; challenges here are finding the right notion of approximation of the PCA, and extending the theory using packings of Grassmann or Stiefel manifolds.

Theorem 1. For the σ in Algorithm 1, the MOD-SULQ algorithm is (α, δ)-differentially private.

Theorem 2. Algorithm PPCA is α-differentially private.

The fact that these two algorithms are differentially private follows from some simple calculations. Our first sample complexity result provides an upper bound on the number of samples required by PPCA to guarantee a certain level of privacy and accuracy. The sample complexity of PPCA grows linearly with the dimension d, inversely with α, and inversely with the correlation gap (1 − ρ) and eigenvalue gap λ1(A) − λ2(A).

Theorem 3 (Sample complexity of PPCA). If

n > (d / (α(1 − ρ)(λ1 − λ2))) (4 log(1/η) + log(4 / ((1 − ρ²)(λ1 − λ2)))),

then PPCA is a (ρ, η)-close approximation to PCA.

Our second result shows a lower bound on the number of samples required by any α-differentially-private algorithm to guarantee a certain level of accuracy for a large class of datasets, and uses proof techniques in [4, 5].

Theorem 4 (Sample complexity lower bound). Fix d and α, and let

φ = 1 − (ln 8 + ln(1 + exp(d))) / (d(1 − exp(−2α))).

For any φ ≤ ρ ≤ 1 − 1/(16d²), no α-differentially private algorithm A can approximate PCA with expected utility greater than ρ on all databases with n points in dimension d having eigenvalue gap Δ, where

n < max( d/α, √d / (80 α √(Δ(1 − ρ))) ).

Theorem 3 shows that if n scales like (d / (α(1 − ρ))) log(1/(1 − ρ²)), then PPCA produces an approximation v̂1 that has correlation ρ with v1, whereas Theorem 4 shows that n must scale like √d / (α(1 − ρ)) for any α-differentially private algorithm. In terms of scaling with d, α, and the eigenvalue gap, the upper and lower bounds match, and they also match up to square-root factors with respect to the correlation. By contrast, the following lower bound on the number of samples required by MOD-SULQ to ensure a certain level of accuracy shows that MOD-SULQ has a less favorable scaling with dimension.

Theorem 5 (Sample complexity lower bound for MOD-SULQ). There are constants c and c′ such that if n < c (d^{3/2} √(log(d/δ)) / α)(1 − c′(1 − ρ)), then there is a dataset of size n in dimension d such that the top PCA direction v1 and the output v̂1 of MOD-SULQ satisfy E[|⟨v̂1, v1⟩|] ≤ ρ.
Notice that the required sample size grows as d^{3/2} for MOD-SULQ, as opposed to d for PPCA. Dimensionality reduction via PCA is often used in applications where the data points occupy a low-dimensional space but are presented in high dimensions. These bounds suggest that PPCA is better suited to such applications than MOD-SULQ. We next turn to validating this intuition on real data.

4 Experiments

We chose four datasets from four different domains: kddcup99, which includes features of 494,021 network connections; census, a demographic data set on 199,523 individuals; localization, a medical dataset with 164,860 instances of sensor readings on individuals engaged in different activities; and insurance, a dataset on product usage and demographics of 9,822 individuals. After preprocessing, the dimensions of these datasets are 116, 513, 44 and 150, respectively. We chose k to be 4, 8, 10, and 11 such that the top-k PCA subspace had qF(Vk) at least 80% of ‖A‖F. More details are in Appendix E in the supplementary material.

We ran three algorithms on these data sets: standard (non-private) PCA, MOD-SULQ with α = 0.1 and δ = 0.01, and PPCA with α = 0.1. As a sanity check, we also tried a uniformly generated random projection; since this projection is data-independent, we would expect it to have low utility. Standard PCA is non-private; changing a single data point will change the output, and hence violate differential privacy. We measured the utility qF(U), where U is the k-dimensional subspace output by the algorithm; qF(U) is maximized when U is the top-k PCA subspace, and thus this reflects how close the output subspace is to the true PCA subspace in terms of representing the data. Although our theoretical results hold for qA(·), the "energy" qF(·) is more relevant in practice for larger k.

Figures 1(a), 1(b), 1(c), and 1(d) show qF(U) as a function of sample size for the k-dimensional subspace output by PPCA, MOD-SULQ, non-private PCA, and random projections. Each value in the figure is an average over 5 random permutations of the data, as well as 10 random starting points of the Gibbs sampler per permutation (for PPCA), and 100 random runs per permutation (for MOD-SULQ and random projections).
The performance of both MOD-SULQ and PPCA improve as the sample size increases; the improvement is faster for PPCA than for MOD-SULQ. However, to be fair, MOD-SULQ is simpler and hence runs faster than PPCA. At the sample sizes in our experiments, the performance of non-private PCA does not improve much with a further increase in samples. Our theoretical results suggest that the performance of differentially private PCA cannot be significantly improved over these experiments. Effect of privacy on classification. A common use of a dimension reduction algorithm is as a precursor to classification or clustering; to evaluate the effectiveness of the different algorithms, we projected the data onto the subspace output by the algorithms, and measured the classification accuracy using the projected data. The classification results are summarized in Table 4. We chose the normal vs. all classification task in kddcup99, and the falling vs. all classification task in localization. 1 We used a linear SVM for all classification experiments. For the classification experiments, we used half of the data as a holdout set for computing a projection subspace. We projected the classification data onto the subspace computed based on the holdout set; 10% of this data was used for training and parameter-tuning, and the rest for testing. We repeated the classification process 5 times for 5 different (random) projections for each algorithm, and then ran the entire procedure over 5 random permutations of the data. Each value in the figure is thus an average over 5 ? 5 = 25 rounds of classification. 1 For the other two datasets, census and insurance, the classification accuracy of linear SVM after (non-private) PCAs is as low as always predicting the majority label. 7 Utility versus privacy parameter 0.7 ?? ? ? ? ? ? ? ? ? Utility q(U) 0.6 Algorithm ? 0.5 Non?Private SULQ PPCA 1000 ? 0.4 0.3 0.2 ?? ? 0.5 1.0 1.5 2.0 Privacy parameter alpha Figure 2: Plot of qF (U ) versus ? for a synthetic data set with n = 5,000, d = 10, and k = 2. The classification results show that our algorithm performs almost as well as non-private PCA for classification in the top k PCA subspace, while the performance of MOD-SULQ and random projections are a little worse. The classification accuracy while using MOD-SULQ and random projections also appears to have higher variance compared to our algorithm and non-private PCA; this can be explained by the fact that these projections tend to be farther from the PCA subspace, in which the data has higher classification accuracy. Effect of the privacy requirement. To check the effect of the privacy requirement, we generated a synthetic data set of n = 5,000 points drawn from a Gaussian distribution in d = 10 with mean 0 and whose covariance matrix had eigenvalues {0.5, 0.30, 0.04, 0.03, 0.02, 0.01, 0.004, 0.003, 0.001, 0.001}. In this case the space spanned by the top two eigenvectors has most of the energy, so we chose k = 2 and plotted the utility qF (?) for nonprivate PCA, MOD-SULQ with = 0.05, and PPCA. We drew 100 samples from each privacypreserving algorithm and the plot of the average utility versus ? is shown in Figure 2. As ? increases, the privacy requirement is relaxed and both MOD-SULQ and PPCA approach the utility of PCA without privacy constraints. However, for moderate ? the PPCA still captures most of the utility, whereas the gap between MOD-SULQ and PPCA becomes quite large. 
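A minimal sketch of this evaluation pipeline, assuming scikit-learn's LinearSVC; the function name and split handling are our own:

```python
from sklearn.svm import LinearSVC

def accuracy_in_subspace(V, X_train, y_train, X_test, y_test):
    """Project data onto the d x k subspace V output by a (possibly private)
    PCA algorithm, then train and score a linear SVM on the projections.
    X_train and X_test are n x d arrays with one data point per row."""
    clf = LinearSVC()
    clf.fit(X_train @ V, y_train)
    return clf.score(X_test @ V, y_test)
```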
5 Conclusion In this paper we investigated the theoretical and empirical performance of differentially private approximations to PCA. Empirically, we showed that MOD-SULQ and PPCA differ markedly in how well they approximate the top-k subspace of the data. p The reason for this, theoretically, is that the sample complexity of MOD-SULQ scales with d3/2 log d whereas PPCA scales with d. Because PPCA uses the exponential mechanism with qF (?) as the utility function, it is not surprising that it performs well. However, MOD-SULQ often had a performance comparable to random projections, indicating that the real data sets we used were too small for it to be effective. We furthermore showed that PPCA is nearly optimal, in that any differentially private approximation to PCA must use ?(d) samples. Our investigation brought up many interesting issues to consider for future work. The description of differentially private algorithms assume an ideal model of computation : real systems require additional security assumptions that have to be verified. The difference between truly random noise and pseudorandomness and the effects of finite precision can lead to a gap between the theoretical ideal and practice. Numerical optimization methods used in objective perturbation [6] can only produce approximate solutions, and have complex termination conditions unaccounted for in the theoretical analysis. Our MCMC sampling has this flavor : we cannot sample exactly from the Bingham distribution because we must determine the Gibbs sampler?s convergence empirically. Accounting for these effects is an interesting avenue for future work that can bring theory and practice together. Finally, more germane to the work on PCA here is to prove sample complexity results for general k rather than the case k = 1 here. For k = 1 the utility functions qF (?) and qA (?) are related, but for general k it is not immediately clear what metric best captures the idea of ?approximating? PCA. Developing a framework for such approximations is of interest more generally in machine learning. 8 References [1] BARAK , B., C HAUDHURI , K., DWORK , C., K ALE , S., M C S HERRY, F., AND TALWAR , K. Privacy, accuracy, and consistency too: a holistic solution to contingency table release. In PODS (2007), pp. 273? 282. [2] B LUM , A., DWORK , C., M C S HERRY, F., AND N ISSIM , K. Practical privacy: the SuLQ framework. In PODS (2005), pp. 128?138. [3] B LUM , A., L IGETT, K., AND ROTH , A. A learning theory approach to non-interactive database privacy. In STOC (2008), R. E. Ladner and C. Dwork, Eds., ACM, pp. 609?618. [4] C HAUDHURI , K., AND H SU , D. Sample complexity bounds for differentially private learning. In COLT (2011). [5] C HAUDHURI , K., AND H SU , D. Convergence rates for differentially private statistical estimation. In ICML (2012). [6] C HAUDHURI , K., M ONTELEONI , C., AND S ARWATE , A. D. Differentially private empirical risk minimization. Journal of Machine Learning Research 12 (March 2011), 1069?1109. [7] C HIKUSE , Y. Statistics on Special Manifolds. No. 174 in Lecture Notes in Statistics. Springer, New York, 2003. [8] DWORK , C., K ENTHAPADI , K., M C S HERRY, F., M IRONOV, I., AND NAOR , M. Our data, ourselves: Privacy via distributed noise generation. In EUROCRYPT (2006), vol. 4004, pp. 486?503. [9] DWORK , C., M C S HERRY, F., N ISSIM , K., AND S MITH , A. Calibrating noise to sensitivity in private data analysis. In 3rd IACR Theory of Cryptography Conference, (2006), pp. 265?284. [10] F RIEDMAN , A., AND S CHUSTER , A. 
Data mining with differential privacy. In KDD (2010), pp. 493? 502. [11] F UNG , B. C. M., WANG , K., C HEN , R., AND Y U , P. S. Privacy-preserving data publishing: A survey of recent developments. ACM Comput. Surv. 42, 4 (June 2010), 53 pages. [12] G ANTA , S. R., K ASIVISWANATHAN , S. P., AND S MITH , A. Composition attacks and auxiliary information in data privacy. In KDD (2008), pp. 265?273. [13] H AN , S., N G , W. K., AND Y U , P. Privacy-preserving singular value decomposition. In ICDE (29 2009-april 2 2009), pp. 1267 ?1270. [14] H ARDT, M., AND ROTH , A. Beating randomized response on incoherent matrices. In STOC (2012). [15] H OFF , P. D. Simulation of the matrix Bingham-von Mises-Fisher distribution, with applications to multivariate and relational data. J. Comp. Graph. Stat. 18, 2 (2009), 438?456. [16] K APRALOV, M., AND TALWAR , K. On differentially private low rank approximation. In Proc. of SODA (2013). [17] K ASIVISWANATHAN , S. P., AND S MITH , A. A note on differential privacy: Defining resistance to arbitrary side information. CoRR abs/0803.3946 (2008). [18] L IU , K., K ARGUPTA , H., AND RYAN , J. Random projection-based multiplicative data perturbation for privacy preserving distributed data mining. IEEE Trans. Knowl. Data Eng. 18, 1 (2006), 92?106. [19] M ACHANAVAJJHALA , A., K IFER , D., A BOWD , J. M., G EHRKE , J., AND V ILHUBER , L. Privacy: Theory meets practice on the map. In ICDE (2008), pp. 277?286. [20] M C S HERRY, F. Privacy integrated queries: an extensible platform for privacy-preserving data analysis. In SIGMOD Conference (2009), pp. 19?30. [21] M C S HERRY, F., AND M IRONOV, I. Differentially private recommender systems: Building privacy into the netflix prize contenders. In KDD (2009), pp. 627?636. [22] M C S HERRY, F., AND TALWAR , K. Mechanism design via differential privacy. In FOCS (2007), pp. 94? 103. [23] M OHAMMED , N., C HEN , R., F UNG , B. C. M., AND Y U , P. S. Differentially private data release for data mining. In KDD (2011), pp. 493?501. [24] N ISSIM , K., R ASKHODNIKOVA , S., AND S MITH , A. Smooth sensitivity and sampling in private data analysis. In STOC (2007), D. S. Johnson and U. Feige, Eds., ACM, pp. 75?84. [25] S TEWART, G. On the early history of the singular value decomposition. SIAM Review 35, 4 (1993), 551?566. [26] WASSERMAN , L., AND Z HOU , S. A statistical framework for differential privacy. JASA 105, 489 (2010). [27] W ILLIAMS , O., AND M C S HERRY, F. Probabilistic inference and differential privacy. In NIPS (2010). [28] Z HAN , J. Z., AND M ATWIN , S. Privacy-preserving support vector machine classification. IJIIDS 1, 3/4 (2007), 356?385. 9
Communication/Computation Tradeoffs in Consensus-Based Distributed Optimization

Konstantinos I. Tsianos, Sean Lawlor, and Michael G. Rabbat
Department of Electrical and Computer Engineering
McGill University, Montréal, Canada
{konstantinos.tsianos, sean.lawlor}@mail.mcgill.ca, [email protected]

Abstract
We study the scalability of consensus-based distributed optimization algorithms by considering two questions: how many processors should we use for a given problem, and how often should they communicate when communication is not free? Central to our analysis is a problem-specific value $r$ which quantifies the communication/computation tradeoff. We show that organizing the communication among nodes as a $k$-regular expander graph [1] yields speedups, while when all pairs of nodes communicate (as in a complete graph), there is an optimal number of processors that depends on $r$. Surprisingly, a speedup can be obtained, in terms of the time to reach a fixed level of accuracy, by communicating less and less frequently as the computation progresses. Experiments on a real cluster solving metric learning and non-smooth convex minimization tasks demonstrate strong agreement between theory and practice.

1 Introduction
How many processors should we use, and how often should they communicate, for large-scale distributed optimization? We address these questions by studying the performance and limitations of a class of distributed algorithms that solve the general optimization problem
$$\operatorname*{minimize}_{x \in \mathcal{X}} \; F(x) = \frac{1}{m}\sum_{j=1}^{m} l_j(x) \qquad (1)$$
where each function $l_j(x)$ is convex over a convex set $\mathcal{X} \subseteq \mathbb{R}^d$. This formulation applies widely in machine learning scenarios, where $l_j(x)$ measures the loss of model $x$ with respect to data point $j$, and $F(x)$ is the cumulative loss over all $m$ data points. Although efficient serial algorithms exist [2], the increasing size of available data and problem dimensionality are pushing computers to their limits, and the need for parallelization arises [3]. Among the many proposed distributed approaches for solving (1), we focus on consensus-based distributed optimization [4, 5, 6, 7], where each component function in (1) is assigned to a different node in a network (i.e., the data is partitioned among the nodes), and the nodes interleave local gradient-based optimization updates with communication, using a consensus protocol to collectively converge to a minimizer of $F(x)$. Consensus-based algorithms are attractive because they make distributed optimization possible without requiring centralized coordination or significant network infrastructure (as opposed to, e.g., hierarchical schemes [8]). In addition, they combine simplicity of implementation with robustness to node failures, and are resilient to communication delays [9]. These qualities are important in clusters, which are typically shared among many users, where algorithms need to be immune to slow nodes that use part of their computation and communication resources for unrelated tasks.

The main drawback of consensus-based optimization algorithms comes from the potentially high communication cost associated with distributed consensus. At the same time, existing convergence bounds in terms of iterations (e.g., (7) below) suggest that increasing the number of processors slows down convergence, which contradicts the intuition that more computing resources are better. This paper focuses on understanding the limitations and potential for scalability of consensus-based optimization. We build on the distributed dual averaging framework [4].
The key to our analysis is to attach to each iteration a cost that involves two competing terms: a computation cost per iteration, which decreases as we add more processors, and a communication cost, which depends on the network. Our cost expression quantifies the communication/computation tradeoff by a parameter $r$ that is easy to estimate for a given problem and platform. The role of $r$ is essential; for example, when nodes communicate at every iteration, we show that in complete graph topologies there exists an optimal number of processors $n_{\mathrm{opt}} = \frac{1}{\sqrt{r}}$, while for $k$-regular expander graphs [1], increasing the network size yields a diminishing speedup. Similar results are obtained when nodes communicate every $h > 1$ iterations, and even when $h$ increases with time. We validate our analysis with experiments on a cluster. Our results show a remarkable agreement between theory and practice.

In Section 2 we formalize the distributed optimization problem and summarize the distributed dual averaging algorithm. Section 3 introduces the communication/computation tradeoff and contains the basic analysis where nodes communicate at every iteration. The general case of sparsifying communication is treated in Section 4. Section 5 tests our theoretical results on a real cluster implementation, and Section 6 discusses some future extensions.

2 Distributed Convex Optimization
Assume we have at our disposal a cluster with $n$ processors to solve (1), and suppose without loss of generality that $m$ is divisible by $n$. In the absence of any other information, we partition the data evenly among the processors, and our objective becomes to solve the optimization problem
$$\operatorname*{minimize}_{x\in\mathcal{X}} \; F(x) = \frac{1}{m}\sum_{j=1}^{m} l_j(x) = \frac{1}{n}\sum_{i=1}^{n}\bigg(\frac{n}{m}\sum_{j=1}^{m/n} l_{j|i}(x)\bigg) = \frac{1}{n}\sum_{i=1}^{n} f_i(x) \qquad (2)$$
where we use the notation $l_{j|i}$ to denote the loss associated with the $j$th local data point at processor $i$ (i.e., $j|i = (i-1)\frac{m}{n} + j$). The local objective functions $f_i(x)$ at each node are assumed to be $L$-Lipschitz and convex. The recent distributed optimization literature contains multiple consensus-based algorithms with similar rates of convergence for solving this type of problem. We adopt the distributed dual averaging (DDA) framework [4] because its analysis admits a clear separation between the standard (centralized) optimization error and the error due to distributing computation over a network, facilitating our investigation of the communication/computation tradeoff.

2.1 Distributed Dual Averaging (DDA)
In DDA, nodes iteratively communicate and update optimization variables to solve (2). Nodes only communicate if they are neighbors in a communication graph $G = (V, E)$, with the $|V| = n$ vertices being the processors. The communication graph is user-defined (application layer) and does not necessarily correspond to the physical interconnections between processors. DDA requires three additional quantities: a 1-strongly convex proximal function $\psi: \mathbb{R}^d \to \mathbb{R}$ satisfying $\psi(x) \ge 0$ and $\psi(0) = 0$ (e.g., $\psi(x) = \frac{1}{2}x^T x$); a positive step-size sequence $a(t) = O(1/\sqrt{t})$; and an $n \times n$ doubly stochastic consensus matrix $P$ with entries $p_{ij} > 0$ only if either $i = j$ or $(j, i) \in E$, and $p_{ij} = 0$ otherwise. The algorithm repeats, for each node $i$ in discrete steps $t$, the following updates:
$$z_i(t) = \sum_{j=1}^{n} p_{ij}\, z_j(t-1) + g_i(t-1) \qquad (3)$$
$$x_i(t) = \operatorname*{argmin}_{x\in\mathcal{X}} \Big\{ \langle z_i(t), x\rangle + \frac{1}{a(t)}\,\psi(x) \Big\} \qquad (4)$$
$$\hat{x}_i(t) = \frac{1}{t}\big((t-1)\,\hat{x}_i(t-1) + x_i(t)\big) \qquad (5)$$
where $g_i(t-1) \in \partial f_i(x_i(t-1))$ is a subgradient of $f_i(x)$ evaluated at $x_i(t-1)$. In (3), the variable $z_i(t) \in \mathbb{R}^d$ maintains an accumulated subgradient up to time $t$ and represents node $i$'s belief of the direction of the optimum.
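To make the update sequence (3)-(5) concrete, the following sketch implements one synchronous DDA iteration for the quadratic proximal function $\psi(x) = \frac{1}{2}x^Tx$, for which step (4) reduces to projecting $-a(t)\,z_i(t)$ onto $\mathcal{X}$. This is a minimal illustration under stated assumptions; the function names, the default unconstrained projection, and the step-size constant are our choices, not the paper's code.

```python
import numpy as np

def dda_step(t, Z, X_hat, P, subgrad, A=1.0, project=lambda v: v):
    """One synchronous DDA iteration (eqs. 3-5) across all n nodes.

    Z       : (n, d) array; row i holds z_i(t-1), the accumulated dual variable.
    X_hat   : (n, d) array; row i holds the running average xhat_i(t-1).
    P       : (n, n) doubly stochastic consensus matrix.
    subgrad : function (i, x) -> a subgradient g_i of f_i at x.
    A       : step-size constant, a(t) = A / sqrt(t).
    project : Euclidean projection onto X (identity when X = R^d).
    """
    n, d = Z.shape
    a_prev = A / np.sqrt(max(t - 1, 1))
    a_t = A / np.sqrt(t)
    # Primal points where subgradients are evaluated: x_i(t-1) = Proj_X(-a(t-1) z_i(t-1)).
    X_prev = np.array([project(-a_prev * Z[i]) for i in range(n)])
    G = np.array([subgrad(i, X_prev[i]) for i in range(n)])
    Z_new = P @ Z + G                                        # eq. (3)
    X_new = np.array([project(-a_t * z) for z in Z_new])     # eq. (4) for psi(x) = 0.5 x^T x
    X_hat_new = ((t - 1) * X_hat + X_new) / t                # eq. (5): running average
    return Z_new, X_new, X_hat_new
```

Communicating only the rows of `Z` between graph neighbors is exactly the per-iteration message whose cost the time model below charges at $kr$ time units.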
To update $z_i(t)$ in (3), each node must communicate to exchange the variables $z_j(t)$ with its neighbors in $G$. If $\psi(x^\star) \le R^2$, then for the local running averages $\hat x_i(t)$ defined in (5), the error from a minimizer $x^\star$ of $F(x)$ after $T$ iterations is bounded by (Theorem 1, [4])
$$\mathrm{Err}_i(T) = F(\hat{x}_i(T)) - F(x^\star) \le \frac{R^2}{T\,a(T)} + \frac{L^2}{2T}\sum_{t=1}^{T} a(t-1) + \frac{L}{T}\sum_{t=1}^{T} a(t)\Big[\frac{2}{n}\sum_{j=1}^{n} \|\bar z(t) - z_j(t)\|_* + \|\bar z(t) - z_i(t)\|_*\Big] \qquad (6)$$
where $L$ is the Lipschitz constant, $\|\cdot\|_*$ denotes the dual norm, $\bar z(t) = \frac{1}{n}\sum_{i=1}^n z_i(t)$, and $\|\bar z(t) - z_i(t)\|_*$ quantifies the network error as a disagreement between the direction to the optimum at node $i$ and the consensus direction $\bar z(t)$ at time $t$. Furthermore, from Theorem 2 in [4], with $a(t) = \frac{A}{\sqrt{t}}$, after optimizing for $A$ we have a bound on the error,
$$\mathrm{Err}_i(T) \le C_1\,\frac{\log(T\sqrt{n})}{\sqrt{T}}, \qquad C_1 = 2LR\sqrt{19 + \frac{12}{1-\sigma_2}}, \qquad (7)$$
where $\sigma_2$ is the second-largest eigenvalue of $P$. The dependence on the communication topology is reflected through $\sigma_2$, since the sparsity structure of $P$ is determined by $G$. According to (7), increasing $n$ slows down the rate of convergence even if $\sigma_2$ does not depend on $n$.

3 Communication/Computation Tradeoff
In consensus-based distributed optimization algorithms such as DDA, the communication graph $G$ and the cost of transmitting a message have an important influence on convergence speed, especially when communicating one message requires a non-trivial amount of time (e.g., if the dimension $d$ of the problem is very high). We are interested in the shortest time to obtain an $\epsilon$-accurate solution (i.e., $\mathrm{Err}_i(T) \le \epsilon$). From (7), convergence is faster for topologies with good expansion properties, i.e., when the spectral gap $1 - \sigma_2$ does not shrink too quickly as $n$ grows. In addition, it is preferable to have a balanced network, where each node has the same number of neighbors, so that all nodes spend roughly the same amount of time communicating per iteration. Below we focus on two particular cases and take $G$ to be either a complete graph (i.e., all pairs of nodes communicate) or a $k$-regular expander [1].

By using more processors, the total amount of communication inevitably increases. At the same time, more data can be processed in parallel in the same amount of time. We focus on the scenario where the size $m$ of the dataset is fixed but possibly very large. To understand whether there is room for speedup, we move away from measuring iterations and employ a time model that explicitly accounts for communication cost. This will allow us to study the communication/computation tradeoff and draw conclusions based on the total amount of time to reach an $\epsilon$-accurate solution.

3.1 Time model
At each iteration, in step (3), processor $i$ computes a local subgradient on its subset of the data:
$$g_i(x) = \frac{\partial f_i(x)}{\partial x} = \frac{n}{m}\sum_{j=1}^{m/n} \frac{\partial l_{j|i}(x)}{\partial x}. \qquad (8)$$
The cost of this computation increases linearly with the subset size. Let us normalize time so that one processor can compute a subgradient on the full dataset of size $m$ in 1 time unit. Then, using $n$ CPUs, each local gradient takes $\frac{1}{n}$ time units to compute. We ignore the time required to compute the projection in step (4); often this can be done very efficiently, and it requires negligible time when $m$ is large compared to $n$ and $d$. We account for the cost of communication as follows. In the consensus update (3), each pair of neighbors in $G$ transmits and receives one variable $z_j(t-1)$.
Since the message size depends only on the problem dimension $d$ and does not change with $m$ or $n$, we denote by $r$ the time required to transmit and receive one message, relative to the 1 time unit required to compute the full gradient on all the data. If every node has $k$ neighbors, the cost of one iteration in a network of $n$ nodes is
$$\frac{1}{n} + kr \quad \text{time units / iteration.} \qquad (9)$$
Using this time model, we study the convergence rate bound (7) after attaching an appropriate time-unit cost per iteration. To obtain a speedup by increasing the number of processors $n$ for a given problem, we must ensure that $\epsilon$-accuracy is achieved in fewer time units.

3.2 Simple Case: Communicate at every Iteration
In the original DDA description (3)-(5), nodes communicate at every iteration. According to our time model, $T$ iterations will cost $\tau = T(\frac{1}{n} + kr)$ time units. From (7), the time $\tau(\epsilon)$ to reach error $\epsilon$ is found by substituting for $T$ and solving for $\tau(\epsilon)$. Ignoring the log factor in (7), we get
$$\frac{C_1}{\sqrt{\tau(\epsilon)\,/\,(\frac{1}{n}+kr)}} = \epsilon \;\;\Longrightarrow\;\; \tau(\epsilon) = \frac{C_1^2}{\epsilon^2}\Big(\frac{1}{n} + kr\Big) \ \text{time units.} \qquad (10)$$
This simple manipulation reveals some important facts. If communication is free, then $r = 0$. If in addition the network $G$ is a $k$-regular expander, then $\sigma_2$ is fixed [10], $C_1$ is independent of $n$, and $\tau(\epsilon) = C_1^2/(\epsilon^2 n)$. Thus, in the ideal situation, we obtain a linear speedup by increasing the number of processors, as one would expect. In reality, of course, communication is not free.

Complete graph. Suppose that $G$ is the complete graph, where $k = n - 1$ and $\sigma_2 = 0$. In this scenario we cannot keep increasing the network size without eventually harming performance, due to the excessive communication cost. For a problem with a communication/computation tradeoff $r$, the optimal number of processors is calculated by minimizing $\tau(\epsilon)$ over $n$:
$$\frac{\partial \tau(\epsilon)}{\partial n} = 0 \;\;\Longrightarrow\;\; n_{\mathrm{opt}} = \frac{1}{\sqrt{r}}. \qquad (11)$$
Again, in accordance with intuition, if the communication cost is too high (i.e., $r \ge 1$) and it takes more time to transmit and receive a gradient than it takes to compute it, using a complete graph cannot speed up the optimization. We reiterate that $r$ is a quantity that can be easily measured for given hardware and a given optimization problem. As we report in Section 5, the optimal value predicted by our theory agrees very well with experimental performance on a real cluster.

Expander. For the case where $G$ is a $k$-regular expander, the communication cost per node remains constant as $n$ increases. From (10) and the expression for $C_1$ in (7), we see that $n$ can be increased without losing performance, although the benefit diminishes (relative to $kr$) as $n$ grows.

4 General Case: Sparse Communication
The previous section analyzed the case where processors communicate at every iteration. Next we investigate the more general situation where we adjust the frequency of communication.

4.1 Bounded Intercommunication Intervals
Suppose that a consensus step takes place once every $h + 1$ iterations. That is, the algorithm repeats $h - 1$ cheap iterations (no communication) of cost $\frac{1}{n}$ time units, followed by an expensive iteration (with communication) of cost $\frac{1}{n} + kr$. This strategy clearly reduces the overall average cost per iteration. The caveat is that the network error $\|\bar z(t) - z_i(t)\|_*$ is higher, because fewer consensus steps have been executed. In a cheap iteration we replace the update (3) by $z_i(t) = z_i(t-1) + g_i(t-1)$. After some straightforward algebra we can show that [for (12), (16) please consult the supplementary material]:
$$z_i(t) = \sum_{w=0}^{H_t-1}\sum_{k=0}^{h-1}\sum_{j=1}^{n} \big[P^{H_t-w}\big]_{ij}\, g_j(wh + k) \;+\; \sum_{k=0}^{Q_t-1} g_i(t - Q_t + k) \qquad (12)$$
where $H_t = \lfloor \frac{t-1}{h} \rfloor$ counts the number of communication steps in $t$ iterations, and $Q_t = \mathrm{mod}(t, h)$ if $\mathrm{mod}(t, h) > 0$ and $Q_t = h$ otherwise. Using the fact that $P\mathbf{1} = \mathbf{1}$, we obtain
$$\bar z(t) - z_i(t) = \frac{1}{n}\sum_{s=1}^{n} z_s(t) - z_i(t) = \sum_{w=0}^{H_t-1}\sum_{k=0}^{h-1}\sum_{j=1}^{n}\Big(\frac{1}{n} - \big[P^{H_t-w}\big]_{ij}\Big)\, g_j(wh+k) \qquad (13)$$
$$\qquad\qquad +\; \frac{1}{n}\sum_{s=1}^{n}\sum_{k=0}^{Q_t-1}\big(g_s(t-Q_t+k) - g_i(t-Q_t+k)\big). \qquad (14)$$
Taking norms, recalling that the $f_i$ are convex and Lipschitz, and since $Q_t \le h$, we arrive at
$$\|\bar z(t) - z_i(t)\|_* \le hL \sum_{w=0}^{H_t-1}\Big\|\frac{\mathbf{1}^T}{n} - \big[P^{H_t-w}\big]_{i,:}\Big\|_1 + 2hL. \qquad (15)$$
Using a technique similar to that in [4] to bound the $\ell_1$ distance of row $i$ of $P^{H_t-w}$ from its stationary distribution as $t$ grows, we can show that
$$\|\bar z(t) - z_i(t)\|_* \le 2hL + 3hL\,\frac{\log(T\sqrt{n})}{1-\sigma_2} \qquad (16)$$
for all $t \le T$. Comparing (16) to equation (29) in [4], the network error within $t$ iterations is no more than $h$ times larger when a consensus step is only performed once every $h + 1$ iterations. Finally, we substitute the network error in (6). For $a(t) = \frac{A}{\sqrt{t}}$, we have $\sum_{t=1}^{T} a(t) \le 2A\sqrt{T}$, and
$$\mathrm{Err}_i(T) \le \bigg[\frac{R^2}{A} + AL^2\Big(1 + 18h + \frac{12h\log(T\sqrt{n})}{1-\sigma_2}\Big)\bigg]\frac{1}{\sqrt{T}} = C_h\,\frac{\log(T\sqrt{n})}{\sqrt{T}}. \qquad (17)$$
We minimize the leading term $C_h$ over $A$ to obtain
$$A = \frac{R}{L}\Big(1 + 18h + \frac{12h}{1-\sigma_2}\Big)^{-1/2} \quad\text{and}\quad C_h = 2RL\sqrt{1 + 18h + \frac{12h}{1-\sigma_2}}. \qquad (18)$$
Of the $T$ iterations, only $H_T = \lfloor \frac{T-1}{h} \rfloor$ involve communication. So, $T$ iterations will take
$$\tau = (T - H_T)\frac{1}{n} + H_T\Big(\frac{1}{n} + kr\Big) = \frac{T}{n} + H_T\, kr \ \text{time units.} \qquad (19)$$
To achieve $\epsilon$-accuracy, ignoring again the logarithmic factor, we need $T = \frac{C_h^2}{\epsilon^2}$ iterations, or
$$\tau(\epsilon) = \frac{T}{n} + \Big\lfloor\frac{T-1}{h}\Big\rfloor kr \approx \frac{C_h^2}{\epsilon^2}\Big(\frac{1}{n} + \frac{kr}{h}\Big) \ \text{time units.} \qquad (20)$$
From the last expression, for a fixed number of processors $n$, there exists an optimal value for $h$ that depends on the network size and communication graph $G$:
$$h_{\mathrm{opt}} = \sqrt{\frac{nkr}{18 + \frac{12}{1-\sigma_2}}}. \qquad (21)$$
If the network is a complete graph, using $h_{\mathrm{opt}}$ yields $\tau(\epsilon) = O(n)$; i.e., using more processors hurts performance even when not communicating every iteration. On the other hand, if the network is a $k$-regular expander, then $\tau(\epsilon) = \frac{c_1}{n} + c_2$ for constants $c_1, c_2$, and we obtain a diminishing speedup.

4.2 Increasingly Sparse Communication
Next, we consider progressively increasing the intercommunication intervals. This captures the intuition that as the optimization moves closer to the solution, progress slows down, and a processor should have "something significantly new to say" before it communicates. Let $h_j \ge 1$ denote the number of cheap iterations performed between the $(j-1)$st and $j$th expensive iteration; i.e., the first communication is at iteration $h_1$, the second at iteration $h_1 + h_2$, and so on. We consider schemes where $h_j = j^p$ for $p \ge 0$. The number of iterations at which nodes communicate, out of the first $T$ total iterations, is given by $H_T = \max\{H : \sum_{j=1}^{H} h_j \le T\}$. We have
$$\frac{H_T^{p+1} - 1}{p+1} \le \int_{1}^{H_T} y^p\, dy \le \sum_{j=1}^{H_T} j^p \le 1 + \int_{1}^{H_T} y^p\, dy \le T, \qquad (22)$$
which means that $H_T = \Theta(T^{\frac{1}{p+1}})$ as $T \to \infty$. Similar to (15), the network error is bounded as
$$\|\bar z(t) - z_i(t)\|_* \le \sum_{w=0}^{H_t-1} h_w L\,\Big\|\frac{\mathbf{1}^T}{n} - \big[P^{H_t-w}\big]_{i,:}\Big\|_1 + 2h_t L. \qquad (23)$$
We split the sum into two terms based on whether or not the powers of $P$ have converged. Using the split point $t^\star = \frac{\log(T\sqrt{n})}{1-\sigma_2}$, the $\ell_1$ term is bounded by 2 when $w$ is large and by $\frac{1}{T}$ when $w$ is small:
$$\|\bar z(t) - z_i(t)\|_* \le L\sum_{w=0}^{H_t-1-t^\star} \Big\|\frac{\mathbf{1}^T}{n} - \big[P^{H_t-w}\big]_{i,:}\Big\|_1 h_w + L\sum_{w=H_t-t^\star}^{H_t-1} \Big\|\frac{\mathbf{1}^T}{n} - \big[P^{H_t-w}\big]_{i,:}\Big\|_1 h_w + 2h_t L \qquad (24)$$
$$\le \frac{L}{T}\sum_{w=0}^{H_t-1-t^\star} w^p + 2L\sum_{w=H_t-t^\star}^{H_t-1} w^p + 2t^p L \qquad (25)$$
$$\le \frac{L}{T}\,\frac{(H_t - t^\star - 1)^{p+1} + p}{p+1} + 2Lt^\star (H_t-1)^p + 2t^p L \qquad (26)$$
$$\le \frac{L\,H_t^{p+1}}{T(p+1)} + \frac{Lp}{T(p+1)} + 2Lt^\star H_t^p + 2t^p L \qquad (27)$$
since $T > H_t - t^\star - 1$. Substituting this bound into (6) and taking the step-size sequence to be $a(t) = \frac{A}{t^q}$, with $A$ and $q$ to be determined, we get
$$\mathrm{Err}_i(T) \le \frac{R^2}{AT^{1-q}} + \frac{L^2A}{2(1-q)T^{q}} + \frac{3L^2A}{(p+1)(1-q)T^{q}} + \frac{3L^2pA}{(p+1)(1-q)T^{1+q}} + \frac{6L^2t^\star A}{T}\sum_{t=1}^{T}\frac{H_t^p}{t^q} + \frac{6L^2A}{T}\sum_{t=1}^{T}t^{p-q}. \qquad (28)$$
The first four summands converge to zero when $0 < q < 1$. Since $H_t = \Theta(t^{\frac{1}{p+1}})$,
$$\frac{1}{T}\sum_{t=1}^{T}\frac{H_t^p}{t^q} \le \frac{1}{T}\sum_{t=1}^{T}\frac{O\big(t^{\frac{p}{p+1}}\big)}{t^q} = O\Big(\frac{T^{\frac{p}{p+1}-q+1}}{T}\Big) = O\big(T^{\frac{p}{p+1}-q}\big), \qquad (29)$$
which converges to zero if $\frac{p}{p+1} < q$. To bound the last term, note that $\frac{1}{T}\sum_{t=1}^{T}t^{p-q} \le \frac{T^{p-q}}{p-q+1}$, so the term goes to zero as $T\to\infty$ if $p < q$. In conclusion, $\mathrm{Err}_i(T)$ converges no slower than $O\big(\frac{\log(T\sqrt{n})}{T^{q-p}}\big)$, since $\frac{1}{T^{q-\frac{p}{p+1}}} < \frac{1}{T^{q-p}}$. If we choose $q = \frac{1}{2}$ to balance the first three summands, then for small $p > 0$ the rate of convergence is arbitrarily close to $O\big(\frac{\log(T\sqrt{n})}{\sqrt{T}}\big)$, while nodes communicate increasingly infrequently as $T \to \infty$.

Out of $T$ total iterations, DDA executes $H_T = \Theta(T^{\frac{1}{p+1}})$ expensive iterations involving communication and $T - H_T$ cheap iterations without communication, so
$$\tau(\epsilon) = O\Big(\frac{T}{n} + T^{\frac{1}{p+1}}kr\Big) = O\bigg(T\Big(\frac{1}{n} + \frac{kr}{T^{\frac{p}{p+1}}}\Big)\bigg). \qquad (30)$$
In this case, the communication cost $kr$ becomes a less and less significant proportion of $\tau(\epsilon)$ as $T$ increases. So for any $0 < p < \frac{1}{2}$, if $k$ is fixed, we approach a linear speedup behaviour $\Theta(\frac{T}{n})$. To get $\mathrm{Err}_i(T) \le \epsilon$, ignoring the logarithmic factor, we need
$$T = \Big(\frac{C_p}{\epsilon}\Big)^{\frac{2}{1-2p}} \ \text{iterations, with}\ \ C_p = 2LR\sqrt{7 + \frac{12p + 12}{(3p+1)(1-\sigma_2)} + \frac{12}{2p+1}}. \qquad (31)$$
From this last equation we see that for $0 < p < \frac{1}{2}$ we have $C_p < C_1$, so using increasingly sparse communication should, in fact, be faster than communicating at every iteration.
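As a concrete illustration of the schedule $h_j = j^p$, the short script below (our sketch, not code from the paper) enumerates the consensus rounds and checks numerically that $H_T$ grows like $T^{1/(p+1)}$; rounding the fractional intervals to integers is a detail the analysis leaves implicit.

```python
import math

def communication_rounds(T, p):
    """Iteration indices of consensus steps when the j-th intercommunication
    interval is h_j = j**p (rounded to an integer -- our assumption)."""
    rounds, t, j = [], 0, 1
    while True:
        t += max(1, round(j ** p))  # h_j cheap iterations, then one consensus step
        if t > T:
            break
        rounds.append(t)
        j += 1
    return rounds

# H_T should grow like T^(1/(p+1)); with p = 0.3 that exponent is 1/1.3.
for T in (10**3, 10**4, 10**5):
    H_T = len(communication_rounds(T, p=0.3))
    print(T, H_T, round(T ** (1 / 1.3)))
```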
5 Experimental Evaluation
To verify our theoretical findings, we implement DDA on a cluster of 14 nodes with 3.2 GHz Pentium 4HT processors and 1 GB of memory each, connected via Ethernet that allows for roughly 11 MB/sec throughput per node. Our implementation is in C++, using the send and receive functions of OpenMPI v1.4.4 for communication. The Armadillo v2.3.91 library, linked to LAPACK and BLAS, is used for efficient numerical computations.

5.1 Application to Metric Learning
Metric learning [11, 12, 13] is a computationally intensive problem where the goal is to find a distance metric $D(u, v)$ such that related points have a very small distance under $D$, while for unrelated points $D$ is large. Following the formulation in [14], we have a data set $\{u_j, v_j, s_j\}_{j=1}^m$ with $u_j, v_j \in \mathbb{R}^d$ and $s_j \in \{-1, 1\}$ signifying whether or not $u_j$ is similar to $v_j$ (e.g., similar if they are from the same class). Our goal is to find a symmetric positive semidefinite matrix $A \succeq 0$ to define a pseudo-metric of the form $D_A(u, v) = \sqrt{(u-v)^T A (u-v)}$. To that end, we use a hinge-type loss function $l_j(A, b) = \max\{0,\; s_j(D_A(u_j, v_j)^2 - b) + 1\}$, where $b \ge 1$ is a threshold that determines whether two points are dissimilar according to $D_A(\cdot,\cdot)$. In the batch setting, we formulate the convex optimization problem
$$\operatorname*{minimize}_{A,\,b} \; F(A, b) = \sum_{j=1}^{m} l_j(A, b) \quad \text{subject to } A \succeq 0,\; b \ge 1. \qquad (32)$$
The subgradient of $l_j$ at $(A, b)$ is zero if $s_j(D_A(u_j, v_j)^2 - b) \le -1$. Otherwise,
$$\frac{\partial l_j(A,b)}{\partial A} = s_j (u_j - v_j)(u_j - v_j)^T, \quad\text{and}\quad \frac{\partial l_j(A,b)}{\partial b} = -s_j. \qquad (33)$$
Since DDA uses vectors $x_i(t)$ and $z_i(t)$, we represent each pair $(A_i(t), b_i(t))$ as a $d^2 + 1$ dimensional vector. The communication cost is thus quadratic in the dimension. In step (3) of DDA, we use the proximal function $\psi(x) = \frac{1}{2}x^Tx$, in which case (4) simplifies to taking $x_i(t) = -a(t-1)z_i(t)$, followed by projecting $x_i(t)$ onto the constraint set by setting $b_i(t) \leftarrow \max\{1, b_i(t)\}$ and projecting $A_i(t)$ onto the set of positive semidefinite matrices: we first take its eigenvalue decomposition and then reconstruct $A_i(t)$ after forcing any negative eigenvalues to zero.

We use the MNIST digits dataset, which consists of 28 x 28 pixel images of handwritten digits 0 through 9. Representing images as vectors, we have $d = 28^2 = 784$ and a problem with $d^2 + 1 = 614657$ dimensions, trying to learn a 784 x 784 matrix $A$. With double-precision arithmetic, each DDA message has a size of approximately 4.7 MB. We construct a dataset by randomly selecting 5000 pairs from the full MNIST data. One node needs 29 seconds to compute a gradient on this dataset, and sending and receiving 4.7 MB takes 0.85 seconds. The communication/computation tradeoff value is estimated as $r = \frac{0.85}{29} \approx 0.0293$. According to (11), when $G$ is a complete graph, we expect optimal performance when using $n_{\mathrm{opt}} = \frac{1}{\sqrt{r}} = 5.8$ nodes. Figure 1 (left) shows the evolution of the average function value $\bar F(t) = \frac{1}{n}\sum_i F(\hat x_i(t))$ for 1 to 14 processors connected as a complete graph, where $\hat x_i(t)$ is as defined in (5). There is a very good match between theory and practice, since the fastest convergence is achieved with $n = 6$ nodes.

In the second experiment, to make $r$ closer to 0, we apply PCA to the original data and keep the top 87 principal components, containing 90% of the energy. The dimension of the problem is reduced dramatically to 87 x 87 + 1 = 7570, and the message size to 59 KB. Using 60000 random pairs of MNIST data, the time to compute one gradient on the entire dataset with one node is 2.1 seconds, while the time to transmit and receive 59 KB is only 0.0104 seconds. Again, for a complete graph, Figure 1 (right) illustrates the evolution of $\bar F(t)$ for 1 to 14 nodes. As we see, increasing $n$ speeds up the computation. The speedup we get is close to linear at first, but diminishes since communication is not entirely free. In this case, $r = \frac{0.0104}{2.1} = 0.005$ and $n_{\mathrm{opt}} = 14.15$.

5.2 Nonsmooth Convex Minimization
Next we create an artificial problem where the minima of the components $f_i(x)$ at each node are very different, so that communication is essential in order to obtain an accurate optimizer of $F(x)$.

Figure 1: (Left) In a subset of the full MNIST data, for our specific hardware, $n_{\mathrm{opt}} = \frac{1}{\sqrt{r}} = 5.8$; the fastest convergence is achieved on a complete graph of 6 nodes. (Right) In the reduced MNIST data using PCA, the communication cost drops, and a speedup is achieved by scaling up to 14 processors. (Both panels plot $\bar F(t)$ versus time in seconds for $n = 1, 2, 4, 6, 8, 10, 12, 14$.)

We define $f_i(x)$ as a sum of high-dimensional quadratics,
$$f_i(x) = \sum_{j=1}^{M} \max\big\{l^1_{j|i}(x),\; l^2_{j|i}(x)\big\}, \qquad l^\alpha_{j|i}(x) = (x - c^\alpha_{j|i})^T(x - c^\alpha_{j|i}), \quad \alpha \in \{1, 2\}, \qquad (34)$$
where $x \in \mathbb{R}^{10000}$, $M = 15000$, and $c^1_{j|i}, c^2_{j|i}$ are the centers of the quadratics. Figure 2 illustrates again the average function value $\bar F(t)$ for 10 nodes in a complete graph topology. The baseline performance is when nodes communicate at every iteration ($h = 1$). For this problem $r = 0.00089$ and, from (21), $h_{\mathrm{opt}} = 1$. Naturally, communicating every 2 iterations ($h = 2$) slows down convergence. Over the duration of the experiment, with $h = 2$, each node communicates with its peers 55 times.
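The tradeoff constant $r$ and the resulting $n_{\mathrm{opt}}$ and $h_{\mathrm{opt}}$ from (11) and (21) are simple to compute from measured timings. The sketch below reproduces the numbers reported above; the function names and structure are ours, not the paper's.

```python
import math

def tradeoff(grad_seconds, msg_seconds):
    """r = (time to send + receive one message) / (time for one full-data gradient)."""
    return msg_seconds / grad_seconds

def n_opt(r):
    """Optimal number of processors on a complete graph, eq. (11)."""
    return 1.0 / math.sqrt(r)

def h_opt(n, k, r, sigma2):
    """Optimal intercommunication interval, eq. (21)."""
    return math.sqrt(n * k * r / (18.0 + 12.0 / (1.0 - sigma2)))

r_full = tradeoff(grad_seconds=29.0, msg_seconds=0.85)    # ~0.0293
r_pca = tradeoff(grad_seconds=2.1, msg_seconds=0.0104)    # ~0.005
print(n_opt(r_full), n_opt(r_pca))                        # ~5.8 and ~14.1 nodes

# Nonsmooth problem: complete graph of 10 nodes (k = 9, sigma2 = 0), r = 0.00089.
# Values below 1 mean a consensus step every iteration, matching h_opt = 1 above.
print(max(1.0, h_opt(10, 9, 0.00089, 0.0)))
```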
We selected $p = 0.3$ for increasingly sparse communication, and got $H_T = 53$ communications per node. As we see, even though the nodes communicate as much as in the $h = 2$ case, convergence is even faster than when communicating at every iteration. This verifies our intuition that communication is more important in the beginning. Finally, the case where $p = 1$ is shown. This value is outside the permissible range, and as expected, DDA does not converge to the right solution.

Figure 2: Sparsifying communication to minimize (34) with 10 nodes in a complete graph topology. When waiting $t^{0.3}$ iterations between consensus steps, convergence is faster than communicating at every iteration ($h = 1$), even though the total number of consensus steps performed over the duration of the experiment is equal to communicating every 2 iterations ($h = 2$). When waiting a linear number of iterations between consensus steps ($h = t$), DDA does not converge to the right solution. Note: all methods are initialized from the same value; the x-axis starts at 5 sec. (The plot shows $\bar F(t)$, on a scale of $10^5$, versus time in seconds.)

6 Conclusions and Future Work
The analysis and experimental evaluation in this paper focus on distributed dual averaging and reveal its capability to scale with the network size. We expect that similar results hold for other consensus-based algorithms such as [5], as well as for various distributed averaging-type algorithms (e.g., [15, 16, 17]). In the future we will extend the analysis to the case of stochastic optimization, where $h_t = t^p$ could correspond to using increasingly larger mini-batches.

References
[1] O. Reingold, S. Vadhan, and A. Wigderson, "Entropy waves, the zig-zag graph product, and new constant-degree expanders," Annals of Mathematics, vol. 155, no. 2, pp. 157-187, 2002.
[2] Y. Nesterov, "Primal-dual subgradient methods for convex problems," Mathematical Programming Series B, vol. 120, pp. 221-259, 2009.
[3] R. Bekkerman, M. Bilenko, and J. Langford, Scaling up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, 2011.
[4] J. Duchi, A. Agarwal, and M. Wainwright, "Dual averaging for distributed optimization: Convergence analysis and network scaling," IEEE Transactions on Automatic Control, vol. 57, no. 3, pp. 592-606, 2011.
[5] A. Nedic and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization," IEEE Transactions on Automatic Control, vol. 54, no. 1, January 2009.
[6] B. Johansson, M. Rabi, and M. Johansson, "A randomized incremental subgradient method for distributed optimization in networked systems," SIAM Journal on Control and Optimization, vol. 20, no. 3, 2009.
[7] S. S. Ram, A. Nedic, and V. V. Veeravalli, "Distributed stochastic subgradient projection algorithms for convex optimization," Journal of Optimization Theory and Applications, vol. 147, no. 3, pp. 516-545, 2011.
[8] A. Agarwal and J. C. Duchi, "Distributed delayed stochastic optimization," in Neural Information Processing Systems, 2011.
[9] K. I. Tsianos and M. G. Rabbat, "Distributed dual averaging for convex optimization under communication delays," in American Control Conference (ACC), 2012.
[10] F. Chung, Spectral Graph Theory. AMS, 1998.
[11] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell, "Distance metric learning, with application to clustering with side-information," in Neural Information Processing Systems, 2003.
[12] K. Q. Weinberger and L. K. Saul, "Distance metric learning for large margin nearest neighbor classification," Journal of Machine Learning Research, vol. 10, pp. 207-244, 2009.
[13] K. Q. Weinberger, F. Sha, and L. K. Saul, "Convex optimizations for distance metric learning and pattern classification," IEEE Signal Processing Magazine, 2010.
[14] S. Shalev-Shwartz, Y. Singer, and A. Y. Ng, "Online and batch learning of pseudo-metrics," in ICML, 2004, pp. 743-750.
[15] M. A. Zinkevich, M. Weimer, A. Smola, and L. Li, "Parallelized stochastic gradient descent," in Neural Information Processing Systems, 2010.
[16] R. McDonald, K. Hall, and G. Mann, "Distributed training strategies for the structured perceptron," in Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2012, pp. 456-464.
[17] G. Mann, R. McDonald, M. Mohri, N. Silberman, and D. D. Walker, "Efficient large-scale distributed training of conditional maximum entropy models," in Neural Information Processing Systems, 2009, pp. 1231-1239.
Fully Bayesian inference for neural models with negative-binomial spiking

Jonathan W. Pillow, Center for Perceptual Systems, Department of Psychology, The University of Texas at Austin, [email protected]
James G. Scott, Division of Statistics and Scientific Computation, McCombs School of Business, The University of Texas at Austin, [email protected]

Abstract
Characterizing the information carried by neural populations in the brain requires accurate statistical models of neural spike responses. The negative-binomial distribution provides a convenient model for over-dispersed spike counts, that is, responses with greater-than-Poisson variability. Here we describe a powerful data-augmentation framework for fully Bayesian inference in neural models with negative-binomial spiking. Our approach relies on a recently described latent-variable representation of the negative-binomial distribution, which equates it to a Polya-Gamma mixture of normals. This framework provides a tractable, conditionally Gaussian representation of the posterior that can be used to design efficient EM and Gibbs-sampling-based algorithms for inference in regression and dynamic factor models. We apply the model to neural data from primate retina and show that it substantially outperforms Poisson regression on held-out data, and reveals latent structure underlying spike count correlations in simultaneously recorded spike trains.

1 Introduction
A central problem in systems neuroscience is to understand the probabilistic representation of information by neurons and neural populations. Statistical models play a critical role in this endeavor, as they provide essential tools for quantifying the stochasticity of neural responses and the information they carry about various sensory and behavioral quantities of interest. Poisson and conditionally Poisson models feature prominently in systems neuroscience, as they provide a convenient and tractable description of spike counts governed by an underlying spike rate. However, Poisson models are limited by the fact that they constrain the ratio between the spike count mean and variance to one. This assumption does not hold in many brain areas, particularly cortex, where responses are often over-dispersed relative to Poisson [1].

A second limitation of Poisson models in regression analyses (for relating spike responses to stimuli) or latent factor analyses (for finding common sources of underlying variability) is the difficulty of performing fully Bayesian inference. The posterior formed under a Poisson likelihood and Gaussian prior has no tractable representation, so most theorists resort to either fast, approximate methods based on Gaussians [2-9], or slower, sampling-based methods that may scale poorly with data or dimensionality [10-15].

The negative-binomial (NB) distribution generalizes the Poisson with a shape parameter that controls the tradeoff between mean and variance, providing an attractive alternative for over-dispersed spike count data. Although well known in statistics, it has only recently been applied to neural data [16-18]. Here we describe fully Bayesian inference methods for neural spike count data, based on a recently developed representation of the NB as a Gaussian mixture model [19]. In the following, we review the conditionally Gaussian representation of the negative binomial (Sec. 2); describe batch-EM, online-EM, and Gibbs-sampling-based inference methods for NB regression (Sec. 3); describe sampling-based methods for dynamic latent factor models (Sec. 4); and show applications to spiking data from primate retina.
Figure 1: Representations of the negative-binomial (NB) regression model. (A) Graphical model for the standard gamma-Poisson mixture representation of the NB. The linearly projected stimulus $\psi_t = \beta^T x_t$ defines the scale parameter for a gamma r.v. with shape parameter $\xi$, giving $\lambda_t \sim \mathrm{Ga}(\xi, e^{\psi_t})$, which is in turn the rate for a Poisson spike count: $y_t \sim \mathrm{Poiss}(\lambda_t)$. (B) Graphical model illustrating the novel representation as a Polya-Gamma (PG) mixture of normals. Spike counts are represented as NB distributed with shape $\xi$ and rate $p_t = 1/(1 + e^{-\psi_t})$. The latent variable $\omega_t$ is conditionally PG, while $\psi$ (and $\beta \mid x$) are normal given $(\omega_t, \xi)$, which facilitates efficient inference. (C) Relationship between spike-count mean and variance for different settings of the shape parameter $\xi$, illustrating the super-Poisson variability of the NB model (the Poisson case, variance equal to mean, is shown for comparison).

2 The negative-binomial model
Begin with the single-variable case where the data $Y = \{y_t\}$ are scalar counts observed at times $t = 1, \ldots, N$. A standard Poisson generalized linear model (GLM) assumes that $y_t \sim \mathrm{Pois}(e^{\psi_t})$, where the log rate parameter $\psi_t$ may depend upon the stimulus. One difficulty with this model is that the variance of the Poisson distribution is equal to its mean, an assumption that is violated in many data sets [20-22]. To relax this assumption, we can consider the negative-binomial model, which can be described as a doubly stochastic or hierarchical Poisson model [18]. Suppose that $y_t$ arises according to:
$$(y_t \mid \lambda_t) \sim \mathrm{Pois}(\lambda_t), \qquad (\lambda_t \mid \xi, \psi_t) \sim \mathrm{Ga}\big(\xi,\, e^{\psi_t}\big),$$
where we have parametrized the gamma distribution in terms of its shape and scale parameters. By marginalizing over the top-level model for $\lambda_t$, we recover a negative-binomial distribution for $y_t$:
$$p(y_t \mid \xi, \psi_t) \propto (1 - p_t)^{\xi}\, p_t^{y_t},$$
where $p_t$ is related to $\psi_t$ via the logistic transformation:
$$p_t = \frac{e^{\psi_t}}{1 + e^{\psi_t}}.$$
The extra parameter $\xi$ therefore allows for over-dispersion compared to the Poisson, with the count $y_t$ having expected value $\xi e^{\psi_t}$ and variance $\xi e^{\psi_t}(1 + e^{\psi_t})$. (See Fig. 1.)
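The gamma-Poisson construction is easy to check by simulation. The snippet below (our illustration, not from the paper) draws from the hierarchy and verifies the mean $\xi e^{\psi}$ and variance $\xi e^{\psi}(1 + e^{\psi})$; the particular values of $\xi$ and $\psi$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
xi, psi = 4.0, 0.5                      # shape parameter and linear predictor (arbitrary)
lam = rng.gamma(shape=xi, scale=np.exp(psi), size=200_000)  # lambda_t ~ Ga(xi, e^psi)
y = rng.poisson(lam)                    # y_t | lambda_t ~ Pois(lambda_t)

mean_theory = xi * np.exp(psi)
var_theory = xi * np.exp(psi) * (1 + np.exp(psi))
print(y.mean(), mean_theory)            # both ~6.59: super-Poisson mean is xi * e^psi
print(y.var(), var_theory)              # both ~17.5: variance exceeds the mean
```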
The same approach has also been applied to logistic and Poisson regression [30, e.g.]. Unfortunately, none of these schemes lead to a fully automatic approach to posterior inference, as they require either approximations (whose quality must be validated) or the careful selection of tuning constants (as is typically required when using, for example, the Metropolis?Hastings sampler in very high dimensions). To proceed with Bayesian inference in the negative-binomial model, we appeal to a recent latentvariable construction (depicted in Fig. 1B) from [19] based on the theory of Polya-Gamma random variables. The basic result we exploit is that the negative binomial likelihood can be represented as a mixture of normals with Polya-Gamma mixing distribution. The algorithms that result from this scheme are both exact (in the sense of avoiding analytic approximations) and fully automatic. Definition 1. A random variable X has a Polya-Gamma distribution with parameters b > 0 and c 2 R, denoted X ? PG(b, c), if 1 1 X X= 2? 2 (k gk , 1/2)2 + c2 /(4? 2 ) D k=1 (2) D where each gk ? Ga(b, 1) is an independent gamma random variable, and where = denotes equality in distribution. We make use of four important facts about Polya-Gamma variables from [19]. First, suppose that p(!) denotes the density of the random variable ! ? PG(b, 0), for b > 0. Then for any choice of a, Z 1 2 (e )a b ? =2 e e ! /2 p(!) d! , (3) (1 + e )b 0 where ? = a b/2. This integral identity allows us to rewrite each term in the negative binomial likelihood (eq. 1) as Z 1 2 {exp( t )}yt ? yt ?t t (1 pt ) pt = /e e !t /2 p(! | ? + yt , 0) d! , (4) {1 + exp( t )}h+yt 0 where ?t = (yt ?)/2, and where the mixing distribution is Polya-Gamma. Conditional upon !t , we have a likelihood proportional to e Q( t ) for some quadratic form Q, which will be conditionally conjugate to any Gaussian or mixture-of-Gaussians prior for t . This conditional Gaussianity can be exploited to great effect in MCMC, EM, and sequential Monte Carlo algorithms, as described in the next section. A second important fact is that the conditional distribution p(! | ) = R 1 0 e e ! 2 ! 2 /2 /2 p(!) p(!) d! is also in the Polya-Gamma class: (! | ) ? PG(b, ). In this sense, the Polya-Gamma distribution is conditionally conjugate to the NB likelihood, which is very useful for Gibbs sampling. Third, although the density of a Polya-Gamma random variable can be expressed only as an infinite series, its expected value is known in closed form: if ! ? PG(b, c), then b tanh(c/2) . (5) 2c As we show in the next section, this expression comes up repeatedly when fitting negative-binomial models via expectation-maximization, where these moments of !t form a set of sufficient statistics for the complete-data log posterior distribution in . E(!) = 3 Finally, despite the awkward form of the density function, it is still relatively easy to simulate random Polya-Gamma draws, avoiding entirely the need to truncate the infinite sum in Equation 2. As the authors of [19] show, this can be accomplished via a highly efficient accept-reject algorithm using ideas from [31]. The proposal distribution requires only exponential, uniform, and normal random variates; and the algorithm?s acceptance probability is uniformly bounded below at 0.9992 (implying roughly 8 rejected draws out of every 10,000 proposals). As we now describe, these four facts are sufficient to allow straightforward Bayesian inference for negative-binomial models. 
We focus first on regression models, for which we derive simple Gibbs sampling and EM algorithms. We then turn to negative-binomial dynamic factor models, which can be fit using a variant of the forward-filter, backwards-sample (FFBS) algorithm [32]. 3 3.1 Negative-binomial regression Fully Bayes inference via MCMC Suppose that t = xTt for some p-vector of regressors xt . Then, conditional upon !t , the contribution of observation t to the likelihood is Lt ( ) / / exp{?t xTt !t (xTt )2 /2} ( ? ?2 ) ! t yt ? T exp xt . 2 2!t Let ? = diag(!1 , . . . , !n ); let zt = (yt ?)/(2!t ); and let z denote the stacked vector of zt terms. Combining all terms in the likelihood leads to a Gaussian linear-regression model where 1 (z | , ?) ? N (X , ? ). It is usually reasonable to assume a conditionally Gaussian prior, ? N (c, C). Note that C itself may be random, as in, for example, a Bayesian lasso or horseshoe prior [33?35]. Gibbs sampling proceeds in two simple steps: (!t | ?, ) ( | ?, z) ? ? PG(yt + ?, xTt ) N (m, V ) , where PG denotes a Polya-Gamma draw, and where V = (X T ?X + C m = V (X T ?z + C 1 ) 1 1 c) . One may update the dispersion parameter ? via Gibbs sampling, using the method described in [36]. 3.2 Batch EM for MAP estimation We may also use the same data-augmentation trick in an expectation-maximization (EM) algorithm to compute the maximum a-posteriori (MAP) estimate ?. Returning to the likelihood in (4) and ignoring constants of proportionality, we may write the complete-data log posterior distribution, given !1 , . . . , !N , as N ? X yt ? (xT )2 Q( ) = log p( | Y, !1 , . . . , !N ) = (xTt ) ? !t t + log p( ) 2 2 t=1 for some prior p( ). This expression is linear in !t . Therefore we may compute E{Q( )} by substituting ! ? t = E(!t | ), given the current value of , into the above expression. Appealing to (5), these conditional expectations are available in closed form: ? ? ?t E(!t | ) = tanh(xTt /2) , xTt where ?t = (yt ?)/2. In the M step, we re-express E{Q( )} as 1 T E{Q( )} = S + T d + log p( ) , 2 4 where the complete-data sufficient statistics are S d ? X T ?X T X ? = = ? = diag(!?1 , . . . , ! for ? ? N ) and ? = (?1 , . . . , ?N )T . Thus the M step is a penalized weighted least squares problem, which can be solved using standard methods. In fact, it is typically unnecessary to maximize E{Q( )} exactly at each iteration. As is well established in the literature on the EM algorithm, it is sufficient to move to a value of that merely improves that observed-data objective function. We have found that it is much faster to take a single step of the conjugate conjugategradient algorithm (in which case in will be important to check for improvement over the previous iteration); see, e.g. [37] for details. 3.3 Online EM For very large data sets, the above batch algorithm may be too slow. In such cases, we recommend computing the MAP estimate via an online EM algorithm [38], as follows. Suppose that our current estimate of the parameter is (t 1) , and that the current estimate of the complete-data log posterior is 1 T (t 1) Q( ) = S + T d(t 1) + log p( ) , (6) 2 where S (t 1) t 1 X = ! ? i xi xTi i=1 d(t 1) t 1 X = ?i x i , i=1 recalling that ?i = (yi value of !t as ?)/2. After observing new data (yt , xt ), we first compute the expected ? ? ?t (t 1) ! ? t = E(!t | yt , )= tanh( t /2) , t with t = denoting the linear predictor evaluated at the current estimate. We then update the sufficient statistics recursively as xTt (t 1) S (t) = (1 d(t) = (1 (t 1) + t! ? 
3.3 Online EM

For very large data sets, the above batch algorithm may be too slow. In such cases, we recommend computing the MAP estimate via an online EM algorithm [38], as follows. Suppose that our current estimate of the parameter is β^{(t−1)}, and that the current estimate of the complete-data log posterior is

    Q(β) = −(1/2) β^T S^{(t−1)} β + β^T d^{(t−1)} + log p(β),   (6)

where

    S^{(t−1)} = Σ_{i=1}^{t−1} ω̂_i x_i x_i^T
    d^{(t−1)} = Σ_{i=1}^{t−1} κ_i x_i,

recalling that κ_i = (y_i − ξ)/2. After observing new data (y_t, x_t), we first compute the expected value of ω_t as

    ω̂_t = E(ω_t | y_t, β^{(t−1)}) = ((ξ + y_t) / (2ψ_t)) tanh(ψ_t / 2),

with ψ_t = x_t^T β^{(t−1)} denoting the linear predictor evaluated at the current estimate. We then update the sufficient statistics recursively as

    S^{(t)} = (1 − γ_t) S^{(t−1)} + γ_t ω̂_t x_t x_t^T
    d^{(t)} = (1 − γ_t) d^{(t−1)} + γ_t κ_t x_t,

where γ_t is the learning rate. We then plug these updated sufficient statistics into (6), and solve the M step to move to a new value of β. The data can also be processed in batches of size larger than 1, with obvious modifications to the updates for S^{(t)} and d^{(t)}; we have found that batch sizes of order √p tend to work well, although we are unaware of any theory to support this choice.

In high-dimensional problems, the usual practice is to impose sparsity via an ℓ1 penalty on the regression coefficients, leading to a lasso-type prior. In this case, the M-step in the online algorithm can be solved very efficiently using the modified shooting algorithm, a coordinate-descent method described in a different context by [39] and [40]. This online EM is guaranteed to converge to a stationary point of the log posterior distribution if the learning rate decays in time such that Σ_{t=1}^∞ γ_t = ∞ and Σ_{t=1}^∞ γ_t^2 < ∞. (If the penalty function is concave and ξ is fixed, then this stationary point will be the global maximum.) A simple choice for the learning rate is γ_t = 1/t^a for a ∈ (0.5, 1), with a = 0.7 being our default choice.
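The recursion is short enough to state in full. The following is a minimal sketch of our own (not the authors' implementation); for simplicity the M step is an unpenalized least-squares solve with a tiny ridge term for numerical stability, where a real implementation would instead apply the prior, e.g. via the shooting algorithm.

    import numpy as np

    def online_em_nb(stream, p, xi, a=0.7):
        # Online EM for NB regression; `stream` yields (y_t, x_t) pairs.
        S = np.zeros((p, p))
        d = np.zeros(p)
        beta = np.zeros(p)
        for t, (y_t, x_t) in enumerate(stream, start=1):
            psi = float(x_t @ beta)
            kappa = (y_t - xi) / 2.0
            # E step: omega_hat = ((xi + y_t)/(2 psi)) tanh(psi/2); limit (xi + y_t)/4 at psi = 0.
            if abs(psi) < 1e-10:
                omega_hat = (xi + y_t) / 4.0
            else:
                omega_hat = (xi + y_t) / (2.0 * psi) * np.tanh(psi / 2.0)
            g = t ** (-a)                                  # learning rate gamma_t = 1/t^a
            S = (1.0 - g) * S + g * omega_hat * np.outer(x_t, x_t)
            d = (1.0 - g) * d + g * kappa * x_t
            # M step: maximize -0.5 b'Sb + b'd under a flat prior; a sparse prior would
            # call a coordinate-descent / shooting solver here instead.
            beta = np.linalg.solve(S + 1e-8 * np.eye(p), d)
        return beta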
4 Factor analysis for negative-binomial spiking

Let ψ_t = (ψ_t1, ..., ψ_tK)^T denote a vector of K linear predictors at time t, corresponding to K different neurons with observed counts Y_t = (y_t1, ..., y_tK)^T. We propose a dynamic negative-binomial factor model for Y_t, with a vector autoregressive (VAR) structure for the latent factors:

    y_tk ~ NB(ξ, e^{ψ_tk}) for k = 1, ..., K
    ψ_t = μ + B f_t
    f_t = Φ f_{t−1} + ν_t,  ν_t ~ N(0, σ^2 I).

Here f_t denotes an L-vector of latent factors, with L typically much smaller than K. The K × L factor-loadings matrix B is restricted to have zeroes above the diagonal, and to have positive diagonal entries. These restrictions are traditional in Bayesian factor analysis [41], and ensure that B is formally identified. We also assume that Φ is a diagonal matrix, and impose conjugate inverse-gamma priors on σ^2 to ensure that, marginally over the latent factors f_t, the entries of ψ_t have approximately unit variance. Although we do not pursue the point here, the mean term μ can incorporate the effect of known predictors with no additional complication to the analysis.

By exploiting the Polya-Gamma data-augmentation scheme, posterior inference in this model may proceed via straightforward Gibbs sampling, something not previously possible for count-data factor models. Prior work on latent-variable modeling of spike data has relied on either Gaussian approximations [2–6, 8] or variants of particle filtering [10–13].

Gibbs sampling proceeds as follows. Conditional upon B and f_t, we update the latent variables as ω_tk ~ PG(y_tk + ξ, B_k f_t), where B_k denotes the kth row of the loadings matrix. The mean vector μ and factor-loadings matrix B can both be updated in closed form via a Gaussian draw using the full conditional distributions given in, for example, [42] or [43]. Given all latent variables and other parameters of the model, the factors f_t can be updated in a single block using the forward-filter, backwards-sample (FFBS) algorithm from [32]. First, pass forwards through the data from y_1 to y_N, recursively computing the filtered moments of f_t as

    M_t = (V_t^{−1} + B^T Ω_t B)^{−1}
    m_t = M_t (B^T Ω_t z_t + V_t^{−1} Φ m_{t−1}),

where

    V_t = Φ M_{t−1} Φ^T + σ^2 I
    z_t = (z_t1, ..., z_tK)^T, with z_tk = (y_tk − ξ)/(2 ω_tk) − μ_k
    Ω_t = diag(ω_t1, ..., ω_tK).

Then draw f_N ~ N(m_N, M_N) from its conditional distribution. Finally, pass backwards through the data, sampling f_t as (f_t | m_t, M_t, f_{t+1}) ~ N(a_t, A_t), where

    A_t = (M_t^{−1} + σ^{−2} Φ^T Φ)^{−1}
    a_t = A_t (M_t^{−1} m_t + σ^{−2} Φ^T f_{t+1}).

This will result in a block draw of all N × L factors from their joint conditional distribution.
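To fix ideas, here is a sketch of one FFBS block draw under the model above. This is our own illustrative code, written under the assumption that the PG variables, B, Φ, μ, and σ^2 have already been updated elsewhere in the Gibbs sweep and that the adjusted observations z_t have been precomputed.

    import numpy as np

    def ffbs_factors(Z, Omega, B, Phi, sigma2, f0_mean, f0_cov, rng):
        # One block draw of f_1..f_N. Z: (N, K) rows z_t; Omega: (N, K) rows omega_t;
        # B: (K, L) loadings; Phi: (L, L) diagonal VAR matrix.
        N, K = Z.shape
        L = B.shape[1]
        ms, Ms = np.empty((N, L)), np.empty((N, L, L))
        m_prev, M_prev = f0_mean, f0_cov
        for t in range(N):                                       # forward filtering pass
            V = Phi @ M_prev @ Phi.T + sigma2 * np.eye(L)        # V_t = Phi M_{t-1} Phi' + s^2 I
            BtO = B.T * Omega[t]                                 # B' Omega_t (Omega_t diagonal)
            Ms[t] = np.linalg.inv(np.linalg.inv(V) + BtO @ B)    # M_t
            ms[t] = Ms[t] @ (BtO @ Z[t] + np.linalg.solve(V, Phi @ m_prev))  # m_t
            m_prev, M_prev = ms[t], Ms[t]
        f = np.empty((N, L))
        f[-1] = rng.multivariate_normal(ms[-1], Ms[-1])          # f_N ~ N(m_N, M_N)
        for t in range(N - 2, -1, -1):                           # backward sampling pass
            Minv = np.linalg.inv(Ms[t])
            A = np.linalg.inv(Minv + (Phi.T @ Phi) / sigma2)     # A_t
            a = A @ (Minv @ ms[t] + Phi.T @ f[t + 1] / sigma2)   # a_t
            f[t] = rng.multivariate_normal(a, A)
        return f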
5 Experiments

To demonstrate our methods, we performed regression and dynamic factor analyses on a dataset of 27 neurons recorded from primate retina (published in [44] and re-used with the authors' permission). Briefly, these data consist of spike responses from a simultaneously recorded population of ON and OFF parasol retinal ganglion cells, stimulated with a flickering, 120-Hz binary white-noise stimulus.

5.1 Regression

Figure 2 shows a comparison of a Poisson model versus a negative-binomial model for each of the 27 neurons in the retinal dataset. We binned spike counts in 8 ms bins, and regressed against a temporally lagged stimulus, resulting in a 100-element (10 × 10 pixel) spatial receptive field for each neuron. To benchmark the two methods, we created 50 random train/test splits from a full dataset of 30,000 points, with 7,500 points held out for validation. Using each training set, we used our online maximum-likelihood method to fit an NB model to each of the 27 neurons, and then used these models to compute held-out log-likelihoods on the test set versus a standard Poisson GLM.

[Figure 2: Boxplots of improvement in held-out log likelihoods (NB versus Poisson regression) for 50 train/test splits on each of the 27 neurons in the primate retinal data.]

As Figure 2 shows, the NB model has a higher average held-out log-likelihood than the Poisson model. In some cases it is dozens of orders of magnitude better (as in neurons 12–14 and 22–27), suggesting that there is substantial over-dispersion in the data that is not faithfully captured by the Poisson model. We emphasize that this is a "weak-signal" regime, and that overdispersion is likely to be less when the signal is stronger. Yet these results suggest, at the very least, that many of these neurons have marginal distributions that are quite far from Poisson. Moreover, regardless of the underlying signal strength, the regression problem can be handled quite straightforwardly using our online method, even in high dimensions, without settling for the restrictive Poisson assumption.

5.2 Dynamic factor analysis

To study the factor-modeling framework, we conducted parallel experiments on both simulated and real data. First, we simulated two different data sets comprising 1000 time points and 11 neurons, each from a two-factor model: one with high factor autocorrelation (φ = 0.98), and one with low factor autocorrelation (φ = 0.5). The two questions of interest here are: how well does the fully Bayesian method reconstruct the correlation structure among the unobserved rate parameters ψ_tk; and how well does it distinguish between a high-autocorrelation and a low-autocorrelation regime in the underlying low-dimensional representation? The results in Figure 3 suggest that the results, on both counts, are highly accurate. It is especially interesting to compare the left-most column of Figure 3 with the actual cross-sectional correlation of ψ_t, the systematic component of variation, in the second column. The correlations of the raw counts y_t show a dramatic attenuation effect, compared to the real latent states. Yet this structure is uncovered easily by the model, together with a full assessment of posterior uncertainty. The approach behaves much like a model-based version of principal-components analysis, appropriate for non-Gaussian data.

Finally, Figure 4 shows the results of fitting a two-factor model to the primate retinal data. We are able to uncover latent structure in the data in a completely unsupervised fashion. As with the simulated data, it is interesting to compare the correlation of the raw counts y_t with the estimated correlation structure of the latent states. There is also strong support for a low-autocorrelation regime in the factors, in light of the posterior mean factor scores depicted in the right-most pane.

[Figure 3: Results for two simulated data sets with high factor autocorrelation (top row) and low factor autocorrelation (bottom row). The three left-most columns show the raw correlation among the counts y_t; the actual correlation, E(ψ_t ψ_t^T), of the latent states; and the posterior mean estimator for the correlation of the latent states. The right-most column shows the simulated spike trains for the 11 neurons, along with the factors f_t in blue (with 75% credible intervals), plotted over time.]

[Figure 4: Results for factor analysis of the primate retinal data. Panels, left to right: correlation among spike counts; estimated correlation of the latent states; spike counts; posterior mean factor scores.]

6 Discussion

Negative-binomial models have only recently been explored in systems neuroscience, despite their favorable properties for handling data with larger-than-Poisson variation. Likewise, Bayesian inference for the negative-binomial model has traditionally been a difficult problem, with the existence of a fully automatic Gibbs sampler only recently discovered [19]. Our paper has made three specific contributions to this literature. First, we have shown that negative-binomial models can lead to substantial improvements in fit, compared to the Poisson, for neural data exhibiting over-dispersion. Such models can be fit straightforwardly via MCMC for a wide class of prior distributions over model parameters (including sparsity-inducing choices, such as the lasso). Second, we have proposed a novel online-EM algorithm for sparse NB regression. This algorithm inherits all the convergence properties of EM, but is scalable to extremely large data sets. Finally, we have embedded a dynamic factor model inside a negative-binomial likelihood. This latter approach can be extended quite easily to spatial interactions, more general state-space models, or mixed models incorporating both regressors and latent variables. All of these extensions, as well as the model-selection question (how many factors?), form promising areas for future research.

Acknowledgments

We thank E. J. Chichilnisky, A. M. Litke, A. Sher and J. Shlens for retinal data, J. Windle for PG sampling code, and J. H. Macke for helpful comments. This work was supported by a Sloan Research Fellowship, McKnight Scholar's Award, and NSF CAREER Award IIS-1150186 (JP).
References

[1] Roland Baddeley, L. F. Abbott, Michael C. A. Booth, Frank Sengpiel, Tobe Freeman, Edward A. Wakeman, and Edmund T. Rolls. Proceedings of the Royal Society of London. Series B: Biological Sciences, 264(1389):1775–1783, 1997.
[2] E. Brown, L. Frank, D. Tang, M. Quirk, and M. Wilson. Journal of Neuroscience, 18:7411–7425, 1998.
[3] L. Srinivasan, U. Eden, A. Willsky, and E. Brown. Neural Computation, 18:2465–2494, 2006.
[4] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Journal of Neurophysiology, 102(1):614, 2009.
[5] W. Wu, J. E. Kulkarni, N. G. Hatsopoulos, and L. Paninski. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, 17(4):370–378, 2009.
[6] Liam Paninski, Yashar Ahmadian, Daniel Gil Ferreira, Shinsuke Koyama, Kamiar Rahnama Rad, Michael Vidne, Joshua Vogelstein, and Wei Wu. J Comput Neurosci, Aug 2009.
[7] J. W. Pillow, Y. Ahmadian, and L. Paninski. Neural Comput, 23(1):1–45, Jan 2011.
[8] M. Vidne, Y. Ahmadian, J. Shlens, J. W. Pillow, J. Kulkarni, A. M. Litke, E. J. Chichilnisky, E. P. Simoncelli, and L. Paninski. J. Computational Neuroscience, pages 1–25, 2012. To appear.
[9] John P. Cunningham, Krishna V. Shenoy, and Maneesh Sahani. Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 192–199, New York, NY, USA, 2008. ACM.
[10] A. E. Brockwell, A. L. Rojas, and R. E. Kass. J Neurophysiol, 91(4):1899–1907, Apr 2004.
[11] S. Shoham, L. Paninski, M. Fellows, N. Hatsopoulos, J. Donoghue, and R. Normann. IEEE Transactions on Biomedical Engineering, 52:1312–1322, 2005.
[12] Ayla Ergun, Riccardo Barbieri, Uri T. Eden, Matthew A. Wilson, and Emery N. Brown. IEEE Trans Biomed Eng, 54(3):419–428, Mar 2007.
[13] A. E. Brockwell, R. E. Kass, and A. B. Schwartz. Proceedings of the IEEE, 95:1–18, 2007.
[14] R. P. Adams, I. Murray, and D. J. C. MacKay. Proceedings of the 26th Annual International Conference on Machine Learning. ACM, New York, NY, USA, 2009.
[15] Y. Ahmadian, J. W. Pillow, and L. Paninski. Neural Comput, 23(1):46–96, Jan 2011.
[16] M. C. Teich and W. J. McGill. Physical Review Letters, 36(13):754–758, 1976.
[17] Arno Onken, Steffen Grünewälder, Matthias H. J. Munk, and Klaus Obermayer. PLoS Comput Biol, 5(11):e1000577, 2009.
[18] R. Goris, E. P. Simoncelli, and J. A. Movshon. Computational and Systems Neuroscience (CoSyNe), Salt Lake City, Utah, February 2012.
[19] N. G. Polson, J. G. Scott, and J. Windle. arXiv preprint arXiv:1205.0310, 2012.
[20] P. Lánský and J. Vaillant. Biosystems, 58(1):27–32, 2000.
[21] V. Ventura, C. Cai, and R. E. Kass. Journal of Neurophysiology, 94(4):2928–2939, 2005.
[22] Neural Comput, 18(11):2583–2591, Nov 2006.
[23] A. M. Skene and J. C. Wakefield. Statistics in Medicine, 9:919–929, 1990.
[24] J. Carlin. Statistics in Medicine, 11:141–158, 1992.
[25] Eric T. Bradlow, Bruce G. S. Hardie, and Peter S. Fader. Journal of Computational and Graphical Statistics, 11(1):189–201, 2002.
[26] A. Gelman, A. Jakulin, M. G. Pittau, and Y. Su. The Annals of Applied Statistics, 2(4):1360–1383, 2008.
[27] A. Dobra, C. Tebaldi, and M. West. Journal of Statistical Planning and Inference, 136(2):355–372, 2006.
[28] James H. Albert and Siddhartha Chib. Journal of the American Statistical Association, 88(422):669–679, 1993.
[29] M. Bethge and P. Berens. Advances in Neural Information Processing Systems, 20:97–104, 2008.
[30] C. Holmes and L. Held. Bayesian Analysis, 1(1):145–168, 2006.
[31] Luc Devroye. Statistics & Probability Letters, 79(21):2251–2259, 2009.
[32] Chris Carter and Robert Kohn. Biometrika, 81(3):541–553, 1994.
[33] Trevor Park and George Casella. Journal of the American Statistical Association, 103(482):681–686, 2008.
[34] Chris M. Hans. Biometrika, 96(4):835–845, 2009.
[35] Carlos M. Carvalho, Nicholas G. Polson, and James G. Scott. Biometrika, 97(2):465–480, 2010.
[36] Mingyuan Zhou, Lingbo Li, David Dunson, and Lawrence Carin. International Conference on Machine Learning (ICML), 2012.
[37] Nicholas G. Polson and James G. Scott. Technical report, University of Texas at Austin, http://arxiv.org/abs/1103.5407v3, 2011.
[38] O. Cappé and E. Moulines. Journal of the Royal Statistical Society (Series B), 71(3):593–613, 2009.
[39] Suhrid Balakrishnan and David Madigan. Journal of Machine Learning Research, 9:313–337, 2008.
[40] Liang Sun and James G. Scott. Technical report, University of Texas at Austin, 2012.
[41] H. Lopes and M. West. Statistica Sinica, 14:41–67, 2004.
[42] Joyee Ghosh and David B. Dunson. Journal of Computational and Graphical Statistics, 18(2):306–320, 2009.
[43] P. R. Hahn, Carlos M. Carvalho, and James G. Scott. Journal of the Royal Statistical Society, Series C, 2012.
[44] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Nature, 454:995–999, 2008.
Dynamic Pruning of Factor Graphs for Maximum Marginal Prediction

Christoph H. Lampert
IST Austria (Institute of Science and Technology Austria)
Am Campus 1, 3400 Klosterneuburg, Austria
http://www.ist.ac.at/~chl
[email protected]

Abstract

We study the problem of maximum marginal prediction (MMP) in probabilistic graphical models, a task that occurs, for example, as the Bayes optimal decision rule under a Hamming loss. MMP is typically performed as a two-stage procedure: one estimates each variable's marginal probability and then forms a prediction from the states of maximal probability. In this work we propose a simple yet effective technique for accelerating MMP when inference is sampling-based: instead of the above two-stage procedure we directly estimate the posterior probability of each decision variable. This allows us to identify the point in time when we are sufficiently certain about any individual decision. Whenever this is the case, we dynamically prune the variables we are confident about from the underlying factor graph. Consequently, at any time only samples of variables whose decision is still uncertain need to be created. Experiments in two prototypical scenarios, multi-label classification and image inpainting, show that adaptive sampling can drastically accelerate MMP without sacrificing prediction accuracy.

1 Introduction

Probabilistic graphical models (PGMs) have become useful tools for classical machine learning tasks, such as multi-label classification [1] or semi-supervised learning [2], as well as for many real-world applications, for example image processing [3], natural language processing [4], bioinformatics [5], and computational neuroscience [6]. Despite their popularity, the question of how to best perform (approximate) inference in any given graphical model is still far from solved. While variational approximations and related message passing algorithms have proven useful for certain classes of models (see [7] for an overview), there is still a large number of cases for which sampling-based approaches are the safest choice. Unfortunately, inference by sampling is often computationally costly: many samples are required to reach a confident result, and generating the individual samples can be a complex task in itself, in particular if the underlying graphical model is large and highly connected.

In this work we study a particular inference problem: maximum marginal prediction (MMP) in binary-valued PGMs, i.e. the task of determining for each variable in the graphical model which of its states has highest marginal probability. MMP occurs naturally as the Bayes optimal decision rule under Hamming loss [8], and it has also found use as a building block for more complex prediction tasks, such as M-best MAP prediction [9]. The standard approach to sampling-based MMP is to estimate each variable's marginal probability distribution from a set of samples from the joint probability, and for each variable pick the state of highest estimated marginal probability. In this work, we propose an almost as simple, but more efficient way. We introduce one binary indicator variable for each decision we need to make, and keep estimates of the posterior probabilities of each of these during the process of sampling. As soon as we are confident enough about any of the decisions, we remove it from the factor graph that underlies the sampling process, so no more samples are generated for it.
Consequently, the factor graph shrinks over time, and later steps in the sampling procedure are accelerated, often drastically so. Our main contribution lies in the combination of two relatively elementary components that we will introduce in the following section: an estimate for the posterior distributions of the decision variables, and a mean field-like construction for removing individual variables from a factor graph.

2 Adaptive Sampling for Maximum Marginal Prediction

Let p(x) be a fixed probability distribution over the set X = {0, 1}^V of binary labelings of a vertex set V = {1, ..., n}. We assume that p is given to us by means of a factor graph, G = (V, F), with factor set F = {F_1, ..., F_k}. Each factor, F_j ⊆ V, has an associated log-potential, ψ_j, which is a real-valued function of only the variables occurring in F_j. Writing x_{F_j} = (x_i)_{i∈F_j}, we have

    p(x) ∝ exp(−E(x)) with E(x) = Σ_{F∈F} ψ_F(x_F)   (1)

for any x ∈ {0, 1}^V. Our goal is maximum marginal prediction, i.e. to infer the values of decision variables (z_i)_{i∈V} that are defined by z_i := 0 if μ_i ≤ 0.5, and z_i := 1 otherwise, where μ_i := p(x_i = 1) is the marginal probability of the ith variable taking the value 1.

Computing the marginals μ_i in a loopy graphical model is in general #P-complete [10], so one has to settle for approximate marginals and approximate predictions. In this work, we assume access to a suitably constructed sampler based on the Markov chain Monte Carlo (MCMC) principle [11, 12], e.g. a Gibbs sampler [3]. It produces a chain of states S_m = {x^{(1)}, ..., x^{(m)}}, where each x^{(j)} is a random sample from the joint distribution p(x). From the set of samples we can compute an estimate, μ̂_i = (1/m) Σ_{j=1}^m x_i^{(j)}, of the true marginal, μ_i, and make approximate decisions: ẑ_i := 1 if and only if μ̂_i ≥ 0.5. Under mild conditions on the sampling procedure, the law of large numbers guarantees that lim_{m→∞} μ̂_i = μ_i, and the decisions will become correct almost surely.

The main problem with sampling-based inference is when to stop sampling [13]. The more samples we have, the lower the variance on the estimates, so the more confident we can be about our decisions. However, each sample we generate increases the computational cost at least proportionally to the numbers of factors and variables involved. At the same time, the variance of the estimators μ̂_i is reduced only proportionally to the square root of the sample size. In combination, this means that often, one spends a large amount of computational resources on a small win in predictive accuracy.

In the rest of this section, we explain our proposed idea of adaptive sampling in graphical models, which reduces the number of variables and factors during the course of the sampling procedure. As an illustrative example we start with the classical situation of adaptive sampling in the case of a single binary variable. This is a special case of Bayesian hypothesis selection, and, for the case of i.i.d. data, it has recently also been rediscovered in the pattern recognition literature, for example for evaluating decision trees [14]. We then introduce our proposed extensions to correlated samples, and show how the per-variable decisions can be applied in the graphical model situation with potentially many variables and dependencies between them.
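As a point of reference, the naive two-stage MMP procedure from this setup takes only a few lines (our own sketch; `samples` is an m × n binary array of joint draws):

    import numpy as np

    def naive_mmp(samples):
        # samples: (m, n) array of joint draws x^{(1)}..x^{(m)} from p(x).
        mu_hat = samples.mean(axis=0)       # estimated marginals mu_hat_i
        return (mu_hat >= 0.5).astype(int)  # decisions z_hat_i

The adaptive method developed next replaces the implicit fixed sample budget in this procedure with a per-variable stopping decision.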
2.1 Adaptive Sampling of Binary Variables

Let x be a single binary variable, for which we have a set of samples, S = {x^{(1)}, ..., x^{(N)}}, available. The main insight lies in the fact that even though samples are used to empirically estimate the (marginal) probability μ, the latter is not the actual quantity of interest to us. Ultimately, we are only interested in the value of the associated decision variable z.

Independent samples. Assuming for the moment that the samples are independent (i.i.d.), we can derive an analytic expression for the posterior probability of z given the observed samples,

    p(z = 0 | S) = ∫_0^{1/2} p(q | S) dq,   (2)

where p(q | S) is the conditional probability density for μ having the value q. Applying Bayes' rule with likelihood p(x | q) = q^x (1 − q)^{1−x} and uniform prior, p(q) = 1, results in

    p(z = 0 | S) = (1 / B(m+1, N−m+1)) ∫_0^{1/2} q^m (1−q)^{N−m} dq = I_{1/2}(m+1, N−m+1),   (3)

where m = Σ_{j=1}^N x^{(j)}. The normalization factor B(α, β) = Γ(α)Γ(β)/Γ(α+β) is the beta function; the integral is called the incomplete beta function (here evaluated at 1/2). In combination, they form the regularized incomplete beta function I_x(α, β) [15].

From the above derivation we obtain a stopping criterion of ε-confidence: given any number of samples, we compute p(z = 0 | S) using Equation (3). If its value is above 1 − ε, we are ε-confident that the correct decision is z = 0. If it is below ε, we are equally confident that the correct decision is z = 1. Only if it lies in between do we need to continue sampling. An analogous derivation to the above leads to a confidence bound for estimates of the marginal probability, μ̂ = m/N, itself:

    p(|μ̂ − μ| ≤ θ | S) = I_{μ̂+θ}(m+1, N−m+1) − I_{μ̂−θ}(m+1, N−m+1).   (4)

Note that both tests are computable fast enough to be performed after each sample, or after small batches of samples. Evaluating the regularized incomplete beta function does not require numeric integration, and for a fixed parameter ε the values of N and m that bound the regions of confidence can also be tabulated [16].

A figure illustrating the difference between confidence in the MMP and confidence in the estimated marginals can be found in the supplemental material. It shows that only relatively few independent samples (tens to hundreds) are sufficient to get a very confident MMP decision if the actual marginals are close to 0 or 1. Intuitively, this makes sense, since in this situation even a coarse estimate of the marginal is sufficient to make a decision with low error probability. Only if the true marginal lies inside a relatively narrow interval around 0.5 does the MMP decision become hard, and a large number of samples will be necessary to make a confident decision. Our experiments in Section 4 will show that in practical problems where the probability distribution is learned from data, the regions close to 0 and 1 are in fact the most relevant ones.
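Equation (3) is a one-liner with a standard beta CDF. A small sketch of the resulting ε-confidence test (our own code; the threshold value ε = 10^{-5} used later in the experiments is just one possible default):

    from scipy.stats import beta

    def decision_confidence(m, N):
        # p(z = 0 | S) = I_{1/2}(m+1, N-m+1), the Beta(m+1, N-m+1) CDF at 1/2.
        return beta.cdf(0.5, m + 1, N - m + 1)

    def confident_decision(m, N, eps=1e-5):
        # Returns 0 or 1 once eps-confident, or None while sampling must continue.
        p0 = decision_confidence(m, N)
        if p0 > 1 - eps:
            return 0
        if p0 < eps:
            return 1
        return None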
Dependent samples. Practical sampling procedures, such as MCMC, do not create i.i.d. samples, but dependent ones. Using the above bounds directly with these would make the tests overconfident. We overcome this problem, approximately, by borrowing the concept of effective sample size (ESS) from the statistics literature. Intuitively, the ESS reflects how many independent samples, N′, a set of N correlated samples is equivalent to. To first order¹, one estimates the effective sample size as

    N′ = ((1 − r) / (1 + r)) N,

where r is the first-order autocorrelation coefficient, r = (1/(N−1)) Σ_{j=1}^{N−1} (x^{(j)} − μ̂)(x^{(j+1)} − μ̂) / σ̂², and σ̂² is the estimated variance of the sample sequence. Consequently, we can adjust the confidence tests defined above to correlated data: we first collect a small number of samples, N₀, which we use to estimate initial values of σ̂² and r. Subsequently, we estimate the confidence of a decision by

    p(z = 0 | S) = I_{1/2}(μ̂ N′ + 1, (1 − μ̂) N′ + 1),   (5)

i.e. we replace the sample size N by the effective sample size N′ and the raw count m by its adjusted value μ̂ N′.

2.2 Adaptive Sampling in Graphical Models

In this section we extend the above confidence criterion from single binary decisions to the situation of joint sampling from the joint probability of multiple binary variables. Note that we are only interested in per-variable decisions, so we can treat the value of each variable x_i^{(j)} in a joint sample x^{(j)} as a separate sample from the marginal probability p(x_i). We will have to take the dependence between different samples x_i^{(j)} and x_i^{(k)} into account, but dependencies between variables within a sample do not pose problems. Consequently, estimating the confidence of any decision variable z_i is straightforward from Equation (5), applied separately to the binary sample set S_i = {x_i^{(1)}, ..., x_i^{(N)}}. Note that all quantities defined above for the single-variable case need to be computed separately for each decision. For example, each variable has its own autocorrelation estimate and effective sample size.

¹ Many more involved methods for estimating the effective sample size exist, see, for example, [17], but in our experiments the first-order method proved sufficient for our purposes.
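Putting the pieces together per variable, a sketch (ours) of the ESS-adjusted test applied to one variable's sample sequence; the degenerate all-equal case is handled with a hard decision, which is a simplification of our own:

    import numpy as np
    from scipy.stats import beta

    def adjusted_confidence(xs):
        # xs: 1-D array of correlated binary samples x^{(1)}..x^{(N)} for one variable.
        N = xs.size
        mu = xs.mean()
        var = xs.var()
        if var == 0:                      # every sample identical: treat as decided
            return 1.0 if mu < 0.5 else 0.0
        r = np.mean((xs[:-1] - mu) * (xs[1:] - mu)) / var    # first-order autocorrelation
        n_eff = max((1 - r) / (1 + r) * N, 1.0)              # effective sample size N'
        return beta.cdf(0.5, mu * n_eff + 1, (1 - mu) * n_eff + 1)   # p(z_i = 0 | S_i)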
The difference to the binary situation lies in what we do when we are confident enough about the decision of some subset of variables, V^c ⊆ V. Simply stopping all sampling would be too risky, since we are still uncertain about the decisions of V^u := V \ V^c. Continuing to sample until we are certain about all decisions would be wasteful, since we know that variables with marginal close to 0.5 require many more samples than others for a confident decision. We therefore propose to continue sampling, but only for the variables about which we are still uncertain. This requires us to derive an expression for p(x_u), the marginal probability of all variables that we are still uncertain about.

Computing p(x_u) = Σ_{x̄_c ∈ {0,1}^{V^c}} p(x̄_c, x_u) exactly is almost always infeasible; otherwise, we would not have needed to resort to sampling-based inference in the first place. An alternative idea would be to continue using the original factor graph, but to clamp all variables we are certain about to their MMP values. This is computationally feasible, but it results in samples from a conditional distribution, p(x_u | x_c = z_c), not from the desired marginal one.

The new construction that we introduce combines advantages of both previous ideas: it is computationally as efficient as value clamping, but it uses a distribution that approximates the marginal distribution as closely as possible. Similarly as in mean-field methods [7], the main step consists of finding distributions q and q′ such that p(x) ≈ q(x_u) q′(x_c). Subsequently, q(x_u) can be used as an approximate replacement for p(x_u), because p(x_u) = Σ_{x̄_c ∈ {0,1}^{V^c}} p(x) ≈ Σ_{x̄_c ∈ {0,1}^{V^c}} q′(x̄_c) q(x_u) = q(x_u). The main difference to mean-field inference lies in the fact that q and q′ have different roles in our construction. For q′ we prefer a distribution that factorizes over the variables that we are confident about. Because we want q′ also to respect the marginal probabilities, μ̂_i for i ∈ V^c, as estimated from the sampling process so far, we obtain q′(x_c) = Π_{i∈V^c} μ̂_i^{x_i} (1 − μ̂_i)^{1−x_i}. The distribution q contains all variables that we are not yet confident about, so we want to avoid making any limiting assumptions about its potential values or structure. Instead, we define it as the solution of minimizing KL(p | q q′) over all distributions q, which yields the solution

    q(x_u) ∝ exp( −E_{x̄_c ~ q′(x_c)} { E(x̄_c, x_u) } ).   (6)

What remains is to define factors F′ and log-potentials ψ′, such that q(x_u) ∝ exp(−Σ_{F∈F′} ψ′_F(x_F)) while also allowing for efficient sampling from q. For this we partition the original factor set into three disjoint sets, F = F^c ∪ F^u ∪ F₀, with F^c := {F ∈ F : F ⊆ V^c}, F^u := {F ∈ F : F ⊆ V^u}, and F₀ := F \ (F^c ∪ F^u). Each factor F₀ ∈ F₀ we split further into its certain and uncertain components, F₀^c ⊆ V^c and F₀^u ⊆ V^u, respectively. With this we obtain a decomposition of the exponent in Equation (6):

    E_{x̄_c ~ q′} {E(x̄_c, x_u)} = Σ_{F ∈ F^c} Σ_{x̄_F} q′(x̄_F) ψ_F(x̄_F) + Σ_{F ∈ F^u} ψ_F(x_F) + Σ_{F ∈ F₀} Σ_{x̄_{F^c}} q′(x̄_{F^c}) ψ_F(x̄_{F^c}, x_{F^u}).

The first sum is a constant with respect to x_u, so we can disregard it in the construction of F′. The factors and log-potentials in the second sum already depend only on V^u, so we can re-use them in unmodified form for F′; we set ψ′_F = ψ_F for every F ∈ F^u. The third sum we rewrite as Σ_{F^u = F ∩ V^u : F ∈ F₀} ψ′_{F^u}(x_{F^u}), with

    ψ′_{F^u}(x_u) := Σ_{x̄_c ∈ {0,1}^{F^c}} ( Π_{i∈F^c} μ̂_i^{x̄_i} (1 − μ̂_i)^{1−x̄_i} ) ψ_F(x̄_c, x_u)   (7)

for any F ∈ F₀, where we have made use of the explicit form of q′. If factors with identical variable set occur during this construction, we merge them by summing their log-potentials. Ultimately, we obtain a new factor set F′ := F^u ∪ {F ∩ V^u : F ∈ F₀}, and probability distribution

    q(x_u) ∝ exp( −Σ_{F∈F′} ψ′_F(x_F) ) for x_u ∈ {0, 1}^{V^u}.   (8)

Note that during the process, not only is the number of variables reduced, but also the number of factors and the size of each factor can never grow. Consequently, if sampling was feasible for the original distribution p, it will also be feasible for q, and potentially more efficient.
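A direct sketch of the per-factor construction in Equation (7), brute-forcing over the certain variables of one factor (our own tabular factor representation; all names are ours):

    import itertools
    import numpy as np

    def prune_factor(scope, table, certain, mu_hat):
        # scope: tuple of variable ids; table: log-potential array of shape (2,)*len(scope);
        # certain: set of ids in V^c; mu_hat: dict i -> estimated marginal mu_hat_i.
        keep = [k for k, i in enumerate(scope) if i not in certain]
        drop = [k for k, i in enumerate(scope) if i in certain]
        if not drop:
            return scope, table                   # factor already lies inside V^u
        new_scope = tuple(scope[k] for k in keep)
        new_table = np.zeros((2,) * len(keep))
        for assign in itertools.product((0, 1), repeat=len(drop)):
            w = 1.0
            for k, x in zip(drop, assign):        # weight = prod_i mu^x (1-mu)^(1-x)
                m = mu_hat[scope[k]]
                w *= m if x == 1 else 1.0 - m
            idx = [slice(None)] * len(scope)
            for k, x in zip(drop, assign):
                idx[k] = x
            new_table += w * table[tuple(idx)]    # accumulate Eq. (7) term by term
        return new_scope, new_table

The cost is exponential only in the number of certain variables within a single factor, which is bounded by the factor size; as noted above, the resulting factors never grow.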
These fall into a different category than the proposed method, though, as they are usually performed statically and prior to the actual inference step, so they cannot dynamically assign computational resources where they are needed most. Beam search [25] and related techniques take an orthogonal approach to ours. They dynamically exclude low-likelihood label combinations from the inference process, but they keep the size and topology of the factor graph fixed. Select and sample [26] disregards a data-dependent subset of variables during each sampling iterations. It is not directly applicable in our situation, though, since it requires that the underlying graphical model is bipartite, such that the individual variables are conditionally independent of each other. Given their complementary nature, we believe that the idea of combining adaptive MMP with beam search and/or select and sample could be a promising direction for future work. 4 Experimental Evaluation To demonstrate the effect of adaptive MMP compared to naive MMP, we performed experiments in two prototypical applications: multi-label classification and binary image inpainting. In both tasks, performance is typically measured by the Hamming loss, so MMP is the preferred method of test time prediction. 4.1 Multi-Label Classification In multi-label classification, the task is to predict for each input y ? Y, which labels out of a label set L = {1, . . . , K} are correct. The difference to multi-class classification is that several labels can be correct simultaneously, or potentially none at all. Multi-label classification can be formulated as simultaneous prediction of K binary labels (xi )i=1,...K , where xi = 1 indicates that the label i is part of the prediction, and xi = 0 indicates that it is not. Even though multi-label classification can in principle be solved by training K independent predictors, several studies have shown that by making use of dependencies between label, the accuracy of the individual predictions can be improved [1, 27, 28]. For our experiments we follow [1] in using a fully-connected conditional random field model. Given an input y, each label variable i has a unary factor Fi = {i} with log-linear potential ?i (xi ) = hwi , yixi , where wi is a label-specific weight vector that was learned from training data. Additionally there are K(K ? 1)/2 pairwise factors, Fij = {i, j}, with log-potentials ?ij (xi , xj ) = ?ij xi xj . Its free parameter ?ij is learned as well. The resulting conditional joint distribution has the form of a Boltzmann machine, p(x|y) ? exp(?Ey (x)), with energy function PL PL PK Ey (x) = i=1 ?i xi + i=1 j=i+1 ?ij xi xj in minimal representation, where ?i and ?ij depend on y. We downloaded several standard datasets and trained the CRF on each of them using a stochastic gradient descent procedure based on the sgd2 package. The necessary gradients are computing using a junction tree algorithms for problems with 20 variables or less, and by Gibbs sampling otherwise. For model selection, when required, we used 10-fold cross-validation on the training set. Note that our goal in this experiment is not to advocate a new model multi-label classification, but to create probability distributions as they would appear in real problems. 
Nevertheless, we also report classification accuracy in Table 1 to show that a) the learned models have similar characteristics as earlier work, in particular to [29], where the an identical model was trained using structured SVM learning, and b) adaptive MMP can achieve as high prediction accuracy as ordinary Gibbs sampling, as long as the confidence parameter  is not chosen overly optimistically. In fact, in many cases even 2 http://leon.bottou.org/projects/sgd 5 Dataset S YNTH 1 [29] S YNTH 2 [29] S CENE R CV 1-10 [29] M EDIAMILL -10 [29] Y EAST T MC 2007 AWA [30] M EDIAMILL R CV 1 #Labels 6 10 6 10 10 14 22 85 101 103 #Train 471 1000 1211 2916 29415 1500 21519 24295 29415 3000 #Test 5045 10000 1196 2914 12168 917 7077 6180 12168 3000 [29] [28] 6.9 ? 7.0 ? 10.1 9.5 ? 2.1 5.6 ? 18.8 ? 20.2 20.2 ? 1.3 ? 3.3 ? 2.7 ? ? ? 3.6 ? 0.5 ? ? Exact Gibbs Proposed 5.2 5.3 5.2 / 5.2 / 5.2 10.0 10.0 10.0/10.0/10.0 10.4 10.3 10.2/10.2/10.2 4.2 4.2 4.6 / 4.4 / 4.2 18.4 18.6 19.0/18.6/18.4 20.0 20.2 23.4/21.4/20.5 5.3 5.3 5.3 / 5.3 / 5.3 ? 32.2 32.7/32.7/32.7 ? 3.7 3.6 / 3.5 / 3.6 ? 1.5 1.7 / 1.6 / 1.5 Variables Iterations Table 1: Multi-label classification. Dataset characteristics (number of labels, number of training examples, number of test examples) and classification error rate in percent. [29] used the same model as we do, but trained it using a structured SVM framework and predicted using MAP. [28] compared 12 different multi-label classification techniques, we report their mean and standard deviation. The remaining columns give MMP prediction accuracy of the trained CRF models: Exact computes the exact marginal values by a junction tree, Gibbs and Proposed performs ordinary Gibbs sampling, or the proposed adaptive version with  = 10?2 /10?5 /10?8 , both run for up to 500 iterations. 1.0 1.0 1.0 0.8 0.8 0.8 0.6 0.6 0.6 0.4 0.4 0.4 0.2 0.2 0.0 Factors 0.1 1 10 100 1000 10000 0.0 0.01 0.1 1 10 100 1000 10000 0.5 1.0 1.0 0.4 0.8 0.8 0.3 0.6 0.6 0.2 0.4 0.4 0.1 0.2 0.2 0.0 0.0 0.01 0.1 1 10 100 1000 10000 0.1 1 10 100 1000 10000 0.5 1.0 0.20 0.4 0.8 0.15 0.3 0.6 0.10 0.2 0.4 0.05 0.1 0.2 0.00 0.0 0.1 1 10 100 1000 10000 0.1 1 10 100 1000 10000 1.0 1.0 0.4 0.8 0.8 0.3 0.6 0.6 0.2 0.4 0.4 0.1 0.2 0.2 0.0 0.0 0.1 1 10 100 1000 10000 0.1 1 10 100 1000 10000 0.01 0.1 1 10 100 1000 10000 0.01 0.1 1 10 100 1000 10000 0.01 0.1 1 10 100 1000 10000 0.0 0.01 0.5 0.01 0.01 0.0 0.01 0.25 0.01 Runtime 0.2 0.0 0.01 0.0 0.01 0.1 1 10 100 1000 10000 Figure 1: Results of adaptive pruning on RCV 1 dataset for  = 10?2 , 10?5 , 10?8 (left to right). x-axis: regularization parameter C used for training, y-axis: ratio of iterations/variables/factors/ runtime used by adaptive sampling relative to Gibbs sampling. a relative large value, such as  = 0.01 results in a smaller loss of accuracy than the potential 1%, but overall, a value of 10?5 or less seems advisable. Figures 1 and 2 show in more detail how the adaptive sampling behaves on two exemplary datasets with respect to four aspects: the number of iterations, the number of variables, the number of factors, and the overall runtime. For each aspect we show a box plot of the corresponding relative quantity compared to the Gibbs sampler. For example, a value of 0.5 in iterations means that the adaptive sample terminated after 250 iterations instead of the maximum of 500, because it was confident about all decisions. 
Values of 0.2 in variables and factors means that the number of variable states samples by the adaptive sampler was 20%, and the number of factors in the corresponding factor graphs was 10% of the corresponding quantities for the Gibbs sampler. Within each plot, we reported results for the complete range of regularization parameters in order to illustrate the effect that regularization has on the distribution of marginals. 6 Variables Iterations 1.0 1.0 1.0 0.8 0.8 0.8 0.6 0.6 0.6 0.4 0.4 0.4 0.2 0.2 0.0 Factors 0.1 1 10 100 1000 10000 0.0 0.01 0.1 1 10 100 1000 10000 0.5 1.0 1.0 0.4 0.8 0.8 0.3 0.6 0.6 0.2 0.4 0.4 0.1 0.2 0.2 0.0 0.0 0.01 0.1 1 10 100 1000 10000 0.1 1 10 100 1000 10000 0.5 1.0 0.20 0.4 0.8 0.15 0.3 0.6 0.10 0.2 0.4 0.05 0.1 0.2 0.00 0.0 0.1 1 10 100 1000 10000 0.1 1 10 100 1000 10000 1.0 1.0 0.4 0.8 0.8 0.3 0.6 0.6 0.2 0.4 0.4 0.1 0.2 0.1 1 10 100 1000 10000 1 10 100 1000 10000 0.01 0.1 1 10 100 1000 10000 0.01 0.1 1 10 100 1000 10000 0.01 0.1 1 10 100 1000 10000 0.2 0.0 0.01 0.1 0.0 0.01 0.5 0.0 0.01 0.0 0.01 0.25 0.01 Runtime 0.2 0.0 0.01 0.0 0.01 0.1 1 10 100 1000 10000 Figure 2: Results of adaptive pruning on YEAST dataset for  = 10?2 , 10?5 , 10?8 (left to right). x-axis: regularization parameter C used for training, y-axis: ratio of iterations/variables/factors/ runtime used by adaptive sampling relative to Gibbs sampling. Note that the scaling of the y-axis differs between columns. Figure 1 shows results for the relatively simple RCV 1 dataset. As one can see, a large number of variables and factors are removed quickly from the factor graph, leading to a large speedup compared to the ordinary Gibbs sampler. In fact, as the first row shows, it was possible to make a confident decision for all variables far before the 500th iteration, such that the adaptive method terminated early. As a general trend, the weaker the regularization (larger C value in the plot), the earlier the adaptive sampler is able to remove variables and factors, presumably because more extreme values of the energy function result in more marginal probabilities close to 0 or 1. A second insight is that despite the exponential scaling of the confidence parameter between the columns, the runtime grows only roughly linearly. This indicates that we can choose  conservatively without taking a large performance hit. On the hard YEAST dataset (Figure 2) in the majority of cases the adaptive sampling does not terminate early, indicating that some of the variables have marginal probabilities close to 0.5. Nevertheless, a clear gain in speed can be observed, in particular in the weakly regularized case, indicating that nevertheless, many tests for confidence are successful early during the sampling. 4.2 Binary Image Inpainting Inpainting is a classical image processing task: given an image (in our case black-and-white) in which some of the pixels are occluded or have missing values, the goal is to predict a completed image in which the missing pixels are set to their correct value, or at least in a visually pleasing way. Image inpainting has been tackled successfully by grid-shaped Markov random field models, where each pixel is represented by a random variable, unary factors encode local evidence extracted from the image, and pairwise terms encode the cooccurrence of pixel value. For our experiment, we use the Hard Energies from Chinese Characters (HECC) dataset [31], for which the authors provide pre-computed energy functions. The dataset has 100 images, each with between 4992 and 17856 pixels, i.e. 
4.2 Binary Image Inpainting

Inpainting is a classical image processing task: given an image (in our case black-and-white) in which some of the pixels are occluded or have missing values, the goal is to predict a completed image in which the missing pixels are set to their correct value, or at least in a visually pleasing way. Image inpainting has been tackled successfully by grid-shaped Markov random field models, where each pixel is represented by a random variable, unary factors encode local evidence extracted from the image, and pairwise terms encode the co-occurrence of pixel values.

For our experiment, we use the Hard Energies from Chinese Characters (HECC) dataset [31], for which the authors provide pre-computed energy functions. The dataset has 100 images, each with between 4992 and 17856 pixels, i.e. binary variables. Each variable has one unary and up to 64 pairwise factors, leading to an overall factor count of 146224 to 553726. Because many of the pairwise factors act repulsively, the underlying energy function is highly non-submodular, and sampling has proven a more successful means of inference than, for example, message passing [31].

Figure 3 shows exemplary results of the task. The complete set can be found in the supplemental material. In each case, we ran an ordinary Gibbs sampler and the adaptive sampler for 30 seconds, and we visualize the resulting marginal probabilities as well as the number of samples created for each of the pixels. One can see that adaptive sampling comes to a more confident prediction within the given time budget. The larger the ε parameter, the earlier it stops sampling the "easy" pixels, spending more time on the difficult cases, i.e. pixels with marginal probability close to 0.5.

[Figure 3: Example results of binary image inpainting on the HECC dataset, for Gibbs sampling and adaptive sampling with ε = 10^{-2}, 10^{-5}, 10^{-8}. From left to right: image to be inpainted, result of Gibbs sampling, results of adaptive sampling, where each method was run for up to 30 seconds per image. The left plot of each result shows the marginal probabilities; the right plot shows how often each pixel was sampled, on a log scale from 10 (dark blue) to 100000 (bright red). Gibbs sampling treats all pixels uniformly, reaching around 100 sampling sweeps within the given time budget. Adaptive sampling stops early for parts of the image that it is certain about, and concentrates its samples in the uncertain regions, i.e. pixels with marginal probability close to 0.5. The larger ε, the more pronounced this effect is.]

5 Summary and Outlook

In this paper we derived an analytic expression for how confident one can be about the maximum marginal predictions (MMPs) of a binary graphical model after a certain number of samples, and we presented a method for pruning factor graphs when we want to stop sampling for a subset of the variables. In combination, this allowed us to infer the MMPs more efficiently: starting from the whole factor graph, we sample sequentially, and whenever we are sufficiently certain about a prediction, we prune it from the factor graph before continuing to sample. Experiments on multi-label classification and image inpainting show a clear increase in performance at virtually no loss in accuracy, unless the confidence is chosen too optimistically.

Despite the promising results, there are two main limitations that we plan to address. On the one hand, the multi-label experiments showed that sometimes a conservative estimate of the confidence is required to achieve highest accuracy. This is likely a consequence of the fact that our pruning uses the estimated marginals to build a new factor graph, and even if the decision confidence is high, the marginals can still vary considerably. We plan to tackle this problem by also integrating bounds on the marginals with data-dependent confidence into our framework. A second limitation is that we can currently only handle binary-valued labelings. This is sufficient for multi-label classification and many problems in image processing, but ultimately, one would hope to derive similar early stopping criteria also for graphical models with larger label sets. Our pruning method would be readily applicable to this situation, but an open challenge lies in finding a suitable criterion when to prune variables.
This will require a deeper understanding of tail probabilities of multinomial decision variables, but we are confident it will be achievable, for example based on existing prior works for the case of i.i.d. samples [14, 32].

References

[1] N. Ghamrawi and A. McCallum. Collective multi-label classification. In CIKM, 2005.
[2] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
[3] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. PAMI, 6(6), 1984.
[4] S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing features of random fields. PAMI, 19(4), 1997.
[5] C. Yanover and Y. Weiss. Approximate inference and protein folding. In NIPS, volume 15, 2002.
[6] E. Schneidman, M. J. Berry, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087), 2006.
[7] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2), 2008.
[8] J. Marroquin, S. Mitter, and T. Poggio. Probabilistic solution of ill-posed problems in computational vision. Journal of the American Statistical Association, 82(397), 1987.
[9] C. Yanover and Y. Weiss. Finding the m most probable configurations using loopy belief propagation. In NIPS, volume 16, 2004.
[10] M. Jerrum and A. Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22, 1993.
[11] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993.
[12] D. J. C. MacKay. Introduction to Monte Carlo methods. In Proceedings of the NATO Advanced Study Institute on Learning in Graphical Models, 1998.
[13] A. E. Raftery and S. Lewis. How many iterations in the Gibbs sampler? Bayesian Statistics, 4(2), 1992.
[14] A. G. Schwing, C. Zach, Y. Zheng, and M. Pollefeys. Adaptive random forest – how many "experts" to ask before making a decision? In CVPR, 2011.
[15] H. Weiler. The use of incomplete beta functions for prior distributions in binomial sampling. Technometrics, 1965.
[16] C. M. Thompson, E. S. Pearson, L. J. Comrie, and H. O. Hartley. Tables of percentage points of the incomplete beta-function. Biometrika, 1941.
[17] R. V. Lenth. Some practical guidelines for effective sample size determination. The American Statistician, 55(3), 2001.
[18] A. Wald. Sequential tests of hypotheses. Annals of Mathematical Statistics, 16, 1945.
[19] D. A. Berry. Bayesian clinical trials. Nature Reviews Drug Discovery, 5(1), 2006.
[20] A. E. Raftery. Bayesian model selection in social research. Sociological Methodology, 25, 1995.
[21] D. Easley, N. M. Kiefer, M. O'Hara, and J. B. Paperman. Liquidity, information, and infrequently traded stocks. Journal of Finance, 1996.
[22] C. J. Geyer. Practical Markov chain Monte Carlo. Statistical Science, 7(4), 1992.
[23] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3), 1968.
[24] F. Bach and M. I. Jordan. Thin junction trees. In NIPS, volume 14, 2002.
[25] M. J. Collins. A new statistical parser based on bigram lexical dependencies. In ACL, 1996.
[26] J. A. Shelton, J. Bornschein, A. S. Sheikh, P. Berkes, and J. Lücke. Select and sample – a model of efficient neural inference and learning. In NIPS, volume 24, 2011.
[27] Y. Guo and S. Gu. Multi-label classification using conditional dependency networks. In IJCAI, 2011.
[28] G. Madjarov, D. Kocev, D. Gjorgjevikj, and S. Dzeroski. An extensive experimental comparison of methods for multi-label learning. Pattern Recognition, 2012.
[29] T. Finley and T. Joachims. Training structural SVMs when exact inference is intractable. In ICML, 2008.
[30] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
[31] S. Nowozin, C. Rother, S. Bagon, T. Sharp, B. Yao, and P. Kohli. Decision tree fields. In ICCV, 2011.
[32] D. Chafaï and D. Concordet. Confidence regions for the multinomial parameter with small sample size. Journal of the American Statistical Association, 104(487), 2009.
Efficient Monte Carlo Counterfactual Regret Minimization in Games with Many Player Actions

Richard Gibson, Neil Burch, Marc Lanctot, and Duane Szafron
Department of Computing Science, University of Alberta
Edmonton, Alberta, T6G 2E8, Canada
{rggibson | nburch | lanctot | dszafron}@ualberta.ca

Abstract

Counterfactual Regret Minimization (CFR) is a popular, iterative algorithm for computing strategies in extensive-form games. The Monte Carlo CFR (MCCFR) variants reduce the per iteration time cost of CFR by traversing a smaller, sampled portion of the tree. The previous most effective instances of MCCFR can still be very slow in games with many player actions since they sample every action for a given player. In this paper, we present a new MCCFR algorithm, Average Strategy Sampling (AS), that samples a subset of the player's actions according to the player's average strategy. Our new algorithm is inspired by a new, tighter bound on the number of iterations required by CFR to converge to a given solution quality. In addition, we prove a similar, tighter bound for AS and other popular MCCFR variants. Finally, we validate our work by demonstrating that AS converges faster than previous MCCFR algorithms in both no-limit poker and Bluff.

1 Introduction

An extensive-form game is a common formalism used to model sequential decision making problems containing multiple agents, imperfect information, and chance events. A typical solution concept in games is a Nash equilibrium profile. Counterfactual Regret Minimization (CFR) [12] is an iterative algorithm that, in 2-player zero-sum extensive-form games, converges to a Nash equilibrium. Other techniques for computing Nash equilibria of 2-player zero-sum games include linear programming [8] and the Excessive Gap Technique [6]. Theoretical results indicate that for a fixed solution quality, CFR takes a number of iterations at most quadratic in the size of the game [12, Theorem 4]. Thus, as we consider larger games, more iterations are required to obtain a fixed solution quality. Nonetheless, CFR's versatility and memory efficiency make it a popular choice.

Monte Carlo CFR (MCCFR) [9] can be used to reduce the traversal time per iteration by considering only a sampled portion of the game tree. For example, Chance Sampling (CS) [12] is an instance of MCCFR that only traverses the portion of the game tree corresponding to a single, sampled sequence of chance's actions. However, in games where a player has many possible actions, such as no-limit poker, iterations of CS are still very time consuming. This is because CS considers all possible player actions, even if many actions are poor or only factor little into the algorithm's computation. Our main contribution in this paper is a new MCCFR algorithm that samples player actions and is suitable for games involving many player choices. Firstly, we provide tighter theoretical bounds on the number of iterations required by CFR and previous MCCFR algorithms to reach a fixed solution quality. Secondly, we use these new bounds to propel our new MCCFR sampling algorithm. By using a player's average strategy to sample actions, convergence time is significantly reduced in large games with many player actions. We prove convergence and show that our new algorithm approaches equilibrium faster than previous sampling schemes in both no-limit poker and Bluff.

2 Background

A finite extensive game contains a game tree with nodes corresponding to histories of actions h ∈ H and edges corresponding to actions
a ∈ A(h) available to player P(h) ∈ N ∪ {c} (where N is the set of players and c denotes chance). When P(h) = c, σ_c(h, a) is the (fixed) probability of chance generating action a at h. Each terminal history z ∈ Z has associated utilities u_i(z) for each player i. We define Δ_i = max_{z,z'∈Z} u_i(z) − u_i(z') to be the range of utilities for player i. Non-terminal histories are partitioned into information sets I ∈ I_i representing the different game states that player i cannot distinguish between. For example, in poker, player i does not see the private cards dealt to the opponents, and thus all histories differing only in the private cards of the opponents are in the same information set for player i. The action sets A(h) must be identical for all h ∈ I, and we denote this set by A(I). We define |A_i| = max_{I∈I_i} |A(I)| to be the maximum number of actions available to player i at any information set. We assume perfect recall that guarantees players always remember information that was revealed to them and the order in which it was revealed.

A (behavioral) strategy for player i, σ_i ∈ Σ_i, is a function that maps each information set I ∈ I_i to a probability distribution over A(I). A strategy profile is a vector of strategies σ = (σ_1, ..., σ_|N|) ∈ Σ, one for each player. Let u_i(σ) denote the expected utility for player i, given that all players play according to σ. We let σ_{−i} refer to the strategies in σ excluding σ_i. Let π^σ(h) be the probability of history h occurring if all players choose actions according to σ. We can decompose π^σ(h) = Π_{i∈N∪{c}} π_i^σ(h), where π_i^σ(h) is the contribution to this probability from player i when playing according to σ_i (or from chance when i = c). Let π^σ_{−i}(h) be the product of all players' contributions (including chance) except that of player i. Let π^σ(h, h') be the probability of history h' occurring after h, given h has occurred. Furthermore, for I ∈ I_i, the probability of player i playing to reach I is π_i^σ(I) = π_i^σ(h) for any h ∈ I, which is well-defined due to perfect recall.

A best response to σ_{−i} is a strategy that maximizes player i's expected payoff against σ_{−i}. The best response value for player i is the value of that strategy, b_i(σ_{−i}) = max_{σ'_i∈Σ_i} u_i(σ'_i, σ_{−i}). A strategy profile σ is an ε-Nash equilibrium if no player can unilaterally deviate from σ and gain more than ε; i.e., u_i(σ) + ε ≥ b_i(σ_{−i}) for all i ∈ N. A game is two-player zero-sum if N = {1, 2} and u_1(z) = −u_2(z) for all z ∈ Z. In this case, the exploitability of σ, e(σ) = (b_1(σ_2) + b_2(σ_1))/2, measures how much σ loses to a worst case opponent when players alternate positions. A 0-Nash equilibrium (or simply a Nash equilibrium) has zero exploitability.

Counterfactual Regret Minimization (CFR) [12] is an iterative algorithm that, for two-player zero-sum games, computes an ε-Nash equilibrium profile with ε → 0. CFR has also been shown to work well in games with more than two players [1, 3]. On each iteration t, the base algorithm, "vanilla" CFR, traverses the entire game tree once per player, computing the expected utility for player i at each information set I ∈ I_i under the current profile σ^t, assuming player i plays to reach I. This expectation is the counterfactual value for player i,

v_i(I, σ) = Σ_{z∈Z_I} u_i(z) π^σ_{−i}(z[I]) π^σ(z[I], z),

where Z_I is the set of terminal histories passing through I and z[I] is that history along z contained in I. For each action
a ∈ A(I), these values determine the counterfactual regret at iteration t,

r_i^t(I, a) = v_i(I, σ^t_{(I→a)}) − v_i(I, σ^t),

where σ_{(I→a)} is the profile σ except that at I, action a is always taken. The regret r_i^t(I, a) measures how much player i would rather play action a at I than play σ^t. These regrets are accumulated to obtain the cumulative counterfactual regret, R_i^T(I, a) = Σ_{t=1}^T r_i^t(I, a), and are used to update the current strategy profile via regret matching [5, 12],

σ^{T+1}(I, a) = R_i^{T,+}(I, a) / Σ_{b∈A(I)} R_i^{T,+}(I, b),    (1)

where x^+ = max{x, 0} and actions are chosen uniformly at random when the denominator is zero. It is well-known that in a two-player zero-sum game, if both players' average (external) regret,

R_i^T / T = (1/T) max_{σ'_i∈Σ_i} Σ_{t=1}^T ( u_i(σ'_i, σ^t_{−i}) − u_i(σ_i^t, σ^t_{−i}) ),

is at most ε/2, then the average profile σ̄^T is an ε-Nash equilibrium. During computation, CFR stores a cumulative profile s_i^T(I, a) = Σ_{t=1}^T π_i^{σ^t}(I) σ_i^t(I, a) and outputs the average profile σ̄_i^T(I, a) = s_i^T(I, a) / Σ_{b∈A(I)} s_i^T(I, b).
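Equation (1) is straightforward to implement. The following is a minimal Python sketch (our own illustrative code, not the authors' implementation; the dictionary-based representation is an assumption):

```python
def regret_matching(cum_regret):
    # cum_regret: dict mapping action -> R_i^T(I, a).
    # Returns the next strategy sigma^{T+1}(I, .) per equation (1).
    positive = {a: max(r, 0.0) for a, r in cum_regret.items()}
    total = sum(positive.values())
    if total > 0.0:
        return {a: r / total for a, r in positive.items()}
    # all regrets non-positive: choose actions uniformly at random
    return {a: 1.0 / len(cum_regret) for a in cum_regret}
```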
The original CFR analysis shows that player i's regret is bounded by the sum of the positive parts of the cumulative counterfactual regrets R_i^{T,+}(I, a):

Theorem 1 (Zinkevich et al. [12]) R_i^T ≤ Σ_{I∈I_i} max_{a∈A(I)} R_i^{T,+}(I, a).

Regret matching minimizes the average of the cumulative counterfactual regrets, and so player i's average regret is also minimized by Theorem 1. For each player i, let B_i be the partition of I_i such that two information sets I, I' are in the same part B ∈ B_i if and only if player i's sequence of actions leading to I is the same as the sequence of actions leading to I'. B_i is well-defined due to perfect recall. Next, define the M-value of the game to player i to be M_i = Σ_{B∈B_i} √|B|. The best known bound on player i's average regret is:

Theorem 2 (Lanctot et al. [9]) When using vanilla CFR, average regret is bounded by

R_i^T / T ≤ Δ_i M_i √|A_i| / √T.

We prove a tighter bound in Section 3. For large games, CFR's full game tree traversals can be very expensive. Alternatively, one can traverse a smaller, sampled portion of the tree on each iteration using Monte Carlo CFR (MCCFR) [9]. Let Q = {Q_1, ..., Q_K} be a set of subsets, or blocks, of the terminal histories Z such that the union of Q spans Z. For example, Chance Sampling (CS) [12] is an instance of MCCFR that partitions Z into blocks such that two histories are in the same block if and only if no two chance actions differ. On each iteration, a block Q_j is sampled with probability q_j, where Σ_{k=1}^K q_k = 1. In CS, we generate a block by sampling a single action a at each history h ∈ H with P(h) = c according to its likelihood of occurring, σ_c(h, a). In general, the sampled counterfactual value for player i is

ṽ_i(I, σ) = Σ_{z∈Z_I∩Q_j} u_i(z) π^σ_{−i}(z[I]) π^σ(z[I], z) / q(z),

where q(z) = Σ_{k: z∈Q_k} q_k is the probability that z was sampled. For example, in CS, q(z) = π_c^σ(z). Define the sampled counterfactual regret for action a at I to be r̃_i^t(I, a) = ṽ_i(I, σ^t_{(I→a)}) − ṽ_i(I, σ^t). Strategies are then generated by applying regret matching to R̃_i^T(I, a) = Σ_{t=1}^T r̃_i^t(I, a). CS has been shown to significantly reduce computing time in poker games [11, Appendix A.5.2]. Other instances of MCCFR include External Sampling (ES) and Outcome Sampling (OS) [9]. ES takes CS one step further by considering only a single action for not only chance, but also for the opponents, where opponent actions are sampled according to the current profile σ^t_{−i}. OS is the most extreme version of MCCFR that samples a single action at every history, walking just a single trajectory through the tree on each traversal (Q_j = {z}). ES and OS converge to equilibrium faster than vanilla CFR in a number of different domains [9, Figure 1]. ES and OS yield a probabilistic bound on the average regret, and thus provide a probabilistic guarantee that σ̄^T converges to a Nash equilibrium. Since both algorithms generate blocks by sampling actions independently, we can decompose q(z) = Π_{i∈N∪{c}} q_i(z) so that q_i(z) is the probability contributed to q(z) by sampling player i's actions.

Theorem 3 (Lanctot et al. [9])¹ Let X be one of ES or OS (assuming OS also samples opponent actions according to σ_{−i}), let p ∈ (0, 1], and let δ = min_{z∈Z} q_i(z) > 0 over all 1 ≤ t ≤ T. When using X, with probability 1 − p, average regret is bounded by

R_i^T / T ≤ ( M_i + √(2|I_i||B_i|) / √p ) (1/δ) Δ_i √|A_i| / √T.

¹ The bound presented by Lanctot et al. appears slightly different, but the last step of their proof mistakenly used M_i ≤ √(|I_i||B_i|), which is actually incorrect. The bound we present here is correct.

3 New CFR Bounds

While Zinkevich et al. [12] bound a player's regret by a sum of cumulative counterfactual regrets (Theorem 1), we can actually equate a player's regret to a weighted sum of counterfactual regrets. For a strategy σ_i ∈ Σ_i and an information set I ∈ I_i, define R_i^T(I, σ_i) = Σ_{a∈A(I)} σ_i(I, a) R_i^T(I, a). In addition, let σ_i* ∈ Σ_i be a player i strategy such that σ_i* = arg max_{σ'_i∈Σ_i} Σ_{t=1}^T u_i(σ'_i, σ^t_{−i}). Note that in a two-player game, Σ_{t=1}^T u_i(σ_i*, σ^t_{−i}) = T u_i(σ_i*, σ̄^T_{−i}), and thus σ_i* is a best response to the opponent's average strategy after T iterations.

Theorem 4 R_i^T = Σ_{I∈I_i} π_i^{σ_i*}(I) R_i^T(I, σ_i*).

All proofs in this paper are provided in full as supplementary material. Theorem 4 leads to a tighter bound on the average regret when using CFR. For a strategy σ_i ∈ Σ_i, define the M-value of σ_i to be M_i(σ_i) = Σ_{B∈B_i} π_i^σ(B) √|B|, where π_i^σ(B) = max_{I∈B} π_i^σ(I). Clearly, M_i(σ_i) ≤ M_i for all σ_i ∈ Σ_i since π_i^σ(B) ≤ 1.
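For illustration, the M-value just defined can be computed directly from the reach probabilities. A minimal sketch (our own code; the input representation is hypothetical):

```python
import math

def m_value(parts, reach):
    # parts: the partition B_i, as a list of parts B (lists of information sets).
    # reach: dict mapping information set I -> pi_i^{sigma_i}(I).
    # M_i(sigma_i) = sum over B of (max_{I in B} reach[I]) * sqrt(|B|).
    return sum(max(reach[I] for I in B) * math.sqrt(len(B)) for B in parts)
```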
For vanilla CFR, we can simply replace M_i in Theorem 2 with M_i(σ_i*):

Theorem 5 When using vanilla CFR, average regret is bounded by

R_i^T / T ≤ Δ_i M_i(σ_i*) √|A_i| / √T.

For MCCFR, we can show a similar improvement to Theorem 3. Our proof includes a bound for CS that appears to have been omitted in previous work. Details are in the supplementary material.

Theorem 6 Let X be one of CS, ES, or OS (assuming OS samples opponent actions according to σ_{−i}), let p ∈ (0, 1], and let δ = min_{z∈Z} q_i(z) > 0 over all 1 ≤ t ≤ T. When using X, with probability 1 − p, average regret is bounded by

R_i^T / T ≤ ( M_i(σ_i*) + √(2|I_i||B_i|) / √p ) (1/δ) Δ_i √|A_i| / √T.

Theorem 4 states that player i's regret is equal to the weighted sum of player i's counterfactual regrets at each I ∈ I_i, where the weights are equal to player i's probability of reaching I under σ_i*. Since our goal is to minimize average regret, this means that we only need to minimize the average cumulative counterfactual regret at each I ∈ I_i that σ_i* plays to reach. Therefore, when using MCCFR, we may want to sample more often those information sets that σ_i* plays to reach, and less often those information sets that σ_i* avoids. This inspires our new MCCFR sampling algorithm.

4 Average Strategy Sampling

Leveraging the theory developed in the previous section, we now introduce a new MCCFR sampling algorithm that can minimize average regret at a faster rate than CS, ES, and OS. As we just described, we want our algorithm to sample more often the information sets that σ_i* plays to reach. Unfortunately, we do not have the exact strategy σ_i* on hand. Recall that in a two-player game, σ_i* is a best response to the opponent's average strategy, σ̄^T_{−i}. However, for two-player zero-sum games, we do know that the average profile σ̄^T converges to a Nash equilibrium. This means that player i's average strategy, σ̄_i^T, converges to a best response of σ̄^T_{−i}. While the average strategy is not an exact best response, it can be used as a heuristic to guide sampling within MCCFR. Our new sampling algorithm, Average Strategy Sampling (AS), selects actions for player i according to the cumulative profile and three predefined parameters. AS can be seen as a sampling scheme between OS and ES where a subset of player i's actions are sampled at each information set I, as opposed to sampling one action (OS) or sampling every action (ES). Given the cumulative profile s_i^T(I, ·) on iteration T, an exploration parameter ε ∈ (0, 1], a threshold parameter τ ∈ [1, ∞), and a bonus parameter β ∈ [0, ∞), each of player i's actions a ∈ A(I) are sampled independently with probability

ρ(I, a) = max{ ε, (β + τ s_i^T(I, a)) / (β + Σ_{b∈A(I)} s_i^T(I, b)) },    (2)

or with probability 1 if either ρ(I, a) > 1 or β + Σ_{b∈A(I)} s_i^T(I, b) = 0. As in ES, at opponent and chance nodes, a single action is sampled on-policy according to the current opponent profile σ^T_{−i} and the fixed chance probabilities σ_c respectively. If τ = 1 and β = 0, then ρ(I, a) is equal to the probability that the average strategy σ̄_i^T = s_i^T(I, a) / Σ_{b∈A(I)} s_i^T(I, b) plays a at I, except that each action is sampled with probability at least ε. For τ choices greater than 1, τ acts as a threshold so that any action taken with probability at least 1/τ by the average strategy is always sampled by AS. Furthermore, β's purpose is to increase the rate of exploration during early AS iterations. When β > 0, we effectively add β as a bonus to the cumulative value s_i^T(I, a) before normalizing. Since player i's average strategy σ̄_i^T is not a good approximation of σ_i* for small T, we include β to avoid making ill-informed choices early on. As the cumulative profile s_i^T(I, ·) grows over time, β eventually becomes negligible. In Section 5, we present a set of values for ε, τ, and β that work well across all of our test games. Pseudocode for a two-player version of AS is presented in Algorithm 1.
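To make equation (2) concrete before the pseudocode, here is a minimal Python sketch of the independent action-sampling step (our own illustrative code, not the authors' implementation; the default parameter values are those adopted in Section 5):

```python
import random

def sample_actions(cum_profile, eps=0.05, tau=1000.0, beta=1e6):
    # cum_profile: dict action -> s_i^T(I, a).
    # Returns [(action, rho)] for the sampled actions; rho (capped at 1)
    # is needed for the 1/q importance correction during the tree walk.
    denom = beta + sum(cum_profile.values())
    chosen = []
    for a, s in cum_profile.items():
        rho = 1.0 if denom == 0.0 else max(eps, (beta + tau * s) / denom)
        if rho >= 1.0 or random.random() < rho:
            chosen.append((a, min(1.0, rho)))
    return chosen
```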
Algorithm 1 Average Strategy Sampling (Two-player version)
1: Require: Parameters ε, τ, β
2: Initialize regret and cumulative profile: ∀I, a : r(I, a) ← 0, s(I, a) ← 0
3:
4: WalkTree(history h, player i, sample prob q):
5: if h ∈ Z then return u_i(h)/q end if
6: if h ∈ P(c) then Sample action a ∼ σ_c(h, ·), return WalkTree(ha, i, q) end if
7: I ← Information set containing h, σ(I, ·) ← RegretMatching(r(I, ·))
8: if h ∉ P(i) then
9:     for a ∈ A(I) do s(I, a) ← s(I, a) + σ(I, a)/q end for
10:    Sample action a ∼ σ(I, ·), return WalkTree(ha, i, q)
11: end if
12: for a ∈ A(I) do
13:    ρ ← max{ ε, (β + τ s(I, a)) / (β + Σ_{b∈A(I)} s(I, b)) }, ṽ(a) ← 0
14:    if Random(0, 1) < ρ then ṽ(a) ← WalkTree(ha, i, q · min{1, ρ}) end if
15: end for
16: for a ∈ A(I) do r(I, a) ← r(I, a) + ṽ(a) − Σ_{b∈A(I)} σ(I, b)ṽ(b) end for
17: return Σ_{a∈A(I)} σ(I, a)ṽ(a)

In Algorithm 1, the recursive function WalkTree considers four different cases. Firstly, if we have reached a terminal node, we return the utility scaled by 1/q (line 5), where q = q_i(z) is the probability of sampling z contributed from player i's actions. Secondly, when at a chance node, we sample a single action according to σ_c and recurse down that action (line 6). Thirdly, at an opponent's choice node (lines 8 to 11), we again sample a single action and recurse, this time according to the opponent's current strategy obtained via regret matching (equation (1)). At opponent nodes, we also update the cumulative profile (line 9) for reasons that we describe in a previous paper [2, Algorithm 1]. For games with more than two players, a second tree walk is required and we omit these details.

The final case in Algorithm 1 handles choice nodes for player i (lines 7 to 17). For each action a, we compute the probability ρ of sampling a and stochastically decide whether to sample a or not, where Random(0,1) returns a random real number in [0, 1). If we do sample a, then we recurse to obtain the sampled counterfactual value ṽ(a) = ṽ_i(I, σ^t_{(I→a)}) (line 14). Finally, we update the regrets at I (line 16) and return the sampled counterfactual value at I, Σ_{a∈A(I)} σ(I, a)ṽ(a) = ṽ_i(I, σ^t).

Repeatedly running WalkTree(∅, i, 1) ∀i ∈ N provides a probabilistic guarantee that all players' average regret will be minimized. In the supplementary material, we prove that AS exhibits the same regret bound as CS, ES, and OS provided in Theorem 6. Note that δ in Theorem 6 is guaranteed to be positive for AS by the inclusion of ε in equation (2). However, for CS and ES, δ = 1 since all of player i's actions are sampled, whereas δ ≤ 1 for OS and AS. While this suggests that fewer iterations of CS or ES are required to achieve the same regret bound compared to OS and AS, iterations for OS and AS are faster as they traverse less of the game tree. Just as CS, ES, and OS have been shown to benefit from this trade-off over vanilla CFR, we will show that in practice, AS can likewise benefit over CS and ES and that AS is a better choice than OS.

5 Experiments

In this section, we compare the convergence rates of AS to those of CS, ES, and OS. While AS can be applied to any extensive game, the aim of AS is to provide faster convergence rates in games involving many player actions. Thus, we consider two domains, no-limit poker and Bluff, where we can easily scale the number of actions available to the players.

No-limit poker. The two-player poker game we consider here, which we call 2-NL Hold'em(k), is inspired by no-limit Texas Hold'em. 2-NL Hold'em(k) is played over two betting rounds. Each player starts with a stack of k chips. To begin play, the player denoted as the dealer posts a small blind of one chip and the other player posts a big blind of two chips. Each player is then dealt two private cards from a standard 52-card deck and the first betting round begins. During each betting round, players can either fold (forfeit the game), call (match the previous bet), or raise by any number of chips in their remaining stack (increase the previous bet), as long as the raise is at least as big as the previous bet. After the first betting round, three public community cards are revealed (the flop) and a second and final betting round begins. If a player has no more chips left after a call or a raise, that player is said to be all-in.
At the end of the second betting round, if neither player folded, then the player with the highest ranked five-card poker hand wins all of the chips played. Note that the number of player actions in 2-NL Hold'em(k) at one information set is at most the starting stack size, k. Increasing k adds more betting options and allows for more actions before being all-in.

Bluff. Bluff(D_1, D_2) [7], also known as Liar's Dice, Perduo, and Dudo, is a two-player dice-bidding game played with six-sided dice over a number of rounds. Each player i starts with D_i dice. In each round, players roll their dice and look at the result without showing their opponent. Then, players alternate by bidding a quantity q of a face value f of all dice in play until one player claims that the other is bluffing (i.e., claims that the bid does not hold). To place a new bid, a player must increase q or f of the current bid. A face value of six is considered "wild" and counts as any other face value. The player calling bluff wins the round if the opponent's last bid is incorrect, and loses otherwise. The losing player removes one of their dice from the game and a new round begins. Once a player has no more dice left, that player loses the game and receives a utility of −1, while the winning player earns +1 utility. The maximum number of player actions at an information set is 6(D_1 + D_2) + 1 as increasing D_i allows both players to bid higher quantities q.

Preliminary tests. Before comparing AS to CS, ES, and OS, we first run some preliminary experiments to find a good set of parameter values for ε, τ, and β to use with AS. All of our preliminary experiments are in two-player 2-NL Hold'em(k). In poker, a common approach is to create an abstract game by merging similar card dealings together into a single chance action or "bucket" [4]. To keep the size of our games manageable, we employ a five-bucket abstraction that reduces the branching factor at each chance node down to five, where dealings are grouped according to expected hand strength squared as described by Zinkevich et al. [12]. Firstly, we fix τ = 1000 and test different values for ε and β in 2-NL Hold'em(30). Recall that τ = 1000 implies actions taken by the average strategy with probability at least 0.001 are always sampled by AS. Figure 1a shows the exploitability in the five-bucket abstract game, measured in milli-big-blinds per game (mbb/g), of the profile produced by AS after 10^12 nodes visited. Recall that lower exploitability implies a closer approximation to equilibrium. Each data point is averaged over five runs of AS. The ε = 0.05 and β = 10^5 or 10^6 profiles are the least exploitable profiles within statistical noise (not shown). Next, we fix ε = 0.05 and β = 10^6 and test different values for τ. Figure 1b shows the abstract game exploitability over the number of nodes visited by AS in 2-NL Hold'em(30), where again each data point is averaged over five runs. Here, the least exploitable strategies after 10^12 nodes visited are obtained with τ = 100 and τ = 1000 (again within statistical noise). Similar results to Figure 1b hold in 2-NL Hold'em(40) and are not shown. Throughout the remainder of our experiments, we use the fixed set of parameters ε = 0.05, β = 10^6, and τ = 1000 for AS.
Figure 1: (a) Abstract game exploitability of AS profiles for τ = 1000 after 10^12 nodes visited in 2-NL Hold'em(30). (b) Log-log plot of abstract game exploitability over the number of nodes visited by AS with ε = 0.05 and β = 10^6 in 2-NL Hold'em(30). For both figures, units are in milli-big-blinds per hand (mbb/g) and data points are averaged over five runs with different random seeds. Error bars in (b) indicate 95% confidence intervals.

Main results. We now compare AS to CS, ES, and OS in both 2-NL Hold'em(k) and Bluff(D_1, D_2). Similar to Lanctot et al. [9], our OS implementation is ε-greedy so that the current player i samples a single action at random with probability ε = 0.5, and otherwise samples a single action according to the current strategy σ_i. Firstly, we consider two-player 2-NL Hold'em(k) with starting stacks of k = 20, 22, 24, ..., 38, and 40 chips, for a total of eleven different 2-NL Hold'em(k) games. Again, we apply the same five-bucket card abstraction as before to keep the games reasonably sized. For each game, we ran each of CS, ES, OS, and AS five times, measured the abstract game exploitability at a number of checkpoints, and averaged the results. Figure 2a displays the results for 2-NL Hold'em(36), a game with approximately 68 million information sets and 5 billion histories (nodes). Here, AS achieved an improvement of 54% over ES at the final data points. In addition, Figure 2b shows the average exploitability in each of the eleven games after approximately 3.16 × 10^12 nodes visited by CS, ES, and AS. OS performed much worse and is not shown. Since one can lose more as the starting stacks are increased (i.e., Δ_i becomes larger), we "normalized" exploitability across each game by dividing the units on the y-axis by k. While there is little difference between the algorithms for the smaller 20 and 22 chip games, we see a significant benefit to using AS over CS and ES for the larger games that contain many player actions. For the most part, the margins between AS, CS, and ES increase with the game size. Figure 3 displays similar results for Bluff(1, 1) and Bluff(2, 1), which contain over 24 thousand and 3.5 million information sets, and 294 thousand and 66 million histories (nodes) respectively. Again, AS converged faster than CS, ES, and OS in both Bluff games tested. Note that the same choices of parameters (ε = 0.05, β = 10^6, τ = 1000) that worked well in 2-NL Hold'em(30) also worked well in other 2-NL Hold'em(k) games and in Bluff(D_1, D_2).

Figure 2: (a) Log-log plot of abstract game exploitability over the number of nodes visited by CS, ES, OS, and AS in 2-NL Hold'em(36). The initial uniform random profile is exploitable for 6793 mbb/g, as indicated by the black dashed line. (b) Abstract game exploitability after approximately 3.16 × 10^12 nodes visited over the game size for 2-NL Hold'em(k) with even-sized starting stacks k between 20 and 40 chips. For both graphs, units are in milli-big-blinds per hand (mbb/g) and data points are averaged over five runs with different random seeds. Error bars indicate 95% confidence intervals. For (b), units on the y-axis are normalized by dividing by the starting chip stacks.

Figure 3: Log-log plots of exploitability over number of nodes visited by CS, ES, OS, and AS in Bluff(1, 1) and Bluff(2, 1). The initial uniform random profile is exploitable for 0.780 and 0.784 in Bluff(1, 1) and Bluff(2, 1) respectively, as indicated by the black dashed lines. Data points are averaged over five runs with different random seeds and error bars indicate 95% confidence intervals.

6 Conclusion

This work has established a number of improvements for computing strategies in extensive-form games with CFR, both theoretically and empirically. We have provided new, tighter bounds on the average regret when using vanilla CFR or one of several different MCCFR sampling algorithms. These bounds were derived by showing that a player's regret is equal to a weighted sum of the player's cumulative counterfactual regrets (Theorem 4), where the weights are given by a best response to the opponents' previous sequence of strategies. We then used this bound as inspiration for our new MCCFR algorithm, AS. By sampling a subset of a player's actions, AS can provide faster convergence rates in games containing many player actions. AS converged faster than previous MCCFR algorithms in all of our test games. For future work, we would like to apply AS to games with many player actions and with more than two players. All of our theory still applies, except that player i's average strategy is no longer guaranteed to converge to σ_i*. Nonetheless, AS may still find strong strategies faster than CS and ES when it is too expensive to sample all of a player's actions.

Acknowledgments

We thank the members of the Computer Poker Research Group at the University of Alberta for helpful conversations pertaining to this work. This research was supported by NSERC, Alberta Innovates - Technology Futures, and computing resources provided by WestGrid and Compute Canada.

References

[1] Nick Abou Risk and Duane Szafron. Using counterfactual regret minimization to create competitive multiplayer poker agents. In Ninth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 159-166, 2010.
[2] Richard Gibson, Marc Lanctot, Neil Burch, Duane Szafron, and Michael Bowling. Generalized sampling and variance in counterfactual regret minimization. In Twenty-Sixth Conference on Artificial Intelligence (AAAI), pages 1355-1361, 2012.
[3] Richard Gibson and Duane Szafron. On strategy stitching in large extensive form multiplayer games. In Advances in Neural Information Processing Systems 24 (NIPS), pages 100-108, 2011.
[4] Andrew Gilpin and Tuomas Sandholm. A competitive Texas Hold'em poker player via automated abstraction and real-time equilibrium computation. In Twenty-First Conference on Artificial Intelligence (AAAI), pages 1007-1013, 2006.
[5] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68:1127-1150, 2000.
[6] Samid Hoda, Andrew Gilpin, Javier Peña, and Tuomas Sandholm. Smoothing techniques for computing Nash equilibria of sequential games. Mathematics of Operations Research, 35(2):494-512, 2010.
[7] Reiner Knizia. Dice Games Properly Explained. Blue Terrier Press, 2010.
[8] Daphne Koller, Nimrod Megiddo, and Bernhard von Stengel.
Fast algorithms for finding randomized strategies in game trees. In Annual ACM Symposium on Theory of Computing (STOC'94), pages 750-759, 1994.
[9] Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael Bowling. Monte Carlo sampling for regret minimization in extensive games. In Advances in Neural Information Processing Systems 22 (NIPS), pages 1078-1086, 2009.
[10] Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael Bowling. Monte Carlo sampling for regret minimization in extensive games. Technical Report TR09-15, University of Alberta, 2009.
[11] Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. Technical Report TR07-14, University of Alberta, 2007.
[12] Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. In Advances in Neural Information Processing Systems 20 (NIPS), pages 905-912, 2008.
Constant-Time Loading of Shallow 1-Dimensional Networks

Stephen Judd
Siemens Corporate Research, 755 College Rd. E., Princeton, NJ 08540
[email protected]

Abstract

The complexity of learning in shallow 1-dimensional neural networks has been shown elsewhere to be linear in the size of the network. However, when the network has a huge number of units (as cortex has) even linear time might be unacceptable. Furthermore, the algorithm that was given to achieve this time was based on a single serial processor and was biologically implausible. In this work we consider the more natural parallel model of processing and demonstrate an expected-time complexity that is constant (i.e. independent of the size of the network). This holds even when inter-node communication channels are short and local, thus adhering to more biological and VLSI constraints.

1 Introduction

Shallow neural networks are defined in [Jud90]; the definition effectively limits the depth of networks while allowing the width to grow arbitrarily, and it is used as a model of neurological tissue like cortex where neurons are arranged in arrays tens of millions of neurons wide but only tens of neurons deep. Figure 1 exemplifies a family of networks which are not only shallow but "1-dimensional" as well: we allow the network to be extended as far as one likes in width (i.e. to the right) by repeating the design segments shown. The question we address is how learning time scales with the width. In [Jud88], it was proved that the worst case time complexity
A random task of size t is a set of t pairs of independently drawn random strings; there is no guarantee it is a function. Our primary question has to do with the following problem, which is parameterized by some fixed depth d, and by a node function set (which is the collection of different transfer functions that a node can be tuned to perform): Shallow 1-D Loading: Instance: An integer n, and a task. Objective: Find a function (from the node function set) for each node in the network in the shallow I-D architecture defined by d and n such that the resulting circuit maps all the stimuli in the task to their associated responses. Constant-Time Loading of Shallow I-Dimensional Networks Figure 1: A Example Shallow 1-D Architecture 2.2 Model of Computation Our machine model for solving this question is the following: For an instance of shallow 1-D loading of width n, we allow n processors. Each one has access to a piece of the task, namely processor i has access to bits i through i + d of each stimulus, and to bit i of each response. Each processor i has a communication link only to its two neighbours, namely processors i - I and i + 1. (The first and nth processors have only one neighbour.) It takes one time step to communicate a fixed amount of data between neighbours. There is no charge for computation, but this is not an unreasonable cheat because we can show that a matrix multiply is sufficient for this problem, and the size of the matrix is a function only of d (which is fixed). This definition accepts the usual connectionist ideal of having the processor closely identified with the network nodes for which it is "finding the weights", and data available at the processor is restricted to the same "local" data that connectionist machines have. This sort of computation sets the stage for a complexity question, 2.3 Question and Approach We wish to demonstrate that Claim 1 This parallel machine solves shallow J-D loading where each processor is finished in constant expected time The constant is dependent on the depth of the architecture and on the size of the task, but not on the width. The expectation is over the tasks. 865 866 Judd For simplicity we shall focus on one particular processor-the one at the leftmost end-and we shall further restrict our at tention to finding a node function for one particular node. To operate in parallel, it is necessary and sufficient for each processor to make its local decisions in a "safe" manner-that is, it must make choices for its nodes in such a way as to facilitate a global solution. Constant-time loading precludes being able to see all the data; and if only local data is accessible to a processor, then its plight is essentially to find an assignment that is compatible with all nonlocal satisfying assignments. Theorem 2 The expected communication complexity of finding a "safe" node function assignment for a particular node in a shallow l-D architecture is a constant dependent on d and t, but not on n. If decisions about assignments to single nodes can be made easily and essentially without having to communicate with most of the network, then the induced partitioning of the problem admits of fast parallel computation. There are some complications to the details because all these decisions must be made in a coordinated fashion, but we omit these details here and claim they are secondary issues that do not affect the gross complexity measurements. The proof of the theorem comes in two pieces. 
First, we define a computational problem called path finding and the graph-theoretic notion of domination which is its fundamental core. Then we argue that the loading problem can be reduced to path finding in constant parallel time and give an upper bound for determining domination. 3 Path Finding The following problem is parameterized by an integer I<, which is fixed. Path finding : Instance: An integer n defining the number of parts in a partite graph, and a series of I<xI< adjacency matrices, M I , M 2 , ??. M n - I . Mj indicates connections between the K nodes of part i and the I< nodes of part i + 1. Objective: Find a path of n nodes, one from each part of the n-partite graph. Define Xh to be the binary matrix representing connectivity between the first part of the graph and the ith part: Xl = MI and Xh(j, k) = 1 iff 3m such that Xh(j, m) = 1 and Mh(m, k) = 1. We say "i includes j at h" if every bit in the ith row of Xh is 1 whenever the corresponding bit in the jth row of X h is 1. We say "i dominates at h" or "i is a dominator' if for all rows j, i includes j at h. Lemma 3 Before an algorithm can select a node i from the first part of the graph to be on the path, it is necessary and sufficient for i to have been proven to be a dominator at some h. 0 Constant-Time Loading of Shallow l-Dimensional Networks The minimum h required to prove domination stands as our measure of "communication complexity" . Lemma 4 Shallow J-D Loading can be reduced to path finding in constant parallel time. Proof: Each output node in a shallow architecture has a set of nodes leading into it called a support cone (or "receptive field"), and the collection of functions assigned to those nodes will determine whether or not the output bit is correct in each response. Nodes A,B,C,D,E,G in Figure 1 are the support cone for the first output node (node C), and D,E,F,G,H,J are the cone for the second. Construct each part of the graph as a set of points each corresponding to an assignment over the whole support cone that makes its output bit always correct. This can be done for each cone ih parallel, and since the depth (and the fan-in) is fixed, the set of all possible assignments for the support cone can be enumerated in constant time. Now insert edges between adjacent parts wherever two points correspond to assignments that are mutually compatible. (Note that since the support cones overlap one another, we need to ensure that assignments are consistent with each other.) This also can be done in constant parallel time. We call this construction a compatibility graph. A solution to the loading problem corresponds exactly to a path in the compatibility graph. 0 A dominator in this path-finding graph is exactly what was meant above by a "safe" assignment in the loading problem. 4 Proof of Theorem Since it is possible that there is no assignments to certain cones that correctly map the stimuli it is trivial to prove the theorem, but as a practical matter we are interested in the case where the architecture is actually capable of performing the task. We will prove the theorem using a somewhat more satisfying event. Proof of theorem 2: For each support cone there is 1 output bit per response and there are t such responses. Given the way they are generated, these responses could all be the same with probability .5 t - 1 . The probability of two adjacent cones both having to perform such a constant mapping is .5 2(t-l). 
4 Proof of Theorem

Since it is possible that there are no assignments to certain cones that correctly map the stimuli, it is trivial to prove the theorem, but as a practical matter we are interested in the case where the architecture is actually capable of performing the task. We will prove the theorem using a somewhat more satisfying event.

Proof of Theorem 2: For each support cone there is 1 output bit per response and there are t such responses. Given the way they are generated, these responses could all be the same with probability 0.5^{t-1}. The probability of two adjacent cones both having to perform such a constant mapping is 0.5^{2(t-1)}. Imagine the labelling in Figure 1 to be such that there were many support cones to the left (and right) of the piece shown. Any path through the left side of the compatibility graph that arrived at some point in the part for the cone to the left of C would imply an assignment for nodes A, B, and D. Any path through the right side of the compatibility graph that arrived at some point in the part for the cone to the right of F would imply an assignment for nodes G, H, and J. If cones C and F were both required to merely perform constant mappings, then any and all assignments to A, B, and D would be compatible with any and all assignments to G, H, and J (because nodes C and F could be assigned constant functions themselves, thereby making the others irrelevant). This ensures that any point on a path to the left will dominate at the part for the cone to the right.

Thus 2^{2(t-1)} (the inverse of the probability of this happening) is an upper bound on the domination distance, i.e. the communication complexity, i.e. the loading time. □

More accurately, the complexity is min(c(d, t), f(t), n), where c and f are some unknown functions. But the operative term here is usually c because d is unlikely to get so large as to bring f into play (and of course n is unbounded). The analysis in the proof is sufficient, but it is a far cry from complete. The actual Markovian process in the sequence of X's is much richer; there are so many events in the compatibility graph that cause domination to occur that it takes a lot of careful effort to construct a task that will avoid it!

5 Measuring the Constants

Unfortunately, the very complications that give rise to the pleasant robustness of the domination event also make it fiendishly difficult to analyze quantitatively. So to get estimates for the actual constants involved we ran Monte Carlo experiments. We ran experiments for 4 different cases. The first experiment was to measure the distance one would have to explore before finding a dominating assignment for the node labeled A in Figure 1. The node function set used was the set of linearly separable functions. In all experiments, if domination occurred for the degenerate reason that there were no solutions (paths) at all, then that datum was thrown out and the run was restarted with a different seed.

Figure 2 reports the constants for the four cases. There is one curve for each experiment. The abscissa represents t, the size of the task. The ordinate is the number of support cones that must be consulted before domination can be expected to occur. All points given are the average of at least 500 trials. Since t is an integer the data should not have been interpolated between points, but they are easier to see as connected lines. The solid line (labeled LSA) is for the case just described. It has a bell shape, reflecting three facts:

- when the task is very small, almost every choice of node function for one node is compatible with choices for the neighbouring nodes.
- when the task is very large, there are so many constraints on what a node must compute that it is easy to resolve what that should be without going far afield.
- when the task is intermediate-sized, the problem is harder.

Note the very low distances involved: even the peak of the curve is well below 2, so nowhere would you expect to have to pass data more than 2 support cones away. Although this worst-expected-case would surely be larger for deeper nets, current work is attempting to see how badly this would scale with depth (larger d).
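The Monte Carlo experiments of this section are straightforward to mimic in spirit. The sketch below is a crude stand-in written by us: it draws random adjacency matrices instead of building real compatibility graphs from tasks, so the edge probability p is only a loose proxy for the task-induced structure; it reuses domination_distance from the previous sketch.

```python
import numpy as np

def mc_domination_distance(K, n, p, trials=500, rng=None):
    """Rough Monte Carlo estimate of the expected domination distance for
    row 0, using random K x K adjacency matrices with edge probability p.
    Runs with no solutions at all were discarded and restarted in the real
    experiments; here we simply skip trials where no h ever dominates."""
    rng = rng or np.random.default_rng(0)
    dists = []
    for _ in range(trials):
        Ms = [(rng.random((K, K)) < p).astype(int) for _ in range(n - 1)]
        h = domination_distance(Ms, 0)
        if h is not None:
            dists.append(h)
    return sum(dists) / max(len(dists), 1)
```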
The curve labeled LUA is for the case where all Boolean functions are used as the node function set. Note that it is significantly higher in the region 6 < t < 12. The implication is that although the node function set being used here is a superset of the linearly separable functions, it takes more computation at loading time to be able to exploit that extra power.

Figure 2: Measured Domination Distances. [Plot of the four curves LSA, LSB, LUA, LUB against task size t.]

The curve labeled LSB shows the expected distance one has to explore before finding a dominating assignment for the node labelled B in Figure 1. The node function set used was the set of linearly separable functions. Note that it is everywhere higher than the LSA curve, indicating that the difficulty of settling on a correct node function for a second-layer node is somewhat higher than finding one for a first-layer node. Finally, there is a curve for node B when all Boolean functions are used (LUB). It is generally higher than when just linearly separable functions are used, but not so markedly so as in the case of node A.

6 Conclusions

The model of computation used here is much more biologically relevant than the ones previously used for complexity results, but the algorithm used here runs in an off-line "batch mode" (i.e. it has all the data before it starts processing). This has an unbiological nature, but no more so than the customary connectionist habit of repeating the data many times.

A weakness of our analysis is that (as formulated here) it is only for discrete node functions, exact answers, and noise-free data. Extensions for any of these additional difficulties may be possible, and the bell shape of the curves should survive. The peculiarities of the regular 3-layer network examined here may appear restrictive, but it was taken as an example only; what is really implied by the term "1-D" is only that the bandwidth of the SCI graph for the architecture be bounded (see [Jud90] for definitions). This constraint allows several degrees of freedom in choosing the architecture, but domination is such a robust combinatoric event that the essential observation about bell-shaped curves made in this paper will persist even in the face of large changes from these examples. We suggest that whatever architectures and node function sets a designer cares to use, the notion of domination distance will help reveal important computational characteristics of the design.

Acknowledgements

Thanks go to Siemens and CalTech for wads of computer time.

References

[Jud88] J. S. Judd. On the complexity of loading shallow neural networks. Journal of Complexity, September 1988. Special issue on Neural Computation, in press.

[Jud90] J. Stephen Judd. Neural Network Design and the Complexity of Learning. MIT Press, Cambridge, Massachusetts, 1990.
Memorability of Image Regions

Aditya Khosla, Jianxiong Xiao, Antonio Torralba, Aude Oliva
Massachusetts Institute of Technology
{khosla,xiao,torralba,oliva}@csail.mit.edu

Abstract

While long term human visual memory can store a remarkable amount of visual information, it tends to degrade over time. Recent works have shown that image memorability is an intrinsic property of an image that can be reliably estimated using state-of-the-art image features and machine learning algorithms. However, the class of features and image information that is forgotten has not been explored yet. In this work, we propose a probabilistic framework that models how and which local regions from an image may be forgotten using a data-driven approach that combines local and global image features. The model automatically discovers memorability maps of individual images without any human annotation. We incorporate multiple image region attributes in our algorithm, leading to improved memorability prediction of images as compared to previous works.

1 Introduction

Human long-term memory can store a remarkable amount of visual information and remember thousands of different pictures even after seeing each of them only once [25, 1]. However, it appears to be the fate of visual memories that they degrade [13, 30]. While most of the work in visual cognition has examined how people forget general classes of visual or verbal stimuli [30], little work has looked at which image information is forgotten and which is retained. Does all visual information fade alike? Are there some features, image regions or objects that are forgotten more easily than others?

Inspired by work in visual cognition showing that humans selectively forget some objects and regions from an image while retaining others [22], we propose a novel probabilistic framework for modeling image memorability, based on the fading of local image information. Recent work on image memorability [6, 7, 12] has shown that there are large differences between the memorabilities of different images, and these differences are consistent across context and observers, suggesting that memory differences are intrinsic to the images themselves. Using machine learning tools such as support vector regression and a fully annotated dataset of images with human memorability scores, Isola et al. [7] show that an automatic image ranking algorithm matches individual image memory scores quite well: with dynamic scenes with people interacting as most memorable, static indoor environments and human-scale objects as somewhat less memorable, and outdoor vistas as forgettable. In addition, using manual annotation, Isola et al. quantified the contribution of segmented regions to the image memorability score, creating a memorability map for each individual image that identifies objects that are correlated with high or low memorability scores. However, this previous work did not attempt to discover in an automatic fashion which part of the image is memorable and which regions are forgettable. In this paper, we introduce a novel framework for predicting image memorability that is able to account for how memorability of image regions and different types of features fade over time, offering memorability maps that are more interpretable than [7].
The current work offers three original contributions: (1) a probabilistic model that simulates the forgetting of local image regions, (2) the automatic discovery of memorability maps of individual images that reveal which regions are memorable/forgettable, and (3) an improved overall image memorability prediction from [7], using an automatic, data-driven approach combining local and global image features.

Figure 1: Overview of our probabilistic framework. This figure illustrates a possible external or "observed" representation of an image. The conversion to an internal representation in memory can be thought of as a noisy process where some elements of the image are changed probabilistically as described by α and β (Sec. 3.1). The image on the right illustrates a possible internal representation: the green and blue regions remain unchanged, while the red region is forgotten and the pink region is hallucinated. Note that the internal representation cannot be observed and is only shown here for illustrating the framework.

2 Related work

Large scale visual memory experiments [26, 25, 1, 13, 14, 28] have shown that humans can remember specific images they have seen among thousands of images, hours to days later, even after being exposed to each picture only once. In addition, humans seem to have a massive capacity in long term memory to store specific details about these images, like remembering whether the glass of orange juice they saw thousands of images earlier was full or half full [1] or which specific door picture they saw after being exposed to hundreds of pictures of doors [28]. However, not all images are equally memorable as shown by the Memory Game experiment described in [7, 12], and importantly, not all kinds of local information are equally retained from an image: on average, observers will more likely remember visual details attached to objects that have a specific semantic label or a distinctive interpretation (for example observers will remember different types of cars by tagging each car with a different brand name, but would more likely confuse different types of apples, which only differ by their color [14]). This suggests that different features, objects and regions in an image may themselves have different memorability status: indeed, works by Isola et al. [7, 6] have shown that different individual features, objects, local regions and attributes are correlated with images that are highly memorable or forgettable. For instance, indoor spaces, pictures containing people (particularly if their face is visible), close up views on objects, and animals are more memorable than buildings, pictures of natural landscapes, and natural surfaces in general (like mountains, grass, fields). However, to date, there is no work which has attempted to predict which local information from an image is memorable or forgettable in an automatic manner.

3 Modeling memorability using image regions

We propose to predict memorability using a noisy memory process of encoding images in our memory, illustrated in Fig. 1. In our setting, an image consists of different types of image regions and features. After a delay between the first and second presentation of an image, people are likely to remember some image regions and objects more than others.
For example, as shown in [7], people and close up views on objects tend to be more memorable than natural objects and regions of landscapes, suggesting for instance that an image region containing a person is less likely to be forgotten than an image region containing a tree. It is well established that stored visual information decays over time [30, 31, 14], which can be represented in a model by a novel image vector with missing global and local information. We postulate that the farther the stored representation of the image is from its veridical representation, the less likely it is to be remembered.

Here, we propose to model this noisy memorability process in a probabilistic framework. We assume that the representation of an image is composed of image regions where different regions of an image correspond to different sets of objects. These regions have different probabilities of being forgotten and some regions have a probability of being imagined or hallucinated. We postulate that the likelihood of an image to be remembered depends on the distance between the initial image representation and its internal degraded version. An image with a larger distance to the internal representation is more likely to be forgotten, thereby the image should have a lower memorability score. In our algorithm, we model this probabilistic process and show its effectiveness at predicting image memorability and at producing interpretable memorability maps.

3.1 Formulation

Given some image I_j, we define its representations v_j and ṽ_j as the external and internal representation of the image respectively. The external representation refers to the original image which is observed, while the internal representation refers to the noisy representation of the same image that is stored in the observer's memory. Assume that there are N types of regions or objects an image can contain. We define v_j ∈ {0, 1}^N as a binary vector of size N containing a 1 at index n when the corresponding region is present in image I_j and 0 otherwise. Similarly, the internal representation consists of the same set of region types, but has different presence and absence values as memory is noisy. In this setting, one of two things can happen when the external representation of an image is observed: (1) An image region that was shown is forgotten, i.e. ṽ_j(i) = 0 when v_j(i) = 1, where v_j(i) refers to the ith element of v_j, or (2) An image region is hallucinated, i.e. an image region that did not exist in the image is believed to be present. We expect this to happen with different probabilities for different types of image regions. Therefore, we define two probability vectors α, β ∈ [0, 1]^N, where α_i corresponds to the probability of region type i being forgotten while β_i corresponds to the probability of hallucinating a region of type i.

Using this representation, we define the distance between the internal and external representation as D_j = D(v_j, ṽ_j) = ‖v_j − ṽ_j‖_1. D_j is inversely proportional to the memorability score of an image s_j; the higher the distance of an image in the brain from its true representation, the less likely it is to be remembered, i.e. when D increases, s decreases. Thus, we can compute the expected distance E(D_j | v_j) of an image as:

    E(D_j | v_j) = Σ_{i=1}^N α_i^{v_j(i)} β_i^{1−v_j(i)} = v_j^T α + (¬v_j)^T β    (1)

where ¬v_j denotes the elementwise complement 1 − v_j. This represents the expected number of modifications in v from 1 to 0 (α) or from 0 to 1 (β). Thus, over all images, we can define the expected distance E(D|v) as:

    E(D|v) = [ v_1^T ¬v_1^T ; ... ; v_M^T ¬v_M^T ] [ α ; β ]  ∝_rank  −s    (2)
where α_i, β_i ∈ [0, 1], ∝_rank represents that the proportionality is only related to the relative ranking of the image memorability scores s, and M is the total number of images. We do not explicitly predict a memorability score, rather the ranking of scores between images.

The above equation represents a typical ordinal rank regression setting with additional constraints on the learning parameters α and β. Since we are only interested in the rank, we can rescale the learned parameters to lie between [0, 1], allowing us to use standard solvers such as SVM-Rank [9]. We note that β cannot be uniquely determined when considering ranking of images alone, and thus we focus our attention on α for the rest of this paper.

Implementation details: To generate the region types automatically, we randomly sample rectangular regions of arbitrary width and height from the training images. The regions can be overlapping with each other. For each region, we compute a particular feature (described in Sec. 4.2), ensuring the same dimension for all regions of different shapes and sizes (using Bag-of-Words like representations). Then we perform k-means clustering to learn the dictionary of region types as cluster centroids. The region type is determined by the closest cluster centroid. This method allows us to bypass the need for human annotation as done in [7]. The details of the dictionary size and feature types used are provided in Sec. 4. As we sample overlapping regions, we only encode the presence of a region type by 1 or 0. There may be more than one sampled region that corresponds to a particular region type. We evaluate our algorithm on test images by applying a similar method as that on the train images. In this case, we assume the dictionary of region types is given, and we simply assign the randomly sampled image regions to region types, and use the learned parameters (α, β) to compute a score.

3.2 Multiple feature integration

Figure 2: Illustration of multiple feature integration. Refer to Sec. 3.2 for details. [Per-feature memorability maps (gradient, color, texture) weighted by their α are pooled into an overall memorability map.]

We incorporate multiple attributes of each region type such as color, texture and gradient in the form of image features into our algorithm. Our method is illustrated in Fig. 2. For each attribute, we learn a separate dictionary of region types. An image region is encoded using each feature dictionary independently, and the α, β parameters are learned jointly in our learning algorithm. Subsequently, we use each set of α and β for individual features to construct memorability maps that are later combined using weighted pooling^1 to produce an overall memorability map as shown in Fig. 2. We demonstrate experimentally (Sec. 4) that multiple feature integration helps to improve both the memorability score prediction and produce visually more consistent memorability maps.

4 Experiments

In this section, we describe the experimental setup and dataset used (Sec. 4.1), provide details about the region attributes used in our experiments (Sec. 4.2) and describe the experimental results on the image memorability dataset (Sec. 4.3). Experimental results show that our method outperforms state-of-the-art methods on this dataset while providing automatic memorability maps of images that compare favorably to when ground truth segmentation is used.
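Before turning to the experiments, the pipeline of Secs. 3.1-3.2 can be summarized in a short sketch. This is our own illustration, not the authors' code: helper names are hypothetical, scikit-learn's k-means stands in for the clustering step, and the SVM-Rank solver is assumed to exist elsewhere.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def presence_vectors(region_features, N=1024, kmeans=None):
    """region_features: list (one entry per image) of arrays, each row a
    descriptor of one sampled rectangular region.  Learns (or reuses) a
    k-means dictionary of N region types and returns the binary matrix V
    (M x N) with V[j, i] = 1 iff image j contains a region of type i."""
    if kmeans is None:
        kmeans = MiniBatchKMeans(n_clusters=N, random_state=0)
        kmeans.fit(np.vstack(region_features))
    V = np.zeros((len(region_features), N))
    for j, F in enumerate(region_features):
        V[j, kmeans.predict(F)] = 1          # presence only, not counts
    return V, kmeans

def rank_design_matrix(V):
    """Feature map of Eq. (2): each image contributes [v_j, 1 - v_j].
    Fitting alpha, beta in [0,1] against the score ranking is then a
    standard ordinal rank regression (e.g., SVM-Rank)."""
    return np.hstack([V, 1.0 - V])
```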
4.1 Setup

Dataset: We use the dataset proposed by Isola et al. [7] consisting of 2222 images from the SUN dataset [32]. The images are fully annotated with segmented object regions and randomly sampled from different scene categories. The images are cropped and resized to 256 x 256 and a memorability score corresponding to each image is provided. The memorability score is defined as the percentage of correct detections by participants in their study.

Performance evaluation: The performance is evaluated using Spearman's rank correlation (ρ). We evaluate our performance on 25 different training/testing splits of the data (same splits as [7]) with an equal number of images for training and testing (1111). The train splits are scored by one half of the participants and the test splits are scored by the other half of the participants, with a human consistency of ρ = 0.75. This can be thought of as an upper bound on the performance of automatic methods.

Algorithmic details: We sample 2000 patches per image with size 0.2 x 0.2 to 0.7 x 0.7 with random aspect ratios in normalized image coordinates. To speed up convergence of SVM-Rank, we do not include rank constraints for memorability scores that lie within 0.001 of each other. We find that this does not affect the performance significantly. The hyperparameter of the SVM-Rank algorithm is set using 5-fold cross-validation.

^1 We weight the importance of individual features by summing the α corresponding to the particular feature.

4.2 Image region attributes

Our goal is to choose various features as attributes that humans likely use to represent image regions. In this work, we consider six common attributes, namely gradient, color, texture, shape, saliency and semantic meaning of the images. The attributes are extracted for each region and assigned to a region type as described in Sec. 3.2 with a dictionary size of 1024 for each feature. For each of the attributes, we describe our motivation and the method used for extraction.

Gradient: In the human vision system, much evidence suggests that retinal ganglion cells and receptive fields of cells in the visual cortex V1 are essentially gradient-based features. Furthermore, the recent success of many computer vision algorithms [2, 4] also demonstrates the power of such features. In this work, we use the powerful Histogram of Oriented Gradients (HOG) features for our task. We densely sample HOG [2] with a cell size of 2 x 2 at a grid spacing of 4 and learn a dictionary of size 256. The descriptors for a given image region are max-pooled at 2 spatial pyramid levels [15] using Locality-Constrained Linear Coding (LLC) [29].

Color: Color is an important part of human vision. Color usually has large variations caused by changes in illumination, shadows, etc., and these variations make the task of robust color description difficult. Isola et al. [7] show that simple image color features, such as mean hue, saturation and intensity, exhibit only very weak correlation with memorability. In contrast to this, color has been shown to yield excellent results in combination with shape features for image classification [11]. Furthermore, many studies show that color names are actually linguistic labels that humans assign to color spectrum space. In this paper, we use the color names feature [27] to better exploit the color information. We densely sample the feature at multiple scales (12, 16, 24 and 32) with a grid spacing of 4. Then we learn a dictionary of size 100 and apply LLC at a 2-level spatial pyramid to obtain the color descriptor for each region.
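The 2-level spatial pyramid max-pooling used for the gradient and color descriptors can be sketched as follows. This is a simplified stand-in written by us (the paper's actual pooling operates on LLC codes); patch coordinates are assumed normalized to [0, 1].

```python
import numpy as np

def spatial_pyramid_max_pool(codes, xy, levels=(1, 2)):
    """codes: (n_patches, dict_size) nonnegative code matrix for one
    region/image; xy: (n_patches, 2) patch centers in [0,1]^2.  Max-pools
    the codes over the cells of each pyramid level and concatenates
    (levels (1, 2) give 1 + 4 = 5 cells, i.e. 2 pyramid levels)."""
    pooled = []
    for L in levels:
        cell = np.minimum((xy * L).astype(int), L - 1)   # cell index per patch
        for cx in range(L):
            for cy in range(L):
                mask = (cell[:, 0] == cx) & (cell[:, 1] == cy)
                if mask.any():
                    pooled.append(codes[mask].max(axis=0))
                else:
                    pooled.append(np.zeros(codes.shape[1]))
    return np.concatenate(pooled)
```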
Texture: We interact with a variety of materials on a daily basis and we constantly assess their texture properties by visual means and tactile touch. To encode visual texture perception information, we make use of the popular texture features, Local Binary Patterns (LBP) [21]. We use a 2-level spatial pyramid of non-uniform LBP descriptors.

Saliency: Image saliency is a biologically inspired model to capture the regions that attract more visual attention and fixation focus [8]. Inspired by this, we extract a saliency value for each pixel using natural statistics [10]. Then we perform average pooling at a 3-level spatial pyramid to obtain the descriptor for each region.

Shape: Humans constantly use geometric patterns to determine the similarity between visual entities, and the layout of shapes is directly relevant to mid level representations of the image. We denote shape as a histogram of local Self-Similarity geometric patterns (SSIM [23]). We densely sample the SSIM descriptor with a grid spacing of 4 and learn a dictionary of size 256. The descriptors for a given image region are max-pooled at 2 spatial pyramid levels using LLC.

Semantic: High-level semantic meaning contained in images has been shown to be strongly correlated to image memorability [7], where manual annotation of object labels leads to great performance in predicting image memorability. Here, our goal is to design a fully automatic approach to predict image memorability, while still exploiting the semantic information. Thus, we use the automatic Object Bank [17] feature to model the presence/absence of various objects in the images. We reduce the feature dimension by using simple max pooling instead of spatial pyramid pooling.

Table 1: Images are sorted into sets according to predictions made on the basis of a variety of features (denoted by column headings). Average measured memorabilities are reported for each set. e.g. The Top 20 row reports average measured memorability of the 20 highest predicted images. ρ is the Spearman rank correlation between predictions and measurements.

              Multiple global features [7]   Our Global   Our Local   Our Full Model
  Top 20                 83%                    84%          83%           85%
  Top 100                80%                    80%          80%           81%
  Bottom 100             57%                    56%          57%           55%
  Bottom 20              55%                    53%          54%           52%
  ρ                      0.46                   0.48         0.45          0.50

Figure 3: Visualization of region types and corresponding α learned by our algorithm for gradient (HOG) and semantic (ObjectBank) features. The histograms represent the distribution of memorability scores corresponding to the particular region type. We observe that high-scoring images tend to have a small value of α while low scoring regions have a high value. This corresponds well with the proposed framework. The color of the bounding boxes corresponds to the memorability score of the image shown (using a jet color scheme).

4.3 Results

In this section, we evaluate the performance of our model with single and multiple features, and later explore what the model has learned using memorability maps and the ranking of different types of image regions.

Single + multiple features: Fig. 6(a) and Tbl. 1 summarize the performance of our algorithm when using single and multiple features. We compare our results with [7], and find that our algorithm outperforms the automatic methods from [7] by 4%, and achieves comparable performance to when ground truth annotation is used.
This shows the effectiveness of our method at predicting memorability. Further, we note that our model provides complementary information to global features as it focuses on local image regions, increasing performance by 2% when combined with our global features. We use the same set of attributes described in Sec. 4.2 as global features in our model. The global features are learned independently using SVM-Rank and the predicted score is combined with the predicted scores of our local model in the SVM-Rank algorithm. Despite using the same set of features, we are able to obtain a performance gain, suggesting that our algorithm is effective at capturing local information in the image that was overlooked by the global features.

Memorability maps: We obtain memorability maps using max-pooling of the α from different image regions. Fig. 4 shows the memorability maps obtained when using different features and the overall memorability map when combining multiple features. Despite using no annotation, the learned maps are similar to those obtained using ground truth objects and segments. From the images shown, we observe that there is no single attribute that is always effective at producing memorability maps, but the combination of the attributes leads to a significantly improved version. We show additional results in Fig. 5.

Figure 4: Visualization of the memorability maps obtained using different features, and the overall memorability map. Additionally, we also include the memorability map obtained when using ground truth segmentation on the right. We observe that it resembles our automatically generated maps.

Figure 5: Additional examples of memorability maps generated by our algorithm.

Image region types: In Fig. 3, we rank the image region types by their α value and visualize the regions for the corresponding region type when α is close to 0 or 1. We observe that the region types are consistent with our intuition of what is memorable from [7]. People often exist in image regions with low α (i.e. low probability of being forgotten) while natural scenes and plain backgrounds are observed in high α. Further, we analyze the image region types by computing the standard deviation of the memorability scores of the image regions that correspond to the particular type. Fig. 6(b) and 6(c) show the results. The results are encouraging as regions that have high standard deviation tend to have a value of α close to 0.5, which means they are not very informative for prediction. The same behavior is observed for multiple feature types, and we find that the overall performance for individual features (shown in Fig. 6(a)) corresponds well with the distance of the peaks in Fig. 6(b) from α = 0.5. This suggests that our algorithm is effective at learning the regions with high and low probability of being forgotten as proposed in our framework.
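The memorability maps above come from a simple pooling of α over the sampled regions. A minimal sketch of that construction (ours; box coordinates and region-type assignments are assumed to come from the earlier pipeline):

```python
import numpy as np

def memorability_map(image_shape, boxes, types, alpha):
    """boxes: list of (x0, y0, x1, y1) pixel rectangles for the sampled
    regions of one image; types[r] is the region type of box r; alpha[i]
    is the learned forgetting probability of type i.  Max-pools alpha
    over all boxes covering each pixel (low alpha = more memorable)."""
    h, w = image_shape
    m = np.zeros((h, w))
    for (x0, y0, x1, y1), t in zip(boxes, types):
        np.maximum(m[y0:y1, x0:x1], alpha[t], out=m[y0:y1, x0:x1])
    return m
```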
?=0" Standard Deviation" 85 Standard Deviation" Average memorability for top N ranked images (%) Other Human [0.75] Objects and Scenes [0.50] Our Final Model [0.50] Global Only [0.48] Isola et al. [0.46] Local Only [0.45] Gradient [0.40] Shape [0.38] Semantic [0.37] Texture [0.34] Color [0.29] Saliency [0.28] ?=0.5" ?=1" (b) Standard deviation of memorability score of all region types averaged across the 25 splits for all features, sorted by ?. Graphs are smoothed using a median filter. ?=0" ?=0.5" ?=1" (c) Standard deviation of region types for Gradient feature averaged across the 25 splits. No smoothing is applied in this case. Figure 6: Plot of various results and analysis of our method. Fig. 6(b) and Fig. 6(c) are explained in greater detail in Sec. 4.3 5 Conclusion With the emergence of large scale photo collections and growing demands in storing, organizing, interpreting, and summarizing large amount of digital information, it becomes essential to be able to automatically annotate images on various novel dimensions that are interpretable to human users. Recently, learning algorithms have been proposed to automatically interpret whether an image is aesthetically pleasant or not [20, 3], memorable or forgettable [7, 6], and the role that other high level photographic properties plays in image interpretation (photo quality [19], attractiveness [16], composition [5, 18], and object importance [24]). Here, we propose a novel probabilistic framework for automatically constructing memorability maps, discovering regions in the image that are more likely to be memorable or forgettable by human observers. We demonstrate an effective yet interpretable framework to model the process of forgetting. Future development of such automatic algorithms of image memorability could have many exciting and far-reaching applications in computer science, graphics, media, designs, gaming and entertainment industries in general. Acknowledgements We thank Phillip Isola and the reviewers for helpful discussions. This work is funded by NSF grant (1016862) to A.O, Google research awards to A.O and A.T, ONR MURI N000141010933 and NSF Career Award (0747120) to A.T. J.X. is supported by Google U.S./Canada Ph.D. Fellowship in Computer Vision. References [1] T. F. Brady, T. Konkle, G. A. Alvarez, and A. Oliva. Visual long-term memory has a massive storage capacity for object details. PNAS, pages 14325?14329, 2008. [2] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, volume 1, pages 886?893. IEEE, 2005. [3] S. Dhar, V. Ordonez, and T.L. Berg. High level describable attributes for predicting aesthetics and interestingness. In CVPR, pages 1657?1664. IEEE, 2011. [4] P.F. Felzenszwalb, R.B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 2010. [5] B. Gooch, E. Reinhard, C. Moulding, and P. Shirley. Artistic composition for image creation. In Rendering Techniques 2001: Proceedings of the Eurographics Workshop in London, United Kingdom, June 25-27, 2001, page 83. Springer Verlag Wien, 2001. [6] P. Isola, D. Parikh, A. Torralba, and A. Oliva. Understanding the intrinsic memorability of images. In Advances in Neural Information Processing Systems (NIPS), 2011. 8 [7] P. Isola, J. Xiao, A. Torralba, and A. Oliva. What makes an image memorable? In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 145?152, 2011. [8] L. Itti and C. Koch. 
[8] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40:1489-1506, 2000.

[9] T. Joachims. Training linear SVMs in linear time. In ACM SIGKDD, pages 217-226, 2006.

[10] C. Kanan, M. H. Tong, L. Zhang, and G. W. Cottrell. Sun: Top-down saliency using natural statistics. Visual Cognition, 17(6-7):979-1003, 2009.

[11] F. S. Khan, J. van de Weijer, A. D. Bagdanov, and M. Vanrell. Portmanteau vocabularies for multi-cue image representation. In NIPS, Granada, Spain, 2011.

[12] A. Khosla*, J. Xiao*, P. Isola, A. Torralba, and A. Oliva. Image memorability and visual inception. In SIGGRAPH Asia, 2012. * indicates equal contribution.

[13] T. Konkle, T. F. Brady, G. A. Alvarez, and A. Oliva. Conceptual distinctiveness supports detailed visual long-term memory for real-world objects. Journal of Experimental Psychology, (139):558-578, 3 2010.

[14] T. Konkle, T. F. Brady, G. A. Alvarez, and A. Oliva. Scene memory is more detailed than you think: the role of categories in visual long-term memory. Psychological Science, (21):1551-1556, 11 2010.

[15] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, volume 2, pages 2169-2178. IEEE, 2006.

[16] T. Leyvand, D. Cohen-Or, G. Dror, and D. Lischinski. Data-driven enhancement of facial attractiveness. In ACM Transactions on Graphics (TOG), volume 27, page 38. ACM, 2008.

[17] L.-J. Li, H. Su, E. P. Xing, and L. Fei-Fei. Object bank: A high-level image representation for scene classification & semantic feature sparsification. In NIPS, Vancouver, Canada, December 2010.

[18] L. Liu, R. Chen, L. Wolf, and D. Cohen-Or. Optimizing photo composition. In Computer Graphics Forum, volume 29, pages 469-478. Wiley Online Library, 2010.

[19] Y. Luo and X. Tang. Photo and video quality evaluation: Focusing on the subject. In Proceedings of the 10th European Conference on Computer Vision: Part III, pages 386-399. Springer-Verlag, 2008.

[20] L. Marchesotti, F. Perronnin, D. Larlus, and G. Csurka. Assessing the aesthetic quality of photographs using generic image descriptors. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 1784-1791. IEEE, 2011.

[21] T. Ojala, M. Pietikainen, and T. Maenpaa. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. Pattern Analysis and Machine Intelligence, 24(7):971-987, 2002.

[22] R. A. Rensink, J. K. O'Regan, and J. J. Clark. To See or not to See: The Need for Attention to Perceive Changes in Scenes. Psychological Science, 8(5):368-373, September 1997.

[23] E. Shechtman and M. Irani. Matching local self-similarities across images and videos. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1-8. IEEE, 2007.

[24] M. Spain and P. Perona. Some objects are more equal than others: Measuring and predicting importance. Computer Vision - ECCV 2008, pages 523-536, 2008.

[25] L. Standing. Learning 10000 pictures. The Quarterly Journal of Experimental Psychology, 25(2):207-222, 1973.

[26] L. Standing, J. Conezio, and R. N. Haber. Perception and memory for pictures: Single-trial learning of 2500 visual stimuli. Psychonomic Science, 1970.

[27] J. Van De Weijer, C. Schmid, and J. Verbeek. Learning color names from real-world images. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1-8. IEEE, 2007.
[28] S. Vogt and S. Magnussen. Long-term memory for 400 pictures on a common theme. Experimental Psychology (formerly Zeitschrift für Experimentelle Psychologie), 54(4):298-303, 2007.

[29] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In CVPR, pages 3360-3367. IEEE, 2010.

[30] J. T. Wixted. The Psychology and Neuroscience of Forgetting. Annual Review of Psychology, 55(1), 2004.

[31] J. T. Wixted and S. K. Carpenter. The Wickelgren Power Law and the Ebbinghaus Savings Function. Psychological Science, 18(2):133-134, February 2007.

[32] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, pages 3485-3492. IEEE, 2010.
Online ℓ1-Dictionary Learning with Application to Novel Document Detection

Huahua Wang†
University of Minnesota
[email protected]

Shiva Prasad Kasiviswanathan*
General Electric Global Research
[email protected]

Arindam Banerjee†
University of Minnesota
[email protected]

Prem Melville
IBM T.J. Watson Research Center
[email protected]

Abstract

Given their pervasive use, social media, such as Twitter, have become a leading source of breaking news. A key task in the automated identification of such news is the detection of novel documents from a voluminous stream of text documents in a scalable manner. Motivated by this challenge, we introduce the problem of online ℓ1-dictionary learning where, unlike traditional dictionary learning, which uses squared loss, the ℓ1-penalty is used for measuring the reconstruction error. We present an efficient online algorithm for this problem based on alternating directions method of multipliers, and establish a sublinear regret bound for this algorithm. Empirical results on news-stream and Twitter data show that this online ℓ1-dictionary learning algorithm for novel document detection gives more than an order of magnitude speedup over the previously known batch algorithm, without any significant loss in quality of results.

1 Introduction

The high volume and velocity of social media, such as blogs and Twitter, have propelled them to the forefront as sources of breaking news. On Twitter, it is possible to find the latest updates on diverse topics, from natural disasters to celebrity deaths; and identifying such emerging topics has many practical applications, such as in marketing, disease control, and national security [14]. The key challenge in automatic detection of breaking news is being able to detect novel documents in a stream of text; where a document is considered novel if it is "unlike" documents seen in the past. Recently, this has been made possible by dictionary learning, which has emerged as a powerful data representation framework. In dictionary learning each data point y is represented as a sparse linear combination Ax of dictionary atoms, where A is the dictionary and x is a sparse vector [1, 12]. A dictionary learning approach can be easily converted into a novel document detection method: let A be a dictionary representing all documents till time t - 1; for a new document y arriving at time t, if one does not find a sparse combination x of the dictionary atoms, and the best reconstruction Ax yields a large loss, then y clearly is not well represented by the dictionary A, and is hence novel compared to documents in the past. At the end of timestep t, the dictionary is updated to represent all the documents till time t.

Kasiviswanathan et al. [10] presented such a (batch) dictionary learning approach for detecting novel documents/topics. They used an ℓ1-penalty on the reconstruction error (instead of squared loss commonly used in the dictionary learning literature) as the ℓ1-penalty has been found to be more effective for text analysis (see Section 3). They also showed this approach outperforms other techniques, such as a nearest-neighbor approach popular in the related area of First Story Detection [16].

* Part of this work was done while the author was a postdoc at the IBM T.J. Watson Research Center.
† H. Wang and A. Banerjee were supported in part by NSF CAREER grant IIS-0953274, NSF grants IIS-0916750, 1029711, IIS-0812183, and NASA grant NNX12AQ39A.
We build upon this work by proposing an efficient algorithm for online dictionary learning with the ℓ1-penalty. Our online dictionary learning algorithm is based on the online alternating directions method which was recently proposed by Wang and Banerjee [19] to solve online composite optimization problems with additional linear equality constraints. Traditional online convex optimization methods such as [25, 8, 5, 6, 22] require explicit computation of the subgradient, making them computationally expensive to be applied in our high volume text setting, whereas in our algorithm the subgradients are computed implicitly. The algorithm has simple closed form updates for all steps, yielding a fast and scalable algorithm for updating the dictionary. Under suitable assumptions (to cope with the non-convexity of the dictionary learning problem), we establish an O(√T) regret bound for the objective, matching the regret bounds of existing methods [25, 5, 6, 22]. Using this online algorithm for ℓ1-dictionary learning, we obtain an online algorithm for novel document detection, which we empirically validate on traditional news-streams as well as streaming data from Twitter. Experimental results show a substantial speedup over the batch ℓ1-dictionary learning based approach of Kasiviswanathan et al. [10], without a loss of performance in detecting novel documents.

Related Work. Online convex optimization is an area of active research and for a detailed survey on the literature we refer the reader to [18]. Online dictionary learning was recently introduced by Mairal et al. [12] who showed that it provides a scalable approach for handling large dynamic datasets. They considered an ℓ2-penalty and showed that their online algorithm converges to the minimum objective value in the stochastic case (i.e., with distributional assumptions on the data). However, the ideas proposed in [12] do not translate to the ℓ1-penalty. The problem of novel document/topic detection was also addressed by a recent work of Saha et al. [17], where they proposed a non-negative matrix factorization based approach for capturing evolving and novel topics. However, their algorithm operates over a sliding time window (does not have online regret guarantees) and works only for the ℓ2-penalty.

2 Preliminaries

Notation. Vectors are always column vectors and are denoted by boldface letters. For a matrix Z, its norms are ‖Z‖_1 = Σ_{i,j} |z_ij| and ‖Z‖_F^2 = Σ_{i,j} z_ij^2. For arbitrary real matrices the standard inner product is defined as ⟨Y, Z⟩ = Tr(Y^T Z). We use λ_max(Z) to denote the largest eigenvalue of Z^T Z. For a scalar r ∈ ℝ, let sign(r) = 1 if r > 0, −1 if r < 0, and 0 if r = 0. Define soft(r, T) = sign(r) · max{|r| − T, 0}. The operators sign and soft are extended to a matrix by applying them to every entry in the matrix. 0_{m×n} denotes a matrix of all zeros of size m × n, and the subscript is omitted when the dimension of the represented matrix is clear from the context.

Dictionary Learning Background. Dictionary learning is the problem of estimating a collection of basis vectors over which a given data collection can be accurately reconstructed, often with sparse encodings. It falls into a general category of techniques known as matrix factorization. Classic dictionary learning techniques for sparse representation (see [1, 15, 12] and references therein) consider a finite training set of signals P = [p_1, ..., p_n] ∈ ℝ^{m×n} and optimize the empirical cost function defined as f(A) = Σ_{i=1}^n l(p_i, A), where l(·, ·) is a loss function such that l(p_i, A) should be small if A is "good" at representing the signal p_i in a sparse fashion. Here, A ∈ ℝ^{m×k} is referred to as the dictionary. In this paper, we use an ℓ1-loss function with an ℓ1-regularization term, and our

    l(p_i, A) = min_x ‖p_i − Ax‖_1 + λ‖x‖_1,

where λ is the regularization parameter. We define the problem of dictionary learning as that of minimizing the empirical cost f(A). In other words, dictionary learning is the following optimization problem:

    min_A f(A) = min_A Σ_{i=1}^n l(p_i, A) = min_{A,X} ‖P − AX‖_1 + λ‖X‖_1 ≝ min_{A,X} f(A, X).

For maintaining interpretability of the results, we would additionally require that the A and X matrices be non-negative. To prevent A from being arbitrarily large (which would lead to arbitrarily small values of X), we add a scaling constraint on A as follows. Let 𝒜 be the convex set of matrices defined as

    𝒜 = {A ∈ ℝ^{m×k} : A ≥ 0_{m×k}, ∀j = 1, ..., k, ‖A_j‖_1 ≤ 1},

where A_j is the jth column in A.
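The soft operator defined above is the familiar soft-thresholding map and is one line of code; a quick NumPy sketch:

```python
import numpy as np

def soft(R, T):
    """Entrywise soft(r, T) = sign(r) * max(|r| - T, 0), applied to a
    scalar or matrix R, exactly as defined in the notation above."""
    return np.sign(R) * np.maximum(np.abs(R) - T, 0.0)
```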
Here, A ∈ ℝ^{m×k} is referred to as the dictionary. In this paper, we use an ℓ1-loss function with an ℓ1-regularization term:

    l(p_i, A) = min_x ‖p_i − Ax‖₁ + λ‖x‖₁,

where λ is the regularization parameter. We define the problem of dictionary learning as that of minimizing the empirical cost f(A). In other words, dictionary learning is the following optimization problem:

    min_A f(A) = min_{A,X} f(A, X) = min_{A,X} Σ_{i=1}^{n} l(p_i, A) = min_{A,X} ‖P − AX‖₁ + λ‖X‖₁.

For maintaining interpretability of the results, we additionally require that the A and X matrices be non-negative. To prevent A from becoming arbitrarily large (which would lead to arbitrarily small values of X), we add a scaling constraint on A as follows. Let 𝒜 be the convex set of matrices defined as

    𝒜 = {A ∈ ℝ^{m×k} : A ≥ 0_{m×k} and ‖A_j‖₁ ≤ 1 for all j = 1, ..., k},

where A_j is the jth column of A. We use Π_𝒜 to denote the Euclidean projection onto the nearest point in the convex set 𝒜. The resulting optimization problem can be written as

    min_{A∈𝒜, X≥0} ‖P − AX‖₁ + λ‖X‖₁.   (1)

The optimization problem (1) is in general non-convex. But if one of the variables, either A or X, is known, the objective function with respect to the other variable becomes convex (in fact, it can be transformed into a linear program).

3 Novel Document Detection Using Dictionary Learning
In this section, we describe the problem of novel document detection and explain how dictionary learning can be used to tackle it. Our problem setup is similar to [10].

Novel Document Detection Task. We assume documents arrive in streams. Let {P_t : P_t ∈ ℝ^{m_t×n_t}, t = 1, 2, 3, ...} denote a sequence of streaming matrices, where each column of P_t represents a document arriving at time t. Here, P_t is the term-document matrix observed at time t. Each document is represented in some conventional vector space model such as TF-IDF [13]. The timestep t could be at any granularity, e.g., it could be the day that the document arrives. We use n_t to denote the number of documents arriving at time t. We normalize P_t such that each column (document) in P_t has unit ℓ1-norm. For simplicity of exposition, we will assume that m_t = m for all t.¹ We use the notation P_{[t]} to denote the term-document matrix obtained by concatenating the columns of the matrices P_1, ..., P_t, i.e., P_{[t]} = [P_1 | P_2 | ... | P_t]. If N_t is the number of documents arriving up to and including time t, then P_{[t]} ∈ ℝ^{m×N_t}. Under this setup, the goal of novel document detection is to identify documents in P_t that are "dissimilar" to the documents in P_{[t−1]}.

¹As new documents come in and new terms are identified, we expand the vocabulary and zero-pad the previous matrices so that at the current time t, all previous and current documents have a representation over the same vocabulary space.

Sparse Coding to Detect Novel Documents. Let A_t ∈ ℝ^{m×k} denote the dictionary matrix after time t − 1, where A_t is a good basis for representing all the documents in P_{[t−1]}. The exact construction of the dictionary is described later. Now, consider a document y ∈ ℝ^m appearing at time t. We say that it admits a sparse representation over A_t if y can be "well" approximated as a linear combination of a few columns of A_t. Modeling a vector with such a sparse decomposition is known as sparse coding. In most practical situations it may not be possible to represent y exactly as A_t x, e.g., if y contains new words which are absent from A_t. In such cases, one can write y = A_t x + e, where e is an unknown noise vector. We consider the following sparse coding formulation:

    l(y, A_t) = min_{x≥0} ‖y − A_t x‖₁ + λ‖x‖₁.   (2)

The formulation (2) naturally takes into account both the reconstruction error (through the ‖y − A_t x‖₁ term) and the complexity of the sparse decomposition (through the ‖x‖₁ term). It is quite easy to transform (2) into a linear program; hence, it can be solved using a variety of methods. In our experiments, we use the alternating directions method of multipliers (ADMM) [2] to solve (2). ADMM has recently gathered significant attention in the machine learning community due to its wide applicability to a range of learning problems with complex objective functions [2].
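To make the linear-programming reformulation of (2) concrete, the following is a minimal sketch: the residual y − A_t x is split into nonnegative parts e⁺ and e⁻, and a generic LP solver is applied. The function name and the use of SciPy's linprog are our own illustrative choices; the paper's experiments instead solve (2) with ADMM.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_code_l1(y, A, lam):
    """Solve min_{x>=0} ||y - A x||_1 + lam*||x||_1 as a linear program.

    Splitting y - A x = e_plus - e_minus with e_plus, e_minus >= 0 turns the
    objective into lam*1'x + 1'e_plus + 1'e_minus subject to
    A x + e_plus - e_minus = y, a standard-form LP.
    """
    m, k = A.shape
    c = np.concatenate([lam * np.ones(k), np.ones(m), np.ones(m)])
    A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    x = res.x[:k]
    objective = res.fun  # equals l(y, A) at the optimum
    return x, objective
```

At the optimum the two slack vectors have disjoint supports, so their sum recovers exactly the ℓ1 reconstruction error; this is why the split is loss-free.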
We can use sparse coding to detect novel documents as follows. For each document y arriving at time t, we do the following. First, we solve (2) to check whether y can be well approximated as a sparse linear combination of the atoms of A_t. If the objective value l(y, A_t) is "big", we mark the document as novel; otherwise, we mark the document as non-novel. Since we have normalized all documents in P_t to unit ℓ1-length, the objective values are on the same scale.

Choice of the Error Function. A very common choice of reconstruction error is the ℓ2-penalty. In fact, in the presence of isotropic Gaussian noise, the ℓ2-penalty on e = y − A_t x gives the maximum likelihood estimate of x [21, 23]. However, for text documents the noise vector e rarely satisfies the Gaussian assumption, as some of its coefficients contain large, impulsive values. For example, in fields such as politics and sports, a certain term may suddenly become dominant in a discussion [10]. In such cases, imposing an ℓ1-penalty on the error is a better choice than imposing an ℓ2-penalty (e.g., recent research [21, 24, 20] has successfully shown the superiority of the ℓ1- over the ℓ2-penalty in a different but related application domain, face recognition). We empirically validate the superiority of using the ℓ1-penalty for novel document detection in Section 5.

Size of the Dictionary. Ideally, in our application setting, changing the size of the dictionary (k) dynamically with t would lead to more efficient and effective sparse coding. However, in our theoretical analysis, we make the simplifying assumption that k is a constant independent of t. In our experiments, we allow for small increases in the size of the dictionary over time when required.

Batch Algorithm for Novel Document Detection. We now describe a simple batch algorithm (slightly modified from [10]) for detecting novel documents. The Algorithm BATCH alternates between a novel document detection step and a batch dictionary learning step.

Algorithm 1: BATCH
Input: P_{[t−1]} ∈ ℝ^{m×N_{t−1}}, P_t = [p_1, ..., p_{n_t}] ∈ ℝ^{m×n_t}, A_t ∈ ℝ^{m×k}, λ ≥ 0, ζ ≥ 0
Novel Document Detection Step:
  for j = 1 to n_t do
    Solve: x_j = argmin_{x≥0} ‖p_j − A_t x‖₁ + λ‖x‖₁
    if ‖p_j − A_t x_j‖₁ + λ‖x_j‖₁ > ζ, mark p_j as novel
Batch Dictionary Learning Step:
  Set P_{[t]} ← [P_{[t−1]} | p_1, ..., p_{n_t}]
  Solve: [A_{t+1}, X_{[t]}] = argmin_{A∈𝒜, X≥0} ‖P_{[t]} − AX‖₁ + λ‖X‖₁

Batch Dictionary Learning. We now describe the batch dictionary learning step. At time t, the dictionary learning step is²

    [A_{t+1}, X_{[t]}] = argmin_{A∈𝒜, X≥0} ‖P_{[t]} − AX‖₁ + λ‖X‖₁.   (3)

Even though conceptually simple, Algorithm BATCH is computationally inefficient. The bottleneck is the dictionary learning step.

²In our algorithms, it is quite straightforward to replace the condition A ∈ 𝒜 by some other condition A ∈ C, where C is some closed non-empty convex set.
As t increases, so does the size of P_{[t]}, so solving (3) becomes prohibitive even with efficient optimization techniques. To achieve computational efficiency, the authors of [10] solved an approximation of (3) in which the dictionary learning step updates only the A's and not the X's.³ This leads to faster running times, but because of the approximation, the quality of the dictionary degrades over time and the performance of the algorithm decreases. In this paper, we propose an online learning algorithm for (3) and show that this online algorithm is both computationally efficient and generates good quality dictionaries under reasonable assumptions.

³In particular, define (recursively) X̃_{[t]} = [X̃_{[t−1]} | x_1, ..., x_{n_t}], where the x_j's come from the novel document detection step at time t. In [10], the dictionary learning step is A_{t+1} = argmin_{A∈𝒜} ‖P_{[t]} − A X̃_{[t]}‖₁.

4 Online ℓ1-Dictionary Learning
In this section, we introduce the online ℓ1-dictionary learning problem and propose an efficient algorithm for it. The standard goal of online learning is to design algorithms whose regret is sublinear in time T, since this implies that "on the average" the algorithm performs as well as the best fixed strategy in hindsight [18]. Now consider the ℓ1-dictionary learning problem defined in (3). Since this problem is non-convex, it may not be possible to design efficient (i.e., polynomial running time) algorithms that solve it without making any assumptions on either the dictionary (A) or the sparse code (X). This also means that it may not be possible to design an efficient online algorithm with sublinear regret without making any assumptions on either A or X, because an efficient online algorithm with sublinear regret would imply an efficient algorithm for solving (1) in the offline case. Therefore, we focus on obtaining regret bounds for the dictionary update, assuming that at each timestep the sparse codes given to the batch and online algorithms are "close". This motivates the following problem.

Definition 4.1 (Online ℓ1-Dictionary Learning Problem). At time t, the online algorithm picks Â_{t+1} ∈ 𝒜. Then nature (the adversary) reveals (P_{t+1}, X̃_{t+1}) with P_{t+1} ∈ ℝ^{m×n} and X̃_{t+1} ∈ ℝ^{k×n}. The problem is to pick the sequence of Â_{t+1} such that the following regret function is minimized:⁴

    R(T) = Σ_{t=1}^{T} ‖P_t − Â_t X̃_t‖₁ − min_{A∈𝒜} Σ_{t=1}^{T} ‖P_t − A X_t‖₁,

where X̃_t = X_t + E_t and E_t is an error matrix dependent on t.

⁴For ease of presentation and analysis, we will assume that m and n don't vary with time. One could allow for changing m and n by carefully adjusting the sizes of the matrices by zero-padding.

The regret defined above admits the discrepancy between the sparse coding matrices supplied to the batch and online algorithms through the error matrix. The reason for this generality is that in our application setting, the sparse coding matrices used for updating the dictionaries of the batch and online algorithms could be different. We will later establish the conditions on the E_t's under which we can achieve sublinear regret. All missing proofs and details appear in the full version of the paper [11].

4.1 Online ℓ1-Dictionary Algorithm
In this section, we design an algorithm for the online ℓ1-dictionary learning problem, which we call Online Inexact ADMM (OIADMM),⁵ and bound its regret.

⁵The name OIADMM reflects the fact that the algorithm is based on the alternating directions method of multipliers (ADMM) procedure.
Firstly, note that because of the non-smooth ℓ1-norms involved, it is computationally expensive to apply standard online learning algorithms like online gradient descent [25, 8], COMID [6], FOBOS [5], and RDA [22], as they require computing a costly subgradient at every iteration. The subgradient of ‖P − AX‖₁ at A = Â is (X · sign(Xᵀ Âᵀ − Pᵀ))ᵀ. Our algorithm for online ℓ1-dictionary learning is based on the online alternating direction method, which was recently proposed by Wang and Banerjee [19]. Our algorithm first performs a simple variable substitution by introducing an equality constraint. The update for each of the resulting variables has a closed-form solution, without the need to estimate the subgradients explicitly.

Algorithm 2: OIADMM
Input: P_t ∈ ℝ^{m×n}, Â_t ∈ ℝ^{m×k}, Δ_t ∈ ℝ^{m×n}, X̃_t ∈ ℝ^{k×n}, β_t ≥ 0, τ_t ≥ 0
  Γ̃_t ← P_t − Â_t X̃_t
  Γ_{t+1} = argmin_Γ ‖Γ‖₁ + ⟨Δ_t, Γ̃_t − Γ⟩ + (β_t/2)‖Γ̃_t − Γ‖²_F   (i.e., Γ_{t+1} = soft(Γ̃_t + Δ_t/β_t, 1/β_t))
  G_{t+1} ← −(Δ_t/β_t + Γ̃_t − Γ_{t+1}) X̃_tᵀ
  Â_{t+1} = argmin_{A∈𝒜} β_t (⟨G_{t+1}, A − Â_t⟩ + (1/(2τ_t))‖A − Â_t‖²_F)   (i.e., Â_{t+1} = Π_𝒜(max{0, Â_t − τ_t G_{t+1}}))
  Δ_{t+1} = Δ_t + β_t (P_t − Â_{t+1} X̃_t − Γ_{t+1})
  Return Â_{t+1} and Δ_{t+1}

The Algorithm OIADMM is simple. Consider the following minimization problem at time t:

    min_{A∈𝒜} ‖P_t − A X̃_t‖₁.

We can rewrite this minimization problem as

    min_{A∈𝒜, Γ} ‖Γ‖₁  such that  P_t − A X̃_t = Γ.   (4)

The augmented Lagrangian of (4) is

    min_{A∈𝒜, Γ} ‖Γ‖₁ + ⟨Δ, P_t − A X̃_t − Γ⟩ + (β_t/2)‖P_t − A X̃_t − Γ‖²_F,   (5)

where Δ ∈ ℝ^{m×n} is a multiplier and β_t > 0 is a penalty parameter. OIADMM is summarized in Algorithm 2. The algorithm generates a sequence of iterates {Γ_t, Â_t, Δ_t}_{t=1}^{∞}. At each time t, instead of solving (4) completely, it runs only one ADMM update step on the variables (Γ_t, Â_t, Δ_t). The complete analysis of Algorithm 2 is presented in the full version of the paper [11]. Here, we summarize the main result in the following theorem.

Theorem 4.2. Let {Γ_t, Â_t, Δ_t} be the sequences generated by the OIADMM procedure and let R(T) be the regret as defined above. Assume the following conditions hold: (i) for all t, the Frobenius norm of the subgradient of ‖Γ_t‖₁ is upper bounded by Φ; (ii) Â_1 = 0_{m×k} and ‖A^{opt}‖_F ≤ D; (iii) Δ_1 = 0_{m×n}; and (iv) for all t, 1/τ_t ≥ 2λ_max(X̃_t). Setting, for all t, β_t = Φ√T / (D√τ_m), where τ_m = max_t {1/τ_t}, we have

    R(T) ≤ Φ D √(τ_m T) + Σ_{t=1}^{T} ‖A^{opt} E_t‖₁.

In the above theorem one can replace τ_m by any upper bound on it (i.e., we don't need to know τ_m exactly).

Condition on the E_t's for Sublinear Regret. In a standard online learning setting, the (P_t, X̃_t) made available to the online learning algorithm will be the same as the (P_t, X_t) made available to the batch dictionary learning algorithm in hindsight, so that X̃_t − X_t = E_t = 0, yielding an O(√T) regret. More generally, as long as Σ_{t=1}^{T} ‖E_t‖_p = o(T) for some suitable p-norm, we get a sublinear regret bound.⁶ For example, if {Z_t} is a sequence of matrices such that ‖Z_t‖_p = O(1) for all t, then setting E_t = t^{−ε} Z_t with ε > 0 yields sublinear regret. This gives a sufficient condition for sublinear regret, and it is an interesting open problem to extend the analysis to other cases.

⁶This follows from Hölder's inequality, which gives Σ_{t=1}^{T} ‖A^{opt} E_t‖₁ ≤ ‖A^{opt}‖_q (Σ_{t=1}^{T} ‖E_t‖_p) for 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and by assuming that ‖A^{opt}‖_q is bounded. Here, ‖·‖_p denotes the Schatten p-norm.
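For readers who prefer code to pseudocode, the following is a direct NumPy transcription of one OIADMM update. It is a sketch that assumes the inputs of Algorithm 2 and a projection routine project_A onto the set 𝒜 (one possible implementation of project_A is sketched after the running-time discussion below).

```python
import numpy as np

def soft(R, T):
    """Entrywise soft-thresholding: sign(r) * max(|r| - T, 0)."""
    return np.sign(R) * np.maximum(np.abs(R) - T, 0.0)

def oiadmm_step(P_t, A_t, Delta_t, X_t, beta_t, tau_t, project_A):
    """One OIADMM update (Algorithm 2).

    Shapes: P_t (m, n), A_t (m, k), Delta_t (m, n), X_t (k, n); X_t plays
    the role of the (possibly perturbed) sparse code matrix X-tilde_t.
    """
    Gamma_tilde = P_t - A_t @ X_t                              # residual
    Gamma = soft(Gamma_tilde + Delta_t / beta_t, 1.0 / beta_t)  # Gamma update
    G = -(Delta_t / beta_t + Gamma_tilde - Gamma) @ X_t.T       # implicit gradient
    A_next = project_A(np.maximum(0.0, A_t - tau_t * G))        # dictionary update
    Delta_next = Delta_t + beta_t * (P_t - A_next @ X_t - Gamma)
    return A_next, Delta_next
```

Every step is a matrix multiplication, a thresholding, or a projection, which is exactly why no explicit subgradient of the ℓ1 objective is ever formed.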
Running Time. For the ith column of the dictionary matrix, the projection onto 𝒜 can be done in O(s_i log m) time, where s_i is the number of non-zero elements in the ith column, using the projection onto the ℓ1-ball algorithm of Duchi et al. [4]. The simplest implementation of OIADMM takes O(mnk) time at each timestep because of the matrix multiplications involved.

5 Experimental Results
In this section, we present experiments that compare and contrast the performance of the ℓ1-batch and ℓ1-online dictionary learning algorithms on the task of novel document detection. We also present results highlighting the superiority of using an ℓ1- over an ℓ2-penalty on the reconstruction error for this task (validating the discussion in Section 3).

Implementation of BATCH. In our implementation, we grow the dictionary size by ρ in each timestep. Growing the dictionary size is essential for the batch algorithm because, as t increases, the number of columns of P_{[t]} also increases, and therefore a larger dictionary is required to compactly represent all the documents in P_{[t]}. For solving (3), we use alternating minimization over the variables. The pseudo-code description is given in the full version of the paper [11]. The optimization problems arising in the sparse coding and dictionary learning steps are solved using ADMMs.

Online Algorithm for Novel Document Detection. Our online algorithm⁷ uses the same novel document detection step as Algorithm BATCH, but dictionary learning is done using OIADMM. For a pseudo-code description, see the full version of the paper [11]. Notice that the sparse coding matrices of Algorithm BATCH, X_1, ..., X_t, could be different from X̃_1, ..., X̃_t. If these sequences of matrices are close to each other, then we have sublinear regret on the objective function.⁸

Evaluation of Novel Document Detection. For performance evaluation, we assume that documents in the corpus have been manually identified with a set of topics. For simplicity, we assume that each document is tagged with the single, most dominant topic that it associates with, which we call the true topic of that document. We call a document y arriving at time t novel if the true topic of y has not appeared before time t. So at time t, given a set of documents, the task of novel document detection is to classify each document as either novel (positive) or non-novel (negative). For evaluating this classification task, we use the standard Area Under the ROC Curve (AUC) [13].

Performance Evaluation for ℓ1-Dictionary Learning. We use a simple reconstruction error measure for comparing the dictionaries produced by our ℓ1-batch and ℓ1-online algorithms. We want the dictionary at time t to be a good basis for representing all the documents in P_{[t]} ∈ ℝ^{m×N_t}. This leads us to define the sparse reconstruction error (SRE) of a dictionary A at time t as

    SRE(A) := (1/N_t) min_{X≥0} ( ‖P_{[t]} − AX‖₁ + λ‖X‖₁ ).

A dictionary with a smaller SRE is better on average at sparsely representing the documents in P_{[t]}.

⁷In our experiments, the number of documents introduced in each timestep is almost of the same order, and hence there is no need to change the size of the dictionary across timesteps for the online algorithm.
⁸As noted earlier, we cannot do a comparison without making any assumptions.
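The column-wise projection Π_𝒜 used by OIADMM (and discussed in the running-time paragraph above) reduces, for each dictionary column, to a Euclidean projection onto the nonnegative ℓ1-ball. Below is a minimal sketch using the sort-based O(m log m) variant of the Duchi et al. [4] projection rather than the O(s_i log m) version; the function names are ours.

```python
import numpy as np

def project_column(v):
    """Euclidean projection of v onto {w : w >= 0, ||w||_1 <= 1}.

    If clipping negatives already satisfies the l1 constraint we are done;
    otherwise the KKT conditions give w = max(v - theta, 0) with theta
    chosen so that the result sums to one (sort-based simplex projection).
    """
    w = np.maximum(v, 0.0)
    if w.sum() <= 1.0:
        return w
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(w - theta, 0.0)

def project_A(A):
    """Column-wise projection onto the feasible dictionary set."""
    return np.column_stack([project_column(A[:, j]) for j in range(A.shape[1])])
```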
Novel Document Detection using ℓ2-dictionary learning. To justify the choice of an ℓ1-penalty (on the reconstruction error) for novel document detection, we performed experiments comparing the ℓ1- and ℓ2-penalties on this task. In the ℓ2-setting, for the sparse coding step we used a fast implementation of the LARS algorithm with positivity constraints [7], and the dictionary learning was done by solving a non-negative matrix factorization problem with additional sparsity constraints (also known as the non-negative sparse coding problem [9]). A complete pseudo-code description is given in the full version of the paper [11].⁹

⁹We used the SPAMS package http://spams-devel.gforge.inria.fr/ in our implementation.

Experimental Setup. All reported results are based on a Matlab implementation running on a quad-core 2.33 GHz Intel processor with 32GB RAM. The regularization parameter λ is set to 0.1, which yields reasonable sparsities in our experiments. The OIADMM parameter τ_t is set to 1/(2λ_max(X̃_t)) (chosen according to Theorem 4.2), and β_t is fixed to 5 (obtained through tuning). The ADMM parameters for the sparse coding and batch dictionary learning steps are set as suggested in [10] (refer to the full version [11]). In the batch algorithms, we grow the dictionary sizes by ρ = 10 in each timestep. The threshold value ζ is treated as a tunable parameter.

5.1 Experiments on News Streams
Our first dataset is drawn from the NIST Topic Detection and Tracking (TDT2) corpus, which consists of news stories from the first half of 1998. In our evaluation, we used a set of 9000 documents represented over 19528 terms and distributed into the top 30 TDT2 human-labeled topics over a period of 27 weeks. We introduce the documents in groups. At timestep 0, we introduce the first 1000 documents, and these documents are used for initializing the dictionary. We use an alternating minimization procedure over the variables of (1) to initialize the dictionary. In these experiments the size of the initial dictionary is k = 200. In each subsequent timestep t ∈ {1, ..., 8}, we provide the batch and online algorithms the same set of 1000 documents. In Figure 1, we present novel document detection results for those timesteps in which at least one novel document was introduced. Table 1 shows the corresponding AUC numbers. The results show that using an ℓ1-penalty on the reconstruction error is better for novel document detection than using an ℓ2-penalty.

[Figure 1: ROC curves (true positive rate vs. false positive rate) on TDT2 for the ONLINE, BATCH-IMPL, and L2-BATCH algorithms, at timesteps 1, 2, 5, 6, and 8, the timesteps in which novel documents were introduced.]

Comparison of the ℓ1-online and ℓ1-batch Algorithms. The ℓ1-online and ℓ1-batch algorithms have almost identical performance in terms of detecting novel documents (see Table 1). However, the online algorithm is much more computationally efficient. In Figure 2(a), we compare the running times of these algorithms. As noted earlier, the running time of the batch algorithm goes up as t increases (as it has to optimize over the entire past). However, the running time of the online algorithm is independent of the past and only depends on the number of documents introduced in each timestep (which in this case is always 1000).
Therefore, the running time of the online algorithm is almost the same across different timesteps. As expected, the run-time gap between the ℓ1-batch and ℓ1-online algorithms widens as t increases: in the first timestep the online algorithm is 5.4 times faster, and this rapidly increases to a factor of 11.5 in just 7 timesteps.

Table 1: AUC numbers for the ROC plots in Figure 1.

Timestep | No. of Novel Docs. | No. of Non-novel Docs. | AUC ℓ1-online | AUC ℓ1-batch | AUC ℓ2-batch
1        | 19                 | 981                    | 0.791         | 0.815        | 0.674
2        | 53                 | 947                    | 0.694         | 0.704        | 0.586
5        | 116                | 884                    | 0.732         | 0.764        | 0.601
6        | 66                 | 934                    | 0.881         | 0.898        | 0.816
8        | 65                 | 935                    | 0.757         | 0.760        | 0.701
Avg.     |                    |                        | 0.771         | 0.788        | 0.676

In Figure 2(b), we compare the dictionaries produced by the ℓ1-batch and ℓ1-online algorithms under the SRE metric. In the first few timesteps, the SRE of the dictionaries produced by the online algorithm is slightly lower than that of the batch algorithm. However, this gets corrected after a few timesteps and, as expected, the batch algorithm later produces better dictionaries.

[Figure 2: Running time and SRE plots for the TDT2 and Twitter datasets. (a) Run time on TDT2 (CPU running time in minutes vs. timestep; ONLINE, BATCH-IMPL, L2-BATCH). (b) Sparse reconstruction error on TDT2. (c) Run time on Twitter. (d) Sparse reconstruction error on Twitter.]

5.2 Experiments on Twitter
Our second dataset is from an application of monitoring Twitter for marketing and PR for smartphone and wireless providers. We used the Twitter Decahose to collect a 10% sample of all tweets (posts) from Sept 15 to Oct 05, 2011. From this, we filtered the tweets relevant to "Smartphones" using a scheme presented in [3], which utilizes the Wikipedia ontology to do the filtering. Our dataset comprises 127760 tweets over these 21 days, and the vocabulary size is 6237 words. We used the tweets from Sept 15 to 21 (34292 in number) to initialize the dictionaries. Subsequently, at each timestep, we give as input to both algorithms all the tweets from a given day (for a period of 14 days between Sept 22 and Oct 05). Since this dataset is unlabeled, we do a quantitative evaluation of the ℓ1-batch vs. ℓ1-online algorithms (in terms of SRE) and a qualitative evaluation of the ℓ1-online algorithm on the novel document detection task. Here, the size of the initial dictionary is k = 100. Figure 2(c) shows the running times on the Twitter dataset. At the first timestep the online algorithm is already 10.8 times faster, and this speedup escalates to 18.2 by the 14th timestep. Figure 2(d) shows the SRE of the dictionaries produced by these algorithms. In this case, the SRE of the dictionaries produced by the batch algorithm is consistently better than that of the online algorithm, but, as the running time plots suggest, this improvement comes at a very steep price.
Table 2 below shows a representative set of novel tweets identified by our online algorithm. Using a completely automated process (refer to the full version [11]), we are able to detect breaking news and trends relevant to the smartphone market, such as AT&T throttling data bandwidth, the launch of the iPhone 4S, and the death of Steve Jobs.

Table 2: Sample novel documents detected by our online algorithm.

Date       | Sample novel tweets detected using our online algorithm
2011-09-26 | Android powered 56 percent of smartphones sold in the last three months. Sad thing is it can't lower the rating of ios!
2011-09-29 | How Windows 8 is faster, lighter and more efficient: WP7 Droid Bionic Android 2.3.4 HP TouchPad white ipods 72
2011-10-03 | U.S. News: AT&T begins sending throttling warnings to top data hogs: AT&T did away with its unlimited da... #iPhone
2011-10-04 | Can't wait for the iphone 4s #letstalkiphone
2011-10-05 | Everybody put an iPhone up in the air one time #ripstevejobs

References
[1] M. Aharon, M. Elad, and A. Bruckstein. The K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Transactions on Signal Processing, 54(11), 2006.
[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 2011.
[3] V. Chenthamarakshan, P. Melville, V. Sindhwani, and R. D. Lawrence. Concept Labeling: Building Text Classifiers with Minimal Supervision. In IJCAI, pages 1225–1230, 2011.
[4] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient Projections onto the l1-Ball for Learning in High Dimensions. In ICML, pages 272–279, 2008.
[5] J. Duchi and Y. Singer. Efficient Online and Batch Learning using Forward Backward Splitting. JMLR, 10:2873–2898, 2009.
[6] J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari. Composite Objective Mirror Descent. In COLT, pages 14–26, 2010.
[7] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise Coordinate Optimization. The Annals of Applied Statistics, 1(2):302–332, 2007.
[8] E. Hazan, A. Agarwal, and S. Kale. Logarithmic Regret Algorithms for Online Convex Optimization. Machine Learning, 69(2-3):169–192, 2007.
[9] P. O. Hoyer. Non-Negative Sparse Coding. In IEEE Workshop on Neural Networks for Signal Processing, pages 557–565, 2002.
[10] S. P. Kasiviswanathan, P. Melville, A. Banerjee, and V. Sindhwani. Emerging Topic Detection using Dictionary Learning. In CIKM, pages 745–754, 2011.
[11] S. P. Kasiviswanathan, H. Wang, A. Banerjee, and P. Melville. Online ℓ1-Dictionary Learning with Application to Novel Document Detection. http://www.cse.psu.edu/~kasivisw/fullonlinedict.pdf.
[12] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online Learning for Matrix Factorization and Sparse Coding. JMLR, 11:19–60, 2010.
[13] C. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.
[14] P. Melville, J. Leskovec, and F. Provost, editors. Proceedings of the First Workshop on Social Media Analytics. ACM, 2010.
[15] B. Olshausen and D. Field. Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1? Vision Research, 37(23):3311–3325, 1997.
[16] S. Petrović, M. Osborne, and V. Lavrenko. Streaming First Story Detection with Application to Twitter. In HLT '10, pages 181–189. ACL, 2010.
[17] A. Saha and V. Sindhwani. Learning Evolving and Emerging Topics in Social Media: A Dynamic NMF Approach with Temporal Regularization. In WSDM, pages 693–702, 2012.
[18] S. Shalev-Shwartz. Online Learning and Online Convex Optimization. Foundations and Trends in Machine Learning, 4(2), 2012.
[19] H. Wang and A. Banerjee. Online Alternating Direction Method. In ICML, 2012.
[20] J. Wright and Y. Ma. Dense Error Correction via L1-Minimization.
IEEE Transactions on Information Theory, 56(7):3540–3560, 2010.
[21] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust Face Recognition via Sparse Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, Feb. 2009.
[22] L. Xiao. Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization. JMLR, 11:2543–2596, 2010.
[23] A. Y. Yang, S. S. Sastry, A. Ganesh, and Y. Ma. Fast L1-Minimization Algorithms and an Application in Robust Face Recognition: A Review. In International Conference on Image Processing, pages 1849–1852, 2010.
[24] J. Yang and Y. Zhang. Alternating Direction Algorithms for L1-Problems in Compressive Sensing. SIAM Journal on Scientific Computing, 33(1):250–278, 2011.
[25] M. Zinkevich. Online Convex Programming and Generalized Infinitesimal Gradient Ascent. In ICML, pages 928–936, 2003.
Random function priors for exchangeable arrays with applications to graphs and relational data

James Robert Lloyd (Department of Engineering, University of Cambridge)
Peter Orbanz (Department of Statistics, Columbia University)
Zoubin Ghahramani (Department of Engineering, University of Cambridge)
Daniel M. Roy (Department of Engineering, University of Cambridge)

Abstract
A fundamental problem in the analysis of structured relational data like graphs, networks, databases, and matrices is to extract a summary of the common structure underlying relations between individual entities. Relational data are typically encoded in the form of arrays; invariance to the ordering of rows and columns corresponds to exchangeable arrays. Results in probability theory due to Aldous, Hoover and Kallenberg show that exchangeable arrays can be represented in terms of a random measurable function, which constitutes the natural model parameter in a Bayesian model. We obtain a flexible yet simple Bayesian nonparametric model by placing a Gaussian process prior on the parameter function. Efficient inference utilises elliptical slice sampling combined with a random sparse approximation to the Gaussian process. We demonstrate applications of the model to network data and clarify its relation to models in the literature, several of which emerge as special cases.

1 Introduction
Structured relational data arises in a variety of contexts, including graph-valued data [e.g. 1, 5], micro-array data, tensor data [e.g. 27] and collaborative filtering [e.g. 21]. Such data express relations between two or more objects (e.g., friendship between a pair of users in a social network). Pairwise relations can be represented by a 2-dimensional array (a matrix); more generally, relations between d-tuples are recorded as d-dimensional arrays (d-arrays). We consider Bayesian models of infinite 2-arrays (X_{ij})_{i,j∈ℕ}, where entries X_{ij} take values in a space 𝒳. Each entry X_{ij} describes the relation between objects i and j. Finite samples (relational measurements for n objects) are n × n arrays. As the sample size increases, the data aggregate into a larger and larger array. Graph-valued data, for example, corresponds to the case 𝒳 = {0, 1}. In collaborative filtering problems, the set of objects is subdivided into two disjoint sets, e.g., users and items. Latent variable models for such data explain observations by means of an underlying structure or summary, such as a low-rank approximation to an observed array or an embedding into a Euclidean space. This structure is formalized as a latent (unobserved) variable. Examples include matrix factorization [e.g. 4, 21], non-linear generalisations [e.g. 12, 27, 28], block modelling [e.g. 1, 10], latent distance modelling [e.g. 5] and many others [e.g. 14, 17, 20]. Hoff [4] first noted that a number of parametric latent variable models for relational data are exchangeable (an applicable assumption whenever the objects in the data have no natural ordering, e.g., users in a social network or products in ratings data) and can be cast into the common functional form guaranteed to exist by results in probability theory. Building on this connection,

Figure 1: Left: The distribution of any exchangeable random graph with vertex set ℕ and edges E = (X_{ij})_{i,j∈ℕ} can be characterised by a random function W : [0, 1]² → [0, 1]. Given W, a graph can be sampled by generating a uniform random variable U_i for each vertex i, and sampling edges as X_{ij} ~
Bernoulli(W(U_i, U_j)). Middle: A heat map of an example function W. Right: A 100 × 100 symmetric adjacency matrix sampled from W. Only unordered index pairs X_{ij} are sampled in the symmetric case. Rows and columns have been ordered by increasing value of U_i, rather than i.

we consider nonparametric models for graphs and arrays. Results of Aldous [2], Hoover [6] and Kallenberg [7] show that random arrays that satisfy an exchangeability property can be represented in terms of a random function. These representations have been further developed in discrete analysis for the special case of graphs [13]; this case is illustrated in Fig. 1. The results can be regarded as a generalization of de Finetti's theorem to array-valued data. Their implication for Bayesian modeling is that we can specify a prior for an exchangeable random array model by specifying a prior on (measurable) functions. The prior is a distribution on the space of all functions that can arise in the representation result, and the dimension of this space is infinite. A prior must therefore be nonparametric to have reasonably large support, since a parametric prior concentrates on a finite-dimensional subset. In the following, we model the representing function explicitly using a nonparametric prior.

2 Background: Exchangeable graphs and arrays
A fundamental component of every Bayesian model is a random variable Θ, the parameter of the model, which decouples the data. De Finetti's theorem [9] characterizes this parameter for random sequences: let X_1, X_2, ... be an infinite sequence of random variables, each taking values in a common space 𝒳. A sequence is called exchangeable if its joint distribution is invariant under arbitrary permutation of the indices, i.e., if

    (X_1, X_2, ...) =_d (X_{π(1)}, X_{π(2)}, ...)  for all π ∈ 𝕊_∞.   (2.1)

Here, =_d denotes equality in distribution, and 𝕊_∞ is the set of all permutations of ℕ that permute a finite number of elements. De Finetti's theorem states that (X_i)_{i∈ℕ} is exchangeable if and only if there exists a random probability measure Θ on 𝒳 such that X_1, X_2, ... | Θ ~iid Θ, i.e., conditioned on Θ, the observations are independent and Θ-distributed. From a statistical perspective, Θ represents common structure in the observed data, and is thus a natural target of statistical inference, whereas P[X_i | Θ] captures the remaining, independent randomness in each observation.

2.1 De Finetti-type representations for random arrays
To specify Bayesian models for graph- or array-valued data, we need a suitable counterpart to de Finetti's theorem that is applicable when the random sequences in (2.1) are substituted by random arrays X = (X_{ij})_{i,j∈ℕ}. For such data, the invariance assumption (2.1) applied to all elements of X is typically too restrictive: in the graph case X_{ij} ∈ {0, 1}, for example, the probability of X would then depend only on the proportion of edges present in the graph, but not on the graph structure. Instead, we define exchangeability of random 2-arrays in terms of the simultaneous application of a permutation to rows and columns. More precisely:

Definition 2.1. An array X = (X_{ij})_{i,j∈ℕ} is called an exchangeable array if

    (X_{ij}) =_d (X_{π(i)π(j)})  for every π ∈ 𝕊_∞.   (2.2)

Since this weakens the hypothesis (2.1) by demanding invariance only under the subset of permutations of ℕ² of the form (i, j) ↦ (π(i), π(j)), we can no longer expect de Finetti's theorem to hold.
The relevant generalization of the de Finetti theorem to this case is the following:

Theorem 2.2 (Aldous, Hoover). A random 2-array (X_{ij}) is exchangeable if and only if there is a random (measurable) function F : [0, 1]³ → 𝒳 such that

    (X_{ij}) =_d (F(U_i, U_j, U_{ij}))   (2.3)

for every collection (U_i)_{i∈ℕ} and (U_{ij})_{i≤j∈ℕ} of i.i.d. Uniform[0, 1] random variables, where U_{ji} = U_{ij} for j < i ∈ ℕ.

2.2 Random graphs
The graph-valued data case 𝒳 = {0, 1} is of particular interest. Here, the array X, interpreted as an adjacency matrix, specifies a random graph with vertex set ℕ. For undirected graphs, X is symmetric. We call a random graph exchangeable if X satisfies (2.2). For undirected graphs, the representation (2.3) simplifies further: there is a random function W : [0, 1]² → [0, 1], symmetric in its arguments, such that

    F(U_i, U_j, U_{ij}) := 1 if U_{ij} < W(U_i, U_j), and 0 otherwise   (2.4)

satisfies (2.3). Each variable U_i is associated with a vertex, each variable U_{ij} with an edge. The representation (2.4) is equivalent to the sampling scheme

    U_1, U_2, ... ~iid Uniform[0, 1]  and  X_{ij} = X_{ji} ~ Bernoulli(W(U_i, U_j)),   (2.5)

which is illustrated in Fig. 1. Recent work in discrete analysis shows that any symmetric measurable function [0, 1]² → [0, 1] can be regarded as a (suitably defined) limit of adjacency matrices of graphs of increasing size [13]; intuitively speaking, as the number of rows and columns increases, the array in Fig. 1 (right) converges to the heat map in Fig. 1 (middle) (up to a reordering of rows and columns).

2.3 The general case: d-arrays
Theorem 2.2 can in fact be stated in a more general setting than 2-arrays, namely for random d-arrays, which are collections of random variables of the form (X_{i_1...i_d})_{i_1,...,i_d∈ℕ}. Thus, a sequence is a 1-array, a matrix a 2-array. A d-array can be interpreted as an encoding of a relation between d-tuples. In this general case, an analogous theorem holds, but the random function F in (2.3) is in general more complex: in addition to the collections U_{{i}} and U_{{ij}} of uniform variables, the representation requires an additional collection U_{{i_j}_{j∈I}} for every non-empty subset I ⊆ {1, ..., d}; e.g., U_{{i_1 i_3 i_4}} for d ≥ 4 and I = {1, 3, 4}. The representation (2.3) is then substituted by

    F : [0, 1]^{2^d − 1} → 𝒳  and  (X_{i_1,...,i_d}) =_d (F(U_{I_1}, ..., U_{I_{2^d − 1}})).   (2.6)

For d = 1, we recover a version of de Finetti's theorem. For a discussion of convergence properties of general arrays similar to those sketched above for random graphs, see [3]. Because we do not explicitly consider the case d > 2 in our experiments, we restrict our presentation of the model to the 2-array-valued case for simplicity. We note, however, that the model and inference algorithms described in the following extend immediately to general d-array-valued data.
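As a concrete illustration of the sampling scheme (2.5), the following sketch draws a finite graph from an explicit graphon. The particular function W in the example is a hand-picked, purely illustrative choice, not one estimated from data.

```python
import numpy as np

def sample_exchangeable_graph(n, W, seed=0):
    """Sample an n-vertex undirected graph via the scheme in (2.5)."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(size=n)                    # one uniform U_i per vertex
    probs = W(U[:, None], U[None, :])          # edge probabilities W(U_i, U_j)
    coin = rng.uniform(size=(n, n))            # the U_ij; only i < j are used
    upper = np.triu(coin < probs, k=1)
    return (upper | upper.T).astype(int)       # symmetrise, no self-loops

# Example with a smooth, hand-picked graphon (illustrative only).
W_example = lambda u, v: 0.8 * np.exp(-3.0 * (u + v))
X = sample_exchangeable_graph(100, W_example)
```

Sorting the rows and columns of X by increasing U_i reproduces the qualitative picture of Fig. 1: the sampled adjacency matrix visually approaches the heat map of W as n grows.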
3 Model
To define a Bayesian model for exchangeable graphs or arrays, we start with Theorem 2.2: a distribution on exchangeable arrays can be specified by a distribution on measurable functions [0, 1]³ → 𝒳. We decompose the function F into two functions Θ : [0, 1]² → 𝒲 and H : [0, 1] × 𝒲 → 𝒳 for a suitable space 𝒲, such that

    (X_{ij}) =_d (F(U_i, U_j, U_{ij})) = (H(U_{ij}, Θ(U_i, U_j))).   (3.1)

Such a decomposition always exists (trivially, choose 𝒲 = [0, 1]²). The decomposition introduces a natural hierarchical structure. We initially sample a random function Θ (the model parameter in terms of Bayesian statistics) which captures the structure of the underlying graph or array. The (U_i) then represent attributes of nodes or objects, and H and the array (U_{ij}) model the remaining noise in the observed relations.

Model definition. For the purpose of defining a Bayesian model, we will model Θ as a continuous function with a Gaussian process prior. More precisely, we take 𝒲 = ℝ and consider a zero-mean Gaussian process prior on C_𝒲 := C([0, 1]², 𝒲), the space of continuous functions from [0, 1]² to 𝒲, with kernel function κ : [0, 1]² × [0, 1]² → 𝒲. The full generative model is then:

    Θ ~ GP(0, κ),   U_1, U_2, ... ~iid Uniform[0, 1],   X_{ij} | W_{ij} ~ P[· | W_{ij}]  where  W_{ij} = Θ(U_i, U_j).   (3.2)

The parameter space of our model is the infinite-dimensional space C_𝒲; hence, the model is nonparametric. Graphs and real-valued arrays require different choices of P. In either case, the model first generates the latent array W = (W_{ij}). Observations are then generated as follows:

Observed data | Sample space 𝒳 | P[X_{ij} ∈ · | W_{ij}]
Graph         | {0, 1}          | Bernoulli(σ(W_{ij}))
Real array    | ℝ               | Normal(W_{ij}, σ_X²)

where σ is the logistic function and σ_X² is a noise variance parameter. The Gaussian process prior favors smooth functions, which will in general result in more interpretable latent space embeddings. Inference in Gaussian processes is a well-understood problem, and the choice of a Gaussian prior allows us to leverage the full range of inference methods available for these models.

Discussion of modeling assumptions. In addition to exchangeability, our model assumes (i) that the function Θ is continuous (which implies measurability as in Theorem 2.2 but is a stronger requirement) and (ii) that its law is Gaussian. Exchangeable, undirected graphs are always representable using a Bernoulli distribution for P[X_{ij} ∈ · | W_{ij}]. Hence, in this case, (i) and (ii) are indeed the only assumptions imposed by the model. In the case of real-valued matrices, the model additionally assumes that the function H in (3.1) is of the form

    H(U_{ij}, Θ(U_i, U_j)) = Θ(U_i, U_j) + ε_{ij},  where  ε_{ij} ~iid Normal(0, σ_X²).   (3.3)

Another rather subtle assumption arises implicitly when the array X is not symmetric, i.e., not guaranteed to satisfy X_{ij} = X_{ji}, for example, if X is a directed graph: in Theorem 2.2, the array (U_{ij}) is symmetric even if X is not. The randomness in U_{ij} accounts for both X_{ij} and X_{ji}, which means the conditional variables X_{ij} | W_{ij} and X_{ji} | W_{ji} are dependent, and a precise representation would have to sample (X_{ij}, X_{ji}) | W_{ij}, W_{ji} jointly, a fact our model neglects in (3.2). However, it can be shown that any exchangeable array can be arbitrarily well approximated by arrays which treat X_{ij} | W_{ij} and X_{ji} | W_{ji} as independent [8, Thm. 2].

Remark 3.1 (Dense vs. sparse data). The methods described here address random arrays that are dense, i.e., as the size of an n × n array increases, the number of non-zero entries grows as O(n²). Network data is typically sparse, with O(n) non-zero entries. Density is an immediate consequence of Theorem 2.2: for graph data the asymptotic proportion of present edges is p := ∫ W(x, y) dx dy, and the graph is hence either empty (for p = 0) or dense (since O(pn²) = O(n²)). Analogous representation theorems for sparse random graphs are to date an open problem in probability.
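A forward sample from the generative model (3.2) can be produced by evaluating the GP only at the n² pairs (U_i, U_j) that a finite graph requires. The sketch below assumes an RBF kernel on these pairs and symmetrises the draw by averaging over the argument swap, which corresponds to the symmetrised kernel introduced in Section 5.1 (without its noise term); all kernel choices and parameter values are illustrative, and the O(n⁴)-sized kernel matrix restricts this direct approach to small n.

```python
import numpy as np

def sample_from_model(n, lengthscale=0.3, scale=1.0, seed=1):
    """Forward-sample an undirected graph from the generative model (3.2)."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(size=n)
    # Evaluate the GP only at the n^2 pairs (U_i, U_j) the finite graph needs.
    pairs = np.array([(U[i], U[j]) for i in range(n) for j in range(n)])
    sq = ((pairs[:, None, :] - pairs[None, :, :]) ** 2).sum(-1)
    K = scale**2 * np.exp(-sq / (2 * lengthscale**2)) + 1e-8 * np.eye(n * n)
    W = (np.linalg.cholesky(K) @ rng.standard_normal(n * n)).reshape(n, n)
    # Averaging over the argument swap yields a draw whose covariance is the
    # symmetrised kernel of Section 5.1 (minus the noise term), so W_ij = W_ji.
    W = (W + W.T) / 2
    P = 1.0 / (1.0 + np.exp(-W))          # logistic link for graph data
    E = rng.uniform(size=(n, n)) < P
    return (np.triu(E, 1) | np.triu(E, 1).T).astype(int)
```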
4 Related work
Our model has some noteworthy relations to the Gaussian process latent variable model (GPLVM), a dimensionality-reduction technique [e.g. 11]. GPLVMs can be applied to 2-arrays, but doing so makes the assumption that either the rows or the columns of the random array are independent [12]. In terms of our model, this corresponds to choosing kernels of the form κ_U ⊗ δ, where ⊗ denotes a tensor product¹ and δ represents an "identity" kernel (i.e., the corresponding kernel matrix is the identity matrix). From this perspective, the application of our model to exchangeable real-valued arrays can be interpreted as a form of co-dimensionality reduction. For graph data, a related parametric model is the eigenmodel of Hoff [4]. This model, also justified by exchangeability arguments, approximates an array with a bilinear form, followed by some link function and conditional probability distribution. Available nonparametric models include the infinite relational model (IRM) [10], the latent feature relational model (LFRM) [14], the infinite latent attribute model (ILA) [17] and many others. A recent development is the sparse matrix-variate Gaussian process blockmodel (SMGB) of Yan et al. [28]. Although not motivated in terms of exchangeability, this model does not impose independence assumptions on either rows or columns, in contrast to the GPLVM. The model uses kernels of the form κ_1 ⊗ κ_2; our work suggests that it may not be necessary to impose tensor product structure, which allows for inference with improved scaling. Roy and Teh [20] present a nonparametric Bayesian model of relational data that approximates Θ by a piece-wise constant function with a specific hierarchical structure, called a Mondrian process in [20]. Some examples of the various available models can be succinctly summarized as follows:

Graph data:
  Random function model: Θ ~ GP(0, κ)
  Latent class [26]:     W_{ij} = m_{U_i U_j}, where U_i ∈ {1, ..., K}
  IRM [10]:              W_{ij} = m_{U_i U_j}, where U_i ∈ {1, ..., ∞}
  Latent distance [5]:   W_{ij} = −|U_i − U_j|
  Eigenmodel [4]:        W_{ij} = U_iᵀ Λ U_j
  LFRM [14]:             W_{ij} = U_iᵀ Λ U_j, where U_i ∈ {0, 1}^∞
  ILA [17]:              W_{ij} = Σ_d 𝟙_{U_{id}} 𝟙_{U_{jd}} Λ^{(d)}_{U_{id} U_{jd}}, where U_i ∈ {0, ..., ∞}^∞
  SMGB [28]:             Θ ~ GP(0, κ_1 ⊗ κ_2)

Real-valued array data:
  Random function model:       Θ ~ GP(0, κ)
  Mondrian process based [20]: Θ = piece-wise constant random function
  PMF [21]:                    W_{ij} = U_iᵀ V_j
  GPLVM [12]:                  Θ ~ GP(0, κ ⊗ δ)

5 Posterior computation
We describe Markov Chain Monte Carlo (MCMC) algorithms for generating approximate samples from the posterior distribution of the model parameters given a partially observed array. Most importantly, we describe a random subset-of-regressors approximation that scales to graphs with hundreds of nodes. Given the relatively straightforward nature of the proposed algorithms and approximations, we refer the reader to other papers whenever appropriate.

5.1 Latent space and kernel
Theorem 2.2 is not restricted to the use of uniform distributions for the variables U_i and U_{ij}: the proof remains unchanged if one replaces the uniform distributions with any non-atomic probability measure on a Borel space. For the purposes of inference, normal distributions are more convenient, and we henceforth use U_1, U_2, ... ~iid N(0, I_r) for some integer r. Since we focus on undirected graph data, we require the symmetry condition W_{ij} = W_{ji}. This can be achieved by constructing the kernel function as

    κ(θ_1, θ_2) = (1/2)(κ̃(θ_1, θ_2) + κ̃(θ_1, θ̄_2)) + σ² I    (symmetry + noise)   (5.1)
    κ̃(θ_1, θ_2) = s² exp(−|θ_1 − θ_2|² / (2ℓ²))               (RBF kernel)          (5.2)

where θ_k = (U_{i_k}, U_{j_k}), θ̄_k = (U_{j_k}, U_{i_k}), and s, ℓ, σ represent a scale factor, a length scale, and a noise parameter, respectively (see, e.g., [19] for a discussion of kernel functions). We collectively denote the kernel parameters by ω.

¹We define the tensor product of kernel functions as follows: (κ_U ⊗ κ_V)((u_1, v_1), (u_2, v_2)) = κ_U(u_1, u_2) · κ_V(v_1, v_2).
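In code, the symmetrised kernel (5.1)–(5.2) can be written as follows. Treating the σ² term as a diagonal correction added only when the two input sets coincide is our own simplification, and the function names are illustrative.

```python
import numpy as np

def rbf(X1, X2, s, ell):
    """kappa-tilde in (5.2): s^2 * exp(-|x1 - x2|^2 / (2 ell^2))."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return s**2 * np.exp(-sq / (2 * ell**2))

def symmetric_kernel(Theta1, Theta2, s, ell, sigma):
    """kappa in (5.1) for 2r-dimensional inputs theta = (U_i, U_j).

    Averaging kappa-tilde over the argument swap makes every sample path
    satisfy W_ij = W_ji; the sigma^2 term is added on the diagonal only.
    """
    r = Theta1.shape[1] // 2
    Theta2_swap = np.hstack([Theta2[:, r:], Theta2[:, :r]])
    K = 0.5 * (rbf(Theta1, Theta2, s, ell) + rbf(Theta1, Theta2_swap, s, ell))
    if Theta1 is Theta2:  # crude identity check standing in for "same inputs"
        K = K + sigma**2 * np.eye(len(Theta1))
    return K
```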
5.2 Sampling without approximating the model
In the simpler case of a real-valued array X, we construct an MCMC algorithm over the variables (U, ω, σ_X) by repeatedly slice sampling [16] from the conditional distributions

    ω_i | ω_{−i}, σ_X, U, X,    σ_X | ω, U, X,    and    U_j | U_{−j}, ω, σ_X, X,   (5.3)

where σ_X is the noise variance parameter used when modelling real-valued data, introduced in Section 3. Let N = |U_{{i}}| denote the number of rows in the observed array, let Λ be the set of all pairs (U_i, U_j) for all observed relations X_{ij}, let O = |Λ| denote the number of observed relations, and let K be the O × O kernel matrix between all points in Λ. Changes to ω affect every entry of the kernel matrix K and so, naively, the computation of the Gaussian likelihood of X takes O(O³) time. The cubic dependence on O seems unavoidable, and thus this naive algorithm is unusable for all but small data sets.

5.3 A random subset-of-regressors approximation
To scale the method to larger graphs, we apply a variation of a method known as Subset-of-Regressors (SoR) [22, 23, 25]. (See [18] for an excellent survey of this and other sparse approximations.) The SoR approximation replaces the infinite-dimensional GP with a finite-dimensional approximation. Our approach is to treat both the inputs and outputs of the GP as latent variables. In particular, we introduce k Gaussian distributed pseudoinputs Ψ = (ψ_1, ..., ψ_k) and define target values T_j = Θ(ψ_j). Writing K_{ΨΨ} for the kernel matrix formed from the pseudoinputs Ψ, we have

    (ψ_i) ~iid N(0, I_{2r})  and  T | Ψ ~ N(0, K_{ΨΨ}).   (5.4)

The idea of the SoR approximation is to replace W_{ij} with the posterior mean conditioned on (Ψ, T),

    W = K_{ΛΨ} K_{ΨΨ}^{−1} T,   (5.5)

where K_{ΛΨ} is the kernel matrix between the latent embeddings Λ and the pseudoinputs Ψ. By considering random pseudoinputs, we construct an MCMC analogue of the techniques proposed in [24]. The conditional distribution T | U, Ψ, ω, (σ_X), X is amenable to elliptical slice sampling [15]. All other random parameters, including the (U_i), can again be sampled from their full conditional distributions using slice sampling. The sampling algorithms require that one compute expressions involving (5.5); as a result, they cost at most O(k³O) time.
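The SoR prediction (5.5) itself is a few lines of linear algebra. The sketch below passes the kernel in as a two-argument function (e.g., a partial application of the symmetric_kernel sketch above); the flattening of the pair set Λ and the jitter term are implementation details of ours.

```python
import numpy as np

def sor_W(U, Psi, T, kernel):
    """Subset-of-regressors estimate (5.5): W = K_{LamPsi} K_{PsiPsi}^{-1} T.

    U: (N, r) latent embeddings; Psi: (k, 2r) pseudoinputs; T: (k,) targets.
    Costs O(k^3 + k N^2), in contrast to the O(O^3) exact computation with
    O = N^2 observed relations.
    """
    N = len(U)
    # Lambda: all pair embeddings (U_i, U_j), flattened row-major.
    Lam = np.hstack([np.repeat(U, N, axis=0), np.tile(U, (N, 1))])
    K_lp = kernel(Lam, Psi)                          # (N^2, k)
    K_pp = kernel(Psi, Psi) + 1e-8 * np.eye(len(Psi))
    return (K_lp @ np.linalg.solve(K_pp, T)).reshape(N, N)
```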
Figure 2: Protein interactome data. Left: Interactome network. Middle: Sorted adjacency matrix. The network exhibits stochastic equivalence (visible as block structure in the matrix) and homophily (concentration of points around the diagonal). Right: Maximum a posteriori estimate of the function Θ, corresponding to the function in Fig. 1 (middle).

We use standard normal priors on the latent variables U and the pseudoinputs η, and log-normal priors for kernel parameters. Slice sampling parameters are chosen to favor acceptance after a reasonable number of iterations, as evaluated over a range of data sets, and are summarized in the tables below. Balancing computational demands, we sampled T 50 times per iteration, whilst all other variables were sampled once per iteration.

Model                 | Method              | Iterations [burn-in] | Algorithm parameters
PMF [21]              | stochastic gradient | 1000                 | author defaults
Eigenmodel [4]        | MCMC                | 10000 [250]          | author defaults
GPLVM [12]            | stochastic gradient | 20 sweeps            | author defaults
Random function model | MCMC                | 1000 [200]           | (see below)

             | log mean | std | width
length scale | 1        | 0.5 | 0.5
scale factor | 2        | 0.5 | 0.5
target noise | 0.1      | 0.5 | 0.1
U            | -        | -   | 4
η            | -        | -   | 2

We performed 5-fold cross-validation, predicting links in a held-out partition given the 4 others. Where the models did not restrict their outputs to values between 0 and 1, we truncated any predictions lying outside this range. The following table reports average AUC (area under the receiver operating characteristic) for the various models. Significance of results is evaluated by means of a t-test with a p-value of 0.05.

AUC results |   High school     |      NIPS         |     Protein
Latent dims | 1     2     3     | 1     2     3     | 1     2     3
PMF         | 0.747 0.792 0.792 | 0.729 0.789 0.820 | 0.787 0.810 0.841
Eigenmodel  | 0.742 0.806 0.806 | 0.789 0.818 0.845 | 0.805 0.866 0.882
GPLVM       | 0.744 0.775 0.782 | 0.888 0.876 0.883 | 0.877 0.883 0.873
RFM         | 0.815 0.827 0.820 | 0.907 0.914 0.919 | 0.903 0.910 0.912

The random function model outperforms the other models in all tests. We also note that in all experiments, a single latent dimension suffices to achieve better performance, even when the other models use additional latent dimensions. The posterior distribution of Θ favors functions defining random array distributions that explain the data well. In this sense, our model fits a probability distribution. The standard inference methods for GPLVM and PMF applied to relational data, in contrast, are designed to fit mean squared error, and should therefore be expected to show stronger performance under a mean squared error metric. As the following table shows, this is indeed the case.

RMSE results |   High school     |      NIPS         |     Protein
Latent dims  | 1     2     3     | 1     2     3     | 1     2     3
PMF          | 0.245 0.242 0.240 | 0.141 0.135 0.130 | 0.151 0.142 0.139
Eigenmodel   | 0.244 0.238 0.236 | 0.141 0.132 0.124 | 0.149 0.142 0.138
GPLVM        | 0.244 0.241 0.239 | 0.112 0.109 0.106 | 0.139 0.137 0.138
RFM          | 0.239 0.234 0.235 | 0.114 0.111 0.110 | 0.138 0.136 0.136

An arguably more suitable metric is comparison in terms of conditional edge probability, i.e., P(X_{ij} | W_{ij}) for all i, j in the held-out data. These cannot, however, be computed in a meaningful manner for models such as PMF and GPLVM, which assign a Gaussian likelihood to data.
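As a concrete reading of the held-out evaluation protocol above, here is a small sketch that truncates predictions to [0, 1] and computes AUC and RMSE; it is an illustrative reimplementation (ties in the AUC ranking are broken arbitrarily), not the evaluation code used in the experiments.

    import numpy as np

    def evaluate_holdout(y_true, scores):
        # y_true: 0/1 array of held-out links; scores: raw model predictions.
        p = np.clip(scores, 0.0, 1.0)      # truncate, as described in the text
        rmse = np.sqrt(np.mean((y_true - p) ** 2))
        # AUC via the Mann-Whitney rank-sum identity.
        ranks = np.empty(len(p))
        ranks[np.argsort(p)] = np.arange(1, len(p) + 1)
        n_pos = y_true.sum()
        n_neg = len(y_true) - n_pos
        auc = (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
        return auc, rmse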
The next table hence reports only comparisons to the eigenmodel.

Negative log conditional edge probability3

            | High school | NIPS       | Protein
Latent dims | 1   2   3   | 1   2   3  | 1   2   3
Eigenmodel  | 220 210 200 | 88  81  75 | 96  92  86
RFM         | 205 199 201 | 65  57  56 | 78  75  75

Remark 6.1 (Model complexity and lengthscales). Figure 2 provides a visualisation of Θ when modeling the protein interactome data using 1 latent dimension. The likelihood of the smooth peak is sensitive to the lengthscale of the Gaussian process representation of Θ. A Gaussian process prior introduces the assumption that Θ is continuous. Continuous functions are dense in the space of measurable functions, i.e., any measurable function can be arbitrarily well approximated by a continuous one. The assumption of continuity is therefore not restrictive; rather, the lengthscale of the Gaussian process determines the complexity of the model a priori. The nonparametric prior placed on Θ allows the posterior to approximate any function if supported by the data, but by sampling the lengthscale we allow the model to quickly select an appropriate level of complexity.

7 Discussion and conclusions

There has been a tremendous amount of research into modelling matrices, arrays, graphs and relational data, but nonparametric Bayesian modeling of such data is essentially uncharted territory. In most modelling circumstances, the assumption of exchangeability amongst data objects is natural and fundamental to the model. In this case, the representation results [2, 6, 7] precisely map out the scope of possible Bayesian models for exchangeable arrays: any such model can be interpreted as a prior on random measurable functions on a suitable space. Nonparametric Bayesian statistics provides a number of possible priors on random functions, but the Gaussian process and its modifications are the only well-studied model for almost surely continuous functions. For this choice of prior, our work provides a general and simple modeling approach that can be motivated directly by the relevant representation results. The model yields interpretable representations for networks, such as a visualisation of a protein interactome, and has competitive predictive performance on benchmark data.

Acknowledgments

The authors would like to thank David Duvenaud, David Knowles and Konstantina Palla for helpful discussions. PO was supported by an EPSRC Mathematical Sciences Postdoctoral Research Fellowship (EP/I026827/1). ZG is supported by EPSRC grant EP/I036575/1. DMR is supported by a Newton International Fellowship and Emmanuel College.

3 The precise calculation implemented is −log(P(X_{ij} | W_{ij})) × 1000 / (number of held-out edges).

References

[1] Airoldi, E. M., Blei, D. M., Fienberg, S. E., and Xing, E. P. (2008). Mixed Membership Stochastic Blockmodels. Journal of Machine Learning Research (JMLR), 9, 1981–2014.
[2] Aldous, D. J. (1981). Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11(4), 581–598.
[3] Aldous, D. J. (2010). More uses of exchangeability: Representations of complex random structures. In Probability and Mathematical Genetics: Papers in Honour of Sir John Kingman.
[4] Hoff, P. D. (2007). Modeling homophily and stochastic equivalence in symmetric relational data. In Advances in Neural Information Processing Systems (NIPS), volume 20, pages 657–664.
[5] Hoff, P. D., Raftery, A. E., and Handcock, M. S. (2002). Latent Space Approaches to Social Network Analysis. Journal of the American Statistical Association, 97(460), 1090–1098.
[6] Hoover, D. N. (1979).
Relations on probability spaces and arrays of random variables. Technical report, Institute for Advanced Study, Princeton.
[7] Kallenberg, O. (1992). Symmetries on random arrays and set-indexed processes. Journal of Theoretical Probability, 5(4), 727–765.
[8] Kallenberg, O. (1999). Multivariate Sampling and the Estimation Problem for Exchangeable Arrays. Journal of Theoretical Probability, 12(3), 859–883.
[9] Kallenberg, O. (2005). Probabilistic Symmetries and Invariance Principles. Springer.
[10] Kemp, C., Tenenbaum, J., Griffiths, T., Yamada, T., and Ueda, N. (2006). Learning systems of concepts with an infinite relational model. In Proceedings of the National Conference on Artificial Intelligence, volume 21.
[11] Lawrence, N. D. (2005). Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research (JMLR), 6, 1783–1816.
[12] Lawrence, N. D. and Urtasun, R. (2009). Non-linear matrix factorization with Gaussian processes. In Proceedings of the International Conference on Machine Learning (ICML), pages 1–8. ACM Press.
[13] Lovász, L. and Szegedy, B. (2006). Limits of dense graph sequences. Journal of Combinatorial Theory, Series B, 96, 933–957.
[14] Miller, K. T., Griffiths, T. L., and Jordan, M. I. (2009). Nonparametric latent feature models for link prediction. Advances in Neural Information Processing Systems (NIPS), pages 1276–1284.
[15] Murray, I., Adams, R. P., and MacKay, D. J. C. (2010). Elliptical slice sampling. Journal of Machine Learning Research (JMLR), 9, 541–548.
[16] Neal, R. M. (2003). Slice sampling. The Annals of Statistics, 31(3), 705–767. With discussions and a rejoinder by the author.
[17] Palla, K., Knowles, D. A., and Ghahramani, Z. (2012). An Infinite Latent Attribute Model for Network Data. In Proceedings of the International Conference on Machine Learning (ICML).
[18] Quiñonero-Candela, J. and Rasmussen, C. E. (2005). A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research (JMLR), 6, 1939–1959.
[19] Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. MIT Press.
[20] Roy, D. M. and Teh, Y. W. (2009). The Mondrian process. In Advances in Neural Information Processing Systems (NIPS).
[21] Salakhutdinov, R. (2008). Probabilistic Matrix Factorisation. In Advances in Neural Information Processing Systems (NIPS).
[22] Silverman, B. W. (1985). Some aspects of the spline smoothing approach to non-parametric regression curve fitting. Journal of the Royal Statistical Society, Series B (Methodological), 47(1), 1–52.
[23] Smola, A. J. and Bartlett, P. (2001). Sparse greedy Gaussian process regression. In Advances in Neural Information Processing Systems (NIPS). MIT Press.
[24] Titsias, M. K. and Lawrence, N. D. (2008). Efficient sampling for Gaussian process inference using control variables. In Advances in Neural Information Processing Systems (NIPS), pages 1681–1688.
[25] Wahba, G., Lin, X., Gao, F., Xiang, D., Klein, R., and Klein, B. (1999). The bias-variance tradeoff and the randomized GACV. In Advances in Neural Information Processing Systems (NIPS).
[26] Wang, Y. J. and Wong, G. Y. (1987). Stochastic Blockmodels for Directed Graphs. Journal of the American Statistical Association, 82(397), 8–19.
[27] Xu, Z., Yan, F., and Qi, Y. (2012). Infinite Tucker Decomposition: Nonparametric Bayesian Models for Multiway Data Analysis. In Proceedings of the International Conference on Machine Learning (ICML).
[28] Yan, F., Xu, Z., and Qi, Y. (2011). Sparse matrix-variate Gaussian process blockmodels for network modeling. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence (UAI).
Scalable Inference of Overlapping Communities

Prem Gopalan, David Mimno, Sean M. Gerrish, Michael J. Freedman, David M. Blei
{pgopalan,mimno,sgerrish,mfreed,blei}@cs.princeton.edu
Department of Computer Science, Princeton University, Princeton, NJ 08540

Abstract

We develop a scalable algorithm for posterior inference of overlapping communities in large networks. Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). It naturally interleaves subsampling the network with estimating its community structure. We apply our algorithm on ten large, real-world networks with up to 60,000 nodes. It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms.

1 Introduction

A central problem in network analysis is to identify communities, groups of related nodes with dense internal connections and few external connections [1, 2, 3]. Classical methods for community detection assume that each node participates in a single community [4, 5, 6]. This assumption is limiting, especially in large real-world networks. For example, a member of a large social network might belong to overlapping communities of co-workers, neighbors, and school friends. To address this problem, researchers have developed several methods for detecting overlapping communities in observed networks. These methods include algorithmic approaches [7, 8] and probabilistic models [2, 3, 9, 10].

In this paper, we focus on the mixed-membership stochastic blockmodel (MMSB) [2], a probabilistic model that allows each node of a network to exhibit a mixture of communities. The MMSB casts community detection as posterior inference: given an observed network, we estimate the posterior community memberships of its nodes. The MMSB can capture complex community structure and has been adapted in several ways [11, 12]; however, its applications have been limited because its corresponding inference algorithms have not scaled to large networks [2].

In this work, we develop algorithms for the MMSB that scale, allowing us to study networks that were previously out of reach for this model. For example, we analyzed social networks with as many as 60,000 nodes. With our method, we can use the MMSB to analyze large networks, finding approximate posteriors in minutes with networks for which the original algorithm takes hours. When compared to other scalable methods for overlapping community detection, we found that the MMSB gives better predictions of new connections and more closely recovers ground-truth communities. Further, we can now use the MMSB to compute descriptive statistics at scale, such as which nodes bridge communities.

The original MMSB algorithm optimizes the variational objective by coordinate ascent, processing every pair of nodes in each iteration [2]. This algorithm is inefficient, and it quickly becomes intractable for large networks. In this paper, we develop stochastic optimization algorithms [13, 14] to fit the variational distribution, where we obtain noisy estimates of the gradient by subsampling the network.

[Figure 1(a): rendering of the co-authorship network, with labeled author nodes including BARABASI, NEWMAN, KLEINBERG, and JEONG; panels (a) and (b) are described in the caption below.]
Figure 1: Figure 1(a) shows communities (see §2) discovered in a co-authorship network of 1,600 researchers [16] by an a-MMSB model with 50 communities. The color of author nodes indicates their most likely posterior community membership. The size of nodes indicates bridgeness [17], a measure of participation in multiple communities. Figure 1(b) shows a graphical model of the a-MMSB. The prior over the multinomial π is a symmetric Dirichlet distribution. Priors over the Bernoulli β are Beta distributions.

Our algorithm alternates between subsampling from the network and adjusting its estimate of the underlying communities. While this strategy has been used in topic modeling [15], the MMSB introduces new challenges because the Markov blanket of each node is much larger than that of a document. A simple sampler usually selects unconnected nodes (because real-world networks are sparse). We develop better sampling methods that focus more on the informative data in the network, e.g., the observed links, and thus make inference even faster.

2 Modeling overlapping communities

In this section, we introduce the assortative mixed-membership stochastic blockmodel (a-MMSB), a statistical model of networks that models nodes participating in multiple communities. The a-MMSB is a subclass of the mixed-membership stochastic blockmodel (MMSB) [2].1

Let y denote the observed links of an undirected network, where y_ab = 1 if nodes a and b are linked and 0 otherwise. Let K denote the number of communities. Each node a is associated with community memberships π_a, a distribution over communities; each community k is associated with a community strength β_k ∈ (0, 1), which captures how tightly its members are linked. The probability that two nodes are linked is governed by the similarity of their community memberships and the strength of their shared communities. We capture these assumptions in the following generative process of a network.

1. For each community k, draw the community strength β_k ∼ Beta(η).
2. For each node a, draw the community memberships π_a ∼ Dirichlet(α).
3. For each pair of nodes a and b,
   (a) draw the interaction indicator z_{a→b} ∼ π_a,
   (b) draw the interaction indicator z_{a←b} ∼ π_b,
   (c) draw the link y_ab ∼ Bernoulli(r), where

   r = β_k if z_{a→b,k} = z_{a←b,k} = 1,  and  r = ε if z_{a→b} ≠ z_{a←b}.   (1)

1 We use a subclass of the MMSB models that is appropriate for community detection in undirected networks. In particular, we assume assortativity, i.e., that links imply that nodes are similar. We call this special case the assortative MMSB, or a-MMSB. In §2 we argue why the a-MMSB is more appropriate for community detection than the MMSB. We note that our algorithms are immediately applicable to the MMSB as well.

Figure 1(b) represents the corresponding joint distribution of hidden and observed variables. The a-MMSB defines a single parameter ε to govern inter-community links. This captures assortativity: if two nodes are linked, it is likely that the latent community indicators were the same. The full MMSB differs from the a-MMSB in that the former uses one parameter for each of the K^2 ordered pairs of communities. When the full MMSB is applied to undirected networks, two hypotheses compete to explain a link between each pair of nodes: either both nodes exhibit the same community, or they are in different communities that link to each other.
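A small simulation of the generative process above, for intuition; the parameter values are illustrative and are not those used in the paper's experiments.

    import numpy as np

    def sample_ammsb(N=50, K=4, alpha=0.1, eta=(5.0, 1.0), eps=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        beta = rng.beta(eta[0], eta[1], size=K)          # community strengths
        pi = rng.dirichlet(alpha * np.ones(K), size=N)   # memberships, N x K
        y = np.zeros((N, N), dtype=int)
        for a in range(N):
            for b in range(a + 1, N):
                z_ab = rng.choice(K, p=pi[a])            # z_{a->b}
                z_ba = rng.choice(K, p=pi[b])            # z_{a<-b}
                r = beta[z_ab] if z_ab == z_ba else eps  # Eq. (1)
                y[a, b] = y[b, a] = int(rng.random() < r)
        return y, pi, beta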
We analyze data with the a-MMSB via the posterior distribution over the latent variables, p(π_{1:N}, z, β_{1:K} | y, α, η). The posterior lets us form a predictive distribution of unseen links and measure latent network properties of the observed data. The posterior over π_{1:N} represents the community memberships of the nodes, and the posterior over the interaction indicator variables z identifies link communities in the network [8]. For example, in a social network one member's link to another might arise because they are from the same high school, while another might arise because they are co-workers. With an estimate of this latent structure, we can characterize the network in interesting ways. In Figure 1(a), we sized author nodes according to their expected posterior bridgeness [17], a measure of participation in multiple communities (see §5).

3 Stochastic variational inference

Our goal is to compute the posterior distribution p(π_{1:N}, z, β_{1:K} | y, α, η). Exact inference is intractable, so we use variational inference [18]. Traditional variational inference is a coordinate ascent algorithm. In the context of the MMSB (and the a-MMSB), coordinate ascent iterates between analyzing all O(N^2) node pairs and updating the community memberships of the N nodes [2]. In this section, we will derive a stochastic variational inference algorithm. Our algorithm iterates between sampling random pairs of nodes and updating node memberships. This avoids the per-iteration O(N^2) computation and allows us to scale to large networks.

3.1 Variational inference in the a-MMSB

In variational inference, we define a family of distributions over the hidden variables q(π, β, z) and find the member of that family that is closest to the true posterior. (Closeness is measured with KL divergence.) We use the mean-field family, under which each variable is endowed with its own distribution and its own variational parameter. This allows us to tractably optimize the parameters to find a local minimum of the KL divergence. For the a-MMSB, the variational distributions are

q(z_{a→b} = k) = φ_{a→b,k};   q(π_a) = Dirichlet(π_a; γ_a);   q(β_k) = Beta(β_k; λ_k).   (2)

The posterior over link community assignments z is parameterized by the per-interaction memberships φ, the node community distributions π by the community memberships γ, and the link probabilities β by the community strengths λ. Notice that λ is of dimension K × 2, and γ is of dimension N × K.

Minimizing the KL divergence between q and the true posterior is equivalent to optimizing an evidence lower bound (ELBO) L, a bound on the log likelihood of the observations. We obtain this bound by applying Jensen's inequality [18] to the data likelihood. The ELBO is

log p(y | α, η) ≥ L(y, φ, γ, λ) ≜ E_q[log p(y, π, z, β | α, η)] − E_q[log q(π, β, z)].   (3)

The right side of Eq. 3 factorizes to

L = Σ_k E_q[log p(β_k | η_k)] − Σ_k E_q[log q(β_k | λ_k)] + Σ_n E_q[log p(π_n | α)] − Σ_n E_q[log q(π_n | γ_n)]
  + Σ_{a,b} ( E_q[log p(z_{a→b} | π_a)] + E_q[log p(z_{a←b} | π_b)] )
  − Σ_{a,b} ( E_q[log q(z_{a→b} | φ_{a→b})] + E_q[log q(z_{a←b} | φ_{a←b})] )
  + Σ_{a,b} E_q[log p(y_ab | z_{a→b}, z_{a←b}, β)].   (4)

Notice the first line in Eq. 4 contains summations over communities and nodes; we call these global terms. They relate to the global variables, which are the community strengths β and per-node memberships π. The remaining lines contain summations over all node pairs, which we call local terms. They depend on both the global and local variables, the latter being the per-interaction memberships φ. This distinction is important in the stochastic optimization algorithm.
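The local and global steps below repeatedly need E_q[log π_{a,k}], E_q[log β_k], and E_q[log(1 − β_k)] under the variational distributions in Eq. (2); for Dirichlet and Beta factors these are standard digamma identities. A sketch follows; the convention that the first column of λ is the "link" parameter is an assumption of this illustration.

    import numpy as np
    from scipy.special import digamma

    def expected_log_pi(gamma):
        # gamma: N x K Dirichlet parameters.
        # E_q[log pi_{a,k}] = digamma(gamma_{a,k}) - digamma(sum_k gamma_{a,k}).
        return digamma(gamma) - digamma(gamma.sum(axis=1, keepdims=True))

    def expected_log_beta(lam):
        # lam: K x 2 Beta parameters; column 0 taken as the "link" parameter.
        total = digamma(lam.sum(axis=1))
        # Returns (E_q[log beta_k], E_q[log(1 - beta_k)]).
        return digamma(lam[:, 0]) - total, digamma(lam[:, 1]) - total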
3.2 Stochastic optimization

Our goal is to develop a variational inference algorithm that scales to large networks. We will use stochastic variational inference [14], which optimizes the ELBO with respect to the global variational parameters using stochastic gradient ascent. Stochastic gradient algorithms follow noisy estimates of the gradient with a decreasing step-size. If the expectation of the noisy gradient is equal to the gradient and if the step-size decreases according to a certain schedule, then we are guaranteed convergence to a local optimum [13]. Subsampling the data to form noisy gradients scales inference, as we avoid the expensive all-pairs sums in Eq. 4.

Stochastic variational inference is a coordinate ascent algorithm that iteratively updates local and global parameters. For each iteration, we first subsample the network and compute optimal local parameters for the sample, given the current settings of the global parameters. We then update the global parameters using a stochastic natural gradient2 computed from the subsampled data and local parameters. We call the first phase the local step and the second phase the global step [14].

The selection of subsamples in each iteration provides a way to plug in a variety of network subsampling algorithms. However, to maintain a correct stochastic optimization algorithm for the variational objective, the subsampling method must be valid. That is, the natural gradients estimated from the subsample must be unbiased estimates of the true gradients.

The global step. The global step updates the global community strengths λ and community memberships γ with a stochastic gradient of the ELBO in Eq. 4. Eq. 4 contains summations over all O(N^2) node pairs. Now consider drawing a node pair (a, b) at random from a population distribution g(a, b) over the M = N(N − 1)/2 node pairs. We can rewrite the ELBO as a random function of the variational parameters that includes the global terms and the local terms associated only with (a, b). The expectation of this random function is equal in objective to Eq. 4. For example, the fourth term in Eq. 4 is rewritten as

Σ_{a,b} E_q[log p(y_ab | z_{a→b}, z_{a←b}, β)] = E_g[ (1/g(a,b)) E_q[log p(y_ab | z_{a→b}, z_{a←b}, β)] ].   (5)

Evaluating the rewritten Eq. 4 for a node pair sampled from g gives a noisy but unbiased estimate of the ELBO. Following [15], the stochastic natural gradients computed from a sample pair (a, b) are

∂γ^t_{a,k} = α_k + (1/g(a,b)) φ^t_{a→b,k} − γ^{t−1}_{a,k},   (6)
∂λ^t_{k,i} = η_i + (1/g(a,b)) φ^t_{a→b,k} φ^t_{a←b,k} y_{ab,i} − λ^{t−1}_{k,i},   (7)

where y_{ab,0} = y_ab and y_{ab,1} = 1 − y_ab. In practice, we sample a "mini-batch" S of pairs per update to reduce noise. The intuition behind the above update is analogous to Online LDA [15]. When a single pair (a, b) is sampled, we are computing the setting of the global parameters that would be optimal (given φ^t) if our entire network were a multigraph consisting of the interaction between a and b repeated 1/g(a, b) times.

Our algorithm has so far assumed that the subset of node pairs S is sampled independently. We can relax this assumption by defining a distribution over predefined sets of links. These sets can be defined using prior information about the pairs, such as network topology, which lets us take advantage of more sophisticated sampling strategies. For example, we can define a set for each node, with each set consisting of the node's adjacent links or non-links. Each iteration we set S to one of these sets sampled at random from the N sets.

In order to ensure that set-based sampling results in unbiased gradients, we specify two constraints on sets. First, we assume that the union of these sets is the total set of all node pairs, U = ∪_i s_i. Second, we assume that every pair (a, b) occurs in some constant number of sets c, with c ≥ 1. With these conditions satisfied, we can again rewrite Eq. 4 as the sum over its global terms and an expectation over the local terms. Let h(t) be a distribution over the predefined sets of node pairs. For example, the fourth term in Eq. 4 can be rewritten using

Σ_{a,b} E_q[log p(y_ab | z_{a→b}, z_{a←b}, β)] = E_h[ (1/(c h(t))) Σ_{(a,b)∈s_t} E_q[log p(y_ab | z_{a→b}, z_{a←b}, β)] ].   (8)

2 Stochastic variational inference uses natural gradients [19] of the ELBO. Computing natural gradients (along with subsampling) leads to scalable variational inference algorithms [14].
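Putting Eqs. (6), (7), and the step-size update of Eq. (9) below together, the following is a sketch of the global step in code. The data layout (a dict of per-pair φ values and a callable g) and the 1/(|S| g(a, b)) mini-batch reweighting, which keeps the gradient unbiased, are choices of this illustration.

    import numpy as np

    def global_step(gamma, lam, minibatch, phi, g_prob, alpha, eta, rho_t):
        # minibatch: list of (a, b, y_ab); phi[(a, b)] = (phi_ab, phi_ba), length-K each.
        # g_prob(a, b): sampling probability of the pair; rho_t: current step size.
        grad_gamma = alpha - gamma              # prior part of Eq. (6)
        grad_lam = eta - lam                    # prior part of Eq. (7)
        S = len(minibatch)
        for a, b, y in minibatch:
            phi_ab, phi_ba = phi[(a, b)]
            w = 1.0 / (S * g_prob(a, b))        # inverse-probability reweighting
            grad_gamma[a] += w * phi_ab
            grad_gamma[b] += w * phi_ba
            joint = w * phi_ab * phi_ba
            grad_lam[:, 0] += joint * y         # y_{ab,0} = y_ab
            grad_lam[:, 1] += joint * (1 - y)   # y_{ab,1} = 1 - y_ab
        return gamma + rho_t * grad_gamma, lam + rho_t * grad_lam   # Eq. (9)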
In order to ensure that set-based sampling results in unbiased gradients, we specify two constraints on sets. First, we assume that the union of these sets s is the total set of all node pairs, U : U = ?i si . Second, we assume that every pair (a, b) occurs in some constant number of sets c and that c ? 1. With these conditions satisfied, we can again rewrite Eq. 4 as the sum over its global terms and an expectation over the local terms. Let h(t) be a distribution over predefined sets of node pairs. For example, the fourth term in Eq. 4 can be rewritten using P P 1 1 (8) (a,b)?st Eq [log p(yab |za?b , za?b , ?)]] a,b Eq [log p(yab |za?b , za?b , ?)] = Eh [ c h(t) 2 Stochastic variational inference uses natural gradients [19] of the ELBO. Computing natural gradients (along with subsampling) leads to scalable variational inference algorithms [14]. 4 Algorithm 1 Stochastic a-MMSB K 1: Initialize ? = (?n )N n=1 , ? = (?k )k=1 randomly. 2: while convergence criteria is not met do 3: Sample a subset S of node pairs. 4: L-step: Optimize (?a?b , ?a?b ) ?(a, b) ? S 5: Compute the natural gradients ?? tn ?n, ??tk ?k 6: G-step: Update (?, ?) using Eq. 9. 7: Set ?t = (?0 + t)?? ; t ? t + 1. 8: end while The natural gradient of the random functions in Eq. 5 and Eq. 8 with respect to the global variational parameters (?, ?) is a noisy but unbiased estimate of the natural gradient of the ELBO in Eq. 4. However we subsample, the global step follows the noisy gradient with an appropriate step-size, ? ? ? + ?t ?? t ; ? ? ? + ?t ??t . (9) P 2 P We require that t ?t < ? and t ?t = ? for convergence to a local optimum [13]. We set ?t , (?0 + t)?? , where ? ? (0.5, 1] is the learning rate and ?0 ? 0 downweights early iterations. The local step. The local step optimizes the interaction parameters ? with respect to a subsample of the network. Recall that there is a per-interaction membership parameter for each node pair? ?a?b and ?a?b ?representing the posterior approximation of which communities are active in determining whether there is a link. We optimize these parameters in parallel. The update for ?a?b given ya,b is t?1 Eq [log(1 ? ?k )] ?ta?b,k |y = 0 ? exp{Eq [log ?a,k ] + ?a?b,k t?1 ?ta?b,k |y = 1 ? exp{Eq [log ?a,k ] + ?a?b,k Eq [log ?k ] + (1 ? ?t?1 a?b,k ) log . (10) The updates for ?a?b are symmetric. This is natural gradient ascent with a step-size of one. We present the full Stochastic a-MMSB algorithm in Algorithm 1. Each iteration subsamples the network and computes the local and global updates. We have derived this algorithm with node pairs sampled from arbitrary population distributions g(a, b) or h(t). One advantage of this approach is that we can explore various subsampling techniques without compromising the correctness of Algorithm 1. We will discuss and study sampling methods in ?3.3 and ?5. First, however, we discuss convergence and complexity. Held-out sets and convergence criteria. We stop training on a network (the training set) when the average change in expected log likelihood on held-out data (the validation set) is less than 0.001%. The test and validation sets used in ?5 have equal parts links and non-links, selected randomly from the network. A 50% links validation set poorly represents the severe class imbalance between links and non-links in real-world networks. However, a validation set matching the network sparsity would have too few links. 
We present the full Stochastic a-MMSB algorithm in Algorithm 1. Each iteration subsamples the network and computes the local and global updates. We have derived this algorithm with node pairs sampled from arbitrary population distributions g(a, b) or h(t). One advantage of this approach is that we can explore various subsampling techniques without compromising the correctness of Algorithm 1. We will discuss and study sampling methods in §3.3 and §5. First, however, we discuss convergence and complexity.

Held-out sets and convergence criteria. We stop training on a network (the training set) when the average change in expected log likelihood on held-out data (the validation set) is less than 0.001%. The test and validation sets used in §5 have equal parts links and non-links, selected randomly from the network. A 50%-links validation set poorly represents the severe class imbalance between links and non-links in real-world networks; however, a validation set matching the network sparsity would have too few links. Therefore, we compute the validation log likelihood at network sparsity by reweighting the average link and non-link log likelihood (estimated from the 50%-links validation set) by their respective proportions in the network. We use a separate validation set to choose learning parameters and to study sensitivity to K.

Per-iteration complexity. Our L-step can be computed in O(nK), where n is the number of node pairs sampled in each iteration. This is unlike the MMSB, where the φ updates incur a cost quadratic in K. Step 6 requires that all nodes be updated in each iteration. The time for a G-step in Algorithm 1 is O(NK), and the total memory required is O(NK).

3.3 Sampling strategies

Our algorithm allows us flexibility around how the subset of pairs is sampled, as long as the expectation of the stochastic gradient is equal to the true gradient. There are several ways we can take advantage of this. We can sample based on informative pairs of nodes, i.e., ones that help us better assess the community structure. We can also subsample to make data processing easier, for example, to accommodate a stream of links. Finally, large, real-world networks are often sparse, with links accounting for less than 0.1% of all node pairs (see Figure 2). While we should not ignore non-links, it may help to give preferential attention to links. These intuitions are captured in the following four subsampling methods.

Random pair sampling. The simplest method is to sample node pairs uniformly at random. This method is an instance of independent pair sampling, with g(a, b) (used in Eq. 5) equal to 1/(N(N − 1)/2).

Random node sampling. This method focuses on local neighborhoods of the network. A set consists of all the pairs that involve one of the N nodes. At each iteration, we sample a set uniformly at random from the N sets, so h(t) (used in Eq. 8) is 1/N. Since each pair involves two nodes, each link appears in two sets, so c (also used in Eq. 8) is 2. By reweighting the terms corresponding to pairs in the sampled set, we maintain a correct stochastic optimization.

Stratified random pair sampling. This method samples links independently, but focuses on observed links. We divide the M node pairs into two strata: links and non-links. Each iteration either samples a mini-batch of links or samples a mini-batch of non-links. If the non-link stratum is sampled, and N_0 is the estimated total number of non-links, then

g(a, b) = 1/N_0 if y_ab = 0, and g(a, b) = 0 if y_ab = 1.   (11)

The population distribution when the link stratum is sampled is symmetric.

Stratified random node sampling. This method combines set-based sampling and stratified sampling to focus on observed links in local neighborhoods. For each node we define a "link set" consisting of all its links, and m "non-link sets" that partition its non-links. Since the number of non-links associated with each node is usually large, dividing them into many sets keeps the computation in each iteration fast. At each iteration, we first select a random node and then either select its link set or sample one of its m non-link sets, uniformly at random. To compute Eq. 8 we define the number of sets that contain each pair, c = 2, and the population distribution over sets

h(t) = 1/(2N) if t is a link set, and h(t) = 1/(2Nm) if t is a non-link set.   (12)

Stratified random node sampling gives the best gains in convergence speed (see §5).
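A sketch of stratified random node sampling, realizing h(t) in Eq. (12): pick a node uniformly, then take its link set with probability 1/2 or one of its m non-link sets otherwise. The returned h(t) feeds the 1/(c h(t)) gradient rescaling, with c = 2; the data layout is an assumption of this illustration.

    import numpy as np

    def sample_stratified_node(links, nonlink_sets, rng):
        # links[a]: list of nodes linked to a.
        # nonlink_sets[a]: m precomputed lists that partition a's non-links.
        N = len(links)
        m = len(nonlink_sets[0])
        a = rng.integers(N)                      # uniform node
        if rng.random() < 0.5:                   # the node's link set
            pairs = [(a, b, 1) for b in links[a]]
            return pairs, 1.0 / (2 * N)          # h(t) for a link set
        j = rng.integers(m)                      # one of the m non-link sets
        pairs = [(a, b, 0) for b in nonlink_sets[a][j]]
        return pairs, 1.0 / (2 * N * m)          # h(t) for a non-link set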
4 Related work

Newman et al. [3] described a model of overlapping communities in networks ("the Poisson model") where the number of links between two nodes is a Poisson random variable. Recently, other researchers have proposed latent feature network models [20, 21] that compute the probabilities of links based on the interactions between binary features associated with each node. Efficient inference algorithms for these models exploit model-specific approximations that allow scaling in the number of links. These ideas do not extend to the MMSB. Further, these algorithms do not explicitly leverage network sampling. In contrast, the ideas in Algorithm 1 apply to a number of models [14], and the algorithm subsamples both links and non-links in an inner loop for scalability.

Other scalable algorithms include Clique Percolation (CP) [7] and Link Clustering (LC) [8], which are based on heuristic clique-finding and hierarchical clustering, respectively. These methods are fast in practice, although the underlying problem is NP-complete. Further, because they are not statistical models, there is no clear mechanism for predicting new observations or for model checking. In the next section we will compare our method to these alternative scalable methods. Compared to the Poisson model, we will show that the MMSB gives better predictions. Compared to CP and LC, which do not provide predictions, we will show that the MMSB more reliably recovers the true community structure.

5 Empirical study

In this section, we evaluate the efficiency and accuracy of Stochastic a-MMSB (AM). First, we evaluate its efficiency on 10 real-world networks. Second, we demonstrate that stratified sampling significantly improves convergence speed on real networks. Third, we compare our algorithm with leading algorithms in terms of accuracy on benchmark graphs and ability to predict links.
? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 0 ? 10 stratified node node stratified pair pair 20 30 40 time (hours) Figure 3: Stochastic a-MMSB (with random pair sampling) scales better and finds communities as good as batch a-MMSB on real networks (Top). Stratified random node sampling is an order of magnitude faster than other sampling methods on the hep-ph, astro-ph and hep-th2 networks (Bottom). significantly improves convergence speed on real networks. Third, we compare our algorithm with leading algorithms in terms of accuracy on benchmark graphs and ability to predict links. We measure convergence by computing the link prediction accuracy on a validation set. We set aside a validation and a test set, each having 10% of the network links and an equal number of non-links (see ?3.2). We approximate the probability that a link exists between two nodes using posterior expectations of ? and ?. We then calculate perplexity, which is the exponential of the average predictive log likelihood of held-out node pairs. For random pair and stratified random pair sampling, we use a mini-batch size S = N/2 for graphs with N nodes. For the stratified random node sampling, we set the number of non-link sets m = 10. Based on experiments, we set the parameters ? = 0.5 and ?0 = 1024. We set hyperparameters ? = 1/K and {?1 , ?0 } proportional to the expected number of links and non-links in each community. We implemented all algorithms in C++. Comparing scalability to batch algorithms. AM is an order of magnitude faster than standard batch inference for a-MMSB [2]. Figure 2 shows the time to convergence for four networks3 of varying types, node sizes N and sparsity d. Figure 3 shows test perplexity for batch vs. stochastic inference. For many networks, AM learns rapidly during the early iterations, achieving 90% of the converged perplexity in less than 70% of the full convergence time. For all but the two smallest networks, batch inference did not converge within the allotted time. AM lets us efficiently fit a mixed-membership model to large networks. Comparing sampling methods. Figure 3 shows that stratified random node sampling converges an order of magnitude faster than random node sampling. It is statistically more efficient because the observations in each iteration include all the links of a node and a random sample of its non-links. 3 Following [1], we treat the directed citation network hep-th2 as an undirected network. 7 sparse training dense 0.6 0.5 0.4 0.3 0.2 0.1 0.0 NMI NMI 0.5 0.4 0.3 0.2 0.1 0.0 AM PM LC CP AM PM LC CP AM PM LC CP K=5 training test K=40 ?2 ?4 ?6 ?8 ?10 PM AM PM LC CP K=20 AM PM AM PM AM held?out log likelihood 10% noisy held?out log likelihood 0 noise K=5 K=20 test K=40 ?1 ?2 ?3 ?4 ?5 PM AM PM AM PM AM (a) (b) (c) (d) Figure 4: Figures (a) and (b) show that Stochastic a-MMSB (AM) outperforms the Poisson model (PM), Clique Percolation (CP), and Link Clustering (LC) in accurately recovering overlapping communities in 280 benchmark networks [28]. Each figure shows results on a binary partition of the 280 networks. Accuracy is measured using normalized mutual information (NMI) [28]; error bars denote the 95% confidence interval around the mean NMI. Figures (c) and (d) show that a-MMSB generalizes to new data better than PM on the netscience and us-air network, respectively. Each algorithm was run with 10 random initializations per K. Figure 3 also shows that stratified random pair sampling converges ?1x?2x faster than random pair sampling. Comparing accuracy to scalable algorithms. 
Comparing accuracy to scalable algorithms. AM can recover communities with equal or better accuracy than the best scalable algorithms: the Poisson model (PM) [3], Clique Percolation (CP) [7], and Link Clustering (LC) [8]. We measure the ability of the algorithms to recover overlapping communities in synthetic networks generated by the benchmark tool [28].4 Our synthetic networks reflect real-world networks by modeling noisy links and by varying community densities from sparse to dense. We evaluate using normalized mutual information (NMI) between the discovered communities and the ground-truth communities [28]. We ran PM and a-MMSB until the validation log likelihood changed by less than 0.001%. CP and LC are deterministic, but results vary between parameter settings. We report the best solution for each model.5

Figure 4 shows results for the 280 synthetic networks split in two ways. AM outperforms PM, LC, and CP on noisy networks and on networks with sparse communities, and it matches the best performance in the noiseless case and the dense case. CP performs best on networks with dense communities (they tend to have more k-cliques), but with a larger margin of error than AM.

Comparing predictive accuracy to PM. Stochastic a-MMSB also beats PM [3], the best scalable probabilistic model of overlapping communities, in predictive accuracy. On two networks, we evaluated both algorithms' ability to predict held-out links and non-links. We ran both PM and a-MMSB until their validation log likelihood changed by less than 0.001%. Figures 4(c) and 4(d) show training and testing likelihood. PM overfits, while the a-MMSB generalizes well.

Using the a-MMSB as an exploratory tool. AM opens the door to large-scale exploratory analysis of real-world networks. In addition to the co-authorship network in Figure 1(a), we analyzed the "cond-mat" collaboration network [26] with the number of communities set to 300. This network contains 40,421 scientists and 175,693 links. In the supplement, we visualize the top authors in the network by a measure of their participation in different communities (bridgeness [17]). Finding such bridging nodes in a network is an important task in disease prevention and marketing.

Acknowledgments

D.M. Blei is supported by ONR N00014-11-1-0651, NSF CAREER 0745520, AFOSR FA9550-09-1-0668, the Alfred P. Sloan foundation, and a grant from Google.

4 We generated 280 networks for combinations of these parameters: #nodes ∈ {400}; #communities ∈ {5, 10}; #nodes with at least 3 overlapping communities ∈ {100}; community sizes ∈ {equal, unequal}; when unequal, the community sizes are in the range [N/(2K), 2N/K]; average node degree ∈ {0.1 N/K, 0.15 N/K, ..., 0.35 N/K, 0.4 N/K}, with the maximum node degree set to 2x the average node degree; % links of a node that are noisy ∈ {0, 0.1}; random runs ∈ {1, ..., 5}.

5 CP finds a solution per clique size; LC finds a solution per threshold at which the dendrogram is cut [8], in steps of 0.1 from 0 to 1; PM and a-MMSB find a solution for each K ∈ {k_0, k_0 + 10}, where k_0 is the true number of communities (increasing by 10 allows a potentially larger number of communities to be detected); a-MMSB also finds a solution for each of the random pair and stratified random pair sampling methods, with the hyperparameters η set to the default or set to fit dense clusters.

References

[1] Santo Fortunato. Community detection in graphs. Physics Reports, 486(3-5):75-174, 2010.
[2] E. Airoldi, D. Blei, S. Fienberg, and E. Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9:1981-2014, 2008.
[3] Brian Ball, Brian Karrer, and M. E. J.
Newman. Efficient and principled method for detecting communities in networks. Physical Review E, 84(3):036103, 2011.
[4] M. E. J. Newman and M. Girvan. Finding and evaluating community structure in networks. Physical Review E, 69(2):026113, 2004.
[5] K. Nowicki and T. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077-1087, 2001.
[6] Peter J. Bickel and Aiyou Chen. A nonparametric view of network models and Newman-Girvan and other modularities. Proceedings of the National Academy of Sciences, 106(50):21068-21073, 2009.
[7] Imre Derényi, Gergely Palla, and Tamás Vicsek. Clique percolation in random networks. Physical Review Letters, 94(16):160202, 2005.
[8] Yong-Yeol Ahn, James P. Bagrow, and Sune Lehmann. Link communities reveal multiscale complexity in networks. Nature, 466(7307):761-764, 2010.
[9] M. E. J. Newman and E. A. Leicht. Mixture models and exploratory analysis in networks. Proceedings of the National Academy of Sciences, 104(23):9564-9569, 2007.
[10] A. Goldenberg, A. Zheng, S. Fienberg, and E. Airoldi. A survey of statistical network models. Foundations and Trends in Machine Learning, 2:129-233, 2010.
[11] W. Fu, L. Song, and E. Xing. Dynamic mixed membership blockmodel for evolving networks. In ICML, 2009.
[12] Qirong Ho, Ankur P. Parikh, and Eric P. Xing. A multiscale community blockmodel for network exploration. Journal of the American Statistical Association, 107(499):916-934, 2012.
[13] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400-407, 1951.
[14] M. Hoffman, D. Blei, C. Wang, and J. Paisley. Stochastic variational inference. arXiv:1206.7051, 2012.
[15] M. Hoffman, D. Blei, and F. Bach. Online learning for latent Dirichlet allocation. In NIPS, 2010.
[16] M. E. J. Newman. Finding community structure in networks using the eigenvectors of matrices. Physical Review E, 74(3):036104, 2006.
[17] Tamás Nepusz, Andrea Petróczi, László Négyessy, and Fülöp Bazsó. Fuzzy communities and the concept of bridgeness in complex networks. Physical Review E, 77(1):016107, 2008.
[18] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. Introduction to variational methods for graphical models. Machine Learning, 37:183-233, 1999.
[19] S. Amari. Differential geometry of curved exponential families: curvatures and information loss. The Annals of Statistics, 1982.
[20] M. Mørup, M. N. Schmidt, and L. K. Hansen. Infinite multiple membership relational modeling for complex networks. In IEEE MLSP, 2011.
[21] M. Kim and J. Leskovec. Modeling social networks with node attributes using the multiplicative attribute graph model. In UAI, 2011.
[22] RITA. U.S. Air Carrier Traffic Statistics, Bur. Trans. Stats, 2010.
[23] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graph evolution: Densification and shrinking diameters. ACM TKDD, 2007.
[24] J. Gehrke, P. Ginsparg, and J. M. Kleinberg. Overview of the 2003 KDD Cup. SIGKDD Explorations, 5:149-151, 2003.
[25] B. Klimt and Y. Yang. Introducing the Enron corpus. In CEAS, 2004.
[26] M. E. J. Newman. The structure of scientific collaboration networks. Proceedings of the National Academy of Sciences, 98(2):404-409, 2001.
[27] J. Leskovec, K. J. Lang, A. Dasgupta, and M. W. Mahoney. Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters. In Internet Mathematics, 2008.
[28] Andrea Lancichinetti and Santo Fortunato.
Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities. Physical Review E, 80(1):016118, 2009.
Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation

Dominique Guillot (Dept. of Statistics, Stanford University, Stanford, CA 94305), Bala Rajaratnam (Dept. of Statistics, Stanford University, Stanford, CA 94305), Benjamin T. Rolfs (ICME, Stanford University, Stanford, CA 94305), Arian Maleki (Dept. of ECE, Rice University, Houston, TX 77005), Ian Wong (Dept. of EE and Statistics, Stanford University, Stanford, CA 94305)
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract
The $\ell_1$-regularized maximum likelihood estimation problem has recently become a topic of great interest within the machine learning, statistics, and optimization communities as a method for producing sparse inverse covariance estimators. In this paper, a proximal gradient method (G-ISTA) for performing $\ell_1$-regularized covariance matrix estimation is presented. Although numerous algorithms have been proposed for solving this problem, this simple proximal gradient method is found to have attractive theoretical and numerical properties. G-ISTA has a linear rate of convergence, resulting in an $O(\log(1/\varepsilon))$ iteration complexity to reach a tolerance of $\varepsilon$. This paper gives eigenvalue bounds for the G-ISTA iterates, providing a closed-form linear convergence rate. The rate is shown to be closely related to the condition number of the optimal point. Numerical convergence results and timing comparisons for the proposed method are presented. G-ISTA is shown to perform very well, especially when the optimal point is well-conditioned.

1 Introduction
Datasets from a wide range of modern research areas are increasingly high dimensional, which presents a number of theoretical and practical challenges. A fundamental example is the problem of estimating the covariance matrix from a dataset of $n$ samples $\{X^{(i)}\}_{i=1}^n$, drawn i.i.d. from a $p$-dimensional, zero-mean Gaussian distribution with covariance matrix $\Sigma \in \mathbb{S}^p_{++}$,
$$X^{(i)} \sim \mathcal{N}_p(0, \Sigma),$$
where $\mathbb{S}^p_{++}$ denotes the space of $p \times p$ symmetric, positive definite matrices. When $n \geq p$, the maximum likelihood covariance estimator $\hat{\Sigma}$ is the sample covariance matrix $S = \frac{1}{n}\sum_{i=1}^n X^{(i)} X^{(i)T}$. A problem however arises when $n < p$, due to the rank-deficiency in $S$. In this sample-deficient case, common throughout several modern applications such as genomics, finance, and earth sciences, the matrix $S$ is not invertible, and thus cannot be directly used to obtain a well-defined estimator for the inverse covariance matrix $\Theta := \Sigma^{-1}$.
A related problem is the inference of a Gaussian graphical model ([27, 14]), that is, a sparsity pattern in the inverse covariance matrix $\Theta$. Gaussian graphical models provide a powerful means of dimensionality reduction in high-dimensional data. Moreover, such models allow for discovery of conditional independence relations between random variables since, for multivariate Gaussian data, sparsity in the inverse covariance matrix encodes conditional independences. Specifically, if $X = (X_i)_{i=1}^p \in \mathbb{R}^p$ is distributed as $X \sim \mathcal{N}_p(0, \Sigma)$, then
$$(\Sigma^{-1})_{ij} = \Theta_{ij} = 0 \iff X_i \perp\!\!\!\perp X_j \mid \{X_k\}_{k \neq i,j},$$
where the notation $A \perp\!\!\!\perp B \mid C$ denotes the conditional independence of $A$ and $B$ given the set of variables $C$ (see [27, 14]). If a dataset, even one with $n \gg p$, is drawn from a normal distribution with sparse inverse covariance matrix $\Theta$, the inverse sample covariance matrix $S^{-1}$ will almost surely be a dense matrix, although the estimates for those $\Theta_{ij}$ which are equal to 0 may be very small in magnitude.
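This dense-versus-sparse contrast is easy to reproduce; the following NumPy sketch (a toy example of our own, not from the paper) builds a tridiagonal precision matrix, samples from the corresponding Gaussian, and counts nonzeros:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 30, 300

# A sparse, well-conditioned precision matrix Theta (tridiagonal here).
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Theta)

# Sample n i.i.d. observations from N_p(0, Sigma).
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = X.T @ X / n                  # sample covariance matrix
S_inv = np.linalg.inv(S)

# Theta has 3p - 2 nonzeros; S_inv is almost surely fully dense,
# with the "zero" entries merely small in magnitude.
print(np.count_nonzero(np.abs(Theta) > 1e-8))   # 3p - 2 = 88
print(np.count_nonzero(np.abs(S_inv) > 1e-8))   # ~ p * p = 900
```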
As sparse estimates of $\Theta$ are more robust than $S^{-1}$, and since such sparsity may yield easily interpretable models, there exists significant impetus to perform sparse inverse covariance estimation in very high dimensional, low sample size settings. Banerjee et al. [1] proposed performing such sparse inverse covariance estimation by solving the $\ell_1$-penalized maximum likelihood estimation problem,
$$\hat{\Theta}_\lambda = \arg\min_{\Theta \in \mathbb{S}^p_{++}} \; -\log\det\Theta + \langle S, \Theta \rangle + \lambda \|\Theta\|_1, \qquad (1)$$
where $\lambda > 0$ is a penalty parameter, $\langle S, \Theta \rangle = \mathrm{Tr}(S\Theta)$, and $\|\Theta\|_1 = \sum_{i,j} |\Theta_{ij}|$. For $\lambda > 0$, Problem (1) is strongly convex and hence has a unique solution, which lies in the positive definite cone $\mathbb{S}^p_{++}$ due to the $\log\det$ term, and is hence invertible. Moreover, the $\ell_1$ penalty induces sparsity in $\hat{\Theta}_\lambda$, as it is the closest convex relaxation of the $0$–$1$ penalty $\|\Theta\|_0 = \sum_{i,j} I(\Theta_{ij} \neq 0)$, where $I(\cdot)$ is the indicator function [5]. The unique optimal point of Problem (1), $\hat{\Theta}_\lambda$, is both invertible (for $\lambda > 0$) and sparse (for sufficiently large $\lambda$), and can be used as an inverse covariance matrix estimator.
In this paper, a proximal gradient method for solving Problem (1) is proposed. The resulting "graphical iterative shrinkage thresholding algorithm", or G-ISTA, is shown to converge at a linear rate to $\hat{\Theta}_\lambda$; that is, its iterates $\Theta_t$ are proven to satisfy
$$\|\Theta_{t+1} - \hat{\Theta}_\lambda\|_F \leq s \, \|\Theta_t - \hat{\Theta}_\lambda\|_F, \qquad (2)$$
for a fixed worst-case contraction constant $s \in (0, 1)$, where $\|\cdot\|_F$ denotes the Frobenius norm. The convergence rate $s$ is provided explicitly in terms of $S$ and $\lambda$, and, importantly, is related to the condition number of $\hat{\Theta}_\lambda$.
The paper is organized as follows. Section 2 describes prior work related to the solution of Problem (1). The G-ISTA algorithm is formulated in Section 3. Section 4 contains the convergence proofs of this algorithm, which constitute the primary mathematical result of this paper. Numerical results are presented in Section 5, and concluding remarks are made in Section 6.

2 Prior Work
While several excellent general convex solvers exist (for example, [11] and [4]), these are not always adept at handling high dimensional problems (i.e., $p > 1000$). As many modern datasets have several thousands of variables, numerous authors have proposed efficient algorithms designed specifically to solve the $\ell_1$-penalized sparse maximum likelihood covariance estimation problem (1). These can be broadly categorized as either primal or dual methods. Following the literature, we refer to primal methods as those which directly solve Problem (1), yielding a concentration estimate. Dual methods [1] yield a covariance matrix by solving the constrained problem
$$\min_{U \in \mathbb{R}^{p \times p}} \; -\log\det(S + U) - p \quad \text{subject to} \quad \|U\|_\infty \leq \lambda, \qquad (3)$$
where the primal and dual variables are related by $\Theta = (S + U)^{-1}$. Both the primal and dual problems can be solved using block methods (also known as "row by row" methods), which sequentially optimize one row/column of the argument at each step until convergence. The primal and dual block problems both reduce to $\ell_1$-penalized regressions, which can be solved very efficiently.

2.1 Dual Methods
A number of dual methods for solving Problem (1) have been proposed in the literature. Banerjee et al. [1] consider a block coordinate descent algorithm to solve the block dual problem, which reduces each optimization step to solving a box-constrained quadratic program. Each of these quadratic programs is equivalent to performing a "lasso" ($\ell_1$-regularized) regression. Friedman et al. [10] iteratively solve the lasso regression as described in [1], but do so using coordinate-wise descent.
Their widely used solver, known as the graphical lasso (glasso), is implemented on CRAN. Global convergence rates of these block coordinate methods are unknown. D'Aspremont et al. [9] use Nesterov's smooth approximation scheme, which produces an $\epsilon$-optimal solution in $O(1/\epsilon)$ iterations. A variant of Nesterov's smooth method is shown to have an $O(1/\sqrt{\epsilon})$ iteration complexity in [15, 16].

2.2 Primal Methods
Interest in primal methods for solving Problem (1) has been growing for many reasons. One important reason stems from the fact that convergence within a certain tolerance for the dual problem does not necessarily imply convergence within the same tolerance for the primal. Yuan and Lin [30] use interior point methods based on the max-det problem studied in [26]. Yuan [31] uses an alternating-direction method, while Scheinberg et al. [24] propose a similar method and show a sublinear convergence rate. Mazumder and Hastie [18] consider block-coordinate descent approaches for the primal problem, similar to the dual approach taken in [10]. Mazumder and Agarwal [17] also solve the primal problem with block-coordinate descent, but at each iteration perform a partial as opposed to complete block optimization, resulting in a decreased computational complexity per iteration. Convergence rates of these primal methods have not been considered in the literature, and hence theoretical guarantees are not available. Hsieh et al. [13] propose a second-order proximal point algorithm, called QUIC, which converges superlinearly locally around the optimum.

3 Methodology
In this section, the graphical iterative shrinkage thresholding algorithm (G-ISTA) for solving the primal problem (1) is presented. A rich body of mathematical and numerical work exists for general iterative shrinkage thresholding and related methods; see, in particular, [3, 8, 19, 20, 21, 25]. A brief description is provided here.

3.1 General Iterative Shrinkage Thresholding (ISTA)
Iterative shrinkage thresholding algorithms (ISTA) are general first-order techniques for solving problems of the form
$$\min_{x \in \mathcal{X}} \; F(x) := f(x) + g(x), \qquad (4)$$
where $\mathcal{X}$ is a Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and associated norm $\|\cdot\|$, $f : \mathcal{X} \to \mathbb{R}$ is a continuously differentiable convex function, and $g : \mathcal{X} \to \mathbb{R}$ is a lower semi-continuous convex function, not necessarily smooth. The function $f$ is also often assumed to have a Lipschitz-continuous gradient $\nabla f$; that is, there exists some constant $L > 0$ such that
$$\|\nabla f(x_1) - \nabla f(x_2)\| \leq L \|x_1 - x_2\| \qquad (5)$$
for any $x_1, x_2 \in \mathcal{X}$. For a given lower semi-continuous convex function $g$, the proximity operator of $g$, denoted by $\mathrm{prox}_g : \mathcal{X} \to \mathcal{X}$, is given by
$$\mathrm{prox}_g(x) = \arg\min_{y \in \mathcal{X}} \left( g(y) + \frac{1}{2}\|x - y\|^2 \right). \qquad (6)$$
It is well known (for example, [8]) that $x^\star \in \mathcal{X}$ is an optimal solution of Problem (4) if and only if
$$x^\star = \mathrm{prox}_{\zeta g}\big(x^\star - \zeta \nabla f(x^\star)\big) \qquad (7)$$
for any $\zeta > 0$. The above characterization suggests a method for optimizing Problem (4) based on the iteration
$$x_{t+1} = \mathrm{prox}_{\zeta_t g}\big(x_t - \zeta_t \nabla f(x_t)\big) \qquad (8)$$
for some choice of step size $\zeta_t$. This simple method is referred to as an iterative shrinkage thresholding algorithm (ISTA). For a step size $\zeta_t \leq \frac{1}{L}$, the ISTA iterates $x_t$ are known to satisfy
$$F(x_t) - F(x^\star) \simeq O\!\left(\frac{1}{t}\right), \quad \forall t, \qquad (9)$$
where $x^\star$ is some optimal point; which is to say, they converge to the space of optimal points at a sublinear rate. If no Lipschitz constant $L$ for $\nabla f$ is known, the same convergence result still holds for $\zeta_t$ chosen such that
$$f(x_{t+1}) \leq Q_{\zeta_t}(x_{t+1}, x_t), \qquad (10)$$
where $Q_\zeta(\cdot, \cdot) : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a quadratic approximation to $f$, defined by
$$Q_\zeta(x, y) = f(y) + \langle x - y, \nabla f(y) \rangle + \frac{1}{2\zeta}\|x - y\|^2. \qquad (11)$$
See [3] for more details.
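As a concrete instance of iteration (8), the following NumPy sketch applies ISTA to the vector lasso problem $f(x) = \frac{1}{2}\|Ax - b\|_2^2$, $g(x) = \lambda\|x\|_1$, whose proximity operator is soft-thresholding; the problem choice and all names are our own, for illustration only:

```python
import numpy as np

def soft_threshold(x, t):
    """prox of t * ||.||_1: componentwise sgn(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via iteration (8)."""
    L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of grad f
    zeta = 1.0 / L                    # step size zeta <= 1/L, as in (9)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)      # grad f(x) = A^T (A x - b)
        x = soft_threshold(x - zeta * grad, zeta * lam)
    return x
```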
3.2 Graphical Iterative Shrinkage Thresholding (G-ISTA)
The general method described in Section 3.1 can be adapted to the sparse inverse covariance estimation Problem (1). Using the notation introduced in Problem (4), define $f, g : \mathbb{S}^p_{++} \to \mathbb{R}$ by $f(X) = -\log\det(X) + \langle S, X \rangle$ and $g(X) = \lambda\|X\|_1$. Both are continuous convex functions defined on $\mathbb{S}^p_{++}$. Although the function $\nabla f(X) = S - X^{-1}$ is not Lipschitz continuous over $\mathbb{S}^p_{++}$, it is Lipschitz continuous within any compact subset of $\mathbb{S}^p_{++}$ (see Lemma 2 of the Supplemental section).

Lemma 1 ([1, 15]). The solution of Problem (1), $\hat{\Theta}_\lambda$, satisfies $\alpha I \preceq \hat{\Theta}_\lambda \preceq \beta I$, for
$$\alpha = \frac{1}{\|S\|_2 + p\lambda}, \qquad \beta = \min\left\{\frac{p - \alpha\,\mathrm{Tr}(S)}{\lambda}, \; \xi\right\}, \qquad (12)$$
and
$$\xi = \begin{cases} \lambda^{-1}\min\left\{\mathbf{1}^T S^{-1}\mathbf{1}, \; (p - \sqrt{p\lambda})\,\|S^{-1}\|_2\right\} - (p-1)\alpha & \text{if } S \in \mathbb{S}^p_{++}, \\ 2\,\mathbf{1}^T\!\left(S + \tfrac{\lambda}{2} I\right)^{-1}\mathbf{1} - \mathrm{Tr}\!\left(\left(S + \tfrac{\lambda}{2} I\right)^{-1}\right) & \text{otherwise}, \end{cases} \qquad (13)$$
where $I$ denotes the $p \times p$ dimensional identity matrix and $\mathbf{1}$ denotes the $p$-dimensional vector of ones.
Note that $f + g$ as defined is a continuous, strongly convex function on $\mathbb{S}^p_{++}$. Moreover, by Lemma 2 of the Supplemental section, $f$ has a Lipschitz continuous gradient when restricted to the compact domain $aI \preceq \Theta \preceq bI$. Hence, $f$ and $g$ as defined meet the conditions described in Section 3.1. The proximity operator of $\lambda\|X\|_1$ for $\lambda > 0$ is the soft-thresholding operator $\eta_\lambda : \mathbb{R}^{p \times p} \to \mathbb{R}^{p \times p}$, defined entrywise by
$$[\eta_\lambda(X)]_{i,j} = \mathrm{sgn}(X_{i,j})\,(|X_{i,j}| - \lambda)_+, \qquad (14)$$
where for $x \in \mathbb{R}$, $(x)_+ := \max(x, 0)$ (see [8]). Finally, the quadratic approximation $Q_{\zeta_t}$ of $f$, as in equation (11), is given by
$$Q_{\zeta_t}(\Theta_{t+1}, \Theta_t) = -\log\det(\Theta_t) + \langle S, \Theta_t \rangle + \langle \Theta_{t+1} - \Theta_t, \; S - \Theta_t^{-1} \rangle + \frac{1}{2\zeta_t}\|\Theta_{t+1} - \Theta_t\|_F^2. \qquad (15)$$
The G-ISTA algorithm for solving Problem (1) is given in Algorithm 1. As in [3], the algorithm uses a backtracking line search for the choice of step size. The procedure terminates when a prespecified duality gap is attained. The authors found that an initial estimate of $\Theta_0$ satisfying $[\Theta_0]_{ii} = (S_{ii} + \lambda)^{-1}$ works well in practice. Note also that the positive definite check of $\Theta_{t+1}$ during Step (1) of Algorithm 1 is accomplished using a Cholesky decomposition, and the inverse of $\Theta_{t+1}$ is computed using that Cholesky factor.

Algorithm 1: G-ISTA for Problem (1)
input: Sample covariance matrix $S$, penalty parameter $\lambda$, tolerance $\varepsilon$, backtracking constant $c \in (0, 1)$, initial step size $\zeta_{1,0}$, initial iterate $\Theta_0$. Set $\Delta := 2\varepsilon$.
while $\Delta > \varepsilon$ do
  (1) Line search: Let $\zeta_t$ be the largest element of $\{c^j \zeta_{t,0}\}_{j=0,1,\ldots}$ so that for $\Theta_{t+1} = \eta_{\zeta_t\lambda}\big(\Theta_t - \zeta_t(S - \Theta_t^{-1})\big)$, the following are satisfied: $\Theta_{t+1} \succ 0$ and $f(\Theta_{t+1}) \leq Q_{\zeta_t}(\Theta_{t+1}, \Theta_t)$, for $Q_{\zeta_t}$ as defined in (15).
  (2) Update iterate: $\Theta_{t+1} = \eta_{\zeta_t\lambda}\big(\Theta_t - \zeta_t(S - \Theta_t^{-1})\big)$.
  (3) Set next initial step, $\zeta_{t+1,0}$. See Section 3.2.1.
  (4) Compute duality gap: $\Delta = -\log\det(S + U_{t+1}) - p - \log\det\Theta_{t+1} + \langle S, \Theta_{t+1} \rangle + \lambda\|\Theta_{t+1}\|_1$, where $(U_{t+1})_{i,j} = \min\{\max\{([\Theta_{t+1}^{-1}]_{i,j} - S_{i,j}), -\lambda\}, \lambda\}$.
end
output: $\varepsilon$-optimal solution to Problem (1), $\hat{\Theta}_\lambda = \Theta_{t+1}$.
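To make the update and stopping rule concrete, below is a minimal NumPy sketch of Algorithm 1 (not the authors' C++ implementation). The defaults are ours, and the Barzilai–Borwein choice of Section 3.2.1 is replaced, for brevity, by simply carrying the last accepted step size forward:

```python
import numpy as np

def soft_threshold(X, t):
    """Entrywise soft-thresholding eta_t(X) from (14)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def g_ista(S, lam, eps=1e-5, c=0.5, max_iter=1000):
    """Sketch of Algorithm 1; constants and simplifications are ours."""
    p = S.shape[0]
    Theta = np.diag(1.0 / (np.diag(S) + lam))   # [Theta_0]_ii = (S_ii + lam)^-1
    zeta = 1.0                                  # initial step size
    for _ in range(max_iter):
        grad = S - np.linalg.inv(Theta)         # grad f(Theta) = S - Theta^{-1}
        f_curr = -np.linalg.slogdet(Theta)[1] + np.sum(S * Theta)
        # Step (1): backtrack until Theta_next is PD and f <= Q_zeta from (15).
        while True:
            Theta_next = soft_threshold(Theta - zeta * grad, zeta * lam)
            try:
                chol = np.linalg.cholesky(Theta_next)   # positive definite check
            except np.linalg.LinAlgError:
                zeta *= c
                continue
            logdet_next = 2.0 * np.log(np.diag(chol)).sum()
            f_next = -logdet_next + np.sum(S * Theta_next)
            D = Theta_next - Theta
            if f_next <= f_curr + np.sum(D * grad) + np.sum(D * D) / (2 * zeta):
                break
            zeta *= c
        # Step (4): duality gap via the projection U of Theta^{-1} - S onto
        # {||U||_inf <= lam}; assumes S + U is PD, which holds near the optimum.
        U = np.clip(np.linalg.inv(Theta_next) - S, -lam, lam)
        gap = (-np.linalg.slogdet(S + U)[1] - p
               - logdet_next + np.sum(S * Theta_next)
               + lam * np.abs(Theta_next).sum())
        Theta = Theta_next
        if gap < eps:
            return Theta
    return Theta
```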
3.2.1 Choice of initial step size, $\zeta_0$
Each iteration of Algorithm 1 requires an initial step size $\zeta_0$. The results of Section 4 guarantee that any $\zeta_0 \leq \lambda_{\min}(\Theta_t)^2$ will be accepted by the line search criteria of Step (1) in the next iteration. However, in practice this choice of step is overly cautious; a much larger step can often be taken. Our implementation of Algorithm 1 chooses the Barzilai–Borwein step [2]. This step, given by
$$\zeta_{t+1,0} = \frac{\mathrm{Tr}\big((\Theta_{t+1} - \Theta_t)(\Theta_{t+1} - \Theta_t)\big)}{\mathrm{Tr}\big((\Theta_{t+1} - \Theta_t)(\Theta_t^{-1} - \Theta_{t+1}^{-1})\big)}, \qquad (16)$$
is also used in the SpaRSA algorithm [29], and approximates the Hessian around $\Theta_{t+1}$. If a certain number of maximum backtracks do not result in an accepted step, G-ISTA takes the safe step $\lambda_{\min}(\Theta_t)^2$. Such a safe step can be obtained from $\lambda_{\max}(\Theta_t^{-1})$, which in turn can be quickly approximated using power iteration.

4 Convergence Analysis
In this section, linear convergence of Algorithm 1 is discussed. Throughout the section, $\Theta_t$ ($t = 1, 2, \ldots$) denote the iterates of Algorithm 1, and $\hat{\Theta}_\lambda$ the optimal solution to Problem (1) for $\lambda > 0$. The minimum and maximum eigenvalues of a symmetric matrix $A$ are denoted by $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$, respectively.

Theorem 1. Assume that the iterates $\Theta_t$ of Algorithm 1 satisfy $aI \preceq \Theta_t \preceq bI$, $\forall t$, for some fixed constants $0 < a < b$. If $\zeta_t \leq a^2$, $\forall t$, then
$$\|\Theta_{t+1} - \hat{\Theta}_\lambda\|_F \leq \max\left\{1 - \frac{\zeta_t}{b^2}, \; 1 - \frac{\zeta_t}{a^2}\right\} \|\Theta_t - \hat{\Theta}_\lambda\|_F. \qquad (17)$$
Furthermore,
1. The step size $\zeta_t$ which yields an optimal worst-case contraction bound $s(\zeta_t)$ is $\bar{\zeta} = \frac{2}{a^{-2} + b^{-2}}$.
2. The optimal worst-case contraction bound corresponding to $\bar{\zeta} = \frac{2}{a^{-2} + b^{-2}}$ is given by
$$s(\bar{\zeta}) := 1 - \frac{2}{1 + \frac{b^2}{a^2}}.$$
Proof. A direct proof is given in the appendix. Note that linear convergence of proximal gradient methods for strongly convex objective functions in general has already been proven (see the Supplemental section).
It remains to show that there exist constants $a$ and $b$ which bound the eigenvalues of $\Theta_t$, $\forall t$. The existence of such constants follows directly from Theorem 1, as the $\Theta_t$ lie in the bounded domain $\{\Theta \in \mathbb{S}^p_{++} : f(\Theta) + g(\Theta) < f(\Theta_0) + g(\Theta_0)\}$, for all $t$. However, it is possible to specify the constants $a$ and $b$ to yield an explicit rate; this is done in Theorem 2.

Theorem 2. Let $\lambda > 0$, define $\alpha$ and $\beta$ as in Lemma 1, and assume $\zeta_t \leq \alpha^2$, $\forall t$. Then the iterates $\Theta_t$ of Algorithm 1 satisfy $\alpha I \preceq \Theta_t \preceq b' I$, $\forall t$, with $b' = \|\hat{\Theta}_\lambda\|_2 + \|\Theta_0 - \hat{\Theta}_\lambda\|_F \leq \beta + \sqrt{p}(\beta + \alpha)$.
Proof. See the Supplementary section.
Importantly, note that the bounds of Theorem 2 depend explicitly on the bound of $\hat{\Theta}_\lambda$, as given by Lemma 1. These eigenvalue bounds on $\Theta_{t+1}$, along with Theorem 1, provide a closed-form linear convergence rate for Algorithm 1. This rate depends only on properties of the solution.

Theorem 3. Let $\alpha$ and $\beta$ be as in Lemma 1. Then for a constant step size $\zeta_t := \zeta \leq \alpha^2$, the iterates of Algorithm 1 converge linearly with a rate of
$$s(\zeta) = 1 - \frac{2\alpha^2}{\alpha^2 + \left(\beta + \sqrt{p}(\beta - \alpha)\right)^2} < 1. \qquad (18)$$
Proof. By Theorem 2, for $\zeta \leq \alpha^2$, the iterates $\Theta_t$ satisfy
$$\alpha I \preceq \Theta_t \preceq \left(\|\hat{\Theta}_\lambda\|_2 + \|\Theta_0 - \hat{\Theta}_\lambda\|_F\right) I \quad \text{for all } t.$$
Moreover, since $\alpha I \preceq \hat{\Theta}_\lambda \preceq \beta I$, if $\alpha I \preceq \Theta_0 \preceq \beta I$ (for instance, by taking $\Theta_0 = (S + \lambda I)^{-1}$ or some multiple of the identity), then this can be bounded as:
$$\|\hat{\Theta}_\lambda\|_2 + \|\Theta_0 - \hat{\Theta}_\lambda\|_F \leq \beta + \sqrt{p}\,\|\Theta_0 - \hat{\Theta}_\lambda\|_2 \qquad (19)$$
$$\leq \beta + \sqrt{p}(\beta - \alpha). \qquad (20)$$
Therefore,
$$\alpha I \preceq \Theta_t \preceq \left(\beta + \sqrt{p}(\beta - \alpha)\right) I, \qquad (21)$$
and the result follows from Theorem 1.

Remark 1. Note that the contraction constant (equation (18)) of Theorem 3 is closely related to the condition number of $\hat{\Theta}_\lambda$, $\kappa(\hat{\Theta}_\lambda) = \frac{\lambda_{\max}(\hat{\Theta}_\lambda)}{\lambda_{\min}(\hat{\Theta}_\lambda)} \leq \frac{\beta}{\alpha}$, as
$$1 - \frac{2\alpha^2}{\alpha^2 + \left(\beta + \sqrt{p}(\beta - \alpha)\right)^2} \geq 1 - \frac{2\alpha^2}{\alpha^2 + \beta^2} \geq 1 - 2\,\kappa(\hat{\Theta}_\lambda)^{-2}. \qquad (22)$$
Therefore, the worst-case bound becomes close to 1 as the condition number of $\hat{\Theta}_\lambda$ increases.
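For intuition about how (18) behaves, the following short sketch evaluates the worst-case rate for two hypothetical $(\alpha, \beta)$ pairs; the numbers are invented purely for illustration:

```python
import numpy as np

def contraction_rate(alpha, beta, p):
    """Worst-case linear rate s(zeta) from (18), valid for zeta <= alpha^2."""
    b = beta + np.sqrt(p) * (beta - alpha)   # eigenvalue bound from Theorem 2
    return 1.0 - 2.0 * alpha**2 / (alpha**2 + b**2)

# Well-conditioned solution: alpha and beta close together.
print(contraction_rate(alpha=0.9, beta=1.1, p=100))   # ~ 0.84
# Ill-conditioned solution: the bound degrades toward 1.
print(contraction_rate(alpha=0.1, beta=5.0, p=100))   # ~ 0.99999
```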
5 Numerical Results
In this section, we provide numerical results for the G-ISTA algorithm. In Section 5.2, the theoretical results of Section 4 are demonstrated. Section 5.3 compares the running times of the G-ISTA, glasso [10], and QUIC [13] algorithms. All algorithms were implemented in C++ and run on an Intel i7-2600k 3.40GHz × 8 core machine with 16 GB of RAM.

5.1 Synthetic Datasets
Synthetic data for this section was generated following the method used by [16, 17]. For a fixed $p$, a $p$-dimensional inverse covariance matrix $\Theta$ was generated with off-diagonal entries drawn i.i.d. from a uniform$(-1, 1)$ distribution. These entries were set to zero with some fixed probability (in this case, either 0.97 or 0.85, to simulate a very sparse and a somewhat sparse model). Finally, a multiple of the identity was added to the resulting matrix so that the smallest eigenvalue was equal to 1. In this way, $\Theta$ was ensured to be sparse, positive definite, and well-conditioned. Datasets of $n$ samples were then generated by drawing i.i.d. samples from a $\mathcal{N}_p(0, \Theta^{-1})$ distribution. For each value of $p$ and sparsity level of $\Theta$, $n = 1.2p$ and $n = 0.2p$ were tested, to represent both the $n < p$ and $n > p$ cases.

Table 1: Timing comparisons for $p = 2000$ dimensional datasets, generated as in Section 5.1. Columns correspond to $\lambda = 0.03, 0.06, 0.09, 0.12$; algorithm entries report time (seconds)/iterations; nnz(A) is the percentage of nonzero elements of matrix A.

problem: p = 2000, n = 400, nnz(Θ) = 3%
  nnz(Θ̂λ)/κ(Θ̂λ):   27.65%/48.14    15.08%/20.14   7.24%/7.25    2.39%/2.32
  glasso:            1977.92/11      831.69/8       604.42/7      401.59/5
  QUIC:              1481.80/21      257.97/11      68.49/8       15.25/6
  G-ISTA:            145.60/437      27.05/9        8.05/27       3.19/12

problem: p = 2000, n = 2400, nnz(Θ) = 3%
  nnz(Θ̂λ)/κ(Θ̂λ):   14.56%/10.25    3.11%/2.82     0.91%/1.51    0.11%/1.18
  glasso:            667.29/7        490.90/6       318.24/4      233.94/3
  QUIC:              211.29/10       24.98/7        5.16/5        1.56/4
  G-ISTA:            14.09/47        3.51/13        2.72/10       2.20/8

problem: p = 2000, n = 400, nnz(Θ) = 15%
  nnz(Θ̂λ)/κ(Θ̂λ):   27.35%/64.22    15.20%/28.50   7.87%/11.88   2.94%/2.87
  glasso:            2163.33/11      862.39/8       616.81/7      48.47/7
  QUIC:              1496.98/21      318.57/12      96.25/9       23.62/7
  G-ISTA:            251.51/714      47.35/148      7.96/28       3.18/12

problem: p = 2000, n = 2400, nnz(Θ) = 15%
  nnz(Θ̂λ)/κ(Θ̂λ):   19.98%/17.72    5.49%/4.03     65.47%/1.36   0.03%/1.09
  glasso:            708.15/6        507.04/6       313.88/4      233.16/3
  QUIC:              301.35/10       491.54/17      4.12/5        1.34/4
  G-ISTA:            28.23/88        4.08/16        1.95/7        1.13/4

5.2 Demonstration of Convergence Rates
The linear convergence rate derived for G-ISTA in Section 4 was shown to be heavily dependent on the conditioning of the final estimator. To demonstrate these results, G-ISTA was run on a synthetic dataset, as described in Section 5.1, with $p = 500$ and $n = 300$. Regularization parameters of $\lambda = 0.075, 0.1, 0.125, 0.15$, and $0.175$ were used. Note that as $\lambda$ increases, $\hat{\Theta}_\lambda$ generally becomes better conditioned. For each value of $\lambda$, the numerical optimum was computed to a duality gap of $10^{-10}$ using G-ISTA. These values of $\lambda$ resulted in sparsity levels of 81.80%, 89.67%, 94.97%, 97.82%, and 99.11%, respectively. G-ISTA was then run again, and the Frobenius-norm argument errors at each iteration were stored. These errors were plotted on a log scale for each value of $\lambda$ to demonstrate the dependence of the convergence rate on condition number. See Figure 1, which clearly demonstrates the effects of conditioning.

5.3 Timing Comparisons
The G-ISTA, glasso, and QUIC algorithms were run on synthetic datasets (real datasets are presented in the Supplemental section) of varying $p$, $n$, and with different levels of regularization $\lambda$. All algorithms were run to ensure a fixed duality gap, here taken to be $10^{-5}$. This comparison used efficient C++ implementations of each of the three algorithms investigated. The implementation of G-ISTA was adapted from the publicly available C++ implementation of QUIC by Hsieh et al. [13]. Running times were recorded and are presented in Table 1. Further comparisons are presented in the Supplementary section.
Remark 2. The three algorithms' variable ability to take advantage of multiple processors is an important detail. The times presented in Table 1 are wall times, not CPU times. The comparisons were run on a multicore processor, and it is important to note that the Cholesky decompositions and inversions required by both G-ISTA and QUIC take advantage of multiple cores. On the other hand, the $p^2$-dimensional lasso solve of QUIC and the $p$-dimensional lasso solve of glasso do not. For this reason, and because Cholesky factorizations and inversions make up the bulk of the computation required by G-ISTA, the CPU time of G-ISTA was typically greater than its wall time by a factor of roughly 4. The CPU and wall times of QUIC were more similar; the same applies to glasso.

Figure 1: Semilog plot of $\|\Theta_t - \hat{\Theta}_\lambda\|_F$ vs. iteration number $t$, demonstrating the linear convergence rates of G-ISTA and the dependence of those rates on $\kappa(\hat{\Theta}_\lambda)$. (Curves shown for $\lambda = 0.075$, $\kappa(\hat{\Theta}_\lambda) = 7.263$; $\lambda = 0.1$, $\kappa = 3.9637$; $\lambda = 0.125$, $\kappa = 2.3581$; $\lambda = 0.15$, $\kappa = 1.6996$; $\lambda = 0.175$, $\kappa = 1.3968$.)

6 Conclusion
In this paper, a proximal gradient method was applied to the sparse inverse covariance problem. Linear convergence was discussed, with a fixed closed-form rate. Numerical results have also been presented, comparing G-ISTA to the widely-used glasso algorithm and the newer, but very fast, QUIC algorithm. These results indicate that G-ISTA is competitive, in particular for values of $\lambda$ which yield sparse, well-conditioned estimators. The G-ISTA algorithm was very fast on the synthetic examples of Section 5.3, which were generated from well-conditioned models. For poorly conditioned models, QUIC is very competitive. The Supplemental section gives two real datasets which demonstrate this. For many practical applications, however, obtaining an estimator that is well-conditioned is important ([23, 28]). To conclude, although second-order methods for the sparse inverse covariance problem have recently been shown to perform well, simple first-order methods cannot be ruled out, as they can also be very competitive in many cases.

References
[1] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485–516, 2008.
[2] Jonathan Barzilai and Jonathan M. Borwein. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8(1):141–148, 1988.
[3] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2:183–202, 2009.
[4] S. Becker, E. J. Candès, and M. Grant. Templates for convex cone problems with applications to sparse signal recovery. Mathematical Programming Computation, 3:165–218, 2010.
[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] P. Brohan, J. J. Kennedy, I. Harris, S. F. B. Tett, and P. D. Jones. Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. Journal of Geophysical Research, 111, 2006.
[7] George H. G. Chen and R. T. Rockafellar. Convergence rates in forward-backward splitting. SIAM Journal on Optimization, 7:421–444, 1997.
[8] Patrick L. Combettes and Valérie R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling & Simulation, 4(4):1168–1200, 2005.
[9] Alexandre d'Aspremont, Onureena Banerjee, and Laurent El Ghaoui. First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and Applications, 30(1):56–66, 2008.
[10] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9:432–441, 2008.
[11] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, April 2011.
[12] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[13] Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, and Pradeep K. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In Advances in Neural Information Processing Systems 24, pages 2330–2338, 2011.
[14] S. L. Lauritzen. Graphical Models. Oxford Science Publications. Clarendon Press, 1996.
[15] Zhaosong Lu. Smooth optimization approach for sparse covariance selection. SIAM Journal on Optimization, 19(4):1807–1827, 2009.
[16] Zhaosong Lu. Adaptive first-order methods for general sparse inverse covariance selection. SIAM Journal on Matrix Analysis and Applications, 31:2000–2016, 2010.
[17] Rahul Mazumder and Deepak K. Agarwal. A flexible, scalable and efficient algorithmic framework for the primal graphical lasso. Preprint, 2011.
[18] Rahul Mazumder and Trevor Hastie. The graphical lasso: New insights and alternatives. Preprint, 2011.
[19] Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
[20] Yurii Nesterov. Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, 2004.
[21] Yurii Nesterov. Gradient methods for minimizing composite objective function. CORE discussion papers, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2007.
[22] Jennifer Pittman, Erich Huang, Holly Dressman, Cheng-Fang F. Horng, Skye H. Cheng, Mei-Hua H. Tsou, Chii-Ming M. Chen, Andrea Bild, Edwin S. Iversen, Andrew T. Huang, Joseph R. Nevins, and Mike West. Integrated modeling of clinical and gene expression information for personalized prediction of disease outcomes. Proceedings of the National Academy of Sciences of the United States of America, 101(22):8431–8436, 2004.
[23] Benjamin T. Rolfs and Bala Rajaratnam. A note on the lack of symmetry in the graphical lasso. Computational Statistics and Data Analysis, 2012.
[24] Katya Scheinberg, Shiqian Ma, and Donald Goldfarb. Sparse inverse covariance selection via alternating linearization methods. In Advances in Neural Information Processing Systems 23, pages 2101–2109, 2010.
[25] Paul Tseng. On accelerated proximal gradient methods for convex-concave optimization. Submitted to SIAM Journal on Optimization, 2008.
[26] Lieven Vandenberghe, Stephen Boyd, and Shao-Po Wu. Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19:499–533, 1996.
[27] J. Whittaker. Graphical Models in Applied Multivariate Statistics. Wiley, 1990.
[28] J. Won, J. Lim, S. Kim, and B. Rajaratnam. Condition number regularized covariance estimation. Journal of the Royal Statistical Society, Series B, 2012.
[29] Stephen J. Wright, Robert D. Nowak, and Mário A. T. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7):2479–2493, 2009.
[30] Ming Yuan and Yi Lin.
Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
[31] X. M. Yuan. Alternating direction method of multipliers for covariance selection models. Journal of Scientific Computing, pages 1–13, 2010.
3,949
4,575
Selective Labeling via Error Bound Minimization

Quanquan Gu (Department of Computer Science, University of Illinois at Urbana-Champaign), Tong Zhang (Department of Statistics, Rutgers University), Chris Ding (Department of Computer Science & Engineering, University of Texas at Arlington), Jiawei Han (Department of Computer Science, University of Illinois at Urbana-Champaign)
[email protected], [email protected], [email protected], [email protected]

Abstract
In many practical machine learning problems, the acquisition of labeled data is often expensive and/or time consuming. This motivates us to study the following problem: given a label budget, how to select data points to label such that the learning performance is optimized. We propose a selective labeling method by analyzing the out-of-sample error of Laplacian Regularized Least Squares (LapRLS). In particular, we derive a deterministic out-of-sample error bound for LapRLS trained on subsampled data, and propose to select a subset of data points to label by minimizing this upper bound. Since the minimization is a combinatorial problem, we relax it into the continuous domain and solve it by projected gradient descent. Experiments on benchmark datasets show that the proposed method outperforms the state-of-the-art methods.

1 Introduction
The performance of (semi-)supervised learning methods typically depends on the amount of labeled data. Roughly speaking, the more the labeled data, the better the learning performance will be. However, in many practical machine learning problems, the acquisition of labeled data is often expensive and/or time consuming. To overcome this problem, active learning [9, 10] was proposed, which iteratively queries the oracle (labeler) to obtain the labels at new data points. Representative methods include support vector machine (SVM) active learning [19, 18] and agnostic active learning [2, 5, 14]. Due to the close interaction between the learner and the oracle, active learning can be advantageous for achieving better learning performance. Nevertheless, in many real-world applications, such an interaction may not be feasible. For example, when one turns to Amazon Mechanical Turk (https://www.mturk.com/) to label data, the interaction between the learner and the labeling workers is very limited. Therefore, standard active learning is not very practical in this case.
Another potential solution to the label deficiency problem is semi-supervised learning [7, 22, 21, 4], which aims at combining a small number of labeled data and a large amount of unlabeled data to improve the learning performance. In a typical setting of semi-supervised learning, a small set of labeled data is assumed to be given at hand or randomly generated in practice. However, randomly selecting (uniformly sampling) data points to label is unwise, because not all the data points are equally informative. It is desirable to obtain a labeled subset which is most beneficial for semi-supervised learning.
In this paper, based on the above motivation, we investigate the following problem: given a fixed label budget, how to select a subset of data points to label such that the learning performance is optimized. We refer to this problem as selective labeling, in contrast to conventional random labeling. To achieve the goal of selective labeling, it is crucial to consider the out-of-sample error of a specific learner.
We choose Laplacian Regularized Least Squares (LapRLS) as the learner [4] because it is a state-of-the-art semi-supervised learning method that includes many linear regression methods as special cases (e.g., ridge regression [15]). We derive a deterministic out-of-sample error bound for LapRLS trained on subsampled data, which suggests selecting the data points to label by minimizing this upper bound. The resulting selective labeling method is a combinatorial optimization problem. In order to optimize it effectively and efficiently, we relax it into a continuous optimization problem, and solve it by a projected gradient descent algorithm followed by discretization. Experiments on benchmark datasets show that the proposed method outperforms the state-of-the-art methods.
The remainder of this paper is organized as follows. In Section 2, we briefly review manifold regularization and LapRLS. In Section 3, we derive an out-of-sample error bound for LapRLS on subsampled data, and present a selective labeling criterion by minimizing this bound, followed by its optimization algorithm. We discuss the connections between the proposed method and several existing experimental design approaches in Section 4. The experiments are presented in Section 5. We conclude this paper in Section 6.

2 Review of Laplacian Regularized Least Squares
Given a data set $\{(x_1, y_1), \ldots, (x_n, y_n)\}$ where $x_i \in \mathbb{R}^d$ and $y_i \in \{\pm 1\}$, Laplacian Regularized Least Squares (LapRLS) [4] aims to learn a linear function $f(x) = w^T x$. In order to estimate and preserve the geometrical and topological properties of the data, LapRLS [4] assumes that if two data points $x_i$ and $x_j$ are close in the intrinsic geometry of the data distribution, the labels of these two points are also close to each other. Let $f(x)$ be a function that maps the original data point $x$ in a compact submanifold $\mathcal{M}$ to $\mathbb{R}$; we use $\|f\|_{\mathcal{M}}^2 = \int_{x \in \mathcal{M}} \|\nabla_{\mathcal{M}} f\|^2 \, dx$ to measure the smoothness of $f$ along the geodesics in the intrinsic manifold of the data, where $\nabla_{\mathcal{M}} f$ is the gradient of $f$ along the manifold $\mathcal{M}$. Recent study on spectral graph theory [8] has demonstrated that $\|f\|_{\mathcal{M}}^2$ can be discretely approximated through a nearest neighbor graph on a set of data points. Given an affinity matrix $W \in \mathbb{R}^{n \times n}$ of the graph, $\|f\|_{\mathcal{M}}^2$ is approximated as
$$\|f\|_{\mathcal{M}}^2 \approx \frac{1}{2}\sum_{ij} \|f_i - f_j\|_2^2 \, W_{ij} = \mathbf{f}^T L \mathbf{f}, \qquad (1)$$
where $f_i$ is a shorthand for $f(x_i)$, $\mathbf{f} = [f_1, \ldots, f_n]^T$, $D$ is a diagonal matrix, called the degree matrix, with $D_{ii} = \sum_{j=1}^n W_{ij}$, and $L = D - W$ is the combinatorial graph Laplacian [8]. Eq. (1) is called manifold regularization. Intuitively, the regularization incurs a heavy penalty if neighboring points $x_i$ and $x_j$ are mapped far apart.
Based on manifold regularization, LapRLS solves the following optimization problem,
$$\arg\min_{w} \; \|X^T w - y\|_2^2 + \frac{\lambda_A}{2}\|w\|_2^2 + \frac{\lambda_I}{2} w^T X L X^T w, \qquad (2)$$
where $\lambda_A, \lambda_I > 0$ are positive regularization parameters, $X = [x_1, \ldots, x_n]$ is the design matrix, $y = [y_1, \ldots, y_n]^T$ is the response vector, $\|w\|_2^2$ is the $\ell_2$ regularization of the linear function, and $w^T X L X^T w$ is the manifold regularization of $f(x) = w^T x$. When $\lambda_I = 0$, LapRLS reduces to ridge regression [15]. A bias term $b$ can be incorporated by expanding the weight vector and input feature vector as $w \leftarrow [w; b]$ and $x \leftarrow [x; 1]$. Note that Eq. (2) is a supervised version of LapRLS, because only labeled data are used in manifold regularization. Although our derivations are based on this version in the rest of the paper, the results can be extended to the semi-supervised version of LapRLS straightforwardly.
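As a concrete sketch of the supervised LapRLS of Eq. (2) together with a graph Laplacian of the kind used later in the experiments, consider the following NumPy fragment. The graph construction (a symmetrized 5-NN graph with clipped cosine weights) and all function names are our own illustrative choices, and the constant factors of 2 in (2) are absorbed into $\lambda_A, \lambda_I$:

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Combinatorial graph Laplacian L = D - W from a symmetrized k-NN
    graph with (clipped) cosine similarities; X is d x n, columns = points."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)   # unit l2-norm columns
    C = Xn.T @ Xn                                       # cosine similarity, n x n
    np.fill_diagonal(C, -np.inf)                        # no self-edges
    n = C.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(C[i])[-k:]                    # k nearest neighbors of i
        W[i, nbrs] = np.maximum(C[i, nbrs], 0.0)        # keep weights nonnegative
    W = np.maximum(W, W.T)                              # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W

def laprls_fit(X, y, L, lam_A, lam_I):
    """Closed-form minimizer of the (supervised) LapRLS objective (2)."""
    d = X.shape[0]
    M = X @ X.T + lam_A * np.eye(d) + lam_I * X @ L @ X.T
    return np.linalg.solve(M, X @ y)    # weights w such that f(x) = w^T x
```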
3 The Proposed Method
3.1 Problem Formulation
The generic problem of selective labeling is as follows. Given a set of data points $\mathcal{X} = \{x_1, \ldots, x_n\}$, namely the pool of candidate data points, our goal is to find a subsample $L \subseteq \{1, \ldots, n\}$ which contains the most informative $|L| = l$ points. To derive a selective labeling approach for LapRLS, we first derive an out-of-sample error bound for LapRLS.

3.2 Out-of-Sample Error Bound of LapRLS
We define the function class of LapRLS as follows.
Definition 1. The function class of LapRLS is $\mathcal{F}_B = \{x \mapsto w^T x \mid \lambda_A \|w\|_2^2 + \lambda_I w^T X L X^T w \leq B\}$, where $X = [x_1, \ldots, x_n]$ and $B > 0$ is a constant.
Consider the following linear regression model,
$$y = X^T w^* + \epsilon, \qquad (3)$$
where $X = [x_1, \ldots, x_n]$ is the design matrix, $y = [y_1, \ldots, y_n]^T$ is the response vector, $w^*$ is the true weight vector, which is unknown, and $\epsilon = [\epsilon_1, \ldots, \epsilon_n]^T$ is the noise vector, with $\epsilon_i$ an unknown noise with zero mean. We assume that different observations have noises that are independent, but with equal variance $\sigma^2$. Moreover, we assume that the true weight vector $w^*$ satisfies
$$\lambda_A \|w^*\|_2^2 + \lambda_I (w^*)^T X L X^T w^* \leq B, \qquad (4)$$
which implies that the true hypothesis belongs to the function class of LapRLS in Definition 1. In this case, the approximation error vanishes and the excess error equals the estimation error. Note that this assumption can be relaxed with more effort, under which we can derive a similar error bound as below. For simplicity, the following derivations are built upon the assumption in Eq. (4).
In selective labeling, we are interested in estimating $w^*$ using LapRLS in Eq. (2) from a subsample $L \subseteq \{1, \ldots, n\}$. Denote the subsample of $X$ by $X_L$, the subsample of $y$ by $y_L$, and the subsample of $\epsilon$ by $\epsilon_L$. The solution of LapRLS is given by
$$\hat{w}_L = (X_L X_L^T + \lambda_A I + \lambda_I X_L L_L X_L^T)^{-1} X_L y_L, \qquad (5)$$
where $I$ is an identity matrix, and $L_L$ is the graph Laplacian computed based on $X_L$, which is a principal submatrix of $L$. In the following, we present a deterministic out-of-sample error bound for LapRLS trained on the subsampled data, which is among the main contributions of this paper.
Theorem 2. For any fixed $V = [v_1, \ldots, v_m]$ and $X = [x_1, \ldots, x_n]$, and a subsample $L$ of $X$, the expected error of LapRLS trained on $L$ in predicting the true response $V^T w^*$ is upper bounded as
$$\mathbb{E}\|V^T \hat{w}_L - V^T w^*\|_2^2 \leq (B + \sigma^2)\, \mathrm{tr}\left(V^T (X_L X_L^T + \lambda_A I + \lambda_I X_L L_L X_L^T)^{-1} V\right). \qquad (6)$$
Proof. Let $M_L = \lambda_A I + \lambda_I X_L L_L X_L^T$. Given $L$, the expected error (where the expectation is with respect to $\epsilon_L$) is given by
$$\mathbb{E}\|V^T \hat{w}_L - V^T w^*\|_2^2 = \mathbb{E}\|V^T (X_L X_L^T + M_L)^{-1} X_L y_L - V^T w^*\|_2^2$$
$$= \underbrace{\|V^T (X_L X_L^T + M_L)^{-1} X_L X_L^T w^* - V^T w^*\|_2^2}_{A_1} + \underbrace{\mathbb{E}\|V^T (X_L X_L^T + M_L)^{-1} X_L \epsilon_L\|_2^2}_{A_2}, \qquad (7)$$
where the second equality follows from $y_L = X_L^T w^* + \epsilon_L$. We now bound the two terms on the right hand side respectively. The first term is bounded by
$$A_1 = \|V^T (X_L X_L^T + M_L)^{-1} M_L w^*\|_2^2 \leq \|V^T (X_L X_L^T + M_L)^{-1} M_L^{\frac{1}{2}}\|_F^2 \, \|M_L^{\frac{1}{2}} w^*\|_2^2$$
$$\leq B\, \mathrm{tr}\left(V^T (X_L X_L^T + M_L)^{-1} M_L (X_L X_L^T + M_L)^{-1} V\right) \leq B\, \mathrm{tr}\left(V^T (X_L X_L^T + M_L)^{-1} V\right), \qquad (8)$$
where the first inequality is due to the Cauchy–Schwarz inequality, and the last inequality follows from dropping a negative term. Similarly, the second term can be bounded by
$$A_2 \leq \sigma^2 \, \mathrm{tr}\left(V^T (X_L X_L^T + M_L)^{-1} X_L X_L^T (X_L X_L^T + M_L)^{-1} V\right) \leq \sigma^2 \, \mathrm{tr}\left(V^T (X_L X_L^T + M_L)^{-1} V\right), \qquad (9)$$
where the first inequality uses $\mathbb{E}[\epsilon_L \epsilon_L^T] \preceq \sigma^2 I$, and it becomes an equality if the $\epsilon_i$ are independent and identically distributed (i.i.d.). Combining Eqs. (8) and (9) completes the proof.
Note that in the above theorem, the sample $V$ could be either the same as or different from the sample $X$.
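Since the right-hand side of (6) is label-free, it can be evaluated for any candidate subset before any labeling happens. Here is a sketch (function and variable names ours), taking $V = X$ as in the selection criterion derived below:

```python
import numpy as np

def bound_criterion(X, labeled_idx, L, lam_A, lam_I):
    """tr(X^T (X_L X_L^T + lam_A I + lam_I X_L L_L X_L^T)^{-1} X),
    the right-hand side of (6) up to the (B + sigma^2) factor."""
    X_L = X[:, labeled_idx]
    L_L = L[np.ix_(labeled_idx, labeled_idx)]   # principal submatrix of L
    d = X.shape[0]
    M = X_L @ X_L.T + lam_A * np.eye(d) + lam_I * X_L @ L_L @ X_L.T
    return np.trace(X.T @ np.linalg.solve(M, X))
```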
Sometimes, we are also interested in the expected estimation error of $w^*$, as follows.
Theorem 3. For any fixed $X$ and a subsample $L$ of $X$, the expected error of LapRLS trained on $L$ in estimating the true weight vector $w^*$ is upper bounded as
$$\mathbb{E}\|\hat{w}_L - w^*\|_2^2 \leq (B + \sigma^2)\, \mathrm{tr}\left((X_L X_L^T + \lambda_A I + \lambda_I X_L L_L X_L^T)^{-1}\right). \qquad (10)$$
The proof of this theorem follows derivations similar to those of Theorem 2.

3.3 The Criterion of Selective Labeling
From Theorem 2, we can see that given a subsample $L$ of $X$, the expected prediction error of LapRLS on $V$ is upper bounded by Eq. (6). In addition, the right hand side of Eq. (6) does not depend on the labels, i.e., $y$. More importantly, the error bound derived in this paper is deterministic, unlike probabilistic error bounds derived based on Rademacher complexity [3] or algorithmic stability [6]. Since those probabilistic error bounds only hold for an i.i.d. sample rather than a particular sample, they cannot provide a criterion to choose a subsample set for labeling, due to the correlation between the pool of candidate points and the i.i.d. sample. On the contrary, the deterministic error bound does not suffer from this kind of problem; it therefore provides a natural criterion for selective labeling. In detail, given a pool of candidate data points $X$, we propose to find a subsample $L$ of $\{1, \ldots, n\}$ by minimizing the following objective function,
$$\arg\min_{L \subseteq \{1,\ldots,n\}} \mathrm{tr}\left(X^T (X_L X_L^T + \lambda_I X_L L_L X_L^T + \lambda_A I)^{-1} X\right), \qquad (11)$$
where we simply assume $V = X$. The above problem is a combinatorial optimization problem; finding the global optimal solution is NP-hard. One potential way to solve it is greedy forward (or backward) selection; however, it is inefficient. Here we propose an efficient algorithm which solves its continuous relaxation.

3.4 Reformulation
We introduce a selection matrix $S \in \mathbb{R}^{n \times l}$, defined as
$$S_{ij} = \begin{cases} 1, & \text{if } x_i \text{ is selected as the } j\text{-th point in } L, \\ 0, & \text{otherwise.} \end{cases} \qquad (12)$$
It is easy to check that each column of $S$ has one and only one 1, and each row has at most one 1. The constraint set for $S$ can be defined as
$$\mathcal{S}_1 = \{S \mid S \in \{0,1\}^{n \times l}, \; S^T \mathbf{1} = \mathbf{1}, \; S\mathbf{1} \leq \mathbf{1}\}, \qquad (13)$$
where $\mathbf{1}$ is a vector of all ones, or equivalently,
$$\mathcal{S}_2 = \{S \mid S \in \{0,1\}^{n \times l}, \; S^T S = I\}, \qquad (14)$$
where $I$ is an identity matrix. Based on $S$, we have $X_L = XS$ and $L_L = S^T L S$. Thus, Eq. (11) can be equivalently reformulated as
$$\arg\min_{S \in \mathcal{S}_2} \mathrm{tr}\left(X^T (XSS^T X^T + \lambda_I XSS^T L SS^T X^T + \lambda_A I)^{-1} X\right) = \arg\min_{S \in \mathcal{S}_2} \mathrm{tr}\left(X^T (XSS^T \tilde{L} SS^T X^T + \lambda_A I)^{-1} X\right), \qquad (15)$$
where $\tilde{L} = I + \lambda_I L$. The above optimization problem is still a discrete optimization. Let
$$\mathcal{S}_3 = \{S \mid S \geq 0, \; S^T S = I\}, \qquad (16)$$
where we relax the binary constraint on $S$ into a nonnegativity constraint. Note that $\mathcal{S}_3$ is a matching polytope [17]. Then we solve the following continuous optimization,
$$\arg\min_{S \in \mathcal{S}_3} \mathrm{tr}\left(X^T (XSS^T \tilde{L} SS^T X^T + \lambda_A I)^{-1} X\right). \qquad (17)$$
We derive a projected gradient descent algorithm to find a local optimum of Eq. (17). We first ignore the nonnegativity constraint on $S$. Since $S^T S = I$, we introduce a Lagrange multiplier $\Lambda \in \mathbb{R}^{l \times l}$; thus the Lagrangian function is
$$\mathcal{L}(S) = \mathrm{tr}\left(X^T (XSS^T \tilde{L} SS^T X^T + \lambda_A I)^{-1} X\right) + \mathrm{tr}\left(\Lambda (S^T S - I)\right). \qquad (18)$$
The derivative of $\mathcal{L}(S)$ with respect to $S$ is (the calculation is non-trivial; please refer to the supplementary material for details)
$$\frac{\partial \mathcal{L}}{\partial S} = -2\left(X^T B X S S^T \tilde{L} S + \tilde{L} S S^T X^T B X S\right) + 2 S \Lambda, \qquad (19)$$
where $B = A^{-1}(XX^T)A^{-1}$ and $A = XSS^T \tilde{L} SS^T X^T + \lambda_A I$.
Note that the computational bottleneck of the derivative is $A^{-1}$, the inverse of a $d \times d$ matrix. To overcome this problem, we use the Woodbury matrix identity [12]. Then $A^{-1}$ can be computed as
$$A^{-1} = \frac{1}{\lambda_A}\left(I - \frac{1}{\lambda_A}\, XS\left((S^T \tilde{L} S)^{-1} + \frac{1}{\lambda_A} S^T X^T X S\right)^{-1} S^T X^T\right), \qquad (20)$$
where $S^T \tilde{L} S$ is an $l \times l$ matrix, whose inverse can be computed efficiently when $l \ll d$. To determine the Lagrange multiplier $\Lambda$, left-multiplying Eq. (19) by $S^T$ and using the fact that $S^T S = I$, we obtain
$$\Lambda = S^T X^T B X S S^T \tilde{L} S + S^T \tilde{L} S S^T X^T B X S. \qquad (21)$$
Substituting the Lagrange multiplier $\Lambda$ back into Eq. (19), we obtain a derivative depending only on $S$. Thus we can use projected gradient descent to find a locally optimal solution for Eq. (17). In each iteration, it takes a step proportional to the negative of the gradient of the function at the current point, followed by a projection back onto the nonnegative set.

3.5 Discretization
So far, we have obtained a locally optimal solution $S^*$ by projected gradient descent. However, this $S^*$ contains continuous values; in other words, $S^* \in \mathcal{S}_3$. In order to determine which $l$ data points to select, we need to project $S^*$ into $\mathcal{S}_1$. We use a simple greedy procedure to conduct the discretization: we first find the largest element in $S^*$ (if there exist multiple largest elements, we choose any one of them) and mark its row and column; then, among the unmarked rows and columns, we find the largest element and also mark it; this procedure is repeated until we have found $l$ elements. A sketch of both computational pieces is given below.
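Here is a compact sketch of the two pieces just described, the Woodbury-based inverse (20) and the greedy rounding, with all names being our own choices:

```python
import numpy as np

def inv_A_woodbury(X, S, L_tilde, lam_A):
    """A^{-1} for A = X S S^T L~ S S^T X^T + lam_A I via (20); X is d x n,
    S is a (relaxed) n x l selection matrix with S^T S = I."""
    d = X.shape[0]
    XS = X @ S                                                  # d x l
    inner = np.linalg.inv(S.T @ L_tilde @ S) + XS.T @ XS / lam_A  # l x l
    return np.eye(d) / lam_A - XS @ np.linalg.solve(inner, XS.T) / lam_A**2

def greedy_discretize(S_relaxed, l):
    """Round S in S_3 to a selection of l points (Section 3.5): repeatedly
    take the largest entry among unmarked rows/columns."""
    S = S_relaxed.copy()
    chosen = []
    for _ in range(l):
        i, j = np.unravel_index(np.argmax(S), S.shape)
        chosen.append(i)            # data point i fills slot j
        S[i, :] = -np.inf           # mark row (point already selected)
        S[:, j] = -np.inf           # mark column (slot already filled)
    return sorted(chosen)
```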
4 Related Work
We notice that our proposed method shares a similar spirit with optimal experimental design in statistics [1, 20, 16], whose intent is to select the most informative data points to learn a function which has minimum variance of estimation, or minimum variance of prediction. (Some literature also calls this active learning; our understanding is that there is no adaptive interaction between the learner and the oracle within optimal experimental design, so it is better called non-adaptive active learning.) For example, A-Optimal Design (AOD) minimizes the expected variance of the model parameter. In particular, for ridge regression, it optimizes the following criterion,
$$\arg\min_{L \subseteq \{1,\ldots,n\}} \mathrm{tr}\left((X_L X_L^T + \lambda_A I)^{-1}\right), \qquad (22)$$
where $I$ is an identity matrix. We can recover this criterion by setting $\lambda_I = 0$ in Theorem 3. However, the pitfall of AOD is that it does not characterize the quality of predictions on the data, which is essential for classification or regression. To overcome this shortcoming of A-optimal design, Yu et al. [20] proposed a Transductive Experimental Design (TED) approach. TED selects the samples which minimize the expected predictive variance of ridge regression on the data,
$$\arg\min_{L \subseteq \{1,\ldots,n\}} \mathrm{tr}\left(X^T (X_L X_L^T + \lambda_A I)^{-1} X\right). \qquad (23)$$
Although TED is motivated by minimizing the variance of the prediction, it is very interesting to demonstrate that the above criterion coincides with minimizing the out-of-sample error bound in Theorem 2 with $\lambda_I = 0$. The reason is that for ridge regression, the upper bounds of the bias and variance terms share a common factor $\mathrm{tr}(X^T(X_L X_L^T + \lambda_A I)^{-1} X)$. This is a very important observation, because it explains why TED performs very well even though its criterion is minimizing the variance of the prediction. Furthermore, TED can be seen as a special case of our proposed method.
He et al. [16] proposed Laplacian Optimal Design (LOD), which selects data points that minimize the expected predictive variance of Laplacian regularized least squares [4] on the data,
$$\arg\min_{L \subseteq \{1,\ldots,n\}} \mathrm{tr}\left(X^T (\lambda_I X L X^T + X_L X_L^T + \lambda_A I)^{-1} X\right), \qquad (24)$$
where the graph Laplacian $L$ is computed on all the data points in the pool, i.e., $X$. LOD selects the points via $X_L X_L^T$ while leaving the graph Laplacian term $XLX^T$ fixed. In contrast, our method selects the points via $X_L X_L^T$ as well as the graph Laplacian term, i.e., $X_L L_L X_L^T$. This difference is essential, because our criterion has a strong theoretical foundation, namely minimizing the out-of-sample error bound of LapRLS; it also explains the non-significant improvement of LOD over TED. Admittedly, the term $X_L L_L X_L^T$ in our method raises a challenge for optimization, yet it has been well solved by the projected gradient descent algorithm derived in the previous section. We also note that a similar problem was studied for graphs [13]. However, their method cannot be applied to our setting, because their input is restricted to the adjacency matrix of a graph.

5 Experiments
In this section, we evaluate the proposed method on both synthetic and real-world datasets, and compare it with the state-of-the-art methods. All the experiments are conducted in Matlab.

5.1 Compared Methods
To demonstrate the effectiveness of our proposed method, we compare it with the following baseline approaches. Random Sampling (Random) uniformly selects data points from the pool as training data; it is the simplest baseline for label selection. A-Optimal Design (AOD) is a classic experimental design method proposed in the statistics community; there is a parameter $\lambda_A$ to be tuned. Transductive Experimental Design (TED) is proposed in [20] and is a state-of-the-art (non-adaptive) active learning method; there is a parameter $\lambda_A$ to be tuned. Laplacian Optimal Design (LOD) [16] is an extension of TED which incorporates the manifold structure of the data. Selective Labeling via Error Bound Minimization (Bound) is the proposed method. There are two tunable parameters $\lambda_A$ and $\lambda_I$ in both LOD and Bound.
Both LOD and Bound use the graph Laplacian. To compute it, we first normalize each data point into a vector with unit $\ell_2$-norm. Then we construct a 5-NN graph and use the cosine distance to measure the similarity between data points throughout our experiments. Note that the problem setting of our study is to select a batch of data points to label without training a classifier. Therefore, we do not compare our method with typical active learning methods such as SVM active learning [19, 18] and agnostic active learning [2]. After selecting the data points by the above methods, we train a LapRLS [4] as the learner to do classification. There are two parameters in LapRLS, i.e., $\lambda_A$ and $\lambda_I$.

5.2 Synthetic Dataset
To get an intuitive picture of how the above methods (except random sampling, which is trivial) work, we show their experimental results on a synthetic dataset in Figure 1. This dataset contains two circles, each of which constitutes a class; it has a strong manifold structure. We let the compared methods select 8 data points. As can be seen, the data points selected by AOD are concentrated on the inner circle (belonging to one class), which are not able to train a classifier. The data points selected by TED, LOD and Bound are distributed on both inner and outer circles (belonging to different classes), which are good at training a learner.
The data points selected by TED, LapIOD and Bound are distributed on both inner and outer circles 6 2.5 2.5 2.5 2 2 2 1.5 1.5 1.5 54 8 1 1 0.5 2 3 6 1 861 3 2 0 -0.5 -1.5 -2 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 2.5 -2.5 -2.5 -1 -1.5 -2 -2.5 -2.5 (a) AOD -1.5 -1 -0.5 0 0.5 1 1.5 2 2.5 (b) TED -2.5 -2.5 1 7 -1.5 -2 -2 4 -0.5 4 5 -1 2 8 6 0 2 -0.5 4 -1 -1.5 1 0.5 0 -0.5 7 -1 13 68 0.5 75 5 2 1.5 1 0.5 0 2.5 7 -2 -2 -1.5 -1 -0.5 0 0.5 (c) LOD 1 1.5 2 2.5 -2.5 -2.5 3 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 2.5 (d) Bound Figure 1: Selected points (the red marks) on the two circles dataset by (a) AOD; (b) TED; (c) LOD; and (d) Bound. (belonging to different classes), which are good at training a learner. Furthermore, the 8 data points selected by Bound are uniformly distributed on the two circles, four from the inner circle, and the other four from the outer circle, which can better represent the original data. 5.3 Real Datasets & Parameter Settings In the following, we use three real-world benchmark datasets to evaluate the compared methods. wdbc is the Wisconsin Diagnostic Breast Cancer data set, which is from UCI machine learning repository4 . It aims at predicting the breast cancer as benign or malignant based on the digitalized images. There are 357 positive samples and 212 negative samples. Each sample has 32 attributes. ORL face database5 contains 10 images for each of the 40 human subjects, which were taken at different times, varying the lighting, facial expressions and facial details. The original images (with 256 gray levels) have size 92 ? 112, which are resized to 32 ? 32 for efficiency. Isolet was first used in [11]. It contains 150 people who spoke each letter of the alphabet twice. The speakers are grouped into sets of 30 speakers each, and we use the first group, referred to Isolet1. Each sample is represented by a 617-dimensional feature vector. For each data set, we randomly select 20% data as held-out set for model selection, and the rest 80% data as work set. In order to randomize the experiments, in each run of experiments, we restrict the training data (pool of candidate data points) to be selected from a random sampling of 50% work set (which accounts for 40% of the total data). The remaining half data (40% of the total data) is used as test set. Once the labeled data are selected, we train a semi-supervised version of LapRLS, which uses both labeled and unlabeled data (all the training data) for manifold regularization. We report the classification result on the test set. This random split was repeated 10 times, thus we can compute the mean and standard deviation of the classification accuracy. The parameters of compared methods (See Section 5.1) are tuned by 2-fold cross validation on the held-out set. For the parameters of LapRLS, we use the same parameters of LOD (or Bound) for LapRLS. For the wdbc dataset, the chosen parameters are ?A = 0.001, ?I = 0.01. For ORL, ?A = 0.0001, ?I = 0.001. For Isolet1, ?A = 0.01, ?I = 0.001. For wdbc, we let the compared methods incrementally choose {2, 4, . . . , 20} points to label, for ORL, we incrementally choose {80, 90, . . . , 150} points for labeling, and for Isolet1, we choose {30, 40, . . . , 120} points to query. 5.4 Results on Real Datasets The experimental results are shown in Figure 2. In all subfigures, the x-axis represents the number of labeled points, while the y-axis is the averaged classification accuracy on the test data over 10 runs. 
In order to show some concrete results, we also list the accuracy and running time (in seconds) of all the compared methods on the three datasets, with 2, 80 and 30 labeled data points respectively, in Table 1.

⁴ http://archive.ics.uci.edu/ml/
⁵ http://www.cl.cam.ac.uk/Research/DTG/attarchive:pub/data

Figure 2: Comparison of different methods on (a) wdbc; (b) ORL; and (c) Isolet1 using LapRLS.

Table 1: Classification accuracy (%) and running time (in seconds) of compared methods on the three datasets.

Dataset    wdbc (2 labeled)        ORL (80 labeled)        Isolet1 (30 labeled)
           Acc            time     Acc            time     Acc            time
Random     69.47 ± 14.56  --       72.00 ± 4.05   --       44.36 ± 3.09   --
AOD        68.59 ± 12.46  0.0      65.17 ± 3.14   32.2     40.27 ± 2.24   7.4
TED        68.33 ± 10.68  0.0      80.33 ± 2.94   39.6     55.98 ± 2.54   41.1
LOD        63.48 ± 8.38   0.1      80.25 ± 2.64   41.7     57.79 ± 1.87   41.5
Bound      88.68 ± 2.82   0.3      83.25 ± 3.17   23.4     61.99 ± 2.14   17.4

For each dataset, we performed paired t-tests between the proposed method and the other methods at the 95% confidence level. If Bound is significantly better than all the other methods, the corresponding entry is bolded. We observe that the proposed selective labeling method greatly outperforms the other methods in most cases. AOD is usually worse than random sampling; the reason is that minimizing the variance of the model parameters does not guarantee the quality of the predictions on the data. TED performs very well. As mentioned before, the criterion of TED coincides with minimizing the out-of-sample error bound of ridge regression, which explains its good empirical performance. The performance of LOD is slightly better than that of TED, because LOD incorporates the geometric structure into TED. The superior performance of our method is attributed to its theoretical foundation, which guarantees that the learner (LapRLS) can achieve small error on the test data. In addition, the running time of our method is comparable to, or even less than, that of the other methods. One may argue that the above comparison is unfair because we use LapRLS as the learner, which tends to favor the proposed method. Therefore, we also compared the different methods using ridge regression (RR) as the learner, and found that our proposed method is again much better than the others. Due to space limitations, we omit these results and put them in the supplemental material.

6 Conclusions

The main contributions of this paper are: (1) we present a deterministic out-of-sample error bound for LapRLS; (2) we present a selective labeling method that minimizes this upper bound; and (3) we present a simple yet effective algorithm to optimize the criterion for selective labeling.

Acknowledgement

The work was supported in part by U.S. National Science Foundation grants IIS-0905215 and CNS-0931975, the U.S. Army Research Laboratory under Cooperative Agreement No. W911NF-09-2-0053 (NS-CTA), the U.S. Air Force Office of Scientific Research MURI award FA9550-08-1-0265, and MIAS, a DHS-IDS Center for Multimodal Information Access and Synthesis at UIUC. We would like to thank the anonymous reviewers for their helpful comments.

References

[1] A. D. Anthony Atkinson and R. Tobias. Optimum Experimental Designs. Oxford University Press, 2007.
[2] M.-F. Balcan, A.
Beygelzimer, and J. Langford. Agnostic active learning. In ICML, pages 65–72, 2006.
[3] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[4] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
[5] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS, pages 199–207, 2010.
[6] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499–526, 2002.
[7] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[8] F. R. K. Chung. Spectral Graph Theory. American Mathematical Society, February 1997.
[9] D. A. Cohn, L. E. Atlas, and R. E. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[10] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. In NIPS, pages 705–712, 1994.
[11] M. A. Fanty and R. A. Cole. Spoken letter recognition. In NIPS, pages 220–226, 1990.
[12] G. H. Golub and C. F. V. Loan. Matrix Computations (3rd ed.). Johns Hopkins University Press, Baltimore, MD, USA, 1996.
[13] A. Guillory and J. Bilmes. Active semi-supervised learning using submodular functions. In UAI, pages 274–282, 2011.
[14] S. Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333–361, 2011.
[15] T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer-Verlag, 2001.
[16] X. He, W. Min, D. Cai, and K. Zhou. Laplacian optimal design for image retrieval. In SIGIR, pages 119–126, 2007.
[17] B. Korte and J. Vygen. Combinatorial Optimization: Theory and Algorithms. Springer Publishing Company, Incorporated, 4th edition, 2007.
[18] G. Schohn and D. Cohn. Less is more: Active learning with support vector machines. In ICML, pages 839–846, 2000.
[19] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. In ICML, pages 999–1006, 2000.
[20] K. Yu, J. Bi, and V. Tresp. Active learning via transductive experimental design. In ICML, pages 1081–1088, 2006.
[21] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2003.
[22] X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, pages 912–919, 2003.
A Unifying Perspective of Parametric Policy Search Methods for Markov Decision Processes

David Barber
Department of Computer Science
University College London
[email protected]

Thomas Furmston
Department of Computer Science
University College London
[email protected]

Abstract

Parametric policy search algorithms are one of the methods of choice for the optimisation of Markov Decision Processes, with Expectation Maximisation and natural gradient ascent being popular methods in this field. In this article we provide a unifying perspective of these two algorithms by showing that their search-directions in the parameter space are closely related to the search-direction of an approximate Newton method. This analysis leads naturally to the consideration of this approximate Newton method as an alternative optimisation method for Markov Decision Processes. We are able to show that the algorithm has numerous desirable properties, absent in the naive application of Newton's method, that make it a viable alternative to either Expectation Maximisation or natural gradient ascent. Empirical results suggest that the algorithm has excellent convergence and robustness properties, performing strongly in comparison to both Expectation Maximisation and natural gradient ascent.

1 Markov Decision Processes

Markov Decision Processes (MDPs) are the most commonly used model for the description of sequential decision making processes in a fully observable environment, see e.g. [5]. A MDP is described by the tuple {S, A, H, p₁, p, π, R}, where S and A are sets known respectively as the state and action space, H ∈ ℕ is the planning horizon, which can be either finite or infinite, and {p₁, p, π, R} are functions referred to as the initial state distribution, transition dynamics, policy (or controller) and the reward function. In general the state and action spaces can be arbitrary sets, but we restrict our attention to either discrete sets or subsets of ℝⁿ, where n ∈ ℕ. We use boldface notation to represent a vector and also use the notation z = (s, a) to denote a state-action pair. Given a MDP, the trajectory of the agent is determined by the following recursive procedure: given the agent's state, s_t, at a given time-point, t ∈ ℕ_H, an action is selected according to the policy, a_t ∼ π(·|s_t); the agent then transitions to a new state according to the transition dynamics, s_{t+1} ∼ p(·|a_t, s_t); this process is iterated sequentially through all of the time-points in the planning horizon, where the state of the initial time-point is determined by the initial state distribution, s₁ ∼ p₁(·). At each time-point the agent receives a (scalar) reward that is determined by the reward function, where this function depends on the current action and state of the environment. Typically the reward function is assumed to be bounded, but as the objective is linear in the reward function we assume w.l.o.g. that it is non-negative. The most widely used objective in the MDP framework is to maximise the total expected reward of the agent over the course of the planning horizon. This objective can take various forms, including an infinite planning horizon, with either discounted or average rewards, or a finite planning horizon. The theoretical contributions of this paper are applicable to all three frameworks, but for notational ease and for reasons of space we concern ourselves with the infinite horizon framework with discounted rewards.
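The recursive sampling procedure above is simple to simulate. The sketch below (ours, in Python, with hypothetical callables standing in for p₁, p, π and R) draws a length-H trajectory and accumulates the discounted reward used in the objective that follows.

```python
def sample_trajectory(p1, p, pi, R, H, gamma):
    """Sample a trajectory under the recursive procedure of Section 1.

    p1()     : draws an initial state s1 ~ p1(.)
    pi(s)    : draws an action a ~ pi(.|s)
    p(a, s)  : draws the next state s' ~ p(.|a, s)
    R(a, s)  : scalar reward for the state-action pair
    Returns the state-action pairs and the discounted return
    sum_t gamma^(t-1) R(a_t, s_t).
    """
    s = p1()
    traj, ret = [], 0.0
    for t in range(1, H + 1):
        a = pi(s)
        traj.append((s, a))
        ret += gamma ** (t - 1) * R(a, s)
        s = p(a, s)
    return traj, ret
```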
In this framework the boundedness of the objective function is ensured by the introduction of a discount factor, γ ∈ [0, 1), which scales the rewards of the various time-points in a geometric manner. Writing the objective function and trajectory distribution directly in terms of the parameter vector, then, for any w ∈ W, the objective function takes the form

$$U(w) = \sum_{t=1}^{\infty} \mathbb{E}_{p_t(a,s;w)}\left[ \gamma^{t-1} R(a,s) \right], \qquad (1)$$

where we have denoted the parameter space by W and have used the notation p_t(a, s; w) to represent the marginal p(s_t = s, a_t = a; w) of the joint state-action trajectory distribution

$$p(a_{1:H}, s_{1:H}; w) = \pi(a_H | s_H; w) \left[ \prod_{t=1}^{H-1} p(s_{t+1} | a_t, s_t)\, \pi(a_t | s_t; w) \right] p_1(s_1), \qquad H \in \mathbb{N}. \qquad (2)$$

Note that the policy is now written in terms of its parametric representation, π(a|s; w). It is well known that the global optimum of (1) can be obtained through dynamic programming, see e.g. [5]. However, due to various issues, such as prohibitively large state-action spaces or highly non-linear transition dynamics, it is not possible to find the global optimum of (1) in most real-world problems of interest. Instead most research in this area focuses on obtaining approximate solutions, for which there exist numerous techniques, such as approximate dynamic programming methods [6], Monte-Carlo tree search methods [19] and policy search methods, both parametric [27, 21, 16, 18] and non-parametric [2, 25]. This work is focused solely on parametric policy search methods, by which we mean gradient-based methods, such as steepest and natural gradient ascent [23, 1], along with Expectation Maximisation [11], which is a bound optimisation technique from the statistics literature. Since their introduction [14, 31, 10, 16] these methods have been the centre of a large amount of research, with much of it focusing on gradient estimation [21, 4], variance reduction techniques [30, 15], function approximation techniques [27, 8, 20] and real-world applications [18, 26]. While steepest gradient ascent has enjoyed some success, it is known to suffer from substantial issues that often make it unattractive in practice, such as slow convergence and susceptibility to poor scaling of the objective function [23]. Various optimisation methods have been introduced as alternatives, most notably natural gradient ascent [16, 24, 3] and Expectation Maximisation [18, 28], which are currently the methods of choice among parametric policy search algorithms. In this paper our primary focus is on the search-direction (in the parameter space) of these two methods.

2 Search Direction Analysis

In this section we perform a novel analysis of the search-direction of both natural gradient ascent and Expectation Maximisation. In gradient-based algorithms for Markov Decision Processes the update of the policy parameters takes the form

$$w_{\text{new}} = w + \alpha\, M(w)\, \nabla_w U(w), \qquad (3)$$

where α ∈ ℝ⁺ is the step-size parameter and M(w) is some positive-definite matrix that possibly depends on w. It is well-known that such an update will increase the total expected reward, provided that α is sufficiently small, and this process will converge to a local optimum of (1) provided the step-size sequence is appropriately selected. While EM does not have an update of the form given in (3), we shall see that the algorithm is closely related to such an update. It is convenient for later reference to note that the gradient ∇_w U(w) can be written in the following form

$$\nabla_w U(w) = \mathbb{E}_{p_\gamma(z;w)\, Q(z;w)}\left[ \nabla_w \log \pi(a|s; w) \right], \qquad (4)$$

where we use the expectation notation E[·]
to denote the integral/summation w.r.t. a non-negative function. The term p_γ(z; w) is a geometrically weighted average of state-action occupancy marginals given by

$$p_\gamma(z; w) = \sum_{t=1}^{\infty} \gamma^{t-1} p_t(z; w),$$

while the term Q(z; w) is referred to as the state-action value function and is equal to the total expected future reward from the current time-point onwards, given the current state-action pair, z, and parameter vector, w, i.e.

$$Q(z; w) = \sum_{t=1}^{\infty} \mathbb{E}_{p_t(z'; w)}\left[ \gamma^{t-1} R(z') \mid z_1 = z \right].$$

This is a standard result and due to reasons of space we have omitted the details, but see e.g. [27] or section(6.1) of the supplementary material for more details. An immediate issue concerning updates of the form (3) is the selection of the 'optimal' choice of the matrix M(w), which clearly depends on the sense in which optimality is defined. There are numerous reasonable properties that are desirable of such an update, including the numerical stability and computational complexity of the parameter update, as well as the rate of convergence of the overall algorithm resulting from these updates. While these are all reasonable criteria, the rate of convergence is of such importance in an optimisation algorithm that it is a logical starting point in our analysis. For this reason we concern ourselves with relating these two parametric policy search algorithms to the Newton method, which has the highly desirable property of a quadratic rate of convergence in the vicinity of a local optimum. The Newton method is well-known to suffer from problems that make it either infeasible or unattractive in practice, but in terms of forming a basis for theoretical comparisons it is a logical starting point. We shall discuss some of the issues with the Newton method in more detail in section(3). In the Newton method the matrix M(w) is set to the negative inverse Hessian, i.e.

$$M(w) = -H^{-1}(w), \qquad \text{where} \qquad H(w) = \nabla_w \nabla_w^\top U(w),$$

where we have denoted the Hessian by H(w). Using methods similar to those used to calculate the gradient, it can be shown that the Hessian takes the form

$$H(w) = H_1(w) + H_2(w), \qquad (5)$$

where

$$H_1(w) = \sum_{t=1}^{\infty} \mathbb{E}_{p(z_{1:t}; w)}\left[ \gamma^{t-1} R(z_t)\, \nabla_w \log p(z_{1:t}; w)\, \nabla_w^\top \log p(z_{1:t}; w) \right], \qquad (6)$$

$$H_2(w) = \sum_{t=1}^{\infty} \mathbb{E}_{p(z_{1:t}; w)}\left[ \gamma^{t-1} R(z_t)\, \nabla_w \nabla_w^\top \log p(z_{1:t}; w) \right]. \qquad (7)$$

We have omitted the details of the derivation, but these can be found in section(6.2) of the supplementary material, with a similar derivation of a sample-based estimate of the Hessian given in [4].

2.1 Natural Gradient Ascent

To overcome some of the issues that can hinder steepest gradient ascent an alternative, natural gradient, was introduced in [16]. Natural gradient ascent techniques originated in the neural network and blind source separation literature, see e.g. [1], and take the perspective that the parameter space has a Riemannian manifold structure, as opposed to a Euclidean structure. Deriving the steepest ascent direction of U(w) w.r.t. a local norm defined on this parameter manifold (as opposed to w.r.t. the Euclidean norm, which is the case in steepest gradient ascent) results in natural gradient ascent. We denote the quadratic form that induces this local norm on the parameter manifold by G(w), i.e. d(w)² = wᵀG(w)w. The derivation of natural gradient ascent is well-known, see e.g. [1], and its application to the objective (1) results in a parameter update of the form

$$w_{k+1} = w_k + \alpha_k G^{-1}(w_k) \nabla_w U(w_k).$$

In terms of (3) this corresponds to M(w) = G⁻¹(w). In the case of MDPs the most commonly used local norm is given by the Fisher information matrix of the trajectory distribution, see e.g. [3, 24], and due to the Markovian structure of the dynamics it is given by

$$G(w) = -\mathbb{E}_{p_\gamma(z; w)}\left[ \nabla_w \nabla_w^\top \log \pi(a|s; w) \right]. \qquad (8)$$

We note that there is an alternate, but equivalent, form of writing the Fisher information matrix, see e.g. [24], but we do not use it in this work.
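Both the Fisher matrix G(w) of (8) and the reward-weighted matrix H₂(w) discussed in the next section are expectations of the log-policy Hessian, differing only in the weighting. A sample-based sketch (ours, not the authors' code) makes the contrast concrete. Here `dlogpi_hess` is a hypothetical callable returning ∇_w∇_wᵀ log π(a|s; w), and q is a Monte-Carlo estimate of Q(z; w), such as the discounted return from that time-point onwards; these stand-ins are assumptions of the illustration.

```python
import numpy as np

def estimate_G_and_H2(trajectories, dlogpi_hess, gamma, n_params):
    """Monte-Carlo estimates of the Fisher matrix G(w) of (8) and of
    the reward-weighted matrix H2(w) of (9).

    trajectories : list of trajectories, each a list of (s, a, q)
                   triples, where q estimates Q(z; w) at that step.
    dlogpi_hess  : returns the (n_params, n_params) Hessian of
                   log pi(a|s; w) at the current parameters.
    Both quantities average the same Hessian; G uses the occupancy
    weight gamma^(t-1) alone, H2 additionally weights by q.
    """
    G = np.zeros((n_params, n_params))
    H2 = np.zeros((n_params, n_params))
    n = len(trajectories)
    for traj in trajectories:
        for t, (s, a, q) in enumerate(traj, start=1):
            Hlog = dlogpi_hess(s, a)
            G -= gamma ** (t - 1) * Hlog / n        # note minus sign in (8)
            H2 += gamma ** (t - 1) * q * Hlog / n
    return G, H2
```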
In the case of MDPs the most commonly used local norm is given by the Fisher information matrix of the trajectory distribution, see e.g. [3, 24], and due to the Markovian structure of the dynamics it is given by   T G(w) = ?Ep? (z;w) ?w ?w log ?(a|s; w) . (8) We note that there is an alternate, but equivalent, form of writing the Fisher information matrix, see e.g. [24], but we do not use it in this work. 3 In order to relate natural gradient ascent to the Newton method we first rewrite the matrix (7) into the following form   T H2 (w) = Ep? (z;w)Q(z;w) ?w ?w log ?(a|s; w) . (9) For reasons of space the details of this reformulation of (7) are left to section(6.2) of the supplementary material. Comparing the Fisher information matrix (8) with the form of H2 (w) given in (9) it is clear that natural gradient ascent has a relationship with the approximate Newton method that uses H2 (w) in place of H(w). In terms of (3) this approximate Newton method corresponds to setting M(w) = ?H2?1 (w). In particular it can be seen that the difference between the two methods lies in the non-negative function w.r.t. which the expectation is taken in (8) and (9). (It also appears that there is a difference in sign, but observing the form of M(w) for each algorithm shows that this is not the case.) In the Fisher information matrix the expectation is taken w.r.t. to the geometrically weighted summation of state-action occupancy marginals of the trajectory distribution, while in H2 (w) there is an additional weighting from the state-action value function. Hence, H2 (w) incorporates information about the reward structure of the objective function, whereas the Fisher information matrix does not, and so it will generally contain more information about the curvature of the objective function. 2.2 Expectation Maximisation The Expectation Maximisation algorithm, or EM-algorithm, is a powerful optimisation technique from the statistics literature, see e.g. [11], that has recently been the centre of much research in the planning and reinforcement learning communities, see e.g. [10, 28, 18]. A quantity of central importance in the EM-algorithm for MDPs is the following lower-bound on the log-objective   log U (w) ? Hentropy (q(z1:t , t)) + Eq(z1:t ,t) log ? t?1 R(zt )p(z1:t ; w) , (10) where Hentropy is the entropy function and q(z1:t , t) is known as the ?variational distribution?. Further details of the EM-algorithm for MDPs and a derivation of (10) are given in section(6.3) of the supplementary material or can be found in e.g. [18, 28]. The parameter update of the EM-algorithm is given by the maximum (w.r.t. w) of the ?energy? term,   Q(w, wk ) = Ep? (z;wk )Q(z;wk ) log ?(a|s; w) . (11) Note that Q is a two-parameter function, where the first parameter occurs inside the expectation and the second parameter defines the non-negative function w.r.t. the expectation is taken. This decoupling allows the maximisation over w to be performed explicitly in many cases of interest. For example, when the log-policy is quadratic in w the maximisation problems is equivalent to a least-squares problem and the optimum can be found through solving a linear system of equations. It has previously been noted, again see e.g. [18], that the parameter update of steepest gradient ascent and the EM-algorithm can be related through this ?energy? term. In particular the gradient (4) evaluated at wk can also be written as follows ?w|w=wk U (w) = ?10 w|w=wk Q(w, wk ), where 10 we use the notation ?w to denote the first derivative w.r.t. 
the first parameter, while the update of the EM-algorithm is given by w_{k+1} = argmax_{w∈W} Q(w, w_k). In other words, steepest gradient ascent moves in the direction that most rapidly increases Q(w, w_k) w.r.t. the first variable, while the EM-algorithm maximises Q(w, w_k) w.r.t. the first variable. While this relationship is true, it is also quite a negative result. It states that in situations where it is not possible to explicitly perform the maximisation over w in (11), the alternative, in terms of the EM-algorithm, is a generalised EM-algorithm, which is equivalent to steepest gradient ascent. Considering that algorithms such as EM are typically considered because of the negative aspects of steepest gradient ascent, this is an undesirable alternative. It is possible to find the optimum of (11) numerically, but this is also undesirable as it results in a double-loop algorithm that could be computationally expensive. Finally, this result provides no insight into the behaviour of the EM-algorithm, in terms of the direction of its parameter update, when the maximisation over w in (11) can be performed explicitly. Instead we provide the following result, which shows that the step-direction of the EM-algorithm has an underlying relationship with the Newton method. In particular we show that, under suitable regularity conditions, the direction of the EM-update, i.e. w_{k+1} − w_k, is the same, up to first order, as the direction of an approximate Newton method that uses H₂(w) in place of H(w).

Theorem 1. Suppose we are given a Markov Decision Process with objective (1) and Markovian trajectory distribution (2). Consider the update of the parameter through Expectation Maximisation at the kth iteration of the algorithm, i.e. w_{k+1} = argmax_{w∈W} Q(w, w_k). Provided that Q(w, w_k) is twice continuously differentiable in the first parameter, we have

$$w_{k+1} - w_k = -H_2^{-1}(w_k)\, \nabla_{w}\big|_{w=w_k} U(w) + O(\|w_{k+1} - w_k\|^2). \qquad (12)$$

Additionally, in the case where the log-policy is quadratic the relation to the approximate Newton method is exact, i.e. the second term on the r.h.s. of (12) is zero.

Proof. The idea of the proof is simple and only involves performing a Taylor expansion of ∇_w^{10} Q(w, w_k). As Q is assumed to be twice continuously differentiable in the first component this Taylor expansion is possible and gives

$$\nabla_w^{10} Q(w_{k+1}, w_k) = \nabla_w^{10} Q(w_k, w_k) + \nabla_w^{20} Q(w_k, w_k)(w_{k+1} - w_k) + O(\|w_{k+1} - w_k\|^2). \qquad (13)$$

As w_{k+1} = argmax_{w∈W} Q(w, w_k) it follows that ∇_w^{10} Q(w_{k+1}, w_k) = 0. This means that, upon ignoring higher order terms in w_{k+1} − w_k, the Taylor expansion (13) can be rewritten in the form

$$w_{k+1} - w_k = -\left( \nabla_w^{20} Q(w_k, w_k) \right)^{-1} \nabla_w^{10} Q(w_k, w_k). \qquad (14)$$

The proof is completed by observing that ∇_w^{10} Q(w_k, w_k) = ∇_w|_{w=w_k} U(w) and ∇_w^{20} Q(w_k, w_k) = H₂(w_k). The second statement follows because in the case where the log-policy is quadratic the higher order terms in the Taylor expansion vanish.

2.3 Summary

In this section we have provided a novel analysis of both natural gradient ascent and Expectation Maximisation when applied to the MDP framework. Previously, while both of these algorithms have proved popular methods for MDP optimisation, there has been little understanding of them in terms of their search-direction in the parameter space or their relation to the Newton method.
Firstly, our analysis shows that the Fisher information matrix, which is used in natural gradient ascent, is similar to H₂(w) in (5), with the exception that information about the reward structure of the problem is not contained in the Fisher information matrix, while such information is contained in H₂(w). Additionally, we have shown that the step-direction of the EM-algorithm is, up to first order, that of an approximate Newton method that uses H₂(w) in place of H(w) and employs a constant step-size of one.

3 An Approximate Newton Method

A natural follow-on from the analysis in section(2) is the consideration of using M(w) = −H₂⁻¹(w) in (3), a method we call the full approximate Newton method from this point onwards. In this section we show that this method has many desirable properties that make it an attractive alternative to other parametric policy search methods. Additionally, denoting by D₂(w) the diagonal matrix formed from the diagonal elements of H₂(w), we shall also consider the method that uses M(w) = −D₂⁻¹(w) in (3). We call this second method the diagonal approximate Newton method. Recall that in (3) it is necessary that M(w) is positive-definite (in the Newton method this corresponds to requiring the Hessian to be negative-definite) to ensure an increase of the objective. In general the objective (1) is not concave, which means that the Hessian will not be negative-definite over the entire parameter space. In such cases the Newton method can actually lower the objective, and this is an undesirable aspect of the Newton method. An attractive property of the approximate Hessian, H₂(w), is that it is always negative-definite when the policy is log-concave in the policy parameters. This fact follows from the observation that in such cases H₂(w) is a non-negative mixture of negative-definite matrices, which again is negative-definite [9]. Additionally, the diagonal terms of a negative-definite matrix are negative, so D₂(w) is also negative-definite when the controller is log-concave. To motivate this result we now briefly consider some widely used policies that are either log-concave or blockwise log-concave. Firstly, consider the Gibbs policy, π(a|s; w) ∝ exp(wᵀφ(a, s)), where φ(a, s) ∈ ℝ^{n_w} is a feature vector. This policy is widely used in discrete systems and is log-concave in w, which can be seen from the fact that log π(a|s; w) is the sum of a linear term and a negative log-sum-exp term, both of which are concave [9]. In systems with a continuous state-action space a common choice of controller is π(a|s; w_mean, Σ) = N(a | Kφ(s) + m, Σ(s)), where w_mean = {K, m} and φ(s) ∈ ℝ^{n_w} is a feature vector. The notation Σ(s) is used because there are cases where it is beneficial to have state-dependent noise in the controller. This controller is not jointly log-concave in w_mean and Σ, but it is blockwise log-concave in w_mean and Σ⁻¹. In terms of w_mean the log-policy is quadratic and the coefficient matrix of the quadratic term is negative-definite. In terms of Σ⁻¹ the log-policy consists of a linear term and a log-determinant term, both of which are concave. In terms of evaluating the search direction it is clear from the forms of D₂(w) and H₂(w) that many of the pre-existing gradient evaluation techniques can be extended to the approximate Newton framework with little difficulty. In particular, gradient evaluation requires calculating the expectation of the derivative of the log-policy w.r.t. p_γ(z; w)Q(z; w).
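For the Gibbs policy above, the required derivatives have simple closed forms: ∇_w log π = φ(a, s) − E_π[φ], and ∇_w∇_wᵀ log π = −Cov_π[φ], which is negative semi-definite, consistent with the log-concavity argument. A sketch of both (our illustration, not the paper's code):

```python
import numpy as np

def gibbs_log_policy_derivs(phi_all, w, a):
    """Gradient and Hessian of log pi(a|s; w) for the Gibbs policy
    pi(a|s; w) proportional to exp(w.T phi(a, s)).

    phi_all : (n_actions, n_w) feature vectors phi(a', s) for every
              action a' available in the current state s.
    a       : index of the action taken.
    """
    logits = phi_all @ w
    p = np.exp(logits - logits.max())
    p /= p.sum()                              # pi(.|s; w)
    mean_phi = p @ phi_all                    # E_pi[phi]
    grad = phi_all[a] - mean_phi              # grad log pi
    # Hessian: -Cov_pi[phi] = -(E[phi phi^T] - E[phi] E[phi]^T)
    second_moment = (phi_all * p[:, None]).T @ phi_all
    hess = -(second_moment - np.outer(mean_phi, mean_phi))
    return grad, hess
```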
In terms of inference, the only additional calculation necessary to implement either the full or diagonal approximate Newton method is the expectation (w.r.t. the same function) of the Hessian of the log-policy, or of its diagonal terms. As an example, in section(6.5) of the supplementary material we detail the extension of the recurrent state formulation of gradient evaluation in the average reward framework, see e.g. [31], to the approximate Newton method. We use this extension in the Tetris experiment considered in section(4). Given n_s samples and n_w parameters, the complexity of these extensions scales as O(n_s n_w) for the diagonal approximate Newton method, and as O(n_s n_w²) for the full approximate Newton method. An issue with the Newton method is the inversion of the Hessian matrix, which scales as O(n_w³) in the worst case. In the standard application of the Newton method this inversion has to be performed at each iteration, and in systems with many parameters this becomes prohibitively costly. In general H(w) will be dense and no computational savings will be possible when performing this matrix inversion. The same is not true, however, of the matrices D₂(w) and H₂(w). Firstly, as D₂(w) is a diagonal matrix it is trivial to invert. Secondly, there is an immediate source of sparsity that comes from taking the second derivative of the log-trajectory distribution in (7). This property ensures that any (product) sparsity over the control parameters in the log-trajectory distribution will correspond to sparsity in H₂(w). For example, in a partially observable Markov Decision Process where the policy is modelled through a finite state controller, see e.g. [22], there are three functions to be optimised, namely the initial belief distribution, the belief transition dynamics and the policy. When the parameters of these three functions are independent, H₂(w) will be block-diagonal (across the parameters of the three functions) and the matrix inversion can be performed more efficiently by inverting each of the block matrices individually. The reason that H(w) does not exhibit any such sparsity properties is the term H₁(w) in (5), which consists of a non-negative mixture of outer-product matrices. The vector in these outer products is the derivative of the log-trajectory distribution, and this typically produces a dense matrix. An undesirable aspect of steepest gradient ascent is that its performance is affected by the choice of basis used to represent the parameter space. An important and desirable property of the Newton method is that it is invariant to non-singular linear (affine) transformations of the parameter space, see e.g. [9]. This means that given a non-singular linear (affine) mapping T ∈ ℝ^{n_w × n_w}, the Newton update of the objective Ũ(w) = U(Tw) is related to the Newton update of the original objective through the same linear (affine) mapping, i.e. v + Δv_nt = T(w + Δw_nt), where v = Tw and Δv_nt and Δw_nt denote the respective Newton steps. In other words, running the Newton method on U(w) and on Ũ(T⁻¹w) will give identical results. An important point to note is that this desirable property is maintained when using H₂(w) in an approximate Newton method, while using D₂(w) results in a method that is invariant to rescaling of the parameters, i.e. to transformations where T is a diagonal matrix with non-zero elements along the diagonal.
This can be seen by using the linearity of the expectation operator; a proof of this statement is provided in section(6.4) of the supplementary material.

Figure 1: (a) An empirical illustration of the affine invariance of the approximate Newton method, performed on the two-state MDP of [16]. The plot shows the trace of the policy during training for the two different parameter spaces, where the results of the latter have been mapped back into the original parameter space for comparison. The plot shows the two steepest gradient ascent traces (blue cross and blue circle) and the two traces of the full approximate Newton method (red cross and red circle). (b) Results of the Tetris problem for steepest gradient ascent (black), natural gradient ascent (green), the diagonal approximate Newton method (blue) and the full approximate Newton method (red).

4 Experiments

The first experiment we performed was an empirical illustration that the full approximate Newton method is invariant to linear transformations of the parameter space. We considered the simple two-state example of [16], as it allows us to plot the trace of the policy during training, since the policy has only two parameters. The policy was trained using both steepest gradient ascent and the full approximate Newton method, in both the original and a linearly transformed parameter space. The policy traces of the two algorithms are plotted in figure(1.a). As expected, steepest gradient ascent is affected by such mappings, whilst the full approximate Newton method is invariant to them. The second experiment was aimed at demonstrating the scalability of the full and diagonal approximate Newton methods to problems with a large state space. We considered the Tetris domain, a popular computer game designed by Alexey Pajitnov in 1985; see [12] for more details. Firstly, we compared the performance of the full and diagonal approximate Newton methods to other parametric policy search methods. Tetris is typically played on a 20 × 10 grid, but due to computational costs we considered a 10 × 10 grid in the experiment. This results in a state space with roughly 7 × 2¹⁰⁰ states. We modelled the policy through a Gibbs distribution, with a feature vector containing the following features: the heights of each column, the differences in height between adjacent columns, the maximum height and the number of 'holes'. Under this policy it is not possible to obtain the explicit maximum over w in (11), and so a straightforward application of EM is not possible in this problem. We therefore compared the diagonal and full approximate Newton methods with steepest and natural gradient ascent. For reasons of space the exact implementation of the experiment is detailed in section(6.6) of the supplementary material. We ran 100 repetitions of the experiment, each consisting of 100 training iterations, and the mean and standard error of the results are given in figure(1.b). It can be seen that the full approximate Newton method outperforms all of the other methods, while the performance of the diagonal approximate Newton method is comparable to natural gradient ascent. We also ran several training runs of the full approximate Newton method on the full-sized 20 × 10 board and were able to obtain a score in the region of 14,000 completed lines, obtained after roughly 40 training iterations.
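For reference, the update used by the full and diagonal variants in these experiments is (3) with M(w) = −H₂⁻¹(w) or −D₂⁻¹(w). A minimal sketch of one iteration, assuming sample-based estimates of the gradient and of H₂ are available (e.g. from an estimator like the one sketched after Section 2.1):

```python
import numpy as np

def approx_newton_step(w, grad_U, H2, alpha, diagonal=False):
    """One parameter update w_new = w + alpha * M(w) grad U(w) with
    M(w) = -H2(w)^{-1} (full) or -D2(w)^{-1} (diagonal).

    For log-concave policies H2 is negative-definite, so -H2 (and
    -D2) is positive-definite and the step is an ascent direction.
    """
    if diagonal:
        step = -grad_U / np.diag(H2)          # -D2^{-1} grad
    else:
        step = -np.linalg.solve(H2, grad_U)   # -H2^{-1} grad
    return w + alpha * step
```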
An approximate dynamic programming based method has previously been applied to the Tetris domain in [7]. The same set of features was used and a score of roughly 4,500 completed lines was obtained after around 6 training iterations, after which the solution deteriorated. In the third experiment we considered a finite horizon (controlled) linear dynamical system. This allowed the search-directions of the various algorithms to be computed exactly using [13] and removed any issues of approximate inference from the comparison. In particular we considered a 3-link rigid manipulator, linearized through feedback linearisation, see e.g. [17].

Figure 2: (a) The normalised total expected reward plotted against training time, in seconds, for the 3-link rigid manipulator. The plot shows the results for steepest gradient ascent (black), EM (blue), natural gradient ascent (green) and the approximate Newton method (red); the plot shows the mean and standard error of the results. (b) The normalised total expected reward plotted against training iterations for the synthetic non-linear system of [29]. The plot shows the results for EM (blue), steepest gradient ascent (black), natural gradient ascent (green) and the approximate Newton method (red); the plot shows the mean and standard error of the results.

This system has a 6-dimensional state space, a 3-dimensional action space and a 22-dimensional parameter space. Further details of the system can be found in section(6.7) of the supplementary material. We ran the experiment 100 times, and the mean and standard error of the results are plotted in figure(2.a). In this experiment the approximate Newton method found substantially better solutions than steepest gradient ascent, natural gradient ascent or Expectation Maximisation. The superiority of the results in comparison to either steepest or natural gradient ascent can be explained by the fact that H₂(w) gives a better estimate of the curvature of the objective function. Expectation Maximisation performed poorly in this experiment, exhibiting sub-linear convergence. Steepest gradient ascent performed 3684 ± 314 training iterations in this experiment which, in comparison to the 203 ± 34 and 310 ± 40 iterations of natural gradient ascent and the approximate Newton method respectively, illustrates the susceptibility of this method to poor scaling. In the final experiment we considered the synthetic non-linear system of [29]. Full details of the system and the experiment can be found in section(6.8) of the supplementary material. We ran the experiment 100 times, and the mean and standard error of the results are plotted in figure(2.b). Again the approximate Newton method outperforms both steepest and natural gradient ascent. In this example only the mean parameters of the Gaussian controller are optimised, while the parameters of the noise are held fixed, which means that the log-policy is quadratic in the policy parameters. Hence, in this example the EM-algorithm is a particular (less general) version of the approximate Newton method, where a fixed step-size of one is used throughout.
The marked difference in performance between the EM-algorithm and the approximate Newton method shows the benefit of being able to tune the step-size sequence. In this experiment we considered five different step-size sequences for the approximate Newton method, and all of them obtained better results than the EM-algorithm. In contrast, only one of the seven step-size sequences considered for steepest and natural gradient ascent outperformed the EM-algorithm.

5 Conclusion

The contributions of this paper are twofold: firstly, we have given a novel analysis of Expectation Maximisation and natural gradient ascent when applied to the MDP framework, showing that both have close connections to an approximate Newton method; secondly, prompted by this analysis, we have considered the direct application of this approximate Newton method to the optimisation of MDPs, showing that it has numerous desirable properties that are not present in the naive application of the Newton method. In terms of empirical performance we have found the approximate Newton method to perform consistently well in comparison to EM and natural gradient ascent, highlighting its viability as an alternative to either of these methods. At present we have only considered actor-type implementations of the approximate Newton method, and the extension to actor-critic methods is a point of future research.

References

[1] S. Amari. Natural Gradient Works Efficiently in Learning. Neural Computation, 10:251–276, 1998.
[2] M. Azar, V. Gómez, and H. Kappen. Dynamic Policy Programming with Function Approximation. Journal of Machine Learning Research - Proceedings Track, 15:119–127, 2011.
[3] J. Bagnell and J. Schneider. Covariant Policy Search. IJCAI, 18:1019–1024, 2003.
[4] J. Baxter and P. Bartlett. Infinite Horizon Policy Gradient Estimation. Journal of Artificial Intelligence Research, 15:319–350, 2001.
[5] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, second edition, 2000.
[6] D. P. Bertsekas. Approximate Policy Iteration: A Survey and Some New Methods. Research report, Massachusetts Institute of Technology, 2010.
[7] D. P. Bertsekas and S. Ioffe. Temporal Differences-Based Policy Iteration and Applications in Neuro-Dynamic Programming. Research report, Massachusetts Institute of Technology, 1997.
[8] S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and L. Mark. Natural Actor-Critic Algorithms. Automatica, 45:2471–2482, 2009.
[9] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[10] P. Dayan and G. E. Hinton. Using Expectation-Maximization for Reinforcement Learning. Neural Computation, 9:271–278, 1997.
[11] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1–38, 1977.
[12] C. Fahey. Tetris AI, Computers Play Tetris. http://colinfahey.com/tetris/tetris_en.html, 2003.
[13] T. Furmston and D. Barber. Efficient Inference for Markov Control Problems. UAI, 29:221–229, 2011.
[14] P. W. Glynn. Likelihood Ratio Gradient Estimation for Stochastic Systems. Communications of the ACM, 33:75–84, 1990.
[15] E. Greensmith, P. Bartlett, and J. Baxter. Variance Reduction Techniques for Gradient-Based Estimates in Reinforcement Learning. Journal of Machine Learning Research, 5:1471–1530, 2004.
[16] S. Kakade. A Natural Policy Gradient. NIPS, 14:1531–1538, 2002.
[17] H. Khalil. Nonlinear Systems. Prentice Hall, 2001.
[18] J. Kober and J. Peters.
Policy Search for Motor Primitives in Robotics. Machine Learning, 84(1-2):171–203, 2011.
[19] L. Kocsis and C. Szepesvári. Bandit Based Monte-Carlo Planning. European Conference on Machine Learning (ECML), 17:282–293, 2006.
[20] V. R. Konda and J. N. Tsitsiklis. On Actor-Critic Algorithms. SIAM J. Control Optim., 42(4):1143–1166, 2003.
[21] P. Marbach and J. Tsitsiklis. Simulation-Based Optimisation of Markov Reward Processes. IEEE Transactions on Automatic Control, 46(2):191–209, 2001.
[22] N. Meuleau, L. Peshkin, K. Kim, and L. Kaelbling. Learning Finite-State Controllers for Partially Observable Environments. UAI, 15:427–436, 1999.
[23] J. Nocedal and S. Wright. Numerical Optimisation. Springer, 2006.
[24] J. Peters and S. Schaal. Natural Actor-Critic. Neurocomputing, 71(7-9):1180–1190, 2008.
[25] K. Rawlik, M. Toussaint, and S. Vijayakumar. On Stochastic Optimal Control and Reinforcement Learning by Approximate Inference. International Conference on Robotics Science and Systems, 2012.
[26] S. Richter, D. Aberdeen, and J. Yu. Natural Actor-Critic for Road Traffic Optimisation. NIPS, 19:1169–1176, 2007.
[27] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy Gradient Methods for Reinforcement Learning with Function Approximation. NIPS, 13:1057–1063, 2000.
[28] M. Toussaint, S. Harmeling, and A. Storkey. Probabilistic Inference for Solving (PO)MDPs. Research Report EDI-INF-RR-0934, University of Edinburgh, School of Informatics, 2006.
[29] N. Vlassis, M. Toussaint, G. Kontes, and S. Piperidis. Learning Model-Free Robot Control by a Monte Carlo EM Algorithm. Autonomous Robots, 27(2):123–130, 2009.
[30] L. Weaver and N. Tao. The Optimal Reward Baseline for Gradient-Based Reinforcement Learning. UAI, 17(29):538–545, 2001.
[31] R. Williams. Simple Statistical Gradient Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8:229–256, 1992.
Near-Optimal MAP Inference for Determinantal Point Processes

Jennifer Gillenwater, Alex Kulesza, Ben Taskar
Computer and Information Science, University of Pennsylvania
{jengi,kulesza,taskar}@cis.upenn.edu

Abstract

Determinantal point processes (DPPs) have recently been proposed as computationally efficient probabilistic models of diverse sets for a variety of applications, including document summarization, image search, and pose estimation. Many DPP inference operations, including normalization and sampling, are tractable; however, finding the most likely configuration (MAP), which is often required in practice for decoding, is NP-hard, so we must resort to approximate inference. This optimization problem, which also arises in experimental design and sensor placement, involves finding the largest principal minor of a positive semidefinite matrix. Because the objective is log-submodular, greedy algorithms have been used in the past with some empirical success; however, these methods only give approximation guarantees in the special case of monotone objectives, which correspond to a restricted class of DPPs. In this paper we propose a new algorithm for approximating the MAP problem based on continuous techniques for submodular function maximization. Our method involves a novel continuous relaxation of the log-probability function, which, in contrast to the multilinear extension used for general submodular functions, can be evaluated and differentiated exactly and efficiently. We obtain a practical algorithm with a 1/4-approximation guarantee for a more general class of non-monotone DPPs; our algorithm also extends to MAP inference under complex polytope constraints, making it possible to combine DPPs with Markov random fields, weighted matchings, and other models. We demonstrate that our approach outperforms standard and recent methods on both synthetic and real-world data.

1 Introduction

Informative subset selection problems arise in many applications where a small number of items must be chosen to represent or cover a much larger set; for instance, text summarization [1, 2], document and image search [3, 4, 5], sensor placement [6], viral marketing [7], and many others. Recently, probabilistic models extending determinantal point processes (DPPs) [8, 9] were proposed for several such problems [10, 5, 11]. DPPs offer computationally attractive properties, including exact and efficient computation of marginals [8], sampling [12, 5], and (partial) parameter estimation [13]. They are characterized by a notion of diversity, as shown in Figure 1; points in the plane sampled from a DPP (center) are more spread out than those sampled independently (left). However, in many cases we would like to make use of the most likely configuration (MAP inference, right), which involves finding the largest principal minor of a positive semidefinite matrix. This is an NP-hard problem [14], and so we must resort to approximate inference methods. The DPP probability is a log-submodular function, and hence greedy algorithms are natural; however, the standard greedy algorithm of Nemhauser and Wolsey [15] offers an approximation guarantee of 1 − 1/e only for non-decreasing (monotone) submodular functions, and does not apply for general DPPs.

Figure 1: From left to right, a set of points in the plane sampled independently at random, a sample drawn from a DPP, and an approximation of the DPP MAP set estimated by our algorithm. (Panels: independent sample, DPP sample, DPP MAP.)
In addition, we are often interested in conditioning MAP inference on knapsack-type budget constraints, matroid constraints, or general polytope constraints. For example, we might consider a DPP model over edges of a bipartite graph and ask for the most likely set under the one-to-one matching constraint. In this paper we propose a new algorithm for approximating MAP inference that handles these types of constraints for non-monotone DPPs. Recent work on non-monotone submodular function optimization can be broadly split into combinatorial versus continuous approaches. Among combinatorial methods, modified greedy, local search, and simulated annealing algorithms provide certain constant factor guarantees [16, 17, 18] and have been recently extended to optimization under knapsack and matroid constraints [19, 20]. Continuous methods [21, 22] use a multilinear extension of the submodular set function to the convex hull of the feasible sets and then round fractional solutions obtained by maximizing in the interior of the polytope. Our algorithm falls into the continuous category, using a novel and efficient non-linear continuous extension specifically tailored to DPPs. In comparison to the constant-factor algorithms for general submodular functions, our approach is more efficient because we have explicit access to the objective function and its gradient. In contrast, general submodular functions assume a simple function oracle and need to employ sampling to estimate function and gradient values in the polytope interior. We show that our non-linear extension enjoys some of the critical properties of the standard multilinear extension and propose an efficient algorithm that can handle solvable polytope constraints. Our algorithm compares favorably to greedy and recent "symmetric" greedy [18] methods on unconstrained simulated problems, simulated problems under matching constraints, and a real-world matching task using quotes from political candidates.

2 Background

Determinantal point processes (DPPs) are distributions over subsets that prefer diversity. Originally, DPPs were introduced to model fermions in quantum physics [8], but since then they have arisen in a variety of other settings including non-intersecting random paths, random spanning trees, and eigenvalues of random matrices [9, 23, 12]. More recently, they have been applied as probabilistic models for machine learning problems [10, 13, 5, 11]. Formally, a DPP P on a set of items 𝒴 = {1, 2, ..., N} is a probability measure on 2^𝒴, the set of all subsets of 𝒴. For every Y ⊆ 𝒴 we have:

P(Y) ∝ det(L_Y)   (1)

where L is a positive semidefinite matrix, L_Y = [L_ij]_{i,j ∈ Y} denotes the restriction of L to the entries indexed by elements of Y, and det(L_∅) = 1. If L is written as a Gram matrix, L = B^T B, then the quantity det(L_Y) can be interpreted as the squared volume spanned by the column vectors B_i for i ∈ Y. If L_ij = B_i^T B_j is viewed as a measure of similarity between items i and j, then when i and j are similar their vectors are relatively non-orthogonal, and therefore sets including both i and j will span less volume and be less probable. This is illustrated in Figure 2. As a result, DPPs assign higher probability to sets that are diverse under L.

Figure 2: (a) The DPP probability of a set Y depends on the volume spanned by vectors B_i for i ∈ Y. (b) As length increases, so does volume. (c) As similarity increases, volume decreases.
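To make Equation (1) concrete: the unnormalized probability of a set is just a principal minor of L. The following minimal numpy sketch (ours, not the authors' released code; the small matrix B and all variable names are illustrative) evaluates det(L_Y) and reproduces the volume intuition of Figure 2.

import numpy as np

def dpp_unnormalized_prob(L, Y):
    """Unnormalized DPP probability det(L_Y) for an index set Y."""
    Y = list(Y)
    if not Y:
        return 1.0  # the determinant of the empty minor is defined as 1
    return np.linalg.det(L[np.ix_(Y, Y)])

# Illustrative Gram kernel L = B^T B with two nearly parallel columns
# (items 0 and 1) and one nearly orthogonal column (item 2).
B = np.array([[1.0, 0.9, 0.1],
              [0.0, 0.1, 1.0]])
L = B.T @ B

# The diverse pair {0, 2} spans more volume than the similar pair {0, 1},
# so it receives a higher unnormalized probability.
print(dpp_unnormalized_prob(L, [0, 1]))  # small: vectors nearly parallel
print(dpp_unnormalized_prob(L, [0, 2]))  # larger: vectors nearly orthogonal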
The normalization constant in Equation (1) can be computed explicitly thanks to the identity

Σ_Y det(L_Y) = det(L + I),   (2)

where I is the N × N identity matrix. In fact, a variety of probabilistic inference operations can be performed efficiently, including sampling, marginalization, and conditioning [12, 24]. However, the maximum a posteriori (MAP) problem arg max_Y det(L_Y) is NP-hard [14]. In many practical situations it would be useful to approximate the MAP set; for instance, during decoding, online training, etc.

2.1 Submodularity

A function f : 2^𝒴 → R is called submodular if it satisfies

f(X ∪ {i}) − f(X) ≥ f(Y ∪ {i}) − f(Y)   (3)

whenever X ⊆ Y and i ∉ Y. Intuitively, the contribution made by a single item i only decreases as the set grows. Common submodular functions include the mutual information of a set of variables and the number of cut edges leaving a set of vertices of a graph. A submodular function f is called non-decreasing (or monotone) when X ⊆ Y implies f(X) ≤ f(Y). It is possible to show that log det(L_Y) is a submodular function: entropy is submodular, and the entropy of a Gaussian is proportional to log det(Σ_Y) (plus a linear term in |Y|), where Σ is the covariance matrix. Submodular functions are easy to minimize, and a variety of algorithms exist for approximately maximizing them; however, to our knowledge none of these existing algorithms simultaneously allows for general polytope constraints on the set Y, offers an approximation guarantee, and can be implemented in practice without expensive sampling to approximate the objective. We provide a technique that addresses all three criteria for the DPP MAP problem, although approximation guarantees for the general polytope case depend on the choice of rounding algorithm and remain an open problem. We use the submodular maximization algorithm of [21] as a starting point.

3 MAP Inference

We seek an approximate solution to the generalized DPP MAP problem arg max_{Y ∈ S} log det(L_Y), where S ⊆ [0, 1]^N and Y ∈ S means that the characteristic vector I(Y) is in S. We will assume that S is a down-monotone, solvable polytope; down-monotone means that for x, y ∈ [0, 1]^N, x ∈ S implies y ∈ S whenever x ≥ y (that is, whenever x_i ≥ y_i for all i), and solvable means that for any linear objective function g(x) = a^T x, we can efficiently find x ∈ S maximizing g(x). One common approach for approximating discrete optimization problems is to replace the discrete variables with continuous analogs and extend the objective function to the continuous domain. When the resulting continuous optimization is solved, the result may include fractional variables. Typically, a rounding scheme is then used to produce a valid integral solution. As we will detail below, we use a novel non-linear continuous relaxation that has a nice property: when the polytope is unconstrained, S = [0, 1]^N, our method will (essentially) always produce integral solutions. For more complex polytopes, a rounding procedure is required. When the objective f(Y) is a submodular set function, as in our setting, the multilinear extension can be used to obtain certain theoretical guarantees for the relaxed optimization scheme described above [21, 25]. The multilinear extension is defined on a vector x ∈ [0, 1]^N:

F(x) = Σ_Y ∏_{i∈Y} x_i ∏_{i∉Y} (1 − x_i) f(Y).   (4)

That is, F(x) is the expected value of f(Y) when Y is the random set obtained by including element i with probability x_i.
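Both determinantal facts used in this section, the normalization identity (2) and the submodularity of f(Y) = log det(L_Y), can be sanity-checked by brute force on a tiny ground set. A hedged sketch (ours; the enumeration is exponential in N, so it is only meant for very small N):

import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 5
B = rng.standard_normal((N, N))
L = B.T @ B  # positive semidefinite kernel

def minor_det(L, Y):
    Y = list(Y)
    return np.linalg.det(L[np.ix_(Y, Y)]) if Y else 1.0

# Identity (2): the sum of all principal minors equals det(L + I).
lhs = sum(minor_det(L, Y)
          for r in range(N + 1)
          for Y in itertools.combinations(range(N), r))
rhs = np.linalg.det(L + np.eye(N))
assert np.isclose(lhs, rhs)

# Submodularity (3) of f(Y) = log det(L_Y): marginal gains shrink as the
# set grows (checked here for one particular X subset of Y and i not in Y).
f = lambda Y: np.log(minor_det(L, Y))
X, Y, i = [0], [0, 1, 2], 3
assert f(X + [i]) - f(X) >= f(Y + [i]) - f(Y) - 1e-9

Note that the enumeration above is exponential in N, which is also the obstacle to evaluating the multilinear extension (4) directly.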
Unfortunately, this expectation generally cannot be computed efficiently, since it involves summing over exponentially many sets Y. Thus, to use the multilinear extension in practice requires estimating its value and derivative via Monte Carlo techniques. This makes the optimization quite computationally expensive, as well as introducing a variety of technical convergence issues. Instead, for the special case of DPP probabilities we propose a new continuous extension that is efficiently computable and differentiable. We refer to the following function as the softmax extension:

F̃(x) = log Σ_Y ∏_{i∈Y} x_i ∏_{i∉Y} (1 − x_i) exp(f(Y)).   (5)

See the supplementary material for a visual comparison of Equations (4) and (5). While the softmax extension also involves a sum over exponentially many sets Y, we have the following theorem.

Theorem 1. For a positive semidefinite matrix L and x ∈ [0, 1]^N,

Σ_Y ∏_{i∈Y} x_i ∏_{i∉Y} (1 − x_i) det(L_Y) = det(diag(x)(L − I) + I).   (6)

All proofs are included in the supplementary material.

Corollary 2. For f(Y) = log det(L_Y), we have F̃(x) = log det(diag(x)(L − I) + I) and

∂F̃(x)/∂x_i = tr((diag(x)(L − I) + I)⁻¹ (L − I)_i),   (7)

where (L − I)_i denotes the matrix obtained by zeroing all except the ith row of L − I.

Corollary 2 says that the softmax extension for the DPP MAP problem is computable and differentiable in O(N³) time. Using a variant of gradient ascent (Section 3.1), this will be sufficient to efficiently find a local maximum of the softmax extension over an arbitrary solvable polytope. It then remains to show that this local maximum comes with approximation guarantees.
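Corollary 2 is what makes the relaxation practical: the softmax extension and its gradient reduce to standard linear algebra. A direct numpy transcription (our sketch, not the authors' code; the finite-difference comparison is only a numerical sanity test of Equation (7)):

import numpy as np

def softmax_ext(L, x):
    """F~(x) = log det(diag(x)(L - I) + I)."""
    N = L.shape[0]
    A = np.diag(x) @ (L - np.eye(N)) + np.eye(N)
    sign, logdet = np.linalg.slogdet(A)
    return logdet  # det(A) > 0 for x in [0,1]^N and L PSD, by Theorem 1

def softmax_ext_grad(L, x):
    """Gradient from Corollary 2: dF~/dx_i = tr(A^{-1} (L - I)_i), where
    (L - I)_i zeroes all rows except row i. Only that one row survives,
    so each trace collapses to a single inner product."""
    N = L.shape[0]
    A = np.diag(x) @ (L - np.eye(N)) + np.eye(N)
    Ainv = np.linalg.inv(A)
    M = L - np.eye(N)
    return np.array([Ainv[:, i] @ M[i, :] for i in range(N)])

rng = np.random.default_rng(1)
N = 6
B = rng.standard_normal((N, N))
L = B.T @ B
x = rng.uniform(0.1, 0.9, size=N)

# Central finite differences should agree with the analytic gradient.
g = softmax_ext_grad(L, x)
h = 1e-6
g_fd = np.array([(softmax_ext(L, x + h * np.eye(N)[i]) -
                  softmax_ext(L, x - h * np.eye(N)[i])) / (2 * h)
                 for i in range(N)])
assert np.allclose(g, g_fd, atol=1e-4)

Both routines cost O(N³) per call, matching the complexity claimed above.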
3.1 Conditional gradient

When the optimization polytope S is simple (for instance, the unit cube [0, 1]^N), we can apply generic gradient-based optimization methods like L-BFGS to rapidly find a local maximum of the softmax extension. In situations where we are able to efficiently project onto the polytope S, we can apply projected gradient methods. In the general case, however, we assume only that the polytope is solvable. In such settings, we can use the conditional gradient algorithm (also known as the Frank-Wolfe algorithm) [26, 27]. Algorithm 1 describes the procedure; intuitively, at each step we move to a convex combination of the current point and the point maximizing the linear approximation of the function given by the current gradient. This ensures that we move in an increasing direction while remaining in S. Note that finding y requires optimizing a linear function over S; this step is efficient whenever the polytope is solvable.

Algorithm 1 LOCAL-OPT
Input: function F̃, polytope S
  x ← 0
  while not converged do
    y ← arg max_{y′ ∈ S} ∇F̃(x)^T y′
    α ← arg max_{α′ ∈ [0,1]} F̃(α′ x + (1 − α′) y)
    x ← α x + (1 − α) y
  end while
Output: x

3.2 Approximation bound

In order to obtain an approximation bound for the DPP MAP problem, we consider the two-phase optimization in Algorithm 2, originally proposed in [21]. The second call to LOCAL-OPT is necessary in theory; however, in practice it can usually be omitted with minimal loss (if any). We will show that Algorithm 2 produces a 1/4-approximation.

Algorithm 2 Approximating the DPP MAP
Input: kernel L, polytope S
  Let F̃(x) = log det(diag(x)(L − I) + I)
  x ← LOCAL-OPT(F̃, S)
  y ← LOCAL-OPT(F̃, S ∩ {y′ | y′ ≤ 1 − x})
Output: x if F̃(x) > F̃(y), otherwise y

We begin by proving that the continuous extension F̃ is concave in positive directions, although it is not concave in general.

Lemma 3. When u, v ≥ 0, we have

∂²F̃(x + su + tv) / ∂s ∂t ≤ 0   (8)

wherever 0 < x + su + tv < 1.

Corollary 4. F̃(x + tv) is concave along any direction v ≥ 0 (equivalently, v ≤ 0).

Corollary 4 tells us that a local optimum x of F̃ has certain global properties, namely that F̃(x) ≥ F̃(y) whenever y ≤ x or y ≥ x. This leads to the following result from [21].

Lemma 5. If x is a local optimum of F̃(·), then for any y ∈ [0, 1]^N,

2 F̃(x) ≥ F̃(x ∨ y) + F̃(x ∧ y),   (9)

where (x ∨ y)_i = max(x_i, y_i) and (x ∧ y)_i = min(x_i, y_i).

Following [21], we now define a surrogate function F̃*. Let X_i ⊆ [0, 1] be a subset of the unit interval representing x_i = |X_i|, where |X_i| denotes the measure of X_i. (Note that this representation is overcomplete, since there are in general many subsets of [0, 1] with measure x_i.) F̃* is defined on X = (X_1, X_2, ..., X_N) by

F̃*(X) = F̃(x),  x = (|X_1|, |X_2|, ..., |X_N|).   (10)

Lemma 6. F̃* is submodular.

Lemmas 5 and 6 suffice to prove the following theorem, which appears for the multilinear extension in [21], bounding the approximation ratio of Algorithm 2.

Theorem 7. Let F̃(x) be the softmax extension of a nonnegative submodular function f(Y) = log det(L_Y), let OPT = max_{x′ ∈ S} F̃(x′), and let x and y be local optima of F̃ in S and S ∩ {y′ | y′ ≤ 1 − x}, respectively. Then

max(F̃(x), F̃(y)) ≥ (1/4) OPT ≥ (1/4) max_{Y ∈ S} log det(L_Y).   (11)

Note that the softmax extension is an upper bound on the multilinear extension, thus Equation (11) is at least as tight as the corresponding result in [21].

Corollary 8. Algorithm 2 yields a 1/4-approximation to the DPP MAP problem whenever log det(L_Y) ≥ 0 for all Y.

In general, the objective value obtained by Algorithm 2 is bounded below by (1/4)(OPT − p_0) + p_0, where p_0 = min_Y log det(L_Y). In practice, filtering of near-duplicates can be used to keep p_0 from getting too small; however, in our empirical tests p_0 did not seem to have a significant effect on approximation quality.

3.3 Rounding

When the polytope S is unconstrained, it is easy to show that the results of Algorithm 1, and in turn Algorithm 2, are integral (or can be rounded without loss).

Theorem 9. If S = [0, 1]^N, then for any local optimum x of F̃, either x is integral or at least one fractional coordinate x_i can be set to 0 or 1 without lowering the objective.

More generally, however, the polytope S can be complex, and the output of Algorithm 2 needs to be rounded. We speculate that the contention resolution rounding schemes proposed in [21] for the multilinear extension F may be extensible to F̃, but do not attempt to prove so here. Instead, in our experiments we apply pipage rounding [28] and threshold rounding (rounding all coordinates up or down using a single threshold), which are simple and seem to work well in practice.

3.4 Model combination

In addition to theoretical guarantees and the empirical advantages we demonstrate in Section 4, the proposed approach to the DPP MAP problem offers a great deal of flexibility. Since the general framework of continuous optimization is widely used in machine learning, this technique allows DPPs to be easily combined with other models. For instance, if S is the local polytope for a Markov random field, then, augmenting the objective with the (linear) log-likelihood of the MRF (additive linear objective terms do not affect the lemmas proved above), we can approximately compute the MAP configuration of the DPP-MRF product model. We might in this way model diverse objects placed in a sequence, or fit to an underlying signal like an image.
Empirical studies of these possibilities are left to future work.

4 Experiments

To illustrate the proposed method, we compare it to the widely used greedy algorithm of Nemhauser and Wolsey [15] (Algorithm 3) and the recently proposed deterministic "symmetric" greedy algorithm [18], which has a 1/3 approximation guarantee for unconstrained non-monotone problems. Note that, while a naive implementation of the arg max in Algorithm 3 requires evaluating the objective for each item in U, here we can exploit the fact that DPPs are closed under conditioning to compute all necessary values with only two matrix inversions [5]. We report baseline runtimes using this optimized greedy algorithm, which is about 10 times faster than the naive version at N = 200. The code and data for all experiments can be downloaded from http://www.seas.upenn.edu/~jengi/dpp-map.html.

4.1 Synthetic data

As a first test, we approximate the MAP configuration for DPPs with random kernels drawn from a Wishart distribution. Specifically, we choose L = B^T B, where B ∈ R^{N×N} has entries drawn independently from the standard normal distribution, b_ij ~ N(0, 1). This results in L ~ W_N(N, I), a Wishart distribution with N degrees of freedom and an identity covariance matrix. This distribution has several desirable properties: (1) in terms of eigenvectors, it spreads its mass uniformly over all unitary matrices [29], and (2) the probability density of the eigenvalues λ_1, ..., λ_N is

exp(−Σ_{i=1}^{N} λ_i) ∏_{i=1}^{N} [ ∏_{j=i+1}^{N} (λ_i − λ_j)² ] / ((N − i)!)²,   (12)

the first term of which deters the eigenvalues from being too large, and the second term of which encourages the eigenvalues to be well-separated [30]. Property (1) implies that we will see a variety of eigenvectors, which play an important role in the structure of a DPP [5]. Property (2) implies that interactions between these eigenvectors will be important, as no one eigenvalue is likely to dominate. Combined, these properties suggest that samples should encompass a wide range of DPPs.
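Drawing such a kernel is a one-liner; the sketch below (ours, not the released experiment code) also peeks at the eigenvalue spread that property (2) predicts.

import numpy as np

rng = np.random.default_rng(0)
N = 100
B = rng.standard_normal((N, N))
L = B.T @ B  # L ~ W_N(N, I), a Wishart draw as described above

# Property (2) suggests well-separated eigenvalues with no single
# eigenvalue dominating the spectrum.
eig = np.sort(np.linalg.eigvalsh(L))[::-1]
print(eig[:5])             # the largest eigenvalues
print(eig[0] / eig.sum())  # fraction of spectral mass in the top one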
Figure 3a shows performance results on these random kernels in the unconstrained setting. Our proposed algorithm outperforms greedy in general, and the performance gap tends to grow with the size of the ground set, N. (We let N vary in the range [50, 200] since prior work with DPPs in real-world scenarios [5, 13] has typically operated in this range.) Moreover, Figure 3a (bottom) illustrates that our method is of comparable efficiency at medium N, and becomes more efficient as N grows. Despite the fact that the symmetric greedy algorithm [18] has an improved approximation guarantee of 1/3, essentially the same analysis applies to Figure 3b. Figure 3c summarizes the performance of our algorithm in a constrained setting.

Figure 3: Median and quartile log probability ratios (top) and running time ratios (bottom) for 100 random trials. (a) The proposed algorithm versus greedy on unconstrained problems. (b) The proposed algorithm versus symmetric greedy on unconstrained problems. (c) The proposed algorithm versus greedy on constrained problems. Dotted black lines indicate equal performance.

To create plausible constraints, in this setting we generate two separate random matrices B^(1) and B^(2), and then select random pairs of rows (B_i^(1), B_j^(2)). Averaging (B_i^(1) + B_j^(2))/2 creates one row of the matrix B; we then set L = B^T B. The constraints require that if x_k corresponding to the (i, j) pair is 1, no other x_{k′} can have first element i or second element j; i.e., the pairs cannot overlap. Since exact duplicate pairs produce identical rows in L, they are never both selected and can be pruned ahead of time. This means our constraints are of a form that allows us to apply pipage rounding to the possibly fractional result. Figure 3c shows even greater gains over greedy in this setting; however, enforcing the constraints precludes using fast methods like L-BFGS, so our optimization procedure is in this case somewhat slower than greedy.

4.2 Matched summarization

Finally, we demonstrate our approach using real-world data. Consider the following task: given a set of documents, select a set of document pairs such that the two elements within a pair are similar, but the overall set of pairs is diverse. For instance, we might want to compare the opinions of various authors on a range of topics, or even to compare the statements made at different points in time by the same author, e.g., a politician believed to have changed positions on various issues. In this vein, we extract all the statements made by the eight main contenders in the 2012 US Republican primary debates: Bachmann, Cain, Gingrich, Huntsman, Paul, Perry, Romney, and Santorum. See the supplementary material for an example of some of these statements. Each pair of candidates (a, b) constitutes one instance of our task. The task output is a set of statement pairs where the first statement in each pair comes from candidate a and the second from candidate b. The goal of optimization is to find a set that is diverse (contains many topics, such as healthcare, foreign policy, immigration, etc.) but where both statements in each pair are topically similar. Before formulating a DPP objective for this task, we perform some pre-processing. We filter short statements, leaving us with an average of 179 quotes per candidate (min = 93, max = 332 quotes).

Algorithm 3 Greedy MAP for DPPs
Input: kernel L, polytope S
  Y ← ∅, U ← 𝒴
  while U is not empty do
    i* ← arg max_{i ∈ U} log det(L_{Y ∪ {i}})
    if log det(L_{Y ∪ {i*}}) < log det(L_Y) then break end if
    Y ← Y ∪ {i*}
    U ← {i | i ∉ Y, I(Y ∪ {i}) ∈ S}
  end while
Output: Y
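A possible numpy rendering of Algorithm 3 for the unconstrained case S = [0, 1]^N (our sketch, not the released code; the paper's optimized variant instead computes all marginal gains with two matrix inversions via DPP conditioning, as noted at the start of Section 4):

import numpy as np

def greedy_dpp_map(L):
    """Naive greedy for argmax_Y log det(L_Y), unconstrained Algorithm 3.
    Adds the item with the best marginal gain until no item improves
    the objective."""
    N = L.shape[0]
    Y, U = [], list(range(N))
    def logdet(S):
        return np.linalg.slogdet(L[np.ix_(S, S)])[1] if S else 0.0
    cur = 0.0
    while U:
        gains = [logdet(Y + [i]) - cur for i in U]
        j = int(np.argmax(gains))
        if gains[j] < 0:  # no item increases log det(L_Y): stop
            break
        Y.append(U.pop(j))
        cur += gains[j]
    return Y

rng = np.random.default_rng(2)
N = 50
B = rng.standard_normal((N, N))
L = B.T @ B  # Wishart-style kernel as in the synthetic setup
print(sorted(greedy_dpp_map(L)))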
To create a feature vector describing each pair, we simply add the corresponding pair of quote feature vectors and re-normalize, forming a new W matrix. Our task is to select some high-quality representative subset of the unpruned quote pairs. We formulate this as a DPP objective with kernel L = M SM , where Sij is a measurement of similarity between quote pairs i and j, and M is a diagonal matrix with Mii representing the match quality of p pair i. We set S = W W T and diag(M ) = exp( r), where is a hyperparameter. Large places more emphasis on picking high-quality pairs than on making the overall set diverse. To help limit the number of pairs selected when optimizing the objective, we add some constraints. For each candidate we cluster their quotes using k-means on the word feature vectors and impose the constraint that no more than one quote per cluster can be selected. We round the final solution using the threshold rounding scheme described in Section 3.3. Figure 4 shows the result of optimizing this constrained objective, averaged over all 56 candidate pairs. For all settings of we outperform greedy. In general, we observe that our algorithm is most improved compared to greedy when the constraints are in play. In this case, when is small the constraints are less relevant, since the model has an intrinsic preference for smaller sets. On the other hand, when is very large the algorithms must choose as many pairs as possible in order to maximize their score; in this case the constraints play an important role. 5 Conclusion We presented a new approach to solving the MAP problem for DPPs based on continuous algorithms for submodular maximization. Unlike the multilinear extension used in the general case, the softmax extension we propose is efficiently computable and differentiable. Furthermore, it allows for general solvable polytope constraints, and yields a guaranteed 1/4-approximation in a subclass of DPPs. Our method makes it easy to combine DPPs with other models like MRFs or matching models, and is faster and more reliable than standard greedy methods on synthetic and real-world problems. Acknowledgments This material is based upon work supported under a National Science Foundation Graduate Research Fellowship, Sloan Research Fellowship, and NSF Grant 0803256. 8 References [1] A. Nenkova, L. Vanderwende, and K. McKeown. A Compositional Context-Sensitive Multi-Document Summarizer: Exploring the Factors that Influence Summarization. In Proc. SIGIR, 2006. [2] H. Lin and J. Bilmes. Multi-document Summarization via Budgeted Maximization of Submodular Functions. In Proc. NAACL/HLT, 2010. [3] F. Radlinski, R. Kleinberg, and T. Joachims. Learning Diverse Rankings with Multi-Armed Bandits. In Proc. ICML, 2008. [4] Y. Yue and T. Joachims. Predicting Diverse Subsets Using Structural SVMs. In Proc. ICML, 2008. [5] A. Kulesza and B. Taskar. k-DPPs: Fixed-Size Determinantal Point Processes. In Proc. ICML, 2011. [6] C. Guestrin, A. Krause, and A. Singh. Near-Optimal Sensor Placements in Gaussian Processes. In Proc. ICML, 2005. [7] D. Kempe, J. Kleinberg, and E. Tardos. Influential Nodes in a Diffusion Model for Social Networks. In Automata, Languages and Programming, volume 3580 of Lecture Notes in Computer Science. 2005. [8] O. Macchi. The Coincidence Approach to Stochastic Point Processes. Advances in Applied Probability, 7(1), 1975. [9] D. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Elementary Theory and Methods. 2003. [10] A. Kulesza and B Taskar. 
Structured Determinantal Point Processes. In Proc. NIPS, 2010. [11] A. Kulesza, J. Gillenwater, and B. Taskar. Discovering Diverse and Salient Threads in Document Collections. In Proc. EMNLP, 2012. [12] J. Hough, M. Krishnapur, Y. Peres, and B. Vir?ag. Determinantal Processes and Independence. Probability Surveys, 3, 2006. [13] A. Kulesza and B. Taskar. Learning Determinantal Point Processes. In Proc. UAI, 2011. [14] C. Ko, J. Lee, and M. Queyranne. An Exact Algorithm for Maximum Entropy Sampling. Operations Research, 43(4), 1995. [15] G. Nemhauser, L. Wolsey, and M. Fisher. An Analysis of Approximations for Maximizing Submodular Set Functions I. Mathematical Programming, 14(1), 1978. [16] U. Feige, V. Mirrokni, and J. Vondrak. Maximizing Non-Monotone Submodular Functions. In Proc. FOCS, 2007. [17] T. Robertazzi and S. Schwartz. An Accelerated Sequential Algorithm for Producing D-optimal Designs. SIAM J. Sci. Stat. Comput., 10(2), 1989. [18] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. A Tight Linear Time (1/2)-Approximation for Unconstrained Submodular Maximization. In Proc. FOCS, 2012. [19] A. Gupta, A. Roth, G. Schoenebeck, and K. Talwar. Constrained Nonmonotone Submodular Maximization: Offline and Secretary Algorithms. In Internet and Network Economics, volume 6484 of LNCS. 2010. [20] S. Gharan and J. Vondr?ak. Submodular Maximization by Simulated Annealing. In Proc. Soda, 2011. [21] C. Chekuri, J. Vondr?ak, and R. Zenklusen. Submodular Function Maximization via the Multilinear Relaxation and Contention Resolution Schemes. arXiv:1105.4593, 2011. [22] M. Feldman, J. Naor, and R. Schwartz. Nonmonotone Submodular Maximization via a Structural Continuous Greedy Algorithm. Automata, Languages and Programming, 2011. [23] A. Borodin and A. Soshnikov. Janossy Densities I. Determinantal Ensembles. Journal of Statistical Physics, 113(3), 2003. [24] A. Borodin. Determinantal Point Processes. arXiv:0911.1153, 2009. [25] M. Feldman, J. Naor, and R. Schwartz. A Unified Continuous Greedy Algorithm for Submodular Maximization. In Proc. FOCS, 2011. [26] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999. [27] M. Frank and P. Wolfe. An Algorithm for Quadratic Programming. Naval Research Logistics Quarterly, 3(1-2), 1956. [28] A. Ageev and M. Sviridenko. Pipage Rounding: A New Method of Constructing Algorithms with Proven Performance Guarantee. Journal of Combinatorial Optimization, 8(3), 2004. [29] A. James. Distributions of Matrix Variates and Latent Roots Derived from Normal Samples. Annals of Mathematical Statistics, 35(2), 1964. [30] P. Hsu. On the Distribution of Roots of Certain Determinantal Equations. Annals of Eugenics, 9(3), 1939. 9
Approximating Concavely Parameterized Optimization Problems

Joachim Giesen, Friedrich-Schiller-Universität Jena, Germany, [email protected]
Sören Laue, Friedrich-Schiller-Universität Jena, Germany, [email protected]
Jens K. Mueller, Friedrich-Schiller-Universität Jena, Germany, [email protected]
Sascha Swiercy, Friedrich-Schiller-Universität Jena, Germany, [email protected]

Abstract

We consider an abstract class of optimization problems that are parameterized concavely in a single parameter, and show that the solution path along the parameter can always be approximated with accuracy ε > 0 by a set of size O(1/√ε). A lower bound of size Ω(1/√ε) shows that the upper bound is tight up to a constant factor. We also devise an algorithm that calls a step-size oracle and computes an ε-approximate path of size O(1/√ε). Finally, we provide an implementation of the oracle for soft-margin support vector machines, and a parameterized semi-definite program for matrix completion.

1 Introduction

Problem description. Let D be a set, I ⊆ R an interval, and f : I × D → R such that (1) f(t, ·) is bounded from below for every t ∈ I, and (2) f(·, x) is concave for every x ∈ D. We study the parameterized optimization problem h(t) = min_{x ∈ D} f(t, x). A solution x*_t ∈ D is called optimal at parameter value t if f(t, x*_t) = h(t), and x ∈ D is called an ε-approximation at t if ε(t, x) := f(t, x) − h(t) ≤ ε. Of course it holds that ε(t, x*_t) = 0. A subset P ⊆ D is called an ε-path if P contains an ε-approximation for every t ∈ I. The size of a smallest ε-approximation path is called the ε-path complexity of the parameterized optimization problem. The aim of this paper is to derive upper and lower bounds on the path complexity, and to provide efficient algorithms to compute ε-paths.

Motivation. The rather abstract problem from above is motivated by regularized optimization problems that are abundant in machine learning, i.e., by problems of the form

min_{x ∈ D} f(t, x) := r(x) + t · l(x),

where r(x) is a regularization term and l(x) a loss term. The parameter t controls the trade-off between regularization and loss. Note that here f(·, x) is always linear and hence concave in the parameter t.

Previous work. Due to the widespread use of regularized optimization methods in machine learning, regularization path following algorithms have become an active area of research. Initially, exact path tracking methods were developed for many machine learning problems [16, 18, 3, 9], starting with the algorithm for SVMs by Hastie et al. [10]. Exact tracking algorithms tend to be slow and numerically unstable as they need to invert large matrices. Also, the exact regularization path can be exponentially large in the input size [5, 14]. Approximation algorithms can overcome these problems [4]. Approximation path algorithms with approximation guarantees have been developed for SVMs with square loss [6], the LASSO [14], and matrix completion and factorization problems [8, 7].

Contributions. We provide a structural upper bound in O(1/√ε) for the ε-path complexity of the abstract problem class described above. We show that this bound is tight up to a multiplicative constant by constructing a lower bound in Ω(1/√ε). Finally, we devise a generic algorithm to compute ε-paths that calls a problem-specific oracle providing a step-size certificate. If such a certificate exists, then the algorithm computes a path of complexity in O(1/√ε). Finally, we demonstrate the implementation of the oracle for standard SVMs and a parameterized semidefinite program for matrix completion, resulting in the first algorithms for both problems that compute ε-paths of complexity in O(1/√ε). Previously, no approximation path algorithms were known for standard SVMs, only a heuristic [12] and an approximation algorithm for square loss SVMs [6] with complexity in O(1/ε). The best approximation path algorithm for matrix completion also has complexity in O(1/ε). To our knowledge, the only known approximation path algorithm with complexity in O(1/√ε) is [14] for the LASSO.
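Before the formal bounds, the two structural assumptions are easy to visualize numerically: f(·, x) is linear (hence concave) in t for each fixed x, and h is concave as a pointwise minimum of concave functions. A small sketch (ours; the one-dimensional regularizer and loss are made up purely for illustration) checks concavity of h via second differences:

import numpy as np

# f(t, x) = r(x) + t * l(x): linear, hence concave, in t for every fixed x.
r = lambda x: 0.5 * x**2          # illustrative regularizer
l = lambda x: (x - 3.0)**2        # illustrative loss
f = lambda t, x: r(x) + t * l(x)

xs = np.linspace(-10, 10, 20001)  # crude grid standing in for D
h = lambda t: np.min(f(t, xs))    # h(t) = min_x f(t, x)

ts = np.linspace(0.01, 10.0, 200)
hv = np.array([h(t) for t in ts])

# Concavity of h: second differences are (numerically) non-positive.
assert np.all(np.diff(hv, 2) <= 1e-8)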
2 Upper Bound

Here we show that any problem that fits the problem definition from the introduction for a compact interval I = [a, b] has an ε-path with complexity in O(1/√ε). Let (a, b) be the interior of [a, b] and let g : (a, b) → R be concave; then g is continuous and has a left and a right derivative g′₋(t) and g′₊(t), respectively, at every point t ∈ I (see for example [15]). Note that f(·, x) is concave by assumption, and h is concave as the minimum over a family of concave functions.

Lemma 1. For all t ∈ (a, b), h′₋(t) ≥ f′₋(t, x*_t) ≥ f′₊(t, x*_t) ≥ h′₊(t).

Proof. For all t′ < t it holds that h(t′) ≤ f(t′, x*_t) and hence h(t) − h(t′) ≥ f(t, x*_t) − f(t′, x*_t), which implies

h′₋(t) := lim_{t′↑t} (h(t) − h(t′)) / (t − t′) ≥ lim_{t′↑t} (f(t, x*_t) − f(t′, x*_t)) / (t − t′) =: f′₋(t, x*_t).

The inequality f′₊(t, x*_t) ≥ h′₊(t) follows analogously, and f′₋(t, x*_t) ≥ f′₊(t, x*_t) follows after some algebra from the concavity of f(·, x*_t) and the definition of the derivatives (see [15]).

Definition 2. Let I = [a, b] be a compact interval, ε > 0, and t_0 = a. Let

T_k = { t | t ∈ (t_{k−1}, b] such that ε(t, x*_{t_{k−1}}) := f(t, x*_{t_{k−1}}) − h(t) = ε },

and t_k = min T_k for all integral k > 0 such that T_k ≠ ∅. Finally, let P^ε = { x*_{t_k} | k ∈ N such that T_k ≠ ∅ }.

Lemma 3. Let s_1, ..., s_n ∈ R_{>0}; then (s_1 + ... + s_n)(s_1⁻¹ + ... + s_n⁻¹) ≥ n².

Proof. The claim holds for n = 1 as s_1 s_1⁻¹ = 1 = 1². Assume the claim holds for n − 1 and let a = s_1 + ... + s_{n−1} and b = s_1⁻¹ + ... + s_{n−1}⁻¹. The rectangle with side lengths a s_n⁻¹ and b s_n has circumference 2(a s_n⁻¹ + b s_n) and area a s_n⁻¹ · b s_n = ab. Since the square minimizes the circumference for a given area we have 2(a s_n⁻¹ + b s_n) ≥ 4√(ab). The claim for n now follows from

(a + s_n)(b + s_n⁻¹) = ab + a s_n⁻¹ + b s_n + 1 ≥ ab + 2√(ab) + 1 = (√(ab) + 1)² ≥ ((n − 1) + 1)² = n².

Lemma 4. The size of P^ε is at most

⌈ √( (b − a)(h′₋(a) − h′₋(b)) / ε ) ⌉ ∈ O(1/√ε).

Proof. Let a = t_0 ≤ t_1 ≤ ... be the sequence from Definition 2. Define δ_k = t_{k+1} − t_k and σ_k = h′₋(t_k) − h′₋(t_{k+1}). We have

σ_k δ_k ≥ (f′₋(t_k, x*_{t_k}) − h′₋(t_{k+1}))(t_{k+1} − t_k)
        ≥ ( (f(t_{k+1}, x*_{t_k}) − f(t_k, x*_{t_k})) / (t_{k+1} − t_k) − (h(t_{k+1}) − h(t_k)) / (t_{k+1} − t_k) ) (t_{k+1} − t_k)
        = f(t_{k+1}, x*_{t_k}) − h(t_{k+1}) = ε(t_{k+1}, x*_{t_k}),

where the first inequality follows from Lemma 1 and the second inequality follows from concavity and the definition of the derivatives (see [15]). Thus, there exists s_k > 0 such that δ_k ≥ ε s_k and σ_k ≥ s_k⁻¹. It follows from Lemma 3 that

ε n² ≤ ε (s_1 + ... + s_n)(s_1⁻¹ + ... + s_n⁻¹) ≤ (δ_1 + ... + δ_n)(σ_1 + ... + σ_n) ≤ (b − a)(σ_1 + ... + σ_n) ≤ (b − a)(h′₋(a) − h′₋(b)),

where the last inequality follows from h′₋(b) ≤ h′₋(t) for t ≤ b (which can be proved from concavity, see again [15]). Hence, the sequence (t_k) and thus the size of P^ε must be finite; more specifically, n is bounded by √( (b − a)(h′₋(a) − h′₋(b)) / ε ).
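For intuition, Definition 2's construction can be carried out in closed form on the quadratic instance used in the lower bound of the next section: there f(t, x) = x²/2 − tx, so x*_s = s, h(t) = −t²/2, the gap is ε(t, x*_s) = (t − s)²/2, and each step advances by exactly √(2ε). A sketch (ours) that also makes the Θ(1/√ε) growth of Lemma 4 visible:

import numpy as np

def epsilon_path_quadratic(a, b, eps):
    """P^eps from Definition 2 for f(t, x) = x^2/2 - t*x on [a, b].
    Here eps(t, x*_s) = (t - s)^2 / 2, so t_{k+1} = t_k + sqrt(2*eps)."""
    step = np.sqrt(2.0 * eps)
    ts = np.arange(a, b, step)
    return ts  # x*_{t_k} = t_k, so the path is the knot sequence itself

a, b = 0.0, 10.0
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    n = len(epsilon_path_quadratic(a, b, eps))
    # The path size grows like (b - a) / sqrt(2*eps), i.e. Theta(1/sqrt(eps)),
    # matching both Lemma 4 and the lower bound of Section 3.
    print(eps, n, (b - a) / np.sqrt(2 * eps))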
Theorem 5. P^ε is an ε-path for I = [a, b].

Proof. For any x ∈ D, ε(·, x) is a continuous function. Hence, x*_{t_k} is an ε-approximation for all t ∈ [t_k, t_{k+1}], because if there were t ∈ (t_k, t_{k+1}] with ε(t, x*_{t_k}) > ε, then by continuity there would also be a t′ ∈ (t_k, t_{k+1}) with ε(t′, x*_{t_k}) = ε, which contradicts the minimality of t_{k+1}. The claim of the theorem follows since the proof of Lemma 4 shows that the sequence (t_k) is finite and hence the intervals [t_k, t_{k+1}] cover the whole of [a, b].

3 Lower Bound

Here we show that there exists a problem that fits the problem description from the introduction whose ε-path complexity is in Ω(1/√ε). This shows that the upper bound from the previous section is tight up to a constant. Let I = [a, b], D = R, f(t, x) = (1/2)x² − tx and thus

h(t) = min_{x ∈ R} ( (1/2)x² − tx ) = (1/2)(x*_t)² − t x*_t = −(1/2)t²,

where the last equality follows from the convexity and differentiability of f(t, x) in x, which together imply ∂f/∂x (t, x*_t) = x*_t − t = 0. For ε > 0 and x ∈ R let

I_x = { t ∈ [a, b] | ε(t, x) := (1/2)x² − tx + (1/2)t² ≤ ε },

which is an interval since (1/2)x² − tx + (1/2)t² is a quadratic function in t. The length of this interval is 2√(2ε), independent of x. Hence, the ε-path complexity for the problem is at least (b − a) / (2√(2ε)).

Let us compare this lower bound with the upper bound from the previous section, which gives for the specific problem at hand

√( (b − a)(h′₋(a) − h′₋(b)) / ε ) = √( (b − a)² / ε ) = (b − a)/√ε.

Hence the upper bound is tight up to a constant of at most 2√2.

4 Generic Algorithm

So far we have only discussed structural complexity bounds for ε-paths. Now we give a generic algorithm to compute an ε-path of complexity in O(1/√ε). When applying the generic algorithm to a specific problem, a plugin-subroutine PATHPOLYNOMIAL needs to be implemented for the specific problem. The generic algorithm builds on the simple idea that has been introduced in [6]: compute an (ε/β)-approximation (for β > 1) and only update this approximation along the parameter interval I = [a, b] when it fails to be an ε-approximation. The plugin-subroutine PATHPOLYNOMIAL provides a bound on the step-size for the algorithm, i.e., a certificate for how long the approximation remains valid along the interval I. Hence we describe the idea behind the construction of this certificate first.

4.1 Step-size certificate and algorithm

We always consider a problem that fits the problem description from the introduction.

Definition 6. Let P be the set of all concave polynomials p : I → R of degree at most 2. For t ∈ I, x ∈ D and ε > 0 let

P_t(x, ε) := { p ∈ P | p ≤ h, f(t, x) − p(t) ≤ ε },

where p ≤ h means p(t′) ≤ h(t′) for all t′ ∈ I. Note that P contains constant and linear polynomials with second derivative p″ = 0 and quadratic polynomials with constant second derivative p″ < 0. If P_t(x, ε) ≠ ∅, then x is an ε-approximation at parameter value t, because there exists p ∈ P such that ε(t, x) ≤ f(t, x) − p(t) ≤ ε.

Definition 7 (Step-size). For t ∈ I = [a, b], p ∈ P, ε > 0, and β > 1, let Δ_t := t − a and

γ_t(p, β) = ε / (β Δ_t² |p″|),  if p″ < 0 and Δ_t > 0.

The step-size is given as

δ_t(p, β) = δ_t^(1)(p)      if p″ = 0,
            δ_t^(2)(p, β)   if p″ < 0 and γ_t(p, β) ≤ 1/2,
            δ_t^(3)(p, β)   if p″ < 0 and γ_t(p, β) ≥ 1/2,

where

δ_t^(1)(p)    = Δ_t (β − 1),
δ_t^(2)(p, β) = √( 2ε/|p″| + Δ_t² (γ_t(p, β) − 1/2)² ) − Δ_t (γ_t(p, β) + 1/2),
δ_t^(3)(p, β) = √( 2ε/|p″| ) (1 − 1/√β).

To simplify the notation we will skip the argument β of the step-size δ_t whenever the value of β is obvious from the context.

Observation 8.
If ?t (p, ?) = 1/2, then ?t (p) = ?t (p), because ?t (p, ?) = 1/2 implies ?t = q 2? ? |p00 | . Lemma 9. For t ? (a, b), x ? D, ? > 0 and ? > 1. If there exists p ? Pt (x, ?/?), then x is an ?-approximation for all t0 ? [t, b] with t0 ? t + ?t (p). Proof. Let g : [a, b] ? R be the following linear function, g(t0 ) = (t0 ? t) p(t) + ?/? ? p(a) ? + p(t) + . t?a ? Then, for all t0 ? [t, b], f (t0 , x) ? (t0 ? t) f (t, x) ? f (a, x) p(t) + ?/? ? p(a) ? + f (t, x) ? (t0 ? t) + p(t) + = g(t0 ) t?a t?a ? 4 where the first inequality follows from the concavity of f (?, x), and the second inequality follows from f (t, x) ? p(t) ? ?/? and from p(a) ? h(a) ? f (a, x). Thus, x is an ?-approximation for all t0 ? [t, b] that satisfy g(t0 ) ? p(t0 ) ? ? because ?(t0 , x) = f (t0 , x) ? h(t0 ) ? f (t0 , x) ? p(t0 ) ? g(t0 ) ? p(t0 ) ? ?. We finish the proof by considering three cases. (i) If p00 = 0, then g(t0 ) ? p(t0 ) is a linear function in t0 , and g(t0 ) ? p(t0 ) ? ? solves to t0 ? t ? (1) ?t (? ? 1) = ?t (p) = ?t (p). (ii) If p00 < 0, then g(t0 ) ? p(t0 ) is a quadratic polynomial in t0 with second derivative ?p00 > 0, (2) and the equation g(t0 )?p(t0 ) ? ? solves to t0 ?t ? ?t (p). Note that we do not need the condition ?t (p) ? 1/2 here. (iii) The caseq p00 < 0 and ?t (p) ? 1/2 can q be reduced to Case (ii). From ?t (p) ? 1/2 we obtain 2? and thus a ? t ? ?. Let p? the restriction of p onto the interval t ? a = ?t ? |p2? 00 |? |p00 |? =: a  [? a, b] and ??t = t ? a ?, then p?00 = p00 , and thus ?t (? p) = ?/ ? ??2 |? p00 | = 1 . Hence by Observation 8, t (3) (3) 2 (2) ?t (p) = ?t (? p) = ?t (? p). The claim follows from Case (ii). Assume now that we have an oracle PATH P OLYNOMIAL available that on input t ? (a, b) and ?/? > 0 returns x ? D and p ? Pt (x, ?/?), then the following algorithm G ENERIC PATH returns an ?-path if it terminates. Algorithm 1 G ENERIC PATH Input: f : [a, b] ? D ? R that fits the problem description, and ? > 0 Output: ?-path for the interval [a, b] choose t? ? (a, b) P := C OMPUTE PATH (f, t?, ?) define f? : [a, b] ? D ? R, (t, x) 7? f (a + b ? t, x) [then f? also fits the problem description] P := P ? C OMPUTE PATH (f?, a + b ? t?, ?) return P Algorithm 2 C OMPUTE PATH Input: f : [a, b] ? D ? R that fits the problem description, t? ? (a, b) and ? > 0 Output: ?-path for the interval [t?, b] t := t? and P := ? while t ? b do  (x, p) := PATH P OLYNOMIAL t, ?/? P := P ? {x} t := min b, t + ?t (p) end while return P 4.2 Analysis of the generic algorithm The running time of the algorithm G ENERIC PATH is essentially determined by the complexity of the computed path times the cost of the oracle PATH ? P OLYNOMIAL. In the following we show that the complexity of the computed path is at most O(1/ ?). ? Observation 10. For c ? R let ?c : R ?  ? R, x 7? x2 + c ? x. Then we have > |c| 1. limx?? ?c (x) = 0 2. ?0c (x) = ?xx2 +c ? 1 for the derivative of ?c . Thus, ?0c (x) > 0 for c < 0 and ?c is monotonously increasing. 5 (2) Furthermore, ?t (p) = q r 2? |p00 | + ?t2 ?t (p) ?  1 2 2  1 2 2 ?t2 ?t (p) + ? ? = r ?t2 ?t (p) + ? ? =    1 2 2 1 2   + ?t2 ?(1 ? ?) ? ?t ?t (p) + 1 2   + ?t2 ?(1 ? ?) ? ?t ?t (p) + ? ? = ??2 ?(1??) ?t ?t (p) + ? ? t ? ?t ?t (p) + 1 2  1 2  + ?t (? ? 1) + ?t (? ? 1). Lemma 11. Given t ? I and p ? P, then ?t (p) is continuous in |p00 |. (2) (3) Proof. The continuity for |p00 | > 0 follows from the definitions of ?t (p) and ?t (p), and from Observation 8. 
Since ?t (p) > 1/2 for small |p00 | the continuity at |p00 | = 0 follows from Observation 10, because (2) (1) lim ?t (p) = lim ??t2 ?(1??) (?t ? (?t (p) + ? ? 1/2)) + ?t (? ? 1) = ?t (? ? 1) = ?t (p), 00 |p00 |?0 |p |?0 where we have used ?t (p) ? ? as |p00 | ? 0. Lemma 12. Given t ? I and p1 , p2 ? P, then ?t (p1 ) ? ?t (p2 ) if |p001 | ? |p002 |. Proof. The claim is that ?t (p) is monotonously decreasing in |p00 |. Since ?t is continuous in |p00 | (1) (2) (3) by Lemma 11 it is enough to check the monotonicity of ?t (p), ?t (p) and ?t (p). The mono(1) (3) tonicity of ?t (p) and ?t (p) follows directly from the definitions of the latter. The monotonicity (2) of ?t (p) follows from Observation 10 since we have    1 (2) ?t (p) = ??t2 ?(1??) ?t ?t (p) + ? ? + ?t (? ? 1), 2 (2) and thus ?t (p) is monotonously decreasing in |p00 | because ?t2 ?(1 ? ?) < 0 and ?t (p) is monotonously decreasing in |p00 |. Lemma 13. Given t ? I and p ? P, then ?t (p) is monotonously increasing in ?t and hence in t. Proof. Since ?t (p) is continuous in ?t by Observation 8 it is enough to check the monotonicity of (1) (2) (3) (1) (3) ?t (p), ?t (p) and ?t (p). The monotonicity of ?t (p) and ?t (p) follows directly from the (2) definitions of the latter. It remains toshow the monotonicity of ?t (p) for ?t (p) ? 21 . For c ? 0  let ??1 : R>0 ? R, y 7? 1 2 ?c (??1 c (y)) = y. Apparently, c y ? y . The notation is justified because for ??1 c is monotonously decreasing, and we have ??1 c (y) > 0 we have (2) ?1 ?1 ?t (p) = ?c1 (??1 c2 (?t )) ? ?t = ?c1 (?c2 (?t )) ? ?c2 (?c2 (?t )), 1 with c1 = |p2?00 | and c2 = c?1 . Note that ??1 c2 (?t ) > 0 since ?t (p) ? 2 , and c2 < c1 since ? > 1. 0 0 ?1 Because ?c1 ? ?c2 < 0 for c1 > c2 , both ?c2 and ?c1 ? ?c2 are monotonously decreasing in their (2) respective arguments. Hence, ?t (p) is monotonously increasing in ?t . Theorem 14. If there exists p ? P and ?? > 0 such that |q 00 | ? |p00 | for all q that are returned by the oracle?PATH 1 terminates after at most  P OLYNOMIAL on input t ? [a, b] and ? ? ??. Then Algorithm ? O 1/ ? steps, and thus returns an ?-path of complexity in O(1/ ?). Proof. For all t ? [t?, b], where t? ? (a, b) is chosen in algorithm G ENERIC PATH, we have ?t (q) ? ?t (p) ? ?t?(p). Here the first inequality is due to Lemma 12 and the second inequality is due to Lemma 13. Hence, the number of steps in the first call of C OMPUTE PATH is upper bounded by (b? t?)/(min{?t?(p), b? t?})+1. Similarly, the number of steps in the second call of C OMPUTE PATH is upper bounded by (t? ? a)/(min{?a+b?t?(p), t? ? a}) + 1. 6 (1) For the asymptotic behavior, observe that ?t?(p) = ?t? (p) does not depend on ? for p00 = 0. For |p00 | > 0 observe that lim??0 ?t?(p, ?) = 0. Hence, there exists ?? > 0 such that ?t?(p, ?) < 1/2 and (3) ?t? (p, ?) ? b ? t? for all ? < ??, and thus r   ? ? |p00 | b ? t? 1 b ? t? ? ? + 1 = (3) (b ? t ) + 1 ? O . +1 = ? 2? ? ? 1 ? min{?t?(p), b ? t?} ?t? (p) ?  Analogously, (t? ? a)/(min{?a+b?t?(p), t? ? a}) + 1 ? O 1/ ? , which completes the proof. 5 Applications Here we demonstrate on two examples that Lagrange duality can be a tool for implementing the oracle PATH P OLYNOMIAL in the generic path algorithm. This approach obtains the step-size certificate from an approximate solution that has to be computed anyway. 5.1 Support vector machines Given data points xi ? Rd together with labels yi ? {?1} for i = 1, . . . , n. A support vector machine (SVM) is the following parameterized optimization problem ! n X 1 2 T min kwk + t max{0, 1 ? 
yi (w xi + b)} =: f (t, w) 2 w?Rd ,b?R i=1 parameterized in the regularization parameter t ? [0, ?). The Lagrangian dual of the SVM is given as   1 s.t. 0 ? ?i ? t, y T ? = 0, maxn ? ?T K? + 1T ? =: d(?) ??R 2 where K = AT A, A = (y1 x1 , . . . , yn xn ) ? Rd?n and y = (y1 , . . . , yn ) ? Rn . Algorithm 3 PATH P OLYNOMIAL SVM Input: t ? (0, ?) and ? > 0 Output: w ? Rd and p ? Pt (w, ?) compute a primal solution w ? Rd and a dual solution ? ? Rn such that f (t, w) ? d(?) < ? define p : I ? R, t0 7? d ?t0 /t return (w, p) Lemma 15. Let (w, p) be the output of PATH P OLYNOMIAL SVM on input t > 0 and ? > 0, then p ? Pt (w, ?) and |p00 | ? max0???1 ? ?T K ? ? . [Hence, Theorem 14 applies here.] ? Proof. Let ? be the dual solution computed by PATH P OLYNOMIAL SVM and p be the polynomial defined in PATH P OLYNOMIAL SVM. Then, 2 t0 1 T t0 1 ? K? + 1T ? and thus p00 (t0 ) = ? 2 ?T K? ? 0 2 t 2 t t since K is positive semidefinite. Hence, p ? P. For p ? Pt (w, ?), it remains to show that p ? h = minw?Rd f (?, w) and f (t, w) ? p(t) ? ?. The latter follows immediately from p(t) = d(?). For t0 > 0 let ?0 = ?t0 /t, then ?0 is feasible for the dual SVM at parameter value t0 since ? is feasible for the dual SVM at t. It follows, p(t0 ) = d(?0 ) ? h(t0 ) = minw?Rd f (?, w). Finally, observe that ? ?T K ? ?. ?i ? t implies |p00 | = t12 ?T K? ? max0???1 ? p(t0 ) = ? The same results hold when using any positive kernel K. In the kernel case one has the following primal SVM (see [2]), ( !) ! n n X X 1 T min ??Rm ,b 2 ? K? + t ? max 0, 1 ? yi i=1 ?j yj Kij + b j=1 7 =: f (t, ?) . We have implemented the algorithm G ENERIC PATH for SVMs in Matlab using LIBSVM [1] as the SVM solver. To assess the practicability of the proposed algorithm we ran it on several datasets taken from the LIBSVM website. For each dataset we have measured the size of the computed ?-path (number of nodes) for t ? [0.1, 10] and ? ? {2?i | i = 2, . . . , 10}. Figure 5.1 shows the size of paths as a function of ? using double logarithmic plots. A straight line plot with slope ? 21 ? corresponds to an empirical path complexity that follows the function 1/ ?. 1/sqrt(epsilon) a1a duke fourclass scale mushrooms w1a # nodes # nodes 1/sqrt(epsilon) a1a duke fourclass scale mushrooms w1a 1 10 ?3 10 ?2 ?1 10 ?3 10 10 epsilon ?2 ?1 10 10 epsilon (a) Path complexity for a linear SVM 5.2 1 10 (b) Path complexity for a SVM with Gaussian kernel exp(??ku ? vk22 ) for ? = 0.5 Matrix completion Matrix completion asks for a completion X of an (n ? m)-matrix Y that has been observed only at the indices in ? ? {1, . . . , m} ? {1, . . . , n}. The problem can be solved by the following convex semidefinite optimization approach, see [17, 11, 13],   X  2 1 A X  0. min Xij ? Yij + t ? tr(A) + tr(B) s.t. XT B 2 X?Rn?m , A?Rn?n , B?Rm?m (i,j)?? The Lagrangian dual of this convex semidefinite program is given as   X 1 tI ? max ?  0, and ?ij = 0 if (i, j) ? / ?. ?2ij + ?ij Yij s.t. ?T tI 2 ??Rn?m (i,j)?? ? for X ? = (X, A, B) be the primal objective function at parameter value t, and d(?) be Let f (t, X) the dual objective function. Analogously to the SVM case we have the following: Algorithm 4 PATH P OLYNOMIAL M ATRIX C OMPLETION Input: t ? (0, ?) and ? > 0 ? and p ? Pt (X, ? ?) Output: X ? and a dual solution ? ? Rn?m such that f (t, X) ? ? d(?) < ? compute a primal solution X  0 0 define p : I ? R, t 7? d t /t ? ? p) return (X, ? p) be the output of PATH P OLYNOMIAL M ATRIXCOMPLETION on inLemma 16. Let (X, ? ?) and |p00 | ? max ? ? 2 put t > 0 and ? > 0, then p ? 
5.2 Matrix completion

Matrix completion asks for a completion X of an (n × m)-matrix Y that has been observed only at the indices in Ω ⊆ {1, …, n} × {1, …, m}. The problem can be solved by the following convex semidefinite optimization approach, see [17, 11, 13],
\[ \min_{X \in \mathbb{R}^{n\times m},\, A \in \mathbb{R}^{n\times n},\, B \in \mathbb{R}^{m\times m}} \; \frac{1}{2}\sum_{(i,j)\in\Omega} \big(X_{ij} - Y_{ij}\big)^2 + t\,\big(\mathrm{tr}(A) + \mathrm{tr}(B)\big) \quad \text{s.t.} \quad \begin{pmatrix} A & X \\ X^\top & B \end{pmatrix} \succeq 0. \]
The Lagrangian dual of this convex semidefinite program is given as
\[ \max_{\Phi \in \mathbb{R}^{n\times m}} \; \sum_{(i,j)\in\Omega} \Big(-\frac{1}{2}\Phi_{ij}^2 + \Phi_{ij} Y_{ij}\Big) \quad \text{s.t.} \quad \begin{pmatrix} tI & \Phi \\ \Phi^\top & tI \end{pmatrix} \succeq 0, \;\text{ and } \Phi_{ij} = 0 \text{ if } (i, j) \notin \Omega. \]
Let f(t, X̂) for X̂ = (X, A, B) be the primal objective function at parameter value t, and d(Φ) be the dual objective function. Analogously to the SVM case we have the following:

Algorithm 4 PathPolynomialMatrixCompletion
  Input: t ∈ (0, ∞) and ε > 0
  Output: X̂ and p ∈ P_t(X̂, ε)
  1: compute a primal solution X̂ and a dual solution Φ ∈ R^{n×m} such that f(t, X̂) − d(Φ) < ε
  2: define p : I → R, t′ ↦ d(Φt′/t)
  3: return (X̂, p)

Lemma 16. Let (X̂, p) be the output of PathPolynomialMatrixCompletion on input t > 0 and ε > 0. Then p ∈ P_t(X̂, ε) and |p″| ≤ max_{Φ∈F_t} (1/t²)‖Φ‖²_F, where
\[ F_t = \Big\{ \Phi \in \mathbb{R}^{n\times m} \;\Big|\; \begin{pmatrix} tI & \Phi \\ \Phi^\top & tI \end{pmatrix} \succeq 0, \;\; \Phi_{ij} = 0 \;\; \forall (i, j) \notin \Omega \Big\}. \]
The proof for Lemma 16 is similar to the proof of Lemma 15, and Lemma 16 shows that Theorem 14 can be applied here.

Acknowledgments. This work has been supported by a grant of the Deutsche Forschungsgemeinschaft (GI-711/3-2).

References
[1] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol., 2(3):27:1–27:27, 2011.
[2] Olivier Chapelle. Training a Support Vector Machine in the Primal. Neural Computation, 19(5):1155–1178, 2007.
[3] Alexandre d'Aspremont, Francis R. Bach, and Laurent El Ghaoui. Full Regularization Path for Sparse Principal Component Analysis. In Proceedings of the International Conference on Machine Learning (ICML), pages 177–184, 2007.
[4] Jerome Friedman, Trevor Hastie, Holger Höfling, and Robert Tibshirani. Pathwise Coordinate Optimization. The Annals of Applied Statistics, 1(2):302–332, 2007.
[5] Bernd Gärtner, Martin Jaggi, and Clément Maria. An Exponential Lower Bound on the Complexity of Regularization Paths. arXiv:0903.4817, 2010.
[6] Joachim Giesen, Martin Jaggi, and Sören Laue. Approximating Parameterized Convex Optimization Problems. In Proceedings of the Annual European Symposium on Algorithms (ESA), pages 524–535, 2010.
[7] Joachim Giesen, Martin Jaggi, and Sören Laue. Optimizing over the Growing Spectrahedron. In Proceedings of the Annual European Symposium on Algorithms (ESA), pages 503–514, 2012.
[8] Joachim Giesen, Martin Jaggi, and Sören Laue. Regularization Paths with Guarantees for Convex Semidefinite Optimization. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pages 432–439, 2012.
[9] Bin Gu, Jian-Dong Wang, Guan-Sheng Zheng, and Yue-cheng Yu. Regularization Path for ν-Support Vector Classification. IEEE Transactions on Neural Networks and Learning Systems, 23(5):800–811, 2012.
[10] Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. The entire regularization path for the support vector machine. The Journal of Machine Learning Research, 5:1391–1415, 2004.
[11] Martin Jaggi and Marek Sulovský. A Simple Algorithm for Nuclear Norm Regularized Problems. In Proceedings of the International Conference on Machine Learning (ICML), pages 471–478, 2010.
[12] Masayuki Karasuyama and Ichiro Takeuchi. Suboptimal Solution Path Algorithm for Support Vector Machine. In Proceedings of the International Conference on Machine Learning (ICML), pages 473–480, 2011.
[13] Sören Laue. A hybrid algorithm for convex semidefinite optimization. In Proceedings of the International Conference on Machine Learning (ICML), 2012.
[14] Julien Mairal and Bin Yu. Complexity Analysis of the Lasso Regularization Path. In Proceedings of the International Conference on Machine Learning (ICML), 2012.
[15] A. Wayne Roberts and Dale Varberg. Convex Functions. Academic Press, New York, 1973.
[16] Saharon Rosset and Ji Zhu. Piecewise linear regularized solution paths. Annals of Statistics, 35(3):1012–1030, 2007.
[17] Nathan Srebro, Jason D. M. Rennie, and Tommi Jaakkola. Maximum-Margin Matrix Factorization. In Proceedings of Advances in Neural Information Processing Systems 17 (NIPS), 2004.
[18] Zhi-li Wu, Aijun Zhang, Chun-hung Li, and Agus Sudjianto. Trace Solution Paths for SVMs via Parametric Quadratic Programming. In KDD Workshop: Data Mining Using Matrices and Tensors, 2008.
A nonparametric variable clustering model

Konstantina Palla*
University of Cambridge
[email protected]

David A. Knowles*
Stanford University
[email protected]

Zoubin Ghahramani
University of Cambridge
[email protected]

Abstract

Factor analysis models effectively summarise the covariance structure of high dimensional data, but the solutions are typically hard to interpret. This motivates attempting to find a disjoint partition, i.e. a simple clustering, of observed variables into highly correlated subsets. We introduce a Bayesian non-parametric approach to this problem, and demonstrate advantages over heuristic methods proposed to date. Our Dirichlet process variable clustering (DPVC) model can discover block-diagonal covariance structures in data. We evaluate our method on both synthetic and gene expression analysis problems.

1 Introduction

Latent variable models such as principal components analysis (Pearson, 1901; Hotelling, 1933; Tipping and Bishop, 1999; Roweis, 1998) and factor analysis (Young, 1941) are popular for summarising high dimensional data, and can be seen as modelling the covariance of the observed dimensions. Such models may be used for tasks such as collaborative filtering, dimensionality reduction, or data exploration. For all these applications sparse factor analysis models can have advantages in terms of both predictive performance and interpretability (Fokoue, 2004; Fevotte and Godsill, 2006; Carvalho et al., 2008). For example, data exploration might involve investigating which variables have significant loadings on a shared factor, which is aided if the model itself is sparse. However, even using sparse models, interpreting the results of a factor analysis can be non-trivial, since a variable will typically have significant loadings on multiple factors. As a result of these problems researchers will often simply cluster variables using a traditional agglomerative hierarchical clustering algorithm (Vigneau and Qannari, 2003; Duda et al., 2001). Interest in variable clustering exists in many applied fields, e.g. chemistry (Basak et al., 2000a,b) and actuarial science (Sanche and Lonergan, 2006). However, it is most commonly applied to gene expression analysis (Eisen et al., 1998; Alon et al., 1999; D'haeseleer et al., 2005), which will also be the focus of our investigation. Note that variable clustering represents the opposite regime to the usual clustering setting, where we partition samples rather than dimensions (but of course a clustering algorithm can be made to work like this simply by transposing the data matrix). Typical clustering algorithms, and their probabilistic mixture model analogues, consider how similar entities are (e.g. in terms of Euclidean distance) rather than how correlated they are, which would be closer in spirit to the ability of factor analysis to model covariance structure. While using correlation distance (one minus the Pearson correlation coefficient) between variables has been proposed for clustering genes with heuristic methods, the corresponding probabilistic model appears not to have been explored, to the best of our knowledge.

*These authors contributed equally to this work.

To address the general problem of variable clustering we develop a simple Bayesian nonparametric model which partitions observed variables into sets of highly correlated variables. We denote our method DPVC, for "Dirichlet Process Variable Clustering".
DPVC exhibits the usual advantages over heuristic methods of being both probabilistic and non-parametric: we can naturally handle missing data, learn the appropriate number of clusters from data, and avoid overfitting. The paper is organised as follows. Section 2 describes the generative process. In Section 3 we note relationships to existing nonparametric sparse factor analysis models, Dirichlet process mixture models, structure learning with hidden variables, and the closely related "CrossCat" model (Shafto et al., 2006). In Section 4 we describe efficient MCMC and variational Bayes algorithms for performing posterior inference in DPVC, and point out computational savings resulting from the simple nature of the model. In Section 5 we present results on synthetic data where we test the method's ability to recover a "true" partitioning, and then focus on clustering genes based on gene expression data, where we assess predictive performance on held-out data. Concluding remarks are given in Section 6.

2 The Dirichlet Process Variable Clustering Model

Consider observed data {y_n ∈ R^D : n = 1, …, N}, where we have D observed dimensions and N samples. The D observed dimensions correspond to measured variables for each sample, and our goal is to cluster these variables. We partition the observed dimensions d = {1, …, D} according to the Chinese restaurant process (Pitman, 2002, CRP). The CRP defines a distribution over partitionings (clusterings) where the maximum possible number of clusters does not need to be specified a priori. The CRP can be described using a sequential generative process: D customers enter a Chinese restaurant one at a time. The first customer sits at some table, and each subsequent customer sits at table k with m_k current customers with probability proportional to m_k, or at a new table with probability proportional to α, where α is a parameter of the CRP. The seating arrangement of the customers at tables corresponds to a partitioning of the D customers. We write
\[ (c_1, \dots, c_D) \sim \mathrm{CRP}(\alpha), \quad c_d \in \mathbb{N}, \tag{1} \]
where c_d = k denotes that variable d belongs to cluster k. The CRP partitioning allows each dimension to belong only to one cluster. For each cluster k we have a single latent factor
\[ x_{kn} \sim \mathcal{N}(0, \sigma_x^2) \tag{2} \]
which models correlations between the variables in cluster k. Given these latent factors, real-valued observed data can be modeled as
\[ y_{dn} = g_d\, x_{c_d n} + \epsilon_{dn}, \tag{3} \]
where g_d is a factor loading for dimension d, and ε_dn ∼ N(0, σ_d²) is Gaussian noise. We place a Gaussian prior N(0, σ_g²) on every element g_d independently. It is straightforward to generalise the model by substituting other noise models for Equation 3, for example using a logistic link for binary data y_dn ∈ {0, 1}. However, in the following we will focus on the Gaussian case. To improve the flexibility of the model, we put Inverse Gamma priors on σ_g² and σ_d² and a Gamma prior on the CRP concentration parameter α as follows:
\[ \alpha \sim \mathrm{G}(1, 1), \quad \sigma_g^2 \sim \mathrm{IG}(1, 1), \quad \sigma_d^2 \sim \mathrm{IG}(1, 0.1). \]
Note that we fix σ_x = 1 due to the scale ambiguity in the model.
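For concreteness, the generative process just described can be sampled directly. The following minimal sketch (ours, not part of the paper's implementation) fixes the hyperparameters rather than drawing them from the priors above:

import numpy as np

def sample_dpvc(D, N, alpha=1.0, sigma_g=1.0, sigma_d=0.3, seed=None):
    """Draw (c, g, X, Y) from the DPVC generative process.
    sigma_x is fixed to 1 due to the scale ambiguity in the model."""
    rng = np.random.default_rng(seed)
    c, counts = np.zeros(D, dtype=int), []
    for d in range(D):                        # sequential CRP seating, eq. (1)
        weights = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(counts):
            counts.append(0)                  # open a new table
        counts[k] += 1
        c[d] = k
    K = len(counts)
    X = rng.normal(0.0, 1.0, size=(K, N))     # latent factors, eq. (2)
    g = rng.normal(0.0, sigma_g, size=D)      # factor loadings
    Y = g[:, None] * X[c] + rng.normal(0.0, sigma_d, size=(D, N))  # eq. (3)
    return c, g, X, Y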
3 Related work

Since DPVC is a hybrid mixture/factor analysis model there is of course a wealth of related work, but we aim to highlight a few interesting connections here.

[Figure 1: Graphical model structure that could be learnt using the model, corresponding to cluster assignments c = {1, 1, 1, 2, 2, 3}. Gray nodes represent the D = 6 observed variables y_d and white nodes represent the K = 3 latent variables x_k.]

DPVC can be seen as a simplification of the infinite factor analysis models proposed by Knowles and Ghahramani (2007) and Rai and Daumé III (2008), which we will refer to as Non-parametric Sparse Factor Analysis (NSFA). Where they used the Indian buffet process to allow dimensions to have non-zero loadings on multiple factors, we use the Chinese restaurant process to explicitly enforce that a dimension can be explained by only one factor. Obviously this will not be appropriate in all circumstances, but where it is appropriate we feel it allows easier interpretation of the results. To see the relationship more clearly, introduce the indicator variable z_dk = I[c_d = k]. We can then write our model as
\[ y_n = (G \circ Z)\, x_n + \epsilon_n, \tag{4} \]
where G is a D × K Gaussian matrix, and ∘ denotes elementwise multiplication. Replacing our Chinese restaurant process prior on Z with an Indian buffet prior recovers an infinite factor analysis model. Equation 4 has the form of a factor analysis model. It is straightforward to show that the conditional covariance of y given the factor loading matrix W := G ∘ Z is σ_x²WWᵀ + σ²I. Analogously, for DPVC we find
\[ \mathrm{cov}(y_{dn}, y_{d'n} \mid G, c) = \begin{cases} \sigma_x^2\, g_d\, g_{d'} + \sigma_d^2\, \delta_{dd'}, & c_d = c_{d'} \\ 0, & \text{otherwise.} \end{cases} \tag{5} \]
Thus we see the covariance structure implied by DPVC is block diagonal: only dimensions belonging to the same cluster have non-zero covariance.

The obvious probabilistic approach to clustering genes would be to simply apply a Dirichlet process mixture (DPM) of Gaussians, but considering the genes (our dimensions) as samples, and our samples as "features", so that the partitioning would be over the genes. However, this approach would not achieve the desired result of clustering correlated variables, and would rather cluster together variables close in terms of Euclidean distance. For example, two variables which have the relationship y_d = a·y_d' for a = −1 (or a = 2) are perfectly correlated but not close in Euclidean space; a DPM approach would likely fail to cluster these together. Also, practitioners typically choose either to use restrictive diagonal Gaussians, or full covariance Gaussians which result in considerably greater computational cost than our method (see Section 4.3).

DPVC can also be seen as performing a simple form of structure learning, where the observed variables are partitioned into groups explained by a single latent variable. This is a subset of the structures considered in Silva et al. (2006), but we maintain uncertainty over the structure using a fully Bayesian analysis. Figure 1 illustrates this idea.

DPVC is also closely related to CrossCat (Shafto et al., 2006). CrossCat also uses a CRP to partition variables into clusters, but then uses a second level of independent CRPs to model the dependence of variables within a cluster. In other words, whereas the latent variables x in Figure 1 are discrete variables (indicating cluster assignment) in CrossCat, they are continuous variables in DPVC corresponding to the latent factors. For certain data the CrossCat model may be more appropriate, but our simple factor analysis model is more computationally tractable and often has good predictive performance as well. The model of Niu et al.
(2012) is related to CrossCat in the same way that NSFA is related to DPVC, by allowing an observed dimension to belong to multiple features using the IBP rather than only one cluster using the CRP.

4 Inference

We demonstrate both MCMC and variational inference for the model.

Algorithm 1 Marginal conditional
  1: for m = 1 to M do
  2:   θ⁽ᵐ⁾ ∼ P(θ)
  3:   Y⁽ᵐ⁾ ∼ P(Y | θ⁽ᵐ⁾)
  4: end for

Algorithm 2 Successive conditional
  1: θ⁽¹⁾ ∼ P(θ)
  2: Y⁽¹⁾ ∼ P(Y | θ⁽¹⁾)
  3: for m = 2 to M do
  4:   θ⁽ᵐ⁾ ∼ Q(θ | θ⁽ᵐ⁻¹⁾, Y⁽ᵐ⁻¹⁾)
  5:   Y⁽ᵐ⁾ ∼ P(Y | θ⁽ᵐ⁾)
  6: end for

4.1 MCMC

We use a partially collapsed Gibbs sampler to explore the posterior distribution over all latent variables g, c, X as well as the hyperparameters σ_d², σ_g² and α. The Gibbs update equations for the factor loadings g, factors X, and noise variances σ_d² and σ_g² are standard, and therefore only sketched out below with the details deferred to supplementary material. The Dirichlet concentration parameter α is sampled using slice sampling (Neal, 2003). We sample the cluster assignments c using Algorithm 8 of Neal (2000), with g integrated out but instantiating X. Updating the factor loading matrix G is done elementwise, sampling from
\[ g_{dk} \mid Y, G_{-dk}, C, X, \sigma_g, \sigma_x, \sigma_d, \alpha \;\sim\; \mathcal{N}(\mu_g, \lambda_g^{-1}). \tag{6} \]
The factors X can be jointly sampled as
\[ X_{:n} \mid Y, G, C, \sigma_g, \sigma_x, \sigma_d, \alpha \;\sim\; \mathcal{N}(\mu_{X_{:n}}, \Lambda_{X_{:n}}^{-1}). \tag{7} \]
When sampling the cluster assignments c, we found it beneficial to integrate out g, while instantiating X. We require
\[ P(c_d = k \mid y_{d:}, x_{k:}, \sigma_g, c_{-d}) = P(c_d = k \mid c_{-d}) \int P(y_{d:} \mid x_{k:}, g_d)\, p(g_d \mid \sigma_g)\, \mathrm{d}g_d, \]
the calculation of which is given in the supplementary material, along with expressions for μ_g, λ_g, μ_{X:n} and Λ_{X:n}.

We confirm the correctness of our algorithm using the joint distribution testing methodology of Geweke (2004). There are two ways to sample from the joint distribution P(Y, θ) over parameters θ = {g, c, X} and data Y defined by a probabilistic model such as DPVC. The first we will refer to as "marginal-conditional" sampling, shown in Algorithm 1. Both steps here are straightforward: sampling from the prior followed by sampling from the likelihood model. The second way, referred to as "successive-conditional" sampling, is shown in Algorithm 2, where Q represents a single (or multiple) iteration(s) of our MCMC sampler. To validate our sampler we can then check, either informally or using hypothesis tests, whether the samples drawn from the joint P(Y, θ) in these two different ways appear to have come from the same distribution. We apply this method to our DPVC sampler with just N = D = 2, and all hyperparameters fixed as follows: α = 1, σ_d = 0.1, σ_g = 1, σ_x = 1. We draw 10⁴ samples using both the marginal-conditional and successive-conditional procedures. We look at various characteristics of the samples, including the number of clusters and the mean of X. The distribution of the number of features under the successive-conditional sampler matches that under the marginal-conditional sampler almost perfectly. Under the correct successive-conditional sampler the average number of clusters is 1.51 (it should be 1.5): a hypothesis test did not reject the null hypothesis that the means of the two distributions are equal. While this cannot completely guarantee correctness of the algorithm and code, 10⁴ samples is a large number for such a small model and thus gives strong evidence that our algorithm is correct.
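The Geweke check itself is model-agnostic. The sketch below illustrates the two sampling schemes on a conjugate normal-normal toy model; this toy substitution is our simplification for brevity, whereas the paper applies the same test to the full DPVC sampler:

import numpy as np

rng = np.random.default_rng(0)
M = 10_000

# Toy model: theta ~ N(0, 1), y | theta ~ N(theta, 1),
# so the exact Gibbs conditional is theta | y ~ N(y/2, 1/2).

# Algorithm 1: marginal-conditional sampling
mc = np.empty((M, 2))
for m in range(M):
    theta = rng.normal(0.0, 1.0)
    mc[m] = theta, rng.normal(theta, 1.0)

# Algorithm 2: successive-conditional sampling
sc = np.empty((M, 2))
theta = rng.normal(0.0, 1.0)
y = rng.normal(theta, 1.0)
for m in range(M):
    theta = rng.normal(y / 2.0, np.sqrt(0.5))   # the transition Q
    y = rng.normal(theta, 1.0)
    sc[m] = theta, y

# A correct transition leaves P(theta, y) invariant, so moments agree:
print(mc.mean(axis=0), sc.mean(axis=0))           # both near (0, 0)
print((mc**2).mean(axis=0), (sc**2).mean(axis=0)) # both near (1, 2)

If the transition kernel contained a bug, the successive-conditional moments would drift away from the marginal-conditional ones, which is exactly the discrepancy the hypothesis tests above are designed to detect.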
4.2 Variational inference

We use Variational Message Passing (Winn and Bishop, 2006) under the Infer.NET framework (Minka et al., 2010) to fit an approximate posterior q to the true posterior p, by minimising the Kullback-Leibler divergence
\[ \mathrm{KL}(q \,\|\, p) = -H[q(v)] - \int q(v) \log p(v)\, \mathrm{d}v, \tag{8} \]
where H[q(v)] = −∫ q(v) log q(v) dv is the entropy and v = {w, g, c, X, σ_d², σ_g²}, where w is introduced so that the Dirichlet process can be approximated as
\[ w \sim \mathrm{Dirichlet}(\alpha/T, \dots, \alpha/T), \tag{9} \]
\[ c_d \sim \mathrm{Discrete}(w), \tag{10} \]
where we have truncated to allow a maximum of T clusters. Where not otherwise specified we choose T = D, so that every dimension could use its own cluster if this is supported by the data. Note that the Dirichlet process is recovered in the limit T → ∞. We use a variational posterior of the form
\[ q(v) = q_w(w)\, q_{\sigma_g^2}(\sigma_g^2) \prod_{d=1}^{D} q_{c_d}(c_d)\, q_{\sigma_d^2}(\sigma_d^2)\, q_{g_d|c_d}(g_d \mid c_d) \prod_{n=1}^{N} q_{x_{nd}}(x_{nd}), \tag{11} \]
where q_w is a Dirichlet distribution, each q_{c_d} is a discrete distribution on {1, …, T}, q_{σ_g²} and q_{σ_d²} are Inverse Gamma distributions, and q_{x_nd} and q_{g_d|c_d} are univariate Gaussian distributions. We found that using the structured approximation q_{g_d|c_d}(g_d|c_d), where the variational distribution on g_d is conditional on the cluster assignment c_d, gave considerably improved performance. Using the representation of the Dirichlet process in Equation 10 this model is conditionally conjugate (i.e. all variables have exponential family distributions conditioned on their Markov blanket), so the VB updates are standard and therefore omitted here. Due to the symmetry of the model under permutation of the clusters, we are required to somehow break symmetry initially. We experimented with initialising either the variational distribution over the factors q_{x_nd}(x_nd), with mean drawn from N(0, 0.1) and variance 1, or each cluster assignment distribution q_{c_d}(c_d) to a sample from a uniform Dirichlet. We found initialising the cluster assignments gave considerably better solutions on average. We also typically ran the algorithm L = 10 times and took the solution with the best lower bound on the marginal likelihood. We also experimented with using Expectation Propagation (Minka, 2001) for this model, but found that the algorithm often diverged, presumably because of the multimodality in the posterior. It might be possible to alleviate this using damping, but we leave this to future work.

4.3 Computational complexity

DPVC enjoys some computational savings compared to NSFA. For both models sampling the factor loadings matrix is O(DKN), where K is the number of active features/clusters. However, for DPVC sampling the factors X is considerably cheaper. Calculating the diagonal precision matrix is O(KD) (compared to O(K²D) for the precision in NSFA), and finding the square root of the diagonal elements is negligible at O(K) (compared to an O(K³) Cholesky decomposition for NSFA). Finally, both models require an O(DKN) operation to calculate the conditional mean of X. Thus where NSFA is O(DKN + DK² + K³), DPVC is only O(DKN), which is the same complexity as k-means or Expectation Maximisation (EM) for mixture models with diagonal Gaussian clusters. Note that mixture models with full covariance clusters would typically cost O(DKN³) in this setting, due to the need to perform Cholesky decompositions on N × N matrices.
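To see where the O(DKN) cost comes from, note that conditioned on (g, c) the posterior precision of each factor column X_{:n} is diagonal across clusters, so Equation 7 decomposes into K × N independent univariate Gaussian draws. A sketch of this update (our rendering of the supplementary-material expressions, with σ_d taken to be a per-dimension vector):

import numpy as np

def sample_factors(Y, g, c, K, sigma_d, sigma_x=1.0, seed=None):
    """Gibbs update for X given loadings g and assignments c.
    The posterior precision is diagonal, so the sweep is O(DKN)."""
    rng = np.random.default_rng(seed)
    D, N = Y.shape
    prec = np.full(K, 1.0 / sigma_x**2)            # prior precision per cluster
    mean_num = np.zeros((K, N))
    for k in range(K):
        idx = np.where(c == k)[0]                  # variables in cluster k
        prec[k] += (g[idx]**2 / sigma_d[idx]**2).sum()
        mean_num[k] = (g[idx, None] / sigma_d[idx, None]**2 * Y[idx]).sum(axis=0)
    mean = mean_num / prec[:, None]
    return mean + rng.normal(size=(K, N)) / np.sqrt(prec)[:, None]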
5 Results

We present results on synthetic data and two gene expression data sets. We show comparisons to k-means and hierarchical clustering, for which we use the algorithms provided in the Matlab statistics toolbox. We also compare to our implementation of Bayesian factor analysis (see for example Kaufman and Press (1973) or Rowe and Press (1998)) and the non-parametric sparse factor analysis (NSFA) model of Knowles and Ghahramani (2011). We experimented with three publicly available implementations of DPMs of Gaussians using full covariance matrices, but found that none of them were sufficiently numerically robust to cope with the high dimensional and sometimes ill-conditioned gene expression data analysed in Section 5. To provide a similar comparison we implemented a DPM of diagonal covariance Gaussians using a collapsed Gibbs sampler.

[Figure 2: Performance of DPVC compared to k-means at recovering the true partitioning used to simulate the data. Rand index vs. sample size N, for DPVC MCMC, DPVC VB, k-means with Euclidean distance, and k-means with correlation distance.]

Table 1: Predictive performance (mean predictive log-likelihood over the test elements) on two gene expression datasets.

  Dataset         DPVC            NSFA            DPM             FA (K = 5)      FA (K = 10)     FA (K = 20)
  Breast cancer   -0.876 ± 0.024  -0.634 ± 0.038  -1.348 ± 0.108  -1.129 ± 0.043  -1.275 ± 0.056  -1.605 ± 0.072
  Yeast           -0.849 ± 0.012  -0.653 ± 0.061  -1.397 ± 0.419  -1.974 ± 1.925  -1.344 ± 0.165  -1.115 ± 0.052

5.1 Synthetic data

In order to test the ability of the models to recover a true underlying partitioning of the variables into correlated groups, we use synthetic data. We generate synthetic data with D = 20 dimensions partitioned into K = 5 equally sized clusters (of four variables each). Within each cluster we sample analogously to our model: sample x_kn ∼ N(0, 1) for all k, n, then g_d ∼ N(0, 1) for all d, and finally sample y_dn ∼ N(g_d x_{c_d n}, 0.1) for all d, n. We vary the sample size N and perform 10 repeats for each sample size. We compare k-means (with the true number of clusters, 5) using Euclidean distance and correlation distance, and DPVC with inference using MCMC or variational Bayes. To compare the inferred and true partitions we calculate the well known Rand index, which varies between 0 and 1, with 1 denoting perfect recovery of the true clustering. The results are shown in Figure 2. We see that the MCMC implementation of DPVC consistently outperforms the k-means methods. As expected given the nature of the data simulation, k-means using the correlation distance performs better than using Euclidean distance. DPVC VB's performance is somewhat disappointing, suggesting that even the structured variational posterior we use is a poor approximation of the true posterior. We emphasise that k-means is given a significant advantage: it is provided with the true number of clusters. In this light, the performance of DPVC MCMC is impressive, and the seemingly poor performance of DPVC VB is more forgivable (DPVC VB used a truncation level T = D = 20).
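The (unadjusted) Rand index used here counts agreeing pairs of variables; a direct implementation of this standard definition is short enough to give in full:

import numpy as np
from itertools import combinations

def rand_index(a, b):
    """Fraction of variable pairs on which two partitions agree
    (together in both, or separated in both)."""
    a, b = np.asarray(a), np.asarray(b)
    agree = sum((a[i] == a[j]) == (b[i] == b[j])
                for i, j in combinations(range(len(a)), 2))
    return agree / (len(a) * (len(a) - 1) / 2)

# Labels matter only up to renaming, e.g. rand_index([0,0,1,1], [1,1,0,0]) == 1.0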
5.2 Breast cancer dataset

We assess these algorithms in terms of predictive performance on the breast cancer dataset of West et al. (2007), including 226 genes across 251 individuals. The samplers were found to have converged after around 500 samples according to standard multiple-chain convergence measures, so 1000 MCMC iterations were used for all models. The predictive log-likelihood was calculated using every 10th sample from the final 500 samples. We ran 10 repeats, holding out a different random 10% of the elements of the matrix as test data each time. The results are shown in Table 1. We see that NSFA performs the best, followed by DPVC. This is not surprising, and is the price DPVC pays for a more interpretable solution. However, DPVC does outperform both the DPM and the finite (non-sparse) factor analysis models. We also ran DPVC VB on this dataset but its performance was significantly below that of the MCMC method, with a predictive log-likelihood of −1.154 ± 0.010.

[Figure 3: Clustering of the covariance structure. Left: k-means using correlation distance. Middle: agglomerative hierarchical clustering using average linkage and correlation distance. Right: DPVC MCMC.]

Performing a Gene Ontology enrichment analysis, we find clusters enriched for genes involved in both cell cycle regulation and cell division, which is biologically reasonable in a cancer-oriented dataset. On this relatively small dataset it is possible to visualise the D × D empirical correlation matrix of the data, and investigate what structure our clustering has uncovered, as shown in Figure 3. The genes have been reordered in each plot according to three different clusterings coming from k-means, hierarchical clustering and DPVC (MCMC; note we show the clustering corresponding to the posterior sample with the highest joint probability). For both k-means and hierarchical clustering it was necessary to "tweak" the number of clusters to give a sensible result. Hierarchical clustering in particular appeared to have a strong bias towards putting the majority of the genes in one large cluster/clade. Note that such a visualisation is straightforward only because we have used a clustering-based method rather than a factor analysis model, emphasising how partitionings can be more useful summaries of data for certain tasks than low dimensional embeddings.

5.3 Yeast in varying environmental conditions

We use the data set of Gasch et al. (2000), a collection of N = 175 non-cell-cycle experiments on S. cerevisiae (yeast), including conditions such as heat shock, nitrogen depletion and amino acid starvation. Measurements are available for D = 6152 genes. Again we ran 10 repeats, holding out a different random 10% of the elements of the matrix as test data each time. The results shown in Table 1 are broadly consistent with our findings for the breast cancer dataset: DPVC sits between NSFA and the less performant DPM and FA models. Running 1000 iterations of DPVC MCMC on this dataset takes around 1.2 hours on a standard dual-core desktop running at 2.5 GHz with 4 GB RAM. Unfortunately we were unable to run the VB algorithm on a dataset of this size due to memory constraints.

6 Discussion

We have introduced DPVC, a model for clustering variables into highly correlated subsets. While, as expected, we found the predictive performance of DPVC is somewhat worse than that of state of the art nonparametric sparse factor analysis models (e.g. NSFA), DPVC outperforms both nonparametric mixture models and Bayesian factor analysis models when applied to high dimensional data such as gene expression microarrays. For a practitioner we see interpretability as the key advantage of DPVC relative to a model such as NSFA: one can immediately see which groups of variables are correlated, and use this knowledge to guide further analysis. An example use one could envisage would be using DPVC in an analogous fashion to principal components regression: regressing a dependent variable against the inferred factors X.
Regression coefficients would then correspond to the predictive ability of the clusters of variables.

7 Acknowledgements

This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) under Grant Numbers EP/I036575/1 and EP/H019472/1.

References

Alon, U., Barkai, N., Notterman, D., Gish, K., Ybarra, S., Mack, D., and Levine, A. (1999). Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences, 96(12):6745.

Basak, S., Balaban, A., Grunwald, G., and Gute, B. (2000a). Topological indices: their nature and mutual relatedness. Journal of Chemical Information and Computer Sciences, 40(4):891–898.

Basak, S., Grunwald, G., Gute, B., Balasubramanian, K., and Opitz, D. (2000b). Use of statistical and neural net approaches in predicting toxicity of chemicals. Journal of Chemical Information and Computer Sciences, 40(4):885–890.

Carvalho, C. M., Chang, J., Lucas, J. E., Nevins, J. R., Wang, Q., and West, M. (2008). High-dimensional sparse factor modeling: Applications in gene expression genomics. Journal of the American Statistical Association, 103(484):1438–1456.

D'haeseleer, P. et al. (2005). How does gene expression clustering work? Nature Biotechnology, 23(12):1499–1502.

Duda, R. O., Hart, P. E., and Stork, D. G. (2001). Pattern Classification. Wiley-Interscience, 2nd edition.

Eisen, M., Spellman, P., Brown, P., Botstein, D., Sherlock, G., Zhang, M., Iyer, V., Anders, K., Botstein, D., Futcher, B., et al. (1998). Gene expression: Clustering. Proc Natl Acad Sci USA, 95(25):14863–8.

Fevotte, C. and Godsill, S. J. (2006). A Bayesian approach for blind separation of sparse sources. Audio, Speech, and Language Processing, IEEE Transactions on, 14(6):2174–2188.

Fokoue, E. (2004). Stochastic determination of the intrinsic structure in Bayesian factor analysis. Technical report, Statistical and Applied Mathematical Sciences Institute.

Gasch, A., Spellman, P., Kao, C., Carmel-Harel, O., Eisen, M., Storz, G., Botstein, D., and Brown, P. (2000). Genomic expression programs in the response of yeast cells to environmental changes. Science's STKE, 11(12):4241.

Geweke, J. (2004). Getting it right. Journal of the American Statistical Association, 99(467):799–804.

Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24:417–441.

Kaufman, G. M. and Press, S. J. (1973). Bayesian factor analysis. Technical Report 662-73, Sloan School of Management, University of Chicago.

Knowles, D. A. and Ghahramani, Z. (2007). Infinite sparse factor analysis and infinite independent components analysis. In 7th International Conference on Independent Component Analysis and Signal Separation, volume 4666, pages 381–388. Springer.

Knowles, D. A. and Ghahramani, Z. (2011). Nonparametric Bayesian sparse factor models with application to gene expression modeling. The Annals of Applied Statistics, 5(2B):1534–1552.

Minka, T. P. (2001). Expectation propagation for approximate Bayesian inference. In Conference on Uncertainty in Artificial Intelligence (UAI), volume 17.

Minka, T. P., Winn, J. M., Guiver, J. P., and Knowles, D. A. (2010). Infer.NET 2.4.

Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265.

Neal, R. M. (2003). Slice sampling. The Annals of Statistics, 31(3):705–741.

Niu, D., Dy, J., and Ghahramani, Z. (2012). A nonparametric Bayesian model for multiple clustering with overlapping feature views. Journal of Machine Learning Research, 22:814–822.
Pearson, K. (1901). On lines and planes of closest fit to systems of points in space. Philosophical Magazine Series 6, 2:559–572.

Pitman, J. (2002). Combinatorial stochastic processes. Technical report, Department of Statistics, University of California at Berkeley.

Rai, P. and Daumé III, H. (2008). The infinite hierarchical factor regression model. In Advances in Neural Information Processing Systems (NIPS).

Rowe, D. B. and Press, S. J. (1998). Gibbs sampling and hill climbing in Bayesian factor analysis. Technical Report 255, Department of Statistics, University of California Riverside.

Roweis, S. (1998). EM algorithms for PCA and SPCA. In Advances in Neural Information Processing Systems (NIPS), pages 626–632. MIT Press.

Sanche, R. and Lonergan, K. (2006). Variable reduction for predictive modeling with clustering. In Casualty Actuarial Society Forum, pages 89–100.

Shafto, P., Kemp, C., Mansinghka, V., Gordon, M., and Tenenbaum, J. (2006). Learning cross-cutting systems of categories. In Proceedings of the 28th Annual Conference of the Cognitive Science Society, pages 2146–2151.

Silva, R., Scheines, R., Glymour, C., and Spirtes, P. (2006). Learning the structure of linear latent variable models. The Journal of Machine Learning Research, 7:191–246.

Tipping, M. E. and Bishop, C. M. (1999). Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 61(3):611–622.

Vigneau, E. and Qannari, E. (2003). Clustering of variables around latent components. Communications in Statistics - Simulation and Computation, 32(4):1131–1150.

West, M., Chang, J., Lucas, J., Nevins, J. R., Wang, Q., and Carvalho, C. (2007). High-dimensional sparse factor modelling: Applications in gene expression genomics. Technical report, ISDS, Duke University.

Winn, J. and Bishop, C. M. (2006). Variational message passing. Journal of Machine Learning Research, 6(1):661.

Young, G. (1941). Maximum likelihood estimation and factor analysis. Psychometrika, 6(1):49–53.
A Connectionist Learning Approach to Analyzing Linguistic Stress

Prahlad Gupta
Department of Psychology
Carnegie Mellon University
Pittsburgh, PA 15213

David S. Touretzky
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract

We use connectionist modeling to develop an analysis of stress systems in terms of ease of learnability. In traditional linguistic analyses, learnability arguments determine default parameter settings based on the feasibility of logically deducing correct settings from an initial state. Our approach provides an empirical alternative to such arguments. Based on perceptron learning experiments using data from nineteen human languages, we develop a novel characterization of stress patterns in terms of six parameters. These provide both a partial description of the stress pattern itself and a prediction of its learnability, without invoking abstract theoretical constructs such as metrical feet. This work demonstrates that machine learning methods can provide a fresh approach to understanding linguistic phenomena.

1 LINGUISTIC STRESS

The domain of stress systems in language is considered to have a relatively good linguistic theory, called metrical phonology.¹ In this theory, the stress patterns of many languages can be described concisely, and characterized in terms of a set of linguistic "parameters," such as bounded vs. unbounded metrical feet, left vs. right dominant feet, etc.² In many languages, stress tends to be placed on certain kinds of syllables rather than on others; the former are termed heavy syllables, and the latter light syllables. Languages that distinguish between heavy and light syllables are termed quantity-sensitive (QS), while languages that do not make this distinction are termed quantity-insensitive (QI). In some QS languages, what counts as a heavy syllable is a closed syllable (a syllable that ends in a consonant), while in others it is a syllable with a long vowel. We examined the stress patterns of nineteen QI and QS systems, summarized and exemplified in Table 1. The data were drawn primarily from descriptions in [Hayes 80].

¹For an overview of the theory, see [Goldsmith 90, chapter 4].
²See [Dresher 90] for one such parameter scheme.

[Figure 1: Perceptron model used in simulations: a single output unit (perceptron) connected to an input layer of 2 x 13 units, i.e. a 13-syllable buffer.]

2 PERCEPTRON SIMULATIONS

In separate experiments, we trained a perceptron to produce the stress pattern of each of these languages. Two input representations were used. In the syllabic representation, used for QI patterns only, a syllable was represented as a [1 1] vector, and [0 0] represented no syllable. In the weight-string representation, which was necessary for QS languages, the input patterns used were [1 0] for a heavy syllable, [0 1] for a light syllable, and [0 0] for no syllable. For stress systems with up to two levels of stress, the output targets used in training were 1.0 for primary stress, 0.5 for secondary stress, and 0 for no stress. For stress systems with three levels of stress, the output targets were 0.6 for secondary stress, 0.35 for tertiary stress, and 1.0 and 0 respectively for primary stress and no stress. The input data set for all stress systems consisted of all word-forms of up to seven syllables. With the syllabic input representation there are 7 of these, and with the weight-string representation there are 255.
The perceptron's input array was a buffer of 13 syllables; each word was processed one syllable at a time by sliding it through the buffer (see Figure 1). The desired output at each step was the stress level of the middle syllable of the buffer. Connection weights were adjusted at each step using the back-propagation learning algorithm [Rumelhart 86]. One epoch consisted of one presentation of the entire training set. The network was trained for as many epochs as necessary to ensure that the stress value produced by the perceptron was within 0.1 of the target value, for each syllable of the word, for all words in the training set. A learning rate of 0.05 and momentum of 0.90 was used in all simulations. Initial weights were uniformly distributed random values in the range ±0.5. Each simulation was run at least three times, and the learning times averaged.

REF  LANGUAGE    DESCRIPTION OF STRESS PATTERN                                        EXAMPLES

Quantity-Insensitive Languages:
L1   Latvian     Fixed word-initial stress.                                           S1S0S0S0S0S0S0
L2   French      Fixed word-final stress.                                             S0S0S0S0S0S0S1
L3   Maranungku  Primary stress on first syllable, secondary stress on alternate
                 succeeding syllables.                                                S1S0S2S0S2S0S2
L4   Weri        Primary stress on last syllable, secondary stress on alternate
                 preceding syllables.                                                 S2S0S2S0S2S0S1
L5   Garawa      Primary stress on first syllable, secondary stress on penultimate
                 syllable, tertiary stress on alternate syllables preceding the
                 penult, no stress on second syllable.                                S1S0S0S3S0S2S0
L6   Lakota      Primary stress on second syllable.                                   S0S1S0S0S0S0S0
L7   Swahili     Primary stress on penultimate syllable.                              S0S0S0S0S0S1S0
L8   Paiute      Primary stress on second syllable, secondary stress on alternate
                 succeeding syllables.                                                S0S1S0S2S0S2S0
L9   Warao       Primary stress on penultimate syllable, secondary stress on
                 alternate preceding syllables.                                       S0S2S0S2S0S1S0

Quantity-Sensitive Languages:
L10  Koya        Primary stress on first syllable, secondary stress on heavy
                 syllables. (Heavy = closed syllable or syllable with long vowel.)    L1L0L0H2L0L0L0
                                                                                      L1L0L0L0L0L0L0
L11  Eskimo      (Primary) stress on final and heavy syllables.
                 (Heavy = closed syllable.)                                           L0L0L0H1L0L0L1
                                                                                      L0L0L0L0L0L0L1
L12  Gurkhali    Primary stress on first syllable except when first syllable light
                 and second syllable heavy. (Heavy = long vowel.)                     L1L0L0H0L0L0L0
                                                                                      L0H1L0H0L0L0L0
L13  Yapese      Primary stress on last syllable except when last is light and
                 penultimate heavy. (Heavy = long vowel.)                             L0L0L0H0L0L0L1
                                                                                      L0H0L0H0L0H1L0
L14  Ossetic     Primary stress on first syllable if heavy, else on second
                 syllable. (Heavy = long vowel.)                                      H1L0L0H0L0L0L0
                                                                                      L0L1L0L0L0L0L0
L15  Rotuman     Primary stress on last syllable if heavy, else on penultimate
                 syllable. (Heavy = long vowel.)                                      L0L0L0H0L0L0H1
                                                                                      L0L0L0L0L0L1L0
L16  Komi        Primary stress on first heavy syllable, or on last syllable if
                 none heavy. (Heavy = long vowel.)                                    L0L0H1L0L0H0L0
                                                                                      L0L0L0L0L0L0L1
L17  Cheremis    Primary stress on last heavy syllable, or on first syllable if
                 none heavy. (Heavy = long vowel.)                                    L0L0H0L0L0H1L0
                                                                                      L1L0L0L0L0L0L0
L18  Mongolian   Primary stress on first heavy syllable, or on first syllable if
                 none heavy. (Heavy = long vowel.)                                    L0L0H1L0L0H0L0
                                                                                      L1L0L0L0L0L0L0
L19  Mayan       Primary stress on last heavy syllable, or on last syllable if
                 none heavy. (Heavy = long vowel.)                                    L0L0H0L0L0H1L0
                                                                                      L0L0L0L0L0L0L1

Table 1: Stress patterns: description and example stress assignment. Examples are of stress assignment in seven-syllable words. Primary stress is denoted by the superscript 1 (e.g., S1), secondary stress by the superscript 2, tertiary stress by the superscript 3, and no stress by the superscript 0. "S" indicates an arbitrary syllable, and is used for the QI stress patterns. For QS stress patterns, "H" and "L" are used to denote heavy and light syllables, respectively.
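A minimal reconstruction of this training setup, for the Latvian pattern in the syllabic representation, is sketched below; the logistic output unit and the per-epoch convergence check are our assumptions where the text leaves implementation details open:

import numpy as np

def make_words(max_len=7):
    """All QI word forms up to max_len syllables, syllabic coding [1 1]."""
    words = []
    for n in range(1, max_len + 1):
        sylls = [[1, 1]] * n
        target = [1.0] + [0.0] * (n - 1)      # Latvian: fixed initial stress
        words.append((sylls, target))
    return words

def train(words, buf=13, lr=0.05, mom=0.9, tol=0.1, seed=None):
    rng = np.random.default_rng(seed)
    w = rng.uniform(-0.5, 0.5, size=2 * buf + 1)   # 2 x 13 inputs + bias
    dw = np.zeros_like(w)
    for epoch in range(1, 100_000):
        ok = True
        for sylls, target in words:
            pad = buf // 2
            tape = [[0, 0]] * pad + sylls + [[0, 0]] * pad
            for i, t in enumerate(target):          # slide word through buffer
                x = np.append(np.ravel(tape[i:i + buf]), 1.0)
                out = 1.0 / (1.0 + np.exp(-w @ x))  # output for middle syllable
                ok &= abs(out - t) < tol
                grad = (out - t) * out * (1 - out) * x
                dw = mom * dw - lr * grad
                w += dw
        if ok:
            return epoch                            # epochs to criterion
    return None

print(train(make_words()))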
3 PRELIMINARY ANALYSIS OF LEARNABILITY OF STRESS

The learning times differ considerably for {Latvian, French}, {Maranungku, Weri}, {Lakota, Swahili} and Garawa, as shown in the last column of Table 2. Moreover, Paiute and Warao were unlearnable with this model.³ Differences in learning times for the various stress patterns suggested that the factors ("parameters") listed below are relevant in determining learnability.

1. Inconsistent Primary Stress (IPS): it is computationally expensive to learn the pattern if neither edge receives primary stress except in mono- and di-syllables; this can be regarded as an index of computational complexity that takes the values {0, 1}: 1 if an edge receives primary stress inconsistently, and 0 otherwise.

2. Stress clash avoidance (SCA): if the components of a stress pattern can potentially lead to stress clash,⁴ then the language may either actually permit such stress clash, or it may avoid it. This index takes the values {0, 1}: 0 if stress clash is permitted, and 1 if stress clash is avoided.

3. Alternation (Alt): an index of learnability with value 0 if there is no alternation, and value 1 if there is. Alternation refers to a stress pattern that repeats on alternate syllables.

4. Multiple Primary Stresses (MPS): has value 0 if there is exactly one primary stress, and value 1 if there is more than one primary stress. It has been assumed that a repeating pattern of primary stresses will be on alternate, rather than adjacent, syllables. Thus, [Alternation=0] implies [MPS=0]. Some of the hypothetical stress patterns examined below include ones with more than one primary stress; however, as far as is known, no actually occurring QI stress pattern has more than one primary stress.

5. Multiple Stress Levels (MSL): has value 0 if there is a single level of stress (primary stress only), and value 1 otherwise.

Note that it is possible to order these factors with respect to each other to form a five-digit binary string characterizing the ease/difficulty of learning. That is, the computational complexity of learning a stress pattern can be characterized as a 5-bit binary number whose bits represent the five factors above, in decreasing order of significance. Table 2 shows that this characterization captures the learning times of the QI patterns quite accurately. As an example of how to read Table 2, note that Garawa takes longer to learn than Latvian (165 vs. 17 epochs). This is reflected in the parameter setting for Garawa, "01101", being lexicographically greater than that for Latvian, "00000". A further noteworthy point is that this framework provides an account of the non-learnability of Paiute and Warao, viz. that stress patterns whose parameter string is lexicographically greater than "10000" are unlearnable by the perceptron.

³They were learnable in a three-layer model, which exhibited a similar ordering of learning times [Gupta 92].
⁴Placement of stress on adjacent syllables.
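This ordering can be checked mechanically. In the sketch below, the bit-strings for Latvian ("00000") and Garawa ("01101") are taken from the text; the remaining tuples are our transcription from the parameter definitions and Table 2:

# (IPS, SCA, Alt, MPS, MSL) per QI language
params = {
    "Latvian":    (0, 0, 0, 0, 0), "French":  (0, 0, 0, 0, 0),
    "Maranungku": (0, 0, 1, 0, 1), "Weri":    (0, 0, 1, 0, 1),
    "Garawa":     (0, 1, 1, 0, 1),
    "Lakota":     (1, 0, 0, 0, 0), "Swahili": (1, 0, 0, 0, 0),
    "Paiute":     (1, 0, 1, 0, 1), "Warao":   (1, 0, 1, 0, 1),
}

def complexity(bits):
    """5-bit string, most significant factor first."""
    return "".join(map(str, bits))

for lang in sorted(params, key=lambda l: complexity(params[l])):
    s = complexity(params[lang])
    status = "learnable" if s <= "10000" else "unlearnable"  # lexicographic bound
    print(f"{lang:11s} {s} {status}")

Sorting by the string reproduces the ordering of learning times in Table 2, and the lexicographic cutoff at "10000" separates the learnable patterns from Paiute and Warao.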
4 TESTING THE QI LEARNABILITY PREDICTIONS

We devised a series of thirty artificial QI stress patterns (each a variation on some language in Table 1) to examine our parameter scheme in more detail. The details of the patterns are not crucial for present purposes (see [Gupta 92] for details). What is important to note is that the learnability predictions generated by the analytical scheme described in the previous section show good agreement with actual perceptron learning experiments on these patterns. The learning results are summarized in Table 4. It can be seen that the 5-bit characterization fits the learning times of various actual and hypothetical patterns reasonably well (although there are exceptions; for example, the hypothetical stress patterns with reference numbers h21 through h25 have a higher 5-bit characterization than other stress patterns, but lower learning times). Thus, the "complexity measure" suggested here appears to identify a number of factors relevant to the learnability of QI stress patterns within a minimal two-layer connectionist architecture. It also assesses their relative impacts. The analysis is undoubtedly a simplification, but it provides a completely novel framework within which to relate the various learning results. The important point to note is that this analytical framework arises from a consideration of (a) the nature of the stress systems, and (b) the learning results from simulations. That is, this framework is empirically based, and makes no reference to abstract constructs of the kind that linguistic theory employs. Nevertheless, it provides a descriptive framework, much as the linguistic theory does.

IPS SCA Alt MPS MSL   QI LANGUAGE   REF   EPOCHS (syllabic)
 0   0   0   0   0    Latvian       L1    17
 0   0   0   0   0    French        L2    16
 0   0   1   0   1    Maranungku    L3    37
 0   0   1   0   1    Weri          L4    34
 0   1   1   0   1    Garawa        L5    165
 1   0   0   0   0    Lakota        L6    255
 1   0   0   0   0    Swahili       L7    254
 1   0   1   0   1    Paiute        L8    **
 1   0   1   0   1    Warao         L9    **

Table 2: Preliminary analysis of learning times for QI stress systems, using the syllabic input representation. IPS = Inconsistent Primary Stress; SCA = Stress Clash Avoidance; Alt = Alternation; MPS = Multiple Primary Stresses; MSL = Multiple Stress Levels. References L1-L9 refer to Table 1.

QI LANGS     REF   TIME        QS LANGS    REF   TIME
Latvian      L1    2           Koya        L10   2
French       L2    2           Eskimo      L11   3
Maranungku   L3    3           Gurkhali    L12   19
Weri         L4    3           Yapese      L13   19
Garawa       L5    7           Ossetic     L14   30
Lakota       L6    10          Rotuman     L15   29
Swahili      L7    10          Komi        L16   216
Paiute       L8    **          Cheremis    L17   212
Warao        L9    **          Mongolian   L18   2306
                               Mayan       L19   2298

Table 3: Summary of results and analysis of QI and QS learning (using the weight-string input representation). Agg = Aggregative Information; IPS = Inconsistent Primary Stress; SCA = Stress Clash Avoidance; Alt = Alternation; MPS = Multiple Primary Stresses; MSL = Multiple Stress Levels. References index into Table 1. Time is learning time in epochs. [The per-language settings of the six parameters are not recoverable from the source scan.]

5 INCORPORATING QS SYSTEMS INTO THE ANALYSIS

Consideration of the QS stress patterns led to refinement of the IPS parameter without changing its setting for the QI patterns. This parameter is modified so that its value indicates the proportion of cases in which primary stress is not assigned at the edge of a word. Additionally, through analysis of connection weights for QS patterns, a sixth parameter, Aggregative Information, is added as a further index of computational complexity.
6. Aggregative Information (Agg): has value 0 if no aggregative information is required (single-positional information suffices); 1 if one kind of aggregative information is required; and 2 if two kinds of aggregative information are required.

Detailed discussion of the analysis leading to these refinements is beyond the scope of this paper; the interested reader is referred to [Gupta 92]. The point we wish to make here is that, with these modifications, the same parameter scheme can be used for both the QI and QS language classes, with good learnability predictions within each class, as shown in Table 3. Note that in this table, learning times for all languages are reported in terms of the weight-string representation (255 input patterns) rather than the unweighted syllabic representation (7 input patterns) used for the initial QI studies. Both the QI and QS results fall into a single analysis within this generalized parameter scheme and weight-string representation, but with a less perfect fit than the within-class results.

LANGUAGE                       REF   EPOCHS (syllabic)
Latvian                        L1      17
French                         L2      16
Latvian2stress                 h1      21
Latvian3stress                 h2      11
French2stress                  h3      23
French3stress                  h4      14
Latvian2edge                   h5      30
Latvian2edge2stress            h6      37
(impossible)                    -       -
Maranungku                     L3      37
Weri                           L4      34
Maranungku3stress              h7      43
Weri3stress                    h8      41
Latvian2edge2stress-alt        h9      58
Garawa-SC                      h10     38
Garawa2stress-SC               h11     50
Maranungku1stress              h12     61
Weri1stress                    h13     65
Latvian2edge-alt               h14     78
Garawa1stress-SC               h15     88
Latvian2edge2stress-1alt       h16     85
(impossible)                    -       -
Garawa-non-alt                 h17    164
Latvian3stress2edge-SCA        h18    163
Latvian2edge-SCA               h19    194
Latvian2edge2stress-SCA        h20    206
Garawa                         L5     165
Garawa2stress                  h21     71
Latvian2edge2stress-alt-SCA    h22     91
Garawa1stress                  h23    121
Latvian2edge-alt-SCA           h24    126
Latvian2edge2stress-1alt-SCA   h25    129
Lakota                         L6     255
Swahili                        L7     254
Lakota2stress                  h26     **
Lakota2edge                    h27     **
Lakota2edge2stress             h28     **
Paiute                         L8      **
Warao                          L9      **
Lakota-alt                     h29     **
Lakota2stress-alt              h30     **

Table 4: Analysis of Quantity-Insensitive learning using the syllabic input representation. IPS=Inconsistent Primary Stress; SCA=Stress Clash Avoidance; Alt=Alternation; MPS=Multiple Primary Stresses; MSL=Multiple Stress Levels. References L1-L9 index into Table 1.

6 DISCUSSION

Traditional linguistic analysis has devised abstract theoretical constructs such as "metrical foot" to describe linguistic stress systems. Learnability arguments were then used to determine default parameter settings (e.g., whether feet should by default be assumed to be bounded or unbounded, left or right dominant, etc.) based on the feasibility of logically deducing correct settings from an initial state. As an example, in one analysis [Dresher 90, p. 191], "metrical feet" are taken to be "iterative" by default, since there is evidence that can cause revision of this default if it turns out to be the incorrect setting, but there might not be such disconfirming evidence if the feet were by default taken to be "non-iterative". We provide an alternative to logical deduction arguments for determining "markedness" of parameter values, by measuring learnability (and hence markedness) empirically. The parameters of our novel analysis generate both a partial description of each stress pattern and a prediction of its learnability.
Furthermore, our parameters encode linguistically salient concepts (e.g., stress clash avoidance) as well as concepts that have computational significance (single-positional vs. aggregative information). Although our analyses do not explicitly invoke theoretical linguistic constructs such as metrical feet, there are suggestive similarities between such constructs and the weight patterns the perceptron develops [Gupta 91]. In conclusion, this work offers a fresh perspective on a well-studied linguistic domain, and suggests that machine learning techniques in conjunction with more traditional tools might provide the basis for a new approach to the investigation of language.

Acknowledgements

We would like to acknowledge the feedback provided by Deirdre Wheeler throughout the course of this work. The first author would like to thank David Evans for access to exceptional computing facilities at Carnegie Mellon's Laboratory for Computational Linguistics, and Dan Everett, Brian MacWhinney, Jay McClelland, Eric Nyberg, Brad Pritchett and Steve Small for helpful discussion of earlier versions of this paper. Of course, none of them is responsible for any errors. The second author was supported by a grant from Hughes Aircraft Corporation, and by the Office of Naval Research under contract number N00014-86-K-0678.

References

[Dresher 90] Dresher, B., & Kaye, J., A Computational Learning Model for Metrical Phonology, Cognition 34, 137-195, 1990.

[Goldsmith 90] Goldsmith, J., Autosegmental and Metrical Phonology, Basil Blackwell, Oxford, England, 1990.

[Gupta 91] Gupta, P., & Touretzky, D., What a Perceptron Reveals about Metrical Phonology, Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, 334-339, Lawrence Erlbaum, Hillsdale, NJ, 1991.

[Gupta 92] Gupta, P., & Touretzky, D., Connectionist Models and Linguistic Theory: Investigations of Stress Systems in Language, Manuscript, 1992.

[Hayes 80] Hayes, B., A Metrical Theory of Stress Rules, doctoral dissertation, Massachusetts Institute of Technology, Cambridge, MA, 1980. Circulated by the Indiana University Linguistics Club, 1981.

[Rumelhart 86] Rumelhart, D., Hinton, G., & Williams, R., Learning Internal Representations by Error Propagation, in D. Rumelhart, J. McClelland & the PDP Research Group, Parallel Distributed Processing, Volume 1: Foundations, MIT Press, Cambridge, MA, 1986.
Hierarchical Optimistic Region Selection driven by Curiosity

Odalric-Ambrym Maillard
Lehrstuhl für Informationstechnologie
Montanuniversität Leoben
Leoben, A-8700, Austria
[email protected]

Abstract

This paper aims to take a step towards making the term "intrinsic motivation" from reinforcement learning theoretically well founded, focusing on curiosity-driven learning. To that end, we consider the setting where, a fixed partition P of a continuous space X being given, and a process ν defined on X being unknown, we are asked to sequentially decide which cell of the partition to select as well as where to sample ν in that cell, in order to minimize a loss function that is inspired from previous work on curiosity-driven learning. The loss on each cell consists of one term measuring a simple worst case quadratic sampling error, and a penalty term proportional to the range of the variance in that cell. The corresponding problem formulation extends the setting known as active learning for multi-armed bandits to the case when each arm is a continuous region, and we show how an adaptation of recent algorithms for that problem and of hierarchical optimistic sampling algorithms for optimization can be used in order to solve this problem. The resulting procedure, called Hierarchical Optimistic Region SElection driven by Curiosity (HORSE.C), is provided together with a finite-time regret analysis.

1 Introduction

In this paper, we focus on the setting of intrinsically motivated reinforcement learning (see Oudeyer and Kaplan [2007], Baranes and Oudeyer [2009], Schmidhuber [2010], Graziano et al. [2011]), which is an important emergent topic that proposes new difficult and interesting challenges for the theorist. Indeed, while some formal objective criteria have been proposed to implement specific notions of intrinsic rewards (see Jung et al. [2011], Martius et al. [2007]), so far much - and only - experimental work has been carried out for this problem, often with interesting output (see Graziano et al. [2011], Mugan [2010], Konidaris [2011]) but unfortunately no performance guarantee validating a proposed approach. Thus proposing such an analysis may have great immediate consequences for validating some experimental studies.

Motivation. A typical example is the work of Baranes and Oudeyer [2009] about curiosity-driven learning (and later on Graziano et al. [2011], Mugan [2010], Konidaris [2011]), where a precise algorithm is defined together with an experimental study, yet no formal goal is defined, and no analysis is performed either. They consider a so-called sensory-motor space X := S × M ⊆ [0, 1]^d, where S is a (continuous) state space and M is a (continuous) action space. There is no reward, yet one can consider that the goal is to actively select and sample subregions of X for which a notion of "learning progress" - this intuitively measures the decay of some notion of error when successively sampling into one subregion - is maximal. Two key components are advocated in Baranes and Oudeyer [2009], in order to achieve successful results (despite that success is a fuzzy notion):

- The use of a hierarchy of regions, where each region is progressively split into sub-regions.
- Splitting leaf-regions in two based on the optimization of the dissimilarity, amongst the regions, of the learning progress. The idea is to identify regions with a learning complexity that is globally constant in that region, which also provides better justification for allocating samples between identified regions.
We believe it is possible to go one step towards a full performance analysis of such algorithms, by relating the corresponding active sampling problem to existing frameworks.

Contribution. This paper aims to take a step towards making the term "intrinsic motivation" from reinforcement learning theoretically well founded, focusing on curiosity-driven learning. We introduce a mathematical framework in which a metric space (which intuitively plays the role of the state-action space) is divided into regions and a learner has to sample from an unknown random function in a way that reduces a notion of error measure the most. This error consists of two terms; the first one is a robust measure of the quadratic error between the observed samples and their unknown mean, and the second one penalizes regions with non-constant learning complexity, thus enforcing the notion of curiosity. The paper focuses on how to choose the region to sample from, when a partition of the space is provided. The resulting problem formulation can be seen as a non-trivial extension of the setting of active learning in multi-armed bandits (see Carpentier et al. [2011] or Antos et al. [2010]), where the main idea is to estimate the variance of each arm and sample proportionally to it, to the case when each arm is a region as opposed to a point. In order to deal with this difficulty, the maximal and minimal variance inside each region is tracked by means of a hierarchical optimization procedure, in the spirit of the HOO algorithm from Bubeck et al. [2011]. This leads to a new procedure called Hierarchical Optimistic Region SElection driven by Curiosity (HORSE.C) for which we provide a theoretical performance analysis.

Outline. The outline of the paper is the following. In Section 2 we introduce the precise setting and define the objective function. Section 3 defines our assumptions. Then in Section 4 we present the HORSE.C algorithm. Finally in Section 5, we provide the main Theorem 1 that gives performance guarantees for the proposed algorithm.

2 Setting: Robust region allocation with curiosity-inducing penalty

Let X be a metric space and let Y ⊆ R^d be a normed space, equipped with the Euclidean norm ||·||. We consider an unknown Y-valued process defined on X, written ν : X → M₁⁺(Y), where M₁⁺(Y) refers to the set of all probability measures on Y, such that for all x ∈ X, the random variable Y ∼ ν(x) has mean μ(x) ∈ R^d and covariance matrix Σ(x) ∈ M_{d,d}(R), assumed to be diagonal. We thus introduce for convenience the notation σ(x) := trace(Σ(x)), where trace is the trace operator (this corresponds to the variance in dimension 1). We call X the input space or sampling space, and Y the output space or value space.

Intuition. Intuitively, when applied to the setting of Baranes and Oudeyer [2009], X := S × A is the space of state-action pairs, where S is a continuous state space and A a continuous action space, ν is the transition kernel of an unknown MDP, and finally Y := S. This is the reason why we consider Y ⊆ R^d and not only Y ⊆ R as would seem more natural. One difference is that we assume (see Section 3) that we can sample anywhere in X, which is a restrictive yet common assumption in the reinforcement learning literature. How to get rid of this assumption is an open and challenging question that is left for future work.

Sampling error and robustness. Let us consider a sequential sampling process on X, i.e. a process that samples at time t a value Y_t ∼ ν(X_t) at point X_t,
where X_t is an F_{<t}-measurable function of the past inputs and outputs {(X_s, Y_s)}_{s<t}. It is natural to look at the following quantity, that we call the average noise vector ε_t:

    ε_t := (1/t) Σ_{s=1}^{t} ( Y_s − μ(X_s) ) ∈ R^d.

One interesting property is that this is a martingale difference sequence, which means that the norm of this vector enjoys a concentration property. More precisely (see [Maillard, 2012, Lemma 1] in the extended version of the paper), we have for all deterministic t > 0

    E[ ||ε_t||² ] = E[ (1/t²) Σ_{s=1}^{t} σ(X_s) ].

A similar property holds for a region R ⊆ X that has been sampled n_t(R) times, and in order to be robust against a bad sampling strategy inside a region, it is natural to look at the worst case error, which we define as

    e_R(n_t) := sup_{x∈R} σ(x) / n_t(R).

One reason for looking at robustness is that, for instance, in the case we work with an MDP, we are generally not completely free to choose the sample X_s ∈ S × A: we can only choose the action, and the next state is generally given by Nature. Thus, it is important to be able to estimate this worst case error so as to prevent bad situations.

Goal. Now let P be a fixed, known partition of the space X and consider the following game. The goal of an algorithm is, at each time step t, to propose one point x_t where to sample the space X, so that its allocation of samples {n_t(R)}_{R∈P} (that is, the number of points sampled in each region) minimizes some objective function. Thus, the algorithm is free to sample everywhere in each region, with the goal that the total number of points chosen in each region is optimal in some sense. A simple candidate for this objective function would be the following

    L_P(n_t) := max{ e_R(n_t) ; R ∈ P },

however, in order to incorporate a notion of curiosity, we would also like to penalize regions that have a variance term σ that is non-homogeneous (i.e. the less homogeneous, the more samples we allocate). Indeed, if a region has constant variance, then we do not really need to understand more of its internal structure, and thus it is better to focus on another region that has very heterogeneous variance. For instance, one would like to split such a region into several homogeneous parts, which is essentially the idea behind section C.3 of Baranes and Oudeyer [2009]. We thus add a curiosity-penalization term to the previous objective function, which leads us to define the pseudo-loss of an allocation n_t := {n_t(R)}_{R∈P} in the following way:

    L_P(n_t) := max{ e_R(n_t) + λ|R| ( max_{x∈R} σ(x) − min_{x∈R} σ(x) ) ; R ∈ P }.    (1)

Indeed, this means that we do not want to focus just on regions with high variance, but also trade off with highly heterogeneous regions, which is coherent with the notion of curiosity (see Oudeyer and Kaplan [2007]). For convenience, we also define the pseudo-loss of a region R by

    L_R(n_t) := e_R(n_t) + λ|R| ( max_{x∈R} σ(x) − min_{x∈R} σ(x) ).

Regret. The regret (or loss) of an allocation algorithm at time T is defined as the difference between the cumulated pseudo-loss of the allocations n_t = {n_{R,t}}_{R∈P} proposed by the algorithm and that of the best allocation strategy n_t* = {n*_{R,t}}_{R∈P} at each time step; we define

    R_T := Σ_{t=|P|}^{T} ( L_P(n_t) − L_P(n_t*) ),

where an optimal allocation at time t is defined by

    n_t* ∈ argmin{ L_P(n_t) ; {n_t(R)}_{R∈P} is such that Σ_{R∈P} n_t(R) = t }.

Note that the sum starts at t = |P| for a technical reason, since for t < |P|, whatever the allocation, there is always at least one region with no sample, and thus L_P(n_t) = ∞.
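To make the objective concrete, here is a small numerical sketch (ours, with made-up per-region quantities) of the pseudo-loss (1) and of a simple greedy allocation that repeatedly gives the next sample to the region with the largest current pseudo-loss; this only illustrates the quantities involved, and is not the HORSE.C algorithm itself:

```python
# Illustration (ours) of the pseudo-loss (1) on a toy partition. Each region R
# carries sup σ, the range max σ − min σ, and a volume |R|; the true σ is of
# course unknown to a learner, so this only illustrates the target quantity.

lam = 0.5  # penalization parameter λ

# (sup_sigma, sigma_range, volume) per region of the partition P
regions = {"R1": (4.0, 3.5, 0.25), "R2": (1.0, 0.1, 0.25), "R3": (2.0, 1.8, 0.5)}

def region_loss(name, n):
    sup_sigma, sigma_range, vol = regions[name]
    return sup_sigma / n[name] + lam * vol * sigma_range  # e_R(n) + λ|R|(max σ − min σ)

def partition_loss(n):
    return max(region_loss(r, n) for r in regions)        # L_P(n)

# Greedy allocation: give each new sample to the currently worst region.
n = {r: 1 for r in regions}          # one sample per region so that L_P is finite
for t in range(len(regions), 100):
    worst = max(regions, key=lambda r: region_loss(r, n))
    n[worst] += 1

print(n, partition_loss(n))
```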
Example 1. In the special case when X = {1, . . . , K} is finite with K ≤ T, and when P is the complete partition (each cell corresponds to exactly one point), the penalization term is canceled. Thus the problem reduces to the choice of the quantities n_t(i) for each arm i, and the loss of an allocation simply becomes

    L(n_t) := max{ σ(i) / n_t(i) ; 1 ≤ i ≤ K }.

This almost corresponds to the already challenging setting analyzed for instance in Carpentier et al. [2011] or Antos et al. [2010]. The difference is that we are interested in the cumulative regret of our allocation instead of only the regret suffered for the last round, as considered in Carpentier et al. [2011] or Antos et al. [2010]. Also, we directly target σ(i)/n_t(i), whereas they consider the mean sampling error (but both terms are actually of the same order). Thus the setting we consider can be seen as a generalization of these works to the case when each arm corresponds to a continuous sampling domain.

3 Assumptions

In this section, we introduce some mild assumptions. We essentially assume that the unknown distribution is such that it has sub-Gaussian noise, and smooth mean and variance functions. These are actually very mild assumptions. Concerning the algorithm, we consider that it can use a partition tree of the space, and that this tree is essentially not degenerate (a typical binary tree that satisfies all the following assumptions is such that each cell is split in two children of equal volume). Such assumptions on trees have been extensively discussed for instance in Bubeck et al. [2011].

Sampling. At any time, we assume that we are able to sample at any point in X, i.e. we assume we have a generative model¹ of the unknown distribution ν.

Unknown distribution. We assume that ν is sub-Gaussian, meaning that for all fixed x ∈ X,

    ∀λ ∈ R^d,  ln E[ exp(⟨λ, Y − μ(X)⟩) ] ≤ λᵀ Σ(x) λ / 2,

and has diagonal covariance matrix in each point². The function μ is assumed to be Lipschitz w.r.t. a metric ℓ₁, i.e. it satisfies

    ∀x, x′ ∈ X,  ||μ(x) − μ(x′)|| ≤ ℓ₁(x, x′).

Similarly, the function σ is assumed to be Lipschitz w.r.t. a metric ℓ₂, i.e. it satisfies

    ∀x, x′ ∈ X,  |σ(x) − σ(x′)| ≤ ℓ₂(x, x′).

Hierarchy. We assume that Y is a convex and compact subset of [0, 1]^d. We consider an infinite binary tree T whose nodes correspond to regions of X. A node is indexed by a pair (h, i), where h ≥ 0 is the depth of the node in T and 0 ≤ i < 2^h is the position of the node at depth h. We write R(h, i) ⊆ X for the region associated with node (h, i). The regions are fixed in advance, are all assumed to be measurable with positive measure, and must satisfy that for each h ≥ 1, {R(h, i)}_{0≤i<2^h} is a partition of X that is compatible with depth h − 1, where R(0, 0) := X; in particular for all h ≥ 0, for all 0 ≤ i < 2^h,

    R(h, i) = R(h + 1, 2i) ∪ R(h + 1, 2i + 1).

In dimension d, a standard way to define such a tree is to split each parent node in half along the largest side of the corresponding hyper-rectangle; see Bubeck et al. [2011] for details, and the sketch below. For a finite sub-tree T_t of T, we write Leaf(T_t) for the set of all leaves of T_t. For a region (h, i) ∈ T_t, we denote by C_t(h, i) the set of its children in T_t, and by T_t(h, i) the subtree of T_t starting with root node (h, i).

Algorithm and partition. The partition P is assumed to be such that each of its regions R corresponds to one region R(h, i) ∈ T; equivalently, there exists a finite sub-tree T₀ ⊆ T such that Leaf(T₀) = P. An algorithm is only allowed to expand one node of T_t at each time step t.
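A minimal sketch (ours) of the standard hyper-rectangle construction just mentioned: each node stores a box R(h, i) ⊆ [0, 1]^d and is split in half along its largest side, so that the nodes at any fixed depth h form a partition of X:

```python
# Sketch (ours) of the hierarchical partition of Section 3: R(h, i) is a
# hyper-rectangle, split in half along its largest side, so that
# {R(h, i)}_{0 <= i < 2^h} partitions X = [0, 1]^d at every depth h.

def root(d):
    return [(0.0, 1.0)] * d  # R(0, 0) = [0, 1]^d as a list of (low, high) sides

def split(box):
    k = max(range(len(box)), key=lambda j: box[j][1] - box[j][0])  # largest side
    lo, hi = box[k]
    mid = 0.5 * (lo + hi)
    left, right = list(box), list(box)
    left[k], right[k] = (lo, mid), (mid, hi)
    return left, right  # children R(h+1, 2i) and R(h+1, 2i+1)

def region(h, i, d):
    """Return R(h, i) by following the binary expansion of i from the root."""
    box = root(d)
    for bit in (format(i, f"0{h}b") if h > 0 else ""):
        box = split(box)[int(bit)]
    return box

print(region(3, 5, d=2))  # one of the 8 cells at depth 3 of [0, 1]^2
```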
In the sequel, we write indifferently P ⊆ T and (h, i) ∈ T, or P and R(h, i) ⊆ X, to refer to the partition or one of its cells.

Exponential decays. Finally, we assume that the ℓ₁ and ℓ₂ diameters of the region R(h, i), as well as its volume |R(h, i)|, decay at exponential rate, in the sense that there exist positive constants ρ, ρ₁, ρ₂ and c, c₁, c₂ such that for all h ≥ 0,

    |R(h, i)| ≤ c ρ^h,   max_{x,x′∈R(h,i)} ℓ₁(x, x′) ≤ c₁ ρ₁^h   and   max_{x,x′∈R(h,i)} ℓ₂(x, x′) ≤ c₂ ρ₂^h.

Similarly, we assume that there exist positive constants c̄ ≤ c, c̄₁ ≤ c₁ and c̄₂ ≤ c₂ such that for all h ≥ 0,

    |R(h, i)| ≥ c̄ ρ^h,   max_{x,x′∈R(h,i)} ℓ₁(x, x′) ≥ c̄₁ ρ₁^h   and   max_{x,x′∈R(h,i)} ℓ₂(x, x′) ≥ c̄₂ ρ₂^h.

This assumption is made to avoid degenerate trees and for general purpose only. It actually holds for any reasonable binary tree.

¹ Using the standard terminology in Reinforcement Learning.
² This assumption is only here to make calculations easier and avoid nasty technical considerations that anyway do not affect the order of the final regret bound but only concern second order terms.

4 Allocation algorithm

In this section, we now introduce the main algorithm of this paper in order to solve the problem considered in Section 2. It is called Hierarchical Optimistic Region SElection driven by Curiosity. Before proceeding, we need to define some quantities.

4.1 High-probability upper-bound and lower-bound estimations

Let us consider the following (biased) estimator

    σ̂²_t(R) := (1/N_t(R)) Σ_{s=1}^{t} ||Y_s||² I{X_s ∈ R} − || (1/N_t(R)) Σ_{s=1}^{t} Y_s I{X_s ∈ R} ||².

Apart from a small multiplicative bias by a factor (N_t(R) − 1)/N_t(R), it has, more importantly, a positive bias due to the fact that the random variables do not share the same mean; this phenomenon is the same as in the estimation of the average variance for independent but non-i.i.d. variables with different means {μ_i}_{i≤n}, where the bias would be given by (1/n) Σ_{i=1}^{n} [ μ_i − (1/n) Σ_{j=1}^{n} μ_j ]² (see Lemma 5). In our case, it is thus always non-negative, and under the assumption that μ is Lipschitz w.r.t. the metric ℓ₁, it is fortunately bounded by d₁(R)², the squared diameter of R w.r.t. the metric ℓ₁. We then introduce the two following key quantities, defined for all x ∈ R and δ ∈ [0, 1] by

    U_t(R, x, δ) := σ̂²_t(R) + (1 + 2√d) √( d ln(2d/δ) / (2N_t(R)) ) + d ln(2d/δ) / (2N_t(R)) + (1/N_t(R)) Σ_{s=1}^{t} ℓ₂(X_s, x) I{X_s ∈ R},

    L_t(R, x, δ) := σ̂²_t(R) − (1 + 2√d) √( d ln(2d/δ) / (2N_t(R)) ) − d₁(R)² − (1/N_t(R)) Σ_{s=1}^{t} ℓ₂(X_s, x) I{X_s ∈ R}.

Note that we would have preferred to replace the terms involving ln(2d/δ) with a term depending on the empirical variance, in the spirit of Carpentier et al. [2011] or Antos et al. [2010]. However, contrary to the estimation of the mean, extending the standard results valid for i.i.d. data to the case of a martingale difference sequence is non-trivial for the estimation of the variance, especially due to the additive bias resulting from the fact that the variables may not share the same mean, but also due to the absence of such results for U-statistics (up to the author's knowledge). For that reason such an extension is left for future work.

The following results (we provide the proof in [Maillard, 2012, Appendix A.3]) show that U_t(R, x, δ) is a high-probability upper bound on σ(x) while L_t(R, x, δ) is a high-probability lower bound on σ(x).
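As a minimal numerical sketch (ours), the estimator and the two confidence quantities can be computed from the samples that fell into a region R as follows; the choice of ℓ₂ below is purely illustrative:

```python
import math

# Sketch (ours) of the biased variance estimator and of U_t / L_t above, for
# samples (x_s, y_s) that fell into a region R; each y_s is a vector in R^d,
# and ell2 plays the role of the Lipschitz metric for σ (our assumed choice).

def ell2(x, xp):
    return abs(x - xp)  # illustrative metric ℓ₂ on X ⊆ R

def sigma_hat_sq(ys):
    n, d = len(ys), len(ys[0])
    mean = [sum(y[k] for y in ys) / n for k in range(d)]
    return sum(sum(yk * yk for yk in y) for y in ys) / n - sum(m * m for m in mean)

def U_t(xs, ys, x, delta, d):
    n = len(xs)
    dev = math.sqrt(d * math.log(2 * d / delta) / (2 * n))
    return (sigma_hat_sq(ys) + (1 + 2 * math.sqrt(d)) * dev
            + d * math.log(2 * d / delta) / (2 * n)
            + sum(ell2(x_s, x) for x_s in xs) / n)

def L_t(xs, ys, x, delta, d, diam1_sq):
    n = len(xs)
    dev = math.sqrt(d * math.log(2 * d / delta) / (2 * n))
    return (sigma_hat_sq(ys) - (1 + 2 * math.sqrt(d)) * dev - diam1_sq
            - sum(ell2(x_s, x) for x_s in xs) / n)

xs = [0.1, 0.4, 0.6]
ys = [[0.2, 0.1], [0.5, 0.4], [0.7, 0.3]]      # toy observations in [0, 1]^2
print(U_t(xs, ys, 0.5, 0.05, 2), L_t(xs, ys, 0.5, 0.05, 2, 0.04))
```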
Proposition 1. Under the assumptions that Y is a convex subset of [0, 1]^d, ν is sub-Gaussian, σ is Lipschitz w.r.t. ℓ₂ and R ⊆ X is compact and convex, then

    P( ∃x ∈ X ; U_t(R, x, δ) ≤ σ(x) ) ≤ tδ.

Similarly, under the same assumptions, then

    P( ∃x ∈ X ; L_t(R, x, δ) ≤ σ(x) − b(x, R, N_t(R), δ) ) ≤ tδ,

where we introduced for convenience the quantity

    b(x, R, n, δ) := 2 max_{x′∈R} ℓ₂(x, x′) + d₁(R)² + 2(1 + 2√d) [ √( d ln(2d/δ) / (2n) ) + d ln(2d/δ) / (2n) ].

Now on the other hand, we have that (see the proof in [Maillard, 2012, Appendix A.3])

Proposition 2. Under the assumptions that Y is a convex subset of [0, 1]^d, ν is sub-Gaussian, μ is Lipschitz w.r.t. ℓ₁, σ is Lipschitz w.r.t. ℓ₂ and R ⊆ X is compact and convex, then

    P( ∃x ∈ X ; U_t(R, x, δ) ≥ σ(x) + b(x, R, N_t(R), δ) ) ≤ tδ.

Similarly, under the same assumptions, then

    P( ∃x ∈ X ; L_t(R, x, δ) ≥ σ(x) ) ≤ tδ.

4.2 Hierarchical Optimistic Region SElection driven by Curiosity (HORSE.C)

The pseudo-code of the HORSE.C algorithm is presented in Figure 1 below. This algorithm relies on the estimation of the quantities max_{x∈R} σ(x) and min_{x∈R} σ(x) in order to define which point X_{t+1} to sample at time t + 1. This point is chosen by expanding a leaf of a hierarchical tree T_t ⊆ T in an optimistic way, starting with a tree T₀ whose leaves correspond to the partition P. The intuition is the following: let us consider a node (h, i) of the tree T_t expanded by the algorithm at time t. The maximum value of σ in R(h, i) is thus achieved for one of its children nodes (h′, i′) ∈ C_t(h, i). Thus if we have computed an upper bound on the maximal value of σ in each child, then we have an upper bound on the maximum value of σ in R(h, i). Proceeding in a similar way for the lower bound, this motivates the following two recursive definitions:

    β̂⁺_t(h, i; δ) := min{ max_{x∈R(h,i)} U_t(R(h, i), x, δ) , max{ β̂⁺_t(h′, i′; δ) ; (h′, i′) ∈ C_t(h, i) } },

    β̂⁻_t(h, i; δ) := max{ min_{x∈R(h,i)} L_t(R(h, i), x, δ) , min{ β̂⁻_t(h′, i′; δ) ; (h′, i′) ∈ C_t(h, i) } }.

These values are used in order to build an optimistic estimate of the quantity L_{R(h,i)}(N_t) in region (h, i) (step 4), and then to select in which cell of the partition we should sample (step 5). Then the algorithm chooses where to sample in the selected region so as to improve the estimations of β̂⁺_t and β̂⁻_t. This is done by alternating (step 6) between expanding a leaf following a path that is optimistic according to β̂⁺_t (steps 7, 8, 9), or according to β̂⁻_t (step 11). Thus, at a high level, the algorithm performs on each cell (h, i) ∈ P of the given partition two hierarchical searches, one for the maximum value of σ in region R(h, i) and one for its minimal value. This can be seen as an adaptation of the algorithm HOO from Bubeck et al. [2011], with the main difference that we target the variance and not just the mean (this is more difficult). On the other hand, there is a strong link between step 5, where we decide to allocate samples between regions {R(h, i)}_{(h,i)∈P}, and the CH-AS algorithm from Carpentier et al. [2011].
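The two recursive quantities can be sketched as a plain tree recursion (ours; the per-node values node.U and node.L stand for max_x U_t and min_x L_t over the node's region, which we take as given here):

```python
# Sketch (ours) of the recursive definitions of β⁺ and β⁻ above: a node's
# optimistic bound is refined by the bounds of its children whenever children
# exist, mirroring how HOO-style trees tighten estimates as they grow.

class Node:
    def __init__(self, U, L, children=()):
        self.U, self.L = U, L            # max_x U_t and min_x L_t over the region
        self.children = list(children)

def beta_plus(node):
    if not node.children:
        return node.U
    return min(node.U, max(beta_plus(c) for c in node.children))

def beta_minus(node):
    if not node.children:
        return node.L
    return max(node.L, min(beta_minus(c) for c in node.children))

leaves = [Node(U=2.0, L=0.5), Node(U=3.0, L=1.0)]
root = Node(U=3.5, L=0.2, children=leaves)
print(beta_plus(root), beta_minus(root))  # -> 3.0 0.5
```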
5 Performance analysis of the HORSE.C algorithm

In this section, we are now ready to provide the main theorem of this paper, i.e. a regret bound on the performance of the HORSE.C algorithm, which is the main contribution of this work. To this end, we make use of the notion of near-optimality dimension, introduced in Bubeck et al. [2011], which measures a notion of intrinsic dimension of the maximization problem.

Definition (Near-optimality dimension). For c > 0, the c-optimality dimension of σ restricted to the region R with respect to the pseudo-metric ℓ₂ is defined as

    max{ lim sup_{ε→0} ln( N(R_{cε}, ℓ₂, ε) ) / ln(ε⁻¹) , 0 },   where R_{cε} := { x ∈ R ; σ(x) ≥ max_{x∈R} σ(x) − cε },

and where N(R_{cε}, ℓ₂, ε) is the ε-packing number of the region R_{cε}.

Let d⁺(h₀, i₀) be the c-optimality dimension of σ restricted to the region R(h₀, i₀) (see e.g. Bubeck et al. [2011]), with the constant c := 4(2c₂ + c₁²)/c̄₂. Similarly, let d⁻(h₀, i₀) be the c-optimality dimension of −σ restricted to the region R(h₀, i₀). Let us finally define the biggest near-optimality dimension of σ over the cells of the partition P to be

    d* := max{ max( d⁺(h₀, i₀), d⁻(h₀, i₀) ) ; (h₀, i₀) ∈ P }.

Theorem 1 (Regret bound for HORSE.C). Under the assumptions of Section 3, and if moreover ρ₁² ≤ ρ₂, then for all δ ∈ [0, 1], the regret of the Hierarchical Optimistic Region SElection driven by Curiosity procedure parameterized with δ is bounded with probability higher than 1 − 2δ as follows:

    R_T ≤ Σ_{t=|P|}^{T} max_{(h₀,i₀)∈P} B( h₀, n_t*(h₀, i₀), δ_t ) ( 1/n_t*(h₀, i₀) + 2λ c̄ ρ^{h₀} ),

where δ_t is a shorthand notation for the quantity δ_{n_t*(h₀,i₀),t−1}, where n_t*(h₀, i₀) is the optimal allocation at round t for the region (h₀, i₀) ∈ P, and where

    B(h₀, k, δ_{k,t}) := min_{h≥h₀} { 2c₂ ρ₂^h + c₁² ρ₁^{2h} + 2(1 + 2√d) [ √( d ln(2d/δ_{k,t}) / (2N_{h₀}(h, k)) ) + d ln(2d/δ_{k,t}) / (2N_{h₀}(h, k)) ] },

in which we have used the following quantity

    N_{h₀}(h, k) := ( C (c̄₂ ρ₂^h)^{−d*} )⁻¹ · k ρ₂^{2(h−h₀)} / ( 2 (2c₂ ρ₂^h + c₁² ρ₁^{2h})² [ 2 + 4√d + d ln(2d/δ_{k,t})/2 ] ).

Note that the assumption ρ₁² ≤ ρ₂ is only here so that d* can be defined w.r.t. the metric ℓ₂ only. We can remove it at the price of using instead a metric mixing ℓ₁ and ℓ₂ together, and of much more technical considerations. Similarly, we could have expressed the result using the local values d⁺(h₀, i₀) and d⁻(h₀, i₀) instead of the less precise d* (neither those, nor d*, need to be known by the algorithm).

Algorithm 1 The HORSE.C algorithm
Require: an infinite binary tree T, a partition P ⊆ T, δ ∈ [0, 1], λ ≥ 0.
 1: Let T₀ be such that Leaf(T₀) = P, and δ_{i,t} := 6δ / ( π² i² (2t + 1) |P| t³ ); t := 0.
 2: while true do
 3:    for each region (h, i) ∈ T_t, let δ := δ_{N_t(R(h,i)),t};
 4:    define the estimated loss L̂_t(h, i) := β̂⁺_t(h, i; δ) / N_t(R(h, i)) + λ |R(h, i)| ( β̂⁺_t(h, i; δ) − β̂⁻_t(h, i; δ) ), with by convention L̂_t(h, i) := ∞ if it is undefined;
 5:    choose the next region of the current partition P ⊆ T to sample: (H_{t+1}, I_{t+1}) := argmax{ L̂_t(h, i) ; (h, i) ∈ P };
 6:    if N_t(R(H_{t+1}, I_{t+1})) = n is odd then
 7:       sequentially select a path of children of (H_{t+1}, I_{t+1}) in T_t, defined by the initial node (H⁰_{t+1}, I⁰_{t+1}) := (H_{t+1}, I_{t+1}) and then (H^{j+1}_{t+1}, I^{j+1}_{t+1}) := argmax{ β̂⁺_t(h, i; δ_{n,t}) ; (h, i) ∈ C_t(H^j_{t+1}, I^j_{t+1}) }, until j = j_{t+1} is such that (H^{j_{t+1}}_{t+1}, I^{j_{t+1}}_{t+1}) ∈ Leaf(T_t);
 8:       expand the node (H^{j_{t+1}}_{t+1}, I^{j_{t+1}}_{t+1}) in order to define T_{t+1}, and then define the candidate child (h_{t+1}, i_{t+1}) := argmax{ β̂⁺_t(h, i; δ_{n,t}) ; (h, i) ∈ C_{t+1}(H^{j_{t+1}}_{t+1}, I^{j_{t+1}}_{t+1}) };
 9:       sample at point X_{t+1} := argmax{ U_t(R(h_{t+1}, i_{t+1}), x, δ_{n,t}) ; x ∈ R(h_{t+1}, i_{t+1}) } and receive the value Y_{t+1} ∼ ν(X_{t+1});
10:    else
11:       proceed similarly to steps 7, 8, 9 with β̂⁺_t replaced by β̂⁻_t;
12:    end if
13:    t := t + 1.
14: end while

The full proof of this theorem is reported in the appendix. The main steps of the proof are as follows. First we provide upper and lower confidence bounds for the estimation of the quantities U_t(R, x, δ) and L_t(R, x, δ). Then, we lower-bound the depth of the subtree of each region (h₀, i₀) ∈
P that contains a maximal point argmax_{x∈R(h₀,i₀)} σ(x), and proceed similarly for a minimal point. This uses the near-optimality dimension of σ and −σ in the region R(h₀, i₀), and enables us to provide an upper bound on β̂⁺_t(h, i; δ) as well as a lower bound on β̂⁻_t(h, i; δ). This then enables us to deduce bounds relating the estimated loss L̂_t(h, i) to the true loss L_{R(h,i)}(N_t). Finally, we relate the true loss of the current allocation to that of the optimal one n*_{t+1}(h₀, i₀) by discussing whether a region has been over- or under-sampled. This final part is close in spirit to the proof of the regret bound for CH-AS in Carpentier et al. [2011].

In order to better understand the gain in Theorem 1, we provide the following corollary that gives more insight into the order of magnitude of the regret.

Corollary 1. Let α := 1 + ln max{2, ρ₂^{−d*}}. Under the assumptions of Theorem 1, and assuming that the partition P of the space X is well behaved, i.e. that for all (h₀, i₀) ∈ P, n*_{t+1}(h₀, i₀) grows at least at speed Ω( ρ₂^{−2h₀} / ln(t) ), then for all δ ∈ [0, 1], with probability higher than 1 − 2δ we have

    R_T = Õ( Σ_{t=|P|}^{T} max_{(h₀,i₀)∈P} ( ln(t) / n_t*(h₀, i₀) )^{1/(2α)} ( 1/n_t*(h₀, i₀) + 2λ c̄ ρ^{h₀} ) ).

This regret term has to be compared with the typical range of the cumulative loss of the optimal allocation strategy, which is given by

    Σ_{t=|P|}^{T} L_P(n_t*) = Σ_{t=|P|}^{T} max_{(h₀,i₀)∈P} ( σ⁺(h₀, i₀) / n_t*(h₀, i₀) + 2λ c̄ ρ^{h₀} ( σ⁺(h₀, i₀) − σ⁻(h₀, i₀) ) ),

where σ⁺(h₀, i₀) := max_{x∈R(h₀,i₀)} σ(x), and similarly σ⁻(h₀, i₀) := min_{x∈R(h₀,i₀)} σ(x). Thus, this shows that, after normalization, the relative regret on each cell (h₀, i₀) is roughly of order ( ln(t) / n_t*(h₀, i₀) )^{1/(2α)}, i.e. decays at speed n_t*(h₀, i₀)^{−1/(2α)}. This shows that we are not only able to compete with the performance of the best allocation strategy, but we actually achieve the exact same performance with multiplicative factor 1, up to a second order term. Note also that, when specialized to the case of Example 1, the order of this regret is competitive with the standard results from Carpentier et al. [2011]. The loss of the variance term σ⁺(h₀, i₀)⁻¹ (that is actually a constant) here comes from the fact that we are only able to use Hoeffding-like bounds for the estimation of the variance. In order to remove it, one would need empirical Bernstein bounds for variance estimation in the case of martingale difference sequences. This is postponed to future work.

6 Discussion

In this paper, we have provided an algorithm together with a regret analysis for a problem of online allocation of samples in a fixed partition, where the objective is to minimize a loss that contains a penalty term that is driven by a notion of curiosity. A very specific case (finite state space) already corresponds to a difficult question known as active learning in the multi-armed bandit setting and has been previously addressed in the literature (e.g. Antos et al. [2010], Carpentier et al. [2011]). We have considered an extension of this problem to a continuous domain where a fixed partition of the space as well as a generative model of the unknown dynamics are given, using our curiosity-driven loss function as a measure of performance. Our main result is a regret bound for that problem, which shows that our procedure is first order optimal, i.e. achieves the same performance as the best possible allocation (thus with multiplicative constant 1).
We believe this result contributes to filling the important gap that exists between existing algorithms for the challenging setting of intrinsic reinforcement learning and a theoretical analysis of such, the HORSE.C algorithm being related in spirit to, yet simpler and less ambitious than, the RIAC algorithm from Baranes and Oudeyer [2009]. Indeed, in order to achieve the objective that RIAC tries to address, one should first remove the assumption that the partition is given: one trivial solution is to run the HORSE.C algorithm in episodes of doubling length, starting with the trivial partition, and to select at the end of each episode a possibly better partition based on computed confidence intervals; however, making efficient use of previous samples while avoiding a blow-up of candidate partitions happens to be a challenging question. Then one should relax the generative model assumption (i.e. that we can sample wherever we want), a question that shares links with a problem called autonomous exploration. Thus, even if the regret analysis of the HORSE.C algorithm is already a strong, new result that is interesting independently of such difficult specific goals and of the reinforcement learning framework (no MDP structure is required), those questions are naturally left for future work.

Acknowledgements

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 270327 (CompLACS) and no 216886 (PASCAL2).

References

András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411(29-30):2712-2728, 2010.

A. Baranes and P.-Y. Oudeyer. R-IAC: Robust Intrinsically Motivated Exploration and Active Learning. IEEE Transactions on Autonomous Mental Development, 1(3):155-169, October 2009.

Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1655-1695, 2011.

Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, and Peter Auer. Upper-confidence-bound algorithms for active learning in multi-armed bandits. In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 189-203. Springer Berlin / Heidelberg, 2011.

Vincent Graziano, Tobias Glasmachers, Tom Schaul, Leo Pape, Giuseppe Cuccu, J. Leitner, and J. Schmidhuber. Artificial Curiosity for Autonomous Space Exploration. Acta Futura (in press), (1), 2011.

Tobias Jung, Daniel Polani, and Peter Stone. Empowerment for continuous agent-environment systems. Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems, 19(1):16-39, 2011.

G.D. Konidaris. Autonomous robot skill acquisition. PhD thesis, University of Massachusetts Amherst, 2011.

Odalric-Ambrym Maillard. Hierarchical optimistic region selection driven by curiosity. HAL, 2012. URL http://hal.archives-ouvertes.fr/hal-00740418.

Georg Martius, J. Michael Herrmann, and Ralf Der. Guided self-organisation for autonomous robot development. In Proceedings of the 9th European conference on Advances in artificial life, ECAL'07, pages 766-775, Berlin, Heidelberg, 2007. Springer-Verlag.

Jonathan Mugan. Autonomous Qualitative Learning of Distinctions and Actions in a Developing Agent. PhD thesis, University of Texas at Austin, 2010.

Pierre-Yves Oudeyer and Frederic Kaplan. What is Intrinsic Motivation? A Typology of Computational Approaches.
Frontiers in neurorobotics, 1(November):6, January 2007.

J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). Autonomous Mental Development, IEEE Transactions on, 2(3):230-247, 2010.
Nonparametric Max-Margin Matrix Factorization for Collaborative Prediction

Minjie Xu, Jun Zhu and Bo Zhang
State Key Laboratory of Intelligent Technology and Systems (LITS)
Tsinghua National Laboratory for Information Science and Technology (TNList)
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
[email protected], {dcszj,dcszb}@mail.tsinghua.edu.cn

Abstract

We present a probabilistic formulation of max-margin matrix factorization and build accordingly a nonparametric Bayesian model which automatically resolves the unknown number of latent factors. Our work demonstrates a successful example that integrates Bayesian nonparametrics and max-margin learning, which are conventionally two separate paradigms and enjoy complementary advantages. We develop an efficient variational algorithm for posterior inference, and our extensive empirical studies on large-scale MovieLens and EachMovie data sets appear to justify the aforementioned dual advantages.

1 Introduction

Collaborative prediction is the task of predicting users' potential preferences on currently unrated items (e.g., movies) based on their currently observed preferences and their relations with others'. One typical setting formalizes it as a matrix completion problem, i.e., to fill in missing entries (or, preferences) into a partially observed user-by-item matrix. Often there is extra information available (e.g., users' age, gender; movies' genre, year, etc.) [10] to help with the task.

Among other popular approaches, factor-based models have been used extensively in collaborative prediction. The underlying idea behind such models is that there is only a small number of latent factors influencing the preferences. In a linear factor model, a user's rating of an item is modeled as a linear combination of these factors, with user-specific coefficients and item-specific factor values. Thus, given an N × M preference matrix for N users and M items, a K-factor model fits it with an N × K coefficient matrix U and an M × K factor matrix V as UV^T. Various computational methods have been successfully developed to implement such an idea, including probabilistic matrix factorization (PMF) [13, 12] and deterministic reconstruction/approximation error minimization, e.g., max-margin matrix factorization (M3F) with hinge loss [14, 11, 16].

One common problem in latent factor models is how to determine the number of factors, which is unknown a priori. A typical solution relies on some general model selection procedure, e.g., cross-validation, which explicitly enumerates and compares many candidate models and thus can be computationally expensive. On the other hand, probabilistic matrix factorization models lend themselves naturally to leveraging recent advances in Bayesian nonparametrics to bypass explicit model selection [17, 1]. However, it remains largely unexplored how to borrow such advantages into deterministic max-margin matrix factorization models, particularly the very successful M3F.

To address the above problem, this paper presents infinite probabilistic max-margin matrix factorization (iPM3F), a nonparametric Bayesian-style M3F model that utilizes nonparametric Bayesian techniques to automatically resolve the unknown number of latent factors in M3F models. The first key step towards iPM3F is a general probabilistic formulation of the standard M3F, which is based on the maximum entropy discrimination principle [4]. We can then principally extend it to a nonparametric model, which in theory has an unbounded number of latent factors. To avoid overfitting we impose a sparsity-inducing Indian buffet process prior on the latent coefficient matrix, selecting only an appropriate number of active factors. We develop an efficient variational method to infer posterior distributions and learn parameters (if any exist), and our extensive empirical results on MovieLens and EachMovie demonstrate appealing performances.

The rest of the paper is structured as follows. In Section 2, we briefly review the formalization of max-margin matrix factorization; in Section 3, we present a general probabilistic formulation of M3F, and then its nonparametric extension and a fully Bayesian formulation; in Section 4, we discuss how to perform learning and inference; in Section 5, we give empirical results on 2 prevalent collaborative filtering data sets; and finally, we conclude in Section 6.

2 Max-margin matrix factorization

Given a preference matrix Y ∈ R^{N×M}, which is partially observed and usually sparse, we denote the observed entry indices by I. The task of traditional matrix factorization is to find a low-rank matrix X ∈ R^{N×M} to approximate Y under some loss measure, e.g., the commonly used squared error, and use X_ij as the reconstruction of the missing entries Y_ij wherever ij ∉ I. Max-margin matrix factorization (M3F) [14] extends the model by using a sparsity-inducing norm regularizer for a low-norm factorization and adopting hinge loss for the error measure, which is applicable to binary, discrete ordinal, or categorical data. For the binary case where Y_ij ∈ {±1} and one predicts by Ŷ_ij = sign(X_ij), the optimization problem of M3F is defined as

    min_X ||X||_Σ + C Σ_{ij∈I} h(Y_ij X_ij),    (1)

where h(x) = max(0, 1 − x) is the hinge loss and ||X||_Σ is the nuclear norm of X. M3F can be equivalently reformulated as a semi-definite program (SDP) and thus learned using standard SDP solvers, but this is unfortunately very slow and can only scale up to thousands of users and items. As shown in [14], the nuclear norm can be written in a variational form, namely

    ||X||_Σ = min_{X=UV^T} (1/2) ( ||U||²_F + ||V||²_F ).    (2)

Based on this equivalence, a fast M3F model is proposed in [11], which uses gradient descent to solve an equivalent problem, only on U and V instead:

    min_{U,V} (1/2) ( ||U||²_F + ||V||²_F ) + C Σ_{ij∈I} h( Y_ij U_i V_j^T ),    (3)

where U ∈ R^{N×K} is the user coefficient matrix, V ∈ R^{M×K} the item factor matrix, and K the number of latent factors. We use U_i to denote the ith row of U, and V_j likewise. The fast M3F model can scale up to millions of users and items. But one unaddressed resulting problem is that it needs to specify the unknown number of latent factors, K, a priori. Below we present a nonparametric Bayesian approach, which effectively bypasses the model selection problem and produces very robust prediction. We also design a blockwise coordinate descent algorithm that directly solves problem (3) rather than working on a smoothing relaxation [11], and it turns out to be as efficient and accurate. To save space, we defer this part to Appendix B.
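As an illustration of problem (3) (our sketch, not the authors' code), the regularized hinge objective and a plain subgradient step can be written directly:

```python
import numpy as np

# Sketch (ours) of the fast M3F objective (3) for binary ratings and one
# subgradient step; Y is a dict {(i, j): ±1} over the observed index set I.

def m3f_objective(U, V, Y, C):
    reg = 0.5 * (np.sum(U * U) + np.sum(V * V))
    hinge = sum(max(0.0, 1.0 - y * float(U[i] @ V[j])) for (i, j), y in Y.items())
    return reg + C * hinge

def subgradient_step(U, V, Y, C, lr=0.01):
    gU, gV = U.copy(), V.copy()           # gradient of the Frobenius regularizer
    for (i, j), y in Y.items():
        if y * float(U[i] @ V[j]) < 1.0:  # hinge is active for this entry
            gU[i] -= C * y * V[j]
            gV[j] -= C * y * U[i]
    return U - lr * gU, V - lr * gV

rng = np.random.default_rng(0)
U, V = rng.normal(size=(4, 2)), rng.normal(size=(5, 2))  # toy N=4, M=5, K=2
Y = {(0, 1): 1, (2, 3): -1, (3, 0): 1}
for _ in range(200):
    U, V = subgradient_step(U, V, Y, C=1.0)
print(m3f_objective(U, V, Y, C=1.0))
```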
We can then principally extend it to a non1 parametric model, which in theory has an unbounded number of latent factors. To avoid overfitting we impose a sparsity-inducing Indian buffet process prior on the latent coefficient matrix, selecting only an appropriate number of active factors. We develop an efficient variational method to infer posterior distributions and learn parameters (if ever exist) and our extensive empirical results on MovieLens and EachMovie demonstrate appealing performances. The rest of the paper is structured as follows. In Section 2, we briefly review the formalization of max-margin matrix factorization; In Section 3, we present a general probabilistic formulation of M3 F, and then its nonparametric extension and a fully Bayesian formulation; In Section 4, we discuss how to perform learning and inference; In Section 5, we give empirical results on 2 prevalent collaborative filtering data sets; And finally, we conclude in Section 6. 2 Max-margin matrix factorization Given a preference matrix Y ? RN ?M , which is partially observed and usually sparse, we denote the observed entry indices by I. The task of traditional matrix factorization is to find a low-rank matrix X ? RN ?M to approximate Y under some loss measure, e.g., the commonly used squared error, and use Xij as the reconstruction of the missing entries Yij wherever ij ? / I. Max-margin matrix factorization (M3 F) [14] extends the model by using a sparsity-inducing norm regularizer for a low-norm factorization and adopting hinge loss for the error measure, which is applicable to binary, discrete ordinal, or categorical data. For the binary case where Yij ? {?1} and one predicts 3 by Ybij = sign(Xij ), the optimization problem of MX F is defined as min X kXk? + C h (Yij Xij ) , (1) ij?I where h(x) = max(0, 1 ? x) is the hinge loss and kXk? is the nuclear norm of X. M3 F can be equivalently reformulated as a semi-definite programming (SDP) and thus learned using standard SDP solvers, but it is unfortunately very slow and can only scale up to thousands of users and items. As shown in [14], the nuclear norm can be written in a variational form, namely kXk? = min X=U V >  1 kU k2F + kV k2F . 2 (2) Based on the equivalence, a fast M3 F model is proposed in [11], which uses gradient descent to solve an equivalent problem, only on U and V instead  X   1 min U,V 2 kU k2F + kV k2F + C h Yij Ui Vj> , (3) ij?I where U ? RN ?K is the user coefficient matrix, V ? RM ?K the item factor matrix, and K the number of latent factors. We use Ui to denote the ith row of U , and Vj likewise. The fast M3 F model can scale up to millions of users and items. But one unaddressed resulting problem is that it needs to specify the unknown number of latent factors, K, a priori. Below we present a nonparametric Bayesian approach, which effectively bypasses the model selection problem and produces very robust prediction. We also design a blockwise coordinate descent algorithm that directly solves problem (3) rather than working on a smoothing relaxation [11], and it turns out to be as efficient and accurate. To save space, we defer this part to Appendix B. 3 Nonparametric Bayesian max-margin matrix factorization Now we present the nonparametric Bayesian max-margin matrix factorization models. We start with a brief introduction to maximum entropy discrimination, which lays the basis for our methods. 3.1 Maximum entropy discrimination We consider the binary classification setting since it suffices for our model. 
Given a set of training data {(xd , yd )}D d=1 (yd ? {?1}) and a discriminant function F (x; ?) parameterized by ?, maximum entropy discrimination (MED) [4] seeks to learn a distribution p(?) rather than perform a point estimation of ? as is the case with standard SVMs that typically lack a direct probabilistic interpretation. Accordingly, MED takes expectation over the original discriminant function with respect to p(?) and has the new prediction rule y? = sign (Ep [F (x; ?)]) . 2 (4) To find p(?), MED solves the following relative-entropic regularized risk minimization problem X min KL (p(?)kp0 (?)) + C h` (yd Ep [F (xd ; ?)]) , (5) p(?) d where p0 (?) is the pre-specified prior distribution of ?, KL(pkp0 ) the Kullback-Leibler divergence, or relative entropy, between two distributions, C the regularization constant and h` (x) = max(0, `? x) (` > 0) the generalized hinge loss. By defining F as the log-likelihood ratio of a Bayesian generative model1 , MED provides an elegant way to integrate discriminative max-margin learning and Bayesian generative modeling. In fact, MED subsumes SVM as a special case and has been extended to incorporate latent variables [5, 18] and perform structured output prediction [21]. Recent work has further extended MED to unite Bayesian nonparametrics and max-margin learning [20, 19], which have been largely treated as isolated topics, for learning better classification models. The present work contributes by introducing a novel generalization of MED to handle the challenging matrix factorization problems. 3.2 Probabilistic max-margin matrix factorization Like PMF [12], we treat U and V as random variables, whose joint prior distribution is denoted by p0 (U, V ). Then, our goal is to infer their posterior distribution p(U, V )2 after a set of observations have been provided. We first consider the binary case where Yij takes value from {?1}. If the factorization, U and V , is given, we can naturally define the discriminant function F as F ((i, j); U, V ) = Ui Vj> . (6) Furthermore, since both U and V are random variables, we need to resolve the uncertainty in order to derive a prediction rule. Here, we choose the canonical MED approach, namely the expectation operator, which is linear and has shown promise in [18, 19], rather than the log-marginalized-likelihood ratio approach [5], which requires an extra likelihood model. Hence, substituting the discriminant function (6) into (4), we have the prediction rule  Ybij = sign Ep [Ui Vj> ] . (7) Then following the principle of MED learning, we define probabilistic max-margin matrix factorization (PM3 F) as solving the following optimization problem min p(U,V ) KL(p(U, V )kp0 (U, V )) + C X ij?I   h` Yij Ep [Ui Vj> ] . (8) Note that our probabilistic formulation is strictly more general than the original M3 F model, which is in fact a special case of PM3 F under a standard Gaussian prior Q and a mean-field assumption Q on p(U, V ). Specifically, if we assume p0 (U, V ) = i N (Ui |0, I) j N (Vj |0, I) and p(U, V ) = Q Q p(U )p(V ), then one can prove p(U ) = i N (Ui |?i , I), p(V ) = j N (Vj |?j , I) and PM3 F reduces accordingly to a M3 F problem (3), namely min ?,?  X  1 (k?k2F + k?k2F ) + C h` Yij ?i ?> . j 2 ij?I (9) Ratings: For ordinal ratings Yij ? {1, 2, . . . , L}, we use the same strategy as in [14] to define the loss function. Specifically, we introduce thresholds ?0 ? ?1 ? ? ? ? ? ?L , where ?0 = ?? and ?L = +?, to discretize R into L intervals. 
The prediction rule is changed accordingly to

$$\hat{Y}_{ij} = \max\left\{ r \;\middle|\; \mathbb{E}_p[U_i V_j^\top] \ge \theta_r \right\} + 1. \qquad (10)$$

In a hard-margin setting, we would require that

$$\theta_{Y_{ij}-1} + \ell \;\le\; \mathbb{E}_p[U_i V_j^\top] \;\le\; \theta_{Y_{ij}} - \ell. \qquad (11)$$

In a soft-margin setting, we define the loss as

$$\sum_{ij \in I} \left[ \sum_{r=1}^{Y_{ij}-1} h_\ell\!\left(\mathbb{E}_p[U_i V_j^\top] - \theta_r\right) + \sum_{r=Y_{ij}}^{L-1} h_\ell\!\left(\theta_r - \mathbb{E}_p[U_i V_j^\top]\right) \right] = \sum_{ij \in I} \sum_{r=1}^{L-1} h_\ell\!\left(T_{ij}^r \left(\theta_r - \mathbb{E}_p[U_i V_j^\top]\right)\right), \qquad (12)$$

where $T_{ij}^r = +1$ for $r \ge Y_{ij}$ and $T_{ij}^r = -1$ for $r < Y_{ij}$. The loss thus defined is an upper bound on the sum of absolute differences between the predicted and true ratings, a loss measure closely related to the Normalized Mean Absolute Error (NMAE) [7, 14]. Furthermore, we can learn a more flexible model that captures users' diverse rating criteria by replacing the user-common thresholds $\theta_r$ in the prediction rule (10) and the loss (12) with user-specific ones $\theta_i^r$. Finally, we may as well treat the additionally introduced thresholds $\theta_i^r$ as random variables and infer their posterior distribution, hereby giving the full PM3F model as solving

$$\min_{p(U,V,\theta)} \; \mathrm{KL}(p(U,V,\theta) \,\|\, p_0(U,V,\theta)) + C \sum_{ij \in I} \sum_{r=1}^{L-1} h_\ell\!\left(T_{ij}^r \left(\mathbb{E}_p[\theta_i^r] - \mathbb{E}_p[U_i V_j^\top]\right)\right). \qquad (13)$$

3.3 Infinite PM3F (iPM3F)

As we have stated, one common problem with finite factor-based models, including PM3F, is that we need to explicitly select the number of latent factors, $K$. In this section, we present an infinite PM3F model which, through Bayesian nonparametric techniques, automatically adapts and selects the number of latent factors during learning. Without loss of generality, we consider learning a binary coefficient matrix $Z \in \{0,1\}^{N \times \infty}$ (learning real-valued coefficients can easily be done as in [3] by defining $U = Z \odot W$, where $W$ is a real-valued matrix and $\odot$ denotes the Hadamard, or element-wise, product). For finite-sized binary matrices, we may define their prior via a Beta-Bernoulli process [8], while in the infinite case we allow $Z$ to have an infinite number of columns. Similar to the nonparametric matrix factorization model [17], we adopt the IBP prior over unbounded binary matrices as established in [3] and, furthermore, we focus on its stick-breaking construction [15], which facilitates the development of efficient inference algorithms. Specifically, let $\pi_k \in (0,1)$ be a parameter associated with each column of $Z$ (with respect to its left-ordered equivalence class). Then the IBP prior can be described by the following generative process:

$$Z_{ik} \sim \mathrm{Bernoulli}(\pi_k) \quad \text{i.i.d. for } i = 1, \ldots, N \;(\forall k), \qquad (14)$$

$$\pi_1 = \nu_1, \qquad \pi_k = \nu_k \pi_{k-1} = \prod_{i=1}^{k} \nu_i, \quad \text{where } \nu_i \sim \mathrm{Beta}(\alpha, 1) \text{ i.i.d. for } i = 1, \ldots, +\infty. \qquad (15)$$

This process results in a descending sequence of $\pi_k$. Specifically, given a finite data set ($N < +\infty$), the probability of seeing the $k$th factor decreases exponentially with $k$, and the number of active factors $K_+$ follows a $\mathrm{Poisson}(\alpha H_N)$ distribution, where $H_N$ is the $N$th harmonic number. Alternatively, we could use a Beta process prior over $Z$ as in [9]. As for the counterpart, we place an isotropic Gaussian prior over the item factor matrix $V$. With the priors specified, we may follow the above probabilistic framework to perform max-margin training, with $U$ replaced by $Z$. In summary, the stick-breaking construction for the IBP prior results in an augmented iPM3F problem for binary data,

$$\min_{p(\nu, Z, V)} \; \mathrm{KL}(p(\nu, Z, V) \,\|\, p_0(\nu, Z, V)) + C \sum_{ij \in I} h_\ell\!\left(Y_{ij}\, \mathbb{E}_p[Z_i V_j^\top]\right), \qquad (16)$$

where $p_0(\nu, Z, V) = p_0(\nu)\, p_0(Z \mid \nu)\, p_0(V)$ with

$\nu_k \sim \mathrm{Beta}(\alpha, 1)$ i.i.d. for $k = 1, \ldots, +\infty$,
$Z_{ik} \mid \nu \sim \mathrm{Bernoulli}(\pi_k)$ i.i.d. for $i = 1, \ldots, N \;(\forall k)$,
$V_{jk} \sim \mathcal{N}(0, \sigma^2)$ i.i.d. for $j = 1, \ldots, M$, $k = 1, \ldots, +\infty$.
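To make the stick-breaking construction (14)-(15) concrete, here is a small self-contained sketch that samples a truncated $Z$ from the IBP prior; the truncation level, $\alpha$, and $N$ below are illustrative choices, not values from the paper.

```python
import numpy as np

def sample_ibp_stick_breaking(n_rows, alpha, truncation, rng):
    """Sample Z ~ IBP(alpha) via the stick-breaking construction (14)-(15),
    truncated at `truncation` columns: nu_k ~ Beta(alpha, 1),
    pi_k = prod_{i<=k} nu_i, and Z_ik ~ Bernoulli(pi_k)."""
    nu = rng.beta(alpha, 1.0, size=truncation)
    pi = np.cumprod(nu)                        # descending column probabilities
    Z = rng.random((n_rows, truncation)) < pi  # broadcast Bernoulli draws
    return Z.astype(int), pi

rng = np.random.default_rng(0)
Z, pi = sample_ibp_stick_breaking(n_rows=50, alpha=3.0, truncation=100, rng=rng)
# K+ ~ Poisson(alpha * H_N); for N = 50, alpha * H_50 ≈ 3 * 4.5 ≈ 13.5 active columns.
print(Z.any(axis=0).sum())
```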
For ordinal ratings, we augment the iPM3F problem from (13) likewise and, apart from adopting the same prior assumptions for $\nu$, $Z$ and $V$, assume $p_0(\theta) = p_0(\theta \mid \nu, Z, V)$ with

$\theta_i^r \sim \mathcal{N}(\vartheta_r, \sigma_\theta^2)$ i.i.d. for $i = 1, \ldots, N$, $r = 1, \ldots, L-1$,

where $\vartheta_1 < \cdots < \vartheta_{L-1}$ are specified as prior guidance towards an ascending sequence of large-margin thresholds.

3.4 The fully Bayesian model (iBPM3F)

To take iPM3F one step further towards a Bayesian-style model, we introduce priors for the hyperparameters and perform fully Bayesian inference [12], where model parameters and hyperparameters are integrated out when making predictions. This approach fits naturally in our MED-based model thanks to the adoption of the expectation operator in the prediction rules (7) and (10). Another observation is that the hyperparameter $\alpha$ in a way serves the same role as the regularization constant $C$, and thus we also try simplifying the model by omitting $C$ in iBPM3F. We admit, however, that no matter how many levels of hyperparameters are stacked, treated as stochastic, and integrated out, there always remains a gap between our model and a canonical Bayesian one, since we reject a likelihood. We believe the connection is better justified under the general regularized Bayesian inference framework [19] with a trivial non-informative likelihood. Here we use a Gaussian-Wishart prior over the hyperparameters $\mu$ and $\Lambda$ of the latent factor matrix $V$, thus yielding a doubly augmented problem for binary data,

$$\min_{p(\nu, Z, \mu, \Lambda, V)} \; \mathrm{KL}(p(\nu, Z, \mu, \Lambda, V) \,\|\, p_0(\nu, Z, \mu, \Lambda, V)) + \sum_{ij \in I} h_\ell\!\left(Y_{ij}\, \mathbb{E}_p[Z_i V_j^\top]\right), \qquad (17)$$

where we have omitted the regularization constant $C$ and set $p_0(\nu, Z, \mu, \Lambda, V)$ to factorize as $p_0(\nu)\, p_0(Z \mid \nu)\, p_0(\mu, \Lambda)\, p_0(V \mid \mu, \Lambda)$, with $\nu$ and $Z$ enjoying the same priors as in iPM3F and

$(\mu, \Lambda) \sim \mathcal{GW}(\mu_0, \beta_0, W_0, \nu_0) = \mathcal{N}(\mu \mid \mu_0, (\beta_0 \Lambda)^{-1})\, \mathcal{W}(\Lambda \mid W_0, \nu_0)$,
$V_j \mid \mu, \Lambda \sim \mathcal{N}(V_j \mid \mu, \Lambda^{-1})$ i.i.d. for $j = 1, \ldots, M$.

Note that exactly the same procedure applies to the full model for ordinal ratings.

4 Learning and inference under truncated mean-field assumptions

Now we briefly discuss how to perform learning and inference in iPM3F; for iBPM3F, similar procedures apply. We defer the details to Appendix D to save space. Specifically, we introduce a simple variational inference method to approximate the optimal posterior, which turns out to perform well in practice. We make the following truncated mean-field assumption

$$p(\nu, Z, V) = p(\nu)\, p(Z)\, p(V) = \prod_{k=1}^{K} p(\nu_k) \prod_{i=1}^{N} \prod_{k=1}^{K} p(Z_{ik}) \; p(V), \qquad (18)$$

where $K$ is the truncation level and

$$\nu_k \sim \mathrm{Beta}(\gamma_{k1}, \gamma_{k2}) \quad \text{i.i.d. for } k = 1, \ldots, K, \qquad (19)$$
$$Z_{ik} \sim \mathrm{Bernoulli}(\psi_{ik}) \quad \text{i.i.d. for } i = 1, \ldots, N,\; k = 1, \ldots, K. \qquad (20)$$

Note that we make no further assumption on the functional form of $p(V)$, and that we factorize $p(Z)$ into element-wise independent $p(Z_{ik})$ parameterized by $\mathrm{Bernoulli}(\psi_{ik})$ merely for simpler notation in the subsequent derivation; it can actually be shown that $p(Z)$ enjoys all these properties under the mildest truncated mean-field assumption $p(\nu, Z, V) = p(\nu)\, p(Z)\, p(V)$. For ordinal ratings, we make an additional mean-field assumption

$$p(\nu, Z, V, \theta) = p(\nu, Z, V)\, p(\theta), \qquad (21)$$

where $p(\nu, Z, V)$ is treated exactly as for binary data and $p(\theta)$ is left in free form.
One noteworthy point is that, given $p(Z)$, we may calculate the expected posterior effective dimensionality of the latent factor space as

$$\mathbb{E}_p[K_+] = \sum_{k=1}^{K} \left( 1 - \prod_{i=1}^{N} (1 - \psi_{ik}) \right). \qquad (22)$$

The problem can then be solved using an iterative procedure that alternates between optimizing one component at a time, as outlined below (we defer the details to Appendix D).

Infer $p(V)$: The linear discriminant function and the isotropic Gaussian prior on $V$ lead to an isotropic Gaussian posterior $p(V) = \prod_{j=1}^{M} \mathcal{N}(V_j \mid \phi_j, \sigma^2 I)$, while the $M$ mean vectors $\phi_j$ can be obtained by solving $M$ independent binary SVMs

$$\min_{\phi_j} \; \frac{1}{2\sigma^2} \|\phi_j\|^2 + C \sum_{i:\, ij \in I} h_\ell\!\left(Y_{ij}\, \phi_j \psi_i^\top\right). \qquad (23)$$

Infer $p(\nu)$ and $p(Z)$: Since $\nu$ is marginalized out before exerting any influence on the loss term, its update is independent of the loss, and hence we adopt the same update rules as in [2]. The subproblem on $p(Z)$ decomposes into $N$ independent convex optimization problems, one for each $\psi_i$:

$$\min_{\psi_i} \; \sum_{k=1}^{K} \left( \mathbb{E}_Z[\log p(Z_{ik})] - \mathbb{E}_{\nu,Z}[\log p_0(Z_{ik} \mid \nu)] \right) + C \sum_{j:\, ij \in I} h_\ell\!\left(Y_{ij}\, \psi_i \phi_j^\top\right), \qquad (24)$$

where $\mathbb{E}_Z[\log p(Z_{ik})] = \psi_{ik} \log \psi_{ik} + (1-\psi_{ik}) \log(1-\psi_{ik})$, $\mathbb{E}_{\nu,Z}[\log p_0(Z_{ik} \mid \nu)] = \psi_{ik} \sum_{j=1}^{k} \mathbb{E}_\nu[\log \nu_j] + (1-\psi_{ik})\, \mathbb{E}_\nu[\log(1 - \prod_{j=1}^{k} \nu_j)]$, and $\mathbb{E}_\nu[\log \nu_j] = \Psi(\gamma_{j1}) - \Psi(\gamma_{j1} + \gamma_{j2})$ with $\Psi(\cdot)$ the digamma function, while $\mathbb{E}_\nu[\log(1 - \prod_{j=1}^{k} \nu_j)]$ is approximated by the multivariate lower bound $\mathcal{L}_k$ as in [2]. We could use a subgradient technique similar to [19] to approximately solve for $\psi_i$. Here we introduce an alternative solution, which is just as efficient and guarantees convergence as the iterations go on: we update $\psi_i$ via coordinate descent, with each conditionally optimal $\psi_{ik}$ found by binary search (see Appendix D.1.3 for details).

Infer $p(\theta)$: $p(\theta)$ remains an isotropic Gaussian, $p(\theta) = \prod_{i=1}^{N} \prod_{r=1}^{L-1} \mathcal{N}(\theta_i^r \mid \varrho_i^r, \sigma_\theta^2)$, and the mean $\varrho_i^r$ of each component is the solution to the corresponding subproblem

$$\min_{\varrho_i^r} \; \frac{1}{2\sigma_\theta^2} \left(\varrho_i^r - \vartheta_r\right)^2 + C \sum_{j:\, ij \in I} h_\ell\!\left(T_{ij}^r \left(\varrho_i^r - \psi_i \phi_j^\top\right)\right), \qquad (25)$$

to which the binary search solver for each $\psi_{ik}$ also applies. Note that as $\sigma_\theta \to +\infty$, the Gaussian prior regresses to a uniform distribution and problem (25) reduces accordingly to the corresponding conditional subproblem for $\theta$ in the original M3F (Appendix B.3).

5 Experiments and discussions

We conduct experiments on the MovieLens 1M and EachMovie data sets, and compare our results with fast M3F [11] and two probabilistic matrix factorization methods, PMF [13] and BPMF [12].

Data sets: The MovieLens data set contains 1,000,209 anonymous ratings (ranging from 1 to 5) of 3,952 movies made by 6,040 users; 3,706 of the movies are actually rated, and every user has at least 20 ratings. The EachMovie data set contains 2,811,983 ratings of 1,628 movies made by 72,916 users; 1,623 of the movies are actually rated, and 36,656 users have at least 20 ratings. As in [7, 11], we discarded users with fewer than 20 ratings, leaving us with 2,579,985 ratings. There are 6 possible rating values, {0, 0.2, ..., 1}, which we mapped to {1, 2, ..., 6}.

Protocol: As in [7, 11], we test our method in a pure collaborative prediction setting, neglecting any external information other than the user-item-rating triplets in the data sets. We also adopt the all-but-one protocol to partition the data into a training set and a test set: we randomly withhold one of the observed ratings of each user for the test set and use the rest for training. A validation set, when needed, is constructed likewise from the training set.
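As a small illustration of the all-but-one protocol just described, the following sketch withholds one observed rating per user; the data layout (a dict of per-user rating lists) is a hypothetical convenience, not the format used by the authors.

```python
import random

def all_but_one_split(ratings_by_user, seed=0):
    """ratings_by_user: {user_id: [(item_id, rating), ...]} with >= 2 ratings each.
    Returns (train, test) where test holds exactly one withheld rating per user."""
    rng = random.Random(seed)
    train, test = {}, {}
    for user, ratings in ratings_by_user.items():
        held_out = rng.randrange(len(ratings))
        test[user] = [ratings[held_out]]
        train[user] = ratings[:held_out] + ratings[held_out + 1:]
    return train, test
```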
Also as described in [7], we consider both weak and strong generalization. For weak generalization, the training ratings of all users are always available, so a single-stage training process suffices; for strong generalization, training is first carried out on a subset of users, and then, keeping the learned latent factor matrix $V$ fixed, we train the model a second time on the remaining users to obtain their user profiles (coefficients $Z$ and thresholds $\theta$) and perform prediction on these users only. We partition the users accordingly as in [7, 11], namely 5,000 and 1,040 users for weak and strong generalization respectively in MovieLens, and 30,000 and 6,565 in EachMovie. We repeat the random partition three times. We compute the Normalized Mean Absolute Error (NMAE) as the error measure and report the averaged performance (note that M3F models output discretized ordinal ratings while PMF models output real-valued ratings).

Implementation details: We perform cross-validation to choose the best regularization constant $C$ for iPM3F, as well as to guide early stopping during the learning process. The candidate $C$ values are the same 11 values, log-evenly distributed between $0.1^{3/4}$ and $0.1^2$, as in [11]. We set the truncation level $K = 100$ (and the same number of factors for the M3F and PMF models), $\alpha = 3$, $\sigma = 1$, and $\sigma_\theta = 1.5\ell$; $\vartheta_1, \ldots, \vartheta_{L-1}$ are set to be symmetric with respect to 0, with a step size of $2\ell$; and we set the margin parameter $\ell = 9$. Although M3F is invariant to $\ell$ (Appendix B.4), we find that setting $\ell = 9$ achieves a good balance between performance and training time (Figure 1); we believe the difference is largely attributable to the uniform convergence criterion we use when solving the SVM subproblems. Finally, for iBPM3F, we find that although removing $C$ achieves results competitive with iPM3F, keeping $C$ produces even better performance. Hence we learn iBPM3F using the $C$ selected for iPM3F.

Table 1: NMAE performance of different models on MovieLens and EachMovie.

Algorithm   MovieLens (weak)   MovieLens (strong)   EachMovie (weak)   EachMovie (strong)
M3F [11]    .4156 ± .0037      .4203 ± .0138        .4397 ± .0006      .4341 ± .0025
PMF [13]    .4332 ± .0033      .4413 ± .0074        .4466 ± .0016      .4579 ± .0016
BPMF [12]   .4235 ± .0023      .4450 ± .0085        .4352 ± .0014      .4445 ± .0005
M3F*        .4176 ± .0016      .4227 ± .0072        .4348 ± .0023      .4301 ± .0034
iPM3F       .4031 ± .0030      .4135 ± .0109        .4211 ± .0019      .4224 ± .0051
iBPM3F      .4050 ± .0029      .4089 ± .0146        .4268 ± .0029      .4403 ± .0040

5.1 Experimental results

Table 1 presents the NMAE performance of the different models, where the performance of M3F is cited from the corresponding paper [11] and represents the state of the art. We observe that iPM3F significantly outperforms M3F, PMF and BPMF in terms of the NMAE error measure on both data sets for both settings. Moreover, we find that the fully Bayesian formulation, iBPM3F, achieves performance comparable to iPM3F in most cases, and that our coordinate descent algorithm for M3F (denoted M3F*) performs quite similarly to the original gradient descent algorithm for M3F. In summary, the effect of endowing M3F models with a probabilistic formulation is intriguing: not only is the performance of the model largely improved, but with the help of Bayesian nonparametric techniques, the effort of selecting the number of latent factors is saved as well.
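For reference, a minimal sketch of the NMAE measure used above; the normalization constant follows the common convention of dividing the MAE by the expected error of uniformly random guessing (1.6 for a 1-to-5 scale, as in Marlin [7]), so treat the exact constant for other scales as an assumption.

```python
import numpy as np

def nmae(y_true, y_pred, norm=1.6):
    """Normalized MAE; `norm` is the expected MAE of uniformly random predictions
    (1.6 for ratings on a 1..5 scale: E|X - Y| for X, Y uniform on {1,...,5})."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / norm
```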
Another observation from Table 1 is that almost all models perform worse on EachMovie than on MovieLens. A closer investigation finds that the EachMovie data set has a special rating: when a user has rated an item as zero stars, he might either express a genuine dislike or, when the weight of the rating is less than 1, indicate that he never plans to see that movie since it just "sounds awful". Ideally, we should treat such a declaration as less authoritative than a regular zero-star rating and hence omit it from the data set. We have tried this setting by removing these special ratings (after discarding users with fewer than 20 normal ratings, we are left with 35,281 users and 2,315,060 ratings). Table 2 presents the NMAE results of the different models. Again, the coordinate descent M3F* performs comparably with fast M3F; iPM3F performs better than all the other methods; and iBPM3F performs comparably with iPM3F.

Table 2: NMAE on the purged EachMovie.

Algorithm   weak            strong
M3F [11]    .4009 ± .0012   .4028 ± .0064
PMF [13]    .4153 ± .0016   .4329 ± .0059
BPMF [12]   .4021 ± .0011   .4119 ± .0062
M3F*        .4059 ± .0012   .4095 ± .0052
iPM3F       .3954 ± .0026   .3977 ± .0034
iBPM3F      .3982 ± .0021   .4026 ± .0067

Figure 1: Influence of $\ell$ on M3F (NMAE and average time per iteration against the margin parameter $\ell$). We fixed $\ell = 9$ across the experiments.
Figure 2: Objective values during the training of iPM3F on MovieLens 1M (three random partitions).
Figure 3: NMAE during the training of iPM3F on MovieLens 1M (three random partitions).

5.2 Closer analysis of iPM3F

The posterior dimensionality: As indicated in Eq. (22), we may calculate the expected effective dimensionality $K_+$ of the latent factor space to get a rough sense of how the iPM3F model automatically chooses the latent dimensionality. Since we take $\alpha = 3$ in the IBP prior (15) and $N \approx 10^4$, the expected prior dimensionality $\alpha H_N$ is about 30. We find that when the truncation level $K$ is set small, e.g., 60 or 80, the expected posterior dimensionality very quickly saturates, often within the first few iterations, while for sufficiently large $K$, e.g., 150 or 200, iPM3F tends to output a sparse $Z$ of expected dimensionality around 135 or 110 respectively. (For each truncation level, we rerun our model and perform cross-validation to select the best regularization constant $C$.) This interesting observation verifies our model's capability of automatic model complexity control.

Stability: As Figures 2 and 3 show, iPM3F performs quite stably across 3 different randomly partitioned subsets. iBPM3F exhibits a similar trait, but its test performance does not keep dropping as the objective value decreases. Therefore, we use a validation set to guide early stopping during the learning process, terminating when the validation error starts to rebound.
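Relatedly, the expected effective dimensionality of Eq. (22) used in the analysis above is a one-line computation given the variational Bernoulli parameters $\psi$; the matrix below is a hypothetical stand-in.

```python
import numpy as np

def expected_active_factors(psi):
    """Eq. (22): E[K+] = sum_k (1 - prod_i (1 - psi_ik)), where psi is the
    N x K matrix of variational Bernoulli parameters for Z."""
    return np.sum(1.0 - np.prod(1.0 - psi, axis=0))

psi = np.random.default_rng(0).uniform(0.0, 0.05, size=(6040, 100))  # hypothetical
print(expected_active_factors(psi))
```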
Treating thresholds $\theta$: When predicting ordinal ratings, the introduced thresholds $\theta$ are very important since they underpin the large-margin principle of max-margin matrix factorization models. Nevertheless, without a proper probabilistic treatment, the subproblems on the thresholds (25) are not strictly convex, very often giving rise to a whole interval of candidate thresholds that are "equally optimal" for the solution. Under our probabilistic model, however, we can easily get rid of this non-strict convexity by introducing a Gaussian prior for them, as stated in Section 3.3. We compare the performance of iPM3F with and without the probabilistic treatment of $\theta$ and, as shown in Table 3, the improvement is outstanding.

Table 3: Performance of iPM3F with and without the probabilistic treatment of $\theta$.

Data set           w/ prob.        w/o prob.       margin
MovieLens          .4031 ± .0030   .4056 ± .0043   .0024 ± .0013
EachMovie          .4211 ± .0019   .4256 ± .0011   .0045 ± .0016
purged EachMovie   .3954 ± .0026   .4026 ± .0023   .0072 ± .0045

Finally, Table 4 presents the running time of the various models on both data sets. For M3F, the original paper [11] reported about 5h on MovieLens with a standard 3.06 GHz Pentium 4 CPU and about 15h on EachMovie, which is fairly acceptable for factorizing a matrix with millions of entries. Our current implementations of M3F* and iPM3F consume about 4.5h and 10h on MovieLens and EachMovie respectively with a 3.00 GHz Core i5 CPU. A closer investigation reveals that most of the running time is spent on learning $U$ (or $Z$) and $V$ in PM3F models, which breaks down into a set of SVM optimization problems; more efficient SVM solvers can be immediately applied to further improve the efficiency. Furthermore, the blockwise coordinate descent algorithm can naturally be parallelized, since the subproblems of learning the different $U_i$ (or $V_j$) are not coupled. We leave this improvement to future work.

Table 4: Running time of different models.

Algorithm   MovieLens   EachMovie   Iters
M3F [11]    ~5h         ~15h        100
PMF [13]    8.7m        25m         50
BPMF [12]   19m         1h          50
M3F*        4h          10h         50
  U, V      3.8h        9.5h
  θ         125s        750s
iPM3F       4.6h        5.5h        50
  V         4.3h        4.3h
  θ         18m         1h

6 Conclusions

We have presented an infinite probabilistic max-margin matrix factorization method, which uses nonparametric Bayesian techniques to bypass the model selection problem of max-margin matrix factorization methods. We have also developed efficient blockwise coordinate descent algorithms for variational inference and performed extensive evaluation on two large benchmark data sets. Empirical results demonstrate appealing performance.

Acknowledgments

This work is supported by the National Basic Research Program (973 Program) of China (Nos. 2013CB329403, 2012CB316301), the National Natural Science Foundation of China (Nos. 91120011, 61273023), and the Tsinghua University Initiative Scientific Research Program (No. 20121088071).

References

[1] N. Ding, Y. Qi, R. Xiang, I. Molloy, and N. Li. Nonparametric Bayesian matrix factorization by power-EP. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010.
[2] F. Doshi-Velez, K. Miller, J. Van Gael, and Y. W. Teh. Variational inference for the Indian buffet process. Journal of Machine Learning Research, 5:137-144, 2009.
[3] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. 2005.
[4] T. Jaakkola, M. Meila, and T. Jebara. Maximum entropy discrimination. In Advances in Neural Information Processing Systems, 1999.
[5] T. Jebara. Discriminative, generative and imitative learning. PhD thesis, 2002.
[6] T. Joachims, T. Finley, and C. N. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27-59, 2009.
[7] B. Marlin and R. S. Zemel. The multiple multiplicative factor model for collaborative filtering. In Proceedings of the 21st International Conference on Machine Learning, 2004.
[8] E. Meeds, Z. Ghahramani, R. Neal, and S. Roweis. Modeling dyadic data with binary latent factors. In Advances in Neural Information Processing Systems, 2007.
[9] J. Paisley and L. Carin. Nonparametric factor analysis with Beta process priors. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[10] I. Porteous, A. Asuncion, and M. Welling. Bayesian matrix factorization with side information and Dirichlet process mixtures. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010.
[11] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[12] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[13] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, 2008.
[14] N. Srebro, J. D. M. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems, 2005.
[15] Y. W. Teh, D. Gorur, and Z. Ghahramani. Stick-breaking construction of the Indian buffet process. In Proceedings of the 21st AAAI Conference on Artificial Intelligence, 2007.
[16] A. Rakotomamonjy, F. Bach, S. Canu, Y. Grandvalet, et al. SimpleMKL. Journal of Machine Learning Research, 9:2491-2521, 2008.
[17] F. Wood and T. L. Griffiths. Particle filtering for nonparametric Bayesian matrix factorization. In Advances in Neural Information Processing Systems, 2007.
[18] J. Zhu, A. Ahmed, and E. P. Xing. MedLDA: Maximum margin supervised topic models for regression and classification. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[19] J. Zhu, N. Chen, and E. P. Xing. Infinite latent SVM for classification and multi-task learning. In Advances in Neural Information Processing Systems, 2011.
[20] J. Zhu, N. Chen, and E. P. Xing. Infinite SVM: a Dirichlet process mixture of large-margin kernel machines. In Proceedings of the 28th International Conference on Machine Learning, 2011.
[21] J. Zhu and E. P. Xing. Maximum entropy discrimination Markov networks. Journal of Machine Learning Research, 10:2531-2569, 2009.
Learning Networks of Heterogeneous Influence

Nan Du, Le Song, Alex Smola, Ming Yuan
Georgia Institute of Technology, Google Research
[email protected], [email protected], [email protected], [email protected]

Abstract

Information, disease, and influence diffuse over networks of entities in both natural systems and human society. Analyzing these transmission networks plays an important role in understanding the diffusion processes and predicting future events. However, the underlying transmission networks are often hidden and incomplete, and we observe only the time stamps when cascades of events happen. In this paper, we address the challenging problem of uncovering the hidden network only from the cascades. The structure discovery problem is complicated by the fact that the influence between networked entities is heterogeneous, which cannot be described by a simple parametric model. Therefore, we propose a kernel-based method which can capture a diverse range of different types of influence without any prior assumption. On both synthetic and real cascade data, we show that our model can better recover the underlying diffusion network and drastically improve the estimation of the transmission functions among networked entities.

1 Introduction

Networks have been powerful abstractions for modeling a variety of natural and artificial systems that consist of a large collection of interacting entities. Due to the recent increasing availability of large-scale networks, network modeling and analysis have been extensively applied to study the spreading and diffusion of information, ideas, and even viruses in social and information networks (see, e.g., [17, 5, 18, 1, 2]). However, the process of influence and diffusion often occurs over a hidden network that cannot easily be observed and identified directly. For instance, when a disease spreads among people, epidemiologists can know only when a person gets sick, but they can hardly ever know where and from whom he (she) got infected. Similarly, when consumers rush to buy some particular product, marketers can know when purchases occurred, but they cannot trace where the recommendations originally came from [12]. In all such cases, we can observe only the time stamp when a piece of information has been received by a particular entity, but the exact path of diffusion is missing. Therefore, it is an interesting and challenging question whether we can uncover the diffusion paths based just on the time stamps of the events.

There are many recent studies on estimating correlation or causal structures from multivariate time-series data (see, e.g., [2, 6, 13]). However, in these models, time is treated as a discrete index and not modeled as a random variable. In the diffusion network discovery problem, time is treated explicitly as a continuous variable, and one is interested in capturing how the occurrence of an event at one node affects the time of its occurrence at other nodes. This problem has recently been explored by a number of studies in the literature. Specifically, Myers and Leskovec inferred the diffusion network by learning the infection probability between two nodes using convex programming, in a method called CONNIE [14]. Gomez-Rodriguez et al. inferred the network connectivity using submodular optimization, in a method called NETINF [4]. However, both CONNIE and NETINF assume that the transmission model for each pair of nodes is fixed with a predefined transmission rate. Recently, Gomez-Rodriguez et al.
proposed an elegant method, called NETRATE [3], which uses a continuous temporal dynamics model to allow variable diffusion rates across network edges. NETRATE makes fewer assumptions and achieves better performance in various respects than the previous two approaches. However, the limitation of NETRATE is that it requires the influence model on each edge to have a fixed parametric form, such as an exponential, power-law, or Rayleigh distribution, although the model parameters learned from the cascades can differ.

Figure 1: The histograms of the interval between the time when a post appeared in one site and the time when a new post in another site linked to it, for three pairs of media sites, (a) Pair 1, (b) Pair 2, (c) Pair 3. Dotted and dashed lines are densities fitted by NETRATE (exponential and Rayleigh); the solid lines are given by KernelCascade.

In practice, the patterns of information diffusion (or of a spreading disease) among entities can be quite complicated and differ from each other, going far beyond what a single family of parametric models can capture. For example, on Twitter, an active user can be online for more than 12 hours a day and may instantly respond to any interesting message, whereas an inactive user may log in and respond only once a day. As a result, the spreading pattern of messages between the active user and his friends can be quite different from that of the inactive user. Another example comes from information diffusion in the blogosphere: the hyperlinks between posts can be viewed as a kind of information flow from one media site to another, and the time difference between two linked posts reveals the pattern of diffusion. In Figure 1, we examine three pairs of media sites from the MemeTracker dataset [3, 9] and plot the histograms of the intervals between the moment when a post first appeared in one site and the moment when it was linked by a new post in another site. We can observe that information can have very different transmission patterns for these pairs. Parametric models fitted by NETRATE may capture the simple pattern in Figure 1(a), but they can miss the multimodal patterns in Figures 1(b) and 1(c). In contrast, our method, called KernelCascade, is able to fit both kinds of data accurately and can thus handle the heterogeneity.

In the remainder of this paper, we present the details of our approach, KernelCascade. Our key idea is to model the continuous information diffusion process using survival analysis by kernelizing the hazard function. We obtain a convex optimization problem with a grouped-lasso type of regularization and develop a fast block-coordinate descent algorithm for solving it. The sparsity pattern of the coefficients provides us with the structure of the diffusion network. On both synthetic and real world data, our method better recovers the underlying diffusion networks and drastically improves the estimation of the transmission functions among networked entities.

2 Preliminary

In this section, we present some basic concepts from survival analysis [7, 8], which are essential for our later modeling. Given a nonnegative random variable $T$ corresponding to the time when an event happens, let $f(t)$ be the probability density function of $T$ and $F(t) = \Pr(T \le t) = \int_0^t f(x)\,dx$ its cumulative distribution function. The probability that the event has not happened up to time $t$ is then given by the survival function $S(t) = \Pr(T \ge t) = 1 - F(t)$. The survival function is a continuous and monotonically decreasing function with $S(0) = 1$ and $S(\infty) = \lim_{t \to \infty} S(t) = 0$. Given $f(t)$ and $S(t)$, we can define the instantaneous risk (or rate) that an event which has not happened up to time $t$ happens at time $t$ by the hazard function

$$h(t) = \lim_{\Delta t \to 0} \frac{\Pr(t \le T \le t + \Delta t \mid T \ge t)}{\Delta t} = \frac{f(t)}{S(t)}. \qquad (1)$$

With this definition, $h(t)\Delta t$ is the approximate probability that the event happens in $[t, t + \Delta t)$ given that it has not happened up to $t$. Furthermore, the hazard function $h(t)$ is related to the survival function $S(t)$ via the differential equation $h(t) = -\frac{d}{dt} \log S(t)$, where we have used $f(t) = -S'(t)$. Solving this differential equation with the boundary condition $S(0) = 1$, we can recover the survival function $S(t)$ and the density function $f(t)$ from the hazard function $h(t)$, i.e.,

$$S(t) = \exp\left(-\int_0^t h(x)\,dx\right) \quad \text{and} \quad f(t) = h(t)\exp\left(-\int_0^t h(x)\,dx\right). \qquad (2)$$
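To illustrate the relations in (1)-(2), here is a small sketch that recovers $S(t)$ and $f(t)$ from an arbitrary hazard function by numerical integration; the example hazard is hypothetical.

```python
import numpy as np

def survival_and_density(hazard, t_grid):
    """Recover S(t) = exp(-∫_0^t h) and f(t) = h(t) S(t) from a hazard function (eq. 2),
    using trapezoidal cumulative integration on t_grid."""
    h = hazard(t_grid)
    dt = np.diff(t_grid)
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * dt)])
    S = np.exp(-cum)
    return S, h * S

t = np.linspace(0.0, 10.0, 1001)
S, f = survival_and_density(lambda t: 0.5 * t, t)
# Sanity check: for h(t) = t / sigma^2 with sigma^2 = 2, f is the Rayleigh density.
```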
Figure 2: Cascades over a hidden network: (a) hidden network, (b) node e gets infected at time t4, (c) node e survives. Solid lines in panel (a) represent connections in the hidden network. In panels (b) and (c), filled circles indicate infected nodes while empty circles represent uninfected ones. Nodes a, b, c and d are the parents of node e, which got infected at t0 < t1 < t2 < t3 respectively and tended to infect node e. In panel (b), node e survives the influence of nodes a, b and c (shown in green dashed lines) but is infected by node d. In panel (c), node e survives even though all its parents got infected.

3 Modeling Cascades using Survival Analysis

We use survival analysis to model information diffusion among networked entities. We largely follow the presentation of Gomez-Rodriguez et al. [3], adding clarification where necessary. We assume that there is a fixed population of $N$ nodes connected in a directed network $G = (V, E)$. Neighboring nodes are allowed to influence each other directly; nodes along a directed path may influence each other only through a diffusion process. Because the true underlying network is unknown, our observations are only the time stamps when events occur at each node in the network. The time stamps are then organized into cascades, each of which corresponds to a particular event. For instance, a piece of news posted on the CNN website about "Facebook went public" can be treated as an event. It can spread across the blogosphere and trigger a sequence of posts from other sites referring to it. Each site then has a time stamp recording when this particular piece of news was discussed and cited. The goal of the model is to capture the interplay between the hidden diffusion network and the cascades of observed event time stamps.

More formally, a directed edge $j \to i$ is associated with a transmission function $f_{ji}(t_i \mid t_j)$, which is the conditional likelihood of the event happening at node $i$ at time $t_i$ given that the same event has already happened at node $j$ at time $t_j$. The transmission function attempts to capture the temporal dependency between the two successive events at nodes $i$ and $j$. In addition, we focus on shift-invariant transmission functions whose value depends only on the time difference, i.e., $f_{ji}(t_i \mid t_j) = f_{ji}(t_i - t_j) = f_{ji}(\Delta_{ji})$, where $\Delta_{ji} := t_i - t_j$.
Given the likelihood function, we can compute the corresponding survival function $S_{ji}(\Delta_{ji})$ and hazard function $h_{ji}(\Delta_{ji})$. When there is no directed edge $j \to i$, the transmission function and hazard function are both identically zero, i.e., $f_{ji}(\Delta_{ji}) = 0$ and $h_{ji}(\Delta_{ji}) = 0$, while the survival function is identically one, i.e., $S_{ji}(\Delta_{ji}) = 1$. Therefore, the structure of the diffusion network is reflected in the nonzero patterns of the collection of transmission functions (or hazard functions).

A cascade is an $N$-dimensional vector $t^c := (t_1^c, \ldots, t_N^c)^\top$ whose $i$th dimension records the time stamp when event $c$ occurs at node $i$. Furthermore, $t_i^c \in [0, T^c] \cup \{\infty\}$, and the symbol $\infty$ labels nodes that have not been influenced during the observation window $[0, T^c]$ — it does not imply that those nodes are never influenced. The "clock" is set to 0 at the start of each cascade. A dataset is a collection, $C$, of cascades $\{t^1, \ldots, t^{|C|}\}$. The time stamps assigned to nodes by a cascade induce a directed acyclic graph (DAG) by defining node $j$ as a parent of $i$ if $t_j < t_i$. Thus, it is meaningful to refer to parents and children within a cascade [3]; this is different from the parent-child structural relation in the true underlying diffusion network. Since the true network is inferred from many cascades (each of which imposes its own DAG structure), the inferred network is typically not a DAG.

The likelihood $\ell(t^c)$ of a cascade induced by event $c$ is simply a product of the individual likelihoods $\ell_i(t^c)$ that event $c$ occurs at each node $i$. Depending on whether event $c$ actually occurred at node $i$ in the data, the individual likelihood is computed as follows.

Event $c$ did occur at node $i$. We assume that once the event occurs at node $i$ under the influence of a particular parent $j$ in a cascade, the same event will not happen again. In Figure 2(b), node e is susceptible given its parents a, b, c and d, but only node d is the first parent to infect node e. Because each parent could be equally likely to first influence node $i$, the likelihood is a simple sum over the likelihoods of the mutually disjoint events in which node $i$ survived the influence of all parents other than the first parent $j$:

$$\ell_i^+(t^c) = \sum_{j:\, t_j^c < t_i^c} f_{ji}(\Delta_{ji}^c) \prod_{k:\, k \ne j,\; t_k^c < t_i^c} S_{ki}(\Delta_{ki}^c) = \sum_{j:\, t_j^c < t_i^c} h_{ji}(\Delta_{ji}^c) \prod_{k:\, t_k^c < t_i^c} S_{ki}(\Delta_{ki}^c). \qquad (3)$$
We can therefore estimate the hazard and survival function for each node separately. Previously, Gomez-Rodriguez et al. [3] used parametric hazard and survival functions, and they estimated the model parameters using the convex programming. In contrast, we will instead formulate an algorithm using kernels and grouped parameter regularization, which allows us to estimate complicated hazard and survival functions without overfitting. 4 KernelCascade for Learning Diffusion Networks This section presents our kernel method for uncovering diffusion networks from cascades. Our key idea is to kernelize the hazard function used in the negative log-likelihood in (6), and then estimate the parameters using grouped lasso type of optimization. 4.1 Kernelizing survival analysis Kernel methods are powerful tools for generalizing classical linear learning approaches to analyze nonlinear relations. A kernel function, k : X ? X ? R, is a real-valued positive definite symmetric function iff. for any set of points {?1 , ?2 , . . . , ?m } ? X the kernel matrix K with entris Kls := k(?l , ?s ) is positive definite. We want to model heterogeneous transmission functions, fji (?ji ), from j to i. Rather than directly kernelizing the transmission function, we kernelize the hazard function instead, by assuming that it is a linear combination of m kernel functions, i.e., m X l hji (?ji ) = ?ji k(?l , ?ji ), (7) l=1 where we fix one argument of each kernel function, k(?l , ?), to a point ?l in a uniform grid of m locations in the range of (0, maxc T c ]. To achieve fully nonparametric modeling of the hazard function, we can let m grow as we see more cascades. Alternatively, we can also place a nonlinear basis function on each time point in the observed cascades. For efficiency consideration, we will use a fixed uniform grid in our later experiments. Since the hazard function is always positive, we use positive kernel functions and require the weights to be positive, i.e., k(?, ?) ? 0 l and ?ji ? 0 to capture such constraint. For simplicity of notation, we will define vectors ?ji := 1 m > (?ji , . . . , ?ji ) , and k(?ji ) := (k(?1 , ?ji ), . . . , k(?m , ?ji ))> . Hence, the hazard function can be written as hji (?ji ) = ?> ji k(?ji ). In addition, the survival function and likelihood function can also be kernelizedRbased on their ? respective relation with the hazard function in (2). More specifically, let gl (?ji ) := 0 ji k(?l , x)dx and the corresponding vector g(?ji ) := (g1 (?ji ), . . . , gm (?ji ))> . We then can derive    > Sji (?ji ) = exp ??> and fji (?ji ) = ?> (8) ji g(?ji ) ji k(?ji ) exp ??ji g(?ji ) . In the formulation, we need to perform integration over the kernel function to compute gl (?ji ). This can be done efficiently for many kernels, such as the Gaussian RBF kernel, the Laplacian kernel, the Quartic kernel, and the Triweight kernel. In later experiments, we mainly focus on the Gaussian 4 RBF kernel, k(?l , ?s ) = exp(?k?l ? ?s k2 /(2? 2 )), and derive a closed form solution for gl (?ji ) as ?      Z ?ji 2?? ?l ? ?ji ?l ? gl (?ji ) = k(?l , x) dx = erfc ? erfc ? , (9) 2 2? 2? 0 R 2 ? where erfc(t) := ?2? t e?x dx is the error function. Yet, our method is not limited to the particular RBF kernel. If there is no closed form solution for the one-dimensional integration, we can use a large number of available numerical integration methods for this purpose [15]. 
We note that given a dataset, both the vector k(?ji ) and g(?ji ) need to be computed only once as a preprocessing, and then can be reused in the algorithm. 4.2 Estimating sparse diffusion networks Next we plug in the kernelized hazard function and survival function into the likelihood of cascades in (6). Since the negative log likelihood is separable for each node i, we can optimize the set of variables {?ji }N j=1 separately. As a result, the negative log likelihood for the data associated with node i can be estimated as X X X X  c c ?> (10) log ?> Li {?ji }N ji k(?ji ). ji g(?ji ) ? j=1 = c c c j {c|tc <tc } c {tj <ti } {c|ti 6T } i j A desirable feature of this function is that it is convex in its arguments, {?ji }N j=1 , which allows us to bring various convex optimization tools to solve the problem efficiently. In addition, we want to induce a sparse network structure from the data and avoid overfitting. Basically, if the coefficients ?ji = 0, then there is no edge (or direct influence) from node j to i. For P this purpose, we will impose grouped lasso type of regularization on the coefficients ?ji , i.e., ( j k?ji k)2 [16, 19]. Grouped lasso type of regularization has the tendency to select a small number of groups of non-zero coefficients but push other groups of coefficients to be zero. Overall, the optimization problem trades off between the data likelihood term and the group sparsity of the coefficients 2 X  min Li {?ji }N k?ji k , s.t. ?ji ? 0, ?j, (11) j=1 + ? {?ji }N j=1 j where ? is the regularization parameter. After we obtain a sparse solution from the above optimization, we obtain partial network structures, each of which centers around a particular node i. We can then join all the partial structures together and obtain the overall diffusion network. The corresponding hazard function along each edge can also be obtained from (8). 4.3 Optimization We note that (11) is a nonsmooth optimization problem because of the regularizer. There are many ways to solve the optimization problem, and we will illustrate this using a simple algorithm originating from multiple kernel learning [16, 19]. In this approach, an additional set of variables are introduced to turn the P optimization problem into a smooth optimization problem. More specifically, let ?i ? 0 and j ?j = 1. Then using Cauchy-Schwartz inequality, we have P P P P P 1/2 1/2 2 2 ( j k?ji k)2 = ( j (k?ji k /?j )?j )2 ? ( j k?ji k /?j )( j ?j ) = j k?ji k /?j , where the equality holds when X ?j = k?ji k / k?ji k . (12) j With these additional variables, ?j , we can solve an alternative smooth optimization problem, which is jointly convex in both ?ji and ?j X X k?ji k2  min Li {?ji }N +? , s.t. ?ji ? 0, ?j ? 0, ?j = 1, ?j. (13) j=1 ?j {?ji ,?j }N j=1 j j There are many ways to solve the convex optimization problem in (13). In this paper, we used a block coordinate descent approach alternating between the optimization of ?ji and ?j . More specifically, when we fixed ?ji , we can obtain the best ?j using the closed form formula in (12); when we fixed ?j , we can optimize over ?ji using, e.g., a projected gradient method. The overall algorithm pseudocodes are given in Algorithm 1. Moreover, we can speed up the optimization in three ways. First, because the optimization is independent for each node i, the overall process can be easily parallelized into N separate sub-problems. Second, we can prune the possible nodes that were never infected before node i in any cascade where i was infected. 
5 Experimental Results

We evaluate KernelCascade on both realistic synthetic networks and real world networks. We compare it to NETINF [4] and NETRATE [3], and we show that KernelCascade can perform significantly better in terms of both recovering the network structures and recovering the transmission functions.

5.1 Synthetic Networks

Network generation. We first generate synthetic networks that mimic the structural properties of real networks. These synthetic networks can then be used to simulate information diffusion. Since the latent networks generating the cascades are known in advance, we can perform detailed comparisons between the methods. We use the Kronecker generator [10] to examine two types of networks with directed edges: (i) core-periphery structures [11], which mimic the information diffusion process in real world networks, and (ii) Erdős-Rényi random networks.

Influence function. To each edge $j \to i$ in a network $G$, we assign a mixture of two Rayleigh distributions,

$$f_{ji}(t \mid \pi, a_1, b_1, a_2, b_2) = \pi\, R_1(t \mid a_1, b_1) + (1 - \pi)\, R_2(t \mid a_2, b_2), \quad \text{where } R_i(t \mid a_i, b_i) = \frac{t - a_i}{b_i^2} \exp\!\left(-\frac{(t - a_i)^2}{2 b_i^2}\right),\ t > a_i,$$

and $\pi \in (0, 1)$ is a mixing proportion. We examine three different settings for the transmission functions: (1) all edges in $G$ have the same transmission function $p(t) = f(t \mid 0.5, 10, 1, 20, 1)$; (2) all edges in $G$ have the same transmission function $q(t) = f(t \mid 0.5, 0, 1, 20, 1)$; and (3) each edge in $G$ is uniformly randomly assigned either $p(t)$ or $q(t)$.

Cascade generation. Given a network $G$ and the collection of transmission functions $f_{ji}$ for each edge, we generate a cascade from $G$ by randomly choosing a node of $G$ as the root of the cascade. The root node $j$ is assigned time stamp $t_j = 0$. For each neighbor node $i$ pointed to by $j$, its event time $t_i$ is sampled from $f_{ji}(t)$. The diffusion process continues by further infecting the neighbors of newly infected nodes in a breadth-first fashion, until either the overall time exceeds the predefined observation window $T^c$ or no new node is infected. If a node is infected more than once by multiple parents, only the first infection time stamp is recorded.

Experiment setting and evaluation metrics. We consider the combination of the two network topologies (i)-(ii) with the three transmission function settings (1)-(3), which results in six different experimental settings. For each setting, we randomly instantiate the network topology and transmission functions 10 times and vary the number of cascades over 50, 100, 200, 400, 800 and 1000. For KernelCascade, we use a Gaussian RBF kernel; the kernel bandwidth $\sigma$ is chosen using the median pairwise distance between grid time points, and the regularization parameter is chosen using two-fold cross-validation.
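For concreteness, here is a sketch of the cascade-generation procedure described above; the shifted-Rayleigh inverse-CDF sampler and network representation are assumptions consistent with the description, not the authors' code, and the event-ordered simulation yields the same first-infection times as a breadth-first pass.

```python
import heapq
import numpy as np

def sample_shifted_rayleigh(a, b, rng):
    """Sample t > a with density ((t - a) / b^2) exp(-(t - a)^2 / (2 b^2))."""
    return a + b * np.sqrt(-2.0 * np.log(1.0 - rng.uniform()))

def generate_cascade(children, params, T, rng):
    """children[j]: nodes pointed to by j; params[(j, i)]: (pi, a1, b1, a2, b2).
    Returns {node: first infection time} within the observation window [0, T]."""
    root = int(rng.integers(len(children)))
    times = {root: 0.0}
    queue = [(0.0, root)]
    while queue:
        t_j, j = heapq.heappop(queue)
        for i in children[j]:
            pi, a1, b1, a2, b2 = params[(j, i)]
            a, b = (a1, b1) if rng.uniform() < pi else (a2, b2)
            t_i = t_j + sample_shifted_rayleigh(a, b, rng)
            if t_i <= T and t_i < times.get(i, np.inf):  # keep first infection only
                times[i] = t_i
                heapq.heappush(queue, (t_i, i))
    return times
```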
NETINF requires the desired number of edges as input; we give it an advantage by supplying the true number of edges. For NETRATE, we experiment with both exponential and Rayleigh transmission functions. We compare the different methods in terms of: (1) the $F_1$ score for network recovery, $F_1 := \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$, where precision is the fraction of edges in the inferred network that are also present in the true network and recall is the fraction of edges in the true network that are also present in the inferred network; (2) the KL divergence between the estimated transmission function and the true transmission function, averaged over all edges in the network; and (3) the shape of the fitted transmission function compared to the true transmission function.
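The edge-recovery metric above in a few lines; representing edge lists as sets of (src, dst) pairs is a hypothetical convenience.

```python
def edge_f1(true_edges, inferred_edges):
    """Precision/recall/F1 for directed-edge recovery."""
    true_edges, inferred_edges = set(true_edges), set(inferred_edges)
    tp = len(true_edges & inferred_edges)
    precision = tp / max(len(inferred_edges), 1)
    recall = tp / max(len(true_edges), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1
```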
We also observe that as we increase the number of cascades, KernelCascade adapts better to the actual transmission function. In contrast, the performance of NETRATE with the exponential model does not improve with an increasing number of cascades, since its parametric model assumption is incorrect. We note that NETINF does not recover the transmission function, and hence there is no corresponding curve in the plot.

Visualization of the transmission function. We also visualize the estimated transmission function for an edge from the different methods in Figure 5.

[Figure 5: Estimated transmission function of a single edge based on 1000 cascades against the true transmission function (blue curve). Panel (a): an edge with transmission function p(t); panel (b): an edge with transmission function q(t). Curves: KernelCascade, exp, rayleigh, data.]

[Figure 6: Estimated network of the top 32 sites: (a) KernelCascade, (b) NETINF, (c) NETRATE. Edges in grey are correctly uncovered, while edges highlighted in red are either missed or estimated falsely.]

We can see that KernelCascade captures the essential features of the true transmission function, i.e., its bi-modal behavior, while the competitor methods miss this important statistical feature completely.

5.2 Real-world dataset
Finally, we use the MemeTracker dataset [3] to compare NETINF, NETRATE, and KernelCascade. In this dataset, the hyperlinks between articles and posts can be used to represent the flow of information from one site to another. When a site publishes a new post, it puts hyperlinks to related posts published earlier by other sites as its sources. Later, as it also becomes "older", it will be cited by newer posts as well. As a consequence, all the time-stamped hyperlinks form a cascade for a particular piece of information (or event) flowing among different sites. The networks formed by these hyperlinks are used as the ground truth. We extracted a network consisting of the top 500 sites, with 6,466 edges and 11,530 cascades from 7,181,406 posts in a month, and we want to recover the underlying networks. From Table 1, we can see that KernelCascade achieves a much better F1 score for network recovery compared to the other methods. Finally, we visualize the estimated sub-network structure for the top 32 sites in Figure 6. By comparison, KernelCascade has relatively better performance, with fewer misses and false predictions.

Table 1: Network recovery results from the MemeTracker dataset.
    methods         precision   recall   F1     predicted edges
    NETINF          0.62        0.62     0.62   6466
    NETRATE(exp)    0.93        0.23     0.37   1600
    KernelCascade   0.79        0.66     0.72   5368

6 Conclusion
In this paper, we developed a flexible kernel method, called KernelCascade, to model latent diffusion processes and to infer the hidden network with heterogeneous influence between each pair of nodes. In contrast to the previous state of the art, such as NETRATE, NETINF and CONNIE, KernelCascade makes no restrictive assumption on the specific form of the transmission function over network edges. Instead, it can infer it automatically from the data, which allows each pair of nodes to have a different type of transmission model and better captures the heterogeneous influence among entities. We obtain an efficient algorithm and demonstrate experimentally that KernelCascade significantly outperforms the previous state of the art on both synthetic and real data.
In the future, we will explore the combination of kernel methods, sparsity-inducing norms, and other point processes to address a diverse range of social network problems.

Acknowledgement: L.S. is supported by NSF IIS-1218749 and startup funds from Gatech.

References
[1] M. De Choudhury, W. A. Mason, J. M. Hofman, and D. J. Watts. Inferring relevant social networks from interpersonal communication. In WWW, pages 301–310, 2010.
[2] N. Eagle, A. S. Pentland, and D. Lazer. From the cover: Inferring friendship network structure by using mobile phone data. Proceedings of the National Academy of Sciences, 106(36):15274–15278, Sept. 2009.
[3] M. Gomez-Rodriguez, D. Balduzzi, and B. Schölkopf. Uncovering the temporal dynamics of diffusion networks. In ICML, pages 561–568, 2011.
[4] M. Gomez-Rodriguez, J. Leskovec, and A. Krause. Inferring networks of diffusion and influence. In KDD, pages 1019–1028, 2010.
[5] D. Kempe, J. M. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In KDD, pages 137–146, 2003.
[6] M. Kolar, L. Song, A. Ahmed, and E. Xing. Estimating time-varying networks. The Annals of Applied Statistics, 4(1):94–123, 2010.
[7] J. F. Lawless. Statistical Models and Methods for Lifetime Data. Wiley-Interscience, 2002.
[8] E. T. Lee and J. Wang. Statistical Methods for Survival Data Analysis. Wiley-Interscience, Apr. 2003.
[9] J. Leskovec, L. Backstrom, and J. M. Kleinberg. Meme-tracking and the dynamics of the news cycle. In KDD, pages 497–506, 2009.
[10] J. Leskovec, D. Chakrabarti, J. M. Kleinberg, C. Faloutsos, and Z. Ghahramani. Kronecker graphs: An approach to modeling networks. Journal of Machine Learning Research, 11:985–1042, 2010.
[11] J. Leskovec, K. J. Lang, and M. W. Mahoney. Empirical comparison of algorithms for network community detection. In WWW, pages 631–640, 2010.
[12] J. Leskovec, A. Singh, and J. M. Kleinberg. Patterns of influence in a recommendation network. In PAKDD, pages 380–389, 2006.
[13] A. C. Lozano and V. Sindhwani. Block variable selection in multivariate regression and high-dimensional causal inference. In NIPS, pages 1486–1494, 2010.
[14] S. A. Myers and J. Leskovec. On the convexity of latent social network inference. In NIPS, pages 1741–1749, 2010.
[15] W. Press, S. Teukolsky, W. Vetterling, and B. Flannery. Numerical Recipes in C: The Art of Scientific Computing. Cambridge, 1992.
[16] A. Rakotomamonjy, F. Bach, S. Canu, Y. Grandvalet, et al. SimpleMKL. Journal of Machine Learning Research, 9:2491–2521, 2008.
[17] D. Watts and S. Strogatz. Collective dynamics of small-world networks. Nature, 393(6684):440–442, June 1998.
[18] D. J. Watts and P. S. Dodds. Influentials, networks, and public opinion formation. Journal of Consumer Research, 34(4):441–458, 2007.
[19] Z. Xu, R. Jin, H. Yang, I. King, and M. R. Lyu. Simple and efficient multiple kernel learning by group lasso. In ICML, pages 1175–1182, 2010.
Symmetric Correspondence Topic Models for Multilingual Text Analysis

Kosuke Fukumasu†  Koji Eguchi†  Eric P. Xing‡
† Graduate School of System Informatics, Kobe University, Kobe 657-8501, Japan
‡ School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
[email protected], [email protected], [email protected]

Abstract
Topic modeling is a widely used approach to analyzing large text collections. A small number of multilingual topic models have recently been explored to discover latent topics among parallel or comparable documents, such as in Wikipedia. Other topic models that were originally proposed for structured data are also applicable to multilingual documents. Correspondence Latent Dirichlet Allocation (CorrLDA) is one such model; however, it requires a pivot language to be specified in advance. We propose a new topic model, Symmetric Correspondence LDA (SymCorrLDA), that incorporates a hidden variable to control a pivot language, in an extension of CorrLDA. We experimented with two multilingual comparable datasets extracted from Wikipedia and demonstrate that SymCorrLDA is more effective than some other existing multilingual topic models.

1 Introduction
Topic models (also known as mixed-membership models) are a useful method for analyzing large text collections [1, 2]. In topic modeling, each document is represented as a mixture of topics, where each topic is represented as a word distribution. Latent Dirichlet Allocation (LDA) [2] is one of the well-known topic models. Most topic models assume that texts are monolingual; however, some can capture statistical dependencies between multiple classes of representations and can be used for multilingual parallel or comparable documents. Here, a parallel document is a merged document consisting of multiple language parts that are translations from one language to another, sometimes including sentence-to-sentence or word-to-word alignments. A comparable document is a merged document consisting of multiple language parts that are not translations of each other but instead describe similar concepts and events. Recently published multilingual topic models [3, 4], which are the equivalent of Conditionally Independent LDA (CI-LDA) [5, 6], can discover latent topics among parallel or comparable documents. SwitchLDA [6] was modeled by extending CI-LDA. It can control the proportions of languages in each multilingual topic. However, both CI-LDA and SwitchLDA preserve dependencies between languages only by sharing per-document multinomial distributions over latent topics, and accordingly the resulting dependencies are relatively weak. Correspondence LDA (CorrLDA) [7] is another type of topic model for structured data represented in multiple classes. It was originally proposed for annotated image data to simultaneously model words and visual features, and it can also be applied to parallel or comparable documents. In the modeling, it first generates topics for visual features in an annotated image. Then only the topics associated with the visual features in the image are used to generate words. In this sense, visual features can be said to be the pivot in modeling annotated image data. However, when CorrLDA is applied to multilingual documents, a language that plays the role of the pivot (a pivot language¹)
must be specified in advance.

[Footnote 1: Note that the term "pivot language" does not have exactly the same meaning as that commonly used in the machine translation community, where it means an intermediary language for translation between more than three languages.]

The pivot language selected is sensitive to the quality of the multilingual topics estimated with CorrLDA. For example, a translation of a Japanese book into English would presumably have a pivot to the Japanese book, but a set of international news stories would have pivots that differ based on the country an article is about. It is often difficult to appropriately select the pivot language. To address this problem, which we call the pivot problem, we propose a new topic model, Symmetric Correspondence LDA (SymCorrLDA), that incorporates a hidden variable to control the pivot language, in an extension of CorrLDA. Our SymCorrLDA addresses the problem of CorrLDA and can select an appropriate pivot language by inference from the data. We evaluate various multilingual topic models, i.e., CI-LDA, SwitchLDA, CorrLDA, and our SymCorrLDA, as well as LDA, using comparable articles in different languages (English, Japanese, and Spanish) extracted from Wikipedia. We first demonstrate through experiments that CorrLDA outperforms the other existing multilingual topic models mentioned, and then show that our SymCorrLDA works more effectively than CorrLDA in any case of selecting a pivot language.

2 Multilingual Topic Models with Multilingual Comparable Documents
Bilingual topic models for bilingual parallel documents that have word-to-word alignments have been developed, such as those by [8]. Their models are directed towards machine translation, where word-to-word alignments are involved in the generative process. In contrast, we focus on analyzing dependencies among languages by modeling multilingual comparable documents, each of which consists of multiple language parts that are not translations of each other but instead describe similar concepts and events. The target documents can be parallel documents, but word-to-word alignments are not taken into account in the topic modeling. Some other researchers explored different types of multilingual topic models that are based on the premise of using multilingual dictionaries or WordNet [9, 10, 11]. In contrast, CI-LDA and SwitchLDA only require multilingual comparable documents that can be easily obtained, such as from Wikipedia, when we use those models for multilingual text analysis. This is more similar to the motivation of this paper. Below, we introduce LDA-style topic models that handle multiple classes and can be applied to multilingual comparable documents for the above-mentioned purposes.

2.1 Conditionally Independent LDA (CI-LDA)
CI-LDA [5, 6] is an extension of the LDA model to handle multiple classes, such as words and citations in scientific articles. The CI-LDA framework was used to model multilingual parallel or comparable documents by [3] and [4]. Figure 1(b) shows a graphical model representation of CI-LDA for documents in L languages, and Figure 1(a) shows that of LDA for reference. D, T, and N_d^(ℓ) respectively indicate the number of documents, the number of topics, and the number of word tokens that appear in a specific language part of a document d. The superscript "(ℓ)" indicates the variables corresponding to a specific language part of a document d. For better understanding, we show below the process of generating a document according to the graphical model of the CI-LDA model.

1. For all D documents, sample θ_d ~ Dirichlet(α)
2. For all T topics and for all L languages, sample φ_t^(ℓ) ~ Dirichlet(β^(ℓ))
3. For each of the N_d^(ℓ) words w_i^(ℓ) in language ℓ (ℓ ∈ {1, ..., L}) of document d:
   a. Sample a topic z_i^(ℓ) ~ Multinomial(θ_d)
   b. Sample a word w_i^(ℓ) ~ Multinomial(φ^(ℓ)_{z_i^(ℓ)})

For example, when we deal with Japanese and English bilingual data, w^(1) and w^(2) are a Japanese and an English word, respectively. CI-LDA preserves dependencies between languages only by sharing the multinomial distributions with parameters θ_d. Accordingly, there are substantial chances that some topics are assigned only to a specific language part in each document, and the resulting dependencies are relatively weak.

2.2 SwitchLDA
Similarly to CI-LDA, SwitchLDA [6] can be applied to multilingual comparable documents. However, different from CI-LDA, SwitchLDA can adjust the proportions of multiple different languages for each topic, according to a binomial distribution for bilingual data or a multinomial distribution for data of more than three languages. Figure 1(c) depicts a graphical model representation of SwitchLDA for documents in L languages. The generative process is described below.

[Figure 1: Graphical model representations of (a) LDA, (b) CI-LDA, and (c) SwitchLDA]

1. For all D documents, sample θ_d ~ Dirichlet(α)
2. For all T topics:
   a. For all L languages, sample φ_t^(ℓ) ~ Dirichlet(β^(ℓ))
   b. Sample π_t ~ Dirichlet(ε)
3. For each of the N_d words w_i in document d:
   a. Sample a topic z_i ~ Multinomial(θ_d)
   b. Sample a language label s_i ~ Multinomial(π_{z_i})
   c. Sample a word w_i ~ Multinomial(φ^{(s_i)}_{z_i})

Here, π_t indicates a multinomial parameter that adjusts the proportions of the L different languages for topic t. If all components of the hyperparameter vector ε are large enough, SwitchLDA becomes equivalent to CI-LDA. SwitchLDA is an extension of CI-LDA that gives emphasis or de-emphasis to specific languages for each topic. Therefore, SwitchLDA may represent multilingual topics more flexibly; however, it still has the drawback that the dependencies between languages are relatively weak.

2.3 Correspondence LDA (CorrLDA)
CorrLDA [7] can also be applied to multilingual comparable documents. In the multilingual setting, this model first generates topics for one language part of a document. We refer to this language as a pivot language. For the other languages, the model then uses the topics that were already generated in the pivot language. Figure 2(a) shows a graphical model representation of CorrLDA assuming L languages, when p is the pivot language that is specified in advance. Here, N_d^(ℓ) (ℓ ∈ {p, 2, ..., L}) denotes the number of words in language ℓ in document d. The generative process is shown below:

1. For all D documents' pivot language parts, sample θ_d^(p) ~ Dirichlet(α^(p))
2. For all T topics and for all L languages (including the pivot language), sample φ_t^(ℓ) ~ Dirichlet(β^(ℓ))
3. For each of the N_d^(p) words w_i^(p) in the pivot language p of document d:
   a. Sample a topic z_i^(p) ~ Multinomial(θ_d^(p))
   b. Sample a word w_i^(p) ~ Multinomial(φ^(p)_{z_i^(p)})
4. For each of the N_d^(ℓ) words w_i^(ℓ) in language ℓ (ℓ ∈ {2, ..., L}) of document d:
   a. Sample a topic y_i^(ℓ) ~ Uniform(z_1^(p), ..., z_{N_d^(p)}^(p))
   b. Sample a word w_i^(ℓ) ~ Multinomial(φ^(ℓ)_{y_i^(ℓ)})

This model can capture more direct dependencies between languages, due to the constraint that topics have to be selected from the topics selected in the pivot language parts. However, when CorrLDA is applied to multilingual documents, a pivot language must be specified in advance. Moreover, the pivot language selected is sensitive to the quality of the multilingual topics estimated with CorrLDA.
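As an illustration of the CorrLDA process above, here is a minimal generative sketch in Python/NumPy for a single document. The variable names are our own; in a full corpus the topic-word distributions phi would be shared across documents rather than drawn per document, as noted in the comments.

    import numpy as np

    rng = np.random.default_rng(0)

    def generate_corrlda_doc(alpha, betas, n_words, T):
        """Generate one document from the CorrLDA process sketched above.

        alpha: Dirichlet parameter for the pivot topic distribution (length T)
        betas: list of L word-distribution Dirichlet parameters, betas[0] = pivot
        n_words: word counts per language part, n_words[0] = pivot part
        """
        L = len(betas)
        # topic-word distributions (shared corpus-wide in a full model;
        # sampled here only for a single-document sketch)
        phi = [np.array([rng.dirichlet(b) for _ in range(T)]) for b in betas]
        theta = rng.dirichlet(alpha)

        # pivot language part: topics drawn from theta
        z = rng.choice(T, size=n_words[0], p=theta)
        doc = [[rng.choice(len(betas[0]), p=phi[0][t]) for t in z]]

        # other languages: topics drawn uniformly from the pivot topics
        for l in range(1, L):
            y = rng.choice(z, size=n_words[l])
            doc.append([rng.choice(len(betas[l]), p=phi[l][t]) for t in y])
        return doc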
3 Symmetric Correspondence Topic Models
When CorrLDA is applied to parallel or comparable documents, this model first generates topics for one language part of a document, which we refer to as a pivot language. For the other languages, the model then uses the topics that were already generated in the pivot language. CorrLDA has the great advantage that it can capture more direct dependency between languages; however, it has the disadvantage that it requires a pivot language to be specified in advance. Since the pivot language may differ based on the subject, such as the country a document is about, it is often difficult to appropriately select the pivot language.

[Figure 2: Graphical model representations of (a) CorrLDA, (b) SymCorrLDA, and (c) its variant]

To address this problem, we propose Symmetric Correspondence LDA (SymCorrLDA). This model generates a flag that specifies a pivot language for each word, adjusting the probability of being a pivot language in each language part of a document according to a binomial distribution for bilingual data or a multinomial distribution for data of more than three languages. In other words, SymCorrLDA estimates from the data the best pivot language at the word level in each document. The pivot language flags may be assigned to the words in the originally written portions in each language, since the original portions may be described confidently and with rich vocabulary. Figure 2(b) shows a graphical model representation of SymCorrLDA. SymCorrLDA's generative process is shown as follows, assuming L languages:

1. For all D documents:
   a. For all L languages, sample θ_d^(ℓ) ~ Dirichlet(α^(ℓ))
   b. Sample λ_d ~ Dirichlet(γ)
2. For all T topics and for all L languages, sample φ_t^(ℓ) ~ Dirichlet(β^(ℓ))
3. For each of the N_d^(ℓ) words w_i^(ℓ) in language ℓ (ℓ ∈ {1, ..., L}) of document d:
   a. Sample a pivot language flag x_i^(ℓ) ~ Multinomial(λ_d)
   b. If x_i^(ℓ) = ℓ, sample a topic z_i^(ℓ) ~ Multinomial(θ_d^(ℓ))
   c. If x_i^(ℓ) = m ≠ ℓ, sample a topic y_i^(ℓ) ~ Uniform(z_1^(m), ..., z_{M_d^(m)}^(m))
   d. Sample a word w_i^(ℓ) ~ Multinomial(δ_{x_i^(ℓ)=ℓ} φ^(ℓ)_{z_i^(ℓ)} + (1 − δ_{x_i^(ℓ)=ℓ}) φ^(ℓ)_{y_i^(ℓ)})

The pivot language flag x_i^(ℓ) = ℓ for an arbitrary language ℓ indicates that the pivot language for the word w_i^(ℓ) is its own language ℓ, and x_i^(ℓ) = m indicates that the pivot language for w_i^(ℓ) is another language m different from its own language ℓ. The indicator function δ takes the value 1 when the designated event occurs and 0 otherwise. Unlike CorrLDA, the uniform distribution at Step 3-c is not based on the topics that are generated for all N_d^(m) words with the pivot language flags, but based only on the topics that are already generated for M_d^(m) (M_d^(m) ≤ N_d^(m)) words with the pivot language flags at each step of the generative process.²

[Footnote 2: M_d^(m) may indeed differ in size at the step of generating each word in the generative process. However, this is not problematic for inference, such as by collapsed Gibbs sampling, where any topic is first randomly assigned to every word, and a more appropriate topic is then re-assigned to each word, based on the topics previously assigned to all N_d^(m) words, not M_d^(m) words, with the pivot language flags.]

The full conditional probability for collapsed Gibbs sampling of this model is given by the following equations, assuming symmetric Dirichlet priors parameterized by α^(ℓ), β^(ℓ) (ℓ ∈ {1, ..., L}), and γ:

    P(z_i^(ℓ) = t, x_i^(ℓ) = ℓ | w_i^(ℓ) = w, z_{−i}^(ℓ), w_{−i}^(ℓ), x_{−i}, α^(ℓ), β^(ℓ), γ)
      ∝ [(n_{dℓ,−i} + γ) / (n_{dℓ,−i} + Σ_{j≠ℓ} n_{dj} + Lγ)]
        · [(C^{TD(ℓ)}_{td,−i} + α^(ℓ)) / (Σ_{t'} C^{TD(ℓ)}_{t'd,−i} + T α^(ℓ))]
        · [(C^{W(ℓ)T}_{wt,−i} + β^(ℓ)) / (Σ_{w'} C^{W(ℓ)T}_{w't,−i} + W^(ℓ) β^(ℓ))]        (1)

    P(y_i^(ℓ) = t, x_i^(ℓ) = m | w_i^(ℓ) = w, y_{−i}^(ℓ), z^(m), w_{−i}^(ℓ), x_{−i}, β^(ℓ), γ)
      ∝ [(n_{dm,−i} + γ) / (n_{dm,−i} + Σ_{j≠m} n_{dj} + Lγ)]
        · [C^{TD(m)}_{td} / N_d^(m)]
        · [(C^{W(ℓ)T}_{wt,−i} + β^(ℓ)) / (Σ_{w'} C^{W(ℓ)T}_{w't,−i} + W^(ℓ) β^(ℓ))]        (2)

where w^(ℓ) = {w_i^(ℓ)}, z^(ℓ) = {z_i^(ℓ)}, and x^(ℓ) = {x_i^(ℓ)}. W^(ℓ) and N_d^(ℓ) respectively indicate the total number of vocabulary words (word types) in the specified language, and the number of word tokens that appear in the specified language part of document d. n_{dℓ} and n_{dm} are the number of times, for an arbitrary word i ∈ {1, ..., N_d^(j)} in an arbitrary language j ∈ {1, ..., L} of document d, the flags x_i^(j) = ℓ and x_i^(j) = m, respectively, are allocated in document d. C^{TD(ℓ)}_{td} indicates the (t, d) element of a T × D topic-document count matrix, meaning the number of times topic t is allocated to document d's language part specified in parentheses. C^{W(ℓ)T}_{wt} indicates the (w, t) element of a W^(ℓ) × T word-topic count matrix, meaning the number of times topic t is allocated to word w in the language specified in parentheses. The subscript "−i" indicates that w_i is removed from the data.
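As a rough illustration of how Eqs. (1) and (2) drive the sampler, the following sketch (Python/NumPy, with hypothetical count-array names, assuming bilingual data and counts that already exclude the word being resampled) assembles the unnormalized joint scores over (topic, flag) for a single word:

    import numpy as np

    def symcorrlda_word_scores(ell, m, w, d, n_flag, C_td, C_wt, N_pivot,
                               alpha, beta, gamma, L=2):
        """Unnormalized P(topic t, flag) over t for one word w (language ell, doc d).

        n_flag[d]     : length-L vector, # words of doc d flagged to each language
        C_td[j]       : T x D topic-document counts for language j
        C_wt[j]       : W_j x T word-topic counts for language j
        N_pivot[d][m] : # words in language m of doc d (candidate pivot topics)
        All counts are assumed to already exclude the word being resampled.
        """
        T = C_td[ell].shape[0]
        flag_denom = n_flag[d].sum() + L * gamma

        # shared third factor of Eqs. (1) and (2): word given topic in language ell
        word_term = (C_wt[ell][w] + beta) / (C_wt[ell].sum(axis=0)
                                             + C_wt[ell].shape[0] * beta)

        # Eq. (1): flag = own language ell, topic drawn from theta_d^(ell)
        topic_own = (C_td[ell][:, d] + alpha) / (C_td[ell][:, d].sum() + T * alpha)
        p_own = (n_flag[d][ell] + gamma) / flag_denom * topic_own * word_term

        # Eq. (2): flag = other language m, topic uniform over m's pivot topics
        topic_other = C_td[m][:, d] / max(N_pivot[d][m], 1)
        p_other = (n_flag[d][m] + gamma) / flag_denom * topic_other * word_term

        return p_own, p_other  # sample (flag, topic) jointly from these 2T scores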
Table 1: Summary of bilingual data.
                                  Japanese      English
    No. of documents                     229,855
    No. of word types (vocab)      124,046      173,157
    No. of word tokens          61,187,469   80,096,333

Table 2: Summary of trilingual data.
                                  Japanese      English      Spanish
    No. of documents                          90,602
    No. of word types (vocab)       70,902       98,474       96,191
    No. of word tokens          25,952,978   33,999,988   25,701,830

Now we slightly modify SymCorrLDA by replacing Step 3-c in its generative process by:

3-c. If x_i^(ℓ) = m ≠ ℓ, sample a topic y_i^(ℓ) ~ Multinomial(θ_d^(m))

Figure 2(c) shows a graphical model representation of this alternative SymCorrLDA. In this model, non-pivot topics depend on the distribution behind the pivot topics, not directly on the pivot topics as in the original SymCorrLDA. With this modification, the generative process is more naturally described. Accordingly, Eq. (2) of the full conditional probability is replaced by:

    P(y_i^(ℓ) = t, x_i^(ℓ) = m | w_i^(ℓ) = w, y_{−i}^(ℓ), z^(m), w_{−i}^(ℓ), x_{−i}, β^(ℓ), γ)
      ∝ [(n_{dm,−i} + γ) / (n_{dm,−i} + Σ_{j≠m} n_{dj} + Lγ)]
        · [(C^{TD(m)}_{td} + α^(m)) / (Σ_{t'} C^{TD(m)}_{t'd} + T α^(m))]
        · [(C^{W(ℓ)T}_{wt,−i} + β^(ℓ)) / (Σ_{w'} C^{W(ℓ)T}_{w't,−i} + W^(ℓ) β^(ℓ))]        (3)

As you can see in the second term of the right-hand side above, the constraints are relaxed by this modification, so that topics do not always have to be selected from the topics selected for the words with the pivot language flags, differently from Eq. (2). We will show through experiments how the modification affects the quality of the estimated multilingual topics in the following section.

4 Experiments
In this section, we demonstrate some examples with SymCorrLDA, and then we compare multilingual topic models using various evaluation methods. For the evaluation, we use held-out log-likelihood on two datasets, the task of finding an English article that is on the same topic as that of a Japanese article, and the task with the languages reversed.
4.1 Settings
The datasets used in this work are two collections of Wikipedia articles: one is in English and Japanese, the other is in English, Japanese, and Spanish, and articles in each collection are connected across languages via inter-language links, as of November 2, 2009. We extracted text content from the original Wikipedia articles, removing link information and revision history information. We used WP2TXT³ for this purpose. For English articles, we removed 418 types of standard stop words [12]. For Spanish articles, we removed 351 types of standard stop words [13]. As for Japanese articles, we removed function words, such as symbols, conjunctions, and particles, using part-of-speech tags annotated by MeCab⁴. The statistics of the datasets after preprocessing are shown in Tables 1 and 2.

[Footnote 3: http://wp2txt.rubyforge.org/]
[Footnote 4: http://mecab.sourceforge.net/]
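A minimal sketch of this preprocessing step might look as follows; it assumes the mecab-python3 binding and an IPA-style dictionary whose first feature field is the part of speech, and the stop-word lists themselves come from [12, 13]:

    import MeCab  # mecab-python3; assumes a system MeCab dictionary is installed

    # particles, auxiliary verbs, conjunctions, symbols (IPA dictionary POS tags)
    FUNCTION_POS = {"助詞", "助動詞", "接続詞", "記号"}

    def tokenize_japanese(text):
        """Keep content words only, dropping function words by POS tag."""
        tagger = MeCab.Tagger()
        node = tagger.parseToNode(text)
        tokens = []
        while node:
            pos = node.feature.split(",")[0]
            if node.surface and pos not in FUNCTION_POS:
                tokens.append(node.surface)
            node = node.next
        return tokens

    def tokenize_with_stopwords(text, stopwords):
        """Simple whitespace tokenizer for English/Spanish with a stop-word list."""
        return [w for w in text.lower().split() if w.isalpha() and w not in stopwords]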
We imposed the convergence condition of collapsed Gibbs sampling, such that the percentage change of held-out log-likelihood is less than 0.1%. For SymCorrLDA, we assumed symmetric Dirichlet hyperparameters ? = 1. For SwitchLDA, we assumed symmetric Dirichlet hyperparameters ? = 1. We investigated the e?ect of ? in SymCorrLDA and ? in SwitchLDA; however, the held-out log-likelihood was almost constant when varying these hyperparameters. LDA does not distinguish languages, so for a baseline we assumed all the language parts connected via inter-language links to be mixed together as a single document. 4.2 Pivot assignments Figure 3 demonstrates how the frequency distribution of the pivot language-flag (binomial) parameter ?d,1 for the Japanese language with the bilingual dataset5 in SymCorrLDA changes while in iterations of collapsed Gibbs sampling. This figure shows that the pivot language flag is randomly assigned at the initial state, and then it converges to an appropriate bias for each document as the iterations proceed. We next demonstrate how the pivot language flags are assigned to each document. Figure 4(a) shows the titles of eight documents and the corresponding ?d when using the bilingual data (T = 500). If ?d,1 is close to 1, the article can be considered to be more related to a subject on Japanese or Japan. In contrast, if ?d,1 is close to 0 and therefore ?d,2 = 1 ? ?d,1 is close to 1, the article can be considered to be more related to a subject on English or English-speaking countries. Therefore, a pivot is assigned considering the language biases of the articles. Figure 4(b) shows the titles of six documents and the corresponding ?d = (?d,1 , ?d,2 , ?d,3 ) when using the trilingual 5 The parameter for English was ?d,2 = 1 ? ?d,1 in this case. 6 Table 3: Per-word held-out log-likelihood with bilingual data. Boldface indicates the best result in each column. T=500 Japanese English LDA CI-LDA SwitchLDA CorrLDA1 CorrLDA2 SymCorrLDA SymCorrLDA-alt SymCorrLDA-rand -8.127 -8.136 -8.139 -7.463 -7.777 -7.433 -7.476 -7.483 -8.633 -8.644 -8.641 -8.403 -8.197 -8.175 -8.206 -8.222 T=1000 Japanese English -7.992 -8.008 -8.012 -7.345 -7.663 -7.317 -7.358 -7.373 Table 4: Per-word held-out log-likelihood with trilingual data. Boldface indicates the best result in each column. T=500 T=1000 Japanese English Spanish Japanese English Spanish -8.530 -8.549 -8.549 -8.346 -8.109 -8.084 -8.116 -8.137 CorrLDA1 CorrLDA2 CorrLDA3 SymCorrLDA SymCorrLDA-alt -7.408 -7.655 -7.794 -7.394 -7.440 -8.512 -8.198 -8.460 -8.178 -8.209 -8.667 -8.467 -8.338 -8.289 -8.330 -7.305 -7.572 -7.700 -7.287 -7.330 -8.393 -8.122 -8.383 -8.093 -8.120 -8.545 -8.401 -8.274 -8.215 -8.254 data (T = 500). Here, ?d,1 , ?d,2 , and ?d,3 respectively indicate the pivot language-flag (multinomial) parameters corresponding to Japanese, English, and Spanish parts in each document. We further demonstrate the proportions of pivot assignments at the topic level. Figure 5 shows the content of 6 topics through 10 words with the highest probability for each language and for each topic when using the bilingual data (T = 500), some of which are biased to Japanese (Topics 13 and 59) or English (Topics 201 and 251), while the others have almost no bias. It can be seen that the pivot bias to specific languages can be interpreted. 4.3 Held-out log-likelihood By measuring the held-out log-likelihood, we can evaluate the quality of each topic model. The higher the held-out log-likelihood, the greater the predictive ability of the model. 
In this work, we estimated multilingual topic models with the training set and computed the log-likelihood of generating the held-out set that was mentioned in Section 4.1. Table 3 shows the held-out log-likelihood of each multilingual topic model estimated with the bilingual dataset when T = 500 and 1000. Note that the held-out log-likelihood (i.e., the micro-average per-word log-likelihood of the 10% held-out set) is shown for each language in this table, while the model estimation was performed over the 90% training set in all the languages. Hereafter, CorrLDA1 refers to the CorrLDA model that was estimated when Japanese was the pivot language. As described in Section 2.3, the CorrLDA model first generates topics for the pivot language part of a document, and for the other language parts of the document, the model then uses the topics that were already generated in the pivot language. CorrLDA2 refers to the CorrLDA model when English was the pivot language. As the results in Table 3 show, the held-out log-likelihoods of CorrLDA1 and CorrLDA2 are much higher than those of the other prior models: CI-LDA, SwitchLDA, and LDA, in both cases. This is because CorrLDA can capture direct dependencies between languages, due to the constraints that topics have to be selected from the topics selected in the pivot language parts. On the other hand, CI-LDA and SwitchLDA are too poorly constrained to e?ectively capture the dependencies between languages, as mentioned in Sections 2.1 and 2.2. In particular, CorrLDA1 has the highest held-out log-likelihood among all the prior models for Japanese, while CorrLDA2 is the best among all the prior models for English. This is probably due to the fact that CorrLDA can estimate topics from the pivot language parts (Japanese in the case of CorrLDA1) without any specific constraints; however, great constraints (topics having to be selected from the topics selected in the pivot language parts) are imposed for the other language parts. In SymCorrLDA, the held-out log-likelihood for Japanese is larger than that of CorrLDA1 (and the other models), and the held-out log-likelihood for English is larger than that of CorrLDA2. This is probably because SymCorrLDA estimates the pivot language appropriately adjusted for each word in each document. Next, we compare SymCorrLDA and its alternative version (SymCorrLDA-alt). We observed in Table 3 that the held-out log-likelihood of SymCorrLDA-alt is smaller than that of the original SymCorrLDA, and comparable to CorrLDA?s best. This is because the constraints in SymCorrLDA-alt are relaxed so that topics do not always have to be selected from the topics selected for the words with the pivot language flags. For further consideration, let us examine the results of the simplified implementation: SymCorrLDA-rand, which we defined in Section 4.1. SymCorrLDA-rand?s held-out log-likelihood lies even below CorrLDA?s best. These results reflect the fact that the performance of SymCorrLDA in its full form is inherently a?ected by the nature of the language biases in the multilingual comparable documents, rather than merely being a?ected by the language part length. 7 Table 4 shows the held-out log-likelihood with the trilingual data when T = 500 and 1000. Here, CorrLDA3 refers to the CorrLDA model that was estimated when Spanish was the pivot language. As you can see in this table, SymCorrLDA?s held-out log-likelihood is larger than CorrLDA?s best. 
Table 4 shows the held-out log-likelihood with the trilingual data when T = 500 and 1000. Here, CorrLDA3 refers to the CorrLDA model that was estimated when Spanish was the pivot language. As you can see in this table, SymCorrLDA's held-out log-likelihood is larger than CorrLDA's best. SymCorrLDA can estimate the pivot language appropriately, adjusted for each word in each document, in the trilingual data, as with the bilingual data. SymCorrLDA-alt behaves similarly as with the bilingual data. For both the bilingual and trilingual data, the improvements with SymCorrLDA were statistically significant compared to each of the other models, according to the Wilcoxon signed-rank test at the 5% level in terms of the word-by-word held-out log-likelihood. As for scalability, SymCorrLDA is as scalable as CorrLDA, because the time complexity of SymCorrLDA is of the same order as that of CorrLDA: the number of topics times the sum of the vocabulary sizes of the languages. In clock time, SymCorrLDA does pay some extra, around 40% of the time of CorrLDA in the case of the bilingual data, for allocating the pivot language flags.

4.4 Finding counterpart articles
Given an article, we can find its unseen counterpart articles in other languages using a multilingual topic model. To evaluate this task, we experimented with the bilingual dataset. We estimated document-topic distributions of test documents for each language, using the topic-word distributions that were estimated by each multilingual topic model with the training documents. We then evaluated the performance of finding English counterpart articles using Japanese articles as queries, and vice versa. For estimating the document-topic distributions of test documents, we used re-sampling of LDA using the topic-word distribution estimated beforehand by each multilingual topic model [15]. We then computed the Jensen-Shannon (JS) divergence [16] between a document-topic distribution of Japanese and that of English for each test document. Each held-out English-Japanese article pair connected via an inter-language link is considered to be on the same topic; therefore, the JS divergence of such an article pair is expected to be small if the latent topic estimation is accurate. We first assumed each held-out Japanese article to be a query and the corresponding English article to be relevant, and evaluated the ranking of all the test articles of English in ascending order of the JS divergence; then we conducted the task with the languages reversed. Table 5 shows the results of mean reciprocal rank (MRR) when T = 500 and 1000. The reciprocal rank is defined as the multiplicative inverse of the rank of the counterpart article corresponding to each query article, and the mean reciprocal rank is the average of it over all the query articles. CorrLDA works much more effectively than the other prior models, CI-LDA, SwitchLDA, and LDA, and overall, SymCorrLDA works the most effectively. We observed that the improvements with SymCorrLDA were statistically significant according to the Wilcoxon signed-rank test at the 5% level, compared with each of the other models. Therefore, it is clear that SymCorrLDA estimates multilingual topics the most successfully in this experiment.

Table 5: MRR in the counterpart article finding task. The best result in each column (boldface in the original) is marked with *.

                    Japanese to English      English to Japanese
                    T=500      T=1000        T=500      T=1000
    LDA             0.0743     0.1027        0.0870     0.1262
    CI-LDA          0.1426     0.1464        0.1697     0.1818
    SwitchLDA       0.1357     0.1347        0.1668     0.1653
    CorrLDA1        0.2987     0.3281        0.2863     0.3111
    CorrLDA2        0.2829     0.3063        0.3161     0.3464
    SymCorrLDA      0.3256*    0.3592*       0.3348*    0.3685*
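The ranking metric above is simple to compute; the following is a minimal sketch (our own function names) of the JS divergence between two topic distributions and the resulting mean reciprocal rank, ranking candidates by ascending divergence:

    import numpy as np

    def js_divergence(p, q):
        """Jensen-Shannon divergence between two topic distributions."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        m = 0.5 * (p + q)
        def kl(a, b):
            mask = a > 0
            return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    def mean_reciprocal_rank(query_thetas, cand_thetas, relevant):
        """MRR for counterpart retrieval; relevant[q] is the true counterpart index."""
        rr = []
        for q, theta_q in enumerate(query_thetas):
            divs = [js_divergence(theta_q, theta_c) for theta_c in cand_thetas]
            rank = 1 + sum(1 for d_ in divs if d_ < divs[relevant[q]])
            rr.append(1.0 / rank)
        return sum(rr) / len(rr)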
5 Conclusions
In this paper, we compared the performance of various topic models that can be applied to multilingual documents, without using multilingual dictionaries, in terms of held-out log-likelihood and in the task of cross-lingual link detection. We demonstrated through experiments that CorrLDA works significantly more effectively than CI-LDA, which was used in prior work on multilingual topic models. Furthermore, we proposed a new topic model, SymCorrLDA, that incorporates a hidden variable to control a pivot language, in an extension of CorrLDA. SymCorrLDA has the advantage that it does not require a pivot language to be specified in advance, while CorrLDA does. We demonstrated that SymCorrLDA is more effective than CorrLDA and the other topic models, through experiments with Wikipedia datasets using held-out log-likelihood and the task of finding counterpart articles in other languages. SymCorrLDA can be applied to other kinds of data that have multiple classes of representations, such as annotated image data. We plan to investigate this in future work.

Acknowledgments
We thank Sinead Williamson, Manami Matsuura, and the anonymous reviewers for valuable discussions and comments. This work was supported in part by the Grant-in-Aid for Scientific Research (#23300039) from JSPS, Japan.

References
[1] Thomas Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 50–57, Berkeley, California, USA, 1999.
[2] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[3] David Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. Polylingual topic models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 880–889, Stroudsburg, Pennsylvania, USA, 2009.
[4] Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen. Mining multilingual topics from Wikipedia. In Proceedings of the 18th International Conference on World Wide Web, pages 1155–1156, Madrid, Spain, 2009.
[5] Elena Erosheva, Stephen Fienberg, and John Lafferty. Mixed-membership models of scientific publications. Proceedings of the National Academy of Sciences of the United States of America, 101:5220–5227, 2004.
[6] David Newman, Chaitanya Chemudugunta, Padhraic Smyth, and Mark Steyvers. Statistical entity-topic models. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 680–686, Philadelphia, Pennsylvania, USA, 2006.
[7] David M. Blei and Michael I. Jordan. Modeling annotated data. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 127–134, Toronto, Canada, 2003.
[8] Bing Zhao and Eric P. Xing. BiTAM: Bilingual topic admixture models for word alignment. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics, pages 969–976, Sydney, Australia, 2006.
[9] Jordan Boyd-Graber and David M. Blei. Multilingual topic models for unaligned text. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pages 75–82, Montreal, Canada, 2009.
[10] Jagadeesh Jagarlamudi and Hal Daumé. Extracting multilingual topics from unaligned comparable corpora. In Advances in Information Retrieval, volume 5993 of Lecture Notes in Computer Science, pages 1–12. Springer, 2010.
[11] Duo Zhang, Qiaozhu Mei, and ChengXiang Zhai. Cross-lingual latent topic extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1128–1137, Uppsala, Sweden, 2010.
[12] James P. Callan, W. Bruce Croft, and Stephen M. Harding. The INQUERY retrieval system. In Proceedings of the 3rd International Conference on Database and Expert Systems Applications, pages 78–83, Valencia, Spain, 1992.
[13] Jacques Savoy. Report on CLEF-2002 experiments: Combining multiple sources of evidence. In Advances in Cross-Language Information Retrieval, volume 2785 of Lecture Notes in Computer Science, pages 66–90. Springer, 2003.
[14] Mark Steyvers and Tom Griffiths. Handbook of Latent Semantic Analysis, chapter 21: Probabilistic Topic Models. Lawrence Erlbaum Associates, Mahwah, New Jersey and London, 2007.
[15] Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. Evaluation methods for topic models. In Proceedings of the 26th International Conference on Machine Learning, pages 1105–1112, Montreal, Canada, 2009.
[16] Jianhua Lin. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145–151, 1991.
Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses

Po-Ling Loh
Department of Statistics
University of California, Berkeley
Berkeley, CA 94720
[email protected]

Martin J. Wainwright
Departments of Statistics and EECS
University of California, Berkeley
Berkeley, CA 94720
[email protected]

Abstract
We investigate a curious relationship between the structure of a discrete graphical model and the support of the inverse of a generalized covariance matrix. We show that for certain graph structures, the support of the inverse covariance matrix of indicator variables on the vertices of a graph reflects the conditional independence structure of the graph. Our work extends results that have previously been established only in the context of multivariate Gaussian graphical models, thereby addressing an open question about the significance of the inverse covariance matrix of a non-Gaussian distribution. Based on our population-level results, we show how the graphical Lasso may be used to recover the edge structure of certain classes of discrete graphical models, and present simulations to verify our theoretical results.

1 Introduction
Graphical model inference is now prevalent in many fields, running the gamut from computer vision and civil engineering to political science and epidemiology. In many applications, learning the edge structure of an underlying graphical model is of great importance: for instance, a graphical model may be used to represent friendships between people in a social network, or links between organisms with the propensity to spread an infectious disease [1]. It is well known that zeros in the inverse covariance matrix of a multivariate Gaussian distribution indicate the absence of an edge in the corresponding graphical model. This fact, combined with techniques in high-dimensional statistical inference, has been leveraged by many authors to recover the structure of a Gaussian graphical model when the edge set is sparse (e.g., see the papers [2, 3, 4, 5] and references therein). Recently, Liu et al. [6, 7] introduced the notion of a nonparanormal distribution, which generalizes the Gaussian distribution by allowing for univariate monotonic transformations, and argued that the same structural properties of the inverse covariance matrix carry over to the nonparanormal. However, the question of whether a relationship exists between conditional independence and the structure of the inverse covariance matrix in a general graph remains unresolved. In this paper, we focus on discrete graphical models and establish a number of interesting links between covariance matrices and the edge structure of an underlying graph. Instead of only analyzing the standard covariance matrix, we show that it is often fruitful to augment the usual covariance matrix with higher-order interaction terms. Our main result has a striking corollary in the context of tree-structured graphs: for any discrete graphical model, the inverse of a generalized covariance matrix is always (block) graph-structured. In particular, for binary variables, the inverse of the usual covariance matrix corresponds exactly to the edge structure of the tree. We also establish several corollaries that apply to more general discrete graphs. Our methods are capable of handling noisy or missing data in a seamless manner.
Other related work on graphical model selection for discrete graphs includes the classic Chow-Liu algorithm for trees [8]; nodewise logistic regression for discrete models with pairwise interactions [9, 10]; and techniques based on conditional entropy or mutual information [11, 12]. Our main contribution is to present a clean and surprising result on a simple link between the inverse covariance matrix and the edge structure of a discrete model, which may be used to derive inference algorithms applicable even to data with systematic corruptions.

The remainder of the paper is organized as follows: In Section 2, we provide brief background and notation on graphical models, and describe the classes of augmented covariance matrices we will consider. In Section 3, we state our main results on the relationship between the support of generalized inverse covariance matrices and the edge structure of a discrete graphical model. We relate our population-level results to concrete algorithms that are guaranteed to recover the edge structure of a discrete graph with high probability. In Section 4, we report the results of simulations used to verify our theoretical claims. For detailed proofs, we refer the reader to the technical report [13].

2 Background and problem setup

In this section, we provide background on graphical models and exponential families. We then work through a simple example that illustrates the phenomena and methodology studied in this paper.

2.1 Graphical models

An undirected graph G = (V, E) consists of a collection of vertices V = {1, 2, ..., p} and a collection of unordered vertex pairs E ⊆ V × V, meaning no distinction is made between edges (s, t) and (t, s). We associate to each vertex s ∈ V a random variable $X_s$ taking values in some space $\mathcal{X}$. The random vector $X := (X_1, \dots, X_p)$ is a Markov random field with respect to G if $X_A \perp X_B \mid X_S$ whenever S is a cutset of A and B, meaning every path from A to B in G must pass through S. We have used the shorthand $X_A := \{X_s : s \in A\}$. In particular, $X_s \perp X_t \mid X_{V \setminus \{s,t\}}$ whenever (s, t) ∉ E. By the Hammersley-Clifford theorem for strictly positive distributions [14], the Markov properties imply a factorization of the distribution of X:

$$P(x_1, \dots, x_p) \;\propto\; \prod_{C \in \mathcal{C}} \psi_C(x_C), \qquad (1)$$

where $\mathcal{C}$ is the set of all cliques (fully-connected subsets of V) and $\psi_C(x_C)$ are the corresponding clique potentials. The factorization (1) may alternatively be represented in terms of an exponential family associated with the clique structure of G. For each clique $C \in \mathcal{C}$, we define a family of sufficient statistics $\{\phi_{C;\alpha} : \mathcal{X}^{|C|} \to \mathbb{R},\ \alpha \in I_C\}$ associated with variables in C, where $I_C$ indexes the sufficient statistics corresponding to C. We also introduce a canonical parameter $\theta_{C;\alpha} \in \mathbb{R}$ associated with each sufficient statistic $\phi_{C;\alpha}$. For a given assignment of canonical parameters θ, we may express the clique potentials as

$$\psi_C(x_C) \;=\; \sum_{\alpha \in I_C} \theta_{C;\alpha}\,\phi_{C;\alpha}(x_C) \;:=\; \langle \theta_C, \phi_C \rangle,$$

so equation (1) may be rewritten as

$$P_\theta(x_1, \dots, x_p) \;=\; \exp\Big\{ \sum_{C \in \mathcal{C}} \langle \theta_C, \phi_C \rangle - A(\theta) \Big\}, \qquad (2)$$

where $A(\theta) := \log \sum_{x \in \mathcal{X}^p} \exp\big\{ \sum_{C \in \mathcal{C}} \langle \theta_C, \phi_C \rangle \big\}$ is the (log) partition function. Note that for a graph with only pairwise interactions, we have $\mathcal{C} = V \cup E$. If we associate the function $\phi_s(x_s) = x_s$ with clique {s} and the function $\phi_{st}(x_s, x_t) = x_s x_t$ with edge (s, t), the factorization (2) becomes

$$P_\theta(x_1, \dots, x_p) \;=\; \exp\Big\{ \sum_{s \in V} \theta_s x_s + \sum_{(s,t) \in E} \theta_{st}\, x_s x_t - A(\theta) \Big\}. \qquad (3)$$

When $\mathcal{X} = \{0, 1\}$, this family of distributions corresponds to the inhomogeneous Ising model.
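For small graphs, the family (3) can be evaluated exactly by enumerating all $2^p$ states, which makes the role of the partition function A(θ) concrete. The following sketch is our own illustration (not from the paper); it uses the four-node chain and the potentials that appear in Example 1 below.

```python
import itertools
import numpy as np

def ising_distribution(p, theta_node, theta_edge, edges):
    """Exact probabilities of a binary {0,1} Ising model, eq. (3),
    by brute-force enumeration of all 2**p states (small p only)."""
    states = np.array(list(itertools.product([0, 1], repeat=p)), dtype=float)
    log_w = states @ theta_node                      # node terms
    for (s, t) in edges:
        log_w += theta_edge[(s, t)] * states[:, s] * states[:, t]  # edge terms
    A = np.log(np.exp(log_w).sum())                  # log partition function A(theta)
    return states, np.exp(log_w - A)

edges = [(0, 1), (1, 2), (2, 3)]                     # chain on four nodes
states, probs = ising_distribution(4, np.full(4, 0.1),
                                   {e: 2.0 for e in edges}, edges)
assert np.isclose(probs.sum(), 1.0)
```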
When $\mathcal{X} = \mathbb{R}$ (and with certain additional restrictions on the weights), the family (3) corresponds to a Gauss-Markov random field. Both of these models are minimal exponential families, meaning the sufficient statistics are linearly independent [15].

For a discrete graphical model with $\mathcal{X} = \{0, 1, \dots, m-1\}$, it is convenient to make use of sufficient statistics involving indicator functions. For clique C, define the subset of configurations

$$\mathcal{X}_0^{|C|} = \big\{ J = (j_1, \dots, j_{|C|}) \;:\; j_\ell \neq 0 \text{ for all } \ell = 1, \dots, |C| \big\},$$

for which no variables take the value 0. Then $|\mathcal{X}_0^{|C|}| = (m-1)^{|C|}$. For any configuration $J \in \mathcal{X}_0^{|C|}$, we define the indicator function

$$\mathbb{1}_{C;J}(x_C) = \begin{cases} 1 & \text{if } x_C = J, \\ 0 & \text{otherwise}, \end{cases}$$

and consider the family of models

$$P_\theta(x_1, \dots, x_p) = \exp\Big\{ \sum_{C \in \mathcal{C}} \langle \theta_C, \mathbb{1}_C \rangle - A(\theta) \Big\}, \quad x_j \in \mathcal{X} = \{0, 1, \dots, m-1\}, \qquad (4)$$

with $\langle \theta_C, \mathbb{1}_C \rangle = \sum_{J \in \mathcal{X}_0^{|C|}} \theta_{C;J}\, \mathbb{1}_{C;J}(x_C)$. Note in particular that when m = 2, $\mathcal{X}_0^{|C|}$ is a singleton state containing the vector of all ones, and the sufficient statistics are given by

$$\mathbb{1}_{C;J}(x_C) = \prod_{s \in C} x_s, \quad \text{for } C \in \mathcal{C} \text{ and } J = \{1\}^{|C|};$$

i.e., the indicator functions may simply be expressed as products of variables appearing in the clique. When the graphical model has only pairwise interactions, elements of $\mathcal{C}$ have cardinality at most two, and the model (4) clearly reduces to the Ising model (3). Finally, as with equation (3), the family (4) is a minimal exponential family.

2.2 Covariance matrices and beyond

Consider the usual covariance matrix $\Sigma = \mathrm{cov}(X_1, \dots, X_p)$. When X is Gaussian, it is a well-known consequence of the Hammersley-Clifford theorem that the entries of the precision matrix $\Theta = \Sigma^{-1}$ correspond to rescaled conditional correlations [14]. The magnitude of $\Theta_{st}$ is a scalar multiple of the correlation of $X_s$ and $X_t$ conditioned on $X_{V \setminus \{s,t\}}$, and encodes the strength of the edge (s, t). In particular, the sparsity pattern of Θ reflects the edge structure of the graph: $\Theta_{st} = 0$ if and only if $X_s \perp X_t \mid X_{V \setminus \{s,t\}}$.

For more general distributions, however, $\mathrm{Corr}(X_s, X_t \mid X_{V \setminus \{s,t\}})$ is a function of $X_{V \setminus \{s,t\}}$, and it is not known whether the entries of Θ have any relationship with the strengths of edges in the graph. Nonetheless, it is tempting to conjecture that inverse covariance matrices, and more generally, inverses of higher-order moment matrices, might be related to graph structure. Let us explore this possibility by considering a simple example, namely the binary Ising model (3) with $\mathcal{X} = \{0, 1\}$.

Example 1. Consider a simple chain graph on four nodes, as illustrated in Figure 1(a). In terms of the factorization (3), let the node potentials be $\theta_s = 0.1$ for all s ∈ V and the edge potentials be $\theta_{st} = 2$ for all (s, t) ∈ E. For a multivariate Gaussian graphical model defined on G, standard theory predicts that the inverse covariance matrix $\Gamma = \Sigma^{-1}$ of the distribution is graph-structured: $\Gamma_{st} = 0$ if and only if (s, t) ∉ E. Surprisingly, this is also the case for the chain graph with binary variables: a little computation shows that Γ takes the form shown in panel (f). However, this statement is not true for the single-cycle graph shown in panel (b), with added edge (1, 4). Indeed, as shown in panel (g), the inverse covariance matrix has no zero entries at all. But for a more complicated graph, say the one in (e), we again observe a graph-structured inverse covariance matrix.
Still focusing on the single-cycle graph in panel (b), suppose that instead of considering the ordinary covariance matrix, we compute the covariance matrix of the augmented random vector $(X_1, X_2, X_3, X_4, X_1 X_3)$, where the extra term $X_1 X_3$ is represented by the dotted edge shown in panel (c).

[Figure 1: (a) Chain; (b) single cycle; (c) edge augmented; (d) with 3-cliques; (e) dinosaur ("Dino"). (f) Inverse covariance for the chain-structured graph in (a). (g) Inverse covariance for the single-cycle graph in (b):]

$$\Gamma_{\text{chain}} = \begin{pmatrix} 9.80 & -3.59 & 0 & 0 \\ -3.59 & 34.30 & -4.77 & 0 \\ 0 & -4.77 & 34.30 & -3.59 \\ 0 & 0 & -3.59 & 9.80 \end{pmatrix} \;\; (f), \qquad \Gamma_{\text{loop}} = \begin{pmatrix} 51.37 & -5.37 & -0.17 & -5.37 \\ -5.37 & 51.37 & -5.37 & -0.17 \\ -0.17 & -5.37 & 51.37 & -5.37 \\ -5.37 & -0.17 & -5.37 & 51.37 \end{pmatrix} \;\; (g)$$

The 5 × 5 inverse of this generalized covariance matrix takes the form

$$\Gamma_{\text{aug}} = 10^3 \times \begin{pmatrix} 1.15 & -0.02 & 1.09 & -0.02 & -1.14 \\ -0.02 & 0.05 & -0.02 & 0 & 0.01 \\ 1.09 & -0.02 & 1.14 & -0.02 & -1.14 \\ -0.02 & 0 & -0.02 & 0.05 & 0.01 \\ -1.14 & 0.01 & -1.14 & 0.01 & 1.19 \end{pmatrix}.$$

This matrix safely separates nodes 1 and 4, but the entry corresponding to the phantom edge (1, 3) is not equal to zero. Indeed, we would observe a similar phenomenon if we chose to augment the graph by including the edge (2, 4) rather than (1, 3). Note that the relationship between entries of $\Gamma_{\text{aug}}$ and the edge strength is not direct; although the factorization (3) has no potential corresponding to the augmented "edge" (1, 3), the (1, 3) entry of $\Gamma_{\text{aug}}$ is noticeably larger in magnitude than the entries corresponding to actual edges with nonzero potentials.

This example shows that the usual inverse covariance matrix is not always graph-structured, but computing generalized covariance matrices involving higher-order interaction terms may indicate graph structure. Now let us consider a more general graphical model that adds the 3-clique interaction terms shown in panel (d) to the usual Ising terms. We compute the covariance matrix of the augmented vector

$$\phi(X) = \big( X_1, X_2, X_3, X_4, X_1X_2, X_2X_3, X_3X_4, X_1X_4, X_1X_3, X_1X_2X_3, X_1X_3X_4 \big) \in \{0, 1\}^{11}.$$

Empirically, we find that the 11 × 11 inverse of the matrix cov(φ(X)) continues to respect aspects of the graph structure: in particular, there are zeros in position (α, β), corresponding to the associated functions $X_\alpha = \prod_{s \in \alpha} X_s$ and $X_\beta = \prod_{s \in \beta} X_s$, whenever α and β do not lie within the same maximal clique. (For instance, this applies to the pairs (α, β) = ({2}, {4}) and (α, β) = ({2}, {1, 4}).)

The goal of this paper is to understand when certain inverse covariances do (and do not) capture the structure of a graphical model. The underlying principles behind the behavior demonstrated in Example 1 will be made concrete in Theorem 1 and its corollaries in the next section.
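Example 1 is easy to reproduce numerically by computing population covariances under the exact distribution. The sketch below is our own illustration (the function names are ours); it checks that the ordinary inverse covariance of the single cycle is dense, while augmenting with $X_1 X_3$ restores the zero at the non-edge (2, 4).

```python
import itertools
import numpy as np

states = np.array(list(itertools.product([0, 1], repeat=4)), dtype=float)

def cycle_probs(theta_s=0.1, theta_st=2.0):
    """Exact distribution of the single-cycle Ising model of Fig. 1(b)."""
    edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
    log_w = theta_s * states.sum(axis=1)
    for s, t in edges:
        log_w += theta_st * states[:, s] * states[:, t]
    w = np.exp(log_w)
    return w / w.sum()

def inverse_cov(features, probs):
    """Inverse of the population covariance of phi(X)."""
    mu = probs @ features
    second = (features * probs[:, None]).T @ features
    return np.linalg.inv(second - np.outer(mu, mu))

probs = cycle_probs()
print(np.round(inverse_cov(states, probs), 2))          # dense, as in panel (g)

aug = np.column_stack([states, states[:, 0] * states[:, 2]])  # append X1*X3
gamma_aug = inverse_cov(aug, probs)
print(round(float(gamma_aug[1, 3]), 6))  # entry (2,4) in 1-based indexing: ~0
```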
3 Main results and consequences

We now state our main results on the structure of generalized inverse covariance matrices and graph structure. We present our results in two parts: one concerning statements at the population level, and the other concerning statements at the level of statistical consistency based on random samples.

3.1 Population-level results

Our main result concerns a connection between the inverses of generalized covariance matrices associated with the model (4) and the structure of the graph. We begin with some notation. Recall that a triangulation of a graph G = (V, E) is an augmented graph $\tilde{G} = (V, \tilde{E})$ with no chordless 4-cycles. (For instance, the single cycle in panel (b) is a chordless 4-cycle, whereas panel (c) shows a triangulated graph. The dinosaur graph in panel (e) is also triangulated.) The edge set $\tilde{E}$ corresponds to the original edge set E plus the additional edges added to form the triangulation. In general, G admits many different triangulations; the results we prove below will hold for any fixed triangulation of G.

We also require some notation for defining generalized covariance matrices. Let $\mathcal{S}$ be a collection of subsets of vertices, and define the random vector

$$\phi(X; \mathcal{S}) = \big( \mathbb{1}_{S;J},\; J \in \mathcal{X}_0^{|S|},\; S \in \mathcal{S} \cap \mathcal{C} \big), \qquad (5)$$

consisting of all sufficient statistics over cliques in $\mathcal{S}$. We will often be interested in situations where $\mathcal{S}$ contains all subsets of a given set. For a subset A ⊆ V, we let pow(A) denote the set of all non-empty subsets of A. (For instance, pow({1, 2}) = {1, 2, (1, 2)}.) Furthermore, for a collection of subsets $\mathcal{S}$, we let pow($\mathcal{S}$) be the set of all subsets {pow(S), S ∈ $\mathcal{S}$}, discarding any duplicates that arise.

We are now ready to state our main theorem regarding the support of a certain type of generalized inverse covariance matrix.

Theorem 1. [Triangulation and block graph-structure.] Consider an arbitrary discrete graphical model of the form (4), and let $\mathcal{T}$ be the set of maximal cliques in any triangulation of G. Then the inverse Γ of the augmented covariance matrix cov(φ(X; pow($\mathcal{T}$))) is block graph-structured in the following sense:

(a) For any two subsets A and B which are not subsets of the same maximal clique, the block Γ(pow(A), pow(B)) is zero.

(b) For almost all parameters θ, the entire block Γ(pow(A), pow(B)) is nonzero whenever A and B belong to a common maximal clique.

The proof of this result relies on convex analysis and the geometry of exponential families [15, 16]. In particular, in any minimal exponential family, there is a one-to-one correspondence between exponential parameters ($\theta_\alpha$ in our notation) and mean parameters ($\mu_\alpha = \mathbb{E}[\phi_\alpha(X)]$). This correspondence is induced by the Fenchel-Legendre duality between the log partition function A and its dual $A^*$, and allows us to relate Γ to the graph structure.

Note that when the original graph G is a tree, the graph is already triangulated and the set $\mathcal{T}$ in Theorem 1 is equal to the edge set E. Hence, Theorem 1 implies that the inverse Γ of the augmented covariance matrix with sufficient statistics for all vertices and edges is graph-structured, and blocks of nonzeros in Γ correspond to edges in the graph. In particular, the (m−1)p × (m−1)p submatrix $\Gamma_{V,V}$ corresponding to sufficient statistics of vertices is block graph-structured; in the case when m = 2, the submatrix $\Gamma_{V,V}$ is simply the p × p block corresponding to the vector $(X_1, \dots, X_p)$. When G is not triangulated, however, we may need to invert a larger augmented covariance matrix and include sufficient statistics over pairs (s, t) ∉ E, as well.

In fact, it is not necessary to take the set of sufficient statistics over all maximal cliques, and we may consider a slightly smaller augmented covariance matrix. Recall that any triangulation $\mathcal{T}$ gives rise to a junction tree representation of G, where nodes of the junction tree are subsets of V corresponding to maximal cliques in $\mathcal{T}$, and the edges are intersections of adjacent cliques known as separator sets [15]. The following corollary involves the generalized covariance matrix containing only sufficient statistics for nodes and separator sets of $\mathcal{T}$:

Corollary 1. Let $\mathcal{S}$ be the set of separator sets in any triangulation of G, and let Γ be the inverse of cov(φ(X; V ∪ pow($\mathcal{S}$))). Then $\Gamma_{V,V}$ is block graph-structured: $\Gamma_{s,t} = 0$ whenever (s, t) ∉ $\tilde{E}$.
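For the binary case m = 2, where each indicator reduces to a product of variables, the augmented vector of eq. (5) can be assembled mechanically. The helper below is our own sketch of that bookkeeping (all names are ours), set up for Corollary 1 on the single cycle with separator set {1, 3}.

```python
import itertools
import numpy as np

def pow_set(A):
    """pow(A): all non-empty subsets of the vertex set A, as sorted tuples."""
    A = sorted(A)
    return [c for r in range(1, len(A) + 1)
            for c in itertools.combinations(A, r)]

def augmented_features(X, subsets):
    """Columns of phi(X; S) for binary data (m = 2): one product per subset."""
    return np.column_stack([np.prod(X[:, list(S)], axis=1) for S in subsets])

# V plus pow of the separator {1,3} (0-based {0,2}), duplicates discarded:
subsets = list(dict.fromkeys([(0,), (1,), (2,), (3,)] + pow_set((0, 2))))
# With an n-by-4 binary sample X, Corollary 1 predicts that the vertex block of
# inv(np.cov(augmented_features(X, subsets), rowvar=False)) is zero at (2,4).
```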
The proof of this corollary is based on applying the block matrix inversion formula [17] to express $\Gamma_{V,V}$ in terms of the matrix Γ from Theorem 1. Panel (c) of Example 1 and the associated matrix $\Gamma_{\text{aug}}$ provide a concrete instance of this corollary in action. In panel (c), the single separator set in the triangulation is {1, 3}, so augmenting the usual covariance matrix with the additional sufficient statistic $X_1 X_3$ and taking the inverse should yield a graph-structured matrix. Indeed, edge (2, 4) does not belong to $\tilde{E}$, and as predicted by Corollary 1, we observe that $\Gamma_{\text{aug}}(2, 4) = 0$.

Note that V ∪ pow($\mathcal{S}$) ⊆ pow($\mathcal{T}$), and the set of sufficient statistics considered in Corollary 1 is generally much smaller than the set of sufficient statistics considered in Theorem 1. Hence, the generalized covariance matrix of Corollary 1 has a smaller dimension than the generalized covariance matrix of Theorem 1, and is much more tractable for estimation.

Although Theorem 1 and Corollary 1 are clean results at the population level, forming the proper augmented covariance matrix requires some prior knowledge of the graph, namely, which edges are involved in a suitable triangulation. In the case of a graph with only singleton separator sets, Corollary 1 specializes to the following useful corollary, which only involves the covariance matrix over indicators of vertices of G:

Corollary 2. For any graph with singleton separator sets, the inverse matrix Γ of the ordinary covariance matrix cov(φ(X; V)) is graph-structured. (This class includes trees as a special case.)

Again, we may relate this corollary to Example 1: the inverse covariance matrices for the tree graph in panel (a) and the dinosaur graph in panel (e) are exactly graph-structured. Indeed, although the dinosaur graph is not a tree, it possesses the nice property that the only separator sets in its junction tree are singletons.

Corollary 1 also guarantees that inverse covariances may be partially graph-structured, in the sense that $(\Gamma_{V,V})_{st} = 0$ for any pair of vertices (s, t) separable by a singleton separator set. This is because for any such pair (s, t), we form a junction tree with two nodes, one containing s and one containing t, and apply Corollary 1 to conclude that $(\Gamma_{V,V})_{st} = 0$. Indeed, the matrix $\Gamma_{V,V}$ over singleton vertices is agnostic to which triangulation we choose for the graph.

In settings where there exists a junction tree representation of the graph with only singleton separator sets, Corollary 2 has a number of useful implications for the consistency of methods that have traditionally only been applied for edge recovery in Gaussian graphical models. In such settings, Corollary 2 implies that it suffices to estimate the support of $\Gamma_{V,V}$ from the data.

3.2 Consequences for graphical Lasso for trees

Moving beyond the population level, we now establish results concerning the statistical consistency of methods for graph selection in discrete graphical models, based on i.i.d. draws from a discrete graph. We describe how a combination of our population-level results and some concentration inequalities may be leveraged to analyze the statistical behavior of log-determinant methods for discrete tree-structured graphical models, and suggest extensions of these methods when observations are systematically corrupted by noise or missing data.

Given p-dimensional random variables $(X_1, \dots, X_p)$ with covariance $\Sigma^*$, consider the estimator

$$\hat{\Theta} \in \arg\min_{\Theta \succ 0} \Big\{ \mathrm{trace}(\hat{\Sigma}\Theta) - \log\det(\Theta) + \lambda_n \sum_{s \neq t} |\Theta_{st}| \Big\}, \qquad (6)$$

where $\hat{\Sigma}$ is an estimator for $\Sigma^*$.
For multivariate Gaussian data, this program is an ℓ1-regularized maximum likelihood estimate known as the graphical Lasso, and is a well-studied method for recovering the edge structure in a Gaussian graphical model [18, 19, 20]. Although the program (6) has no relation to the MLE in the case of a discrete graphical model, it is still useful for estimating $\Theta^* := (\Sigma^*)^{-1}$, and our analysis shows the surprising result that the program is consistent for recovering the structure of any tree-structured Ising model. We consider a general estimate $\hat{\Sigma}$ of the covariance matrix $\Sigma^*$ such that

$$\mathbb{P}\Big[ \|\hat{\Sigma} - \Sigma^*\|_{\max} \geq \varphi(\Sigma^*)\sqrt{\tfrac{\log p}{n}} \Big] \leq c \exp(-\psi(n, p)) \qquad (7)$$

for functions φ and ψ, where $\|\cdot\|_{\max}$ denotes the elementwise ℓ∞-norm. In the case of fully-observed i.i.d. data with sub-Gaussian parameter $\sigma^2$, where $\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^n x_i x_i^T - \bar{x}\bar{x}^T$ is the usual sample covariance, this bound holds with $\varphi(\Sigma^*) = \sigma^2$ and $\psi(n, p) = c_0 \log p$.

In addition, we require a certain mutual incoherence condition on the true covariance matrix $\Sigma^*$ to control the correlation of non-edge variables with edge variables in the graph. Let $\Gamma^* = \Sigma^* \otimes \Sigma^*$, where ⊗ denotes the Kronecker product. Then $\Gamma^*$ is a $p^2 \times p^2$ matrix indexed by vertex pairs. The incoherence condition is given by

$$\max_{e \in S^c} \|\Gamma^*_{eS} (\Gamma^*_{SS})^{-1}\|_1 \leq 1 - \alpha, \qquad \alpha \in (0, 1], \qquad (8)$$

where $S := \{(s, t) : \Theta^*_{st} \neq 0\}$ is the set of vertex pairs corresponding to nonzero elements of the precision matrix $\Theta^*$; equivalently, the edge set of the graph, by our theory on tree-structured discrete graphs. For more intuition on the mutual incoherence condition, see Ravikumar et al. [4].

Our global edge recovery algorithm proceeds as follows:

Algorithm 1 (Graphical Lasso).
1. Form a suitable estimate $\hat{\Sigma}$ of the true covariance matrix $\Sigma^*$.
2. Optimize the graphical Lasso program (6) with parameter $\lambda_n$, denoting the solution by $\hat{\Theta}$.
3. Threshold the entries of $\hat{\Theta}$ at level $\tau_n$ to obtain an estimate of $\Theta^*$.

We then have the following consistency result, a straightforward consequence of the graph structure of $\Theta^*$ and concentration properties of $\hat{\Sigma}$:

Corollary 3. Suppose we have a tree-structured Ising model with degree at most d, satisfying the mutual incoherence condition (8). If $n \gtrsim d^2 \log p$, then Algorithm 1 with the sample covariance matrix and parameters $\lambda_n \geq c_1 \varphi(\Sigma^*)\sqrt{\tfrac{\log p}{n}}$ and $\tau_n = c_2 \big( \varphi(\Sigma^*)\sqrt{\tfrac{\log p}{n}} + \lambda_n \big)$ recovers all edges (s, t) with $|\Theta^*_{st}| > \tau_n/2$, with probability at least $1 - c \exp(-c_0 \log p)$.

Hence, if $|\Theta^*_{st}| > \tau_n/2$ for all edges (s, t) ∈ E, Corollary 3 guarantees that the log-determinant method plus thresholding recovers the full graph exactly. In the case of the standard sample covariance matrix, this method has been implemented by Banerjee et al. [18]; our analysis establishes consistency of their method for discrete trees. The scaling $n \gtrsim d^2 \log p$ is unavoidable, as shown by information-theoretic analysis [21], and also appears in other past work on Ising models [10, 9, 11].

Our analysis also has a cautionary message: the proof of Corollary 3 relies heavily on the population-level result in Corollary 2, which ensures that $\Theta^*$ is tree-structured. For a general graph, we have no guarantees that $\Theta^*$ will be graph-structured (e.g., see panel (b) in Figure 1), so the graphical Lasso (6) is inconsistent in general. On the positive side, if we restrict ourselves to tree-structured graphs, the estimator (6) is attractive, since it relies only on an estimate $\hat{\Sigma}$ of the population covariance $\Sigma^*$ that satisfies the deviation condition (7).
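As a rough software rendering of Algorithm 1, the sketch below chains the three steps; it assumes scikit-learn's graphical_lasso solver for program (6), and the regularization and threshold values are placeholders rather than the constants of Corollary 3.

```python
import numpy as np
from sklearn.covariance import graphical_lasso   # assumed dependency

def algorithm1(X, lam, tau):
    """Algorithm 1: covariance estimate, program (6), then thresholding."""
    n, p = X.shape
    sigma_hat = np.cov(X, rowvar=False, bias=True)          # step 1
    _, theta_hat = graphical_lasso(sigma_hat, alpha=lam)    # step 2
    return {(s, t) for s in range(p) for t in range(s + 1, p)
            if abs(theta_hat[s, t]) > tau}                  # step 3

# Corollary 3 suggests lam and tau of order sqrt(log(p) / n), e.g.
# edges = algorithm1(X, lam=2 * np.sqrt(np.log(p) / n),
#                    tau=2 * np.sqrt(np.log(p) / n))
```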
In particular, when the samples $\{x_i\}_{i=1}^n$ are contaminated by noise or missing data, all we require is a sufficiently good estimate $\hat{\Sigma}$ of $\Sigma^*$. Furthermore, the program (6) is always convex even when the estimator $\hat{\Sigma}$ is not positive semidefinite (as will often be the case for missing/corrupted data).

As a concrete example of how we may correct the program (6) to handle corrupted data, consider the case when each entry of $x_i$ is missing independently with probability ρ, and the corresponding observations $z_i$ are zero-filled for missing entries. A natural estimator is

$$\hat{\Sigma} = \frac{1}{n}\Big( \sum_{i=1}^n z_i z_i^T \Big) \div M \;-\; \frac{1}{(1-\rho)^2}\,\bar{z}\,\bar{z}^T, \qquad (9)$$

where ÷ denotes elementwise division by the matrix M with diagonal entries (1 − ρ) and off-diagonal entries $(1-\rho)^2$, correcting for the bias in both the mean and second moment terms. The deviation condition (7) may be shown to hold with high probability, where $\varphi(\Sigma^*)$ scales with (1 − ρ) (cf. Loh and Wainwright [22]). Similarly, we may derive an appropriate estimator $\hat{\Sigma}$ and a subsequent version of Algorithm 1 in situations when the data are systematically contaminated by other forms of additive or multiplicative corruption.

Generalizing to the case of m-ary discrete graphical models with m > 2, we may easily modify the program (6) by replacing the elementwise ℓ1-penalty by the corresponding group ℓ1-penalty, where the groups are the indicator variables for a given vertex. Precise theoretical guarantees may be derived from results on the group graphical Lasso [23].

4 Simulations

Figure 2 depicts the results of simulations we performed to test our theoretical predictions. In all cases, we generated binary Ising models with node weights 0.1 and edge weights 0.3 (using spin {−1, 1} variables). The five curves show the results of our graphical Lasso method applied to the dinosaur graph in Figure 1. Each curve plots the probability of success in recovering the 15 edges of the graph, as a function of the rescaled sample size n/log p, where p = 13. The leftmost (red) curve corresponds to the case of fully-observed covariates (ρ = 0), whereas the remaining four curves correspond to increasing missing-data fractions ρ ∈ {0.05, 0.1, 0.15, 0.2}, using the corrected estimator (9). We observe that all five runs display a transition from success probability 0 to success probability 1 in roughly the same range of the rescaled sample size, as predicted by our theory. Indeed, since the dinosaur graph has only singleton separators, Corollary 2 ensures that the inverse covariance matrix is exactly graph-structured. Note that the curves shift right as the fraction ρ of missing data increases, since the problem becomes harder.

[Figure 2: Simulation results for our graphical Lasso method on binary Ising models, allowing for missing data in the observations. Success probability for the dinosaur graph vs. the rescaled sample size n/log p, for ρ ∈ {0, 0.05, 0.1, 0.15, 0.2}; each point represents an average over 1000 trials.]
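The corrected estimator (9) used in these simulations is a few lines of NumPy; the sketch below is our own illustration for zero-filled data with a known missing probability ρ.

```python
import numpy as np

def corrected_covariance(Z, rho):
    """Missing-data-corrected covariance of eq. (9).
    Z: n-by-p matrix with missing entries zero-filled; rho = P(entry missing)."""
    n, p = Z.shape
    M = np.full((p, p), (1 - rho) ** 2)   # off-diagonal correction
    np.fill_diagonal(M, 1 - rho)          # diagonal correction
    z_bar = Z.mean(axis=0)
    return (Z.T @ Z) / n / M - np.outer(z_bar, z_bar) / (1 - rho) ** 2
```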
5 Discussion

The correspondence between the inverse covariance matrix and the graph structure of a Gauss-Markov random field is a classical fact, with many useful consequences for efficient estimation of Gaussian graphical models. It has long been an open question as to whether similar properties extend to a broader class of graphical models. In this paper, we have provided a partial affirmative answer to this question and developed theoretical results extending such relationships to discrete undirected graphical models. As shown by our results, the inverse of the ordinary covariance matrix is graph-structured for special subclasses of graphs with singleton separator sets. More generally, we have shown that it is worthwhile to consider the inverses of generalized covariance matrices, formed by introducing indicator functions for larger subsets of variables. When these subsets are chosen to reflect the structure of an underlying junction tree, the edge structure is reflected in the inverse covariance matrix.

Our population-level results have a number of statistical consequences for graphical model selection. We have shown how our results may be used to establish consistency (or inconsistency) of the standard graphical Lasso applied to discrete graphs, even when observations are systematically corrupted by mechanisms such as additive noise and missing data. As noted by an anonymous reviewer, the Chow-Liu algorithm might also potentially be modified to allow for missing or corrupted observations. However, our proposed method and further offshoots of our population-level result may be applied even in cases of non-tree graphs, which is beyond the scope of the Chow-Liu algorithm.

Acknowledgments

PL acknowledges support from a Hertz Foundation Fellowship and an NDSEG Fellowship. MJW and PL were also partially supported by grants NSF-DMS-0907632 and AFOSR-09NL184. The authors thank the anonymous reviewers for helpful feedback.

References

[1] M.E.J. Newman and D.J. Watts. Scaling and percolation in the small-world network model. Physical Review E, 60(6):7332-7342, December 1999.
[2] T. Cai, W. Liu, and X. Luo. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106:594-607, 2011.
[3] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436-1462, 2006.
[4] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 4:935-980, 2011.
[5] M. Yuan. High-dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research, 99:2261-2286, August 2010.
[6] H. Liu, F. Han, M. Yuan, J.D. Lafferty, and L.A. Wasserman. High dimensional semiparametric Gaussian copula graphical models. arXiv e-prints, March 2012. Available at http://arxiv.org/abs/1202.2169.
[7] H. Liu, J.D. Lafferty, and L.A. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295-2328, 2009.
[8] C.I. Chow and C.N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462-467, 1968.
[9] A. Jalali, P.D. Ravikumar, V. Vasuki, and S. Sanghavi. On learning discrete graphical models using group-sparse regularization. Journal of Machine Learning Research - Proceedings Track, 15:378-387, 2011.
[10] P. Ravikumar, M.J. Wainwright, and J.D. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 38:1287, 2010.
[11] A. Anandkumar, V.Y.F. Tan, and A.S. Willsky. High-dimensional structure learning of Ising models: Local separation criterion. Annals of Statistics, 40(3):1346-1375, 2012.
[12] G. Bresler, E. Mossel, and A. Sly. Reconstruction of Markov random fields from samples: Some observations and algorithms. In APPROX-RANDOM, pages 343-356, 2008.
[13] P. Loh and M.J. Wainwright. Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses. arXiv e-prints, November 2012.
[14] S.L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[15] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, January 2008.
[16] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970.
[17] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[18] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485-516, 2008.
[19] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical Lasso. Biostatistics, 9(3):432-441, July 2008.
[20] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19-35, 2007.
[21] N. P. Santhanam and M. J. Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information Theory, 58(7):4117-4134, 2012.
[22] P. Loh and M.J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. Annals of Statistics, 40(3):1637-1664, 2012.
[23] L. Jacob, G. Obozinski, and J. P. Vert. Group Lasso with overlap and graph Lasso. In International Conference on Machine Learning (ICML), pages 433-440, 2009.
Multi-scale Hyper-time Hardware Emulation of Human Motor Nervous System Based on Spiking Neurons using FPGA

C. Minos Niu, Sirish K. Nandyala and Won Joon Sohn (Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089); Terence D. Sanger (Departments of Biomedical Engineering, Neurology and Biokinesiology, University of Southern California, Los Angeles, CA 90089)

Abstract

Our central goal is to quantify the long-term progression of pediatric neurological diseases, such as a typical 10-15 year progression of child dystonia. For this purpose, quantitative models are convincing only if they can provide multi-scale details ranging from neuron spikes to limb biomechanics. The models also need to be evaluated in hyper-time, i.e. significantly faster than real-time, to produce useful predictions. We designed a platform with digital VLSI hardware for multi-scale hyper-time emulations of human motor nervous systems. The platform is constructed on a scalable, distributed array of Field Programmable Gate Array (FPGA) devices. All devices operate asynchronously with 1 millisecond time granularity, and the overall system is accelerated to 365x real-time. Each physiological component is implemented using models from well documented studies and can be flexibly modified, so the validity of the emulation can easily be advised by neurophysiologists and clinicians. For maximizing the speed of emulation, all calculations are implemented in combinational logic instead of clocked iterative circuits. This paper presents the methodology of building FPGA modules emulating a monosynaptic spinal loop. Emulated activities are qualitatively similar to real human data. Also discussed is the rationale of approximating neural circuitry by organizing neurons with sparse interconnections. In conclusion, our platform allows emulating pathological abnormalities such that motor symptoms will emerge and can be analyzed. It compels us to test the origins of childhood motor disorders and predict their long-term progressions.

1 Challenges of studying developmental motor disorders

There is currently no quantitative model of how a neuropathological condition, which mainly affects the function of neurons, ends up causing the functional abnormalities identified in clinical examinations. The gap in knowledge is particularly evident for disorders in developing human nervous systems, i.e. childhood neurological diseases.
Perhaps more than any other organ, the brain necessarily operates on multiple spatial and temporal scales. On the one hand, it is the neurons that perform fundamental computations, but neurons have to interact with large-scale organs (ears, eyes, skeletal muscles, etc.) to achieve global functions. This multi-scale nature worths more attention in injuries, where the overall deficits depend on both the cellular effects of injuries and the propagated consequences. On the other hand, neural processes in developmental diseases usually operate on drastically different time scales, e.g. spinal reflex in milliseconds versus learning in years. Thus when studying motor nervous systems, mathematical modeling is convincing only if it can provide multi-scale details, ranging from neuron spikes to limb biomechanics; also the models should be evaluated with time granularity as small as 1 millisecond, meanwhile the evaluation needs to continue trillions of cycles in order to cover years of life. It is particularly challenging to describe the multi-scale nature of human nervous system when modeling childhood movement disorders. Note that for a child who suffered brain injury at birth, the full development of all motor symptoms may easily take more than 10 years. Therefore the millisecondbased model needs to be evaluated significantly faster than real-time, otherwise the model will fail to produce any useful predictions in time. We have implemented realistic models for spiking motoneurons, sensory neurons, neural circuitry, muscle fibers and proprioceptors using VLSI and programmable logic technologies. All models are computed in Field Programmable Gate Array (FPGA) hardware in 365 times real-time. Therefore one year of disease progression can be assessed after one day of emulation. This paper presents the methodology of building the emulation platform. The results demonstrate that our platform is capable of producing physiologically realistic multi-scale signals, which are usually scarce in experiments. Successful emulations enabled by this platform will be used to verify theories of neuropathology. New treatment mechanisms and drug effects can also be emulated before animal experiments or clinical trials. 2 Methodology of multi-scale neural emulation A. Human arm B. Monosynaptic spinal loop C. Inner structure of muscle spindle Gamma Secondary dynamic Gamma output input static Primary input output Bag 1 ?MN Bag 2 Chain Figure 1: Illustration of the multi-scale nature of motor nervous system. The motor part of human nervous system is responsible for maintaining body postures and generating voluntary movements. The multi-scale nature of motor nervous system is demonstrated in Fig.1. When the elbow (Fig.1A) is maintaining a posture or performing a movement, a force is established by the involved muscle based on how much spiking excitation the muscle receives from its ?motoneurons (Fig.1B). The ?-motoneurons are regulated by a variety of sensory input, part of which comes directly from the proprioceptors in the muscle. As the primary proprioceptor found in skeletal muscles, a muscle spindle is another complex system that has its own microscopic Multiple-InputMultiple-Output structure (Fig.1C). Spindles continuously provide information about the length and lengthening speed of the muscle fiber. A muscle with its regulating motoneurons, sensory neurons and proprioceptors constitutes a monosynaptic spinal loop. 
This minimalist neurophysiological structure is used as an example for explaining the multi-scale hyper-time emulation in hardware. Additional structures can be added to the backbone set-up using similar methodologies.

2.1 Modularized architecture for multi-scale models

Decades of studies on neurophysiology provided an abundance of models characterizing different components of the human motor nervous system. The informational characteristics of physiological components allowed us to model them as functional structures, i.e. each converting input signals to certain outputs. In particular, within the monosynaptic spinal loop illustrated in Fig. 1B, stretching the muscle will elicit a chain of physiological activities: muscle stretch → spindle → sensory neuron → synapse → motoneuron → muscle contraction. Adjacent components must have compatible interfaces, and the interfacing variables must also be physiologically realistic. In our design, each component is mathematically described in Table 1:

Table 1: Functional definition of neural models
  Neuron:  S(t) = f_neuron(I, t)
  Synapse: I(t) = f_synapse(S, t)
  Muscle:  T(t) = f_muscle(S, L, L̇, t)
  Spindle: A(t) = f_spindle(L, L̇, γ_dynamic, γ_static, t)

All components are modeled as black-box functions that map the inputs to the outputs. The meanings of these mathematical definitions are explained below. This design allows existing physiological models to be easily inserted and switched. In all models the input signals are time-varying, e.g. I = I(t), L = L(t), etc.; the argument t of input signals is omitted throughout this paper.

2.2 Selection of models for emulation

Models were selected in consideration of their computational cost, their physiological verisimilitude, and whether they can be adapted to the mathematical form defined in Table 1.

Model of neuron. The informational process for a neuron is to take a post-synaptic current I as input and produce a binary spike train S as output. The neuron model adopted in the emulation was developed by Izhikevich [1]:

$$\dot{v} = 0.04v^2 + 5v + 140 - u + I \qquad (1)$$
$$\dot{u} = a(bv - u) \qquad (2)$$

with the after-spike reset: if v ≥ 30 mV, then v ← c and u ← u + d. Here a, b, c, d are free parameters tuned to achieve certain firing patterns. The membrane potential v directly determines the binary spike train S(t): S(t) = 1 if v ≥ 30, otherwise S(t) = 0. Note that v in the Izhikevich model is in millivolts and time t is in milliseconds; the coefficients in eq. 1 therefore need to be adjusted in correspondence with SI units.

Model of synapse. When a pre-synaptic neuron spikes, i.e. S(0) = 1, an excitatory synapse subsequently issues an Excitatory Post-Synaptic Current (EPSC) that drives the post-synaptic neuron. Neural recordings of hair cells in rats [2] provided evidence that the time profile of the EPSC can be well characterized by the equation below:

$$I(t) = \begin{cases} V_m \big( e^{-t/\tau_d} - e^{-t/\tau_r} \big) & \text{if } t \geq 0 \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

The key parameters in a synapse model are the time constants for rising ($\tau_r$) and decaying ($\tau_d$). In our emulation, $\tau_r$ = 0.001 s and $\tau_d$ = 0.003 s.

Model of muscle force and electromyograph (EMG). The primary effect of skeletal muscle is to convert α-motoneuron spikes S into force T, depending on the muscle's instantaneous length L and lengthening speed L̇. We used Hill's muscle model in the emulation, with parameter tuning described in [3].
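Before turning to the muscle and spindle, here is a minimal software sketch of eqs. (1)-(3) (our own illustration, not the hardware implementation): forward-Euler integration of the Izhikevich neuron at the platform's 1 ms granularity, using the regular-spiking parameter values from [1] as an assumed example, plus the EPSC kernel with the stated time constants.

```python
import numpy as np

def izhikevich(I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Binary spike train S(t) of an Izhikevich neuron (eqs. 1-2).
    dt in ms; a, b, c, d are regular-spiking values from [1]."""
    v, u = c, b * c
    S = np.zeros(len(I))
    for k, Ik in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + Ik)
        u += dt * a * (b * v - u)
        if v >= 30.0:                     # spike, then reset
            S[k], v, u = 1.0, c, u + d
    return S

def epsc(t, Vm=1.0, tau_r=0.001, tau_d=0.003):
    """EPSC profile of eq. (3); t in seconds, Vm an assumed scale factor."""
    return np.where(t >= 0, Vm * (np.exp(-t / tau_d) - np.exp(-t / tau_r)), 0.0)

spikes = izhikevich(np.full(1000, 10.0))  # one emulated second of constant drive
```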
Another measurable output of muscle is the electromyograph (EMG). EMG is the small skin current polarized by the motor unit action potential (MUAP) as it travels along muscle fibers. Models exist to describe the typical waveform picked up by surface EMG electrodes; in this project we chose to implement the one described in [4].

Model of proprioceptor. The spindle is a sensory organ that provides the main source of proprioceptive information. As can be seen in Fig. 1C, a spindle typically produces two afferent outputs (primary Ia and secondary II) according to its gamma fusimotor drives ($\gamma_{dynamic}$ and $\gamma_{static}$) and muscle states (L and L̇). There is currently no closed-form model describing spindle functions, due to the spindle's significant nonlinearity. One representative model that numerically approximates the spindle dynamics was developed by Mileusnic et al. [5]. The model uses differential equations to characterize a typical cat soleus spindle. Eqs. 4-10 present a subset of this model for one type of spindle fiber (bag1):

$$\dot{x}_0 = \Big( \frac{\gamma_{dynamic}}{\gamma_{dynamic} + \Omega^2_{bag1}} - x_0 \Big) \Big/ \tau \qquad (4)$$
$$\dot{x}_1 = x_2 \qquad (5)$$
$$\dot{x}_2 = \frac{1}{M}\big[ T_{SR} - T_B - T_{PR} - \Gamma_1 x_0 \big] \qquad (6)$$

where

$$T_{SR} = K_{SR}\,(L - x_1 - L_{SR0}) \qquad (7)$$
$$T_B = (B_0 + B_1 x_0) \cdot (x_1 - R) \cdot C_{SS} \cdot |x_2|^{0.3} \qquad (8)$$
$$T_{PR} = K_{PR}\,(x_1 - L_{PR0}) \qquad (9)$$
$$C_{SS} = \frac{2}{1 + e^{-1000 x_2}} - 1 \qquad (10)$$

Eqs. 8 and 10 suggest that evaluating the spindle model requires multiplication and division as well as more complex arithmetic such as polynomials and exponentials. The implementation details are described in Section 3.
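Eqs. (4)-(10) are the arithmetically heaviest part of the platform; the sketch below (our own) shows the per-timestep work as a single explicit-Euler step, with the fiber constants passed in as a dictionary p. The keys are placeholders standing in for the cat-soleus values tabulated in [5], not the values themselves, and the hardware instead uses the Backward Euler scheme of Section 3.

```python
import numpy as np

def bag1_step(state, L, gamma_dynamic, dt, p):
    """One explicit-Euler step of the bag1 fiber, eqs. (4)-(10).
    state = (x0, x1, x2); p holds the fiber constants (see [5])."""
    x0, x1, x2 = state
    T_SR = p["K_SR"] * (L - x1 - p["L_SR0"])                        # eq. (7)
    C_SS = 2.0 / (1.0 + np.exp(-1000.0 * x2)) - 1.0                 # eq. (10)
    T_B = (p["B0"] + p["B1"] * x0) * (x1 - p["R"]) * C_SS * abs(x2) ** 0.3  # eq. (8)
    T_PR = p["K_PR"] * (x1 - p["L_PR0"])                            # eq. (9)
    dx0 = (gamma_dynamic / (gamma_dynamic + p["Omega_bag1"] ** 2) - x0) / p["tau"]  # eq. (4)
    dx1 = x2                                                         # eq. (5)
    dx2 = (T_SR - T_B - T_PR - p["Gamma1"] * x0) / p["M"]            # eq. (6)
    return (x0 + dt * dx0, x1 + dt * dx1, x2 + dt * dx2)
```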
2.3 Neuron connectivity with sparse interconnections

Although the number of spinal neurons (~1 billion) is significantly smaller than that of cortical neurons (~100 billion), a fully connected spinal network would still mean approximately 2 trillion synaptic endings [6]. Implementing such a huge number of synapses imposes a major challenge, if not an impossibility, given limited hardware resources. In this platform we approximated the neural connectivity by sparsely connecting sensory neurons to motoneurons as parallel pathways. We do not attempt to introduce the full connectivity. The rationale is that in a neural control system, the effect of a single neuron can be considered as mapping the current state x to a change in state ẋ through a band-limited channel. Therefore when a collection of neurons is firing stochastically, the probability of ẋ depends on both x and the firing behavior s (s = 1 when spiking, otherwise s = 0) of each neuron, as such:

$$p(\dot{x} \mid x, s) = p(\dot{x} \mid s = 1)\,p(s = 1 \mid x) + p(\dot{x} \mid s = 0)\,p(s = 0 \mid x) \qquad (11)$$

Eq. 11 is a master equation that determines a probability flow on the state. From the Kramers-Moyal expansion we can associate this probability flow with a partial differential equation:

$$\frac{\partial}{\partial t}\,p(x, t) = \sum_{i=1}^{\infty} \Big( -\frac{\partial}{\partial x} \Big)^{i} \Big[ D^{(i)}(x)\, p(x, t) \Big] \qquad (12)$$

where $D^{(i)}(x)$ is a time-invariant term that modifies the change of probability density based on its i-th gradient. Under certain conditions [7, 8], $D^{(i)}(x)$ for i > 2 all vanish, and therefore the probability flow can be described deterministically using a linear operator $\mathcal{L}$:

$$\frac{\partial}{\partial t}\,p(x, t) = \Big[ -\frac{\partial}{\partial x} D^{(1)}(x) + \frac{\partial^2}{\partial x^2} D^{(2)}(x) \Big] p(x, t) = \mathcal{L}\, p(x, t) \qquad (13)$$

This means that various $\mathcal{L}$s can be superimposed to achieve complex system dynamics (illustrated in Fig. 2A).

[Figure 2: (A) Neuron function as superimposed linear operators. (B) Equivalent network with sparse interconnections: pools of sensory neurons (SN) project to pools of α-motoneurons (α-MN) as parallel pathways from sensory input to motor output. Functions of a neuron population can be described as the combination of linear operators (A); therefore the original neural function can be equivalently produced by sparsely connected neurons forming parallel pathways (B).]

As a consequence, the statistical effect of two fully connected neuron populations is equivalent to that of populations that are only sparsely connected, as long as the probability flow can be described by the same $\mathcal{L}$. For a movement task, in particular, it is the statistical effect from the neuron ensemble onto skeletal muscles that determines the global behavior. Therefore we argue that it is feasible to approximate the spinal cord connectivity by sparsely interconnecting sensory and motor neurons (Fig. 2B). Here a pool of homogeneous sensory neurons projects to another pool of homogeneous α-motoneurons. Pseudorandom noise is added to the input of all homogeneous neurons within a population. It is worth noting that this approximation significantly reduces the number of synapses that need to be implemented in hardware.

3 Hardware implementation on FPGA

We select FPGA as the implementation device due to its inherent parallelism, which resembles the nervous system. FPGA is favored over GPU or clustered CPUs because it is relatively easy to network hundreds of nodes under flexible protocols. The platform is distributed on multiple nodes of Xilinx Spartan-6 devices. The interfacing among FPGAs and computers is created using the OpalKelly development board XEM6010.

The dynamic range of variables is tight in the models of the Izhikevich neuron, the synapse and EMG. This helps maintain the accuracy of the models even when they are evaluated in 32-bit fixed-point arithmetic. The spindle model, in contrast, requires floating-point arithmetic due to its wide dynamic range and complex calculations (see eqs. 4-10). Hyper-time computations with floating-point numbers are resource consuming and therefore need to be implemented with special attention.

3.1 Floating-point arithmetic in combinational logic

Our arithmetic implementations are compatible with the IEEE-754 standard. Typical floating-point arithmetic IP cores are either pipelined or based on iterative algorithms such as CORDIC, all of which require clocks to schedule the calculation. In our platform, no clock is provided for model evaluations, so all arithmetic needs to be executed in pure combinational logic. Taking advantage of combinational logic allows all model evaluations to be 1) fast: the evaluation time depends entirely on the propagating and settling time of signals, which is on the order of microseconds; and 2) parallel: each model is evaluated on its own circuit without waiting for any other results. Our implementations of the adder and multiplier are inspired by the open source project "Free Floating-Point Madness", available at http://www.hmc.edu/chips/. Please contact the authors of this paper if the modified code is needed.

Fast combinational floating-point division. Floating-point division is even more resource demanding than multiplication. We avoided directly implementing a dividing algorithm by approximating division with additions and multiplications. Our approach is inspired by an algorithm described in [9], which provides a good approximation of the inverse square root for any positive number x within one Newton-Raphson iteration:

$$Q(x) = \tilde{x}\Big( 1.5 - \frac{x}{2}\,\tilde{x}^{2} \Big) \approx \frac{1}{\sqrt{x}}, \qquad x > 0, \qquad (14)$$

where $\tilde{x}$ is the bit-level initial estimate of $1/\sqrt{x}$ from [9]. Q(x) can be implemented using only floating-point adders and multipliers. Thereby any division with a positive divisor can be achieved if two blocks of Q(x) are concatenated:

$$\frac{a}{b} = \frac{a}{\sqrt{b}\,\sqrt{b}} = a \cdot Q(b) \cdot Q(b), \qquad b > 0. \qquad (15)$$

This algorithm has been adjusted to also work with negative divisors (b < 0).
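A software analogue of eqs. (14)-(15) (our own sketch; the platform implements this in combinational logic): the magic-constant seed from [9] followed by the single Newton-Raphson refinement, then division by two concatenated Q blocks.

```python
import numpy as np

def Q(x):
    """Approximate 1/sqrt(x), eq. (14): magic-constant seed from [9]
    plus one Newton-Raphson iteration y*(1.5 - (x/2)*y*y)."""
    xf = np.asarray(x, dtype=np.float32)
    i = xf.view(np.int32)
    y = (np.int32(0x5F3759DF) - (i >> 1)).view(np.float32)   # initial estimate
    return y * (np.float32(1.5) - np.float32(0.5) * xf * y * y)

def fdiv(a, b):
    """a / b for b > 0 using only adds and multiplies, eq. (15)."""
    return a * Q(b) * Q(b)

# fdiv(1.0, 4.0) is within a fraction of a percent of 0.25
```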
Numerical integrators for differential equations. Evaluating the instantaneous states of differential equation models requires a fixed-step numerical integrator. Backward Euler's method was chosen to balance the numerical error and FPGA usage:

$$\dot{x} = f(x, t) \qquad (16)$$
$$x_{n+1} = x_n + T\, f(x_{n+1}, t_{n+1}) \qquad (17)$$

where T is the sampling interval and f(x, t) is the derivative function for state variable x.

3.2 Asynchronous spike-based communication between FPGA chips

[Figure 3: Timing diagram of the asynchronous spike-based communication: the sender's Spike pulses increment the receiver's Counter, which each receiver Clock reads and then cleans.]

FPGA nodes are networked by transferring 1-bit binary spikes to each other. Our design allows the sender and the receiver to operate on independent clocks without having to synchronize. The timing diagram of the spike-based communication is shown in Fig. 3. The sender issues Spike with a pulse width of 1/(365 × F_emu) seconds. Each Spike then triggers a counting event on the receiver; meanwhile each Clock first reads the accumulated spike count and subsequently cleans the counter. Note that the phase difference between Spike and Clock is not predictable due to asynchronicity.

3.3 Serialize neuron evaluations within a homogeneous population

Different neuron populations are instantiated as standalone circuits. Within each population, however, the homogeneous neurons mentioned in Section 2.3 are evaluated in series in order to optimize FPGA usage. Within each FPGA node all modules operate with a central clock, which is the only source allowed to trigger any updating event. Therefore the maximal number of neurons that can be serialized ($N_{serial}$) is restrained by the following relationship:

$$F_{fpga} = C \cdot N_{serial} \cdot 365 \cdot F_{emu} \qquad (18)$$

Here $F_{fpga}$ is the fastest clock rate that an FPGA can operate on; C = 4 is the minimal number of clock cycles needed for updating each state variable in the on-chip memory; $F_{emu}$ = 1 kHz is the time granularity of emulation (1 millisecond); and 365 × $F_{emu}$ represents 365x real-time.
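Two small sketches (ours, not the RTL) for this section's bookkeeping: the implicit update of eqs. (16)-(17), solved here by a few fixed-point passes seeded with a forward-Euler guess, and the serialization budget of eq. (18), which the text evaluates next.

```python
def backward_euler_step(f, x, t, T, passes=3):
    """One Backward Euler step, eqs. (16)-(17):
    solve x_next = x + T * f(x_next, t + T) by fixed-point iteration."""
    x_next = x + T * f(x, t)                 # forward-Euler seed
    for _ in range(passes):
        x_next = x + T * f(x_next, t + T)
    return x_next

def max_serialized_neurons(f_fpga=200e6, C=4, f_emu=1e3, speedup=365):
    """Largest N_serial allowed by eq. (18): F_fpga >= C * N_serial * 365 * F_emu."""
    return int(f_fpga // (C * speedup * f_emu))   # 136 here; eq. (19) quotes ~137

# The current design rounds down to a power of two: N_serial = 128.
```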
It is well known that when a voluntary motor command is sent to the α-motoneuron pool, the motor units are recruited in an order in which small ones are recruited first, followed by the big ones [10]. The comparison between our results and real data is shown in Fig. 5B, where the top panel shows 20 motor unit activities emulated using our platform, and the bottom panel shows decoded motor unit activities from real human EMG [11]. No qualitative difference was found.

5 Discussion and future work

We designed a hardware platform for emulating multi-scale motor nervous activities in hyper-time. We used one node of a single Xilinx Spartan-6 FPGA to emulate monosynaptic spinal loops consisting of 2,048 neurons, associated muscles and proprioceptors. The neurons are organized as parallel pathways with sparse interconnections. The emulation is successfully accelerated to 365x real-time. The platform can be scaled by networking multiple FPGA nodes, which is enabled by an asynchronous spike-based communication protocol. The emulated monosynaptic spinal loops are capable of producing reflex-like activities in response to muscle stretch. Our results on motor unit recruitment order are compatible with the physiological data collected from real human subjects.

There is a question of whether this stochastic system turns out to be chaotic, especially with accumulated errors from the Backward Euler integrator. Note that the firing property of a neuron population is usually stable even with explicit noise [8], and spindle inputs are measured from real robots, so the integrator errors are corrected at every iteration. To our knowledge, the system is not critically sensitive to the initial conditions or integrator errors. This question, however, is both interesting and important for in-depth investigation in the future.

It has been shown [12] that replicating classic types of spinal interneurons (propriospinal, Ia-excitatory, Ia-inhibitory, Renshaw, etc.) is sufficient to produce stabilizing responses and rapid reaching movements in a wrist. Our platform will introduce those interneurons to describe the known spinal circuitry in further detail. Physiological models will also be refined as needed. For the purpose of modeling movement behavior or diseases, the Izhikevich model is a good balance between verisimilitude and computational cost. Nevertheless, when testing drug effects along disease progression, neuron models are expected to cover sufficient molecular details, including how neurotransmitters affect various ion channels. With the advance of programmable semiconductor technology, we expect to upgrade our neuron model to Hodgkin-Huxley's. For the muscle models, Hill's type of model does not fit the muscle properties accurately enough when the muscle is being shortened. Alternative models will be tested.

Other studies have shown that the functional dexterity of human limbs, especially the hands, is critically enabled by the tendon configurations and joint geometry [13]. As a result, if our platform is used to understand whether known neurophysiology and biomechanics are sufficient to produce able and pathological movements, it will be necessary to use this platform to control human-like limbs. Since the emulation speed can be flexibly adjusted from arbitrarily slow to 365x real-time, when run at exactly 1x real-time the platform will function as a digital controller with a 1 kHz refresh rate.
The main purpose of the emulation is to learn how certain motor disorders progress during childhood development. This first requires the platform to reproduce motor symptoms that are compatible with clinical observations. For example, it has been suggested that muscle spasticity in rats is associated with a decreased soma size of α-motoneurons [14], which presumably reduces the firing threshold of the neurons. Thus, when a lower firing threshold is introduced to the emulated motoneuron pool, EMG patterns similar to those in [15] should be observed. It is also necessary for the symptoms to evolve with neural plasticity. In the current version we presume that the structure of each component remains time invariant. In future work, Spike Timing Dependent Plasticity (STDP) will be introduced such that all components are subject to temporal modification.

Figure 5: A) Physiological activity emulated by each model when the muscle is sinusoidally stretched (traces, top to bottom: stretch, spindle Ia, sensory post-synaptic current, motoneurons, muscle force, EMG; 1 s scale bar). B) Comparing the emulated motor unit recruitment order with real experimental data (emulation vs. real data).

Acknowledgments

The authors thank Dr. Gerald Loeb for helping set up the emulation of the spindle models. This project is supported by NIH NINDS grant R01NS069214-02.

References
[1] Izhikevich, E. M. Simple model of spiking neurons. IEEE Transactions on Neural Networks 14, 1569-1572 (2003).
[2] Glowatzki, E. & Fuchs, P. A. Transmitter release at the hair cell ribbon synapse. Nature Neuroscience 5, 147-154 (2002).
[3] Shadmehr, R. & Wise, S. P. A mathematical muscle model. In Supplementary documents for "Computational Neurobiology of Reaching and Pointing", 1-18 (MIT Press, Cambridge, MA, 2005).
[4] Fuglevand, A. J., Winter, D. A. & Patla, A. E. Models of recruitment and rate coding organization in motor-unit pools. Journal of Neurophysiology 70, 2470-2488 (1993).
[5] Mileusnic, M. P., Brown, I. E., Lan, N. & Loeb, G. E. Mathematical models of proprioceptors. I. Control and transduction in the muscle spindle. Journal of Neurophysiology 96, 1772-1788 (2006).
[6] Gelfan, S., Kao, G. & Ruchkin, D. S. The dendritic tree of spinal neurons. The Journal of Comparative Neurology 139, 385-411 (1970).
[7] Sanger, T. D. Neuro-mechanical control using differential stochastic operators. In Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, 4494-4497 (2010).
[8] Sanger, T. D. Distributed control of uncertain systems using superpositions of linear operators. Neural Computation 23, 1911-1934 (2011).
[9] Lomont, C. Fast inverse square root (2003). URL http://www.lomont.org/Math/Papers/2003/InvSqrt.pdf.
[10] Henneman, E. Relation between size of neurons and their susceptibility to discharge. Science 126, 1345-1347 (1957).
[11] De Luca, C. J. & Hostage, E. C. Relationship between firing rate and recruitment threshold of motoneurons in voluntary isometric contractions. Journal of Neurophysiology 104, 1034-1046 (2010).
[12] Raphael, G., Tsianos, G. A. & Loeb, G. E. Spinal-like regulator facilitates control of a two-degree-of-freedom wrist. The Journal of Neuroscience 30, 9431-9444 (2010).
[13] Valero-Cuevas, F. J. et al. The tendon network of the fingers performs anatomical computation at a macroscopic scale.
IEEE Transactions on Bio-Medical Engineering 54, 1161-1166 (2007).
[14] Brashear, A. & Elovic, E. Spasticity: Diagnosis and Management (Demos Medical, 2010), 1st edn.
[15] Levin, M. F. & Feldman, A. G. The role of stretch reflex threshold regulation in normal and impaired motor control. Brain Research 657, 23-30 (1994).
Online Sum-Product Computation over Trees

Fabio Vitale
Department of Computer Science
University of Milan
20135 Milan, Italy
[email protected]

Mark Herbster, Stephen Pasteris
Department of Computer Science
University College London
London WC1E 6BT, England, UK
{m.herbster, s.pasteris}@cs.ucl.ac.uk

Abstract

We consider the problem of performing efficient sum-product computations in an online setting over a tree. A natural application of our methods is to compute the marginal distribution at a vertex in a tree-structured Markov random field. Belief propagation can be used to solve this problem, but requires time linear in the size of the tree, and is therefore too slow in an online setting where we are continuously receiving new data and computing individual marginals. With our method we aim to update the data and compute marginals in time that is no more than logarithmic in the size of the tree, and is often significantly less. We accomplish this via a hierarchical covering structure that caches previous local sum-product computations. Our contribution is three-fold: we i) give a linear time algorithm to find an optimal hierarchical cover of a tree; ii) give a sum-product-like algorithm to efficiently compute marginals with respect to this cover; and iii) apply (i) and (ii) to find an efficient algorithm with a regret bound for the online allocation problem in a multi-task setting.

1 Introduction

The use of graphical models [1, 2] is ubiquitous in machine learning. The application of the batch sum-product algorithm to tree-structured graphical models, including hidden Markov models, Kalman filtering and turbo decoding, is surveyed in [3]. Our aim is to adapt these techniques to an online setting. In our online model we are given a tree and a fixed set of parameters. We then receive a potentially unbounded online sequence of "prediction requests" and "data updates." A prediction request indicates a vertex, for which we then return the posterior marginal at that vertex. Each data update associates a new "factor" with that vertex. Classical belief propagation requires time linear in the size of the tree for this task. Our algorithm requires time linear in the height of an optimal hierarchical cover of this tree. The height of the cover is in the worst case logarithmic in the size of the tree. Thus our per-trial prediction/update time is at least an exponential improvement over classical belief propagation.

The paper is structured as follows. In Section 2 we introduce our notation, leading to our definition of an optimal hierarchical cover. In Section 3 we give our optimal hierarchical covering algorithm. In Section 4 we show how we may use this cover as a structure to cache computations in our sum-product-like algorithm. Finally, in Section 5 we give a regret bound and a sketch of an application of our techniques to an online multi-task allocation [4] problem.

Previous work. Pearl [5] introduced belief propagation for Bayes nets, which computes marginals in time linear in the size of the tree. In [6] an algorithm for the online setting was given for a Bayes net on a tree T which required O(log |V(T)|) time per marginalization step, where |V(T)| is the number of vertices in the tree. In this work we consider a Markov random field on a tree. We give an algorithm whose performance is bounded by O(Λ_∞(T)). The term Λ_∞(T) is the height of our
optimal hierarchical cover, which is upper bounded by O(min(log |V(T)|, diameter(T))), but may in fact be exponentially smaller.

2 Hierarchical cover of a tree

In this section we introduce our notion of a hierarchical cover of a tree and its dual, the decomposition tree.

Graph-theoretical preliminaries. A graph G is a pair of sets (V, E) such that E is a set of unordered pairs of distinct elements from V. The elements of V are called vertices and those of E are called edges. In order to avoid ambiguities deriving from dealing with different graphs, in some cases we will highlight the membership to graph G by denoting these sets as V(G) and E(G) respectively. With slight abuse of notation, by writing v ∈ G we mean v ∈ V(G). S is a subgraph of G (we write S ⊆ G) iff V(S) ⊆ V(G) and E(S) = {(i, j) : i, j ∈ V(S), (i, j) ∈ E(G)}. Given any subgraph S ⊆ G, we define its boundary (or inner border) ∂_G(S) and its neighbourhood (or outer border) N_G(S) as ∂_G(S) := {i : i ∈ S, j ∉ S, (i, j) ∈ E(G)} and N_G(S) := {j : i ∈ S, j ∉ S, (i, j) ∈ E(G)}. With slight abuse of notation, N_G(v) := N_G({v}), and thus the degree of a vertex v is |N_G(v)|. Given any graph G, we define the set of its leaves as leaves(G) := {i ∈ G : |N_G(i)| = 1}, and its interior G° := {i ∈ G : |N_G(i)| ≠ 1}. A path P in a graph G is defined by a sequence of distinct vertices ⟨v_1, v_2, ..., v_m⟩ of G such that for all i < m we have (v_i, v_{i+1}) ∈ E(G). In this case we say that v_1 and v_m are connected by the subgraph P. A tree T is a graph in which for all v, w ∈ T there exists a unique path connecting v with w. In this paper we will only consider trees with a non-empty edge set, and thus the vertex set will always have a cardinality of at least 2. The distance d_T(v, w) between v, w ∈ T is the path length |E(P)|. The pair (T, r) denotes a rooted tree T with root vertex r. Given a rooted tree (T, r) and any vertex i ∈ V(T), the (proper) descendants of i are all vertices that can be connected with r via paths P ⊆ T containing i (excluding i). Analogously, the (proper) ancestors of i are all vertices that lie on the path P ⊆ T connecting i with r (excluding i). We denote the set of all descendants (resp. all ancestors) of i by ↓_T^r(i) (resp. ↑_T^r(i)). We shall omit the root r when it is clear from the context. Vertex i is the parent (resp. a child) of j, which is denoted by i = π_T^r(j) (resp. i ∈ χ_T^r(j)), if (i, j) ∈ E(T) and i ∈ ↑_T^r(j) (resp. i ∈ ↓_T^r(j)). Given a tree T we use the notation S ⊆ T only if S is a tree and a subgraph of T. The height of a rooted tree (T, r) is the maximum length of a path P ⊆ T connecting the root to any vertex: h_r(T) := max_{v∈T} d_T(v, r). The diameter Δ(T) of a tree T is defined as the length of the longest path between any two vertices in T.

2.1 The hierarchical cover of a tree

In this section we describe a splitting process that recursively decomposes a given tree T. A (decomposition) tree (D, r) identifies this splitting process, generating a tree-structured collection S of subtrees that hierarchically cover the given tree T. This process recursively splits, at each step, a subtree of T (which we call a "component") resulting from some previous splits. More precisely, a subtree S ⊆ T is split into two or more subcomponents, and the decomposition of S depends only on the choice of a vertex v ∈ S°, which we call the splitting vertex, in the following way. The splitting vertex v ∈ S° of S induces the split set σ(S, v) = {S_1, ..., S_{|N_S(v)|}}, which is the unique set of S's subtrees that overlap only at vertex v and that represent a cover for S, i.e., it satisfies (i)
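As a small aside, the boundary and neighbourhood operators just defined are straightforward to compute; the sketch below (helper names and the adjacency-set representation are ours) makes the definitions concrete.

```python
def boundary_and_neighbourhood(adj, S):
    """Boundary (inner border) and neighbourhood (outer border) of a vertex
    set S in a graph given as a dict vertex -> set of neighbours."""
    boundary = {i for i in S if any(j not in S for j in adj[i])}
    neighbourhood = {j for i in S for j in adj[i] if j not in S}
    return boundary, neighbourhood

# Path graph 1-2-3-4 with subgraph S = {2, 3}:
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(boundary_and_neighbourhood(adj, {2, 3}))  # ({2, 3}, {1, 4})

leaves = {v for v in adj if len(adj[v]) == 1}    # leaves(G) = {1, 4}
interior = {v for v in adj if len(adj[v]) != 1}  # interior = {2, 3}
```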
⋃_{S'∈σ(S,v)} S' = S, and (ii) {v} = S_i ∩ S_j for all 1 ≤ i < j ≤ |N_S(v)|. Thus the split may be visualized by considering the forest F resulting from removing a vertex from S, where afterwards each component S_1, ..., S_{|N_S(v)|} of F has the "removed vertex" v added back to it. A component having only two vertices is called atomic, since it cannot be split further. We indicate with S^v ⊆ T the component subtree whose splitting vertex is v, and we denote atomic components by S^(i,j), where E(S^(i,j)) = {(i, j)}. We finally denote by S the set of all component subtrees obtained by this splitting process. Since the method is recursive, we can associate a rooted tree (D, r) with T's decomposition into a hierarchical cover, whose internal vertices are the splitting vertices of the splitting process. Its leaves correspond to the single edges (of E(T)) of each atomic component, and a vertex "parent-child" relation c ∈ χ_D^r(p) corresponds to the "splits-into" relation S^c ∈ σ(S^p, p) (see Figure 1). We will now formalize the splitting process by defining the hierarchical cover S of a tree T, which is a key concept used by our algorithm.

Definition 1. A hierarchical cover S of a tree T is a tree-structured collection of subtrees that hierarchically cover the tree T, satisfying the following three properties: 1. T ∈ S; 2. for all S ∈ S with S° ≠ ∅ there exists an x ∈ S° such that σ(S, x) ⊆ S; 3. for all S, R ∈ S such that S ⊄ R and R ⊄ S, we have |V(R) ∩ V(S)| ≤ 1.

The above definition recursively generates a cover. The splitting process that generates a hierarchical cover S of T is formalized as a rooted tree (D, r) in the following definition.

Definition 2. If S is a hierarchical cover of T, we define the associated decomposition tree (D, r) as a rooted tree whose vertex set is V(D) := T° ∪ E(T), where D° = T° and leaves(D) = E(T), such that the following three properties hold: 1. S^r = T; 2. for all c, p ∈ D°, c ∈ χ_D^r(p) iff S^c ∈ σ(S^p, p); 3. for all (c, p) ∈ E(T) (observe that (c, p) ∈ E(T) implies c, p ∈ V(T) and (c, p) ∈ leaves(D)), we have (c, p) ∈ χ_D^r(p) iff S^(c,p) ∈ σ(S^p, p).

The following lemma shows that with any given hierarchical cover S it is possible to associate a unique decomposition tree (D, r).

Lemma 3. A hierarchical cover S of T defines a unique decomposition tree (D, r) such that if S ∈ S there exists a v ∈ V(D) such that S = S^v, and if v, w ∈ V(D) and v ≠ w, then S^v ≠ S^w.

For a given hierarchical cover S, in the following we define the height and the exposure: two properties which measure different senses of the "size" of a cover. The height of a hierarchical cover S is the height of the associated decomposition tree D. Note that the height of a decomposition tree D may be exponentially smaller than the height of T since, for example, it is not difficult to show that there exists a decomposition tree isomorphic to a binary tree when the input tree T is a path graph. If R ⊆ T and S_R is a hierarchical cover of R, we define the exposure of S_R (with respect to the tree T) as max_{Q∈S_R} |∂_T(Q)|. Thus the exposure is a measure relative to a "containing" tree (which can be the input tree T itself), while the height is independent of any containing tree. In Section 4 the covering subtrees correspond to cached "joint distributions," which are defined on the boundary vertices of the subtrees and require memory exponential in the boundary size. Thus we are interested in covers with small exposure. We now define a measure of the optimal height with respect to a given exposure value.

Definition 4.
A hierarchical cover with exposure at most k is called a k-hierarchical cover. Given any subtree R ⊆ T, the k-decomposition potential Λ_k(R) of R is the minimum height of all hierarchical covers S_R of R with exposure (with respect to T) not larger than k. The ∞-decomposition potential Λ_∞(R) is the minimum height of all hierarchical covers of R. If |∂_T(R)| > k then Λ_k(R) := ∞.

Let's consider some examples. Given a star graph, i.e., a graph with a single central vertex and any number of adjacent vertices, there is in fact only one possible hierarchical cover, obtained by splitting the central vertex, so that Λ_∞(star) = 1. For path graphs, Λ_∞(path) = Θ(log |path|), as mentioned above. An interesting example is a star whose arms are path graphs rather than single edges. Specifically, a |star-path| star-path may be formed by a set of |star-path|/log |star-path| path graphs P_1, P_2, ..., each with log |star-path| edges. These path graphs are then joined at a central vertex. In this case we have Λ_∞(star-path) = O(log log(|star-path|)): each path has a hierarchical cover of height O(log log(|star-path|)), and each of these path covers may then be joined to create a cover of the star-path. In Theorem 6 we will see the generic bound Λ_∞(T) ≤ O(min(Δ(T), log |V(T)|)). The star-path thus illustrates that the bound may be exponentially loose. In Theorem 6 we will also see that Λ_2(T) ≤ 2Λ_∞(T). Thus we may restrict our algorithm to hierarchical covers with an exposure of 2 at very little cost in efficiency. Hence, we will now focus our attention on 2-hierarchical covers.

2-Hierarchical covers. For any element Q ≠ T in a 2-hierarchical cover of T, we have |∂_T(Q)| ∈ {1, 2}. Consider the case in which ∂_T(Q) = {v, w}, i.e., |∂_T(Q)| = 2. Then Q can be specified by the two vertices v, w and defined as follows: Q := [w; v] := argmax_{S⊆T}(|V(S)| : v, w ∈ leaves(S)), that is, the maximal subtree of T having v and w among its leaves. Consider now the case in which ∂_T(Q) = {w}, i.e., |∂_T(Q)| = 1. Q is now defined as the subtree of T containing vertex w together with all the descendants ↓_T^w(z), where z ∈ N_T(w). Hence, such a subtree Q can be uniquely determined by w's neighbor z ∈ N_T(w). In order to denote the subtree Q in this case we use the following notation: Q := [w → z]. Observe that one can also represent a "boundary one" subtree with the previous notation by writing Q := [w; ℓ], where ℓ is any chosen leaf of T belonging to ↓_T^w(z)² (see Figure 1).

(2, s)-Hierarchical covers. We now introduce the notion of (2, s)-hierarchical covers (which, for simplicity, we shall also call (2, s)-covers) with respect to a rooted tree (T, s). This notion explicitly depends on a given vertex s ∈ V(T), which, for the sake of simplicity, will be assumed to be a leaf of T. (2, s)-Hierarchical covers are guaranteed not to be much larger than a 2-hierarchical cover (see Theorem 6). They are also amenable to a bottom-up construction.

Definition 5. Given any subtree R ⊆ T, a 2-hierarchical cover S_R is a (2, s)-hierarchical cover of R if, for all S ∈ S_R \ {T}, there exist v, w ∈ S with v ∈ ↓_T^s(w) such that either (case 1: |∂_T(S)| = 1) S = [w → v], or (case 2: |∂_T(S)| = 2) S = [w; v]. In the former case v ∈ χ_T^s(w). We define Λ_2^s(R) to be the minimal height of any possible (2, s)-hierarchical cover of R ⊆ T.

Thus every subtree of a (2, s)-hierarchical cover is necessarily "oriented" with respect to the root s.
3 Computing an optimal hierarchical cover

From a "big picture" perspective, a (2, s)-hierarchical cover G is recursively constructed in a bottom-up fashion: in the initialization phase G contains only the atomic components covering T, i.e., the ones formed only by a pair of adjacent vertices of V(T). At this stage we therefore have |G| = |E(T)|. G then grows step by step through the addition of new covering subtrees of T. At each time step t, at least one subtree of T is added to G. All the subtrees added at each step t must strictly contain only subtrees added before step t.

We now introduce the formal description of our method for constructing a (2, s)-hierarchical cover G. As we said, the construction of G proceeds in incremental steps. At each step t the method operates on a tree T_t whose vertices are part of V(T). The construction of T_t starts from T_{t−1} (if t > 0) in such a way that V(T_t) ⊆ V(T_{t−1}), where T_0 is set to be the subtree of (T, s) containing the root and all the internal vertices. During each step t, all the while-loop instructions of Figure 1 are executed: (1) some vertices (the black ones in Figure 1) are selected through a depth-first visit (during the backtracking steps) of T_t starting from s³; (2) for each selected vertex v, the subtree S^v is obtained by merging subtrees added to G in previous steps and overlapping at vertex v; (3) in order to create the tree T_{t+1} from T_t, the previously selected vertices of T_t are removed; (4) the edge set E(T_{t+1}) is created from E(T_t) in such a way as to preserve the structure of T_t, but all the edges incident to the vertices removed from V(T_t) (the black vertices in Figure 1) in while-loop step 3 need to be deleted. The possible disconnection that would arise from the removal of these parts of T_t is avoided by completing the construction of E(T_{t+1}) through the addition of some new edges. These additional edges are not part of E(T) and link each vertex v with its grandparent in T_t whenever v's parent was deleted (see the dashed edges in Figure 1) during the construction of T_{t+1} from T_t. In the final while-loop step the variable t gets incremented by 1.

Basically, the key to obtaining optimality with this construction method can be explained with the following observation. At each time step t, when we add a covering subtree S^v for some vertex v ∈ V(T_t) selected by the algorithm (the black vertices in Figure 1), the whole (2, s)-cover of S^v becomes completely contained in G, and its height is t + 1, which can be proven to be the minimum possible height of a (2, s)-cover of S^v. Hence, at each time step t we construct the (t+1)-th level (in the hierarchically nested sense) of G in such a way as to achieve local optimality for all elements contained in all levels smaller than or equal to t + 1. As the next theorem states, the running time of the algorithm is linear in |V(T)|.

Footnotes: (2) This representation is not necessarily unique: if ℓ_1, ℓ_2 ∈ leaves(T) ∩ ↓_T^w(z), we have [w; ℓ_1] = [w; ℓ_2] = [w → z]. (3) Observe that s is the unique vertex belonging to V(T_t) for all time steps t ≥ 0.

Theorem 6. Given a rooted tree (T, s), the algorithm in Figure 1 outputs G, an optimal (2, s)-hierarchical cover, in time linear in |V(T)|, of height Λ_2^s(T), which is bounded as

Λ_∞(T) ≤ Λ_2(T) ≤ Λ_2^s(T) ≤ 2Λ_∞(T) ≤ O(min(log |V(T)|, Δ(T))).
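As an aside, the round structure underlying Theorem 6 is easy to simulate in software. The sketch below tracks only the round t in which each vertex is removed (anticipating the definition of mergeable vertices given next), not the covering subtrees themselves; the parent-map representation and all names are ours, and the final round count mirrors the height of the constructed cover.

```python
from collections import defaultdict

def cover_rounds(parent, s):
    """Rounds of the splitting schedule on T_0 (interior of T plus the root s).
    `parent` maps every vertex of T_0 except s to its parent toward s.
    Returns {vertex: round in which it was merged}."""
    parent = dict(parent)          # keep the caller's map intact
    alive = set(parent) | {s}
    round_of, t = {}, 0
    while alive != {s}:
        children = defaultdict(set)
        for v in alive - {s}:
            children[parent[v]].add(v)
        def mergeable(v):
            # leaf of T_t, or a single child that is itself not mergeable
            kids = children[v]
            return not kids or (len(kids) == 1 and not mergeable(next(iter(kids))))
        merged = {v for v in alive - {s} if mergeable(v)}
        for v in merged:
            round_of[v] = t
        alive -= merged
        for v in alive - {s}:      # the dashed "grandparent" edges of step 4
            while parent[v] not in alive:
                parent[v] = parent[parent[v]]
        t += 1
    return round_of

# Path 1-2-3-4-5 rooted at the leaf s = 1; T_0 drops the far leaf 5.
print(cover_rounds({2: 1, 3: 2, 4: 3}, 1))  # {4: 0, 2: 0, 3: 1}
```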
Before we provide the detailed description of the algorithm for constructing an optimal (2, s)-hierarchical cover, we need some ancillary definitions. We call a vertex v ∈ V(T_t) \ {s} mergeable (at time t) if and only if either (i) v ∈ leaves(T_t), or (ii) v has a single child in T_t and that child is not mergeable. If v ∈ V(T_t) \ {s} is mergeable we write v ∈ M_t. We also use the following shorthands to make our notation more intuitive: we set c_v^t := χ_{T_t}^s(v) when |χ_{T_t}^s(v)| = 1, p_v^t := π_{T_t}^s(v) when v ≠ s, and g_v^t := π_{T_t}^s(p_v^t) when v, p_v^t ≠ s. Finally, given u, u' ∈ V(T) such that u' ∈ ↓_T^s(u), we denote by χ_T^s(u ↦ u') the child of u which is an ancestor of u' in T.

Input: Rooted tree (T, s).
Initialisation: T_0 ← T° ∪ {s}; t ← 0; G ← {S^(v, π_T^s(v)) : v ∈ V(T) \ {s}}.
While V(T_t) ≠ {s}:
1. Construct M_t via a depth-first search of T_t from s.
2. For all v ∈ M_t, merge as follows: if v ∈ leaves(T_t) then z ← χ_T^s(p_v^t ↦ v) and G ← G ∪ {[p_v^t → z]}; else G ← G ∪ {[p_v^t; c_v^t]}.
3. V(T_{t+1}) ← V(T_t) \ M_t.
4. E(T_{t+1}) ← {(v, p_v^t) : v, p_v^t ∈ V(T_{t+1})} ∪ {(v, g_v^t) : v, g_v^t ∈ V(T_{t+1}), p_v^t ∉ V(T_{t+1})}.
5. t ← t + 1.
Output: Optimal (2, s)-hierarchical cover G of T.

Figure 1: Left: Pseudocode for the linear time construction algorithm for an optimal (2, s)-hierarchical cover. Right: Pictorial explanation of the pseudocode and the details of the hierarchical cover. In order to clarify the method, we describe some of the details of the cover and some merge operations performed in the diagram. Vertex 1 is the root vertex s. In each component, depicted as enclosed by a line, the black node is the splitting vertex, i.e., a mergeable vertex of the tree T_t. The boundary definition may be clarified by noting, for instance, that ∂_T(S^2) = {4} and ∂_T(S^10) = {8, 12}. Subtree S^2 contains vertices 1, 2, 3 and 4. Vertex 2 is the splitting vertex of S^2. σ(S^2, 2) = {S^(1,2), S^(2,3), S^(2,4)}, i.e., at time t = 0, S^2 is formed by merging the three atomic subtrees S^(1,2), S^(2,3) and S^(2,4), which were added in the initialization step. These three subtrees overlap only at vertex 2, which is depicted in black because it is mergeable in T_0. As for the decomposition tree (D, r), we have χ_D^r(5) = {(4, 5), 6}, which implies that S^5 is formed by the atomic component S^(4,5) and the non-atomic component S^6. At time t = 1, S^12 is obtained by merging S^10 together with S^13, both created at time t = 0. Observe that in T_1 vertex 12 is a leaf, and the z variable in while-loop step 2 is assigned to vertex 10 (v and p_v^t are respectively vertices 12 and 8). With the bracket notation we can write, for instance, S^2 = [1; 4] and S^12 = [8 → 10] (= [8; 11] = [8; 14]). Observe that, according to the definition of a (2, s)-hierarchical cover, we have 4 ∈ ↓_T^1(1) and 10 ∈ ↓_T^1(8). Finally, notice that the height of the (2, s)-hierarchical cover of S^v is equal to t + 1 iff v is depicted in black in T_t.

4 Online marginalization

In this section we introduce our algorithm for efficiently computing marginals by summing over products of variables in a tree topology. Formally, our model is specified by a triple (T, Ψ, D) where T is a tree, Ψ = (Ψ_{e,l,m} : e ∈ E(T), l ∈
ℕ_k, m ∈ ℕ_k) so that each Ψ_e is a positive symmetric k × k matrix, and D = (d_{v,c} : v ∈ V(T), c ∈ ℕ_k) is a |V(T)| × k matrix. In a probabilistic setting it is natural to view each normalized Ψ_e as a stochastic symmetric "transition" matrix and the "data" D as a right stochastic matrix corresponding to "beliefs" about k different labels at each vertex in T. In our online setting Ψ is a fixed parameter and D changes over time, and is thus an element of a sequence (D^1, ..., D^t, ...) in which successive elements differ only in a single row. Thus at each point in time we receive information at a single vertex. In our intended application of the model (see Section 5) there is no necessary "randomness" in the generation of the data; however, the language of probability provides a natural metaphor that we use for our computed quantities. Thus a (k-ary) labeling of T is a vector θ ∈ L with L := ℕ_k^{V(T)}, and its "probability" with respect to (Ψ, D) is

p(θ | Ψ, D) := (1/Z) ∏_{(i,j)∈E(T)} Ψ_{(i,j),θ(i),θ(j)} ∏_{v∈V(T)} d_{v,θ(v)},    (1)

with the normalising constant Z := Σ_{θ∈L} ∏_{(i,j)∈E(T)} Ψ_{(i,j),θ(i),θ(j)} ∏_{v∈V(T)} d_{v,θ(v)}. We denote the marginal probability at a vertex v as

p(v → a | Ψ, D) := Σ_{θ∈L : θ(v)=a} p(θ | Ψ, D).    (2)

Using the hierarchical cover for efficient online marginalization. In the previous section we discussed a method to compute a hierarchical cover of a tree T with optimal height Λ_2^s(T) in time linear in T. In this subsection we will show how these covering components form a covering set of cached "marginals," so that we may either compute p(v → a | Ψ, D) or update a single row of the data matrix D and recompute the changed cached marginals, all in time linear in Λ_2^s(T).

Definition 7. Given a tree S ⊆ T, the potential function Φ_S^T : L(∂_T(S)) → ℝ with respect to (Ψ, D) is defined by

Φ_S^T(θ̄) := Σ_{θ∈L(S) : θ(∂_T(S))=θ̄} ∏_{(v,w)∈E(S)} Ψ_{(v,w),θ(v),θ(w)} ∏_{v∈S\∂_T(S)} d_{v,θ(v)},    (3)

where L(X) := ℕ_k^X with X ⊆ V(T) is the restriction of L to X, and likewise if θ ∈ L then θ(X) ∈ L(X) is the restriction of θ to X.

For each tree in our hierarchical cover S ∈ S we will have an associated potential function. Intuitively, each of these potential functions summarizes the information in its interior by the marginal function defined on its boundary. Thus trees S ∈ S with a boundary size of 1 require k values to be cached, the "ρ" weights, while boundary-size-2 trees require k² values, the "β" weights. This clarifies our motivation to find a cover with both small height and exposure. We also cache "γ" weights that represent products of ρ weights; these weights allow efficient computation at high-degree vertices. The set of cached values necessary for fast online computation corresponds to these three types of weights, of which there is a linear quantity, and on any given update or marginalization step only O(Λ_2^s(T)) of them are accessed.
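As a sanity check for eqs. (1)-(2), a direct evaluation is easy to write down. The sketch below enumerates all k^{|V|} labelings, so it is exponential-time and useful only for validating a fast cover-based implementation on tiny trees; all names and the example numbers are ours.

```python
import itertools, math

def marginal_bruteforce(edges, Psi, D, v, a, k):
    """Direct evaluation of eq. (2) via eq. (1). Psi[(i, j)] is a k x k
    matrix of edge potentials; D[u] is a length-k vector of 'beliefs'."""
    vertices = sorted({u for e in edges for u in e})
    Z = num = 0.0
    for labels in itertools.product(range(k), repeat=len(vertices)):
        theta = dict(zip(vertices, labels))
        w = math.prod(Psi[(i, j)][theta[i]][theta[j]] for (i, j) in edges) \
            * math.prod(D[u][theta[u]] for u in vertices)
        Z += w
        if theta[v] == a:
            num += w
    return num / Z

# Tiny example: path 1-2-3, k = 2, informative data only at vertex 1.
edges = [(1, 2), (2, 3)]
Psi = {e: [[2.0, 1.0], [1.0, 2.0]] for e in edges}
D = {1: [0.9, 0.1], 2: [0.5, 0.5], 3: [0.5, 0.5]}
print(marginal_bruteforce(edges, Psi, D, 3, 0, 2))  # p(3 -> 0 | Psi, D)
```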
Definitions of weights and potentials. Given a tree T and a hierarchical cover S, it is isomorphic to a decomposition tree (D, r). The decomposition tree will serve a dual purpose. First, each vertex z ∈ D will serve as a "name" for a tree S^z ∈ S. Second, in the same way that the "message passing" of belief propagation follows the topology of the input tree, the structure of our computations follows the decomposition tree D. We now introduce our notation for computing and traversing the decomposition tree. As the cover has trees with one or two boundary vertices (excepting T, which has none), we define the corresponding vertices of the decomposition tree, C_i := {z ∈ D : |∂_T(S^z)| = i} for i ∈ {1, 2}. In this section, since we are concerned with the traversal of (D, r), we abbreviate π_D and χ_D as π and χ respectively where convenient. As χ(v) is a set of children, we define the following functions to select specific children: ℓ(v) := w if w ∈ χ(v) and π(v) ∈ ∂_T(S^w), for v ∈ D° ∩ (C_1 ∪ C_2); and r(v) := w if w ∈ χ(v), w ≠ ℓ(v) and w ∈ C_2, for v ∈ D° ∩ C_2. When clear from the context we will use ℓ_v for ℓ(v) as well as r_v for r(v). We also need notation for the potentially two boundary vertices of a tree S^v ∈ S with v ∈ D \ {r}. Observe that for v ∈ C_1 ∪ C_2 one boundary vertex of S^v is necessarily v̌ := π(v), and if v ∈ C_2 there exists an ancestor v̂ of v in D so that {v̌, v̂} = ∂_T(S^v). We also extend the split notation to pick out the specific
and D to one, this ensures that all marginal computations are unchanged. The running time of the algorithm is as follows. The computation of the hierarchical cover5 is linear in |V (T )| as is the initialization step. The update and marginalization are linear in cover height ?? (T ). The algorithm also scales quadratically in k on the marginalization step and cubically in k on update as the merge of two C2 trees require the multiplication of two k ? k matrices. Thus for example if the set of possible labels is linear in the size of the tree classical belief propagation may be faster. Finally we observe that we may reduce the cubic dependence to a quadratic dependence on k via a cover with the height bounded by the diameter of T as opposed to ?? (T ). This follows as the only cubic step is in the update of a non-atomic (non-edge) ?-potential. Thus if we can build a cover, with only atomic ?-potentials the running time will scale with k quadratically. We accomplish this by modifying the cover algorithm (Figure 1) to only merge leaf vertices. Observe that the height of this cover is now O(diameter(T )); and we have a hierarchical factorization into ?-potentials and only atomic ?-potentials. 5 Multi-task learning in the allocation model with T REE -H EDGE We conclude by sketching a simple online learning application to multi-task learning that is amenable to our methods. The inspiration is that we have multiple tasks and a given tree structure that describes our prior expectation of ?relatedness? between tasks (see e.g., [7, Sec. 3.1.3]). 4 5 Note: if for ?a (v) if the product is empty then the product evaluates to 1; and if v ? C1 then )$a (v) := 1. The construction of the decomposition tree may be simultaneously accomplished with the same complexity. 7 Marginalization (vertex v ? D? ) : 1. w ? r 2. ?a (w) ? ?a (r) 3. while(w &= v) 4. w ? ?v (w) 5. if(w ? C1 ) 6. ?a# (w) ? & ?a (?(w))/?a (w) 7. )#a (w) ? b ?ab (*(w))?b# (w) 8. ?a (w) = ?a (w))#a (w) 9. else 10. if(w = *(?(w))) 11. ?a# (w) ? )$a (?(w))?a (?(w)) 12. ?a$ (w) ? ?a# (?(w)) 13. else 14. ?a# (w) ? )#a (?(w))?a (?(w)) $ 15. ?a$ (w) ? &?a (?(w)) # 16. )a (w) ? &b ?b# (w)?ab (*(w)) 17. )$a (w) ? b ?b$ (w)?ab (+(w)) 18. ?a (w) ? )#a (w))$a (w)?a (w) 19. & 20. Output: ?a (v)/( b ?b (v)) Initialization: The ?, ? and ? weights are initialised in a bottom-up fashion on the decomposition tree - we initialise the weights of a vertex after we have initialised the weights of all its children. Specifically, we first do a depth-first search of D starting from r: When we reach an edge (v, w) ? E(T ), if neither v or w is a leaf then we set ?ab ((v, w)) ? ?(v,w),a,b otherwise assuming w is a leaf we set ?a (v) ? 1 (dummy edge). When we reach a vertex, v ? V (T ), for the last time (i.e. 'just before we backtrack from v) then set: ?a (v) ? dva w??(v)?C1 ?a (w), and if v ? C2 then & ?ab (v) ?& c ?ca (*(v))?cb (+(v))?c (v), or if v ? C1 then ?a (v) ? c ?ca (*(v))?c (v). Update (vertex v ? D? ; data d ? [0, ?)k ): a ; dv ? d; w ? v 1. ?a (v) ? ?a (v) ddva 2. while(w &= r) 3. if(w ? C1 ) 4. ?aold ? ?a& (w) 5. ?a (w) ? c ?ca (*(w))?c (w) 6. ?a (?(w)) ? ?a (?(w))?a (w)/?aold 7. else & 8. ?ab (w) ? c ?ca (*(w))?cb (+(w))?c (w) 9. w ? ?(w) Figure 2: Algorithm: Initialization, Marginalization and Update 1. Parameters: A triple (T, ?, D1 ) and ? ? (0, ?). 2. For t = 1 to ! do 3. Receive: v t ? V (T ) 4. Predict: p?t = (p(v t ? a|?, Dt ))a?INk 5. Receive: y t ? [0, 1]k 6. Incur loss: Lmix (y t , p?t ) t 7. Update: Dt+1 = Dt ; Dt+1 (v t ) = (? 
pt (a)e??y (a) )a?INk Figure 3: T REE -H EDGE Thus each vertex represents a task and if we have an edge between vertices then a priori we expect those tasks to be related. Thus the hope is that information received for one task (vertex) will allow us to improve our predictions on another task. For us each of these tasks is an allocation task as addressed often with the H EDGE algorithm [4]. A similar application of the H EDGE algorithm in multi-task learning was given in [8]. Their the authors considered a more challenging set-up where the task structure is unknown and the hope is to do well if there is a posteriori a small clique of closely related tasks. Our strong assumption of prior ?tree-structured? knowledge allows us to obtain a very efficient algorithm and sharp bounds which are not directly comparable to their results. Finally, this set-up is also closely related to online graph labeling problem as in e.g., [9, 10, 11]. Thus the set-up is as follows. We incorporate our prior knowledge of task-relatedness with the triple (T, ?, D1 ). Then on a trial t, the algorithm is given a v t ? V (T ), representing the task. The & algorithm then gives a non-negative prediction vector p?t ? {p : ka=1 p(a) = 1} for task v t and t k receives an outcome y ? [0, 1] . It then suffers a mixture loss Lmix (y t , p?t ) := y t ? p?t . The aim is to predict to minimize this loss. We give the algorithm in Figure 3. The notation follows Section 4 and the method therein implies that on each trial we can predict and update in O(?? (T )) time. We obtain the following theorem (a proof sketch is contained in appendix C of the long version). Theorem 9. Given a tree T , a vertex sequence $v 1 , . . . , v " % and an outcome sequence $y 1 , . . . , y " % the loss of the T REE -H EDGE algorithm with the parameters (?, D1 ) and ? > 0 is, for all labelings V (T ) ? ? INk , bounded by * ) ! ! ( ( t 1 ? ln 2 t t t t=1 Lmix (y , p? ) ? c? y (?(v )) + t=1 ? log2 p(?|?, D1 ) with c? := 1 ? e?? . (4) Acknowledgements. We would like to thank David Barber, Guy Lever and Massimiliano Pontil for valuable discussions. We, also, acknowledge the financial support of the PASCAL 2 European Network of Excellence. 8 References [1] David Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2012. [2] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006. [3] Frank R. Kschischang, Brenden J. Frey, and Hans Andrea Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498?519, 2001. [4] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119?139, 1997. [5] Judea Pearl. Reverend Bayes on inference engines: A distributed hierarchical approach. In Proc. Natl. Conf. on AI, pages 133?136, 1982. [6] Arthur L. Delcher, Adam J. Grove, Simon Kasif, and Judea Pearl. Logarithmic-time updates and queries in probabilistic networks. J. Artif. Int. Res., 4:37?59, February 1996. [7] Theodoros Evgeniou, Charles A. Micchelli, and Massimiliano Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615?637, 2005. [8] Jacob Abernethy, Peter L. Bartlett, and Alexander Rakhlin. Multitask learning with expert advice. In COLT, pages 484?498, 2007. [9] Mark Herbster, Massimiliano Pontil, and Lisa Wainer. Online learning over graphs. In ICML, pages 305?312. ACM, 2005. 
[10] Mark Herbster, Guy Lever, and Massimiliano Pontil. Online prediction on large diameter graphs. In NIPS, pages 649-656. MIT Press, 2008.
[11] Nicolò Cesa-Bianchi, Claudio Gentile, and Fabio Vitale. Fast and optimal prediction on a labeled tree. In COLT, 2009.
Phoneme Classification using Constrained Variational Gaussian Process Dynamical System Sungrack Yun Qualcomm Korea Seoul, South Korea [email protected] Hyunsin Park Department of EE, KAIST Daejeon, South Korea [email protected] Sanghyuk Park Department of EE, KAIST Daejeon, South Korea [email protected] Jongmin Kim Department of EE, KAIST Daejeon, South Korea [email protected] Chang D. Yoo Department of EE, KAIST Daejeon, South Korea [email protected] Abstract For phoneme classification, this paper describes an acoustic model based on the variational Gaussian process dynamical system (VGPDS). The nonlinear and nonparametric acoustic model is adopted to overcome the limitations of classical hidden Markov models (HMMs) in modeling speech. The Gaussian process prior on the dynamics and emission functions respectively enable the complex dynamic structure and long-range dependency of speech to be better represented than that by an HMM. In addition, a variance constraint in the VGPDS is introduced to eliminate the sparse approximation error in the kernel matrix. The effectiveness of the proposed model is demonstrated with three experimental results, including parameter estimation and classification performance, on the synthetic and benchmark datasets. 1 Introduction Automatic speech recognition (ASR), the process of automatically translating spoken words into text, has been an important research topic for several decades owing to its wide array of potential applications in the area of human-computer interaction (HCI). The state-of-the-art ASR systems typically use hidden Markov models (HMMs) [1] to model the sequential articulator structure of speech signals. There are various issues to consider in designing a successful ASR and certainly the following two limitations of an HMM need to be overcome. 1) An HMM with a first-order Markovian structure is suitable for capturing short-range dependency in observations and speech requires a more flexible model that can capture long-range dependency in speech. 2) Discrete latent state variables and sudden state transitions in an HMM have limited capacity when used to represent the continuous and complex dynamic structure of speech. These limitations must be considered when seeking to improve the performance of an ASR. To overcome these limitations, various models have been considered to model the complex structure of speech. For example, the stochastic segment model [2] is a well-known generalization of the HMM that represents long-range dependency over observations using a time-dependent emission function. And the hidden dynamical model [3] is used for modeling the complex nonlinear dynamics of a physiological articulator. Another promising research direction is to consider a nonparametric Bayesian model for nonlinear probabilistic modeling of speech. Owing to the fact that nonparametric models do not assume any 1 fixed model structure, they are generally more flexible than parametric models and can allow dependency among observations naturally. The Gaussian process (GP) [4], a stochastic process over a real-valued function, has been a key ingredient in solving such problems as nonlinear regression and classification. As a standard supervised learning task using the GP, Gaussian process regression (GPR) offers a nonparametric Bayesian framework to infer the nonlinear latent function relating the input and the output data. 
Recently, researchers have begun focusing on applying the GP to unsupervised learning tasks with high-dimensional data, such as the Gaussian process latent variable model (GP-LVM) for reduction of dimensionality [5-6]. In [7], a variational inference framework was proposed for training the GP-LVM. The variational approach is one of the sparse approximation approaches [8]. The framework was extended to the variational Gaussian process dynamical system (VGPDS) in [9] by augmenting latent dynamics for modeling high-dimensional time series data. High-dimensional time series have been incorporated in many applications of machine learning such as robotics (sensor data), computational biology (gene expression data), computer vision (video sequences), and graphics (motion capture data). However, no previous work has considered the GP-based approach for speech recognition tasks that involve high-dimensional time series data. In this paper, we propose a GP-based acoustic model for phoneme classification. The proposed model is based on the assumption that the continuous dynamics and nonlinearity of the VGPDS can be better represent the statistical characteristic of real speech than an HMM. The GP prior over the emission function allows the model to represent long-range dependency over the observations of speech, while the HMM does not. Furthermore, the GP prior over the dynamics function enables the model to capture the nonlinear dynamics of a physiological articulator. Our contributions are as follows: 1) we introduce a GP-based model for phoneme classification tasks for the first time, showing that the model has the potential of describing the underlying characteristics of speech in a nonparametric way; 2) we propose a prior for hyperparameters and a variance constraint that are specially designed for ASR; and 3) we provide extensive experimental results and analyses to reveal clearly the strength of our proposed model. The remainder of the paper is structured as follows: Section 2 introduces the proposed model after a brief description of the VGPDS. Section 3 provides extensive experimental evaluations to prove the effectiveness of our model, and Section 4 concludes the paper with a discussion and plans for future work. 2 2.1 Acoustic modeling using Gaussian Processes Variational Gaussian Process Dynamical System The VGPDS [9] models time series data by assuming that there exist latent states that govern the data. Let Y = [[y11 , ? ? ? yN 1 ]T , ? ? ? , [y1D , ? ? ? yN D ]T ] ? RN ?D , t = [t1 , ? ? ? , tN ]T ? RN + , and T T N ?Q X = [[x11 , ? ? ? xN 1 ] , ? ? ? , [x1Q , ? ? ? xN Q ] ] ? R be observed data, time, and corresponding latent state, where N , D, and Q(< D) are the number of samples, the dimension of the observation space, and the dimension of the latent space, respectively. In the VGPDS, these variables are related as follows: xnj = gj (tn ) + ?nj , ?nj ? N (0, 1/?jx ), yni = fi (xn ) + ni , ni ? N (0, 1/?iy ), (1) where fi (x) ? GP(?fi (x), kif (x, x0 )) and gj (t) ? GP(?gj (t), kjg (t, t0 )) are the emission function from the latent space to the i-th dimension of the observation space and the dynamics function from the time space to the j-th dimension of the latent space, respectively. Here, n ? {1, ? ? ? , N }, i ? {1, ? ? ? , D}, and j ? {1, ? ? ? , Q}. In this paper, a zero-mean function is used for all GPs. Fig. 1 shows graphical representations of HMM and VGPDS. 
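To make the generative process in Eq. (1) concrete, the following minimal sketch draws a sample from a VGPDS-style prior. It assumes unit-variance RBF kernels for both the dynamics and emission GPs (the kernels actually used in this paper are specified in Section 2.2.2), and all names and parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

def rbf_kernel(a, b, variance=1.0, lengthscale=1.0):
    # k(a, b) = variance * exp(-||a - b||^2 / (2 * lengthscale^2))
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sample_vgpds(t, Q=2, D=5, beta_x=100.0, beta_y=100.0, seed=0):
    """Draw (X, Y) from Eq. (1): x_nj = g_j(t_n) + noise, y_ni = f_i(x_n) + noise."""
    rng = np.random.default_rng(seed)
    N = len(t)
    Kg = rbf_kernel(t[:, None], t[:, None]) + 1e-8 * np.eye(N)
    # latent states: one GP draw per latent dimension, plus N(0, 1/beta_x) noise
    X = rng.multivariate_normal(np.zeros(N), Kg, size=Q).T \
        + rng.normal(0.0, beta_x ** -0.5, (N, Q))
    Kf = rbf_kernel(X, X) + 1e-8 * np.eye(N)
    # observations: one GP draw per observed dimension, plus N(0, 1/beta_y) noise
    Y = rng.multivariate_normal(np.zeros(N), Kf, size=D).T \
        + rng.normal(0.0, beta_y ** -0.5, (N, D))
    return X, Y

t = np.linspace(0.0, 1.0, 50)   # unit-length time axis, cf. Eq. (6) below
X, Y = sample_vgpds(t)
```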
Although the Gaussian process dynamical model (GPDM) [10], which involves an auto-regressive dynamics function, is also a GP-based model for time-series, it is not considered in this paper. The marginal likelihood of the VGPDS is given as Z p(Y|t) = p(Y|X)p(X|t)dX. 2 (2) Figure 1: Graphical representations of (left) the left-to-right HMM and (right) the VGPDS: In the left figure, yn ? RD and xn ? {1, ? ? ? , C} are observations and discrete latent states. In the right figure, yni , fni , xnj , gnj , and tn are observations, emission function points, latent states, dynamics function points, and times, respectively. All function points in the same plate are fully connected. Since the integral in Eq. (2) is not tractable, a variational method is used by introducing a variational distribution q(X). A variational lower bound on the logarithm of the marginal likelihood is Z p(Y|X)p(X|t) log p(Y|t) ? q(X) log dX q(X) Z Z q(X) = q(X) log p(Y|X)dX ? q(X) log dX p(X|t) = L ? KL(q(X)||p(X|t)). (3) By the assumption of independence over the observation dimension, the first term in Eq. (3) is given as D Z D X X L= q(X) log p(yi |X)dX = Li . (4) i=1 i=1 In [9], a variational approach which involves sparse approximation of the covariance matrix obtained from GP is proposed. The variational lower bound on Li is given as " # ? i |1/2 1 T ?iy (?iy )N/2 |K (? 2 yi Wi yi ) ? ?1 ?2i )), (?0i ? Tr(K (5) Li ? log e ? i ? i |1/2 2 (2?)N/2 |?iy ?2i + K ? i )?1 ?T . Here, K ? i ? RM ?M is a kernel matrix calcuwhere Wi = ?iy IN ? (?iy )2 ?1i (?iy ?2i + K 1i ? ? RM ?Q that are used for sparse lated using the i-th kernel function and inducing input variables X approximation of the full kernel matrix Ki . The closed-form of the statistics {?0i , ?1i , ?2i }D i=1 , which are functions of variational parameters and inducing points, can be found in [9]. In the secQQ QN QQ ond term of Eq. (3), p(X|t) = j=1 p(xj ) and q(X) = n j=1 N (?nj , snj ) are the prior for the latent state and the variational distribution that is used for approximating the posterior of the latent state, respectively. The parameter set ?, which consists of the hyperparameters {? f , ? g } of the kernel functions, the noise variances {? y , ? x }, the variational parameters {[?n1 , ? ? ? , ?nQ ], [sn1 , ? ? ? , snQ ]}N n=1 of ? is estimated by maximizing the lower bound on log p(Y|t) q(X), and the inducing input points X, in Eq. (3) using a scaled conjugate gradient (SCG) algorithm. 2.2 Acoustic modeling using VGPDS For several decades, HMM has been the predominant model for acoustic speech modeling. However, as we mentioned in Section 1, the model suffers from two major limitations: discrete state variables and first-order Markovian structure which can model short-range dependency over the observations. 3 To overcome such limitations of the HMM, we propose an acoustic speech model based on the VGPDS, which is a nonlinear and nonparametric model that can be used to represent the complex dynamic structure of speech and long-range dependency over observations of speech. In addition, to fit the model to large-scale speech data, we describe various implementation issues. 2.2.1 Time scale modification The time length of each phoneme segment in an utterance varies with various conditions such as position of the phoneme segment in the utterance, emotion, gender, and other speaker and environment conditions. 
To incorporate this fact into the proposed acoustic model, the time points $t_n$ are modified as follows:

$t_n = \frac{n-1}{N-1},$  (6)

where $n$ and $N$ are the observation index and the number of observations in a phoneme segment, respectively. This time scale modification makes all phoneme signals have unit time length.

2.2.2 Hyperparameters

To compute the kernel matrices in Eq. (5), the kernel function must be defined. We use the radial basis function (RBF) kernel for the emission function $f$ as follows:

$k^f(x, x') = \sigma_f \exp\Big(-\sum_{j=1}^{Q} \omega_j^f (x_j - x'_j)^2\Big),$  (7)

where $\sigma_f$ and $\omega_j^f$ are the RBF kernel variance and the $j$-th inverse length scale, respectively. The RBF kernel function is adopted to represent the smoothness of speech. For the dynamics function $g$, the following kernel function is used:

$k^g(t, t') = \sigma_g \exp\big(-\omega^g (t - t')^2\big) + \gamma t t' + b,$  (8)

where $\gamma$ and $b$ are the linear kernel variance and bias, respectively. The above dynamics kernel, which consists of both linear and nonlinear components, is used to represent the complex dynamics of the articulator. All hyperparameters are assumed to be independent in this paper. In [11], the same kernel function parameters are shared over all dimensions of human-motion capture data and high-dimensional raw video data. However, this extensive sharing of the hyperparameters is unsuitable for speech modeling. Even though each dimension of the observations is normalized in advance to have unit variance, the signal-to-noise ratio (SNR) is not consistent over all dimensions. To handle this problem, this paper considers each dimension to be modeled independently using different kernel function parameters. Therefore, the hyperparameter sets are defined as $\theta^f = \{\sigma_i^f, \{\omega_{1i}^f, \cdots, \omega_{Qi}^f\}\}_{i=1}^D$ and $\theta^g = \{\sigma_j^g, \omega_j^g, \gamma_j, b_j\}_{j=1}^Q$.
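As a concrete reference, Eqs. (7) and (8) can be written directly in code. This is a minimal sketch; the parameter values shown are illustrative only.

```python
import numpy as np

def emission_kernel(x1, x2, sigma_f, omega_f):
    """Eq. (7): RBF kernel with per-dimension inverse length scales omega_f (length Q)."""
    d2 = (((x1[:, None, :] - x2[None, :, :]) ** 2) * omega_f).sum(-1)
    return sigma_f * np.exp(-d2)

def dynamics_kernel(t1, t2, sigma_g, omega_g, gamma, b):
    """Eq. (8): k^g(t, t') = sigma_g * exp(-omega_g (t - t')^2) + gamma * t * t' + b."""
    return (sigma_g * np.exp(-omega_g * (t1[:, None] - t2[None, :]) ** 2)
            + gamma * np.outer(t1, t2) + b)

t = np.arange(30) / 29.0                       # unit-length time axis from Eq. (6)
Kg = dynamics_kernel(t, t, sigma_g=1.0, omega_g=10.0, gamma=0.1, b=0.01)
```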
2.2.3 Priors on the hyperparameters

In the parameter estimation of the VGPDS, the SCG algorithm does not guarantee the optimal solution. To alleviate this problem, we place the following prior on the hyperparameters of the kernel functions:

$p(\theta) \propto \exp(-\theta^2/\rho_\theta),$  (9)

where $\theta \in \{\omega^f, \omega^g\}$ and $\rho_\theta$ are the hyperparameter and the model parameter of the prior, respectively. In this paper, $\rho_\theta$ is set to the sample variance for the hyperparameters of the emission kernel functions, and to 1 for the hyperparameters of the dynamics kernel functions. Uniform priors are adopted for the other hyperparameters; the parameters of the VGPDS are then estimated by maximizing the joint distribution $p(\mathbf{Y}, \theta|\mathbf{t}) = p(\mathbf{Y}|\mathbf{t}, \theta)\,p(\theta)$.

2.2.4 Variance constraint

In the lower bound of Eq. (5), the second term on the right-hand side is a regularization term that represents the sparse approximation error of the full kernel matrix $\mathbf{K}_i$. Note that with more inducing input points, the approximation error becomes smaller. However, only a small number of inducing input points can be used owing to the limited availability of computational power, which increases the effect of the regularization term. To mitigate this problem, we introduce the following constraint on the diagonal terms of the covariance matrix:

$\frac{\mathrm{Tr}(\langle \mathbf{K}_i \rangle_{q(\mathbf{X})})}{N} + \frac{1}{\beta_i^y} = \sigma_i^2,$  (10)

where $\langle \mathbf{K}_i \rangle_{q(\mathbf{X})}$ and $\sigma_i^2$ are the expectation of the full kernel matrix $\mathbf{K}_i$ and the sample variance of the $i$-th dimension of the observation, respectively. This constraint is designed so that the variance of each observation calculated from the estimated model equals the sample variance. By using $\psi_{0i} = \mathrm{Tr}(\langle \mathbf{K}_i \rangle_{q(\mathbf{X})})$, the inverse noise variance parameter is obtained directly as $\beta_i^y = (\sigma_i^2 - \psi_{0i}/N)^{-1}$ without separate gradient-based optimization. The partial derivative $\partial \beta_i^y / \partial \psi_{0i} = (\beta_i^y)^2/N$ is then used in the SCG-based optimization via the chain rule. In Section 3.1, the effectiveness of the variance constraint is demonstrated empirically.
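The closed-form update implied by Eq. (10) is simple to implement. The sketch below assumes $\psi_{0i}$ has already been computed from the variational statistics; the small positive floor is an implementation detail not specified in the paper, added to keep the precision positive.

```python
import numpy as np

def constrained_noise_precision(psi0, sample_var, N, eps=1e-8):
    """Eq. (10): beta_i^y = (sigma_i^2 - psi_0i / N)^(-1).

    psi0:       psi_0i = Tr(<K_i>_q(X)), one value per output dimension
    sample_var: sigma_i^2, sample variance of each output dimension
    eps:        stability floor (assumption, not from the paper)
    """
    beta_y = 1.0 / np.maximum(sample_var - psi0 / N, eps)
    dbeta_dpsi0 = beta_y ** 2 / N   # chain-rule factor fed to the SCG optimizer
    return beta_y, dbeta_dpsi0
```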
2.3 Classification

For classification with trained VGPDSs, maximum-likelihood (ML) decoding is used. Let $\mathcal{D}^{(l)} = \{\mathbf{Y}^{(l)}, \mathbf{t}^{(l)}\}$ and $\Theta^{(l)}$ be the observation and parameter sets of the $l$-th VGPDS, respectively. Given the test data $\mathcal{D}^* = \{\mathbf{Y}^*, \mathbf{t}^*\}$, the classification result $\hat{l} \in \{1, \cdots, L\}$ can be obtained by

$\hat{l} = \arg\max_l \log p(\mathbf{Y}^*|\mathbf{t}^*, \mathbf{Y}^{(l)}, \mathbf{t}^{(l)}, \Theta^{(l)}) = \arg\max_l \log \frac{p(\mathbf{Y}^{(l)}, \mathbf{Y}^*|\mathbf{t}^{(l)}, \mathbf{t}^*, \Theta^{(l)})}{p(\mathbf{Y}^{(l)}|\mathbf{t}^{(l)}, \Theta^{(l)})}.$  (11)

3 Experiments

To evaluate the effectiveness of the proposed model, three different kinds of experiments have been designed:
1. Parameter estimation: validating the effectiveness of the proposed variance constraint (Section 2.2.4) on model parameter estimation.
2. Two-class classification using synthetic data: demonstrating explicitly the advantages of the proposed model over the HMM with respect to the degree of dependency over the observations.
3. Phoneme classification: evaluating the performance of the proposed model on real speech data.
Each experiment is described in detail in the following subsections. In this paper, the proposed model is referred to as the constrained-VGPDS (CVGPDS).

3.1 Parameter estimation

This subsection describes the parameter estimation experiments on synthetic data. Synthetic data are generated by using a phoneme model that is selected from the trained models in Section 3.3 and then modified: the RBF kernel variances of the emission functions and the emission noise variances are altered from the selected model. In this experiment, the emission noise variances and inducing input points are estimated, while all other parameters are fixed to the true values used in generating the data. Fig. 2 shows the parameter estimation results. The estimates of the 39-dimensional noise variances of the emission functions are shown together with the true noise variances, the true RBF kernel variances, and the sample variances of the synthetic data. The top row shows the estimation results without the variance constraint, and the bottom row those with the variance constraint.

Figure 2: Results of parameter estimation: (top-left) VGPDS with M = 5, (top-right) VGPDS with M = 30, and (bottom) CVGPDS with M = 5. (Plot content omitted.)

By comparing the two figures on the top row, we can confirm that the estimation result of the noise variance with M = 30 inducing input points is better than that with M = 5 inducing input points. This result is expected, in the sense that smaller values of M produce larger errors in the sparse approximation of the covariance matrix. However, both noise variance estimates are still different from the true values. By comparing the top and bottom rows, we can see that the proposed CVGPDS outperforms the VGPDS in terms of parameter estimation. Remarkably, the estimation result of the CVGPDS with M = 5 inducing input points is much better than the result of the VGPDS with M = 30. Based on these observations, we can conclude that the proposed CVGPDS is considerably more robust to the sparse approximation error than the VGPDS, as claimed in Section 2.2.4.

3.2 Two-class classification using synthetic data

This section aims to show that when there is strong dependency over the observations, the proposed CVGPDS is a more appropriate model than the HMM for the classification task. To this end, we first generated several two-class classification datasets with different degrees of dependency over the observations. The considered classification task is to map each input segment to one of two class labels. Using $s \in \{1, \ldots, S\}$ as the segment index, the synthetic dataset $\mathcal{D} = \{\mathbf{Y}_s, \mathbf{t}_s, l_s\}_{s=1}^S$ consists of $S$ segments, where the $s$-th segment has $N_s$ samples. Here, $\mathbf{Y}_s \in \mathbb{R}^{N_s \times D}$, $\mathbf{t}_s \in \mathbb{R}^{N_s}$, and $l_s$ are the observation data, time, and class label of the $s$-th segment, respectively. The synthetic dataset is generated as follows (a compact code sketch of this procedure is given after the list):

- Mean and kernel functions of two GPs $g_j(t)$ and $f_i(x)$ are defined as

  $g_j(t):\ \mu_j^g(t) = a_j t + b_j,\quad k_j^g(t, t') = \mathbf{1}_{t=t'}$
  $f_i(x):\ \mu_i^f(x) = \sum_{z=1}^{Z_i} w_z\, \mathcal{N}(x; m_i^z, \Sigma_i^z),\quad k_i^f(x, x') = \sigma_i \exp(-\omega_i \|x - x'\|)$  (12)

  where $\{a_j, b_j\}$, $\{w_z, m_i^z, \Sigma_i^z\}$, and $\{\sigma_i, \omega_i\}$ are respectively the parameters of the linear, Gaussian mixture, and RBF kernel functions. The superscript $z$ denotes the component index of the Gaussian mixture, and $Z_i$ is the number of components in $f_i(x)$.
- For the $s$-th segment $\{\mathbf{Y}_s, \mathbf{t}_s, l_s\}$:
  1. $l_s$ is selected as either class 1 or 2.
  2. $N_s$ is randomly selected from the interval [20, 30], and $\mathbf{t}_s$ is obtained by using Eq. (6).
  3. From $\mathbf{t}_s$, the mean vector $\mu_j^g(\mathbf{t}_s)$ and covariance matrix $\mathbf{K}_j^g$ are obtained for $j = 1, \ldots, Q$. Let $\mathbf{X}_s \in \mathbb{R}^{N_s \times Q}$ be the latent state of the $s$-th segment. Then, the $j$-th column of $\mathbf{X}_s$ is generated from the $N_s$-dimensional Gaussian distribution $\mathcal{N}(\mu_j^g(\mathbf{t}_s), \mathbf{K}_j^g)$.
  4. From $\mathbf{X}_s$, the mean vector $\mu_i^f(\mathbf{X}_s)$ and covariance matrix $\mathbf{K}_i^f$ are obtained for $i = 1, \ldots, D$. Then, the $i$-th column of $\mathbf{Y}_s$ is generated from the $N_s$-dimensional Gaussian distribution $\mathcal{N}(\mu_i^f(\mathbf{X}_s), \mathbf{K}_i^f)$.

Note that the parameter $\omega_i$ controls the degree of dependency over the observations. For instance, if $\omega_i$ decreases, the off-diagonal terms of the emission kernel matrix $\mathbf{K}_i^f$ increase, which means stronger correlations over the observations.
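The generation steps above translate into a short script. The sketch below is illustrative: for brevity, a single mixture mean is shared across the $D$ output dimensions (the procedure above uses one mean function per dimension), the mixture and linear parameters are drawn once and shared across segments and classes, and $\sigma_i = 1$.

```python
import numpy as np

def generate_segment(omega, params, rng, D=5):
    """One segment following steps 1-4; smaller omega gives stronger correlation
    over the observations within a segment."""
    A, B, centers, w = params                 # shared across segments and classes
    Z, Q = centers.shape
    Ns = int(rng.integers(20, 31))            # step 2: N_s drawn from [20, 30]
    t = np.arange(Ns) / (Ns - 1)              # Eq. (6)
    # step 3: latent states; k_g(t, t') = 1{t = t'} gives identity covariance
    X = A * t[:, None] + B + rng.normal(size=(Ns, Q))
    # step 4: Gaussian-mixture mean and exponential kernel with decay omega
    d2c = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)          # Ns x Z
    mu = (w * np.exp(-0.5 * d2c) / (2 * np.pi) ** (Q / 2)).sum(-1)      # Ns
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Kf = np.exp(-omega * dist) + 1e-8 * np.eye(Ns)                      # sigma_i = 1
    Y = np.stack([rng.multivariate_normal(mu, Kf) for _ in range(D)], axis=1)
    return Y, t

rng = np.random.default_rng(0)
params = (rng.normal(size=2), rng.normal(size=2),
          rng.normal(size=(6, 2)), np.full(6, 1.0 / 6))
Y1, t1 = generate_segment(omega=0.1, params=params, rng=rng)   # class 1
Y2, t2 = generate_segment(omega=0.5, params=params, rng=rng)   # class 2
```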
The experimental setups are as follows. The synthesized dataset consists of 200 segments in total (100 segments per class). The dimensions of the latent space and the observation space are set to Q = 2 and D = 5, respectively. We use $Z_i = 6$ components for the mean function of the emission kernel function. In this experiment, three datasets are synthesized and used to compare the CVGPDS and the HMM. When generating each dataset, we use two different $\omega_i$ values, one for each class, while all other parameters in Eq. (12) are shared between the two classes. As a result, the degree of correlation between the observations is the only factor that distinguishes the two classes. The three generated datasets have different degrees of correlation over the observations, obtained by setting different $\omega_i$ values for each dataset. In particular, the third dataset is constructed with the two limitations of the HMM in mind, such that it is well represented by an HMM. This is achieved simply by changing the form of the mean function $\mu_j^g(t)$ from a linear to a step function, and setting $\omega_i = \infty$ so that each data sample is generated independently of the others. In the third dataset, the two classes are set to have different $\sigma_i$ values. The classification experiments are conducted using an HMM and the CVGPDS.

Table 1: Classification accuracy for the two-class synthetic datasets (10-fold CV average [%]). All parameters except $\omega_i$ are set to be equal for classes 1 and 2; in the case of $\omega_i = \infty$, the $\sigma_i$ are set to be different.

  omega_i (class 1 : class 2)   0.1 : 0.5   1.0 : 2.0   inf : inf
  HMM                           61.0        68.5        88.5
  CVGPDS                        78.0        79.0        92.0

Table 1 summarizes the classification performance of the HMM and the CVGPDS for the three synthetic datasets. Remarkably, in all cases the proposed CVGPDS outperforms the HMM, even in the case of $\omega_i = \infty$ (the fourth column), where the dataset was constructed to follow HMM-like characteristics. Comparing the second and the third columns of Table 1, we can see that the performance of the HMM degrades by 6.5% as $\omega_i$ becomes smaller, while the proposed CVGPDS almost maintains its performance with only a 1.0% reduction. This result demonstrates the superiority of the proposed CVGPDS in modeling data with strong correlations over the observations. Apparently, the HMM failed to distinguish the two classes with different degrees of dependency over the observations. In contrast, the proposed CVGPDS distinguishes the two classes more effectively by capturing the different degrees of inter-dependency over the observations incorporated in each class.

3.3 Phoneme classification

In this section, phoneme classification experiments on real speech data from the TIMIT database are described. The TIMIT database contains a total of 6,300 phonetically rich utterances, each of which is manually segmented based on 61 phoneme transcriptions. Following the standard regrouping of phoneme labels [11], the 61 phonemes are reduced to 48 phonemes selected for modeling. As observations, 39-dimensional Mel-frequency cepstral coefficients (MFCCs) (13 static coefficients, deltas, and delta-deltas), extracted from the speech signals with a standard 25 ms frame size and 10 ms frame shift, are used. The dimension of the latent space is set to Q = 2. For the first phoneme classification experiment, 100 segments per phoneme are randomly selected using the phoneme boundary information provided in the TIMIT database. The number of inducing input points is set to M = 10. A 10-fold cross-validation test was conducted to evaluate the proposed model in comparison with an HMM that has three states and a single Gaussian distribution with a full covariance matrix per state. The parameters of the HMMs are estimated using the conventional expectation-maximization (EM) algorithm with a maximum-likelihood criterion.

Table 2: Classification accuracy on the 48-phoneme dataset (10-fold CV average [%]); 100 segments are used for training and testing each phoneme model.

  HMM     VGPDS   CVGPDS
  49.19   48.17   49.36

Table 2 shows the experimental results of the 48-phoneme classification. Compared to the HMM and the VGPDS, the proposed CVGPDS performs more effectively. For the second phoneme classification experiment, the TIMIT core test set consisting of 192 sentences is used for evaluation. We use the same 100 segments for training the phoneme models as in the first phoneme classification experiment. The size of the training dataset is smaller than that of conventional approaches due to our limited computational resources. When evaluating the models, we merge the labels of the 48 phonemes into the commonly used 39 phonemes [11]. Given speech observations with boundary information, a sequence of log-likelihoods is obtained, and a bigram is then constructed to incorporate linguistic information into the classification score. In this experiment, the number of inducing input points is set to M = 5.

Table 3: Classification accuracy on the TIMIT core test set [%]; 100 segments are used for training each phoneme model.

  HMM     VGPDS   CVGPDS
  57.83   61.44   61.54

Table 3 shows the experimental results of phoneme classification on the TIMIT core test set. As with the results in Table 2, the proposed CVGPDS performed better than the HMM and the VGPDS. However, the classification accuracies in Table 3 are lower than the state-of-the-art phoneme classification results [12-13]. The reasons for the lower accuracy are as follows: 1) an insufficient amount of data is used for training the model, owing to the limited availability of computational power; 2) a mixture model for the emission is not considered. These remaining issues need to be addressed for improved performance.
4 Conclusion

In this paper, a VGPDS-based acoustic model for phoneme classification was considered. The proposed acoustic model can represent the nonlinear latent dynamics and the dependency among observations through GP priors. In addition, we introduced a variance constraint on the VGPDS. Although the proposed model could not achieve the state-of-the-art performance for phoneme classification, the experimental results showed that the proposed acoustic model has potential for speech modeling. For future work, extensions to phonetic recognition and mixtures of the VGPDS will be considered.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2012-0005378 and No. 2012-0000985).

References
[1] F. Jelinek, "Continuous speech recognition by statistical methods," Proceedings of the IEEE, vol. 64, pp. 532-556, 1976.
[2] M. Ostendorf, V. Digalakis, and J. Rohlicek, "From HMMs to segment models: A unified view of stochastic modeling for speech recognition," IEEE Trans. on Speech and Audio Processing, vol. 4, pp. 360-378, 1996.
[3] L. Deng, D. Yu, and A. Acero, "Structured speech modeling," IEEE Trans. on Audio, Speech, and Language Processing, vol. 14, pp. 1492-1504, 2006.
[4] C. E. Rasmussen and C. K. I. Williams, "Gaussian Processes for Machine Learning," MIT Press, Cambridge, MA, 2006.
[5] N. D. Lawrence, "Probabilistic non-linear principal component analysis with Gaussian process latent variable models," Journal of Machine Learning Research (JMLR), vol. 6, pp. 1783-1816, 2005.
[6] N. D. Lawrence, "Learning for larger datasets with the Gaussian process latent variable model," International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 243-250, 2007.
[7] M. K. Titsias and N. D. Lawrence, "Bayesian Gaussian process latent variable model," International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 844-851, 2010.
[8] J. Quinonero-Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," Journal of Machine Learning Research (JMLR), vol. 6, pp. 1939-1959, 2005.
[9] A. C. Damianou, M. K. Titsias, and N. D. Lawrence, "Variational Gaussian process dynamical systems," Advances in Neural Information Processing Systems (NIPS), 2011.
[10] J. M. Wang, D. J. Fleet, and A. Hertzmann, "Gaussian process dynamical models for human motion," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 30, pp. 283-298, 2008.
[11] K. F. Lee and H. W. Hon, "Speaker-independent phone recognition using hidden Markov models," IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 37, pp. 1641-1648, 1989.
[12] A. Mohamed, G. Dahl, and G. Hinton, "Acoustic modeling using deep belief networks," IEEE Trans. on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 14-22, 2012.
[13] F. Sha and L. K. Saul, "Large margin hidden Markov models for automatic speech recognition," Advances in Neural Information Processing Systems (NIPS), 2007.
Nystr?om Method vs Random Fourier Features: A Theoretical and Empirical Comparison Tianbao Yang? , Yu-Feng Li? , Mehrdad Mahdavi\ , Rong Jin\ , Zhi-Hua Zhou? ? Machine Learning Lab, GE Global Research, San Ramon, CA 94583 \ Michigan State University, East Lansing, MI 48824 ? National Key Laboratory for Novel Software Technology, Nanjing University, 210023, China [email protected],mahdavim,[email protected],liyf,[email protected] Abstract Both random Fourier features and the Nystr?om method have been successfully applied to efficient kernel learning. In this work, we investigate the fundamental difference between these two approaches, and how the difference could affect their generalization performances. Unlike approaches based on random Fourier features where the basis functions (i.e., cosine and sine functions) are sampled from a distribution independent from the training data, basis functions used by the Nystr?om method are randomly sampled from the training examples and are therefore data dependent. By exploring this difference, we show that when there is a large gap in the eigen-spectrum of the kernel matrix, approaches based on the Nystr?om method can yield impressively better generalization error bound than random Fourier features based approach. We empirically verify our theoretical findings on a wide range of large data sets. 1 Introduction Kernel methods [16], such as support vector machines, are among the most effective learning methods. These methods project data points into a high-dimensional or even infinite-dimensional feature space and find the optimal hyperplane in that feature space with strong generalization performance. One limitation of kernel methods is their high computational cost, which is at least quadratic in the number of training examples, due to the calculation of kernel matrix. Although low rank decomposition approaches (e.g., incomplete Cholesky decomposition [3]) have been used to alleviate the computational challenge of kernel methods, they still require computing the kernel matrix. Other approaches such as online learning [9] and budget learning [7] have also been developed for large-scale kernel learning, but they tend to yield performance worse performance than batch learning. To avoid computing kernel matrix, one common approach is to approximate a kernel learning problem with a linear prediction problem. It is often achieved by generating a vector representation of data that approximates the kernel similarity between any two data points. The most well known approaches in this category are random Fourier features [13, 14] and the Nystr?om method [20, 8]. Although both approaches have been found effective, it is not clear what are their essential difference, and which method is preferable under which situations. The objective of this work is to understand the difference between these two approaches, both theoretically and empirically The theoretical foundation for random Fourier transform is that a shift-invariant kernel is the Fourier transform of a non-negative measure [15]. Using this property, in [13], the authors proposed to represent each data point by random Fourier features. Analysis in [14] shows that, the generalization error bound for kernel learning based on random Fourier features is given by O(N ?1/2 + m?1/2 ), where N is the number of training examples and m is the number of sampled Fourier components. 
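As a concrete point of reference, the random Fourier feature construction of [13] for the RBF kernel can be sketched as follows. This is a minimal, unoptimized illustration; the 1/sqrt(m) scaling is one common normalization choice so that inner products of the features approximate the kernel value directly.

```python
import numpy as np

def random_fourier_features(X, m, sigma, seed=0):
    """Map X (n x d) to 2m features whose inner products approximate
    the RBF kernel exp(-||x - x'||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    U = rng.normal(0.0, 1.0 / sigma, size=(X.shape[1], m))  # u_k ~ N(0, sigma^-2 I)
    P = X @ U
    # z(x)^T z(x') = (1/m) sum_k cos(u_k^T (x - x')) -> kernel value in expectation
    return np.hstack([np.sin(P), np.cos(P)]) / np.sqrt(m)
```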
An alternative approach for large-scale kernel classification is the Nyström method [20, 8], which approximates the kernel matrix by a low-rank matrix. It randomly samples a subset of training examples and computes a kernel matrix $\hat{K}$ for the random samples. It then represents each data point by a vector based on its kernel similarity to the random samples and the sampled kernel matrix $\hat{K}$. Most analysis of the Nyström method follows [8] and bounds the error in approximating the kernel matrix. According to [8], the approximation error of the Nyström method, measured in spectral norm¹, is $O(m^{-1/2})$, where $m$ is the number of sampled training examples. Using the arguments in [6], we expect an additional error of $O(m^{-1/2})$ in the generalization performance caused by the approximation of the Nyström method, similar to random Fourier features.

Contributions. In this work, we first establish a unified framework for both methods from the viewpoint of functional approximation. This is important because random Fourier features and the Nyström method address large-scale kernel learning very differently: random Fourier features aim to approximate the kernel function directly, while the Nyström method is designed to approximate the kernel matrix. The unified framework allows us to see a fundamental difference between the two methods: the basis functions used by random Fourier features are randomly sampled from a distribution independent of the training data, leading to a data-independent vector representation; in contrast, the Nyström method randomly selects a subset of training examples to form its basis functions, leading to a data-dependent vector representation. By exploring this difference, we show that the additional error caused by the Nyström method in the generalization performance can be improved to O(1/m) when there is a large gap in the eigen-spectrum of the kernel matrix. Empirical studies on a synthetic data set and a broad range of real data sets verify our analysis.

2 A Unified Framework for Approximate Large-Scale Kernel Learning

Let $\mathcal{D} = \{(x_1, y_1), \ldots, (x_N, y_N)\}$ be a collection of $N$ training examples, where $x_i \in \mathcal{X} \subseteq \mathbb{R}^d$ and $y_i \in \mathcal{Y}$. Let $\kappa(\cdot, \cdot)$ be a kernel function, $\mathcal{H}_\kappa$ denote the endowed reproducing kernel Hilbert space, and $K = [\kappa(x_i, x_j)]_{N \times N}$ be the kernel matrix for the samples in $\mathcal{D}$. Without loss of generality, we assume $\kappa(x, x) \le 1$ for all $x \in \mathcal{X}$. Let $(\lambda_i, v_i)$, $i = 1, \ldots, N$, be the eigenvalues and eigenvectors of $K$ ranked in descending order of the eigenvalues, and let $V = [V_{ij}]_{N \times N} = (v_1, \ldots, v_N)$ denote the eigenvector matrix. For the Nyström method, let $\hat{\mathcal{D}} = \{\hat{x}_1, \ldots, \hat{x}_m\}$ denote the randomly sampled examples and $\hat{K} = [\kappa(\hat{x}_i, \hat{x}_j)]_{m \times m}$ denote the corresponding kernel matrix. Similarly, let $\{(\hat{\lambda}_i, \hat{v}_i), i \in [m]\}$ denote the eigenpairs of $\hat{K}$ ranked in descending order of the eigenvalues, and $\hat{V} = [\hat{V}_{ij}]_{m \times m} = (\hat{v}_1, \ldots, \hat{v}_m)$. We introduce two linear operators induced by the examples in $\mathcal{D}$ and $\hat{\mathcal{D}}$, i.e.,

$L_N[f] = \frac{1}{N} \sum_{i=1}^{N} \kappa(x_i, \cdot)\, f(x_i), \qquad L_m[f] = \frac{1}{m} \sum_{i=1}^{m} \kappa(\hat{x}_i, \cdot)\, f(\hat{x}_i).$  (1)

It can be shown that both $L_N$ and $L_m$ are self-adjoint operators. According to [18], the eigenvalues of $L_N$ and $L_m$ are $\lambda_i/N$, $i \in [N]$, and $\hat{\lambda}_i/m$, $i \in [m]$, respectively, and their corresponding normalized eigenfunctions $\varphi_j$, $j \in [N]$, and $\hat{\varphi}_j$, $j \in [m]$, are given by

$\varphi_j(\cdot) = \frac{1}{\sqrt{\lambda_j}} \sum_{i=1}^{N} V_{i,j}\, \kappa(x_i, \cdot),\ j \in [N], \qquad \hat{\varphi}_j(\cdot) = \frac{1}{\sqrt{\hat{\lambda}_j}} \sum_{i=1}^{m} \hat{V}_{i,j}\, \kappa(\hat{x}_i, \cdot),\ j \in [m].$  (2)
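Equation (2) has a direct computational reading: given the m sampled points, the eigenfunctions of L_m can be evaluated anywhere in the input space. A minimal sketch (names are illustrative; eigenvalues are floored for numerical stability):

```python
import numpy as np

def empirical_eigenfunctions(kappa, samples):
    """Eq. (2): eigenfunctions of L_m from the eigenpairs of the sampled kernel matrix."""
    K_hat = kappa(samples, samples)               # m x m sampled kernel matrix
    lam, V = np.linalg.eigh(K_hat)
    lam, V = lam[::-1], V[:, ::-1]                # descending eigenvalues
    def phi(X):
        # phi_j(x) = (1 / sqrt(lam_j)) * sum_i V[i, j] * kappa(x_hat_i, x)
        return kappa(X, samples) @ (V / np.sqrt(np.maximum(lam, 1e-12)))
    return lam, phi
```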
To make our discussion concrete, we focus on the RBF kernel², i.e., $\kappa(x, \tilde{x}) = \exp(-\|x - \tilde{x}\|_2^2/[2\sigma^2])$, whose inverse Fourier transform is given by a Gaussian distribution $p(u) = \mathcal{N}(0, \sigma^{-2} I)$ [15]. Our goal is to efficiently learn a kernel prediction function by solving the following optimization problem:

$\min_{f \in \mathcal{H}_D}\ \frac{\lambda}{2}\|f\|^2_{\mathcal{H}_\kappa} + \frac{1}{N}\sum_{i=1}^{N} \ell(f(x_i), y_i),$  (3)

¹ We choose the bound based on spectral norm according to the discussion in [6]. The improved bound obtained in this paper for the Nyström method is valid for any kernel matrix that satisfies the eigengap condition.
b b b b bj )]N ?m , K = [?(b bj )]m?m , K is the pseudo inverse of K, Kb K Kb , where Kb = [?(xi , x xi , x b and r denotes the rank of K. In order to train a linear machine, we can derive a vector representab1 , . . . , ? br ) and b r?1/2 Vbr> (?(x, x b r = diag(? b1 ), . . . , ?(x, x bm ))> , where D tion of data by zn (x) = D > b b br ). It is straightforward to verify that zn (xi ) zn (xj ) = [Kr ]ij . Given the vector Vr = (b v1 , . . . , v representation zn (x), we then learn a linear machine f (x) = w> zn (x) by solving the following optimization problem: N ? 1 X 2 min kwk2 + `(w> zn (xi ), yi ). w?Rr 2 N i=1 3 We use HD , instead of H? in (3), owing to the representer theorem [16]. 3 (7) In order to see how the Nystr?om method can be cast into the unified framework of approximating the large scale kernel machine by functional approximation, we construct the following functional space Han = span(? b1 , . . . , ? br ), where ? b1 , . . . , ? br are the first r normalized eigenfunctions of the operator Lm . The following proposition shows that the linear machine in (7) using the vector representation of the Nystr?om method is equivalent to the approximate kernel machine in (3) by restricting the solution f (?) to an approximate functional space Han . Proposition 2 The linear machine in (7) is equivalent to the following approximate kernel machine minn f ?Ha N ? 1 X kf k2H? + `(f (xi ), yi ), 2 N i=1 (8) Although both random Fourier features and the Nystr?om method can be viewed as variants of the unified framework, they differ significantly in the construction of the approximate functional space Ha . In particular, the basis functions used by random Fourier features are sampled from a Gaussian distribution that is independent from the training examples. In contrast, the basis functions used by the Nystr?om method are sampled from the training examples and are therefore data dependent. This difference, although subtle, can have significant impact on the classification performance. In the case of large eigengap, i.e., the first few eigenvalues of the full kernel matrix are much larger than the remaining eigenvalues, the classification performance is mostly determined by the top eigenvectors. Since the Nystr?om method uses a data dependent sampling method, it is able to discover the subspace spanned by the top eigenvectors using a small number of samples. In contrast, since random Fourier features are drawn from a distribution independent from training data, it may require a large number of samples before it can discover this subspace. As a result, we expect a significantly lower generalization error for the Nystr?om method. To illustrate this point, we generate a synthetic data set consisted of two balanced classes with a total of N = 10, 000 data points generated from uniform distributions in two balls of radius 0.5 centered at (?0.5, 0.5) and (0.5, 0.5), respectively. The ? value in the RBF kernel is chosen by cross-validation and is set to 6 for the synthetic data. To avoid a trivial task, 100 redundant features, each drawn from a uniform distribution on the unit interval, are added to each example. The data points in the first two dimensions are plotted in Figure 1(a) 4 , and the eigenvalue distribution is shown in Figure 1(b). According to the results shown in Figure 1(c), it is clear that the Nystr?om method performs significantly better than random Fourier features. 
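The vector representation behind Eq. (7) can be sketched directly. The map below satisfies $z_n(x_i)^\top z_n(x_j) = [\hat{K}_r]_{ij}$ by construction; the RBF kernel and the landmark choice are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    a2, b2 = (A ** 2).sum(1), (B ** 2).sum(1)
    return np.exp(-(a2[:, None] + b2[None, :] - 2 * A @ B.T) / (2 * sigma ** 2))

def nystrom_feature_map(landmarks, r, sigma=1.0):
    """z_n(x) = D_r^{-1/2} V_r^T (kappa(x, x_hat_1), ..., kappa(x, x_hat_m))^T."""
    lam, V = np.linalg.eigh(rbf(landmarks, landmarks, sigma))
    lam, V = lam[::-1][:r], V[:, ::-1][:, :r]            # top-r eigenpairs of K_hat
    W = V / np.sqrt(np.maximum(lam, 1e-12))              # V_r D_r^{-1/2}
    return lambda X: rbf(X, landmarks, sigma) @ W        # n x r features

# sanity check: inner products of the features reproduce the rank-r
# Nystrom approximation K_b pinv(K_mm)_r K_b^T of the full kernel matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Z = nystrom_feature_map(X[:20], r=10)(X)
K_hat_r = Z @ Z.T
```

A linear model trained on these r-dimensional features is exactly the linear machine in Eq. (7).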
By using only 100 samples, the Nystr?om method is able to make perfect prediction, while the decision made by random Fourier features based method is close to random guess. To evaluate the approximation error of the functional space, we plot in Figure 1(e) and 1(f), respectively, the first two eigenvectors of the approximate kernel matrix computed by the Nystr?om method and random Fourier features using 100 samples. Compared to the eigenvectors computed from the full kernel matrix (Figure 1(d)), we can see that the Nystr?om method achieves a significantly better approximation of the first two eigenvectors than random Fourier features. Finally, we note that although the concept of eigengap has been exploited in many studies of kernel learning [2, 12, 1, 17], to the best of our knowledge, this is the first time it has been incorporated in the analysis for approximate large-scale kernel learning. 3 Main Theoretical Result ? ? Let fm be the optimal solution to the approximate kernel learning problem in (8), and let fN be the ? solution to the full version of kernel learning in (3). Let f be the optimal solution to   ? min F (f ) = kf k2H? + E [`(f (x), y)] , f ?H? 2 where E[?] takes expectation over the joint distribution P (x, y). Following [10], we define the excess risk of any classifier f ? H? as ?(f ) = F (f ) ? F (f ? ). 4 Note that the scales of the two axes in Figure 1(a) are different. 4 (9) Synthetic data Synthetic data 0 1 100 10 0.9 90 ?1 0.7 0.6 0.5 0.4 0.3 0.2 0.5 10 2000 4000 6000 8000 10000 0.01 0 ?0.01 2000 4000 6000 8000 0.4N 0.6N 0.8N 0.01 0.0095 0 10000 2000 4000 6000 8000 0.02 0 ?0.02 2000 4000 6000 8000 5 10 20 50 # random samples 100 (c) Classification accuracy vs the number of samples 10000 0.04 ?0.04 0 Nystrom Method Random Fourier Features 60 N 0.0105 Eigenvector 2 0.02 0.2N (b) Eigenvalues (in logarithmic scale) vs. rank. N is the total number of data points. Eigenvector 1 0.01 Eigenvector 2 Eigenvector 1 0.0105 70 40 rank ?5 1 Eigenvector 1 0 80 50 Eigenvector 2 ?0.5 (a) Synthetic data: the first two dimensions ?0.02 0 ?3 10 ?4 1st dimension 0.0095 0 ?2 10 10 0.1 0 ?1 accuaracy 10 Eigenvalues/N 2nd dimension 0.8 10000 0.04 0.02 0 ?0.02 ?0.04 0 2000 4000 6000 8000 10000 2000 4000 6000 8000 10000 0.05 0 ?0.05 0 (d) the first two eigenvectors of the (e) the first two eigenvectors com- (f) the first two eigenvectors comfull kernel matrix puted by Nystr?om method puted by random Fourier features Figure 1: An Illustration Example ? by the generalization Unlike [6], in this work, we aim to bound the generalization performance of fm ? performance of fN , which better reflects the impact of approximating HD by Han . In order to obtain a tight bound, we exploit the local Rademacher complexity [10]. Define ?(?) =  P 1/2 N 2 2 . Let ?e as the solution to ?e2 = ?(e ?) where the existence and uniqueness i=1 min(? , ?i ) N   q 6 ln N . According of ?e are determined by the sub-root property of ?(?) [4], and  = max ?e, N to [10], we have 2 = O(N ?1/2 ), and when the eigenvalues of kernel function follow a p-power law, ? ? ). Section 4 ) by ?(fN it is improved to 2 = O(N ?p/(p+1) ). The following theorem bounds ?(fm will be devoted to the proof of this theorem. Theorem 1 For 162 e?2N ? ? ? 1, ?r+1 = O(N/m) and 2 ln(2N 3 ) + m (?r ? ?r+1 )/N = ?(1) ? 3 r 2 ln(2N 3 ) m ! , with a probability 1 ? 3N ?3 , we have ? ?(fm ) ? ? 3?(fN )   1e 2 1 + O  + , ? m e suppresses the polynomial term of ln N . where O(?) 
Theorem 1 shows that the additional error caused by the approximation of the Nystr?om method is improved to?O(1/m) when there is a large gap between ?r and ?r+1 . Note that the improvement from O(1/ m) to O(1/m) is very significant from the theoretical viewpoint, because it is well known that the generalization error for kernel learning is O(N ?1/2 ) [4]5 . As a result, to achieve a similar performance as the standard kernel learning, the number of required samples has to be 5 It is possible to achieve a better generalization error bound of O(N ?p/(p+1) ) by assuming the eigenvalues of kernel matrix follow a p-power law [10]. However, large eigengap doest not immediately indicate power law distribution for eigenvalues and and consequently a better generalization error. 5 ? O(N ) if the additional error caused by the kernel approximation is bounded by O(1/ m), leading to a high computational cost. On the other hand, with O(1/m) bound for the additional error caused ? by the kernel approximation, the number of required samples is reduced to N , making it more practical for large-scale kernel learning. We also note that the improvement made for the Nystr?om method relies on the property that Han ? HD and therefore requires data dependent basis functions. As a result, it does not carry over to random Fourier features. 4 Analysis In this section, we present the analysis that leads to Theorem 1. Most of the proofs can be found in ? the supplementary materials. We first present a theorem to show that the excessive risk bound of fm b is related to the matrix approximation error kK ? Kr k2 . Theorem 2 For 162 e?2N ? ? ? 1, with a probability 1 ? 2N ?3 , we have ! b r k2 2 kK ? K ? ? ?N + +e , ?(fm ) ? 3?(fN ) + C2 ? N? where C2 is a numerical constant. In the sequel, we let Kr be the best rank-r approximation matrix for K. By the triangle inequality, b r k2 ? kK ? Kr k2 + kKr ? K b r k2 ? ?r+1 + kKr ? K b r k2 , we thus proceed to bound kK ? K b r k2 . Using the eigenfunctions of Lm and LN , we define two linear operators Hr and H br kKr ? K as r r X X b r [f ](?) = Hr [f ](?) = ?i (?)h?i , f iH? , H ? bi (?)h? bi , f iH? , (10) i=1 i=1 b r k2 is related to the linear operator where f ? H? . The following theorem shows that kKr ? K br . ?H = Hr ? H br > 0 and ?r > 0, we have Theorem 3 For ? 1/2 1/2 b r ? Kr k2 ? N kL ?HL k2 , kK N N where kLk2 stands for the spectral norm of a linear operator L. 1/2 1/2 Given the result in Theorem 3, we move to bound the spectral norm of LN ?HLN . To this end, we assume a sufficiently large eigengap ? = (?r ? ?r+1 )/N . The theorem below bounds 1/2 1/2 kLN ?HLN k2 using matrix perturbation theory [19]. Theorem 4 For ? = (?r ? ?r+1 )/N > 3kLN ? Lm kHS , we have 1/2 1/2 kLN ?HLN k2 ? ? r where ? = max 4kLN ? Lm kHS , ? ? kLN ? Lm kHS ! 2kLN ? Lm kHS ?r+1 , . N ? ? kLN ? Lm kHS Remark To utilize the result in Theorem 4, we consider the case when ?r+1 = O(N/m) and ? = ?(1). We have    1 1/2 1/2 2 ? kLN ?HLN k2 ? O max kLN ? Lm kHS , kLN ? Lm kHS . m ? 1/2 1/2 Obviously, in order to achieve O(1/m) bound for kLN ?HLN k2 , we need an O(1/ m) bound for kLN ? Lm kHS , which is given by the following theorem. 6 Theorem 5 For ?(x, x) ? 1, ?x ? X , with a probability 1 ? N ?3 , we have r 2 ln(2N 3 ) 2 ln(2N 3 ) kLN ? Lm kHS ? + . m m Theorem 5 directly follows from Lemma 2 of [18]. Therefore, by assuming the conditions in Theb r k2 ? orem 1 and combining results from Theorems 3, 4, and 5, we immediately have kK ? K O (N/m). 
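As a numerical companion to these bounds, the following sketch generates a small version of the synthetic data from Section 2, forms the exact RBF kernel matrix, and measures the spectral-norm error of the rank-r Nyström approximation as m grows. Sizes and parameter values are illustrative, and this is not part of the formal proof.

```python
import numpy as np

def two_balls(n=1000, noise_dims=100, seed=0):
    """Two classes in balls of radius 0.5 centered at (-0.5, 0.5) and (0.5, 0.5),
    padded with redundant uniform features, as in the illustration of Section 2."""
    rng = np.random.default_rng(seed)
    y = np.repeat([0, 1], n // 2)
    centers = np.where(y[:, None] == 0, (-0.5, 0.5), (0.5, 0.5))
    theta = rng.uniform(0, 2 * np.pi, n)
    rad = 0.5 * np.sqrt(rng.uniform(0, 1, n))   # uniform inside a disk
    X = np.hstack([centers + np.stack([rad * np.cos(theta), rad * np.sin(theta)], 1),
                   rng.uniform(0, 1, (n, noise_dims))])
    return X, y

def nystrom_spectral_error(K, r, m, rng):
    """Spectral-norm error ||K - K_hat_r||_2 for one random landmark draw."""
    idx = rng.choice(K.shape[0], size=m, replace=False)
    Kb, Kmm = K[:, idx], K[np.ix_(idx, idx)]
    lam, V = np.linalg.eigh(Kmm)
    lam, V = lam[::-1][:r], V[:, ::-1][:, :r]
    K_hat = Kb @ (V / np.maximum(lam, 1e-12)) @ V.T @ Kb.T
    return np.linalg.norm(K - K_hat, 2)

rng = np.random.default_rng(0)
X, _ = two_balls()
x2 = (X ** 2).sum(1)
d2 = x2[:, None] + x2[None, :] - 2 * X @ X.T
K = np.exp(-d2 / (2 * 6.0 ** 2))                # sigma = 6 as in Section 2
for m in (10, 20, 50, 100):
    print(m, nystrom_spectral_error(K, r=2, m=m, rng=rng))
```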
Combining this bound with the result in Theorem 2 and using the union bound, we have, with a probability $1 - 3N^{-3}$, $\Delta(f_m^*) \le 3\Delta(f_N^*) + \frac{C}{\lambda}\big(\epsilon^2 + \frac{1}{m} + e^{-N}\big)$. We complete the proof of Theorem 1 by using the fact $e^{-N} < 1/N \le 1/m$.

5 Empirical Studies

To verify our theoretical findings, we evaluate the empirical performance of the Nyström method and random Fourier features for large-scale kernel learning. Table 1 summarizes the statistics of the six data sets used in our study, including two for regression and four for classification. Note that the datasets CPU, CENSUS, ADULT and FOREST were originally used in [13] to verify the effectiveness of random Fourier features. We evaluate the classification performance by accuracy, and the performance of regression by the mean square error on the testing data. We use uniform sampling in the Nyström method owing to its simplicity. We note that the empirical performance of the Nyström method may be improved by using a different implementation [21, 11]. We download the code for the implementation of random Fourier features from the website http://berkeley.intel-research.net/arahimi/c/random-features. An RBF kernel is used for both methods and for all the datasets. A ridge regression package from [13] is used for the two regression tasks, and LIBSVM [5] is used for the classification tasks. All parameters are selected by 5-fold cross validation. All experiments are repeated ten times, and the prediction performance averaged over the ten trials is reported.

Figure 2 shows the performance of both methods with a varying number of random samples. Note that for the large datasets (i.e., COVTYPE and FOREST), we restrict the maximum number of random samples to 200 because of the high computational cost. We observe that for all the data sets, the Nyström method outperforms random Fourier features.⁶ Moreover, except for COVTYPE with 10 random samples, the Nyström method performs significantly better than random Fourier features, according to t-tests at the 95% significance level. We finally evaluate whether the large eigengap condition, the key assumption for our main theoretical result, holds for the data sets. Due to the large size, except for CPU, we compute the eigenvalues of the kernel matrix based on 10,000 randomly selected examples from each dataset. As shown in Figure 3 (eigenvalues are in logarithmic scale), we observe that the eigenvalues drop very quickly as the rank increases, leading to a significant gap between the top eigenvalues and the remaining eigenvalues.

⁶ We note that the classification performance of the ADULT data set reported in Figure 2 does not match the performance reported in [13]. Given the fact that we use the code provided by [13] and follow the same cross validation procedure, we believe our result is correct. We did not use the KDDCup dataset because of the problem of oversampling, as pointed out in [13].

Table 1: Statistics of data sets.

  TASK    DATA      # TRAIN    # TEST     # Attr.
  Reg.    CPU       6,554      819        21
  Reg.    CENSUS    18,186     2,273      119
  Class.  ADULT     32,561     16,281     123
  Class.  COD-RNA   59,535     271,617    8
  Class.  COVTYPE   464,810    116,202    54
  Class.  FOREST    522,910    58,102     54
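For practical experimentation, the comparison above can be reproduced in spirit with off-the-shelf tools. The sketch below uses scikit-learn's Nystroem and RBFSampler transformers (assuming scikit-learn is available); it substitutes logistic regression for the LIBSVM and ridge-regression solvers used in the paper, and all parameter values are illustrative.

```python
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def compare(X, y, m=100, gamma=0.1, folds=5):
    """Cross-validated accuracy of linear models on Nystroem vs random Fourier features."""
    scores = {}
    for name, mapper in [("nystroem", Nystroem(gamma=gamma, n_components=m)),
                         ("fourier", RBFSampler(gamma=gamma, n_components=m))]:
        clf = make_pipeline(mapper, LogisticRegression(max_iter=1000))
        scores[name] = cross_val_score(clf, X, y, cv=folds).mean()
    return scores
```

The choice of classifier matters less than the feature map here; the qualitative gap between the two maps is what the theory above predicts when the kernel spectrum has a large eigengap.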
Figure 2: Comparison of the Nyström method and random Fourier features (panels: CPU, CENSUS, ADULT, COD-RNA, COVTYPE, FOREST; x-axis: number of random samples; y-axis: mean square error for the regression tasks, accuracy (%) for the classification tasks). Mean values with standard deviations over ten trials are reported.

Figure 3: The eigenvalue distributions of the kernel matrices (panels: CPU, CENSUS, ADULT, COD-RNA, COVTYPE, FOREST; x-axis: rank; y-axis: eigenvalues/N on a logarithmic scale). N is the number of examples used to compute the eigenvalues.

6 Conclusion and Discussion

We study two methods for large-scale kernel learning, i.e., the Nyström method and random Fourier features. One key difference between the two approaches is that the Nyström method uses data dependent basis functions while random Fourier features introduce data independent basis functions. This difference leads to an improved analysis of kernel learning approaches based on the Nyström method: we show that when there is a large eigengap in the kernel matrix, the approximation error of the Nyström method can be improved to O(1/m), leading to a significantly better generalization performance than random Fourier features. We verify this claim in an empirical study. As our study implies, it is important to develop data dependent basis functions for large-scale kernel learning. One direction we plan to explore is to improve random Fourier features by making the sampling data dependent. This can be achieved by introducing a rejection procedure that rejects sampled Fourier components when they do not align well with the top eigenfunctions estimated from the sampled data.

Acknowledgments

This work was partially supported by ONR Award N00014-09-1-0663, NSF IIS-0643494, NSFC (61073097) and 973 Program (2010CB327903).

References
[1] A. Azran and Z. Ghahramani. Spectral methods for automatic multiscale data clustering. In CVPR, pages 190–197, 2006.
[2] F. R. Bach and M. I. Jordan. Learning spectral clustering. Technical Report UCB/CSD-031249, EECS Department, University of California, Berkeley, 2003.
[3] F. R. Bach and M. I. Jordan. Predictive low-rank decomposition for kernel methods. In ICML, pages 33–40, 2005.
[4] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, pages 44–58, 2002.
[5] C. Chang and C. Lin. LIBSVM: a library for support vector machines. TIST, 2(3):27, 2011.
[6] C. Cortes, M. Mohri, and A. Talwalkar. On the impact of kernel approximation on learning accuracy. In AISTATS, pages 113–120, 2010.
[7] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The forgetron: A kernel-based perceptron on a fixed budget. In NIPS, 2005.
[8] P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. JMLR, 6:2153–2175, 2005.
[9] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, pages 2165–2176, 2004.
[10] V. Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems. Springer, 2011.
[11] S. Kumar, M. Mohri, and A. Talwalkar. Ensemble Nyström method. NIPS, pages 1060–1068, 2009.
[12] U. Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[13] A. Rahimi and B. Recht. Random features for large-scale kernel machines. NIPS, pages 1177–1184, 2007.
[14] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. NIPS, pages 1313–1320, 2009.
[15] W. Rudin. Fourier analysis on groups. Wiley-Interscience, 1990.
[16] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[17] T. Shi, M. Belkin, and B. Yu. Data spectroscopy: eigenspace of convolution operators and clustering. The Annals of Statistics, 37(6B):3960–3984, 2009.
[18] S. Smale and D.-X. Zhou. Geometry on probability spaces. Constructive Approximation, 30(3):311–323, 2009.
[19] G. W. Stewart and J. Sun. Matrix Perturbation Theory. Academic Press, 1990.
[20] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. NIPS, pages 682–688, 2001.
[21] K. Zhang, I. W. Tsang, and J. T. Kwok. Improved Nyström low-rank approximation and error analysis. In ICML, pages 1232–1239, 2008.
Repulsive Mixtures

Vinayak Rao
Gatsby Computational Neuroscience Unit, University College London
[email protected]

Francesca Petralia
Department of Statistical Science, Duke University
[email protected]

David B. Dunson
Department of Statistical Science, Duke University
[email protected]

Abstract

Discrete mixtures are used routinely in broad sweeping applications ranging from unsupervised settings to fully supervised multi-task learning. Indeed, finite mixtures and infinite mixtures, relying on Dirichlet processes and modifications, have become a standard tool. One important issue that arises in using discrete mixtures is low separation in the components; in particular, different components can be introduced that are very similar and hence redundant. Such redundancy leads to too many clusters that are too similar, degrading performance in unsupervised learning and leading to computational problems and an unnecessarily complex model in supervised settings. Redundancy can arise in the absence of a penalty on components placed close together even when a Bayesian approach is used to learn the number of components. To solve this problem, we propose a novel prior that generates components from a repulsive process, automatically penalizing redundant components. We characterize this repulsive prior theoretically and propose a Markov chain Monte Carlo sampling algorithm for posterior computation. The methods are illustrated using synthetic examples and an iris data set.

Key Words: Bayesian nonparametrics; Dirichlet process; Gaussian mixture model; Model-based clustering; Repulsive point process; Well separated mixture.

1 Introduction

Discrete mixture models characterize the density of $y \in \mathcal{Y} \subset \Re^m$ as
$$f(y) = \sum_{h=1}^{k} p_h\,\phi(y;\theta_h), \tag{1}$$
where $p = (p_1,\dots,p_k)^T$ is a vector of probabilities summing to one, and $\phi(\cdot;\theta)$ is a kernel depending on parameters $\theta \in \Theta$, which may consist of location and scale parameters. In analyses of finite mixture models, a common concern is over-fitting, in which redundant mixture components located close together are introduced. Over-fitting can have an adverse impact on predictions and degrade unsupervised learning. In particular, introducing components located close together can lead to splitting of well separated clusters into a larger number of closely overlapping clusters. Ideally, the criteria for selecting $k$ in a frequentist analysis and the prior on $k$ and $\{\theta_h\}$ in a Bayesian analysis should guard against such over-fitting. However, the impact of the criteria used and the prior chosen can be subtle.

Recently, [1] studied the asymptotic behavior of the posterior distribution in over-fitted Bayesian mixture models having more components than needed. They showed that a carefully chosen prior will lead to asymptotic emptying of the redundant components. However, several challenging practical issues arise. For their prior and in standard Bayesian practice, one assumes that $\theta_h \sim P_0$ independently a priori. For example, if we consider a finite location-scale mixture of multivariate Gaussians, one may choose $P_0$ to be multivariate Gaussian-inverse Wishart. However, the behavior of the posterior can be sensitive to $P_0$ for finite samples, with higher-variance $P_0$ favoring allocation to fewer clusters. In addition, drawing the component-specific parameters from a common prior tends to favor components located close together unless the variance is high. Sensitivity to $P_0$ is just one of the issues. For finite samples, the weight assigned to redundant components is often substantial.
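To fix notation, here is a minimal sketch (ours; the weights and component parameters are arbitrary) of evaluating the mixture density (1) with univariate Gaussian kernels.

```python
import numpy as np

def norm_pdf(y, mu, sigma):
    # Univariate Gaussian kernel phi(y; mu, sigma).
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_density(y, p, mu, sigma):
    # f(y) = sum_h p_h * phi(y; theta_h) as in (1), with theta_h = (mu_h, sigma_h)
    # and the weights p summing to one.
    return sum(ph * norm_pdf(y, m, s) for ph, m, s in zip(p, mu, sigma))

y = np.linspace(-4.0, 6.0, 5)
print(mixture_density(y, p=[0.6, 0.4], mu=[0.0, 3.0], sigma=[1.0, 0.5]))
```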
This can be attributed to non- or weak identifiability. Each mixture component can potentially be split into multiple components having the same parameters. Even if exact equivalence is ruled out, it can be difficult to distinguish between models having different degrees of splitting of well-separated components into components located close together. This issue can lead to an unnecessarily complex model, and it creates difficulties in estimating the number of components and the component-specific parameters. Existing strategies, such as the incorporation of order constraints, do not adequately address this issue, since it is difficult to choose reasonable constraints in multivariate problems, and even with constraints the components can be close together.

The problem of separating components has been studied for Gaussian mixture models ([2]; [3]). Two Gaussians can be separated by placing an arbitrarily chosen lower bound on the distance between their means. Separated Gaussians have mainly been utilized to speed up convergence of the Expectation-Maximization (EM) algorithm. In choosing a minimal separation level, it is not clear how to obtain a good compromise between values that are too low to solve the problem and ones that are so large that one obtains a poor fit. To avoid such arbitrary hard separation thresholds, we instead propose a repulsive prior that smoothly pushes components apart. In contrast to the vast majority of the recent Bayesian literature on discrete mixture models, instead of drawing the component-specific parameters $\{\theta_h\}$ independently from a common prior $P_0$, we propose a joint prior for $\{\theta_1,\dots,\theta_k\}$ that is chosen to assign low density to $\theta_h$s located close together. The deviation from independence is specified a priori by a pair of repulsion parameters. The proposed class of repulsive mixture models will only place components close together if doing so results in a substantial gain in model fit. As we illustrate, the prior favors a more parsimonious representation of densities while improving practical performance in unsupervised learning. We provide strong theoretical results on rates of posterior convergence and develop Markov chain Monte Carlo algorithms for posterior computation.

2 Bayesian repulsive mixture models

2.1 Background on Bayesian mixture modeling

Considering the finite mixture model in expression (1), a Bayesian specification is completed by choosing priors for the number of components $k$, the probability weights $p$, and the component-specific parameters $\theta = (\theta_1,\dots,\theta_k)^T$. Typically, $k$ is assigned a Poisson or multinomial prior, $p$ a Dirichlet($\alpha$) prior with $\alpha = (\alpha_1,\dots,\alpha_k)^T$, and $\theta_h \sim P_0$ independently, with $P_0$ often chosen to be conjugate to the kernel $\phi$. Posterior computation can proceed via a reversible jump Markov chain Monte Carlo algorithm involving moves for adding or deleting mixture components. Unfortunately, in making a $k \to k+1$ change in model dimension, efficient moves depend critically on the choice of proposal density. [4] proposed an alternative Markov chain Monte Carlo method, which treats the parameters as a marked point process, but it does not have clear computational advantages relative to reversible jump. It has become popular to use over-fitted mixture models in which $k$ is chosen as a conservative upper bound on the number of components, under the expectation that only relatively few of the components will be occupied by subjects in the sample.
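The over-fitted regime just described is easy to visualize; in the following sketch (ours; the values of k and c are arbitrary), a small Dirichlet concentration per component leaves most prior weights negligible.

```python
import numpy as np

rng = np.random.default_rng(0)
k, c = 20, 1.0
# Over-fitted regime: k is a deliberate upper bound and a small Dirichlet
# concentration (here c/k per component) makes most weights negligible,
# so only a few components are effectively occupied.
p = rng.dirichlet(np.full(k, c / k))
print(np.sort(p)[::-1][:5])                       # the few dominant weights
print(int((p < 0.01).sum()), "of", k, "weights fall below 0.01")
```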
From a practical perspective, the success of over-fitted mixture models has been largely due to ease of computation. As motivated in [5], simply letting $\alpha_h = c/k$ for $h = 1,\dots,k$ and a constant $c > 0$ leads to an approximation to a Dirichlet process mixture model for the density of $y$, which is obtained in the limit as $k$ approaches infinity. An alternative finite approximation to a Dirichlet process mixture is obtained by truncating the stick-breaking representation of [6], leading to a similarly simple Gibbs sampling algorithm [7]. These approaches are now used routinely in practice.

2.2 Repulsive densities

We seek a prior on the component parameters in (1) that automatically favors spread-out components near the support of the data. Instead of generating the atoms $\theta_h$ independently from $P_0$, one could generate them from a repulsive process that automatically pushes the atoms apart. This idea is conceptually related to the literature on repulsive point processes [8]. In the spatial statistics literature, a variety of repulsive processes have been proposed. One such model assumes that points are clustered spatially, with the cluster centers having a Strauss density [9], that is, $p(k,\theta) \propto \beta^{k}\gamma^{r(\theta)}$, where $k$ is the number of clusters, $\beta > 0$, $0 < \gamma \le 1$ and $r(\theta)$ is the number of pairs of centers that lie within a pre-specified distance $r$ of each other. A possibly unappealing feature is that the repulsion does not depend directly on the pairwise distances between the clusters. We propose an alternative class of priors, which smoothly push components apart based on pairwise distances.

Definition 1. A density $h(\theta)$ is repulsive if for any $\delta > 0$ there is a corresponding $\epsilon > 0$ such that $h(\theta) < \delta$ for all $\theta \in \Theta \setminus G_\epsilon$, where $G_\epsilon = \{\theta : d(\theta_s,\theta_j) > \epsilon;\ s = 1,\dots,k;\ j < s\}$ and $d$ is a metric.

Depending on the specification of the metric $d(\theta_s,\theta_j)$, a prior satisfying Definition 1 may limit over-fitting or favor well separated clusters. When $d(\theta_s,\theta_j)$ is the distance between the sub-vectors of $\theta_s$ and $\theta_j$ corresponding to locations only, the proposed prior favors well separated clusters. Instead, when $d(\theta_s,\theta_j)$ is the distance between the $s$th and $j$th kernels, a prior satisfying Definition 1 limits over-fitting in density estimation. Though both cases can be implemented, in this paper we focus exclusively on the clustering problem. As a convenient class of repulsive priors which smoothly push components apart, we propose
$$\pi(\theta) = c_1\left(\prod_{h=1}^{k} g_0(\theta_h)\right) h(\theta), \tag{2}$$
with $c_1$ the normalizing constant, which depends on the number of components $k$. The proposed prior is related to a class of point processes from the statistical physics and spatial statistics literature referred to as Gibbs processes [10]. We assume $g_0 : \Theta \to \Re^+$ and $h : \Theta^k \to [0,\infty)$ are continuous with respect to Lebesgue measure, and that $h$ is bounded above by a positive constant $c_2$ and is repulsive according to Definition 1. It follows that the density $\pi$ defined in (2) is also repulsive. A special hardcore repulsion is produced if the repulsion function is zero when at least one pairwise distance is smaller than a pre-specified threshold. Such a density implies choosing a minimal separation level between the atoms. As mentioned in the Introduction, we avoid such arbitrary hard separation thresholds by considering repulsive priors that smoothly push components apart. In particular, we propose two repulsion functions, defined as
$$h(\theta) = \prod_{(s,j)\in A} g\{d(\theta_s,\theta_j)\}, \tag{3}$$
$$h(\theta) = \min_{(s,j)\in A} g\{d(\theta_s,\theta_j)\}, \tag{4}$$
with $A = \{(s,j) : s = 1,\dots,k;\ j < s\}$ and $g : \Re^+ \to [0,M]$ a strictly monotone differentiable function with $g(0) = 0$, $g(x) > 0$ for all $x > 0$ and $M < \infty$.
It is straightforward to show that $h$ in (3) and (4) is integrable and satisfies Definition 1. The two alternative repulsion functions differ in their dependence on the relative distances between components: all the pairwise distances play a role in (3), while (4) depends only on the minimal separation. A flexible choice of $g$ corresponds to
$$g\{d(\theta_s,\theta_j)\} = \exp\left[-\tau\,\{d(\theta_s,\theta_j)\}^{-\nu}\right], \tag{5}$$
where $\tau > 0$ is a scale parameter and $\nu$ is a positive integer controlling the rate at which $g$ approaches zero as $d(\theta_s,\theta_j)$ decreases. Figure 1 shows contour plots of the prior $\pi(\theta_1,\theta_2)$ defined as (2), with $g_0$ the standard normal density, the repulsion function defined as (3) or (4), and $g$ defined as (5), for different values of $(\tau,\nu)$. As $\tau$ and $\nu$ increase, the prior increasingly favors well separated components.

Figure 1: Contour plots of the repulsive prior $\pi(\theta_1,\theta_2)$ under (2), with $h$ as in (3) or (4) and $g$ as in (5), with hyperparameters $(\tau,\nu)$ equal to (I) $(1,2)$, (II) $(1,4)$, (III) $(5,2)$ and (IV) $(5,4)$.

2.3 Theoretical properties

Let the true density $f_0 : \Re^m \to \Re^+$ be defined as $f_0 = \sum_{h=1}^{k_0} p_{0h}\,\phi(\theta_{0h})$ with $\theta_{0h} \in \Theta$, and the $\theta_{0j}$s such that there exists an $\epsilon_1 > 0$ with $\min_{\{(s,j):s<j\}} d(\theta_{0s},\theta_{0j}) \ge \epsilon_1$, $d$ being the Euclidean distance. Let $f = \sum_{h=1}^{k} p_h\,\phi(\theta_h)$ with $\theta_h \in \Theta$. Let $\theta \sim \pi$ with $\theta = (\theta_1,\dots,\theta_k)^T$ and $\pi$ satisfying Definition 1. Let $p \sim \rho$ with $\rho = \mathrm{Dirichlet}(\alpha)$, and $k \sim \mu$ with $\mu(k = k_0) > 0$. Let $\vartheta = (p,\theta)$. These assumptions on $f_0$ and $f$ will be referred to as condition B0. Let $\Pi$ be the prior induced on $\cup_{k=1}^{\infty} \mathcal{F}_k$, where $\mathcal{F}_k$ is the space of all distributions defined as (1). We will focus on $\theta$ being a location parameter, though the results can be extended to location-scale kernels. Let $|\cdot|_1$ denote the $L_1$ norm and $KL(f_0,f) = \int f_0 \log(f_0/f)$ the Kullback-Leibler (K-L) divergence between $f_0$ and $f$. The density $f_0$ belongs to the K-L support of the prior $\Pi$ if $\Pi\{f : KL(f_0,f) < \epsilon\} > 0$ for all $\epsilon > 0$. The next lemma provides sufficient conditions under which the true density is in the K-L support of the prior.

Lemma 1. Assume condition B0 is satisfied with $m = 1$. Let $D_0$ be a compact set containing the parameters $(\theta_{01},\dots,\theta_{0k_0})$. Suppose $\theta \sim \pi$ with $\pi$ satisfying Definition 1. Let $\phi$ and $\pi$ satisfy the following conditions:
A1. for any $y \in \mathcal{Y}$, the map $\theta \mapsto \phi(y;\theta)$ is uniformly continuous;
A2. for any $y \in \mathcal{Y}$, $\phi(y;\theta)$ is bounded above by a constant;
A3. $\int f_0 \left[\log\left\{\sup_{\theta\in D_0}\phi(\theta)\right\} - \log\left\{\inf_{\theta\in D_0}\phi(\theta)\right\}\right] < \infty$;
A4. $\pi$ is continuous with respect to Lebesgue measure, and for any vector $x \in \Theta^k$ with $\min_{\{(s,j):s<j\}} d(x_s,x_j) \ge \varsigma$ for some $\varsigma > 0$ there is a $\delta > 0$ such that $\pi(\theta) > 0$ for all $\theta$ satisfying $\|\theta - x\|_1 < \delta$.
Then $f_0$ is in the K-L support of the prior $\Pi$.

Lemma 2. The repulsive density in (2), with $h$ defined as either (3) or (4), satisfies condition A4 in Lemma 1.
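Before turning to posterior asymptotics, a minimal numerical sketch (ours; the values of τ, ν and the configurations are arbitrary) of the unnormalized repulsive prior (2) with the repulsion functions (3)–(4) and g from (5):

```python
import numpy as np
from itertools import combinations

def g(d, tau=1.0, nu=2):
    # Equation (5): g(d) = exp(-tau * d^(-nu)); 0 at d = 0, tends to 1 as d grows.
    return np.exp(-tau * np.asarray(d, dtype=float) ** (-float(nu)))

def log_repulsive_prior(theta, tau=1.0, nu=2, how="product"):
    # Unnormalized log pi(theta) from (2): sum_h log g0(theta_h) + log h(theta),
    # with g0 standard normal and h the product (3) or minimum (4) repulsion.
    pairs = combinations(range(len(theta)), 2)
    gv = g([abs(theta[s] - theta[j]) for s, j in pairs], tau, nu)
    log_h = np.log(gv).sum() if how == "product" else np.log(gv.min())
    log_g0 = (-0.5 * theta ** 2 - 0.5 * np.log(2.0 * np.pi)).sum()
    return log_g0 + log_h

print(log_repulsive_prior(np.array([0.0, 0.1, 2.0])))   # near-overlapping: strong penalty
print(log_repulsive_prior(np.array([-2.0, 0.0, 2.0])))  # well separated: mild penalty
```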
The next lemma formalizes the posterior rate of concentration for univariate location mixtures of Gaussians.

Lemma 3. Let condition B0 be satisfied, let $m = 1$, and let $\phi$ be the normal kernel depending on a location parameter $\mu$ and a scale parameter $\sigma$. Assume that conditions (i), (ii) and (iii) of Theorem 3.1 in [11] and assumption A4 in Lemma 1 are satisfied. Furthermore, assume that
C1) the joint density $\pi$ leads to exchangeable random variables, and for all $k$ the marginal density of the location parameter $\theta_1$ satisfies $\pi_m(|\theta_1| \ge t) \lesssim \exp(-q_1 t^2)$ for a given $q_1 > 0$;
C2) there are constants $u_1, u_2, u_3 > 0$, possibly depending on $f_0$, such that for any $\epsilon \le u_3$, $\pi(\|\theta - \theta_0\|_1 \le \epsilon) \ge u_1 \exp\{-u_2 k_0 \log(1/\epsilon)\}$.
Then the posterior rate of convergence relative to the $L_1$ metric is $\epsilon_n = n^{-1/2}\log n$.

Lemma 3 is essentially a modification of Theorem 3.1 in [11] to the proposed repulsive mixture model. Lemma 4 gives sufficient conditions for $\pi$ to satisfy conditions C1 and C2 in Lemma 3.

Lemma 4. Let $\pi$ be defined as (2) with $h$ defined as either (3) or (4); then $\pi$ satisfies condition C2 in Lemma 3. Furthermore, if for a positive constant $n_1$ the function $g_0$ satisfies $g_0(|x| \ge t) \lesssim \exp(-n_1 t^2)$, then $\pi$ satisfies condition C1 in Lemma 3.

As motivated above, when the number of mixture components is chosen to be unnecessarily large, it is appealing for the posterior distribution of the weights of the extra components to be concentrated near zero. Theorem 1 formalizes the rate of concentration with increasing sample size $n$. One of the main assumptions required in Theorem 1 is that the posterior rate of convergence relative to the $L_1$ metric is $\epsilon_n = n^{-1/2}(\log n)^q$ with $q \ge 0$. We provided the contraction rate, under the proposed prior specification and a univariate Gaussian kernel, in Lemma 3. However, Theorem 1 is a more general statement, and it applies to multivariate mixture densities with any kernel.

Theorem 1. Let assumptions B0–B5 be satisfied. Let $\pi$ be defined as (2) with $h$ defined as either (3) or (4). If $\bar\alpha = \max(\alpha_1,\dots,\alpha_k) < m/2$ and, for positive constants $r_1, r_2, r_3$, the function $g$ satisfies $g(x) \le r_1 x^{r_2}$ for $0 \le x < r_3$, then
$$\lim_{M\to\infty}\ \limsup_{n\to\infty}\ E_n\left[P\left\{\min_{\sigma\in S_k}\ \sum_{i=k_0+1}^{k} p_{\sigma(i)} > M\,n^{-1/2}(\log n)^{q(1+s(k_0,\alpha)/s_{r_2})}\right\}\right] = 0,$$
with $s(k_0,\alpha) = k_0 - 1 + mk_0 + \bar\alpha(k-k_0)$, $s_{r_2} = r_2 + m/2 - \bar\alpha$, and $S_k$ the set of all possible permutations of $\{1,\dots,k\}$. Assumptions B1–B5 can be found in the supplementary material.

Theorem 1 is a modification of Theorem 1 in [1] to the proposed repulsive mixture model. It implies that the posterior expectation of the weights of the extra components is of order $O\{n^{-1/2}(\log n)^{q(1+s(k_0,\alpha)/s_{r_2})}\}$. When $g$ is defined as (5), the parameters $r_1$ and $r_2$ can be chosen such that $r_1 = \tau$ and $r_2 = \nu$. When the number of components is unknown, with only an upper bound known, the posterior rate of convergence is equivalent to the parametric rate $n^{-1/2}$ [12]. In this case, the rate in Theorem 1 is $n^{-1/2}$ under usual priors or the repulsive prior. However, in our experience using usual priors, the sum of the extra components can be substantial in small to moderate sample sizes, and it often has high variability. As we show in Section 3, for repulsive priors the sum of the extra component weights is close to zero and has small variance for small as well as large sample sizes. On the other hand, when an upper bound on the number of components is unknown, the posterior rate of concentration is $n^{-1/2}(\log n)^q$ with $q > 0$. In this case, according to Theorem 1, using the proposed prior specification the logarithmic factor in Theorem 1 of [1] can be improved.

2.4 Parameter calibration and posterior computation

The parameters involved in the repulsion function $h$ are chosen such that, a priori, the clusters will be adequately separated with high probability. Consider the case where $\phi$ is a location-scale kernel with location and scale parameters $(\mu,\Sigma)$ that is symmetric about $\mu$. Here it is natural to relate the separation of two densities to the distance between their location parameters. The following definition introduces the concept of a separation level between two densities.

Definition 2. Let $f_1$ and $f_2$ be two densities having location-scale parameters $(\mu_1,\Sigma_1)$ and $(\mu_2,\Sigma_2)$ respectively, with $\mu_1,\mu_2 \in \Theta$ and $\Sigma_1,\Sigma_2 \in \Delta$. Given a metric $t(\cdot,\cdot)$, a positive constant $c$ and a function $\psi : \Delta \times \Delta \to \Re^+$, $f_1$ and $f_2$ are $c$-separated if
$$t(\mu_1,\mu_2) \ge c\,\psi(\Sigma_1,\Sigma_2)^{1/2}.$$
Definition 2 is in the spirit of [2], but generalized to any symmetric location-scale kernel. A mixture of $k$ densities is $c$-separated if all pairs of densities are $c$-separated.
The following definition introduces the concept of separation level between two densities. Definition 2. Let f1 and f2 be two densities having location-scale parameters (?1 , ?1 ) and (?2 , ?2 ) respectively, with ?1 , ?2 ? ? and ?1 , ?2 ? ?. Given a metric t(?, ?), a positive constant c and a function ? : ? ? ? ? <+ , f1 and f2 are c-separated if t(?1 , ?2 ) ? c?(?1 , ?2 )1/2 Definition 2 is in the spirit of [2] but generalized to any symmetric location-scale kernel. A mixture of k densities is c-separated if all pairs of densities are c-separated. The parameters of the repulsion 5 (II) (I) 0.4 0.6 0.3 0.4 0.2 0.2 0.1 0 ?10 ?5 0 5 0 10 ?2 0 1 3 0.8 2 0.6 1 0.4 0 0.2 ?1 0 ?3 ?2 ?1 0 2 (IV) (III) 1 2 ?2 ?2 3 ?1 0 1 2 3 Figure 2: (I) Student?s t density, (II) two-components mixture of poorly (solid) and well separated (dot-dash) Gaussian densities, referred as (IIa, IIb), (III) mixture of poorly (dot-dash) and well separated (solid) Gaussian and Pearson densities, referred as (IIIa, IIIb), (IV ) two-components mixture of two-dimensional non-spherical Gaussians function, (?, ?), will be chosen such that, for an a priori chosen separation level c, definition 2 is satisfied with high probability. In practice, for a given pair (?, ?), we estimate the probability of pairwise c-separation empirically by simulating N replicates of (?h , ?h ) for each component h = 1, . . . , k from the prior. The appropriate values (?, ?) are obtained by starting with small values, and increasing until the pre-specified pairwise c-separated probability is reached. In practice, only ? will be calibrated to reach a particular probability value. This is because ? controls the rate at which the density tends to zero as two components approach but not the separation level across them. In practice we have found that ? = 2 provides a good default value and we fix ? at this value in all our applications below. A possible issue with the proposed repulsive mixture prior is that the full conditionals are nonstandard, complicating posterior computation. To address this, we propose a data augmentation scheme, introducing auxiliary slice variables to facilitate sampling [13]. This algorithm is straightforward to implement and is efficient by MCMC standards. Further details can be found in the supplementary material. It will be interesting in future work to develop fast approximations to MCMC for implementation of repulsive mixture models, such as variational methods for approximating the full posterior and optimization methods for obtaining a maximum a posteriori estimate. The latter approach would provide an alternative to usual maximum likelihood estimation via the EM algorithm, which provides a penalty on components located close together. 3 Synthetic examples Synthetic toy examples were considered to assess the performance of the repulsive prior in density estimation, classification and emptying the extra components. Figure 2 plots the true densities in the various synthetic cases that we considered. For each synthetic dataset, repulsive and non-repulsive mixture models were compared considering a fixed upper bound on the number of components; extra components should be assigned small probabilities and hence effectively excluded. The auxiliary variable sampler was run for 10, 000 iterations with a burn-in of 5, 000. The chain was thinned by keeping every 10th simulated draw. To overcome the label switching problem, the samples were post-processed following the algorithm of [14]. 
Details on the parameters involved in the true densities and the choice of prior distributions can be found in the supplementary material. Table 1 shows summary statistics of the K-L divergence, the misclassification error and the sum of extra weights under repulsive and non-repulsive mixtures with six mixture components as the upper bound. Table 1 also shows the misclassification error resulting from hierarchical clustering [15]. In practice, observations drawn from the same mixture component were considered as belonging to the same category, and for each dataset a similarity matrix was constructed. The misclassification error was established in terms of the divergence between the true similarity matrix and the posterior similarity matrix. As shown in Table 1, the K-L divergences under repulsive and non-repulsive mixtures become more similar as the sample size increases. For smaller sample sizes, the results are more similar when components are very well separated. Since a repulsive prior tends to discourage overlapping mixture components, a repulsive model might not estimate the density quite as accurately when a mixture of closely overlapping components is needed. However, as the sample size increases, the fitted density approaches the true density regardless of the degree of closeness among clusters. Again, though repulsive and non-repulsive mixtures perform similarly in estimating the true density, repulsive mixtures place considerably less probability on extra components, leading to more interpretable clusters. In terms of misclassification error, the repulsive model outperforms the other two approaches, while in most cases the worst performance was obtained by the non-repulsive model. Potentially, one may favor fewer clusters, and hence possibly better separated clusters, by penalizing the introduction of new clusters more strongly through the precision in the Dirichlet prior for the weights; in the supplementary materials we demonstrate that this cannot solve the problem.

Table 1: Mean and standard deviation of K-L divergence, misclassification error and sum of extra weights resulting from non-repulsive (N-R) and repulsive (R) mixtures with a maximum number of clusters equal to six, under the synthetic data scenarios I, IIa, IIb, IIIa, IIIb and IV, for n = 100 and n = 1000.

K-L divergence
  N-R: 0·05 0·03 0·07 0·05 0·08 0·22 0·00 0·01 0·01 0·00 0·01 0·02 0·03 0·01 0·02 0·02 0·03 0·04 0·00 0·00 0·00 0·00 0·00 0·00
  R: 0·03 0·08 0·09 0·07 0·09 0·24 0·01 0·01 0·01 0·01 0·01 0·03 0·02 0·02 0·03 0·03 0·03 0·04 0·00 0·00 0·00 0·00 0·00 0·00
Misclassification
  HCT: 0·12 0·11
  N-R: 0·68 0·26 0·41 0·06 0·12 0·17 0·78 0·05 0·21 0·13 0·45 0·65 0·42 0·24 0·14 0·03 0·42 0·14 0·09 0·02 0·20 0·19
  R: 0·09 0·10 0·05 0·09 0·06 0·05 0·11 0·08 0·04 0·08 0·03 0·02 0·06 0·09 0·00 0·05 0·00 0·09 0·05 0·08 0·00 0·03 0·00 0·18 0·05 0·04 0·02 0·03 0·01 0·03 0·05 0·02 0·02 0·03 0·01 0·01
Sum of extra weights
  N-R: 0·30 0·21 0·09 0·29 0·16 0·07 0·13 0·30 0·21 0·03 0·16 0·03 0·10 0·11 0·07 0·09 0·07 0·07 0·11 0·11 0·04 0·10 0·03 0·03
  R: 0·01 0·01 0·01 0·01 0·01 0·08 0·01 0·00 0·00 0·00 0·00 0·26 0·01 0·01 0·01 0·01 0·01 0·05 0·01 0·00 0·00 0·00 0·00 0·03
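The similarity-matrix criterion used for the misclassification results above admits a short sketch (ours; the labels and MCMC draws are toy values).

```python
import numpy as np

def similarity_matrix(labels):
    # S[i, j] = 1 when observations i and j are allocated to the same component.
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def posterior_similarity(label_draws):
    # Average the co-clustering indicator over MCMC draws of the allocations.
    return np.mean([similarity_matrix(z) for z in label_draws], axis=0)

true_S = similarity_matrix([0, 0, 1, 1, 2])
post_S = posterior_similarity([[0, 0, 1, 1, 1], [0, 0, 1, 1, 2]])
print(np.abs(true_S - post_S).mean())   # one simple divergence between the matrices
```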
4 Real data

We assessed the clustering performance of the proposed method on a real dataset, consisting of 150 observations from three different species of iris, each with four measurements. This dataset was previously analyzed by [16] and [17], who proposed new methods to estimate the number of clusters based on minimizing loss functions. They concluded that the optimal number of clusters was two. This result does not agree with the number of species, owing to low separation in the data between two of the species. Such point estimates of the number of clusters do not provide a characterization of uncertainty in clustering, in contrast to Bayesian approaches. Repulsive and non-repulsive mixtures were fitted under different choices of the upper bound on the number of components. Since the data contain three true biological clusters, with two of these having similar distributions of the available features, we would expect the posterior to concentrate on two or three components. Posterior means and standard deviations of the three highest weights were (0·30, 0·23, 0·13) and (0·05, 0·04, 0·04) for non-repulsive and (0·60, 0·30, 0·04) and (0·04, 0·03, 0·02) for repulsive mixtures under six components. Clearly, repulsive priors lead to a posterior more concentrated on two components, and assign low probability to more than three components.

Figure 3: Posterior density of the total probability weight assigned to more than three components in the iris data under a maximum of 6 or 10 components, for non-repulsive (6: solid, 10: dash-dot) and repulsive (6: dash, 10: dot) mixtures.

Figure 3 shows the density of the total probability assigned to the extra components. This quantity was computed considering the number of species as the true number of clusters. According to Figure 3, our repulsive prior specification leads to extra component weights very close to zero regardless of the upper bound on the number of components. The posterior uncertainty is also small. Non-repulsive mixtures assign large weight to extra components, with the posterior uncertainty increasing considerably as the number of components increases.

Discussion

We have proposed a new repulsive mixture modeling framework, which should lead to substantially improved unsupervised learning (clustering) performance in general applications. A key aspect is the soft penalization of components located close together to favor, without sharply enforcing, well separated clusters that should be more likely to correspond to the true missing labels. We have focused on Bayesian MCMC-based methods, but there are numerous interesting directions for ongoing research, including fast optimization-based approaches for learning mixture models with repulsive penalties.

Acknowledgments

This research was partially supported by grant 5R01-ES-017436-04 from the National Institute of Environmental Health Sciences (NIEHS) of the National Institutes of Health (NIH) and by DARPA MSEE.

References
[1] J. Rousseau and K. Mengersen. Asymptotic Behaviour of the Posterior Distribution in Over-Fitted Models. Journal of the Royal Statistical Society B, 73:689–710, 2011.
[2] S. Dasgupta. Learning Mixtures of Gaussians. Proceedings of the 40th Annual Symposium on Foundations of Computer Science, pages 633–644, 1999.
[3] S. Dasgupta and L. Schulman. A Probabilistic Analysis of EM for Mixtures of Separated, Spherical Gaussians. The Journal of Machine Learning Research, 8:203–226, 2007.
[4] M. Stephens. Bayesian Analysis of Mixture Models with an Unknown Number of Components – An Alternative to Reversible Jump Methods. The Annals of Statistics, 28:40–74, 2000.
[5] H. Ishwaran and M. Zarepour. Dirichlet Prior Sieves in Finite Normal Mixtures. Statistica Sinica, 12:941–963, 2002.
[6] J. Sethuraman. A Constructive Definition of Dirichlet Priors. Statistica Sinica, 4:639–650, 1994.
[7] H. Ishwaran and L. F. James. Gibbs Sampling Methods for Stick-Breaking Priors. Journal of the American Statistical Association, 96:161–173, 2001.
[8] M. L. Huber and R. L. Wolpert. Likelihood-Based Inference for Matérn Type-III Repulsive Point Processes. Advances in Applied Probability, 41:958–977, 2009.
[9] A. Lawson and A. Clark. Spatial Cluster Modelling. Chapman & Hall/CRC, London, UK, 2002.
[10] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer, 2008.
[11] C. Scricciolo. Posterior Rates of Convergence for Dirichlet Mixtures of Exponential Power Densities. Electronic Journal of Statistics, 5:270–308, 2011.
[12] H. Ishwaran, L. F. James, and J. Sun. Bayesian Model Selection in Finite Mixtures by Marginal Density Decompositions. Journal of the American Statistical Association, 96:1316–1332, 2001.
[13] P. Damien, J. Wakefield, and S. Walker. Gibbs Sampling for Bayesian Non-Conjugate and Hierarchical Models by Using Auxiliary Variables. Journal of the Royal Statistical Society B, 61:331–344, 1999.
[14] M. Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical Society B, 62:795–810, 2000.
[15] H. Locarek-Junge and C. Weihs. Classification as a Tool for Research. Springer, 2009.
[16] C. Sugar and G. James. Finding the number of clusters in a data set: an information theoretic approach. Journal of the American Statistical Association, 98:750–763, 2003.
[17] J. Wang. Consistent selection of the number of clusters via cross-validation. Biometrika, 97:893–904, 2010.
Kernel Regression and Backpropagation Training with Noise

Petri Koistinen and Lasse Holmström
Rolf Nevanlinna Institute, University of Helsinki
Teollisuuskatu 23, SF-00510 Helsinki, Finland

Abstract

One method proposed for improving the generalization capability of a feedforward network trained with the backpropagation algorithm is to use artificial training vectors which are obtained by adding noise to the original training vectors. We discuss the connection of such backpropagation training with noise to kernel density and kernel regression estimation. We compare by simulated examples (1) backpropagation, (2) backpropagation with noise, and (3) kernel regression in mapping estimation and pattern classification contexts.

1 INTRODUCTION

Let $X$ and $Y$ be random vectors taking values in $\mathbf{R}^d$ and $\mathbf{R}^p$, respectively. Suppose that we want to estimate $Y$ in terms of $X$ using a feedforward network whose input-output mapping we denote by $y = g(x,w)$. Here the vector $w$ includes all the weights and biases of the network. Backpropagation training using the quadratic loss (or error) function can be interpreted as an attempt to minimize the expected loss
$$\lambda(w) = E\|g(X,w) - Y\|^2. \tag{1}$$
Suppose that $E\|Y\|^2 < \infty$. Then the regression function
$$m(x) = E[Y \mid X = x] \tag{2}$$
minimizes the loss $E\|b(X) - Y\|^2$ over all Borel measurable mappings $b$. Therefore, backpropagation training can also be viewed as an attempt to estimate $m$ with the network $g$.

In practice, one cannot minimize $\lambda$ directly because one does not know enough about the distribution of $(X,Y)$. Instead one minimizes a sample estimate
$$\widehat{\lambda}_n(w) = \frac{1}{n}\sum_{i=1}^{n}\|g(x_i,w) - y_i\|^2 \tag{3}$$
in the hope that weight vectors $w$ that are near optimal for $\widehat{\lambda}_n$ are also near optimal for $\lambda$. In fact, under rather mild conditions the minimizer of $\widehat{\lambda}_n$ actually converges towards the minimizing set of weights for $\lambda$ as $n \to \infty$, with probability one (White, 1989). However, if $n$ is small compared to the dimension of $w$, minimization of $\widehat{\lambda}_n$ can easily lead to overfitting and poor generalization, i.e., weights that render $\widehat{\lambda}_n$ small may produce a large expected error $\lambda$.

Many cures for overfitting have been suggested. One can divide the available samples into a training set and a validation set, perform iterative minimization using the training set, and stop minimization when network performance over the validation set begins to deteriorate (Holmström et al., 1990, Weigend et al., 1990). In another approach, the minimization objective function is modified to include a term which tries to discourage the network from becoming too complex (Weigend et al., 1990). Network pruning (see, e.g., Sietsma and Dow, 1991) has a similar motivation. Here we consider the approach of generating artificial training vectors by adding noise to the original samples. We have recently analyzed such an approach and proved its asymptotic consistency under certain technical conditions (Holmström and Koistinen, 1990).
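The gap between (1) and (3) that drives overfitting can be illustrated numerically; in the sketch below (ours; the polynomial stand-in for the network and all sample sizes are assumptions), a heavily parameterized fit makes the sample loss small while a Monte Carlo estimate of the expected loss stays large.

```python
import numpy as np

def g(x, w):
    # Stand-in "network": a polynomial in x with weight vector w.
    return np.polyval(w, x)

def loss(w, x, y):
    # Empirical quadratic loss; on the training set this is lambda_hat_n(w) of (3),
    # and on a large fresh sample it is a Monte Carlo estimate of lambda(w) in (1).
    return np.mean((g(x, w) - y) ** 2)

rng = np.random.default_rng(0)
m_true = lambda x: 0.4 * np.sin(x) + 0.5
x = rng.uniform(-np.pi, np.pi, size=20)
y = m_true(x) + rng.normal(scale=0.1, size=20)

w = np.polyfit(x, y, deg=9)                          # many weights, few samples
x_new = rng.uniform(-np.pi, np.pi, size=100_000)
y_new = m_true(x_new) + rng.normal(scale=0.1, size=100_000)
print("training loss:", loss(w, x, y))               # small: the fit chases the noise
print("expected loss (MC):", loss(w, x_new, y_new))  # larger: overfitting
```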
2 ADDITIVE NOISE AND KERNEL REGRESSION

Suppose that we have $n$ original training vectors $(x_i,y_i)$ and want to generate artificial training vectors using additive noise. If the distributions of both $X$ and $Y$ are continuous, it is natural to add noise to both the $X$ and $Y$ components of the sample. However, if the distribution of $X$ is continuous and that of $Y$ is discrete (e.g., in pattern classification), it feels more natural to add noise to the $X$ components only. In Figure 1 we present sampling procedures for both cases. In the x-only case the additive noise is generated from a random vector $S_X$ with density $K_X$, whereas in the x-and-y case the noise is generated from a random vector $S_{XY}$ with density $K_{XY}$. Notice that we control the magnitude of the noise with a scalar smoothing parameter $h > 0$. In both cases the sampling procedures can be thought of as generating random samples from new random vectors $X_h^{(n)}$ and $Y_h^{(n)}$.

Procedure 1. (Add noise to x only)
1. Select $i \in \{1,\dots,n\}$ with equal probability for each index.
2. Draw a sample $s_x$ from density $K_X$ on $\mathbf{R}^d$.
3. Set $x_h^{(n)} = x_i + h s_x$ and $y_h^{(n)} = y_i$.

Procedure 2. (Add noise to both x and y)
1. Select $i \in \{1,\dots,n\}$ with equal probability for each index.
2. Draw a sample $(s_x,s_y)$ from density $K_{XY}$ on $\mathbf{R}^{d+p}$.
3. Set $x_h^{(n)} = x_i + h s_x$ and $y_h^{(n)} = y_i + h s_y$.

Figure 1: Two Procedures for Generating Artificial Training Vectors.

Using the same argument as in the Introduction, we see that a network trained with the artificial samples tends to approximate the regression function $E[Y_h^{(n)} \mid X_h^{(n)}]$. Generate $I$ uniformly on $\{1,\dots,n\}$ and denote by $f$ and $f(\cdot \mid I = i)$ the density and conditional density of $X_h^{(n)}$. Then in the x-only case we get
$$m_h^{(n)}(X_h^{(n)}) := E[Y_h^{(n)} \mid X_h^{(n)}] = \sum_{i=1}^{n} y_i\,P(I = i \mid X_h^{(n)}) = \sum_{i=1}^{n} y_i\,\frac{f(X_h^{(n)} \mid I = i)\,P(I = i)}{f(X_h^{(n)})} = \frac{\sum_{i=1}^{n} n^{-1} h^{-d} K_X\{(X_h^{(n)} - x_i)/h\}\,y_i}{\sum_{i=1}^{n} n^{-1} h^{-d} K_X\{(X_h^{(n)} - x_i)/h\}}.$$
Denoting $K_X$ by $k$ we obtain
$$m_h^{(n)}(x) = \frac{\sum_{i=1}^{n} k\{(x - x_i)/h\}\,y_i}{\sum_{j=1}^{n} k\{(x - x_j)/h\}}. \tag{4}$$
We arrive at the same expression also in the x-and-y case, provided that $\int y\,K_{XY}(x,y)\,dy = 0$ and that we take $k(x) = \int K_{XY}(x,y)\,dy$ (Watson, 1964). The expression (4) is known as the (Nadaraya-Watson) kernel regression estimator (Nadaraya, 1964, Watson, 1964, Devroye and Wagner, 1980).

A common way to train a $p$-class neural network classifier is to train the network to associate a vector $x$ from class $j$ with the $j$'th unit vector $(0,\dots,0,1,0,\dots,0)$. It is easy to see that then the kernel regression estimator components estimate the class a posteriori probabilities using (Parzen-Rosenblatt) kernel density estimators for the class conditional densities. Specht (1990) argues that such a classifier can be considered a neural network. Analogously, a kernel regression estimator can be considered a neural network, though such a network would need a number of units proportional to the number of training samples. Recently Specht (1991) has advocated using kernel regression and has also presented a clustering variant requiring only a fixed number of units. Notice also the resemblance of kernel regression to certain radial basis function schemes (Moody and Darken, 1989, Stokbro et al., 1990).

An often used method for choosing $h$ is to minimize the cross-validated error (Härdle and Marron, 1985, Friedman and Silverman, 1989)
$$M(h) = \frac{1}{n}\sum_{i=1}^{n}\|y_i - m_{h,i}^{(n)}(x_i)\|^2, \tag{5}$$
where $m_{h,i}^{(n)}$ denotes the estimator (4) computed with the $i$'th sample point left out. Another possibility is to use a method suggested by kernel density estimation theory (Duin, 1976, Habbema et al., 1974), whereby one chooses the $h$ maximizing a cross-validated (pseudo) likelihood function
$$L_{XY}(h) = \prod_{i=1}^{n} \widehat{f}_{h,i}(x_i,y_i), \qquad L_X(h) = \prod_{i=1}^{n} \widehat{f}_{h,i}(x_i), \tag{6}$$
where $\widehat{f}_{h,i}$ is a kernel density estimate with kernel $K_{XY}$ ($K_X$) and smoothing parameter $h$, but with the $i$'th sample point left out.
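A minimal sketch (ours; the Gaussian kernel, the bandwidth and the data are assumptions) of the estimator (4) together with Procedure 1 for generating artificial training vectors:

```python
import numpy as np

def nadaraya_watson(x_query, x, y, h):
    # Kernel regression estimator (4) with a Gaussian kernel k:
    # m_h(x) = sum_i k((x - x_i)/h) y_i / sum_j k((x - x_j)/h).
    w = np.exp(-0.5 * ((x_query[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def procedure_1(x, y, h, factor=10, rng=None):
    # Procedure 1: resample indices with equal probability and jitter the
    # x components only, with s_x drawn from a standard normal density K_x.
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.integers(len(x), size=factor * len(x))
    return x[idx] + h * rng.normal(size=len(idx)), y[idx]

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=40)
y = 0.4 * np.sin(x) + 0.5 + rng.normal(scale=0.1, size=40)
x_art, y_art = procedure_1(x, y, h=0.2, rng=rng)   # artificial training vectors
print(nadaraya_watson(np.array([0.0, 1.0]), x, y, h=0.2))
```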
We experimented with backpropagation, backpropagation with noise and kernel regression. Backpropagation loss function was minimized using Marquardt's method. The network architecture was FN-1-13-1 with 40 adaptable weights (a feedforward network with one input, 13 hidden nodes, one output, and logistic activation functions in the hidden and output layers). We started the local optimizations from 3 different random initial weights and kept the weights giving the least value for ~n. Backpropagation training with noise was similar except that instead of the original n vectors we used IOn artificial vectors generated with Procedure 2 using SXy '" N(O, 12 ). Magnitude of noise was chosen with the criterion Lxy (which, for backpropagation, gave better results than M). In the kernel regression experiments SXy was kept the same. Table 1 characterizes the distribution of J, the expected squared distance of the estimator 9 (g(., w) or m~n) from go, J = E[g(X) - gO(X)]2. Table 2 characterizes the distribution of h chosen according to the criteria Lxy and M and Figure 2 shows the estimators in one instance. Notice that, on the average, kernel regression is better than backpropagation with noise which is better than plain backpropagation. The success of backpropagation with noise is partly due to the fact that (7 and n have here been picked favorably. Notice too that in kernel regression the results with the two cross-validation methods are similar although the h values they suggest are clearly different . In the second experiment we trained classifiers for a four-dimensional two-class problem with equal a priori probabilities and class-conditional densities N(J.ll, C 1 ) and N(J.l2' C2), J.ll = 2.32[1 0 0 O]T, C = 14; 1 J.l2 = 0, C2 = 414. An FN-4-6-2 with 44 adaptable weights was trained to associate vectors from class 1 with [0.9 O.l]T and vectors from class 2 with [0.1 0.9jT. We generated n/2 original vectors from each class and a total of IOn artificial vectors using Procedure 1 with Sx '" N(O, 14). We chose the smoothing parameters, hI and h2' separately for the two classes using the criterion Lx: hi was chosen by evaluating Lx on class i samples only. We formed separate kernel regression estimators for each class; the i'th estimator was trained to output 1 for class i vectors and 0 for the other sample vectors. The M criterion then produces equal values for hI and h2. The classification rule was to classify x to class i if the output corresponding to the i'th class was the maximum output. The error rates are given in Table 3. (The error rate of the Bayesian classifier is 0.116 in this task.) Table 4 summarizes the distribution of hI and h2 as selected by Lx and M . Kernel Regression and Backpropagation Training with Noise Table 1: Results for Mapping Estimation. Mean value (left) and standard deviation (right) of J based on 100 repetitions are given for each method. BP BP+noise, Lxy n 40 80 .0218 .00764 .016 .0048 .0104 .00526 .0079 .0018 Kernel regression M Lxy .00446 .0022 .00365 .0019 .00250 .00078 .00191 .00077 Table 2: Values of h Suggested by the Two Cross-validation Methods in the Mapping Estimation Experiment. Mean value and standard deviation based on 100 repetitions are given. Lxy n 40 80 0.149 0.114 M 0.020 0.011 0.276 0.241 0.086 0.062 Table 3: Error Rates for the Different Classifiers. Mean value and standard deviation based on 25 repetitions are given for each method. 
Table 3: Error Rates for the Different Classifiers. Mean value and standard deviation based on 25 repetitions are given for each method.

  n   | BP          | BP+noise (L_x) | Kernel regression (L_x) | Kernel regression (M)
  44  | .281 (.054) | .189 (.018)    | .201 (.022)             | .207 (.027)
  88  | .264 (.028) | .163 (.011)    | .182 (.010)             | .184 (.013)
  176 | .210 (.023) | .145 (.010)    | .164 (.0089)            | .164 (.011)

Table 4: Values of $h_1$ and $h_2$ Suggested by the Two Cross-validation Methods in the Classification Experiment. Mean value and standard deviation based on 25 repetitions are given.

  n   | L_x: h_1    | L_x: h_2    | M: h_1 = h_2
  44  | .818 (.078) | 1.61 (.14)  | 1.14 (.27)
  88  | .738 (.056) | 1.48 (.11)  | 1.01 (.19)
  176 | .668 (.048) | 1.35 (.090) | .868 (.10)

4 CONCLUSIONS

Additive noise can improve the generalization capability of a feedforward network trained with the backpropagation approach. The magnitude of the noise cannot be selected blindly, though; cross-validation-type procedures seem well suited for selecting the noise magnitude. Kernel regression, however, seems to perform well whenever backpropagation with noise performs well. If the kernel is fixed in kernel regression, we only have to choose the smoothing parameter h, and the method is not overly sensitive to its selection.

References

[Devroye and Wagner, 1980] Devroye, L. and Wagner, T. (1980). Distribution-free consistency results in nonparametric discrimination and regression function estimation. The Annals of Statistics, 8(2):231-239.
[Duin, 1976] Duin, R. P. W. (1976). On the choice of smoothing parameters for Parzen estimators of probability density functions. IEEE Transactions on Computers, C-25:1175-1179.
[Friedman and Silverman, 1989] Friedman, J. and Silverman, B. (1989). Flexible parsimonious smoothing and additive modeling. Technometrics, 31(1):3-21.
[Habbema et al., 1974] Habbema, J. D. F., Hermans, J., and van den Broek, K. (1974). A stepwise discriminant analysis program using density estimation. In Bruckmann, G., editor, COMPSTAT 1974, pages 101-110, Wien. Physica Verlag.
[Härdle and Marron, 1985] Härdle, W. and Marron, J. (1985). Optimal bandwidth selection in nonparametric regression function estimation. The Annals of Statistics, 13(4):1465-1481.
[Holmström and Koistinen, 1990] Holmström, L. and Koistinen, P. (1990). Using additive noise in back-propagation training. Research Reports A3, Rolf Nevanlinna Institute. To appear in IEEE Trans. Neural Networks.
[Holmström et al., 1990] Holmström, L., Koistinen, P., and Ilmoniemi, R. J. (1990). Classification of unaveraged evoked cortical magnetic fields. In Proc. IJCNN-90-WASH DC, pages II:359-362. Lawrence Erlbaum Associates.
[Moody and Darken, 1989] Moody, J. and Darken, C. (1989). Fast learning in networks of locally-tuned processing units. Neural Computation, 1:281-294.
[Nadaraya, 1964] Nadaraya, E. (1964). On estimating regression. Theor. Probability Appl., 9:141-142.
[Sietsma and Dow, 1991] Sietsma, J. and Dow, R. J. F. (1991). Creating artificial neural networks that generalize. Neural Networks, 4:67-79.
[Specht, 1991] Specht, D. (1991). A general regression neural network. IEEE Transactions on Neural Networks, 2(6):568-576.
[Specht, 1990] Specht, D. F. (1990). Probabilistic neural networks. Neural Networks, 3(1):109-118.
[Stokbro et al., 1990] Stokbro, K., Umberger, D., and Hertz, J. (1990). Exploiting neurons with localized receptive fields to learn chaos. NORDITA preprint.
[Watson, 1964] Watson, G. (1964). Smooth regression analysis. Sankhyā Ser. A, 26:359-372.
[Weigend et al., 1990] Weigend, A., Huberman, B., and Rumelhart, D. (1990). Predicting the future: A connectionist approach. International Journal of Neural Systems, 1(3):193-209.
[White, 1989] White, H. (1989). Learning in artificial neural networks: A statistical perspective. Neural Computation, 1:425-464.

[Figure 2: two panels showing the n = 40 original vectors, the artificial vectors, and the fits; top panel: true function and kernel regression fit; bottom panel: BP and BP+noise fits.]

Figure 2: Results from a Mapping Estimation Experiment. Shown are the n = 40 original vectors (o's), the artificial vectors (dots), the true function $a \sin x + b$, and the fitting results using kernel regression, backpropagation, and backpropagation with noise. Here h = 0.16 was chosen with $L_{xy}$. The values of J are 0.0075 (kernel regression), 0.014 (backpropagation with noise) and 0.038 (backpropagation).
Kernel Latent SVM for Visual Recognition

Yang Wang, Department of Computer Science, University of Manitoba, [email protected]
Weilong Yang, School of Computing Science, Simon Fraser University, [email protected]
Greg Mori, School of Computing Science, Simon Fraser University, [email protected]
Arash Vahdat, School of Computing Science, Simon Fraser University, [email protected]

Abstract

Latent SVMs (LSVMs) are a class of powerful tools that have been successfully applied to many applications in computer vision. However, a limitation of LSVMs is that they rely on linear models. For many computer vision tasks, linear models are suboptimal and nonlinear models learned with kernels typically perform much better. Therefore it is desirable to develop the kernel version of LSVM. In this paper, we propose kernel latent SVM (KLSVM), a new learning framework that combines latent SVMs and kernel methods. We develop an iterative training algorithm to learn the model parameters. We demonstrate the effectiveness of KLSVM using three different applications in visual recognition. Our KLSVM formulation is very general and can be applied to solve a wide range of applications in computer vision and machine learning.

1 Introduction

We consider the problem of learning discriminative classification models for visual recognition. In particular, we are interested in models that have the following two characteristics: 1) they can be used on weakly labeled data; 2) they have nonlinear decision boundaries.

Linear classifiers are a popular class of learning methods in computer vision. In the case of binary classification, they are prediction models of the form $f(x) = w^\top x$, where $x$ is the feature vector and $w$ is a vector of model parameters (without loss of generality, we assume linear models without the bias term). The classification decision is based on the value of $f(x)$. Linear classifiers are amenable to efficient and scalable learning/inference, an important factor in many computer vision applications that involve high dimensional features and large datasets. The person detection algorithm in [2] is an example of the success of linear classifiers in computer vision. The detector is trained by learning a linear support vector machine based on HOG descriptors of positive and negative examples. The model parameter $w$ in this detector can be thought of as a statistical template for HOG descriptors of persons.

The reliance on a rigid template $w$ is a major limitation of linear classifiers. As a result, the learned models usually cannot effectively capture all the variations (shape, appearance, pose, etc.) in natural images. For example, the detector in [2] usually only works well when a person is in an upright posture. In the literature, there are two main approaches for addressing this limitation. The first is to introduce latent variables into the linear model. In computer vision, this is best exemplified by the success of deformable part models (DPM) [5] for object detection. DPM captures shape and pose variations of an object class with a root template covering the whole object and several part templates. By allowing these parts to deform from their ideal locations with respect to the root template, DPM provides more flexibility than a rigid template. Learning a DPM involves solving a latent SVM (LSVM) [5, 17], an extension of regular linear SVM for handling latent variables. LSVM provides a general framework for handling "weakly labeled data" arising in many applications.
For example, in object detection, the training data are weakly labeled because we are only given the bounding boxes of the objects without the detailed annotation for each part. In addition to modeling part deformation, another popular application of LSVM is to use it as a mixture model where the mixture component is represented as a latent variable [5, 6, 16].

The other main approach is to directly learn a nonlinear classifier. The kernel method [1] is a representative example along this line of work. A limitation of kernel methods is that learning is more expensive than for linear classifiers on large datasets, although efficient algorithms exist for certain types of kernels (e.g. the histogram intersection kernel (HIK) [10]). One possible way to address the computational issue is to use a nonlinear mapping to convert the original features into some higher dimensional space, and then apply linear classifiers in the high dimensional space [14].

Latent SVM and kernel methods represent two different, yet complementary, approaches for learning classification models that are more expressive than linear classifiers. Both have their own advantages and limitations. The advantage of LSVM is that it provides a general and elegant formulation for dealing with many weakly supervised problems in computer vision. The latent variables in LSVM often have intuitive and semantic meanings, so it is usually easy to adapt LSVM to capture various prior knowledge about the unobserved variables in different applications. Examples of latent variables in the literature include part locations in object detection [5], subcategories in video annotation [16], and object localization in image classification [8]. However, LSVM is essentially a parametric model, so the capacity of such models is limited by their parametric form. In contrast, kernel methods are non-parametric models: the model complexity is implicitly determined by the number of support vectors. Since the number of support vectors can vary depending on the training data, kernel methods can adapt their model complexity to fit the data.

In this paper, we propose kernel latent SVM (KLSVM), a new learning framework that combines latent SVMs and kernel methods. As a result, KLSVM has the benefits of both approaches. On one hand, the latent variables in KLSVM can be intuitive and semantically meaningful. On the other hand, KLSVM is nonparametric in nature, since the decision boundary is defined implicitly by support vectors. We demonstrate KLSVM on three applications in visual recognition: 1) object classification with latent localization; 2) object classification with latent subcategories; 3) recognition of object interactions.

2 Preliminaries

In this section, we introduce some background on latent SVM and on the dual form of SVMs used for deriving kernel SVMs. Our proposed model in Sec. 3 builds upon these two ideas.

Latent SVM: We assume a data instance is of the form $(x, h, y)$, where $x$ is the observed variable and $y$ is the class label. Each instance is also associated with a latent variable $h$ that captures some unobserved information about the data. For example, say we want to learn a "car" model from a set of positive images containing cars and a set of negative images without cars. We know there is a car somewhere in a positive image, but we do not know its exact location. In this case, $h$ can be used to represent the unobserved location of the car in the image.
In this paper, we consider binary classification for simplicity, i.e. $y \in \{+1, -1\}$. Multi-class classification can be easily converted to binary classification, e.g. using the one-vs-all or one-vs-one strategy. To simplify the notation, we also assume the latent variable $h$ takes its value from a discrete set of labels $h \in \mathcal{H}$. However, our formulation is general; we will show how to deal with more complex $h$ in Sec. 3.2 and in one of the experiments (Sec. 4.3).

In latent SVM, the scoring function of sample $x$ is defined as $f_w(x) = \max_h w^\top \phi(x,h)$, where $\phi(x,h)$ is the feature vector defined for the pair $(x,h)$. For example, in the "car model" example, $\phi(x,h)$ can be a feature vector extracted from the image patch at location $h$ of the image $x$. The objective function of LSVM is defined as $L(w) = \frac{1}{2}\|w\|^2 + C\sum_i \max(0, 1 - y_i f_w(x_i))$. LSVM is essentially a non-convex optimization problem. However, the learning problem becomes convex once the latent variable $h$ is fixed for the positive examples. Therefore, we can train the LSVM with an iterative algorithm that alternates between inferring $h$ on positive examples and optimizing the model parameter $w$.

Dual form with fixed $h$ on positive examples: Due to its non-convexity, it is not straightforward to derive the dual form for the general LSVM. Therefore, as a starting point, we first consider a simpler scenario assuming $h$ is fixed (or observed) on the positive training examples. As previously mentioned, the LSVM is then relaxed to a convex problem under this assumption. Note that we will relax this assumption in Sec. 3. In the above "car model" example, this means that we have the ground-truth bounding boxes of the cars in each image. More formally, we are given $M$ positive samples $\{x_i, h_i\}_{i=1}^M$ and $N$ negative samples $\{x_j\}_{j=M+1}^{M+N}$. Inspired by linear SVMs, our goal is to find a linear discriminant $f_w(x,h) = w^\top \phi(x,h)$ by solving the following quadratic program:

$$\mathcal{P}(w^*) = \min_{w,\xi}\ \frac{1}{2}\|w\|^2 + C_1\sum_i \xi_i + C_2\sum_{j,h}\xi_{j,h} \quad (1a)$$
$$\text{s.t.}\quad w^\top \phi(x_i,h_i) \ge 1 - \xi_i, \quad \forall i \in \{1, 2, \dots, M\}, \quad (1b)$$
$$-w^\top \phi(x_j,h) \ge 1 - \xi_{j,h}, \quad \forall j \in \{M+1, \dots, M+N\},\ \forall h \in \mathcal{H}, \quad (1c)$$
$$\xi_i \ge 0,\ \xi_{j,h} \ge 0, \quad \forall i,\ \forall j,\ \forall h \in \mathcal{H}. \quad (1d)$$

Similar to standard SVMs, $\{\xi_i\}$ and $\{\xi_{j,h}\}$ are slack variables for handling soft margins. It is interesting to note that the optimization problem in Eq. 1 is almost identical to that of standard linear SVMs; the only difference lies in the constraints on the negative training examples (Eq. 1c). Since the $h$'s are not observed on negative images, we need to enumerate all possible values of $h$ in Eq. 1c. Intuitively, this means every image patch from a negative image (i.e. a non-car image) is not a car. It is easy to show that Eq. 1 is convex. Similar to the dual form of standard SVMs, we can derive the dual form of Eq. 1 as follows:

$$\mathcal{D}(\alpha^*, \beta^*) = \max_{\alpha,\beta}\ \sum_i \alpha_i + \sum_j\sum_h \beta_{j,h} - \frac{1}{2}\Big\|\sum_i \alpha_i\phi(x_i,h_i) - \sum_j\sum_h \beta_{j,h}\phi(x_j,h)\Big\|^2 \quad (2a)$$
$$\text{s.t.}\quad 0 \le \alpha_i \le C_1\ \forall i; \quad 0 \le \beta_{j,h} \le C_2\ \forall j,\ \forall h \in \mathcal{H}. \quad (2b)$$

The optimal primal parameters $w^*$ for Eq. 1 and the optimal dual parameters $(\alpha^*, \beta^*)$ for Eq. 2 are related as follows:

$$w^* = \sum_i \alpha_i^*\phi(x_i,h_i) - \sum_j\sum_h \beta_{j,h}^*\phi(x_j,h). \quad (3)$$
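To make the alternating LSVM training just described concrete, here is a minimal sketch of the linear case. The use of a generic linear SVM solver for the convex step (which merges $C_1$ and $C_2$ into a single $C$) and all names are our illustrative choices, not the implementation of [5]:

```python
import numpy as np
from sklearn.svm import LinearSVC  # stand-in convex solver for the inner problem

def train_linear_lsvm(pos, neg, phi, H, n_iters=10, C=1.0):
    """Alternate between (a) imputing h on positives and (b) solving the convex SVM.

    pos, neg: lists of raw inputs x; phi(x, h) -> feature vector; H: candidate latent values.
    Every (x_j, h) pair from a negative image is a negative example (constraint 1c).
    """
    X_neg = np.array([phi(x, h) for x in neg for h in H])
    h_pos = [H[0] for _ in pos]                      # arbitrary initialization
    w = None
    for _ in range(n_iters):
        X_pos = np.array([phi(x, h) for x, h in zip(pos, h_pos)])
        X = np.vstack([X_pos, X_neg])
        y = np.hstack([np.ones(len(pos)), -np.ones(len(X_neg))])
        clf = LinearSVC(C=C, fit_intercept=False).fit(X, y)          # convex step (Eq. 1)
        w = clf.coef_.ravel()
        h_pos = [max(H, key=lambda h: w @ phi(x, h)) for x in pos]   # latent step
    return w, h_pos
```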
Let us define $\alpha$ to be the concatenation of $\{\alpha_i : \forall i\}$ and $\{\beta_{j,h} : \forall j, \forall h \in \mathcal{H}\}$, so $|\alpha| = M + N|\mathcal{H}|$. Let $\Phi$ be an $|\alpha| \times D$ matrix, where $D$ is the dimension of $\phi(x,h)$; $\Phi$ is obtained by stacking together $\{\phi(x_i,h_i) : \forall i\}$ and $\{-\phi(x_j,h) : \forall j, \forall h \in \mathcal{H}\}$. We also define $Q = \Phi\Phi^\top$ and $\mathbf{1}$ to be a vector of all 1's. Then Eq. 2a can be rewritten as (we omit the linear constraints on $\alpha$ for simplicity):

$$\max_\alpha\ \alpha^\top\mathbf{1} - \frac{1}{2}\alpha^\top Q\alpha. \quad (4)$$

The advantage of working with the dual form in Eq. 4 is that it only involves the so-called kernel matrix $Q$. Each entry of $Q$ is a dot-product of two vectors of the form $\phi(x,h)^\top\phi(x',h')$. We can replace the dot-product with any other kernel function of the form $k(\phi(x,h), \phi(x',h'))$ to obtain nonlinear classifiers [1]. The scoring function for a test image $x^{new}$ can be kernelized as follows:

$$f(x^{new}) = \max_{h^{new}}\ \Big(\sum_i \alpha_i^*\, k(\phi(x_i,h_i), \phi(x^{new},h^{new})) - \sum_j\sum_h \beta_{j,h}^*\, k(\phi(x_j,h), \phi(x^{new},h^{new}))\Big).$$

Another important, yet often overlooked, fact is that the optimal values of the two quadratic programs in Eqs. 1 and 2 have a specific meaning: they correspond to the inverse of the (soft) margin of the resultant SVM classifier [9, 15], i.e. $\mathcal{P}(w^*) = \mathcal{D}(\alpha^*, \beta^*) = \frac{1}{\text{SVM margin}}$. In the next section, we exploit this fact to develop the kernel latent support vector machine.

3 Kernel Latent SVM

Now we assume the variables $\{h_i\}_{i=1}^M$ on the positive training examples are unobserved. If the scoring function used for classification is of the form $f(x) = \max_h w^\top\phi(x,h)$, we can use the LSVM formulation [5, 17] to learn the model parameters $w$. As mentioned earlier, the limitation of LSVM is the linearity assumption on $w^\top\phi(x,h)$. In this section, we propose kernel latent SVM (KLSVM), a new latent variable learning method that only requires a kernel function $K(x,h,x',h')$ between a pair of $(x,h)$ and $(x',h')$.

Note that when $\{h_i\}_{i=1}^M$ are observed on the positive training examples, we can plug them into Eq. 2 to learn a nonlinear kernelized decision function that separates the positive and negative examples. When $\{h_i\}_{i=1}^M$ are latent, an intuitive thing to do is to find the labeling of $\{h_i\}_{i=1}^M$ so that, when we plug them in and solve Eq. 2, the resulting nonlinear decision function separates the two classes as widely as possible. In other words, we look for a set $\{h_i^*\}$ that maximizes the SVM margin (equivalently, minimizes $\mathcal{D}(\alpha^*, \beta^*, \{h_i\})$). The same intuition was previously used to develop the max-margin clustering method in [15]. Using this intuition, we write the optimal value of the dual form as $\mathcal{D}(\alpha^*, \beta^*, \{h_i\})$, since it now implicitly depends on the labelings $\{h_i\}$. We can jointly find the labelings $\{h_i\}$ and solve for $(\alpha^*, \beta^*)$ via the following optimization problem:

$$\min_{\{h_i\}}\ \mathcal{D}(\alpha^*, \beta^*, \{h_i\}) \quad (5a)$$
$$= \min_{\{h_i\}}\max_{\alpha,\beta}\ \sum_i \alpha_i + \sum_j\sum_h \beta_{j,h} - \frac{1}{2}\Big\|\sum_i \alpha_i\phi(x_i,h_i) - \sum_j\sum_h \beta_{j,h}\phi(x_j,h)\Big\|^2 \quad (5b)$$
$$\text{s.t.}\quad 0 \le \alpha_i \le C_1\ \forall i; \quad 0 \le \beta_{j,h} \le C_2\ \forall j,\ \forall h \in \mathcal{H}. \quad (5c)$$

The most straightforward way of solving Eq. 5 is to optimize $\mathcal{D}(\alpha^*, \beta^*, \{h_i\})$ for every possible combination of values of $\{h_i\}$ and then take the minimum. When $h_i$ takes its value from a discrete set of $K$ possible choices (i.e. $|\mathcal{H}| = K$), this naive approach needs to solve $K^M$ quadratic programs, which is obviously too expensive. Instead, we use the following iterative algorithm:

- Fix $\alpha$ and $\beta$, and compute the optimal $\{h_i\}^*$ by
$$\{h_i\}^* = \arg\max_{\{h_i\}}\ \frac{1}{2}\Big\|\sum_i \alpha_i\phi(x_i,h_i) - \sum_j\sum_h \beta_{j,h}\phi(x_j,h)\Big\|^2. \quad (6)$$

- Fix $\{h_i\}$, and compute the optimal $(\alpha^*, \beta^*)$ by
$$(\alpha^*, \beta^*) = \arg\max_{\alpha,\beta}\ \Big\{\sum_i \alpha_i + \sum_j\sum_h \beta_{j,h} - \frac{1}{2}\Big\|\sum_i \alpha_i\phi(x_i,h_i) - \sum_j\sum_h \beta_{j,h}\phi(x_j,h)\Big\|^2\Big\}. \quad (7)$$

The optimization problem in Eq. 7 is a quadratic program similar to that of a standard dual SVM. As a result, Eq. 7 can be kernelized as in Eq. 4 and solved using a standard dual solver for regular SVMs. In Sec. 3.1, we describe how to kernelize and solve the optimization problem in Eq. 6.
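The outer loop alternating between Eq. 7 and Eq. 6 can be sketched as follows. This sketch collapses $C_1$/$C_2$ into a single $C$ and uses a generic precomputed-kernel SVM solver; update_latents, which implements Eq. 6 via the per-example update of Sec. 3.1, is sketched in the next section's example. All names are ours:

```python
import numpy as np
from sklearn.svm import SVC

def fit_klsvm(K_fn, pos, neg, H, n_outer=5, C=1.0):
    """KLSVM outer loop: alternate Eq. 7 (dual QP with h fixed) and Eq. 6 (update h).

    K_fn((x, h), (x2, h2)) is a joint kernel over (input, latent) pairs.
    """
    h_pos = [H[0] for _ in pos]
    items_neg = [(x, h) for x in neg for h in H]     # every (x_j, h) is a negative
    y = np.hstack([np.ones(len(pos)), -np.ones(len(items_neg))])
    for _ in range(n_outer):
        items = [(x, h) for x, h in zip(pos, h_pos)] + items_neg
        G = np.array([[K_fn(a, b) for b in items] for a in items])   # Gram matrix
        svm = SVC(kernel="precomputed", C=C).fit(G, y)               # Eq. 7
        alpha = np.zeros(len(items))
        alpha[svm.support_] = np.abs(svm.dual_coef_.ravel())
        h_pos = update_latents(K_fn, pos, h_pos, items, alpha, y, H)  # Eq. 6
    return svm, h_pos
```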
3.1 Optimization over $\{h_i\}$

The complexity of a simple enumeration approach for solving Eq. 6 is again $O(K^M)$, which is clearly too expensive for practical purposes. Instead, we solve it iteratively using an algorithm similar to co-ordinate ascent. Within an iteration, we choose one positive training example $t$ and update $h_t$ while fixing $h_i$ for all $i \neq t$. The optimal $h_t^*$ can be computed as follows:

$$h_t^* = \arg\max_{h_t}\ \Big\|\alpha_t\phi(x_t,h_t) + \sum_{i: i\neq t}\alpha_i\phi(x_i,h_i) - \sum_j\sum_h \beta_{j,h}\phi(x_j,h)\Big\|^2 \quad (8a)$$
$$\equiv \arg\max_{h_t}\ \|\alpha_t\phi(x_t,h_t)\|^2 + 2\Big(\sum_{i: i\neq t}\alpha_i\phi(x_i,h_i) - \sum_j\sum_h \beta_{j,h}\phi(x_j,h)\Big)^\top \alpha_t\phi(x_t,h_t). \quad (8b)$$

By replacing the dot-product with a kernel function $k(\phi(x,h), \phi(x',h'))$, we obtain the kernelized version of Eq. 8b:

$$h_t^* = \arg\max_{h_t}\ \alpha_t^2\, k(\phi(x_t,h_t), \phi(x_t,h_t)) + 2\sum_{i: i\neq t}\alpha_i\alpha_t\, k(\phi(x_i,h_i), \phi(x_t,h_t)) - 2\sum_j\sum_h \beta_{j,h}\alpha_t\, k(\phi(x_j,h), \phi(x_t,h_t)). \quad (9)$$

It is interesting to notice that if the $t$-th example is not a support vector (i.e. $\alpha_t = 0$), the objective value of Eq. 9 is zero regardless of the value of $h_t$. This means that in KLSVM we can improve training efficiency by performing Eq. 9 only on the positive examples corresponding to support vectors; for the other positive examples (non-support vectors), we simply keep their latent variables the same as in the previous iteration. Note that in LSVM, the inference during training needs to be performed on every positive example.

Connection to LSVM: When a linear kernel is used, the inference problem (Eq. 8) has a very interesting connection to the LSVM of [5]. Recall that for linear kernels, the model parameters $w$ and the dual variables $(\alpha, \beta)$ are related by Eq. 3. Then Eq. 8 becomes:

$$h_t^* = \arg\max_{h_t}\ \|\alpha_t\phi(x_t,h_t)\|^2 + 2\big(w - \alpha_t\phi(x_t,h_t^{old})\big)^\top \alpha_t\phi(x_t,h_t) \quad (10a)$$
$$\equiv \arg\max_{h_t}\ \alpha_t w^\top\phi(x_t,h_t) + \frac{1}{2}\alpha_t^2\|\phi(x_t,h_t)\|^2 - \alpha_t^2\,\phi(x_t,h_t^{old})^\top\phi(x_t,h_t), \quad (10b)$$

where $h_t^{old}$ is the value of the latent variable of the $t$-th example in the previous iteration. Consider the situation where $\alpha_t \neq 0$ and the feature vector $\phi(x,h)$ is $\ell_2$-normalized, which is common in computer vision. In this case, $\alpha_t^2\,\phi(x_t,h_t)^\top\phi(x_t,h_t)$ is a constant, and we have $\phi(x_t,h_t^{old})^\top\phi(x_t,h_t^{old}) > \phi(x_t,h_t^{old})^\top\phi(x_t,h_t)$ if $h_t \neq h_t^{old}$. Then Eq. 10 is equivalent to:

$$h_t^* = \arg\max_{h_t}\ w^\top\phi(x_t,h_t) - \alpha_t\,\phi(x_t,h_t^{old})^\top\phi(x_t,h_t). \quad (11)$$

Eq. 11 is very similar to the inference problem in LSVM, i.e. $h_t^* = \arg\max_{h_t} w^\top\phi(x_t,h_t)$, but with an extra term $\alpha_t\,\phi(x_t,h_t^{old})^\top\phi(x_t,h_t)$ which penalizes choosing the same value of $h_t$ as in the previous iteration. This has a very appealing intuitive interpretation. If the $t$-th positive example is a support vector, the latent variable $h_t^{old}$ from the previous iteration causes this example to lie very close to (or even on the wrong side of) the decision boundary, i.e. the example is not well separated. In the current iteration, the second term in Eq. 11 penalizes choosing $h_t^{old}$ again, since we already know the example will not be well separated under that choice. The amount of penalty depends on the magnitudes of $\alpha_t$ and $\phi(x_t,h_t^{old})^\top\phi(x_t,h_t)$: we can interpret $\alpha_t$ as how "bad" $h_t^{old}$ is, and $\phi(x_t,h_t^{old})^\top\phi(x_t,h_t)$ as how close $h_t$ is to $h_t^{old}$. Eq. 11 thus penalizes new choices of $h_t$ that are "close" to a "bad" $h_t^{old}$.
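A sketch of the per-example update of Eq. 9, matching the update_latents routine referenced in the earlier outer-loop sketch. Signed dual coefficients encode the $\alpha$/$\beta$ split: positive items contribute $+\alpha_i$ and negative $(x_j, h)$ items contribute $-\beta_{j,h}$ (names ours):

```python
def update_latents(K_fn, pos, h_pos, items, alpha, y, H):
    """One co-ordinate-ascent sweep of Eq. 9 over the positive examples.

    items: current (x, h) pairs, positives first; alpha: dual variables; y: labels.
    Only support vectors (alpha_t > 0) are updated, as noted in the text.
    """
    c = alpha * y                              # signed coefficients: +alpha_i / -beta_jh
    new_h = list(h_pos)
    for t, x_t in enumerate(pos):
        a_t = alpha[t]
        if a_t == 0:                           # non-support vector: objective is flat
            continue
        def score(h):
            self_term = a_t**2 * K_fn((x_t, h), (x_t, h))
            cross = sum(c[i] * K_fn(items[i], (x_t, h))
                        for i in range(len(items)) if i != t)
            return self_term + 2 * a_t * cross
        new_h[t] = max(H, key=score)
    return new_h
```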
3.2 Composite Kernels

So far we have assumed that the latent variable $h$ takes its value from a discrete set of labels. Given a pair $(x,h)$ and $(x',h')$, the kernel functions $k(x,h;x',h')$ we can choose from are then still limited to a handful of standard kernels (e.g. Gaussian, RBF, HIK). In this section, we consider more interesting cases where $h$ involves some complex structure. This gives us two important benefits. First, it allows us to exploit structural information in the latent variables, in analogy to structured output learning (e.g. [12, 13]). More importantly, it gives us the flexibility to construct new kernel functions by composing simple kernels.

Before we proceed, let us motivate the composite kernel with an example application. Suppose we want to detect some complex person-object interaction (e.g. "person riding a bike") in an image. One possible solution is to detect persons and bikes in an image, then combine the results by taking their relationship (i.e. "riding") into account. Imagine we already have kernel functions corresponding to some components (e.g. person, bike) of the interaction. In the following, we show how to compose a new kernel for the "person riding a bike" classifier from those components.

We denote the latent variable by $\vec{h}$ to emphasize that it is now a vector rather than a single discrete value, and write $\vec{h} = (z_1, z_2, \dots)$, where $z_u$ is the $u$-th component of $\vec{h}$ and takes its value from a discrete set of possible labels. For the structured latent variable, it is assumed that there are certain dependencies between some pairs $(z_u, z_v)$. We can use an undirected graph $G = (V, E)$ to capture the structure of the latent variable, where a vertex $u \in V$ corresponds to the label $z_u$, and an edge $(u,v) \in E$ corresponds to the dependency between $z_u$ and $z_v$. As a concrete example, consider the "person riding a bike" recognition problem. The latent variable in this case has two components $\vec{h} = (z_{person}, z_{bike})$ corresponding to the locations of the person and the bike, respectively. On the training data, we have access to the ground-truth bounding box of "person riding a bike" as a whole, but not the exact location of "person" or "bike" within the bounding box, so $\vec{h}$ is latent in this application. The edge connecting $z_{person}$ and $z_{bike}$ captures the relationship (e.g. "riding on", "next to") between these two objects. Given kernel functions corresponding to the vertices and edges in the graph, we can define the composite kernel as the summation of the kernels over all the vertices and edges:

$$K(\phi(x,\vec{h}), \phi(x',\vec{h}')) = \sum_{u \in V} k_u(\phi(x,z_u), \phi(x',z'_u)) + \sum_{(u,v) \in E} k_{uv}(\phi(x,z_u,z_v), \phi(x',z'_u,z'_v)). \quad (12)$$

When the latent variable $\vec{h}$ forms a tree structure, there exist efficient inference algorithms for solving Eq. 9, such as dynamic programming. It is also possible for Eq. 12 to include kernels defined on higher-order cliques in the graph, as long as we have pre-defined kernel functions for them.
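A composite kernel of the form of Eq. 12 can be assembled generically from base kernels on vertices and edges; a small sketch with illustrative names (the feature extractors and base-kernel dictionaries are assumptions of ours):

```python
def composite_kernel(unary_ks, pair_ks, feats_u, feats_p):
    """Eq. 12: sum of per-vertex and per-edge kernels over a structured latent h.

    unary_ks[u] and pair_ks[(u, v)] are base kernels; feats_u / feats_p extract
    the corresponding feature vectors from a pair (x, h).
    """
    def K(xh, xh2):
        x, h = xh
        x2, h2 = xh2
        val = sum(k(feats_u(x, h, u), feats_u(x2, h2, u))
                  for u, k in unary_ks.items())
        val += sum(k(feats_p(x, h, u, v), feats_p(x2, h2, u, v))
                   for (u, v), k in pair_ks.items())
        return val
    return K
```

The returned closure has exactly the signature K_fn((x, h), (x2, h2)) expected by the earlier training sketches.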
4 Experiments

We evaluate KLSVM in three different applications of visual recognition. Each application has a different type of latent variable. For these applications, we show that KLSVM outperforms both the linear LSVM [5] and the regular kernel SVM. Note that we implement the learning of linear LSVM ourselves, using the same iterative algorithm as in [5].

4.1 Object Classification with Latent Localization

Problem and Dataset: We consider object classification with image-level supervision. Our training data only have image-level labels indicating the presence/absence of each object category in an image. The exact object location in the image is not provided and is treated as the latent variable $h$ in our formulation. We define the feature vector $\phi(x,h)$ as the HOG feature extracted from the image at location $h$. During testing, the inference of $h$ is performed by enumerating all possible locations in the image. We evaluate our algorithm on the mammal dataset [8], which consists of 6 mammal categories with about 45 images per category. For each category, we use half of the images for training and the remaining half for testing. We assume the object size is the same for all images of the same category, which is a reasonable assumption for this dataset. This dataset was used to evaluate the linear LSVM in [8].

Results: We compare our algorithm with linear LSVM. To demonstrate the benefit of using latent variables, we also compare with two simple baselines using linear and kernel SVMs based on bag-of-features (BOF) extracted from the whole image (i.e. without latent variables). For both baselines, we aggregate the quantized HOG features densely sampled from the whole image; the features are then fed into a standard linear SVM and kernel SVM, respectively. We use the histogram intersection kernel (HIK) [10], since it has proved successful in vision applications and efficient learning/inference algorithms exist for it. We run the experiments for five rounds. In each round, we randomly split the images of each category into training and testing sets. For both linear LSVM and KLSVM, we initialize the latent variable at the center location of each image and set $C_1 = C_2 = 1$. For both algorithms, we use the one-versus-one classification scheme, and we use the HIK kernel in the KLSVM.

Table 1 summarizes the mean and standard deviation of the classification accuracies over five rounds of experiments. Across all experiments, both linear LSVM and KLSVM achieve significantly better results than the approaches using BOF features from the whole image. This is intuitively reasonable, since most images in this dataset share very similar scenes, so BOF features without latent variables cannot capture the subtle differences between the categories. Table 1 also shows that KLSVM significantly outperforms linear LSVM. Fig. 1 shows examples of how the latent variables change on some training images during the learning of the KLSVM. For each training image, the location of the object (latent variable $h$) is initialized to the center of the image; after the learning algorithm terminates, the latent variables accurately locate the objects.

Figure 1: Visualization of how the latent variable (i.e. object location) changes during learning. The red bounding box corresponds to the initial object location; the blue bounding box corresponds to the object location after learning.

Table 1: Results on the mammal dataset. We show the mean/std of classification accuracies over five rounds of experiments.

  Method            | Acc (%)
  BOF + linear SVM  | 45.57 ± 4.23
  BOF + kernel SVM  | 50.53 ± 6.53
  linear LSVM       | 75.07 ± 4.18
  KLSVM             | 84.49 ± 3.63
4.2 Object Classification with Latent Subcategory

Problem and Dataset: Our second application is also object classification, but here we consider a different type of latent variable. Objects within a category usually exhibit a lot of intra-class variation. For example, consider the images of the "bird" category shown in the left column of Fig. 2: even though they are examples of the same category, they exhibit very large appearance variations, and it is usually very difficult to learn a single "bird" model that captures all of them. One way to handle the intra-class variation is to split the "bird" category into several subcategories; examples within a subcategory will be more visually similar than examples across subcategories. Here we use the latent variable $h$ to indicate the subcategory an image belongs to. If a training image belongs to class $c$, its subcategory label $h$ takes a value from a set $\mathcal{H}_c$ of subcategory labels corresponding to the $c$-th class. Note that subcategories are latent on the training data, so they may or may not have semantic meanings. The feature vector $\phi(x,h)$ is defined as a sparse vector whose dimension is $|\mathcal{H}_c|$ times the dimension of $\phi(x)$, where $\phi(x)$ is the HOG descriptor extracted from image $x$. In the experiments, we set $|\mathcal{H}_c| = 3$ for all $c$, and define $\phi(x, h{=}1) = (\phi(x); 0; 0)$, $\phi(x, h{=}2) = (0; \phi(x); 0)$, and so on. Similar models have been proposed to address viewpoint changes in object detection [6] and semantic variations in YouTube video tagging [16]. We use the CIFAR10 dataset [7] in this experiment. It consists of images from ten classes, including airplane, automobile, bird, cat, etc. The training set is divided into five batches of 10000 images each, and there are 10000 test images in total.

Results: Again we compare with three baselines: linear LSVM, non-latent linear SVM, and non-latent kernel SVM. As before, we use the HIK kernel for the kernel-based methods. For the non-latent approaches, we simply feed the feature vector $\phi(x)$ to the SVMs without any latent variable. We run the experiments in five folds; each fold uses a different training batch but the same testing batch. We set $C_1 = C_2 = 0.01$ for all experiments and initialize the subcategory labels of the training images by k-means clustering. Table 2 summarizes the results. Again, KLSVM outperforms the other baselines. It is interesting to note that both linear LSVM and KLSVM outperform their non-latent counterparts, which demonstrates the effectiveness of using latent subcategories in object classification. We visualize examples of correctly classified testing images from the "bird" and "boat" categories in Fig. 2. Images in the same row are assigned the same subcategory label; visually similar images are automatically grouped into the same subcategory. The feature map used here is sketched in code after this paragraph.

Figure 2: Visualization of some testing examples from the "bird" (left) and "boat" (right) categories. Each row corresponds to a subcategory. Visually similar images are grouped into the same subcategory.

Table 2: Results on the CIFAR10 dataset. We show the mean/std of classification accuracies over five folds of experiments. Each fold uses a different batch of the training data.

  Method                 | Acc (%)
  non-latent linear SVM  | 50.69 ± 0.38
  linear LSVM            | 53.13 ± 0.63
  non-latent kernel SVM  | 52.98 ± 0.22
  KLSVM                  | 55.17 ± 0.27
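The block-sparse subcategory feature map described above is a one-liner in code; a small sketch with 0-based subcategory indices (names ours):

```python
import numpy as np

def subcategory_feature(psi_x, h, n_sub):
    """phi(x, h): place the base descriptor psi(x) in the h-th of n_sub blocks.

    With n_sub = 3: h = 0 -> (psi; 0; 0), h = 1 -> (0; psi; 0), etc., so each
    latent subcategory effectively learns its own template.
    """
    d = psi_x.shape[0]
    phi = np.zeros(n_sub * d)
    phi[h * d:(h + 1) * d] = psi_x
    return phi
```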
4.3 Recognition of Object Interaction

Problem and Dataset: Finally, we consider an application where the latent variable is more complex and requires the composite kernel introduced in Sec. 3.2. We would like to recognize complex interactions between two objects (also called "visual phrases" [11]) in static images. We build a dataset consisting of four object interaction classes, i.e. "person riding a bicycle", "person next to a bicycle", "person next to a car" and "bicycle next to a car", based on the visual phrase dataset of [11]. Each class contains 86-116 images, and each image is associated with exactly one of the four object interaction labels. There is no ground-truth bounding box information for the objects. We use 40 images from each class for training and the rest for testing.

Our approach: We treat the locations of the objects as latent variables. For example, when learning the model for "person riding a bicycle", we treat the locations of "person" and "bicycle" as latent variables. In this example, each image is associated with latent variables $\vec{h} = (z_1, z_2)$, where $z_1$ denotes the location of the person and $z_2$ the location of the bicycle. To reduce the search space of inference, we first apply off-the-shelf "person" and "bicycle" detectors [5] to each image. For each object, we generate five candidate bounding boxes which form a set $Z_i$, i.e. $|Z_1| = |Z_2| = 5$ and $z_i \in Z_i$. The inference of $\vec{h}$ is then performed by enumerating the 25 combinations of $z_1$ and $z_2$. We also assume there are certain dependencies between the pair $(z_1, z_2)$. The kernel between two images is then defined as:

$$K(\phi(x,\vec{h}), \phi(x',\vec{h}')) = \sum_{u \in \{1,2\}} k_u(\phi(x,z_u), \phi(x',z'_u)) + k_p(\phi(z_1,z_2), \phi(z'_1,z'_2)). \quad (13)$$

We define $\phi(x,z_u)$ as the bag-of-features (BOF) extracted from the bounding box $z_u$ in image $x$. For each bounding box, we split the region uniformly into four equal quadrants and compute a bag-of-features for each quadrant by aggregating quantized HOG features; the final feature vector is the concatenation of these four bag-of-features histograms. This representation is similar to the spatial pyramid representation. In our experiments, we choose HIK for $k_u(\cdot)$. The kernel $k_p(\cdot)$ captures the spatial relationship between $z_1$ and $z_2$, such as above, below, overlapping, next-to, near, and far. Here $\phi(z_1,z_2)$ is a sparse binary vector whose $k$-th element is set to 1 if the corresponding $k$-th relation holds between bounding boxes $z_1$ and $z_2$. Note that $k_p(\cdot)$ does not depend on the images; similar representations have been used in [4]. We define $k_p(\cdot)$ as a simple linear kernel.
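The pairwise term of Eq. 13 amounts to a handful of geometric tests on the two boxes; a sketch in which the thresholds are our own illustrative stand-ins, since the text does not spell out the exact relation definitions:

```python
import numpy as np

def spatial_phi(b1, b2):
    """Binary relation vector phi(z1, z2) for the pairwise term k_p in Eq. 13.

    Boxes are (x1, y1, x2, y2). The six tests below approximate the relations
    named in the text: above, below, overlapping, next-to, near, far.
    """
    cx1, cy1 = (b1[0] + b1[2]) / 2, (b1[1] + b1[3]) / 2
    cx2, cy2 = (b2[0] + b2[2]) / 2, (b2[1] + b2[3]) / 2
    overlap = not (b1[2] < b2[0] or b2[2] < b1[0] or b1[3] < b2[1] or b2[3] < b1[1])
    dist = np.hypot(cx1 - cx2, cy1 - cy2)
    diag = np.hypot(b1[2] - b1[0], b1[3] - b1[1])
    rels = [cy1 < cy2 - diag / 2,            # above
            cy1 > cy2 + diag / 2,            # below
            overlap,                         # overlapping
            (not overlap) and dist < diag,   # next-to
            dist < 2 * diag,                 # near
            dist >= 2 * diag]                # far
    return np.array(rels, dtype=float)

def k_p(pair_a, pair_b):
    """Linear kernel on relation vectors: counts the relations both pairs share."""
    return spatial_phi(*pair_a) @ spatial_phi(*pair_b)
```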
We run the experiments for five rounds for approaches using latent variables. In each round, we randomly initialize the choices of z1 and z2 . Table 3 summarizes the results. The kernel latent SVM that uses HIK for ku (?) achieves the best performance. Fig. 3 shows examples of how the latent variables change on some training images during the learning of the KLSVM. For each training image, both latent variables z1 and z2 are randomly initialized to one of five candidate bounding boxes. As we can see, the initial bounding boxes can accurately locate the target objects but their spatial relations are different to ground-truth labels. After learning algorithm terminates, the latent variables not only locate the target objects, but more importantly they also capture the correct spatial relationship between objects. 5 Conclusion We have proposed kernel latent SVM ? a new learning framework that combines the benefits of LSVM and kernel methods. Our learning framework is very general. The latent variables can not only be a single discrete value, but also be more complex values with interdependent structures. Our experimental results on three different applications in visual recognition demonstrate that KLSVM outperforms using LSVM or using kernel methods alone. We believe our work will open the possibility of constructing more powerful and expressive prediction models for visual recognition. Acknowledgement: This work was supported by a Google Research Award and NSERC. Yang Wang was partially supported by a NSERC postdoc fellowship. 8 References [1] C. J. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121?167, 1998. [2] N. Dalal and B. Triggs. Histogram of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005. [3] V. Delaitre, I. Laptev, and J. Sivic. Recognizing human actions in still images: a study of bag-of-features and part-based representations. In British Machine Vision Conference, 2010. [4] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. In IEEE International Conference on Computer Vision, 2009. [5] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1672?1645, 2010. [6] C. Gu and X. Ren. Discriminative mixture-of-templates for viewpoint classification. In European Conference on Computer Vision, 2010. [7] A. Krizhevsky. Learning multiple layers of features from tiny images. Master?s thesis, University of Toronto, 2009. [8] M. P. Kumar, B. Packer, and D. Koller. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, 2010. [9] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. R. Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:24?72, 2004. [10] S. Maji, A. C. Berg, and J. Malik. Classification using intersection kernel support vector machines is efficient. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008. [11] M. A. Sadeghi and A. Farhadi. Recognition using visual phrases. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2011. [12] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In Advances in Neural Information Processing Systems, volume 16. MIT Press, 2004. 
[13] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453-1484, 2005.
[14] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3), 2012.
[15] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 17, pages 1537-1544. MIT Press, Cambridge, MA, 2005.
[16] W. Yang and G. Toderici. Discriminative tag learning on YouTube videos with latent sub-tags. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2011.
[17] C.-N. Yu and T. Joachims. Learning structural SVMs with latent variables. In International Conference on Machine Learning, 2009.
Multilabel Classification using Bayesian Compressed Sensing

Ashish Kapoor (Microsoft Research, Redmond, USA), Prateek Jain and Raajay Viswanathan (Microsoft Research, Bangalore, India)
{akapoor, prajain, t-rviswa}@microsoft.com

Abstract

In this paper, we present a Bayesian framework for multilabel classification using compressed sensing. The key idea in compressed sensing for multilabel classification is to first project the label vector to a lower dimensional space using a random transformation and then learn regression functions over these projections. Our approach considers both of these components in a single probabilistic model, thereby jointly optimizing over compression as well as learning. We then derive an efficient variational inference scheme that provides a joint posterior distribution over all the unobserved labels. The two key benefits of the model are that a) it can naturally handle datasets that have missing labels, and b) it can also measure uncertainty in prediction. The uncertainty estimate provided by the model enables active learning paradigms, where an oracle provides information about labels that promise to be maximally informative for the prediction task. Our experiments show significant boosts over prior methods in terms of prediction performance on benchmark datasets, both in the fully labeled and the missing-labels case. Finally, we also highlight various useful active learning scenarios that are enabled by the probabilistic model.

1 Introduction

Large scale multilabel classification problems arise in several practical applications and have recently generated a lot of interest, with several efficient algorithms being proposed for different settings [1, 2]. A primary reason for the thrust in this area is the explosion of web-based applications, such as Picasa, Facebook and other online sharing sites, that can obtain multiple tags per data point. For example, users on the web can annotate videos and images with several possible labels. Such applications have added a new dimension to the problem, as they typically involve millions of tags.

Most existing multilabel methods learn a decision function or weight vector per label and then combine the decision functions in a certain manner to predict labels for an unseen point [3, 4, 2, 5, 6]. However, such approaches quickly become infeasible in the real world, as the number of labels in these applications is typically very large. For instance, traditional multilabel classification based on 1-vs-all SVM [7] is prohibitive because of both large training and test times. To alleviate this problem, [1] proposed a compressed sensing (CS) based method that exploits the fact that label vectors are usually very sparse, i.e., the number of positive labels/tags present for a point is significantly smaller than the total number of labels. Their algorithm uses the following result from the CS literature: an s-sparse vector in $\mathbb{R}^L$ can be recovered efficiently using $K = O(s \log(L/s))$ measurements. Their method projects label vectors into an $O(s \log(L/s))$-dimensional space and learns a regression function in the projected space (independently for each dimension). For test points, the learnt regression functions are applied in the reduced space, and then standard recovery algorithms from the CS literature are used to obtain sparse predicted labels [8, 9].
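The CS pipeline of [1], as summarized here (project the label vector, regress per compressed dimension, recover a sparse label vector at test time), can be sketched in a few lines; the Gaussian projection, ridge regressors, and orthogonal matching pursuit recovery are our concrete stand-ins for the unspecified components:

```python
import numpy as np
from sklearn.linear_model import Ridge, OrthogonalMatchingPursuit

def train_cs_multilabel(X, Y, K, alpha=1.0, seed=0):
    """Compress L-dim label vectors to K dims, then fit regressors to the projections."""
    rng = np.random.default_rng(seed)
    L = Y.shape[1]
    Phi = rng.standard_normal((K, L)) / np.sqrt(K)    # random projection matrix
    Z = Y @ Phi.T                                     # z_i = Phi y_i
    reg = Ridge(alpha=alpha).fit(X, Z)                # one regressor per compressed dim
    return Phi, reg

def predict_cs_multilabel(Phi, reg, x, s):
    """Regress into the compressed space, then recover an s-sparse label vector."""
    z_hat = reg.predict(x.reshape(1, -1)).ravel()
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=s).fit(Phi, z_hat)
    return omp.coef_                                  # approximate solution of Phi y = z_hat
```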
However, in 1 this method, learning of the decision functions is independent of the sparse recovery and hence in practice, it requires several measurements to match accuracy of the standard baseline methods such as 1-vs-all SVM. Another limitation of this method is that the scheme does not directly apply when labels are missing, a common aspect in real-world web applications. Finally, the method does not lend itself naturally to uncertainty analysis that can be used for active learning of labels. In this paper, we address some of the issues mentioned above using a novel Bayesian framework for multilabel classification. In particular, we propose a joint probabilistic model that combines compressed sensing [10, 11] with a Bayesian learning model on the projected space. Our model can be seen as a Bayesian co-training model, where the lower dimensional projected space can be thought of as latent variables. And these latent variables are generated by two different views: a) using a random projection of the label vector, b) using a (linear) predictor over the input data space. Hence, unlike the method of [1], our model can jointly infer predictions in the projected space and projections of the label vector. This joint inference leads to more efficient utilization of the latent variable space and leads to significantly better accuracies than the method of [1] while using same number of latent variables K. Besides better prediction performance, there are several other advantages offered by our probabilistic model. First, the model naturally handles missing labels as the missing labels are modeled as random variables that can be marginalized out. Second, the model enables derivation of a variational inference method that can efficiently compute joint posterior distribution over all the unobserved random variables. Thus, we can infer labels not only for the test point but also for all the missing labels in the training set. Finally, the inferred posterior over labels provide an estimate of uncertainty making the proposed method amenable to active learning. Active learning is an important learning paradigm that has received a lot of attention due to the availability of large unlabeled data but paucity of labels over these data sets. In the traditional active learning setting (for binary/multiclass classification), at each round the learner actively seeks labels for a selected unlabeled point and updates its models using the provided label. Several criteria, such as uncertainty [12], expected informativeness [13, 14], reduction in version space [15], disagreement among a committee of classifiers [16], etc. have been proposed. While heuristics have been proposed [17] in the case of 1-vs-all SVMs, it is still unclear how these methods can be extended to multilabel classification setting in a principled manner. Our proposed model naturally handles the active learning task as the variational inference procedure provides the required posteriors which can guide information acquisition. Further, besides the traditional active learning scenario, where all the labels are revealed for a selected data, the model leads to extension of information foraging to more practical and novel scenarios. For example, we introduce active diagnosis, where the algorithm only asks about labels for the test case that potentially can help with prediction over the rest of the unobserved tags. Similarly, we can extend to a generalized active learning setting, where the method seeks answer to questions of the type: ?does label ?A? 
exist in data point x?". Such extensions are made feasible by the Bayesian interpretation of the multilabel classification task.

We demonstrate the above mentioned advantages of our model using empirical validation on benchmark datasets. In particular, experiments show that the method significantly outperforms the ML-CS based method of [1] and also obtains accuracies matching 1-vs-all SVM while projecting onto a K-dimensional space that is typically less than half the total number of labels. We expect these gains to become even more significant for datasets with a larger number of labels. We also show that the proposed framework is robust to missing labels and actually outperforms 1-vs-all SVM with about 85-95% missing labels while using only K = 0.5L. Finally, we demonstrate that our active learning strategies select significantly more informative labels/points than the random selection strategy.

2 Approach

Assume that we are given a set of training data points X = {x_i} with labels Y = {y_i}, where each $y_i = [y_{i1}, \ldots, y_{iL}] \in \{0, 1\}^L$ is a multilabel binary vector of size L. Further, let us assume that there are data points in the training set for which we have only partially observed label vectors, which leads to the following partitioning: $X = X_L \cup X_P$. Here the subscripts L and P indicate fully and partially labeled data respectively. Our goal then is to correctly predict all the labels for data in the test set $X_U$. Further, we also seek an active learning procedure that requests as few labels as possible from an oracle while maximizing the classification rate over the test set.

If we treated each label independently, then standard machine learning procedures could be used to train individual classifiers, and this could even be extended to active learning. However, such procedures can be fairly expensive when the number of labels is huge. Further, these methods would simply ignore the missing data and thus may not utilize the statistical relationships amongst the labels. Recent techniques in multilabel classification alleviate the problem of a large output space [1, 18], but cannot handle the missing-data cases. Finally, there are no clear methods for extending these approaches to active learning.

We present a probabilistic graphical model that builds upon ideas from compressed sensing and utilizes statistical relations across the output space for prediction and active information acquisition. The key idea in compressed sensing is to consider a linear transformation of the L-dimensional label vector y to a K-dimensional space z, where $K \ll L$, via a random matrix $\Phi$. The efficiency of the classification system is improved by fitting regression functions to the compressed vectors z instead of the true label space. The proposed framework places Gaussian process priors over the compressed label space and has the capability to propagate uncertainties to the output label space by considering the constraints imposed by the random projection matrix. There are several benefits of the proposed method: 1) it naturally handles missing data by marginalizing over the unobserved labels, 2) the Bayesian perspective leads to valid probabilities that reflect the true uncertainties in the system, which in turn help guide active learning procedures, and 3) the experiments show that the model significantly outperforms state-of-the-art compressed sensing based multilabel classification methods.
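To make the moving parts concrete before the formal model, the following snippet draws a small synthetic dataset in the spirit of the construction above. It is a loose ancestral approximation of the factor graph (the compressed signal z is tied to both views only approximately here, whereas the model couples them through potentials), and every dimension and hyper-parameter value is an illustrative placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, L, K = 200, 20, 50, 12        # points, input dims, labels, compressed dims
a0, b0 = 2.0, 1.0                   # Gamma hyper-parameters (tamer than the
                                    # near-uninformative 1e-6 used in experiments)
sigma, rho = 0.1, 0.1               # noise tying z to W^T x and to Phi y

Phi = rng.standard_normal((K, L)) / np.sqrt(K)   # random projection matrix
W = rng.standard_normal((d, K))                  # regression weights, N(0, I)
X = rng.standard_normal((n, d))                  # inputs

alpha = rng.gamma(a0, 1.0 / b0, size=(n, L))     # per-label precisions
Y = rng.normal(0.0, 1.0 / np.sqrt(alpha))        # sparse-ish label scores
# compressed signals roughly agree with both views, up to noise:
Z = 0.5 * (Y @ Phi.T + X @ W) + rng.normal(0.0, rho, size=(n, K))
```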
2.1 A Model for Multilabel Classification with Bayesian Compressed Sensing

We propose a model that simultaneously handles two key tasks: the first is compressing and recovering the label vector $y_i$ to and from the lower dimensional representation $z_i$; the second is estimating, given an input data point $x_i$, the low dimensional representation in the compressed space. Instead of solving each of these tasks separately, the proposed approach aims at achieving better performance by considering both of them jointly, thereby modeling the statistical relationships amongst the different variables of interest.

Figure 1 illustrates the factor graph corresponding to the proposed model. For every data point $x_i$, the output labels $y_i$ influence the compressed latent vector $z_i$ via the random projection matrix $\Phi$. These compressed signals are in turn also influenced by the d-dimensional feature vector $x_i$ via K different linear regression functions, represented as a $d \times K$ matrix W. Consequently, the role of $z_i$ is not only to compress the output space but also to maintain compatibility with the input data point. The latent variable W corresponding to the linear model has a spherical Gaussian prior and is motivated by Gaussian process regression [19]. Note that when $z_i$ is observed, the model reduces to simple Gaussian process regression.

One of the critical assumptions in compressed sensing is that the output label vector $y_i$ is sparse. The proposed model induces this constraint via a zero-mean Gaussian prior on each of the labels (i.e., $y_{ij} \sim \mathcal{N}(0, 1/\alpha_{ij})$), where the precision $\alpha_{ij}$ of the normal distribution follows a Gamma prior $\alpha_{ij} \sim \Gamma(a_0, b_0)$ with hyper-parameters $a_0$ and $b_0$. The Gamma prior was proposed earlier in the context of the Relevance Vector Machine (RVM) [20], as it not only induces sparsity but is also a conjugate prior to the precision $\alpha_{ij}$ of the zero-mean Gaussian distribution. Intuitively, marginalizing the precisions in the product of Gamma priors and Gaussian likelihoods yields a potential function on the labels that is a Student-t distribution with significant probability mass around zero. Thus, the labels $y_{ij}$ naturally tend to zero unless they need to explain observed data. Finally, the conjugate-exponential form between the precisions $\alpha_i$ and the output labels $y_i$ leads to an efficient inference procedure, which we describe later in the paper. Note that for labeled training data $x_i \in X_L$ all the labels $y_i$ are observed, while only some or none of the labels are observed for the partially labeled and test cases respectively.

The proposed model ties the input feature vectors X to the output space Y via the compressed representations Z according to the following distribution:

$$p(Y, Z, W, [\alpha_i]_{i=1}^N \mid X, \Phi) = \frac{1}{Z}\, p(W) \prod_{i=1}^{N} f_{x_i}(W, z_i)\, g_{\Phi}(y_i, z_i)\, h_{\alpha_i}(y_i)\, p(\alpha_i),$$

where Z is the partition function (normalization term), $p(W) = \prod_{i=1}^{K} \mathcal{N}(w_i; 0, I)$ is the spherical Gaussian prior on the linear regression functions, and $p(\alpha_i) = \prod_{j=1}^{L} \Gamma(\alpha_{ij}; a_0, b_0)$ is the product of Gamma priors on the individual label precisions. Finally, the potentials $f_{x_i}(\cdot,\cdot)$, $g_{\Phi}(\cdot,\cdot)$ and $h_{\alpha_i}(\cdot)$ are specified below.

[Figure 1 shows the factor graph of the model; only its caption is recoverable from the extraction.]
Figure 1: A Bayesian model for multilabel classification via compressed sensing.
The input data point is $x_i$ with multiple labels $y_i$, which are fully observed for the fully labeled training set L, partially observed for training data with missing labels (P), or completely unobserved as in the test data U. The latent variables $z_i$ represent the compressed label space, and the precisions $\alpha_i$, with independent Gamma priors, enforce sparsity. The set of regression functions described by W is also a latent random variable and is connected across all the data points.

The potentials take the following form:

$$f_{x_i}(W, z_i) = e^{-\frac{\|W^T x_i - z_i\|^2}{2\sigma^2}}, \qquad g_{\Phi}(y_i, z_i) = e^{-\frac{\|\Phi y_i - z_i\|^2}{2\rho^2}}, \qquad h_{\alpha_i}(y_i) = \prod_{j=1}^{L} \mathcal{N}\big(y_{ij}; 0, \tfrac{1}{\alpha_{ij}}\big).$$

Intuitively, the potential term $f_{x_i}(W, z_i)$ favors configurations that are aligned with the output of the linear regression function applied to the input feature vector. Similarly, the term $g_{\Phi}(y_i, z_i)$ favors configurations that are compatible with the compressive projections determined by $\Phi$. Finally, as described earlier, $h_{\alpha_i}(y_i)$ enforces sparsity in the output space. The parameters $\sigma^2$ and $\rho^2$ are noise parameters and determine how tightly the labels in the output space, the compressed space and the regression coefficients are coupled. By changing the values of these parameters we can emphasize or de-emphasize the relationships between the latent variables.

In summary, our model provides a powerful framework for modeling multilabel classification using compressive sensing. The model promises statistical efficiency by jointly considering compressive sensing and regression within a single model. Moreover, as we will see in the next section, this model allows efficient numerical procedures for inferring unobserved labels by resolving the constraints imposed by the potential functions and the observed data. The model naturally handles the case of missing data (incomplete labels) by automatically marginalizing out the unobserved data as part of the inference mechanism. Finally, the probabilistic nature of the approach provides us with valid probabilistic quantities that can be used to perform active selection of unlabeled points.

2.2 Inference

First, consider the simpler scenario where the training data set consists only of fully labeled instances $X_L$ with labels $Y_L$. Our aim is then to infer $p(Y_U \mid X, Y_L, \Phi)$, the posterior distribution over the unlabeled data. Performing exact inference in this model is prohibitive, primarily for the following reason: the joint distribution is a product of Gaussian terms (the spherical prior on W and the compatibility terms with $z_i$) and non-Gaussian terms (the Gamma priors). Along with these sparsity terms, the projection of the label space into the compressed space precludes exact inference via a junction tree algorithm. Thus, we resort to approximate inference. In particular, we perform approximate inference by maximizing the variational lower bound under the assumption that the posterior over the unobserved random variables W, $Y_U$, Z and $[\alpha_i]_{i=1}^N$ factorizes:

$$F = \int_{Y_U, Z, W, [\alpha]_{i=1}^N} q(Y_U)\, q(Z)\, q(W)\, q([\alpha_i]_{i=1}^N) \log \frac{p(Y, Z, W, [\alpha_i]_{i=1}^N \mid X, \Phi)}{q(Y_U)\, q(Z)\, q(W)\, q([\alpha_i]_{i=1}^N)} \;\le\; \log \int_{Y_U, Z, W, [\alpha]_{i=1}^N} p(Y, Z, W, [\alpha_i]_{i=1}^N \mid X, \Phi).$$

Here, the posteriors on the precisions $\alpha_i$ are assumed to be Gamma distributed, while the rest of the distributions are constrained to be Gaussian. Further, each of these joint posterior densities is assumed to have the following per-data-point factorization: $q(Y_U) = \prod_{i \in U} q(y_i)$, $q(Z) = \prod_{i \in U \cup L} q(z_i)$ and $q([\alpha]_{i=1}^N) = \prod_{i=1}^{N}\prod_{j=1}^{L} q(\alpha_{ij})$.
Similarly, the posterior over the regression functions has a per-dimension factorization: $q(W) = \prod_{i=1}^{K} q(w_i)$. The approximate inference algorithm computes good approximations to the true posteriors by iteratively optimizing the variational bound described above. Specifically, given the approximations $q^t(y_i) \approx \mathcal{N}(\mu^t_{y_i}, \Sigma^t_{y_i})$ (with similar forms for $z_i$ and $w_i$) and $q^t(\alpha_{ij}) \approx \Gamma(a^t_{ij}, b^t_{ij})$ from the $t$-th iteration, the update rules are as follows:

Update for $q^{t+1}(y_i)$: $\;\Sigma^{t+1}_{y_i} = [\mathrm{diag}(E(\alpha_i)) + \rho^{-2}\Phi^T\Phi]^{-1}$, $\quad \mu^{t+1}_{y_i} = \rho^{-2}\,\Sigma^{t+1}_{y_i}\Phi^T\mu^t_{z_i}$,

Update for $q^{t+1}(\alpha_{ij})$: $\;a^{t+1}_{ij} = a_0 + 0.5$, $\quad b^{t+1}_{ij} = b_0 + 0.5\,\big[\Sigma^{t+1}_{y_i}(j,j) + [\mu^{t+1}_{y_i}(j)]^2\big]$,

Update for $q^{t+1}(z_i)$: $\;\Sigma^{t+1}_{z_i} = [\sigma^{-2} I + \rho^{-2} I]^{-1}$, $\quad \mu^{t+1}_{z_i} = \Sigma^{t+1}_{z_i}\big[\sigma^{-2}(\mu^{t+1}_W)^T x_i + \rho^{-2}\Phi\,\mu^{t+1}_{y_i}\big]$,

Update for $q^{t+1}(w_i)$: $\;\Sigma^{t+1}_{w_i} = [\sigma^{-2} XX^T + I]^{-1}$, $\quad \mu^{t+1}_{w_i} = \sigma^{-2}\,\Sigma^{t+1}_{w_i} X\,[\mu^{t+1}_z(i)]^T$.

Alternating between the above updates can be viewed as message passing between the low-dimensional regression outputs and the higher dimensional output labels, which in turn are constrained to be sparse. The update of $q(y_i)$ attempts to explain the compressed signal $z_i$ subject to the sparsity imposed by the precisions $\alpha_i$. Similarly, by updating $q(z_i)$ and $q(W)$ the inference procedure reasons about the compressed representation that is most efficient in terms of reconstruction. By iterating between these updates the model consolidates information from the two key components that constitute the system, compressed sensing and regression, and is more effective than performing these tasks in isolation.

Also note that the most expensive step is the computation of $\Sigma^{t+1}_{y_i}$ in the first update, which if implemented naively would require the inversion of an $L \times L$ matrix. However, this inverse can be computed cheaply using the Sherman-Morrison-Woodbury formula, which reduces the complexity of the update to $O(K^3 + K^2 L)$. The only other significant update is the posterior computation for $q(w)$, which is $O(d^3)$, where d is the dimensionality of the feature space. Consequently, this scheme is fairly efficient and has time complexity similar to that of other non-probabilistic approaches. Finally, note that a straightforward extension to non-linear regression functions is possible via the kernel trick.

Handling Missing Labels in Training Data: The proposed model and inference procedure naturally handle missing labels in the training set via the variational inference. Consider a data point $x_p$ with a set of partially observed labels $y_p^o$. If we denote by $y_p^u$ the set of unobserved labels, then all the above update steps stay the same except the one for $q(z_p)$, which takes the form:

$$\mu^{t+1}_{z_p} = \Sigma^{t+1}_{z_p}\big[\sigma^{-2}(\mu^{t+1}_W)^T x_p + \rho^{-2}\,\Phi_{uo}\,[\mu^{t+1}_{y_p^u};\; y_p^o]\big].$$

Here $\Phi_{uo}$ denotes a re-ordering of the columns of $\Phi$ according to the indices of the unobserved and observed labels. Intuitively, the compressed signal $z_p$ now considers compatibility with the unobserved labels while taking the observed labels into account, and in doing so effectively facilitates message passing between all the latent random variables.

Handling a Test Point: While it might seem that the above framework operates in a transductive setting, we show here that this is not the case and that the framework seamlessly handles test data in an inductive setting. Note that, given a training set, we can recover the posterior distribution $q(W)$, which summarizes the regression parameters.
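A compact transcription of one full cycle of these updates, for the case where all labels are treated as unobserved (a labeled point would simply clamp its $y_i$ to the observed values), might look as follows. The naive $L \times L$ inverse is kept for clarity where the Woodbury identity would be used in practice, and all variable names are ours:

```python
import numpy as np

def variational_cycle(X, Phi, sigma, rho, a0, b0, n_iters=50):
    """Coordinate-ascent updates of Section 2.2.
    X: n x d inputs; Phi: K x L random projection. Returns posterior means."""
    n, d = X.shape
    K, L = Phi.shape
    mu_z = np.zeros((n, K))
    mu_w = np.zeros((d, K))
    E_alpha = np.full((n, L), a0 / b0)     # prior mean of the precisions
    mu_y = np.zeros((n, L))
    for _ in range(n_iters):
        # q(y_i): Gaussian; naive inverse (Woodbury gives O(K^3 + K^2 L))
        b = np.full((n, L), b0)
        for i in range(n):
            Sigma_y = np.linalg.inv(np.diag(E_alpha[i]) + Phi.T @ Phi / rho**2)
            mu_y[i] = Sigma_y @ (Phi.T @ mu_z[i]) / rho**2
            # q(alpha_ij): Gamma(a0 + 1/2, b0 + E[y_ij^2] / 2)
            b[i] = b0 + 0.5 * (np.diag(Sigma_y) + mu_y[i] ** 2)
        E_alpha = (a0 + 0.5) / b
        # q(z_i): the two isotropic precisions add; means blend both views
        prec = 1.0 / sigma**2 + 1.0 / rho**2
        mu_z = ((X @ mu_w) / sigma**2 + (mu_y @ Phi.T) / rho**2) / prec
        # q(w_k): ridge-regression-like pull of W^T x_i toward mu_z
        Sigma_w = np.linalg.inv(X.T @ X / sigma**2 + np.eye(d))
        mu_w = Sigma_w @ X.T @ mu_z / sigma**2
    return mu_y, mu_z, mu_w
```

At test time the same loop is run with the posterior over W frozen, iterating only the y, alpha and z updates for the new point.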
The posterior distribution $q(W)$ is sufficient for doing inference on a test point $x_*$. Intuitively, the key idea is that the information about the training set is fully captured in the regression parameters; the labels for the test point can therefore be recovered by iteratively updating only $q(y_*)$, $q(z_*)$ and $q(\alpha_*)$.

2.3 Active Learning

The main aim in active learning is to seek the bits of information that promise to enhance the discriminative power of the framework the most. When employed in a traditional classification setting, the active learning procedure boils down to seeking the label of the unlabeled example that promises to be most informative, and then updating the classification model by incorporating it into the existing training set. Multilabel classification, however, enables richer forms of active information acquisition, which we describe below:

• Traditional Active Learning: This is similar to the active learning scenario in traditional classification tasks. In particular, the goal is to select an unlabeled sample for which all the labels will be revealed.

• Active Diagnosis: Given a test data point, at every iteration the active acquisition procedure seeks the label of that test point that is maximally informative for it and promises to improve prediction accuracy over the rest of its unknown labels.

Note that active diagnosis is highly relevant for real-world tasks. For example, consider the Wikipedia page classification problem: knowing just a few labels of a page can be immensely useful for inferring the rest of its labels. Active diagnosis should be able to leverage the statistical dependencies amongst the output label space in order to ask for labels that are maximally informative.

A direct generalization of the above two paradigms is a setting in which the active learning procedure selects a single label of one point in the training set. Specifically, the key difference between this scenario and traditional active learning is that only one label is revealed for the selected data point instead of the entire set of labels.

Non-probabilistic classification schemes, such as SVMs, can handle traditional active learning by first establishing the confidence in the estimate of each label via the distance from the classification boundary (margin) and then selecting the point closest to the margin. However, it is fairly non-trivial to extend those approaches to active diagnosis and generalized information acquisition. The proposed Bayesian model, on the other hand, provides a posterior distribution over the unknown class labels as well as the other latent variables, which can be used directly for active learning. In particular, measures such as uncertainty or information gain can guide the selective sampling procedure. Formally, we can write these two selection criteria as:

$$\text{Uncertainty:}\quad \arg\max_{y_i \in Y_U} H(y_i) \qquad\qquad \text{InfoGain:}\quad \arg\max_{y_i \in Y_U} H(Y_U \setminus y_i) - E_{y_i}\big[H(Y_U \setminus y_i \mid y_i)\big].$$

Here $H(\cdot)$ denotes the Shannon entropy, a measure of uncertainty. The uncertainty criterion seeks the labels with the highest entropy, whereas the information gain criterion seeks the label with the highest expected reduction in uncertainty over all the other unlabeled points or unknown labels.
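Since each posterior $q(y_i)$ is Gaussian, the uncertainty criterion has a closed form through the log-determinant of its covariance. A minimal sketch, assuming the per-point covariances $\Sigma_{y_i}$ produced by the inference step are at hand (function names are ours):

```python
import numpy as np

def gaussian_entropy(Sigma):
    """Differential entropy of N(mu, Sigma): 0.5 * log det(2*pi*e*Sigma)."""
    L = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (L * np.log(2.0 * np.pi * np.e) + logdet)

def select_most_uncertain(Sigmas_y):
    """Uncertainty sampling: pick the unlabeled point whose posterior q(y_i)
    has the largest entropy. Sigmas_y is a list of posterior covariances."""
    entropies = [gaussian_entropy(S) for S in Sigmas_y]
    return int(np.argmax(entropies))
```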
Either of these criteria can be computed from the inferred posteriors; however, we note that the information gain criterion is far more expensive, as it requires repeated inference over all possible labels for every unlabeled data point. The uncertainty criterion, on the other hand, is very simple and often guides active learning with reasonable gains. In this work we therefore use uncertainty as the primary active learning criterion. Finally, we would like to point out that the different forms of active learning described above can all be addressed with these heuristics by appropriately choosing the set of candidates and the posterior distributions over which the entropy is measured.

3 Experiments

In this section, we present experimental results using our methods on standard benchmark datasets. The goals of our experiments are three-fold: a) to demonstrate that the proposed joint probabilistic method is significantly better than the standard compressed sensing based method of [1] and attains accuracy comparable to 1-vs-all SVM while projecting labels onto a dimensionality K much smaller than the total number of labels L, b) to show the robustness of our method to missing labels, and c) to demonstrate various active learning scenarios and compare them against standard baselines.

We use Matlab for all our implementations. We refer to our compressed sensing based Bayesian multilabel classification method as BML-CS. In BML-CS, the hyper-parameters $a_0$ and $b_0$ are set to $10^{-6}$, which leads to a fairly uninformative prior. The noise parameters $\sigma$ and $\rho$ are found by maximizing the marginal likelihood of the Gaussian process regression model [19]. We use liblinear for the SVM implementation; the error penalty C is selected using cross-validation. We also implemented the multilabel classification method based on compressed sensing (ML-CS) [1], with CoSaMP [8] as the underlying sparse vector recovery algorithm.

For our experiments, we use standard multilabel datasets. In particular, we choose datasets where the number of labels is high. Such datasets generally have only a few positive labels per data point, and the compressed sensing methods can exploit this sparsity to their advantage.

[Figure 2 consists of four precision-vs-K plots, (a) CAL500, (b) Bookmarks, (c) RCV1 and (d) Corel5k, each comparing 1-vs-all SVM, BML-CS and ML-CS; only the caption is recoverable from the extraction.]
Figure 2: Comparison of precision values (in top-1 label) for different methods with different values of K, the dimensionality of the compressed label space. The SVM baseline uses all L labels. The x-axis shows K as a percentage of the total number of labels L. Clearly, for each of the datasets the proposed method obtains accuracy similar to the 1-vs-all SVM method while projecting to only K = L/2 dimensions. Also, our method consistently obtains significantly higher accuracies than the CS method of [1] while using the same number of latent variables K.
[Figure 3 comprises four sub-tables (a)-(d), one per dataset, listing top-3 and top-5 precision for SVM, BML-CS and ML-CS at K equal to 10%, 25%, 50%, 75% and 100% of L; the individual numbers are not reliably recoverable from the extraction.]
Figure 3: Precision values obtained by the various methods in retrieving 3 and 5 labels respectively. The first column in each table shows K as a fraction of the number of labels L. 1-vs-all SVM requires training L weight vectors, while both BML-CS and ML-CS train K weight vectors. BML-CS is consistently more accurate than ML-CS, although its accuracy is not as close to SVM as in the top-1 case (see Figure 2). For each of the algorithms we recover the top 1, 3 or 5 most likely positive labels and set the remaining labels to be negative. For each value of $t \in \{1, 3, 5\}$, we report precision in prediction, i.e., the fraction of true positives to the total number of positives predicted.

3.1 Multilabel Classification Accuracies

We train both ML-CS and our method BML-CS on all datasets using different values of K, i.e., the dimensionality of the space of latent variables z for which weight vectors are learned. Figure 2 compares the precision (in predicting 1 positive label) of our proposed method on four different datasets for different values of K with the corresponding values obtained by ML-CS and SVM. Note that 1-vs-all SVM learns all L > K weight vectors, hence it is just one point in the plot; we draw a line for ease of comparison. It is clear from the figure that both BML-CS and ML-CS are significantly worse than 1-vs-all SVM when K is very small compared to the total number of labels L. However, for around K = 0.5L, our method achieves close to the baseline (1-vs-all SVM) accuracy, while ML-CS still achieves significantly worse accuracies. In fact, even with K = L, ML-CS obtains significantly lower accuracy than the SVM baseline.

In Figure 3 we tabulate precision for the top-3 and top-5 retrieved positive labels. Here again, the proposed method is consistently more accurate than ML-CS. However, it requires a larger K to obtain precision values similar to SVM. This is fairly intuitive: at higher recall rates the multilabel problem becomes harder, and hence our method requires more weight vectors per label to be learned.

3.2 Missing Labels

Next, we conduct experiments for multilabel classification with missing labels. Specifically, we remove a fixed fraction of training labels randomly from each dataset considered.
We then apply BML-CS as well as the 1-vs-all SVM method to such training data. Since SVM cannot directly handle missing labels, we always set a missing label to be negative. In contrast, our method handles missing labels explicitly and can perform inference by marginalizing out the unobserved tags.

[Figure 4 consists of three plots whose axis values are not recoverable from the extraction; the caption follows.]
Figure 4: (a) Precision (in retrieving the most likely positive label) obtained by the BML-CS and SVM methods on the RCV1 dataset with a varying fraction of missing labels. We observe that BML-CS obtains higher precision values than the baseline SVM (K = 0.5L). (b) Precision obtained after each round of active learning by the BML-CS-Active method and by the baseline random selection strategy on the RCV1 dataset. (c) Precision after active learning, where one label per point is added to the training set, in comparison with the random baseline on the RCV1 dataset. Parameters for (b) & (c): K = 0.1L; both (b) and (c) start with 100 points initially.

As the number of positive labels is significantly smaller than the number of negative labels, both SVM and BML-CS obtain accuracies similar to the fully labeled case when only a small fraction of labels is removed. However, as the number of missing labels increases, there is a smooth dip in the precision of both methods. Figure 4 (a) compares the precision obtained by BML-CS with that of 1-vs-all SVM. Clearly, our method performs better than SVM while using only K = 0.5L weight vectors.

3.3 Active Learning

In this section, we provide empirical results for some of the active learning tasks discussed in Section 2.3. For each of the tasks, we use our Bayesian multilabel method to compute the posterior over the label vector and then select the desired label/point according to the individual task. For each of the tasks, we compare our method against an appropriate baseline.

Traditional Active Learning: The goal here is to select the most informative points, i.e., those which, if labeled completely, will increase accuracy by the largest amount. We use uncertainty sampling, where the entropy of the posterior over the label vector is the selection criterion for the BML-CS-Active method. We compare the proposed method against the standard random selection baseline. For these experiments, we initialize both methods with an initial labeled dataset of 100 points, and after each active learning round we obtain all the labels for the selected training data point. Figure 4 (b) compares the precision obtained by the BML-CS-Active method with that of the baseline after every active learning round. After just 15 active learning rounds, our method gains about 6% in accuracy while the random selection method provides no gain.

Active Diagnosis: In this type of active learning, we query one label for each of the training points in each round: for each training point, we choose its most uncertain label and ask for its value. Figure 4 (c) plots the improvement in precision values with the number of rounds of active learning, for estimating the top-1 label.
From the plot, we can see that after just 20 rounds, choosing labels by uncertainty yields an improvement of 20% over the random baseline.

4 Conclusion and Future Work

We presented a Bayesian framework for multilabel classification that uses compressive sensing. The proposed framework jointly models the compressive sensing/reconstruction task and the learning of regression functions over the compressed space. We presented an efficient variational inference scheme that jointly resolves the compressed sensing and regression tasks. The resulting posterior distribution can further be used to perform different flavors of active learning. Experimental evaluations highlight the efficacy of the framework. Future directions include considering other structured prediction tasks that are sparse and applying the framework to novel scenarios. Further, instead of myopic next-best information seeking, we also plan to investigate non-myopic selective sampling, where an optimal subset of unlabeled data is selected.

References

[1] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS, pages 772-780, 2009.
[2] B. Hariharan, L. Zelnik-Manor, S. V. N. Vishwanathan, and M. Varma. Large scale max-margin multi-label classification with priors. In ICML, pages 423-430, 2010.
[3] G. Tsoumakas and I. Katakis. Multi-label classification: An overview. IJDWM, 3(3):1-13, 2007.
[4] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453-1484, 2005.
[5] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757-1771, 2004.
[6] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[7] R. M. Rifkin and A. Klautau. In defense of one-vs-all classification. Journal of Machine Learning Research, 5:101-141, 2004.
[8] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301-321, 2009.
[9] S. Foucart. Hard thresholding pursuit: an algorithm for compressive sensing, 2010. Preprint.
[10] D. Baron, S. S. Sarvotham, and R. G. Baraniuk. Bayesian compressive sensing via belief propagation. IEEE Transactions on Signal Processing, 58(1), 2010.
[11] S. Ji, Y. Xue, and L. Carin. Bayesian compressive sensing. IEEE Transactions on Signal Processing, 56(6), 2008.
[12] N. Cesa-Bianchi, A. Conconi, and C. Gentile. Learning probabilistic linear-threshold classifiers via selective sampling. In COLT, 2003.
[13] N. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process method: Informative vector machines. NIPS, 2002.
[14] D. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4), 1992.
[15] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. In ICML, 2000.
[16] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2-3), 1997.
[17] B. Yang, J.-T. Sun, T. Wang, and Z. Chen. Effective multi-label active learning for text classification. In KDD, pages 917-926, 2009.
[18] J. Weston, S. Bengio, and N. Usunier. Large scale image annotation: learning to rank with joint word-image embeddings. Machine Learning, 81(1):21-35, 2010.
[19] C. E. Rasmussen and C. K. I. Williams.
Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
[20] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211-244, 2001.
Kernel Hyperalignment

Alexander Lorbert & Peter J. Ramadge
Department of Electrical Engineering
Princeton University

Abstract

We offer a regularized, kernel extension of the multi-set, orthogonal Procrustes problem, or hyperalignment. Our new method, called Kernel Hyperalignment, expands the scope of hyperalignment to include nonlinear measures of similarity and enables the alignment of multiple datasets with a large number of base features. With direct application to fMRI data analysis, kernel hyperalignment is well-suited for multi-subject alignment of large ROIs, including the entire cortex. We report experiments using real-world, multi-subject fMRI data.

1 Introduction

One of the goals of multi-set data analysis is forming qualitative comparisons between datasets. To the extent that we can control and design experiments to facilitate these comparisons, we must first ask whether the data are aligned. In its simplest form, the primary question of interest is whether corresponding features among the datasets measure the same quantity. If yes, we say the data are aligned; if not, we must first perform an alignment of the data.

The alignment problem is crucial to multi-subject fMRI data analysis, which is the motivation for this work. An appreciable amount of effort is devoted to designing experiments that maintain the focus of a subject. This is to ensure temporal alignment across subjects for a common stimulus. However, with each subject exhibiting his/her own unique spatial response patterns, there is a need for spatial alignment. Specifically, we want between-subject correspondence of voxel j at TR i (Time of Repetition). The typical approach taken is anatomical alignment [20], whereby anatomical landmarks are used to anchor spatial commonality across subjects. In linear algebra parlance, anatomical alignment is an affine transformation with 9 degrees of freedom.

Recently, Haxby et al. [9] proposed Hyperalignment, a function-based alignment procedure. Instead of a 9-parameter transformation, a higher-order, orthogonal transformation is derived from voxel time-series data. The underlying assumption of hyperalignment is that, for a fixed stimulus, a subject's time-series data will possess a common geometry. Accordingly, the role of alignment is to find isometric transformations of the per-subject trajectories traced out in voxel space so that the transformed time-series best match each other. Using their method, the authors were able to achieve a between-subject classification accuracy on par with, and even greater than, within-subject accuracy.

Suppose that subject data are recorded in matrices $X_{1:m} \in \mathbb{R}^{t \times n}$. This could be data from an experiment involving m subjects, t TRs, and n voxels. We are interested in extending the regularized hyperalignment problem

$$\begin{aligned} \text{minimize} \quad & \sum_{i<j} \|X_i R_i - X_j R_j\|_F^2 \\ \text{subject to} \quad & R_k^T A_k R_k = I, \quad k = 1, 2, \ldots, m, \end{aligned} \qquad (1)$$

where the matrices $A_{1:m} \in \mathbb{R}^{n \times n}$ are symmetric and positive definite. In general, the above problem manifests itself in many application areas. For example, when $A_k = I$ we have hyperalignment, or a multi-set orthogonal Procrustes problem, commonly used in shape analysis [6, 7]. When $A_k = X_k^T X_k$, (1) represents a form of multi-set Canonical Correlation Analysis (CCA) [12, 13, 8].

The success of hyperalignment engenders numerous questions, and in this work we address two of them. First, is hyperalignment scalable? In [9], the authors consider a subset of ventral temporal cortex (VT), using hundreds of voxels.
The relatively low voxel count alleviates a huge computational cost and storage burden. However, the current method for solving (1) is infeasible when considering many or all voxels, and therefore limits the scope of hyperalignment to a local alignment procedure. For example, if n = 50,000 voxels, then storing the $n \times n$ matrix for one subject requires over 18 gigabytes of memory. Moreover, computing a full SVD for a matrix of this size is a tall order. Coupled with scalability, we also ask whether we can include new features of our subjects' data. For example, we may want to augment the input data with the associated second-order mixtures, i.e., n voxels become $\binom{n}{1} + \binom{n}{2} = n(n+1)/2$ features. Again, for a reasonably-sized voxel count, running hyperalignment is infeasible. Addressing scalability and feature extension results in the main contribution of kernel hyperalignment.

The inclusion of a large feature space motivates the use of kernel methods. Additionally, numerous optimization problems that use the kernel trick possess global optimizers spanned by the mapped examples. This is guaranteed by the Representer Theorem [14, 18]. Therefore, the two separate issues of scalability and feature extension are merged into a single problem through the use of kernel methods. With kernel hyperalignment, the bottleneck shifts from the voxel count to the number of TRs times subjects (or, from the original inputs to the number of examples).

The problem we address in this paper is the alignment of multiple datasets in the same and extended feature space. Multi-set data analysis by means of kernel methods has already been considered in the framework of CCA [16, 1]. Our approach deviates from [1] and [15] because we focus on alignment and never leave feature space until training and testing. We use the kernel trick as a means of navigating through a high-dimensional orthogonal group. Our CCA variant is more constrained, and each dataset is assigned the same kernel, supplying us with a richer, single reproducing kernel Hilbert space (RKHS) over a collection of m smaller and distinct ones. Allowing for subject-specific kernels leads to the difficult problem of selecting them, a significantly harder problem than selecting a single kernel. In this respect, we assume a single kernel can provide the sought-after linearity used for comparing multiple datasets.

The paper is organized as follows: in §2 we review regularized hyperalignment, or the regularized multi-set orthogonal Procrustes problem. Next, in §3 we formulate its kernel variant, and in §4 we discuss classification with aligned data. We provide experimental results in §5, and we conclude in §6. All proofs are supplied in the Supplemental Material.

2 Hyperalignment

The hyperalignment problem of (1) is equivalent to [7]:

$$\begin{aligned} \text{minimize} \quad & \sum_{i=1}^{m} \|X_i R_i - Y\|_F^2 \\ \text{subject to} \quad & Y = \frac{1}{m}\sum_{j=1}^{m} X_j R_j \quad \text{and} \quad R_k^T A_k R_k = I \ \text{ for } k = 1, \ldots, m. \end{aligned} \qquad (2)$$

The matrix Y is the image centroid and serves as the catalyst for computing a solution: for dataset i, fix a centroid and solve for $R_i$. This process cycles over all datasets for a specified number of rounds, or until approximate convergence is reached (see Algorithm 1). The dynamic centroid Y can be a sample mean or a leave-one-out (LOO) mean. Regardless of type, the last round should use the fixed sample mean provided by the penultimate round. We can set $Q_k = A_k^{1/2} R_k$, using the symmetric, positive definite square root,¹ yielding the key operation

$$\begin{aligned} \text{minimize} \quad & \|X_k A_k^{-1/2} Q_k - Y\|_F^2 \\ \text{subject to} \quad & Q_k^T Q_k = I. \end{aligned} \qquad (3)$$
The above is the familiar orthogonal Procrustes problem [19] and is solved using the SVD of $A_k^{-1/2} X_k^T Y$.

¹ In practice, we would use the Cholesky factorization of $A_k$. However, in deriving the kernel hyperalignment procedure it is necessary to familiarize the reader with this approach.

3 Kernel Hyperalignment

The previous section dealt with alignment based on the original data. In the context of optimization, the alignment problem of (1) is indifferent to both data generation and data recording. There are, however, implicit assumptions about these two processes. The data are generated according to a common input signal, and each of the m datasets represents a specific view of this signal. In other words, the matrices $X_{1:m}$ have row correspondence. The alignment problem of (1) seeks column correspondence through a linear mapping of the original features.

In fMRI, the m views are manifested by m subjects experiencing a common, synchronous stimulus. Each data matrix records fMRI time-series data: the rows are indexed by TR and the columns are indexed by voxel. There are t TRs and n voxels per subject, i.e., $X_k \in \mathbb{R}^{t \times n}$. The synchrony of the stimulus ensures row correspondence. Hyperalignment can be posed as the minimization problem of (2) with $A_k = I$. Voxel (column) correspondence is then achieved via an orthogonal constraint placed on each of the linear mappings. The orthogonal constraint present in hyperalignment follows from a subject-independent isometry assumption: we can view the time-series data of each subject as a trajectory in $\mathbb{R}^n$, and for a fixed stimulus this trajectory is [approximately] identical, up to a rotation-reflection, across subjects.

As stated above, we are assuming equivalence of the per-view information in its original form, but we are not assuming that this information can be related through a linear mapping. Now suppose there is a common set of N features, derived from each n-dimensional example, that does allow for a linear relationship between views. Alternatively, there may be derivative features of interest that lead to better alignment via a linear mapping. For example, it is conceivable that second-order data, i.e., pairwise mixtures of the original data, obey a linear construct and may be a preferred feature set for alignment. In general, we wish to formulate an alignment technique for this new feature set. Rather than limit expression of the data to the n given coordinates, we consider an N-coordinate representation, where N may be much greater than n.

Let $X_i \in \mathbb{R}^{t \times n}$ have $i'$-th row $[x^i_{i'}]^T$ with $x^i_{i'} \in \mathbb{R}^n$. We introduce the row-based mapping of $X_i$:

$$\Phi(X_i) = \begin{bmatrix} \phi_1(x^i_1) & \phi_2(x^i_1) & \cdots & \phi_N(x^i_1) \\ \vdots & \vdots & & \vdots \\ \phi_1(x^i_t) & \phi_2(x^i_t) & \cdots & \phi_N(x^i_t) \end{bmatrix} \in \mathbb{R}^{t \times N}. \qquad (4)$$

The N functions $\phi_{1:N}: \mathbb{R}^n \to \mathbb{R}$ are used to derive N features from the original data. For a matrix $X_i \in \mathbb{R}^{t \times n}$ let $\Phi_i = \Phi(X_i)$. In general, for $X_i \in \mathbb{R}^{t \times n}$ and $X_j \in \mathbb{R}^{s \times n}$, we define the Gram matrix $K_{ij} \triangleq \Phi_i \Phi_j^T \in \mathbb{R}^{t \times s}$. We also write $K_i \triangleq K_{ii} = \Phi_i \Phi_i^T$. We assume that there is an appropriate positive definite kernel, $\tilde{k}: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, so that we can leverage the kernel trick [2, 10] and obtain the $i'j'$-th element of $K_{ij}$ via

$$(K_{ij})_{i'j'} = \tilde{k}(x^i_{i'}, x^j_{j'}). \qquad (5)$$

Using the feature map $\Phi(\cdot)$, we form the regularized Kernel Hyperalignment problem:

$$\begin{aligned} \text{minimize} \quad & \sum_{i<j} \|\Phi(X_i) R_i - \Phi(X_j) R_j\|_F^2 \\ \text{subject to} \quad & R_k^T A_k R_k = I \ \text{ for } k = 1, \ldots, m. \end{aligned} \qquad (6)$$

The latent variables are $R_{1:m} \in \mathbb{R}^{N \times N}$ and we are given symmetric, positive definite matrices $A_{1:m} \in \mathbb{R}^{N \times N}$.
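Two computational building blocks recur in what follows: Gram matrices assembled entrywise via (5), and the SVD solution of orthogonal Procrustes subproblems like (3). A minimal sketch of both in Python (the quadratic kernel shown is illustrative; the exact kernels used in the experiments are specified in the paper's supplementary material):

```python
import numpy as np

def gram_matrix(Xi, Xj, ktilde):
    """K_ij = Phi_i Phi_j^T, computed entrywise by the kernel trick (5),
    without ever forming the N-dimensional feature maps."""
    K = np.empty((Xi.shape[0], Xj.shape[0]))
    for a in range(Xi.shape[0]):
        for b in range(Xj.shape[0]):
            K[a, b] = ktilde(Xi[a], Xj[b])
    return K

def procrustes_rotation(M):
    """argmax over orthogonal Q of tr(Q^T M) is U V^T, where M = U S V^T.
    With M = A^T B this solves argmin_Q ||A Q - B||_F."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

# an illustrative quadratic kernel, whose implicit feature space contains
# all second-order mixtures of the inputs:
quad = lambda u, v: (u @ v + 1.0) ** 2
```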
Although different from the original hyperalignment problem, a solution to (6) is obtained in the same way: fix a centroid and find the individual linear maps. To this end, the key operation involves solving

$$\arg\min_{R^T A_i R = I} \|\Phi_i R - \Psi\|_F^2 \quad \text{or} \quad \arg\min_{Q^T Q = I} \|\Phi_i A_i^{-1/2} Q - \Psi\|_F^2, \qquad (7)$$

where $\Phi_i = \Phi(X_i)$, $i \ge 1$, is the current, individual dataset under consideration and $\Psi = \frac{1}{|\mathcal{A}|}\sum_{j \in \mathcal{A}} \Phi_j \tilde{R}_j$ is a centroid based on the current estimates of $R_{1:m}$, denoted $\tilde{R}_{1:m}$. The index set $\mathcal{A} \subseteq \{1, \ldots, m\}$ determines how the estimated centroid is calculated (sample or LOO mean).

The difficulty of (7) lies in the size of N. Any of the well-known kernels corresponds to an N so large that direct computation is generally impractical. For example, when using second-order interactions as the feature set, the number of unknowns in kernel hyperalignment is $O(mn^4)$, in contrast to $O(mn^2)$ unknowns for hyperalignment. Nevertheless, the minimization problem of (7) places us in the familiar territory of solving an orthogonal Procrustes problem. Since we are now in feature space, the matrix $A_i$ poses a problem unless we confine it to a specific form. For example, if $A_i$ is arbitrary, finding $A_i^{-1/2}$ would be infeasible for large N. Additionally, the constraint $R_i^T A_i R_i = I$ would lack any intuition. Therefore, we restrict $A_i = \alpha I + \beta \Phi_i^T \Phi_i$ with $\alpha > 0$ and $\beta \ge 0$. As with regularized hyperalignment [22], when $(\alpha, \beta) = (1, 0)$ we obtain hyperalignment, and when $(\alpha, \beta) \to (0, 1)$ we obtain a form of CCA.

Let $K_i$ have the eigen-decomposition $V_i \Lambda_i V_i^T$, where $\Lambda_i = \mathrm{diag}\{\lambda_{i1}, \ldots, \lambda_{it}\}$, or $\mathrm{diag}_j\{\lambda_{ij}\}$ for short. We introduce two symmetric, positive definite matrices:

$$B_i = V_i \,\mathrm{diag}_j\Big\{\tfrac{1}{\sqrt{\alpha + \beta\lambda_{ij}}}\Big\}\, V_i^T \quad \text{and} \quad C_i = V_i \,\mathrm{diag}_j\Big\{\tfrac{1}{\lambda_{ij}}\Big(\tfrac{1}{\sqrt{\alpha + \beta\lambda_{ij}}} - \tfrac{1}{\sqrt{\alpha}}\Big)\Big\}\, V_i^T.$$

Lemma 3.1. For $A_i = \alpha I + \beta \Phi_i^T \Phi_i$ we have $A_i^{-1/2} = \frac{1}{\sqrt{\alpha}} I + \Phi_i^T C_i \Phi_i$ and $\Phi_i A_i^{-1/2} = B_i \Phi_i$.

We can use Lemma 3.1 to transform (7) into

$$\arg\min_{Q^T Q = I} \|B_i \Phi_i Q - \Psi\|_F^2 \quad \text{or} \quad \arg\max_{Q^T Q = I} \mathrm{tr}\Big[Q^T \Phi_i^T B_i \Big(\tfrac{1}{|\mathcal{A}|}\textstyle\sum_{j \in \mathcal{A}} B_j \Phi_j \tilde{Q}_j\Big)\Big], \qquad (8)$$

where $\tilde{Q}_j$ is the current estimate of $Q_j$. Solving for the matrix Q directly is still well beyond practical computation. The following lemma is the gateway for managing this problem.

Lemma 3.2. If $\bar{U} \in St(N, d)$ and $\bar{G} \in O(d)$, then $\bar{Q} = I_N - \bar{U}(I_d - \bar{G})\bar{U}^T \in O(N)$.²

Familiar applications of the above lemma include the identity matrix ($\bar{G} = I_d$) and Householder reflections ($\bar{G} = -I_d$). If $\bar{G}$ is block diagonal with $2 \times 2$ blocks of Givens rotations, then the columns of $\bar{U}$, taken two at a time, are the two-dimensional planes of rotation [7]. We therefore refer to $\bar{U}$ as the plane support matrix.

Lemma 3.2 can be interpreted as a lifting mechanism for identity deviations. The difference $I_d - \bar{G}$ represents an O(d) deviation from identity. Applying $\bar{U}(I_d - \bar{G})\bar{U}^T = I_N - \bar{Q}$ lifts this difference to an O(N) deviation from identity. Reversing directions, we can also utilize Lemma 3.2 for compressing O(N): from $\bar{Q} = I_N - \bar{U}(I_d - \bar{G})\bar{U}^T$, the rank of the deviation $I_N - \bar{Q}$ is upper bounded by d, producing a subset of O(N). Motivated by Lemma 3.2 we impose

$$Q_i = I_N - U(I - G_i)U^T, \qquad (9)$$

where $U \in St(N, r)$, $G_i \in O(r)$, and $1 \le r \le N$. Ideally, we want r small to benefit from the reduced dimension. As is typically the case when using kernel methods, leveraging the Representer Theorem shifts the dimensionality of the problem from the feature cardinality to the number of examples, i.e., r = mt. We pool all of the data, forming the $mt \times N$ matrix

$$\Phi_0 = \begin{bmatrix} \Phi_1^T & \Phi_2^T & \cdots & \Phi_m^T \end{bmatrix}^T, \qquad (10)$$

and set $U = \Phi_0^T K_0^{-1/2} \in$
$\mathbb{R}^{N \times r}$, with $K_0 = \Phi_0 \Phi_0^T$ assumed positive definite. As long as $r \le N$, the orthogonality constraint is met because $(\Phi_0^T K_0^{-1/2})^T(\Phi_0^T K_0^{-1/2}) = K_0^{-1/2} K_0 K_0^{-1/2} = I_r$.

Theorem 3.3 (Hyperalignment Representer Theorem). Within the set of global minimizers of (6) there exists a solution $\{R_1^*, \ldots, R_m^*\} = \{A_1^{-1/2} Q_1^*, \ldots, A_m^{-1/2} Q_m^*\}$ that admits a representation $Q_i^* = I_N - U(I - G_i^*)U^T$, where $U = \Phi_0^T K_0^{-1/2}$ and $G_i^* \in O(mt)$ ($i = 1, \ldots, m$).

² $St(N, d) \triangleq \{Z : Z \in \mathbb{R}^{N \times d},\ Z^T Z = I_d\}$ is the (N, d) Stiefel manifold ($N \ge d$), and $O(N) \triangleq \{Z : Z \in \mathbb{R}^{N \times N},\ Z^T Z = I_N\}$ is the orthogonal group of $N \times N$ matrices.

Algorithm 1: Regularized Hyperalignment
  Input: $X_{1:m} \in \mathbb{R}^{t \times n}$, $A_{1:m} \in \mathbb{R}^{n \times n}$
  Output: $R_{1:m} \in \mathbb{R}^{n \times n}$
  Initialize $Q_{1:m}$ as identity ($n \times n$)
  Set $\tilde{X}_i \leftarrow X_i A_i^{-1/2}$, $i = 1, \ldots, m$
  foreach round do
    foreach subject/view i do
      $\mathcal{A} \leftarrow \{1, 2, \ldots, m\}$ (sample mean) or $\{1, 2, \ldots, m\} \setminus \{i\}$ (LOO mean)
      $Y \leftarrow \frac{1}{|\mathcal{A}|}\sum_{j \in \mathcal{A}} \tilde{X}_j Q_j$
      $[\tilde{U}, \tilde{\Sigma}, \tilde{V}] \leftarrow \mathrm{SVD}(\tilde{X}_i^T Y)$
      $Q_i \leftarrow \tilde{U}\tilde{V}^T$
    end
  end
  foreach subject/view i do
    $R_i \leftarrow A_i^{-1/2} Q_i$
  end

Algorithm 2: Regularized Kernel Hyperalignment
  Input: $\tilde{k}(\cdot, \cdot)$, $\alpha$, $\beta$, $X_{1:m} \in \mathbb{R}^{t \times n}$
  Output: $R_{1:m}$, linear maps in feature space
  Initialize feature maps $\Phi_1, \ldots, \Phi_m \in \mathbb{R}^{t \times N}$
  Initialize plane support $\Phi_0 = [\Phi_1^T\ \Phi_2^T\ \cdots\ \Phi_m^T]^T$
  Initialize $G_{1:m} \in \mathbb{R}^{r \times r}$ as identity ($r = mt$)
  foreach round do
    foreach subject/view i do
      $\mathcal{A} \leftarrow \{1, 2, \ldots, m\}$ (sample mean) or $\{1, 2, \ldots, m\} \setminus \{i\}$ (LOO mean)
      $\Psi \leftarrow \frac{1}{|\mathcal{A}|}\sum_{j \in \mathcal{A}} \tilde{B}_j G_j$
      $[\tilde{U}, \tilde{\Sigma}, \tilde{V}] \leftarrow \mathrm{SVD}(\tilde{B}_i^T \Psi)$
      $G_i \leftarrow \tilde{U}\tilde{V}^T$
    end
  end
  foreach subject/view i do
    $Q_i \leftarrow I - \Phi_0^T K_0^{-1/2}(I_r - G_i)K_0^{-1/2}\Phi_0$
    $R_i \leftarrow A_i^{-1/2} Q_i$
  end

When mt is large enough that evaluating SVDs of numerous $mt \times mt$ matrices is prohibitive, we can first perform a PCA-like reduction. Let $K_0$ have the eigen-decomposition $V_0 \Lambda_0 V_0^T$, where the nonnegative diagonal entries of $\Lambda_0$ are sorted in decreasing order. We set $\Phi_{0'} = V_{0'}^T \Phi_0$, where $V_{0'}$ is formed from the first r columns of $V_0$, and then use $U = \Phi_{0'}^T K_{0'}^{-1/2}$. In general, rather than compute Q according to (7), involving $N(N-1)/2 = O(N^2)$ degrees of freedom (when N is finite), we end up with $r(r-1)/2 = O(r^2)$ degrees of freedom via the kernel trick.

Let $\tilde{B}_i = B_i K_{i0} K_0^{-1/2} \in \mathbb{R}^{t \times r}$. We reduce (8) in terms of $G_i$ and obtain (Supplementary Material)

$$G_i = \arg\max_{G \in O(r)} \mathrm{tr}\Big[G^T \tilde{B}_i^T \Big(\tfrac{1}{|\mathcal{A}|}\textstyle\sum_{j \in \mathcal{A}} \tilde{B}_j \tilde{G}_j\Big)\Big], \qquad (11)$$

where $\tilde{G}_j$ is the current estimate of $G_j$. Equation (11) is the classical orthogonal Procrustes problem: if $\tilde{U}\tilde{\Sigma}\tilde{V}^T$ is the SVD of $\tilde{B}_i^T \frac{1}{|\mathcal{A}|}\sum_{j \in \mathcal{A}} \tilde{B}_j \tilde{G}_j$, then a maximizer is given by $\tilde{U}\tilde{V}^T$ [7]. The kernel hyperalignment procedure is given in Algorithm 2. The approach taken in this section also leads to an efficient solution of the standard orthogonal Procrustes problem for $n \ge 2t$ (Supplementary Material). In turn, this yields an efficient iterative solution for the hyperalignment problem when n is large.

4 Alignment Assessment

An alignment procedure is not subject to the typical train-and-test paradigm. The lack of spatial correspondence demands an align-train-test approach. We assume these three sets have within-subject (or within-view) alignment. With all other parameters fixed, if the aligned test error is smaller than the unaligned test error, there is strong evidence suggesting that alignment was the underlying cause.

Kernel hyperalignment returns linear transformations $R_{1:m}$ that act on data living in feature space. In general, we cannot directly train and test in the feature space due to its large size. We can, however, learn from relational data.
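Before turning to how aligned models are assessed through such relational quantities, here is a runnable Gram-matrix transcription of Algorithm 2's inner loop. It uses the sample-mean centroid and omits the PCA reduction; `Ks` is an m-by-m grid of pairwise Gram blocks, and all names are ours, so treat this as a sketch rather than the authors' Matlab implementation:

```python
import numpy as np

def inv_sqrt(M, eps=1e-10):
    """Symmetric inverse square root via an eigen-decomposition."""
    lam, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(lam, eps))) @ V.T

def kernel_hyperalignment(Ks, alpha, beta, n_rounds=5):
    """Ks[i][j] holds the t x t Gram block K_ij for m views; r = m*t.
    Returns the orthogonal matrices G_i of Algorithm 2."""
    m, t = len(Ks), Ks[0][0].shape[0]
    K0_is = inv_sqrt(np.block(Ks))                       # K_0^{-1/2}, (mt x mt)
    B_tilde = []
    for i in range(m):
        lam, V = np.linalg.eigh(Ks[i][i])
        B_i = V @ np.diag(1.0 / np.sqrt(alpha + beta * np.maximum(lam, 0.0))) @ V.T
        K_i0 = np.hstack([Ks[i][j] for j in range(m)])   # K_{i0} = Phi_i Phi_0^T
        B_tilde.append(B_i @ K_i0 @ K0_is)               # the t x r matrix B~_i
    G = [np.eye(m * t) for _ in range(m)]
    for _ in range(n_rounds):
        for i in range(m):
            Psi = sum(B_tilde[j] @ G[j] for j in range(m)) / m   # sample mean
            U, _, Vt = np.linalg.svd(B_tilde[i].T @ Psi)         # solve (11)
            G[i] = U @ Vt
    return G
```

The returned $G_i$ implicitly define the maps $R_i$ through (9) and Lemma 3.1; they never need to be formed explicitly.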
As an example of such relational data, we can compute distances between examples and, subsequently, produce nearest-neighbor classifiers. Assume $(\alpha, \beta) = (1, 0)$, i.e., the $R_{1:m}$ are orthogonal. If $x_1 \in \mathbb{R}^n$ is a view-i example and $x_2 \in \mathbb{R}^n$ is a view-j example, the respective pre-aligned and post-aligned squared distances between the two examples are given by

$$\|\phi(x_1^T) - \phi(x_2^T)\|_F^2 = \tilde{k}(x_1, x_1) + \tilde{k}(x_2, x_2) - 2\tilde{k}(x_1, x_2) \qquad (12)$$

$$\|\phi(x_1^T) R_i - \phi(x_2^T) R_j\|_F^2 = \tilde{k}(x_1, x_1) + \tilde{k}(x_2, x_2) - 2\,\phi(x_1^T) R_i R_j^T \phi(x_2^T)^T. \qquad (13)$$

The cross-term in (13) has not been expanded for a simple reason: it is too messy. We realized early on that the alignment and training phases would be replete with lengthy expansions and, consequently, sought to simplify matters with a computer-science solution. Both binary and unary operations in feature space can be accomplished with a simple class. Our Phi class stores expressions of the following forms:

$$\underbrace{\sum_{k=1}^{K} M_k \Phi(X_{a(k)})}_{\text{Type 1}} \qquad\quad \underbrace{\sum_{k=1}^{K} \Phi(X_{\bar{a}(k)})^T M_k}_{\text{Type 2}} \qquad\quad \underbrace{b\,I_N + \sum_{k=1}^{K} \Phi(X_{\bar{a}(k)})^T M_k \Phi(X_{a(k)})}_{\text{Type 3}}. \qquad (14)$$

Each class instance stores matrices $M_{1:K}$, a scalar b, a right address vector a, and a left address vector $\bar{a}$. The address vectors are pointers to the input data. This allows for faster manipulation and smaller memory allocation. Addition and subtraction require a common type. If the types match, then the M matrices must be checked for compatible sizes. Multiplication is performed for types 1 with 2, 1 with 3, 2 with 1, 3 with 2, and 3 with 3. The first of these cases, for example, produces a numeric result via the kernel trick. We also define scalar multiplication and division for all types, and matrix multiplication for types 1 and 2. A transpose operator applies to all types and maps type 1 to 2, 2 to 1, and 3 to 3. More advanced operations, such as powers and inverses, are also possible. Our implementation was done in Matlab. The construction of the Phi class allows us to stay in feature space and avoid lengthy expansions.

In turn, this facilitates implementing the richer set of SVM classifiers. Let $X_{\star 1}, \ldots, X_{\star m} \in \mathbb{R}^{s \times n}$ be our training data with feature representations $\Phi_{\star i} = \Phi(X_{\star i}) \in \mathbb{R}^{s \times N}$. Recall that kernel hyperalignment aligns in feature space. Before alignment we might have considered $K_{\star i \star j} = \Phi_{\star i}\Phi_{\star j}^T$; we now consider the Gram matrix $(\Phi_{\star i} R_i)(\Phi_{\star j} R_j)^T = \Phi_{\star i} R_i R_j^T \Phi_{\star j}^T$. If every row of $X_{\star i}$ has a corresponding label, we can train an SVM with
In turn, this facilitates implementing the richer set of SVM classifiers. Let X*_1, ..., X*_m ∈ R^{s×n} be our training data with feature representation Φ*_i = φ(X*_i) ∈ R^{s×N}. Recall that kernel hyperalignment seeks to align in feature space. Before alignment we might have considered K*_ij = Φ*_i Φ*_j^T; we now consider the Gram matrix (Φ*_i R_i)(Φ*_j R_j)^T = Φ*_i R_i R_j^T Φ*_j^T. If every row of X*_i has a corresponding label, we can train an SVM with the m × m block matrix

K*_A = [ Φ*_i R_i R_j^T Φ*_j^T ]_{i,j = 1, ..., m},   (15)

where K*_A = (K*_A)^T ∈ R^{ms×ms} denotes the aligned kernel matrix. The unaligned kernel matrix, K*_U, is also an m × m block matrix, with ij-th block K*_ij. Using the dual formulation of an SVM, a classifier can be constructed from the relational data exhibited among the examples [4]. Similar to a k-nearest neighbor classifier relying on pairwise distances, an SVM relies on the kernel matrix. The kernel matrix is a matrix of inner products and is therefore linear. This enables us to assess a partition-based alignment. In fMRI, we perform two alignments, one for each hemisphere. The two alignments produce two aligned kernel matrices, which we sum and then input into an SVM. Thus, linearity provides us the means to handle finer partitions by simply summing the aligned kernel matrices.

Table 1: Seven label classification using movie-based alignment. Below is the cross-validated, between-subject classification accuracy (within-subject in brackets) with (λ, μ) = (1, 0). Four hundred TRs per subject were used for the alignment. Chance = 1/7 ≈ 14.29%.

            Ventral Temporal (2,997 voxels/hemisphere)     Entire Cortex (133,590 voxels/hemisphere)
Kernel      Anatomical        Kernel Hyp.                  Anatomical        Kernel Hyp.
Linear      35.71% [42.68%]   48.57% [42.68%]              34.64% [26.79%]   36.25% [26.79%]
Quadratic   35.00% [43.32%]   50.36% [42.32%]              36.07% [25.54%]   36.43% [25.54%]
Gaussian    36.25% [43.39%]   48.57% [43.39%]              36.07% [26.07%]   36.43% [26.07%]
Sigmoid     35.89% [43.21%]   48.21% [43.21%]              35.00% [26.79%]   36.25% [26.79%]

5 Experiments

The data used in this section consisted of fMRI time-series data from 10 subjects who viewed a movie and also engaged in a block-design visualization experiment [17]. Each subject saw Raiders of the Lost Ark (1981), lasting a total of 2213 TRs. In the visualization experiment, subjects were shown images belonging to a specific class for 16 TRs followed by 10 TRs of rest. The 7 classes were: (1) female face, (2) male face, (3) monkey, (4) house, (5) chair, (6) shoe and (7) dog. There were 8 runs total, and each run had every image class represented once. We assess alignment by classification accuracy. To provide the same number of voxels per ROI for all subjects, we first performed anatomical alignment. We then selected a contiguous block of 400 TRs from the movie data to serve as the per-subject input of the kernel hyperalignment. Next, we extracted labeled examples from the visualization experiment by taking an offset time average of each 16 TR class exposure. An offset of 6 seconds factored in the hemodynamic response. This produced 560 labeled examples: 10 subjects × 8 runs/subject × 7 examples/run. Kernel hyperalignment allows us to (a) use nonlinear measures of similarity, and (b) consider more voxels for the alignment. Consequently, we (a) experiment with a variety of kernels, and (b) do not need to pre-select or screen voxels as was done in [9]; we include them all. Table 1 features results from a 7-label classification experiment. Recall that a linear kernel reduces to hyperalignment. We classified using a multi-label ν-SVM [3]. We used the first 400 TRs from each subject's movie data, and aligned each hemisphere separately. The kernel functions are supplied in the Supplementary Material.
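Since the classifier only touches the data through the aligned kernel matrix (15), this pipeline can be reproduced with any SVM implementation that accepts precomputed kernels. A hedged scikit-learn sketch (our function names; the paper used LIBSVM's ν-SVM [3], and the two per-hemisphere aligned kernel matrices are summed before fitting, as described above):

from sklearn.svm import NuSVC

def fit_aligned_svm(K_left, K_right, labels, nu=0.5):
    # K_left, K_right: (ms x ms) aligned kernel matrices, one per hemisphere,
    # assembled blockwise with ij-th block Phi_i R_i R_j^T Phi_j^T as in (15).
    # Kernel matrices are linear in the inner products, so finer partitions
    # are handled by summing.
    K = K_left + K_right
    clf = NuSVC(nu=nu, kernel="precomputed")
    clf.fit(K, labels)
    return clf

def predict_aligned_svm(clf, K_test):
    # K_test: (n_test x ms) matrix of aligned inner products between test
    # and training examples.
    return clf.predict(K_test)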
As observed in [9] and repeated here, hyperalignment leads to increased between-subject accuracy and outperforms within-subject accuracy. Thus, we are extracting more common structure across subjects. Whereas employing Algorithm 1 for 2,997 voxels is feasible (and slow), 133,590 voxels is not feasible at all. To complete the picture, we plot the effects of regularization. Figure 1 displays the cross-validated, between-subject classification accuracy for varying (λ, μ) where μ = 1 − λ. This traces out a route from CCA (λ → 0) to hyperalignment (λ = 1). When compared to the alignments in [9], our voxel counts are orders of magnitude larger. For our four chosen kernels, hyperalignment (λ = 1) presents itself as the option with near-greatest accuracy. Our results support the robustness of hyperalignment and imply that voxel selection may be a crucial pre-processing step when dealing with the whole volume. More voxels mean more noisy voxels, and hyperalignment does not distinguish itself from anatomical alignment when the entire cortex is considered.

We can visualize this phenomenon with Multidimensional Scaling (MDS) [21]. MDS takes as input all of the pairwise distances between subjects (the previous section discussed distance calculations). Figure 2 depicts the optimal Euclidean representation of our 10 subjects before and after kernel hyperalignment ((λ, μ) = (1, 0)) with respect to the first 400 TRs of the movie data. Focusing on VT, kernel hyperalignment manages to cluster 7 of the 10 subjects. However, when we shift to the entire cortex, we see that anatomical alignment has already succeeded in a similar clustering. Kernel hyperalignment manages to group the subjects closer together, and manifests itself as a re-centering.

[Figure 1: four panels (Linear, Quadratic, Gaussian and Sigmoid kernels) plotting between-subject classification (BSC) accuracy against λ = 1 − μ over the range 0 to 1.]
Figure 1: Cross-validated between-subject classification accuracy (7 labels) as a function of the regularization parameter, λ = 1 − μ, for various kernels after alignment. The solid curves are for Ventral Temporal and the dashed curves are for the entire cortex. Chance = 1/7 ≈ 14.29%.

[Figure 2: 2-D MDS embeddings of the 10 subjects (Linear and Gaussian kernels; Ventral Temporal and Entire Cortex panels), before and after kernel hyperalignment.]
Figure 2: Visualizing alignment with MDS. Each locus pair approximates the normalized relationship among the 10 subjects in 2D, before (left) and after (right) applying kernel hyperalignment. Centroids are translated to the origin and numbers correspond to individual subjects.

6 Conclusion

We have extended hyperalignment in both scale and feature space. Kernel hyperalignment can handle a large number of original features and incorporate nonlinear measures of similarity. We have also shown how to use the linear maps, applied in feature space, for post-alignment classification. In the setting of fMRI, we have demonstrated successful alignment with a variety of kernels. Kernel hyperalignment achieved better between-subject classification over anatomical alignment for VT. There was no noticeable difference when we considered the entire cortex. Nevertheless, kernel hyperalignment proved robust and did not degrade with increasing voxel count. We envision a fruitful path for kernel hyperalignment. Empirically, we have noticed a tradeoff between feature cardinality and classification accuracy, motivating the need for intelligent feature selection within our established framework. Although we have limited our focus to fMRI data analysis, kernel hyperalignment can be applied to other research areas which rely on multi-set Procrustes problems.

References
[1] F.R. Bach and M.I. Jordan. Kernel independent component analysis. The Journal of Machine Learning Research, 3:1–48, 2003.
[2] C.M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[3] C.C. Chang and C.J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[4] P.H. Chen, C.J. Lin, and B. Schölkopf. A tutorial on ν-support vector machines. Applied Stochastic Models in Business and Industry, 21(2):111–136, 2005.
[5] A. Edelman, T. A.
Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl., 1998.
[6] C. Goodall. Procrustes methods in the statistical analysis of shape. Journal of the Royal Statistical Society, Series B (Methodological), pages 285–339, 1991.
[7] J.C. Gower and G.B. Dijksterhuis. Procrustes Problems, volume 30. Oxford University Press, USA, 2004.
[8] D.R. Hardoon, S. Szedmak, and J. Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639–2664, 2004.
[9] J.V. Haxby, J.S. Guntupalli, A.C. Connolly, Y.O. Halchenko, B.R. Conroy, M.I. Gobbini, M. Hanke, and P.J. Ramadge. A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron, 72(2):404–416, 2011.
[10] T. Hofmann, B. Schölkopf, and A.J. Smola. Kernel methods in machine learning. The Annals of Statistics, pages 1171–1220, 2008.
[11] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[12] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321–377, 1936.
[13] J.R. Kettenring. Canonical analysis of several sets of variables. Biometrika, 58(3):433, 1971.
[14] G.S. Kimeldorf and G. Wahba. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. The Annals of Mathematical Statistics, 41(2):495–502, 1970.
[15] M. Kuss and T. Graepel. The geometry of kernel canonical correlation analysis. Technical report, Max Planck Institute, 2003.
[16] P.L. Lai and C. Fyfe. Kernel and nonlinear canonical correlation analysis. International Journal of Neural Systems, 10(5):365–378, 2000.
[17] M.R. Sabuncu, B.D. Singer, B. Conroy, R.E. Bryan, P.J. Ramadge, and J.V. Haxby. Function-based inter-subject alignment of human cortical anatomy. Cerebral Cortex, 2009.
[18] B. Schölkopf, R. Herbrich, and A. Smola. A generalized representer theorem. In Computational Learning Theory, pages 416–426. Springer, 2001.
[19] P.H. Schönemann. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10, March 1966.
[20] J. Talairach and P. Tournoux. Co-planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System: An Approach to Cerebral Imaging. Thieme, 1988.
[21] J.B. Tenenbaum, V. de Silva, and J.C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[22] H. Xu, A. Lorbert, P.J. Ramadge, J.S. Guntupalli, and J.V. Haxby. Regularized hyperalignment of multi-set fMRI data. In Proceedings of the 2012 IEEE Signal Processing Workshop, Ann Arbor, Michigan, 2012.
3,969
4,593
Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints

Stefan Habenschuss*, Johannes Bill*, Bernhard Nessler
Institute for Theoretical Computer Science, Graz University of Technology
{habenschuss,bill,nessler}@igi.tugraz.at
(* These authors contributed equally to this work.)

Abstract

Recent spiking network models of Bayesian inference and unsupervised learning frequently assume either inputs to arrive in a special format or employ complex computations in neuronal activation functions and synaptic plasticity rules. Here we show in a rigorous mathematical treatment how homeostatic processes, which have previously received little attention in this context, can overcome common theoretical limitations and facilitate the neural implementation and performance of existing models. In particular, we show that homeostatic plasticity can be understood as the enforcement of a "balancing" posterior constraint during probabilistic inference and learning with Expectation Maximization. We link homeostatic dynamics to the theory of variational inference, and show that nontrivial terms, which typically appear during probabilistic inference in a large class of models, drop out. We demonstrate the feasibility of our approach in a spiking Winner-Take-All architecture of Bayesian inference and learning. Finally, we sketch how the mathematical framework can be extended to richer recurrent network architectures. Altogether, our theory provides a novel perspective on the interplay of homeostatic processes and synaptic plasticity in cortical microcircuits, and points to an essential role of homeostasis during inference and learning in spiking networks.

1 Introduction

Experimental findings from neuro- and cognitive sciences have led to the hypothesis that humans create and maintain an internal model of their environment in neuronal circuitry of the brain during learning and development [1, 2, 3, 4], and employ this model for Bayesian inference in everyday cognition [5, 6]. Yet, how these computations are carried out in the brain remains largely unknown. A number of innovative models has been proposed recently which demonstrate that in principle, spiking networks can carry out quite complex probabilistic inference tasks [7, 8, 9, 10], and even learn to adapt to their inputs near optimally through various forms of plasticity [11, 12, 13, 14, 15]. Still, in network models for concurrent online inference and learning, most approaches introduce distinct assumptions: Both [12] in a spiking Winner-take-all (WTA) network, and [15] in a rate based WTA network, identified the limitation that inputs must be normalized before being presented to the network, in order to circumvent an otherwise nontrivial (and arguably non-local) dependency of the intrinsic excitability on all afferent synapses of a neuron. Nessler et al. [12] relied on population coded input spike trains; Keck et al. [15] proposed feed-forward inhibition as a possible neural mechanism to achieve this normalization. A theoretically related issue has been encountered by Deneve [7, 11], in which inference and learning is realized in a two-state Hidden Markov Model by a single spiking neuron. Although synaptic learning rules are found to be locally computable, the learning update for intrinsic excitabilities remains intricate. In a different approach, Brea et al. [13] have recently proposed a promising model for Bayes optimal sequence learning in spiking networks
in which a global reward signal, which is computed from the network state and synaptic weights, modulates otherwise purely local learning rules. Also the recent innovative model for variational learning in recurrent spiking networks by Rezende et al. [14] relies on sophisticated updates of variational parameters that complement otherwise local learning rules. There exists great interest in developing Bayesian spiking models which require minimal nonstandard neural mechanisms or additional assumptions on the input distribution: such models are expected to foster the analysis of biological circuits from a Bayesian perspective [16], and to provide a versatile computational framework for novel neuromorphic hardware [17]. With these goals in mind, we introduce here a novel theoretical perspective on homeostatic plasticity in Bayesian spiking networks that complements previous approaches by constraining statistical properties of the network response rather than the input distribution. In particular we introduce "balancing" posterior constraints which can be implemented in a purely local manner by the spiking network through a simple rule that is strongly reminiscent of homeostatic intrinsic plasticity in cortex [18, 19]. Importantly, it turns out that the emerging network dynamics eliminate a particular class of nontrivial computations that frequently arise in Bayesian spiking networks. First we develop the mathematical framework for Expectation Maximization (EM) with homeostatic posterior constraints in an instructive Winner-Take-All network model of probabilistic inference and unsupervised learning. Building upon the theoretical results of [20], we establish a rigorous link between homeostatic intrinsic plasticity and variational inference. In a second step, we sketch how the framework can be extended to recurrent spiking networks; by introducing posterior constraints on the correlation structure, we recover local plasticity rules for recurrent synaptic weights.

2 Homeostatic plasticity in WTA circuits as EM with posterior constraints

We first introduce, as an illustrative and representative example, a generative mixture model p(z, y|V) with hidden causes z and binary observed variables y, and a spiking WTA network N which receives inputs y(t) via synaptic weights V. As shown in [12], such a network N can implement probabilistic inference p(z|y, V) through its spiking dynamics, and maximum likelihood learning through local synaptic learning rules (see Figure 1A). The mixture model comprises K binary and mutually exclusive components z_k ∈ {0, 1}, Σ_{k=1}^K z_k = 1, each specialized on a different N-dimensional input pattern:

p(y, z|V) = Π_{k=1}^K [ e^{b̃_k} Π_{i=1}^N (π_ki)^{y_i} (1 − π_ki)^{1−y_i} ]^{z_k}   (1)

⇒ log p(y, z|V) = Σ_k z_k ( Σ_i V_ki y_i − A_k + b̃_k ),   (2)

with Σ_k e^{b̃_k} = 1, π_ki = σ(V_ki), and A_k = Σ_i log(1 + e^{V_ki}),   (3)

where σ(x) = (1 + exp(−x))^{−1} denotes the logistic function, and π_ki the expected activation of input i under the mixture component k. For simplicity and notational convenience, we will treat the prior parameters b̃_k as constants throughout the paper. Probabilistic inference of hidden causes z_k based on an observed input y can be implemented by a spiking WTA network N of K neurons which fire with the instantaneous spiking probability (for δt → 0),

p(z_k spikes in [t, t + δt]) = δt · r_net · e^{u_k(t)} / Σ_j e^{u_j(t)} ∝ p(z_k = 1 | y, V),   (4)

with the input potential u_k(t) = Σ_i V_ki y_i(t) − A_k + b̃_k.
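As a reading aid, the instantaneous dynamics (4) can be simulated in a few lines. A minimal numpy sketch under our own discretization: dt is a small time step, and b stands for the full intrinsic excitability −A_k + b̃_k.

import numpy as np

def wta_step(y, V, b, r_net, dt, rng):
    # y: binary input vector (N,); V: synaptic weights (K, N);
    # b: intrinsic excitabilities (K,). Returns the index of the WTA neuron
    # spiking in [t, t + dt), or None if the network stays silent.
    u = V @ y + b                    # input potentials u_k
    p = np.exp(u - u.max())          # soft-max competition, numerically stable
    p = p / p.sum()
    if rng.random() < r_net * dt:    # one network spike at total rate r_net
        return int(rng.choice(len(b), p=p))
    return None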
Each WTA neuron k receives spiking inputs y_i via synaptic weights V_ki and responds with an instantaneous spiking probability which depends exponentially on its input potential u_k, in accordance with biological findings [21]. Stochastic winner-take-all (soft-max) competition between the neurons is modeled via the divisive normalization in (4) [22]. The input is defined as y_i(t) = 1 if input neuron i emitted a spike within the last τ milliseconds, and 0 otherwise, corresponding to a rectangular post-synaptic potential (PSP) of length τ. We define z_k(t) = 1 at spike times t of neuron k and z_k(t) = 0 otherwise.

Figure 1: A. Spiking WTA network model. B. Input templates from the MNIST database (digits 0-5) are presented in random order to the network as spike trains (the input template switches after every 250 ms; black/white pixels are translated to high/low firing rates between 20 and 90 Hz). C. Sketch of intrinsic homeostatic plasticity maintaining a certain target average activation. D. Homeostatic plasticity induces average firing rates (blue) close to target values (red). E. After a learning period, each WTA neuron has specialized on a particular input motif. F. WTA output spikes during a test phase before and after learning. Learning leads to a sparse output code.

In addition to the spiking input, each neuron's potential u_k features an intrinsic excitability −A_k + b̃_k. Note that, besides the prior constant b̃_k, this excitability depends on the normalizing term A_k, and hence on all afferent synaptic weights through (3): WTA neurons which encode strong patterns with high probabilities π_ki require lower intrinsic excitabilities, while neurons with weak patterns require larger excitabilities. In the presence of synaptic plasticity, i.e., time-varying V_ki, it is unclear how biologically realistic neurons could communicate ongoing changes in synaptic weights from distal synaptic sites to the soma. This critical issue was apparently identified in [12] and [15]; both papers circumvent the problem (in similar probabilistic models) by constraining the input y (and also the synaptic weights in [15]) in order to maintain constant and uniform values A_k across all WTA neurons. Here, we propose a different approach to cope with the nontrivial computations A_k during inference and learning in the network. Instead of assuming that the inputs y meet a normalization constraint, we constrain the network response during inference, by applying homeostatic dynamics to the intrinsic excitabilities. This approach turns out to be beneficial in the presence of time-varying synaptic weights, i.e., during ongoing changes of V_ki and A_k. The resulting interplay of intrinsic and synaptic plasticity can be best understood from the standard EM lower bound [23],

F(V, q(z|y)) = L(V) − ⟨ KL( q(z|y) || p(z|y, V) ) ⟩_{p*(y)}   (E-step)   (5)
             = ⟨ log p(y, z|V) ⟩_{p*(y) q(z|y)} + ⟨ H(q(z|y)) ⟩_{p*(y)}   (M-step),   (6)

where L(V) = ⟨log p(y|V)⟩_{p*(y)} denotes the log-likelihood of the input under the model, KL(· || ·) the Kullback-Leibler divergence, and H(·) the entropy. The decomposition holds for arbitrary distributions q. In hitherto proposed neural implementations of EM [11, 12, 15, 24], the network implements the current posterior distribution in the E-step, i.e., q = p and KL(q || p) = 0. In contrast, by applying homeostatic plasticity, the network response will be constrained to implement a variational posterior from a class of "homeostatic" distributions Q: the long-term average activation of each WTA neuron z_k is constrained to an a priori defined target value.
Notably, we will see that the resulting network response q* describes an optimal variational E-step in the sense that q*(z|y) = arg min_{q∈Q} KL( q(z|y) || p(z|y, V) ). Importantly, homeostatic plasticity fully regulates the intrinsic excitabilities, and as a side effect eliminates the non-local terms A_k in the E-step, while synaptic plasticity of the weights V_ki optimizes the underlying probabilistic model p(y, z|V) in the M-step. In summary, the network response implements q* as the variational E-step; the M-step can be performed via gradient ascent on (6) with respect to V_ki. As derived in section 2.1, this gives rise to the following temporal dynamics and plasticity rules in the spiking network, which instantiate a stochastic version of the variational EM scheme:

u_k(t) = Σ_i V_ki y_i(t) + b_k,    ḃ_k(t) = η_b · ( r_net · m_k − δ(z_k(t) − 1) ),   (7)

V̇_ki(t) = η_V · δ(z_k(t) − 1) · ( y_i(t) − σ(V_ki) ),   (8)

where δ(·) denotes the Dirac delta function, and η_b, η_V are learning rates (which were kept time-invariant in the simulations with η_b = 10 · η_V). Note that (8) is a spike-timing dependent plasticity rule (cf. [12]) and is non-zero only at post-synaptic spike times t, for which z_k(t) = 1. The effect of the homeostatic intrinsic plasticity rule (7) is illustrated in Figure 1C: it aims to keep the long-term average activation of each WTA neuron k close to a certain target value m_k. More precisely, if r_k is a neuron's long-term average firing rate, then homeostatic plasticity will ensure that r_k / r_net ≈ m_k. The target activations m_k ∈ (0, 1) can be chosen freely, with the obvious constraint that Σ_k m_k = 1. Note that (7) is strongly reminiscent of homeostatic intrinsic plasticity in cortex [18, 19].
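Discretized over a small time step, the rules (7) and (8) amount to a handful of update lines per network spike. A minimal numpy sketch (names and discretization are ours): between spikes, b drifts upward at rate η_b · r_net · m_k; at a spike of neuron k, b_k drops by η_b and the afferent weights of neuron k move toward the current input.

import numpy as np

def plasticity_step(k_spike, y, V, b, m, r_net, dt, eta_b, eta_V):
    # Homeostatic intrinsic rule (7), integrated over one time step dt.
    b += eta_b * r_net * m * dt
    if k_spike is not None:
        b[k_spike] -= eta_b
        # Synaptic STDP rule (8): nonzero only at postsynaptic spike times.
        sigma = 1.0 / (1.0 + np.exp(-V[k_spike]))
        V[k_spike] += eta_V * (y - sigma)
    return V, b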
We have implemented these dynamics in a computer simulation of a WTA spiking network N. Inputs y(t) were defined by translating handwritten digits 0-5 (Figure 1B) from the MNIST dataset [25] into input spike trains. Figure 1D shows that, at the end of a 10^4 s learning period, homeostatic plasticity has indeed achieved that r_k ≈ r_net · m_k. Figure 1E illustrates the patterns learned by each WTA neuron after this period (shown are the π_ki). Apparently, the WTA neurons have specialized on patterns of different intensity which correspond to different values of A_k. Figure 1F shows the output spiking behavior of the circuit before and after learning in response to a set of test patterns. The specialization to different patterns has led to a distinct sparse output code, in which any particular test pattern evokes output spikes from only one or two WTA neurons. Note that homeostasis forces all WTA neurons to participate in the competition, and thus prevents neurons from becoming underactive if their synaptic weights decrease, and from becoming overactive if their synaptic weights increase, much like the original A_k terms (which are nontrivial to compute for the network). Indeed, the learned synaptic parameters and the resulting output behavior correspond to what would be expected from an optimal learning algorithm for the mixture model (1)-(3).¹

(¹ Without adaptation of the intrinsic excitabilities, the network would start performing erroneous inference, learning would reinforce this erroneous behavior, and performance would quickly break down. We have verified this in simulations for the present WTA model: consistently across trials, a small subset of WTA neurons became dominantly active while most neurons remained silent.)

2.1 Theory for the WTA model

In the following, we develop the three theoretical key results for the WTA model (1)-(3):
• Homeostatic intrinsic plasticity finds the network response distribution q*(z|y) ∈ Q closest to the posterior distribution p(z|y, V), from a set of "homeostatic" distributions Q.
• The interplay of homeostatic and synaptic plasticity can be understood from the perspective of variational EM.
• The critical non-local terms A_k defined by (3) drop out of the network dynamics.

E-step: variational inference with homeostasis. The variational distribution q(z|y) we consider for the model (1)-(3) is a 2^N × K dimensional object. Since q describes a conditional probability distribution, it is non-negative and normalized for all y. In addition, we constrain q to be a "homeostatic" distribution q ∈ Q such that the average activation of each hidden variable (neuron) z_k equals an a-priori specified mean activation m_k under the input statistics p*(y). This is sketched in Figure 2. Formally we define the constraint set,

Q = { q : ⟨z_k⟩_{p*(y) q(z|y)} = m_k, for all k = 1 ... K },  with Σ_k m_k = 1.   (9)

Figure 2: A. Homeostatic posterior constraints in the WTA model: under the variational distribution q, the average activation of each variable z_k must equal m_k. B. For each set of synaptic weights V there exists a unique assignment of intrinsic excitabilities b, such that the constraints are fulfilled. C. Theoretical decomposition of the intrinsic excitability b_k into −A_k, b̃_k and λ_k. D. During variational EM the b_k predominantly "track" the dynamically changing non-local terms −A_k (relative comparison between two WTA neurons from Figure 1).

The constrained maximization problem q*(z|y) = arg max_{q∈Q} F(V, q(z|y)) can be solved with the help of Lagrange multipliers (cf. [20]). We find that the q* which maximizes the objective function F during the E-step (and thus minimizes the KL-divergence to the posterior p(z|y, V)) has the convenient form q*(z|y) ∝ p(z|y, V) · exp( Σ_k λ*_k z_k ) with some λ*_k. Hence, it suffices to consider distributions of the form,

q_λ(z|y) ∝ exp( Σ_k z_k ( Σ_i V_ki y_i + b̃_k − A_k + λ_k ) ),  with b_k := b̃_k − A_k + λ_k,   (10)

for the maximization problem. We identify the λ_k as the variational parameters which remain to be optimized. Note that any distribution of this form can be implemented by the spiking network N if the intrinsic excitabilities are set to b_k = −A_k + b̃_k + λ_k. The optimal variational distribution q*(z|y) = q_{λ*}(z|y) then has λ* = arg max_λ Λ(λ), i.e. the variational parameter vector which maximizes the dual [20],

Λ(λ) = Σ_k λ_k m_k − ⟨ log Σ_z p(z|y, V) exp( Σ_k λ_k z_k ) ⟩_{p*(y)}.   (11)

Due to concavity of the dual, a unique global maximizer λ* exists, and thus also the corresponding optimal intrinsic excitabilities b*_k = −A_k + b̃_k + λ*_k are unique. Hence, the posterior constraint q ∈ Q can be illustrated as in Figure 2B: for each synaptic weight configuration V there exists, under a particular input distribution p*(y), a unique configuration of intrinsic excitabilities b such that the resulting network output fulfills the homeostatic constraints. The theoretical relation between the intrinsic excitabilities b_k, the original nontrivial term −A_k and the variational parameters λ_k is sketched in Figure 2C. Importantly, while b_k is implemented in the network, A_k, λ_k and b̃_k are not explicitly represented in the implementation anymore.
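Because the dual (11) is concave, the optimum λ* can also be found by exact batch gradient ascent on a toy instance, using the gradient m_k − ⟨z_k⟩ derived next in (12). The following numpy sketch (all names ours) tilts the exact WTA posteriors of the model (1)-(3) and drives the mean activations to their targets over a finite sample of inputs:

import numpy as np

def posterior(y, V, b_tilde):
    # Exact WTA posterior p(z_k = 1 | y, V) of the mixture model (1)-(3).
    A = np.log1p(np.exp(V)).sum(axis=1)
    u = V @ y - A + b_tilde
    p = np.exp(u - u.max())
    return p / p.sum()

def fit_lambda(Y, V, b_tilde, m, eta=0.5, steps=2000):
    # Batch gradient ascent on the dual: grad_k = m_k - <z_k>_{q_lambda}.
    lam = np.zeros(len(b_tilde))
    for _ in range(steps):
        q = np.array([posterior(y, V, b_tilde) * np.exp(lam) for y in Y])
        q = q / q.sum(axis=1, keepdims=True)   # tilted posteriors q_lambda
        lam += eta * (m - q.mean(axis=0))
    return lam

rng = np.random.default_rng(0)
K, N = 3, 8
V = rng.standard_normal((K, N))
b_tilde = np.full(K, -np.log(K))               # uniform prior, sum_k e^b = 1
Y = rng.integers(0, 2, size=(50, N)).astype(float)
lam = fit_lambda(Y, V, b_tilde, m=np.full(K, 1.0 / K))
# After convergence, the tilted posteriors satisfy <z_k> = m_k on the sample.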
Finding the optimal b in the dual perspective, i.e. those intrinsic excitabilities which fulfill the homeostatic constraints, amounts to gradient ascent ∇_λ Λ(λ) on the dual, which leads to the following homeostatic learning rule for the intrinsic excitabilities,

Δb_k ∝ ∂Λ(λ)/∂λ_k = m_k − ⟨z_k⟩_{p*(y) q(z|y)}.   (12)

Note that the intrinsic homeostatic plasticity rule (7) in the network corresponds to a sample-based stochastic version of this theoretically derived adaptation mechanism (12). Hence, given enough time, homeostatic plasticity will automatically install near-optimal intrinsic excitabilities b ≈ b* and implement the correct variational distribution q*, up to stochastic fluctuations in b due to the nonzero learning rate η_b. The non-local terms A_k have entirely dropped out of the network dynamics, since the intrinsic excitabilities b_k can be arbitrarily initialized, and are then fully regulated by the local homeostatic rule, which does not require knowledge of A_k. As a side remark, note that although the variational parameters λ_k are not explicitly present in the implementation, they can be theoretically recovered from the network at any point, via λ_k = b_k + A_k − b̃_k. Notably, in all our simulations we have consistently found small absolute values of λ_k, corresponding to a small KL-divergence between q* and p.² Hence, a major effect of the local homeostatic plasticity rule during learning is to dynamically track and effectively implement the non-local terms −A_k. This is shown in Figure 2D, in which the relative excitabilities of two WTA neurons, b_k − b_j, are plotted against the corresponding non-local terms A_k − A_j over the course of learning in the first simulation (Figure 1).

(² This is assuming for simplicity uniform prior parameters b̃_k. Note that a small KL-divergence is in fact often observed during variational EM since F, which contains the negative KL-divergence, is being maximized.)

Figure 3: A. Input templates from the MNIST dataset (digits 0, 3 at a ratio 2:1, and digits 0, 3, 4 at a ratio 1:1:1) used during the first and second learning period, respectively. B. Learned patterns at the end of each learning period. C. Network performance converges in the course of learning; F is a tight lower bound to L. D. Illustration of pattern learning and re-learning dynamics in a 2-D projection of the input space. Each black dot corresponds to the pattern π_ki of one WTA neuron k. Colored dots are input samples from the training set (blue/green/red for digits 0/3/4).

M-step: interplay of synaptic and homeostatic intrinsic plasticity. During the M-step, we aim to increase the EM lower bound F in (6) w.r.t. the synaptic parameters V. Gradient ascent yields,

∇_{V_ki} F(V, q(z|y)) = ⟨ ∇_{V_ki} log p(y, z|V) ⟩_{p*(y) q(z|y)}   (13)
                      = ⟨ z_k · ( y_i − σ(V_ki) ) ⟩_{p*(y) q(z|y)},   (14)

where q is the variational distribution determined during the E-step, i.e., we can set q = q*. Note the formal correspondence of (14) with the network synaptic learning rule (8). Indeed, if the network activity implements q*, it can be shown easily that the expected update of synaptic weights due to the synaptic plasticity (8) is proportional to (14), and hence implements a stochastic version of the theoretical M-step (cf. [12]).

2.2 Dynamical properties of the Bayesian spiking network with homeostasis

To highlight a number of salient dynamical properties emerging from homeostatic plasticity in the considered WTA model, Figure 3 shows a simulation of the same network N with homeostatic dynamics as in Figure 1, only with different input statistics presented to the network, and uniform m_k = 1/K. During the first 5000 s, different writings of 0's and 3's from the MNIST dataset were presented, with 0's occurring twice as often as 3's.
Then the input distribution p*(y) abruptly switched to include also 4's, with each digit occurring equally often. The following observations can be made: due to the homeostatic constraint, each neuron responds on average to m_k · T out of T presented inputs. As a consequence, the number of neurons which specialize on a particular digit is directly proportional to the frequency of occurrence of that digit, i.e. 8:4 and 4:4:4 after the first and second learning period, respectively (Figure 3B). In general, if uniform target activations m_k are chosen, output resources are allocated precisely in proportion to input frequency. Figure 3C depicts the time course of the EM lower bound F as well as the average likelihood L (assuming uniform b̃_k) under the model during a single simulation run, demonstrating both convergence and tightness of the lower bound. As expected due to the stabilizing dynamics of homeostasis, we found variability in performance among different trials to be small (not shown). Figure 3D illustrates the dynamics of learning and re-learning of the patterns π_ki in a 2D projection of input patterns onto the first two principal components.

3 Homeostatic plasticity in recurrent spiking networks

The neural model so far was essentially a feed-forward network, in which every postsynaptic spike can directly be interpreted as one sample of the instantaneous posterior distribution [12]. The lateral inhibition served only to ensure the normalization of the posterior. We will now extend the concept of homeostatic processes as posterior constraints to the broader class of recurrent networks and sketch the utility of the developed framework beyond the regulation of intrinsic excitabilities. Recently it was shown in [9, 10] that recurrent networks of stochastically spiking neurons can in principle carry out probabilistic inference through a sampling process. At every point in time, the joint network state z(t) represents one sample of a posterior. However, [9] and [10] did not consider unsupervised learning on spiking input streams. For the following considerations, we divide the definition of the probabilistic model in two parts. First, we define a Boltzmann distribution,

p(z) = exp( Σ_k b̃_k z_k + (1/2) Σ_k Σ_{j≠k} W̃_kj z_k z_j ) / normalization,   (15)

with W̃_kj = W̃_jk, as "prior" for the hidden variables z, which will be represented by a recurrently connected network of K spiking neurons. For the purpose of this section, we treat b̃_k and W̃_kj as constants. Secondly, we define a conditional distribution in the exponential-family form [23],

p(y|z, V) = exp( f_0(y) + Σ_{k,i} V_ki z_k f_i(y) − A(z, V) ),   (16)

that specifies the likelihood of observable inputs y, given a certain network state z. This defines the generative model p(y, z|V) = p(z) p(y|z, V). We map this probabilistic model to the spiking network and define that, for every k and every point in time t, the variable z_k(t) has the value 1 if the corresponding neuron has fired within the time window (t − τ, t].
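Sampling from the Boltzmann prior (15) alone already previews the mechanics: each variable is resampled from its conditional log-odds, which is precisely the quantity the membrane potential must represent once the input-dependent terms of (16) are added. A plain Gibbs-sampling sketch (names ours):

import numpy as np

def gibbs_sample_boltzmann(b_tilde, W_tilde, n_steps, rng):
    # p(z) ~ exp( sum_k b_k z_k + 0.5 * sum_{k != j} W_kj z_k z_j ),
    # with W symmetric and zero on the diagonal.
    K = len(b_tilde)
    z = rng.integers(0, 2, size=K).astype(float)
    samples = []
    for _ in range(n_steps):
        for k in range(K):
            # Conditional log-odds of z_k given all other variables.
            log_odds = b_tilde[k] + W_tilde[k] @ z
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-log_odds)))
        samples.append(z.copy())
    return np.array(samples)

rng = np.random.default_rng(0)
K = 4
W = rng.standard_normal((K, K))
W = 0.5 * (W + W.T)
np.fill_diagonal(W, 0.0)
b = rng.standard_normal(K)
S = gibbs_sample_boltzmann(b, W, n_steps=5000, rng=rng)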
In accordance with the neural sampling theory, in order for a spiking network to sample from the correct posterior p(z|y, V) ∝ p(z) p(y|z, V) given the input y, each neuron must compute in its membrane potential the log-odds [9],

u_k = log [ p(z_k = 1 | z_\k, V) / p(z_k = 0 | z_\k, V) ]
    = Σ_i V_ki f_i(y)  +  ( −A_k(V) + b̃_k )  +  Σ_{j≠k} ( −A_kj(V) + W̃_kj ) z_j  +  ...,   (17)

where the three groups of terms are the feedforward drive, the intrinsic excitability, and the recurrent weights, and z_\k = (z_1, ..., z_{k−1}, z_{k+1}, ..., z_K)^T. The A_k, A_kj, ... are given by the decomposition of A(z, V) along the binary combinations of z as,

A(z, V) = A_0(V) + Σ_k z_k A_k(V) + (1/2) Σ_k Σ_{j≠k} z_k z_j A_kj(V) + ...   (18)

Note that we do not aim at this point to give learning rules for the prior parameters b̃_k and W̃_kj. Instead we proceed as in the last section and specify a-priori desired properties of the average network response under the input distribution p*(y),

c_kj = ⟨z_k z_j⟩_{p*(y) q(z|y)}   and   m_k = ⟨z_k⟩_{p*(y) q(z|y)}.   (19)
It is ongoing work to find further biologically realistic network models in the sense of this theory and to assess their computational capabilities through computer experiments. 4 Discussion Complex and non-local computations, which appear during probabilistic inference and learning, arguably constitute one of the cardinal challenges in the development of biologically realistic Bayesian spiking network models. In this paper we have introduced homeostatic plasticity, which to the best of our knowledge had not been considered before in the context of EM in spiking networks, as a theoretically grounded approach to stabilize and facilitate learning in a large class of network models. Our theory complements previously proposed neural mechanisms and provides, in particular, a simple and biologically realistic alternative to the assumptions on the input distribution made in [12] and [15]. Indeed, our results challenge the hypothesis of [15] that feedforward inhibition is critical for correctly learning the structure of the data with biologically plausible plasticity rules. More generally, it turns out that the enforcement of a balancing posterior constraint often simplifies inference in recurrent spiking networks by eliminating nontrivial computations. Our results suggest a crucial role of homeostatic plasticity in the Bayesian brain: to constrain activity patterns in cortex to assist the autonomous optimization of an internal model of the environment. Acknowledgments. Written under partial support by the European Union - projects #FP7-269921 (BrainScaleS), #FP7-216593 (SECO), #FP7-237955 (FACETS-ITN), #FP7-248311 (AMARSi), #FP7-216886 (PASCAL2) - and the Austrian Science Fund FWF #I753-N23 (PNEUMA). 8 References [1] K. P. K?ording and D. M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(6971):244? 247, 2004. [2] G. Orban, J. Fiser, R.N. Aslin, and M. Lengyel. Bayesian learning of visual chunks by human observers. Proceedings of the National Academy of Sciences, 105(7):2745?2750, 2008. [3] J. Fiser, P. Berkes, G. Orban, and M. Lengyel. Statistically optimal perception and learning: from behavior to neural representation. Trends in Cogn. Sciences, 14(3):119?130, 2010. [4] P. Berkes, G. Orban, M. Lengyel, and J. Fiser. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science, 331:83?87, 2011. [5] T. L. Griffiths and J. B. Tenenbaum. Optimal predictions in everyday cognition. Psychological Science, 17(9):767?773, 2006. [6] D. E. Angelaki, Y. Gu, and G. C. DeAngelis. Multisensory integration: psychophysics, neurophysiology and computation. Current opinion in neurobiology, 19(4):452?458, 2009. [7] S. Deneve. Bayesian spiking neurons I: Inference. Neural Computation, 20(1):91?117, 2008. [8] A. Steimer, W. Maass, and R.J. Douglas. Belief propagation in networks of spiking neurons. Neural Computation, 21:2502?2523, 2009. [9] L. Buesing, J. Bill, B. Nessler, and W. Maass. Neural dynamics as sampling: A model for stochastic computation in recurrent networks of spiking neurons. PLoS Comput Biol, 7(11):e1002211, 11 2011. [10] D. Pecevski, L. Buesing, and W. Maass. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons. PLoS Comput Biol, 7(12), 12 2011. [11] S. Deneve. Bayesian spiking neurons II: Learning. Neural Computation, 20(1):118?145, 2008. [12] B. Nessler, M. Pfeiffer, and W. Maass. STDP enables spiking neurons to detect hidden causes of their inputs. In Proc. 
4 Discussion

Complex and non-local computations, which appear during probabilistic inference and learning, arguably constitute one of the cardinal challenges in the development of biologically realistic Bayesian spiking network models. In this paper we have introduced homeostatic plasticity, which to the best of our knowledge had not been considered before in the context of EM in spiking networks, as a theoretically grounded approach to stabilize and facilitate learning in a large class of network models. Our theory complements previously proposed neural mechanisms and provides, in particular, a simple and biologically realistic alternative to the assumptions on the input distribution made in [12] and [15]. Indeed, our results challenge the hypothesis of [15] that feedforward inhibition is critical for correctly learning the structure of the data with biologically plausible plasticity rules. More generally, it turns out that the enforcement of a balancing posterior constraint often simplifies inference in recurrent spiking networks by eliminating nontrivial computations. Our results suggest a crucial role of homeostatic plasticity in the Bayesian brain: to constrain activity patterns in cortex to assist the autonomous optimization of an internal model of the environment.

Acknowledgments. Written under partial support by the European Union projects #FP7-269921 (BrainScaleS), #FP7-216593 (SECO), #FP7-237955 (FACETS-ITN), #FP7-248311 (AMARSi), #FP7-216886 (PASCAL2), and the Austrian Science Fund FWF #I753-N23 (PNEUMA).

References
[1] K. P. Körding and D. M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(6971):244–247, 2004.
[2] G. Orban, J. Fiser, R. N. Aslin, and M. Lengyel. Bayesian learning of visual chunks by human observers. Proceedings of the National Academy of Sciences, 105(7):2745–2750, 2008.
[3] J. Fiser, P. Berkes, G. Orban, and M. Lengyel. Statistically optimal perception and learning: from behavior to neural representation. Trends in Cognitive Sciences, 14(3):119–130, 2010.
[4] P. Berkes, G. Orban, M. Lengyel, and J. Fiser. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science, 331:83–87, 2011.
[5] T. L. Griffiths and J. B. Tenenbaum. Optimal predictions in everyday cognition. Psychological Science, 17(9):767–773, 2006.
[6] D. E. Angelaki, Y. Gu, and G. C. DeAngelis. Multisensory integration: psychophysics, neurophysiology and computation. Current Opinion in Neurobiology, 19(4):452–458, 2009.
[7] S. Deneve. Bayesian spiking neurons I: Inference. Neural Computation, 20(1):91–117, 2008.
[8] A. Steimer, W. Maass, and R. J. Douglas. Belief propagation in networks of spiking neurons. Neural Computation, 21:2502–2523, 2009.
[9] L. Buesing, J. Bill, B. Nessler, and W. Maass. Neural dynamics as sampling: A model for stochastic computation in recurrent networks of spiking neurons. PLoS Computational Biology, 7(11):e1002211, 2011.
[10] D. Pecevski, L. Buesing, and W. Maass. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons. PLoS Computational Biology, 7(12), 2011.
[11] S. Deneve. Bayesian spiking neurons II: Learning. Neural Computation, 20(1):118–145, 2008.
[12] B. Nessler, M. Pfeiffer, and W. Maass. STDP enables spiking neurons to detect hidden causes of their inputs. In Proc. of NIPS 2009, volume 22, pages 1357–1365. MIT Press, 2010.
[13] J. Brea, W. Senn, and J.-P. Pfister. Sequence learning with hidden units in spiking neural networks. In Proc. of NIPS 2011, volume 24, pages 1422–1430. MIT Press, 2012.
[14] D. J. Rezende, D. Wierstra, and W. Gerstner. Variational learning for recurrent spiking networks. In Proc. of NIPS 2011, volume 24, pages 136–144. MIT Press, 2012.
[15] C. Keck, C. Savin, and J. Lücke. Feedforward inhibition and synaptic scaling: two sides of the same coin? PLoS Computational Biology, 8(3):e1002432, 2012.
[16] J. B. Tenenbaum, C. Kemp, T. L. Griffiths, and N. D. Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279–1285, 2011.
[17] J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner. A wafer-scale neuromorphic hardware system for large-scale neural modeling. In Proc. of ISCAS 2010, pages 1947–1950, 2010.
[18] N. S. Desai, L. C. Rutherford, and G. G. Turrigiano. Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature Neuroscience, 2(6):515, 1999.
[19] A. Watt and N. Desai. Homeostatic plasticity and STDP: keeping a neuron's cool in a fluctuating world. Frontiers in Synaptic Neuroscience, 2, 2010.
[20] J. Graca, K. Ganchev, and B. Taskar. Expectation maximization and posterior constraints. In Proc. of NIPS 2007, volume 20. MIT Press, 2008.
[21] R. Jolivet, A. Rauch, H. R. Lüscher, and W. Gerstner. Predicting spike timing of neocortical pyramidal neurons by simple threshold models. Journal of Computational Neuroscience, 21:35–49, 2006.
[22] E. P. Simoncelli and D. J. Heeger. A model of neuronal responses in visual area MT. Vision Research, 38(5):743–761, 1998.
[23] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[24] M. Sato. Fast learning of on-line EM algorithm. Technical report, ATR Human Information Processing Research Laboratories, 1999.
[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, volume 86, pages 2278–2324, 1998.
3,970
4,594
Controlled Recognition Bounds for Visual Learning and Exploration

Vasiliy Karasev¹   Alessandro Chiuso²   Stefano Soatto¹
¹ University of California, Los Angeles   ² University of Padova

Abstract

We describe the tradeoff between the performance in a visual recognition problem and the control authority that the agent can exercise on the sensing process. We focus on the problem of "visual search" of an object in an otherwise known and static scene, propose a measure of control authority, and relate it to the expected risk and its proxy (conditional entropy of the posterior density). We show this analytically, as well as empirically by simulation using the simplest known model that captures the phenomenology of image formation, including scaling and occlusions. We show that a "passive" agent given a training set can provide no guarantees on performance beyond what is afforded by the priors, and that an "omnipotent" agent, capable of infinite control authority, can achieve arbitrarily good performance (asymptotically). In between these limiting cases, the tradeoff can be characterized empirically.

1 Introduction

We are interested in visual learning for recognition of objects and scenes embedded in physical space. Rather than using datasets consisting of collections of isolated snapshots, however, we wish to actively control the sensing process during learning. This is because, in the presence of nuisance factors involving occlusion and scale changes, learning requires mobility [1]. Visual learning is thus a process of discovery, literally uncovering occluded portions of an object or scene, and viewing it from close enough that all structural details are revealed.¹ We call this phase of learning exploration or mapping, accomplished by actively controlling the sensor motion within a scene, or by manipulating an object so as to discover all aspects.²

Once exploration has been performed, one has a model (or "map" or "representation") of the scene or object of interest. One can then attempt to detect, localize or recognize a particular object or scene, or a class of them, provided intra-class variability has been exposed during exploration. This phase can yield localization (where one wishes to recognize a portion of a mapped scene and, as a byproduct, infer the pose relative to the map) or search (where a particular object mapped during the exploration phase is detected and localized within an otherwise known scene). This can also be interpreted as a change detection problem, where one wishes to revisit a known map to detect changes.

¹ It has been shown [1] that mobility is required in order to reduce the Actionable Information Gap, the difference between the complexity of a maximal invariant of the data and the minimal sufficient statistic of a complete representation of the underlying scene.
² Note that we are not suggesting that one should construct a three-dimensional (3-D) model of an object or a scene for recognition, as opposed to using collections of 2-D images. From an information perspective, there is no gain in replacing a collection of 2-D images with a 3-D model computed from them. What matters is how these images are collected. The multiple images must portray the same scene or object, lest one cannot attribute the variability in the data to nuisance factors as opposed to intrinsic variability of the object of interest. The multiple images must enable establishing correspondence between different images of the same scene. Temporal continuity enables that.
In the case where a known object is sought in an unknown map, exploration and search have to be conducted simultaneously. Within this scenario, exploration and search can be framed as optimal control and optimal stopping time problems. These relate to active vision (next-best-view generation), active learning, robotic motion planning, sequential decision making in the setting of partially observable Markov decision processes (POMDP) and a number of related fields (including Information Bottleneck and Value of Information), with a vast literature that we cannot extensively review here. As often in this class of problems, inference algorithms are essentially intractable, so we wish to design surrogate tasks and prove performance bounds to ensure desirable properties of the surrogate solution.

In this manuscript we consider the problem of detecting and estimating discrete parameters of an unknown object in a known environment. To this purpose we:

1. Describe the simplest model that includes scaling and occlusion nuisances, a two-dimensional "cartoon flatland," and a test suite to perform simulation experiments. We derive an explicit probability model to compute the posterior density given photometric measurements.
2. Discuss the tradeoff between performance in a visual decision task and the control authority that the explorer possesses. This tradeoff is akin to the tradeoff between rate and distortion in a communication system, but it pertains to decision and control tasks, as opposed to the transmission of data. We characterize this tradeoff for the simple case of a static environment, where control authority relates to reachability and energy.
3. Discuss and test algorithms for visual search based on the maximization of the conditional entropy of future measurements and the proxies of this quantity. These algorithms can be used to locate an unknown object in an unknown position of a known environment, or to perform change detection in an otherwise known map, for the purpose of updating it.
4. Provide experimental validation of the algorithms, including regret and expected exploration length.

1.1 Related prior work

Active search and recognition of objects in the scene has been one of the mainstays of Active Perception in the eighties [2, 3], and has recently resurged (see [4] and references therein). The problem can be formulated as a POMDP [5], solving which requires developing approximate, near-optimal policies. Active recognition using next-best-view generation and object appearance is discussed in [6], where the authors use PCA to embed object images in a linear, low-dimensional space. The scheme does not incorporate occlusions or scale changes. More recently, information-driven sensor control for object recognition was used in [7, 8, 9], who deal with visual and sonar sensors, but take features (e.g. SIFT, SURF) to be the observed data. A utility function that accounts for occlusions, viewing angle, and distance to the object is proposed in [10], whose authors aim to actively learn object classifiers during the training stage. Exploration and learning of 3D object surface models by robotic manipulation is discussed in [11]. The case of object localization (and tracking, if the object is moving) is discussed in [12]; an information-theoretic approach for solving this problem using a sensor network is described in [13]. Both works use realistic, nonlinear sensor models, which however are different from photometric sensors and are not affected by the same nuisances.
Typically, the information-theoretic utility functions used in these problems are submodular and thus can be efficiently optimized by greedy heuristics [14, 15]. With regard to models, our work is different in several aspects: instead of choosing the next best view on a sphere centered at the object, we model a cluttered environment where the object of interest occupies a negligible volume and is therefore fully occluded when viewed from most locations. Second, we wish to operate in a continuous environment, rather than in a world that is discretized at the outset. Third, given the significance of quantization-scale and occlusions in a visual recognition task, we model the sensing process such that it accounts for both.

2 Preliminaries

Let y ∈ Y denote data³ (measurements) and x ∈ X a hidden class variable from a finite alphabet that we are interested in inferring. If the prior p(x) and the conditional distributions p(y|x) are known, the expected risk can be written as

  P_e = ∫ p(y) (1 − max_i p(x_i | y)) dy        (1)

and is minimized by Bayes' decision rule, which chooses the class label with maximum a posteriori probability. If the distributions above are estimated empirically, the expected risk depends on the data set. We are interested in controlling the data acquisition process so as to make this risk as small as possible. We use the problem of visual search (finding a not previously seen object in a scene) as a motivation. It is related to active learning and experimental design. In order to enforce temporal continuity, we model the search agent ("explorer") as a dynamical system of the form:

  ξ_{t+1} = ξ_t
  g_{t+1} = f(g_t, u_t)        (2)
  y_t = h(g_t, ξ) + n_t

where g_t denotes the pose state at time t, u_t denotes the control, and ξ denotes the scene that describes the search environment: a collection of objects (simply-connected surfaces supporting a radiance function) of which the target x is one instance. Constraints on the controller enter through f; photometric nuisances, quantization and occlusions enter through the measurement map h. Additive and unmodeled phenomena that affect observed data are incorporated into n_t, the "noise" term.

³ Random variables will be displayed in boldface (e.g. y), and realizations in regular fonts (e.g. y).

2.1 Signal models

The simplest model that includes both scaling and occlusion nuisances is the "cartoon flatland," where a bounded subset of R² is populated by self-luminous line segments, corresponding to clutter objects. We denote an instance of this model, the scene, by ξ = (γ_1, ..., γ_C), which is a collection of C objects γ_k. The number of objects in the scene, C, is the clutter density parameter, which can possibly grow to be infinite in the limit. Each object is described by its center (c_k), length (l_k), binary orientation (o_k), and a radiance function ρ_k supported on the segment. This is the "texture" or "appearance" of the object, which in the simplest case can be assumed to be a constant function:

  γ_k = (c_k, l_k, o_k, ρ_k) ∈ [0, 1]³ × {0, 1} × [R² → R⁺]        (3)

An agent can move continuously throughout the search domain. We take the state g_t ∈ R² to be its current position, u_t ∈ R² the currently exerted move, and assume trivial dynamics: g_{t+1} = g_t + u_t. More complex agents where g_t ∈ SE(3) can be incorporated without conceptual difficulties. The measurement model is that of an omnidirectional m-pixel camera, with each entry of y_t ∈ R^m in (2) given by:

  y_t(i) = ∫_{(i−1/2)·2π/m}^{(i+1/2)·2π/m} ∫_0^∞ ρ_{ℓ(θ,g_t)}(z) dρ dθ + n_t(i),   with z = (ρ cos(θ), ρ sin(θ))        (4)

where 2π/m is the angle subtended by each pixel. The integrand is a collection of radiance functions which are supported on objects (line segments). Because of occlusions, only the closest objects that intersect the pre-image contribute to the image. The index of the object (clutter or object of interest) that contributes to the image is denoted by ℓ(θ, g_t) and is defined as:

  ℓ(θ, g_t) = arg min_k { λ_k | ∃ (s_k, λ_k) ∈ [−l_k/2, l_k/2] × R⁺  s.t.  c_k + s_k (o_k, 1−o_k)ᵀ = g + ĝ(θ) λ_k }        (5)

Above, g and ĝ(θ) = (cos(θ), sin(θ)) are the current position and direction, respectively; c_k, l_k, and o_k are the k-th segment's center, length, and orientation. The condition c_k + s_k (o_k, 1−o_k)ᵀ = g + ĝ(θ) λ_k encodes the intersection of the ray g + ĝ(θ) λ with a point on segment k. The segment closest to the viewer, i.e. the one that is visible, has the smallest λ_k. Integration over 2π/m in (4) accounts for quantization, and the layer model (5) describes occlusions. While the measurement model is non-trivial (in particular, it is not differentiable), it is the simplest that captures the nuisance phenomenology. All unmodeled phenomena are lumped in the additive term n_t, which we assume to be zero-mean Gaussian "noise" with covariance σ²I.

In order to design control sequences that minimize risk, we need to evaluate the uncertainty of future measurements, those we have not yet measured, which are a function of the control action to be taken. To that end, we write the probability model for computing the posterior and the predictive density. We first describe the general case of visual exploration where the environment is unknown. We begin with a noninformative prior for objects k = 1, ..., C:

  p(γ_k) = p(c_k) p(l_k) p(o_k) p(ρ_k) = U[0, N_c]² × Exp(λ) × Ber(1/2) × U[0, N_ρ]        (6)

where U, Exp and Ber denote uniform, exponential, and Bernoulli distributions parameterized by N_c, λ, and N_ρ. Then p(ξ) = p(γ_1, ..., γ_C). The posterior is then computed by Bayes' rule⁴:

  p(ξ | y^t, g^t) ∝ ∏_{τ=1}^t p(y_τ | g_τ, ξ) p(ξ) = ∏_{τ=1}^t N(y_τ − h(g_τ, ξ); σ²I) p(ξ)        (7)

Above, N(z; Σ) denotes the value at z of a zero-mean Gaussian density with covariance Σ. The posterior can be decomposed as a product of likelihoods since knowledge of the environment (ξ) and the location (g_t) is sufficient to predict the measurement y_t up to Gaussian noise. The predictive distribution (the distribution of the next measurement conditioned on the past) is computed by marginalization:

  p(y_{t+1} | y^t, g^t, g_{t+1}) = ∫ p(ξ | y^t, g^t, g_{t+1}) p(y_{t+1} | ξ, y^t, g_{t+1}) dξ        (8)
                                 = ∫ p(ξ | y^t, g^t) N(y_{t+1} − h(g_{t+1}, ξ); σ²I) dξ        (9)

The marginalization above is essentially intractable. In this paper we focus on visual search of a particular object in an otherwise known environment, so the marginalization is only performed with respect to a single object in the environment, x, whose parameters are discrete, but otherwise analogous to (6):

  p(x) = U{0, ..., N_c − 1}² × Exp(λ) × Ber(1/2) × U{0, ..., N_ρ − 1}        (10)

We denote by x_i, i = 1, ..., |X|, the object with parameters (c_i, l_i, o_i, ρ_i) and write ξ_i = (x_i, γ_1, ..., γ_C) to denote the scene with known clutter objects γ_1, ..., γ_C augmented by the unknown object x_i. In this case, we have:

  p(x_i | y^t, g^t) ∝ ∏_{τ=1}^t N(y_τ − h(g_τ, ξ_i); σ²I) p(x_i)        (11)

  p(y_{t+1} | y^t, g^t, g_{t+1}) = Σ_{i=1}^{|X|} p(x_i | y^t, g^t) N(y_{t+1} − h(g_{t+1}, ξ_i); σ²I)        (12)

⁴ A superscript, as in y^t, indicates the history of y up to t, i.e. y^t = (y_1, ..., y_t), and y_t^{t+T} = (y_t, ..., y_{t+T}).
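To make the update equations (11)-(12) concrete, the following Python sketch implements the discrete posterior update and the Gaussian-mixture predictive density for the cartoon-flatland model. It is a minimal illustration under stated assumptions, not the authors' code: render() is a hypothetical stand-in for the measurement map h (a crude ray-caster over the known clutter plus one hypothesized object, cf. (4)-(5)), and all names and parameter choices are ours.

import numpy as np

def render(pose, segments, m=30):
    """Hypothetical stand-in for h(g, xi): omnidirectional m-pixel camera.
    segments: list of (center(2,), length, orientation in {0,1}, radiance).
    Per pixel, returns the radiance of the nearest intersecting segment (eq. 5)."""
    img = np.zeros(m)
    angles = (np.arange(m) + 0.5) * 2 * np.pi / m
    for i, th in enumerate(angles):
        d = np.array([np.cos(th), np.sin(th)])
        best_lam, best_rho = np.inf, 0.0
        for (c, l, o, rho) in segments:
            axis = np.array([o, 1 - o], dtype=float)   # segment direction
            A = np.column_stack([axis, -d])            # solve c + s*axis = pose + lam*d
            if abs(np.linalg.det(A)) < 1e-12:
                continue
            s, lam = np.linalg.solve(A, pose - c)
            if abs(s) <= l / 2 and 0 < lam < best_lam:
                best_lam, best_rho = lam, rho          # closest segment wins (occlusion)
        img[i] = best_rho
    return img

def log_posterior(log_prior, poses, measurements, hypotheses, clutter, sigma):
    """Eq. (11): accumulate Gaussian log-likelihoods for each discrete hypothesis x_i."""
    logp = log_prior.copy()
    for g, y in zip(poses, measurements):
        for i, x in enumerate(hypotheses):
            pred = render(g, clutter + [x])
            logp[i] += -0.5 * np.sum((y - pred) ** 2) / sigma ** 2
    return logp - np.logaddexp.reduce(logp)            # normalize in log domain

def predictive_samples(logp, g_next, hypotheses, clutter, sigma, n=100, rng=None):
    """Eq. (12): draw samples of y_{t+1} from the Gaussian-mixture predictive."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(hypotheses), size=n, p=np.exp(logp))
    means = {i: render(g_next, clutter + [hypotheses[i]]) for i in set(idx)}
    return np.stack([means[i] + sigma * rng.standard_normal(means[i].shape)
                     for i in idx])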
3 The role of control in active recognition

It is clear from equations (11) and (12) that the history of the agent's positions g^t plays a key role in the process of acquiring new information on the object of interest x for the purpose of recognition. This is encoded by the conditional density (11). In the context of the identification of the model (2), one would say that the data y^t (a function of the scene and the history of positions) must be sufficiently informative [16] on x, meaning that y^t contains enough information to estimate x; this can be measured e.g. through the Fisher information matrix if x is deterministic but unknown, or by the posterior p(x|y^t) in a probabilistic setting. This depends upon whether u_t is sufficiently exciting, a "richness" condition that has been extensively used in the identification and adaptive control literature [17, 18], and which guarantees that the state trajectory g^t explores the space of interest. If this condition is not satisfied, there are limitations on the performance that can be attained during the search process. There are two extreme cases which set an upper and a lower bound on the recognition error:

1. Passive recognition: there is no active control, and instead a collection of vantage points g^t is given a priori. Under this scenario it is easy to prove that, averaging over the possible scenes and initial agent locations, the probability of error approaches chance (i.e. that given by the prior distribution) as the clutter density and/or the environment volume increase.
2. Full control on g^t: if the control action can take the "omnipotent agent" anywhere, and infinite time is available to collect measurements, then the conditional entropy H(x|y^t) decreases asymptotically to zero, thus providing arbitrarily good recognition rates in the limit.

In general, there is a tradeoff between the ability to gather new information through suitable control actions, which we name "control authority", and the recognition rate. In the sequel we shall propose a measure for the "control authority" over the sensing process; later in the paper we will consider conditional entropy as a proxy (upper bound) on the probability of error and evaluate empirically how control authority affects the conditional entropy decrease.

3.1 Control authority

Unlike the passive case, in the controlled scenario time plays an important role. This happens in two ways. One is related to the ability to visit previously unexplored regions and is therefore related to the reachable space under input and time constraints; the other is the effect of noise, which needs to be averaged. If objects in the scene move, this averaging can be done only at an expense in energy, and achieving asymptotic performance may not be possible under control limitations. This considerably more complex scenario is beyond our scope in this paper; we focus on the simplest case of a static environment. Control authority depends on (i) the controller u, as measured for instance by a norm⁵ ‖u‖ : U[0,T] → R, and (ii) the geometry of the state space, the input-to-state map and the environment. We propose to measure control authority in the following manner: associate to each pair of locations in the state space (g_o, g_f) and a given time horizon T the cost ‖u‖ required to move from g_o at time t = 0 to g_f at time t = T along a minimum cost path, i.e.

  J_ξ(g_o, g_f, T) := inf { ‖u‖ : g_u(0) = g_o, g_u(T) = g_f }        (13)

where g_u(t) is the state vector at time t under control u. If g_f is not reachable from g_o in time T, we set J_ξ(g_o, g_f, T) = ∞. This will depend on the dynamical properties of the agent ġ = f(g, u) (or g_{t+1} = f(g_t, u_t) in discrete time) as well as on the scene ξ through which the agent has to navigate while avoiding obstacles. The control authority (CA) can be measured via the volume of the reachable space for a fixed control cost, and will be a function of the initial configuration g_0 and of the scene ξ, i.e.

  CA(k, g_o, ξ) := Vol{ g_f : J_ξ(g_0, g_f, k) ≤ 1 }        (14)

If instead one is interested in average performance (e.g. w.r.t. the possible scene distributions with fixed clutter density), a reasonable measure is the average of the smallest volume (as g_0 varies) of the reachable space with a unit cost input:

  CA(k) := E_ξ [ inf_{g_o} CA(k, g_o, ξ) ]        (15)

If planning on an indefinitely long time horizon is allowed, then one would minimize J(g_o, g_f, T) over the time T:

  J(g_o, g_f) := inf_T J(g_o, g_f, T)        (16)

with

  CA_∞ := inf_{g_o} Vol{ g_f : J(g_o, g_f) ≤ 1 }        (17)

The figures CA(k, g_o, ξ) in (14), CA(k) in (15) and CA_∞ in (17) are proxies of the exploration ability which, in turn, is related to the ability to gather new information on the task at hand. The data acquisition process can be regarded as an experiment design problem [16] where the choice of the control signal guides the experiment. Control authority, as defined above, measures how much freedom one has in the sampling procedure; the larger the CA, the more freedom the designer has. Hence, having fixed (say) the number of snapshots of the scene, and considering the time interval over which these snapshots can be taken, the designer is trying to maximize the information the data contain on the task (making a decision on the class label); this information is of course a nondecreasing function of CA. More control authority corresponds to more freedom in the choice of which samples one is taking (from which location and at which scale). Therefore the risk, considered against CA(k) in (15), CA(k, g_o, ξ) in (14) or CA_∞ in (17), will follow a surface that depends on the clutter: for any given clutter (or clutter density), the risk will be a monotonically non-increasing function of control authority CA(k). This is illustrated in Fig. 4.

⁵ This could be, for instance, total energy, (average) power, maximum amplitude and so on. We can assume that the control is such that ‖u‖ ≤ 1.
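The reachable-volume measures (14)-(15) lend themselves to a simple Monte Carlo estimate under the trivial dynamics g_{t+1} = g_t + u_t used here. The sketch below is our own illustration, not the authors' code: it declares an endpoint unreachable when the straight-line path to it crosses a clutter segment, a simplifying assumption standing in for the unspecified minimum-cost path planner.

import numpy as np

def segment_blocks(p, q, segments):
    """Check whether the straight path p -> q crosses any clutter segment."""
    d = q - p
    for (c, l, o, _) in segments:
        axis = np.array([o, 1 - o], dtype=float)
        A = np.column_stack([d, -axis])        # solve p + t*d = c + s*axis
        if abs(np.linalg.det(A)) < 1e-12:
            continue
        t, s = np.linalg.solve(A, c - p)
        if 0 < t < 1 and abs(s) <= l / 2:
            return True
    return False

def control_authority(pose, scene, k, n_samples=2000, box=1.0, rng=None):
    """Monte Carlo estimate of CA(k, g_o, xi) in eq. (14): fraction of the
    [0, box]^2 domain reachable within path length k along obstacle-free
    straight lines (a crude surrogate for the unit-cost reachable set)."""
    rng = rng or np.random.default_rng(0)
    pts = rng.uniform(0, box, size=(n_samples, 2))
    reachable = [np.linalg.norm(q - pose) <= k and not segment_blocks(pose, q, scene)
                 for q in pts]
    return box ** 2 * np.mean(reachable)

The average measure CA(k) in (15) is then obtained by sampling scenes from the clutter prior, minimizing over a grid of initial poses, and averaging.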
4 Control policy

Given g_t, ξ, and a finite control authority CA(k, g_t, ξ), in order to minimize the average risk (1) with respect to a sequence of control actions we formulate a finite k-step horizon optimal control problem:

  u_t^{t+k−1} = arg min_{u_t^{t+k−1}} ∫ p(y_{t+1}^{t+k} | y^t, u_t^{t+k−1}) (1 − max_i p(x_i | y^t, y_{t+1}^{t+k}, u_t^{t+k−1})) dy_{t+1}^{t+k}        (18)

which is unfortunately intractable. As is standard, we can settle for the greedy k = 1 case:

  u*_t = arg min_{u_t} ∫ p(y_{t+1} | y^t, u_t) (1 − max_i p(x_i | y^t, y_{t+1}, u_t)) dy_{t+1}        (19)

but it is still often impractical. We relax the problem further by choosing to minimize an upper bound on the Bayesian risk, of which a convenient one is the conditional entropy (see [19], which shows P_e ≤ ½ H(x|y)). This implies that the control action can be chosen by entropy minimization:

  u*_t = arg min_{u_t} H(x | y^t, y_{t+1}, u_t)        (20)

Using the chain rules of entropy, we can rewrite the minimization of H(x | y^t, y_{t+1}, u_t) as a maximization of the conditional entropy of the next measurement:

  u*_t = arg min_{u_t} H(x | y^t, y_{t+1}, u_t)
       = arg min_{u_t} [ H(x | y^t) − I(y_{t+1}; x | y^t, u_t) ]        (21)
       = arg max_{u_t} [ H(y_{t+1} | y^t, u_t) − H(y_{t+1} | y^t, u_t, x) ]        (22)
       = arg max_{u_t} H(y_{t+1} | y^t, u_t)        (23)

because H(y_{t+1} | y^t, u_t, x) = H(n_t) is due to the Gaussian noise, since y_{t+1} = h(g_{t+1}; ξ) + n_{t+1} and both g_{t+1} and ξ are known (the only unknown object in the scene is x, and it is conditioned on). H(y_{t+1} | y^t, u_t) is the entropy of a Gaussian mixture distribution, which can be easily approximated by Monte Carlo, and for which both lower [20] and upper bounds [21] are known.
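As a concrete rendering of the quantity maximized in (23), the sketch below estimates the entropy of the Gaussian mixture (12) by Monte Carlo and scores a set of candidate next positions, as in the "bandit" variant described in the next paragraph. This is our own minimal sketch, not the authors' implementation; the mixture means would come from the hypothetical render() helper of the earlier snippet, and the posterior weights are assumed normalized.

import numpy as np

def mixture_entropy_mc(means, weights, sigma, n=500, rng=None):
    """Monte Carlo estimate of H(y) for the mixture sum_i w_i N(mu_i, sigma^2 I):
    draw y ~ mixture, average -log p(y)."""
    rng = rng or np.random.default_rng(0)
    means = np.asarray(means); weights = np.asarray(weights)
    m = means.shape[1]
    idx = rng.choice(len(weights), size=n, p=weights)
    ys = means[idx] + sigma * rng.standard_normal((n, m))
    const = -0.5 * m * np.log(2 * np.pi * sigma ** 2)
    d2 = ((ys[:, None, :] - means[None, :, :]) ** 2).sum(-1)     # (n, K) squared distances
    logp = np.logaddexp.reduce(np.log(weights)[None, :] - 0.5 * d2 / sigma ** 2,
                               axis=1) + const
    return -logp.mean()

def best_next_position(grid, weights, hypotheses, clutter, sigma):
    """Eq. (23) in 'bandit' form: score every candidate location and return
    the one whose predicted measurement is most uncertain."""
    scores = []
    for g in grid:
        means = [render(g, clutter + [x]) for x in hypotheses]   # render() from the earlier sketch
        scores.append(mixture_entropy_mc(means, weights, sigma))
    return grid[int(np.argmax(scores))]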
Since the controller has energy limitations, i.e. it is unable to traverse the environment in one step, the optimization is taken over a small ball in R² centered at the current location g_t. In practice, the set of controls needs to be discretized and the entropy computed for each action. However, rather than myopically choosing the next control, we instead choose the next target position, in a "bandit" approach [22, 23]: the maximization in (23) is taken with respect to all locations in the world, rather than the set of controls (locations reachable in one step), and the agent takes the minimum energy path toward the most informative location. Since this location is typically not reachable in a single step, one can adopt a "stubborn" strategy, which follows the planned path to the target location before choosing the next action, or an "indecisive" one, which replans as soon as additional information becomes available as a consequence of motion. We demonstrate the characteristics of conditional entropy as a criterion for planning in Fig. 1.

5 Experiments

In addition to evaluating the "indecisive" and "stubborn" strategies, we also consider several different uncertainty measures. Section 4 provided arguments for H(y_{t+1} | y^t, g) (a "max-ent" approach), which is a proxy for the minimization of Bayesian risk. Another option is to maximize the covariance of p(y_{t+1} | y^t, g) ("max-var"), for example due to reduced computational cost. Alternatively, if we do not wish to hypothesize future measurements and compute p(y_{t+1} | y^t, g), we may search by approaching the mode of the posterior distribution p(x | y^t) ("max-posterior"). To test the average performance of these strategies, we consider search in 100 environment instances, each containing 40 known clutter objects and one unknown object. Clutter objects are sampled from the continuous prior distribution (6), and the unknown object is chosen from the prior (10) discretized to |X| ≈ 9000. The agent's sensor has m = 30 pixels, with the additive noise set to half of the difference between object colors. The conditional entropy of the next measurement, H(y_{t+1} | y^t, g_{t+1}), is calculated over the entire map, on a 16×16 grid. Search is terminated once the residual entropy falls below a threshold value: H(x | y^t) < 0.001. We are interested in the average search time (expressed in terms of number of steps) and the average regret, which we define as the excess fraction of the minimum energy path to the center of the unknown object (c_0) that the explorer takes:

  regret = ( c_u(x_o, c_0) − J(x_o, c_0) ) / J(x_o, c_0)

where c_u(x_o, c_0) denotes the cost of the path actually traveled.
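Putting the pieces together, the "indecisive" strategy reduces to a short loop: replan the max-entropy target after every step, move one bounded step toward it, measure, update the posterior, and stop once the residual entropy falls below the threshold. The sketch below is our own condensation of that procedure, reusing the hypothetical helpers render(), log_posterior() and best_next_position() defined in the earlier snippets; it is not the authors' test harness.

import numpy as np

def residual_entropy(logp):
    """H(x | y^t) for a normalized discrete log-posterior."""
    p = np.exp(logp)
    return float(-(p * logp).sum())

def indecisive_search(pose, true_object, clutter, hypotheses, logp, grid, sigma,
                      step=0.05, tol=1e-3, max_steps=200, seed=1):
    """'Indecisive' strategy: replan the max-entropy target after every step."""
    rng = np.random.default_rng(seed)
    path = [pose]
    for _ in range(max_steps):
        if residual_entropy(logp) < tol:
            break
        target = best_next_position(grid, np.exp(logp), hypotheses, clutter, sigma)
        d = target - pose
        pose = pose + step * d / max(np.linalg.norm(d), step)   # one bounded step
        y_clean = render(pose, clutter + [true_object])
        y = y_clean + sigma * rng.standard_normal(y_clean.shape)
        logp = log_posterior(logp, [pose], [y], hypotheses, clutter, sigma)
        path.append(pose)
    return pose, logp, path

The "stubborn" variant differs only in committing to the planned path until the target is reached before calling best_next_position() again.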
measurements taken there are predictable. Full occlusion is shown in the right three panels. The object has not been seen (due to occluder in the middle of the environment) and the best action is to visit new area. Notice that lower and upper bounds are maximized at the same point as actual entropy. This was a common occurrence in many experiments that we did. Because we are interested in the maximizing point, rather than the maximizing value, even if the bounds are loose, using them for navigation can lead to reasonable results. 15 finish finish start 10 5 start 0 15 5 10 15 20 10 5 0 10 20 30 Figure 2: A typical run of ?indecisive? (left) and ?stubborn? (right) strategies. Objects are colored according to their radiance and the unknown object is shown as a thick line. Traveled path is shown in black. The thinner lines are the planned paths that were not traversed to the end because of replanning. Stubborn explorer traverses each planned segment to its end. Right: Residual entropy H(x|y t ) shown over time for the two strategies (top: ?indecisive?, bottom: ?stubborn?). Lower and upper bounds on H(x|y t , yt+1 ) can be computed prior to measuring yt+1 using upper and lower bounds on H(yt+1 |y t ). Sharp decrease occurs when object becomes visible. we define as the excess fraction of the minimum energy path to the center of the unknown object (c0 ) . ,c0 ) J(xo ,c0 ) that the explorer takes: regret = cu (xoJ(x . Because it is not always necessary to reach the o ,c0 ) object to recognize it (viewing it closely from multiple viewpoints may be sufficient), this quantity is an approximation to minimum search effort. We show an example of a typical problem instance in Fig. 2. Statistics of strategies? performance are shown in Fig. 3. Minimum energy path and random walk strategy play roles of lower and upper bounds. For each of the three uncertainty measures, ?indecisive? outperformed ?stubborn? in terms of both average path length and average regret, as also shown in Table 1. Notice however that for specific problem instances ?indecisive? can be much worse than ?stubborn? ? the curves for the two strategy types cross. Generally, ?max-ent? strategy seems to perform best, followed by ?max-var?, and ?max-posterior?. ?Random-walk? strategy was unable to find the object unless it was visible initially or became visible by chance. We next indecisive stubborn Average search duration max-ent max-var max-p(x|y t ) 28.42 32.70 41.00 34.26 36.17 41.49 max-ent 1.27 1.71 Average regret max-var max-p(x|y t ) 1.44 1.96 1.78 2.19 Table 1: Search time statistics for different strategies. 7 Figure 3: Search time statistics for a 100 world test suite. Left: cumulative distribution of distance until detection traveled by the max-entropy, max-posterior, max-variance explorers, and random walker. Right: cumulative distribution of regret for the explorers. prior entropy environment volume reachable volume without clutter Figure 4: Left: Control authority. The red dashed curve corresponds to reachable volume in the absence of clutter. The black dashed line is the normalized maximum reachable volume in the environment. Right: Residual entropy H(x|y t ), as a function of control authority and clutter density. Black dashed line indicates H(x), entropy prior to taking any measurements. Lines correspond to residual entropy for a given control authority averaged over the test suite; markers ? to residual entropy on a specific problem instance. 
For certain scenes, agent is unable to significantly reduce entropy because the object never becomes unoccluded (once object is seen, there is a sharp drop in residual entropy, as shown in Fig. 2). empirically evaluated explorer?s exploration ability under finite control authority. Reachable volume was computed by Monte Carlo sampling, following (14)-(15) for several clutter density values. For each clutter density, we generated 40 scene instances and tested ?indecisive? max-entropy strategy with respect to control authority. Here |X | ? 2000, and other parameters remained as in previous experiment. Fig. 4 empirically verifies discussion in Section 3. 6 Discussion We have described a simple model that captures the phenomenology of nuisances in a visual search problem, that includes uncertainty due to occlusion, scaling, and other ?noise? processes, and used it to compute the entropy of the prediction density to be used as a utility function in the control policy. We have then related the amount of ?control authority? the agent can exercise during the data acquisition process with the performance in the visual search task. The extreme cases show that if one is given a passively gathered dataset of an arbitrary number of images, performance cannot be guaranteed beyond what is afforded by the prior. In the limit of infinite control authority, arbitrarily good decision performance can be attained. In between, we have empirically characterized the tradeoff between decision performance and control authority. We believe this to be a natural extension of rate-distortion tradeoffs where the underlying task is not transmission and storage of data, but usage of (visual) data for decision and control. Acknowledgments Research supported on ARO W911NF-11-1-0391 and DARPA MSEE FA8650-11-1-7154. 8 References [1] S. Soatto. Steps towards a theory of visual information: Active perception, signal-to-symbol conversion and the interplay between sensing and control. arXiv:1110.2053, 2011. [2] R. Bajcsy. Active perception. 76(8):996?1005, 1988. [3] D. H. Ballard. Animate vision. Artificial Intelligence, 48(1):57?86, 1991. [4] A. Andreopoulos and J. K. Tsotsos. A theory of active object localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2009. [5] N. Roy, G. Gordon Gordon, and S. Thrun. Finding approximate POMDP solutions through belief compression. Journal of Artificial Intelligence Research, 23:1?40, 2005. [6] H. Kopp-Borotschnig, L. Paletta, M. Prantl, and A. Pinz. Appearance-based active object recognition. Image and Vision Computing, 18(9):715?727, 2000. [7] R. Eidenberger and J. Scharinger. Active perception and scene modeling by planning with probabilistic 6d object poses. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2010. [8] J. Ma and J. W. Burdick. Dynamic sensor planning with stereo for model identification on a mobile platform. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2010. [9] G. A. Hollinger, U. Mitra, and G. S. Sukhatme. Active classification: Theory and application to underwater inspection. In International Symposium on Robotics Research, 2011. [10] Z. Jia, A. Saxena, and T. Chen. Robotic object detection: Learning to improve the classifiers using sparse graphs for path planning. In IJCAI, 2011. [11] M. Krainin, B. Curless, and D. Fox. Autonomous generation of complete 3d object models using next best view manipulation planning. 
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2011.
[12] F. Bourgault, A. Göktogan, T. Furukawa, and H. F. Durrant-Whyte. Coordinated search for a lost target in a Bayesian world. Advanced Robotics, 18(10), 2004.
[13] G. M. Hoffmann and C. J. Tomlin. Mobile sensor network control using mutual information methods and particle filters. IEEE Transactions on Automatic Control, 55(1), 2010.
[14] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In Uncertainty in Artificial Intelligence, 2005.
[15] J. L. Williams, J. W. Fisher III, and A. S. Willsky. Performance guarantees for information theoretic active inference. AI & Statistics (AISTATS), 2007.
[16] L. Pronzato. Optimal experimental design and some related control problems. Automatica, 44:303-325, 2008.
[17] R. Bitmead. Persistence of excitation conditions and the convergence of adaptive schemes. IEEE Transactions on Information Theory, 30(2):183-191, 1984.
[18] L. Ljung. System Identification: Theory for the User. Prentice Hall, 1997.
[19] M. E. Hellman and J. Raviv. Probability of error, equivocation and the Chernoff bound. IEEE Transactions on Information Theory, 16:368-372, 1970.
[20] J. R. Hershey and P. A. Olsen. Approximating the Kullback-Leibler divergence between Gaussian mixture models. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4(6), 2007.
[21] M. F. Huber, T. Bailey, H. Durrant-Whyte, and U. D. Hanebeck. On entropy approximation for Gaussian mixture random vectors. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), 2008.
[22] L. Valente, R. Tsai, and S. Soatto. Information gathering control via exploratory path planning. In Proceedings of the Conference on Information Sciences and Systems, March 2012.
[23] R. Vidal, O. Shakernia, H. J. Kim, D. H. Shim, and S. Sastry. Probabilistic pursuit-evasion games: theory, implementation, and experimental evaluation. IEEE Transactions on Robotics, 18(5), 2002.
[24] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
Waveform Driven Plasticity in BiFeO3 Memristive Devices: Model and Implementation

Christian Mayr, Paul Staerke, Johannes Partzsch, Rene Schueffny
Institute of Circuits and Systems, TU Dresden, Dresden, Germany
{christian.mayr,johannes.partzsch,rene.schueffny}@tu-dresden.de

Love Cederstroem
Zentrum Mikroelektronik Dresden AG, Dresden, Germany
[email protected]

Yao Shuai
Inst. of Ion Beam Physics and Materials Res., Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden, Germany
[email protected]

Nan Du, Heidemarie Schmidt
Professur Materialsysteme der Nanoelektronik, TU Chemnitz, Chemnitz, Germany
[email protected], [email protected]

Abstract

Memristive devices have recently been proposed as efficient implementations of plastic synapses in neuromorphic systems. The plasticity in these memristive devices, i.e. their resistance change, is defined by the applied waveforms. This behavior resembles biological synapses, whose plasticity is also triggered by mechanisms that are determined by local waveforms. However, learning in memristive devices has so far been approached mostly on a pragmatic technological level. The focus seems to be on finding any waveform that achieves spike-timing-dependent plasticity (STDP), without regard to the biological veracity of said waveforms or to further important forms of plasticity. Bridging this gap, we make use of a plasticity model driven by neuron waveforms that explains a large number of experimental observations and adapt it to the characteristics of the recently introduced BiFeO3 memristive material. Based on this approach, we show STDP for the first time for this material, with learning window replication superior to previous memristor-based STDP implementations. We also demonstrate in measurements that it is possible to overlay short and long term plasticity at a memristive device in the form of the well-known triplet plasticity. To the best of our knowledge, this is the first implementation of triplet plasticity on any physical memristive device.

1 Introduction

Neuromorphic systems try to replicate cognitive processing functions in integrated circuits. Their complexity/size is largely determined by the synapse implementation, as synapses are significantly more numerous than neurons [1]. With the recent push towards larger neuromorphic systems and higher integration density of these systems, this has resulted in novel approaches especially for the synapse realization. Proposed solutions on the one hand employ nanoscale devices in conjunction with conventional circuits [1], and on the other hand try to integrate as much synaptic functionality (short- and long term plasticity, pulse shaping, etc.) in as small a number of devices as possible.
This resembles biological synapses, whose plasticity is also triggered by mechanisms that are determined by local waveforms [5, 6]. However, learning in memristors has so far been approached mostly on a pragmatic technological level. The goal seems to be to find any waveform that achieves spiketiming-dependent plasticity (STDP) [4], without regard to the biological veracity of said waveforms or to further important forms of plasticity [7]. Bridging this gap, we make use of a plasticity rule introduced by Mayr and Partzsch [6] which is driven in a biologically realistic way by neuron waveforms and which explains a large number of experimental observations. We adapt it to a model of the recently introduced BiFeO3 memristive material [8]. Measurement results of the modified plasticity rule implemented on a sample device are given, exhbiting configurable STDP behaviour and pulse triplet [7] reproduction. 2 Materials and Methods 2.1 Local Correlation Plasticity (LCP) The LCP rule as introduced by Mayr and Partzsch [6] combines two local waveforms, the synaptic conductance g(t) and the membrane potential u(t). Presynaptic activity is encoded in g(t), which determines the conductance change due to presynaptic spiking. Postsynaptic activity in turn is signaled to the synapse by u(t). The LCP rule combines both in a formulation for the change of the synaptic weight w that is similar to the well-known Bienenstock-Cooper-Munroe rule [9]: dw = B ? g(t) ? (u(t) ? ?u ) (1) dt In this equation, ?u denotes the voltage threshold between weight potentiation and depression, which is normally set to the resting potential. Please note that coincident pre- and postsynaptic activities are detected in this rule by multiplication: A weight change only occurs if both presynaptic conductance is elevated and postsynaptic membrane potential is away from rest. The waveforms for g(t) and u(t) are determined by the employed neuron model. Mayr et al. [6] use a spike response model [10], with waveforms triggered at times of pre- and postsynaptic spikes: ? ? e? g(t) = G t?tpre n ?pre u(t) = Up,n ? ?(t ? tpost n ) + Urefr ? e tpre n t?tpost ? ? n post pre for tpre n ? t < tn+1 , (2) for tpost ? t < tpost n n+1 , (3) tpost n where and denote the n-th pre- and postsynaptic spike, respectively. The presynaptic con? and decay time constant ?pre . The postsynaptic ductance waveform is an exponential with height G potential at a spike is defined by a Dirac pulse with integral Up,n , followed by an exponential decay with height Urefr (< 0) and membrane time constant ?post . Following [6], postsynaptic adaptation is realised in the value of Up,n . For this, Up,n is decreased from a nominal value Up if the postsynaptic pulse occurs shortly after another postsynaptic pulse: ? Up,n = Up ? (1 ? e post tpost ?tn?1 n ?post ) (4) The time constant for the exponential decay in this equation is the same as the membrane time constant. 1 In 1971 Leon Chua postulated the existence of a device where the current or voltage is directly controlled by voltage flux or charge respectively, this was called a memristor. Using a general state space description Chua and Kang later extended the theory to cover the very broad class of memristive devices [2]. Even though the two terms are used interchangeably in other studies, since the devices used in this study do not fit the strict definition of memristor, we will refer to them as memristive devices in the following. 
Figure 1: Progression of the conductance g (in nS), the membrane potential u (in mV) and the synapse weight change Δw (in %) for a sample spike pattern over a 120 ms window.

2.2 Memristive Device

Non-volatile passive analog memory has often been discussed for applications in neuromorphic systems because of the space limitations of analog circuitry. However, until recently only a few groups had access to sufficient materials and devices. Developments in the field of nano-material science, especially in the last decade, opened new possibilities for creating compact circuit elements with unique properties. Most notably, after HP released information about their so-called memristor [11], much effort has been put into the analysis of thin film semiconductor-metal-metal-oxide compounds. One of the commonly used materials in this class is BiFeO3 (BFO). The complete conduction mechanisms in BFO are not fully understood yet, with partly contradictory results reported in the literature, but it has been confirmed that different physical effects are overlayed and dominate in different states. Particularly the resistive switching effect seems promising for neuromorphic devices and will be discussed in more detail. It has been shown in [12, 8] that the effect can appear uni- or bipolar and is highly dependent on the processing regarding the substrate, growth method, doping, etc. [13]. We use BFO grown by pulsed laser deposition on a Pt/Ti/SiO2/Si substrate with an Au top contact, see Fig. 2. Memristors were fabricated with circular top plates, which were contacted with needle probes, whereas the continuous bottom plate was contacted at one edge of the die. The BFO films have a thickness of some 100 nm.

Figure 2: Photograph of the fabricated memristive material that was used for the measurements.

The created devices show unipolar resistive switching with a rectifying behavior. For a positive bias the device goes into a low resistive state (LRS) and stays there until a negative bias is applied, which resets it back to a high resistive state (HRS). The state can be measured without influencing it by applying a low voltage of under 2 V. Figure 3 shows a voltage-current diagram which indicates some of the characteristics of the device. The measurement consists of three parts: 1) A rising negative voltage is applied, which resets the device from an intermediate level to HRS. 2) A rising positive voltage lowers the resistance exponentially.
3) A falling positive voltage does not affect the resistance anymore, and the relation is nearly ohmic. Because of the rectifying characteristic, the current in LRS and HRS for negative voltages does not exhibit as large a dynamic range as for positive voltages.

Figure 3: Voltage-current diagram of the device as linear and log-scale plot (I_m in µA, resp. |I| in A, over V_m in V).

2.3 Phenomenological Device Model

To apply the LCP model to the BFO device and enable circuit design, a simplified device model is required. We have based our model on the framework of Chua and Kang [2]; that is, using an output function (i.e., for the current I_m) dependent on time, state and input (i.e., the voltage V_m). Recently, this framework has been widely used for the modeling of memristive devices [11, 14, 15]. In contrast to many memristive device models which are based on a sinh function for the output relationship (following Yang et al. [14]), we model the BFO device as two semiconductor junctions. The junctions can abstractly be described by a diode equation: I_d = I_0 (exp(qV/kT) − 1) [16]. In an attempt to capture the basic characteristics, our device can be modeled employing two diode equations, letting a state variable x influence the output and roughly represent the conductance:

  I_m = h(x, V_m, t) = ( I_01 · (e^{d_1·V_m(t)} − 1) − I_02 · (e^{−d_2·V_m(t)} − 1) ) · x(t)        (5)

where V_m is the voltage over the device² and the diode-like equations guarantee a zero-crossing hysteresis. The use of the parameters I_0i and d_i allows individual control of the current characteristics for negative and positive voltages, and as shown in the previous section these are rather asymmetric for our BFO devices. For the purpose of modeling plasticity, our focus has been on the dynamic behavior of the conductance change; this was investigated in some detail by Querlioz et al. [15] and has served as the basis for our model of the state variable:

  dx/dt = f(x, V_m, t) = α(x) · β(V_m)        (6a)

² With sinh(z) = ½ · (e^z − e^{−z}), our approach is not fundamentally different from using a sinh function.

In the above, the functions α(x) and β(V_m) describe how the current state affects the state development, and the effect of the applied voltage, respectively. α(x) is described by an exponential function:

  α(x) = e^{−κ_1 · (x − G_min)/(G_max − G_min)}     for V_m(t) > 0
       = e^{−κ_2 · (G_max − x)/(G_max − G_min)}     for V_m(t) ≤ 0, x > G_min        (6b)
       = 0                                           else

In β(V_m) we again favor separate exponential functions over sinh for increased controllability of the two voltage domains (positive and negative):

  β(V_m) = λ_1 · (e^{η_1·V_m} − 1)       for V_m(t) ≥ 0
         = λ_2 · (1 − e^{−η_2·V_m})      for V_m(t) < 0        (6c)

Here the parameters η_1 and η_2 govern the voltage dependence of the state modification, with λ_1 and λ_2 scaling the result; the exponents κ_1 and κ_2 in (6b) set the speed of state saturation.
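A direct numerical transcription of (5) and (6a)-(6c) is compact. The sketch below is our own, with made-up parameter values (the fitted device parameters are not reproduced here); it integrates the state equation with forward Euler and is sufficient to reproduce the qualitative staircase behavior of Fig. 4.

import numpy as np

# Illustrative parameters only; not the paper's fitted values.
I01, I02, d1, d2 = 1e-7, 1e-8, 1.5, 0.5
G_min, G_max = 0.1, 1.0
kappa1, kappa2 = 3.0, 3.0
lam1, lam2, eta1, eta2 = 1e-3, 1e-3, 2.0, 2.0

def device_current(x, Vm):
    """Output equation (5): two opposed diode characteristics scaled by the state."""
    return (I01 * (np.exp(d1 * Vm) - 1) - I02 * (np.exp(-d2 * Vm) - 1)) * x

def state_derivative(x, Vm):
    """State equation (6a) = alpha(x) * beta(Vm), with branches (6b) and (6c)."""
    xn = (x - G_min) / (G_max - G_min)
    if Vm > 0:
        alpha = np.exp(-kappa1 * xn)                 # saturates toward G_max
        beta = lam1 * (np.exp(eta1 * Vm) - 1)
    elif x > G_min:
        alpha = np.exp(-kappa2 * (1 - xn))           # saturates toward G_min
        beta = lam2 * (1 - np.exp(-eta2 * Vm))       # negative for Vm < 0
    else:
        return 0.0
    return alpha * beta

def simulate_device(V_of_t, T, dt=1e-4, x0=G_min):
    """Forward-Euler integration of the state; returns t, x(t), I_m(t)."""
    t = np.arange(0.0, T, dt)
    x = np.empty_like(t); I = np.empty_like(t)
    xk = x0
    for k, tk in enumerate(t):
        Vm = V_of_t(tk)
        xk = np.clip(xk + state_derivative(xk, Vm) * dt, G_min, G_max)
        x[k] = xk
        I[k] = device_current(xk, Vm)
    return t, x, I

Applying a constant positive voltage drives x toward a voltage-dependent saturation level, while a negative voltage resets it toward G_min, mirroring the set/reset behavior described in Sec. 2.2.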
The saturation of the conductance change for a given voltage is also visible (Equation (6b)). The sharp changes of current seen in the model are a result of our simplistic approach, whereas the real devices show slower transitions. In addition, it can be noted that above 5 V the real device appears to experience a significantly steeper rise in current. However, the target is to have reasonable characteristics in the region of operation below 5 V, which is the regime relevant to our plasticity rule experiments.

Figure 4: Device current Im(t) (in mA) and voltage Vm(t) (in V) over time t (in s), for model (a) and measurement (b).

3 Results

3.1 Modified LCP

A nonlinearity or learning threshold is required in order to carry out the correlation operation between pre- and postsynaptic waveforms that characterizes various forms of long-term learning [9, 17]. In the original LCP rule, this is done by the multiplication of pre- and postsynaptic waveforms, i.e., only coincident activity results in learning. Memristive devices are usually operated in an additive manner, i.e., the pre- and postsynaptic waveforms are applied to the two terminals of the device, thus adding/subtracting their voltage curves. In order for the state of the memristive device to be affected only by an overlap of both waveforms, a positive and a negative modification threshold are required [4]. As can be seen from Equation (6c), the internal voltage-driven state change δ(Vm) is shaped by the two exponents α1 and α2, which govern the effective thresholds for positive and negative voltages. For our devices, these work out to effective modification thresholds of −2 V and +2.3 V.

Figure 5: Modification of the original LCP rule for the BFO memristive device, from top to bottom: pre- and postsynaptic voltages/waveforms (Vpre and Vpost in V), exponential decay with τpre resp. τpost (postsynaptic waveform plotted as inverse to illustrate the waveform function); the resultant voltage difference Vpre − Vpost across the memristive device and the corresponding memristance modification thresholds (horizontal grey lines); and the memristance change ΔIm (in %) as computed from the model of Sec. 2.3.

Thus, we need waveforms where coincident activity causes a voltage rise above the positive threshold resp. a voltage drop below the negative threshold. In addition, we need a dependence between voltage level and weight change; the simplest means to differentiate between weights is the voltage saturation characteristic of Fig. 3. That is, a single stimulus (e.g., pulse pairing in STDP) should result in a distinctive memristive programming voltage, driving the memristive device into the corresponding voltage saturation level via the (for typical experiments) 60 stimulus repetitions. Apart from quantitative adjustments to the original LCP rule, this requires one qualitative adjustment. The presynaptic conductance waveform is now taken as a voltage trace, and a short rectangular pulse is added immediately before the exponential downward trace, arriving at a waveform similar to the spike response model used for the postsynaptic trace, see the uppermost curve in Fig. 5. We call this the modified LCP rule. For overlapping pre- and postsynaptic waveforms, the rectangular pulses of both waveforms "ride up"
on the exponential slopes of their counterparts when looking at the voltage difference Vm = Vpre − Vpost across the memristive device (the pre- and postsynaptic waveforms are applied to the two terminals of the device, see the third curve from the top in Fig. 5). Since the rectangular pulses are short compared to the exponential waveforms, they represent a constant voltage whose amplitude depends on the time difference between both waveforms (as expressed by the exponential slopes), as required above. Thus, as in the original LCP rule, the exponential slopes of the pre- and postsynaptic neuron govern the STDP time windows. Repeated application of such a pre-post pairing drives the memristive device into its corresponding voltage-dependent saturation level. As in the original LCP rule, short-term plasticity of the postsynaptic action potentials can now be added to make the model more biologically realistic (e.g., with respect to the triplet learning protocol [6]). We employ the same attenuation function as in Equation 4, adjusting the duration of the postsynaptic action potential, see the second curve from the top in Fig. 5.

Please note: one further important advantage of this modified LCP rule is that both the pre- and the postsynaptic waveform are causal, i.e., they start only at the pre- respectively postsynaptic pulse. This is in contrast to most currently proposed waveforms for memristive learning, which have to start well in advance of the actual pulse [4] and hence require preknowledge of the pulse occurrence. Especially in an unsupervised learning context with self-driven neuron spiking, this preknowledge simply does not exist.

Figure 6: Results for the STDP protocol (ΔIm in % over Δt in ms, for τpre = 15 ms, τpost = 35 ms and τpre = 30 ms, τpost = 50 ms): (a) model simulation, (b) measurement with the BFO memristive device.

3.2 Measurement results

The waveforms developed in the previous section can be tested in actual protocols for synaptic plasticity. As a first step, we investigate the behaviour of the BFO memristive device in a standard pair-based STDP experiment. For this, we apply 60 spike pairings of different relative timings at a low repetition frequency (4 Hz), comparable to biological measurement protocols [17]. Measurements were performed with a BFO memristive device as shown in Fig. 2. As shown in the model simulations of Fig. 6a, the developed waveforms are transformed by the memristive device into approximately exponentially decaying conductance changes. This is in good agreement with biological measurements [17] and common STDP models [7]. The model results are confirmed in measurements with the BFO memristive device, as shown in Fig. 6b. Notably, the measurements result in smooth, continuous curves. This is an expression of the continuous resistance change in the BFO material, which provides a large number of stable resistance levels. This is in contrast, e.g., to memristive materials that rely on ferroelectric switching, which exhibit a limited number of discrete resistance levels [18, 1]. Moreover, the nonlinear behaviour of the BFO memristive device has only a limited effect on the resulting STDP learning window. The resistance change is directly linked to the applied waveforms: for example, as shown in Fig. 6, an increase in the time constants results in correspondingly longer STDP time windows.
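For illustration, the pairing protocol just described can be emulated on top of the device-model sketch of Sec. 2.3 (it reuses the simulate function defined there). All waveform amplitudes, pulse widths and polarities below are assumptions made for this sketch and would have to be matched to the −2 V / +2.3 V modification thresholds of the actual device.

    import numpy as np

    def lcp_waveform(t, t_spike, amp_pulse=3.0, amp_exp=1.5, tau=15e-3, width=1e-3):
        """Modified-LCP waveform (Sec. 3.1): a short rectangular pulse followed
        by an exponential downward trace. Amplitudes/widths are illustrative."""
        w = np.zeros_like(t)
        w[(t >= t_spike) & (t < t_spike + width)] = amp_pulse
        tail = t >= t_spike + width
        w[tail] = amp_exp * np.exp(-(t[tail] - t_spike - width) / tau)
        return w

    def stdp_change(dt_pair, tau_pre=15e-3, tau_post=35e-3, pairings=60, rate=4.0):
        """One point of the STDP window: 60 pre/post pairings at 4 Hz with a
        fixed relative timing dt_pair; returns the conductance change in percent."""
        t = np.arange(0.0, pairings / rate, 1e-4)
        v_pre = np.zeros_like(t)
        v_post = np.zeros_like(t)
        for k in range(pairings):
            t0 = k / rate + 0.1            # offset keeps negative dt_pair in range
            v_pre += lcp_waveform(t, t0, tau=tau_pre)
            v_post += lcp_waveform(t, t0 + dt_pair, tau=tau_post)
        # Voltage across the device: difference of the two terminal waveforms.
        xs, _ = simulate(v_pre - v_post, dt=1e-4)
        return 100.0 * (xs[-1] - xs[0]) / xs[0]

    # Sample the learning window between -100 ms and +100 ms.
    window = [stdp_change(dt) for dt in np.linspace(-0.1, 0.1, 21)]

With the time constants (τpre, τpost) passed as arguments, the same loop also illustrates how longer time constants widen the learning window, as observed in Fig. 6.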
Following our modeling approach, these time constants are in turn directly linked to the time constants of the underlying neuron and synapse model.

Figure 7: Measurement results for the triplet protocol of Froemke and Dan [7] (ΔIm in % over the spike timings Δt1 and Δt2 in ms): (a) biological measurement data, adapted from [7], (b) measurement with the BFO memristive device.

Experiments have shown that weight changes of single spike pairings, as expressed by STDP, are nonlinearly integrated when occurring shortly after one another. Commonly, triplets of spikes are used to investigate this effect, as carried out by [7]. The main deviation of these experimental results from a pure STDP rule occurs for the post-pre-post triplet [6], which can be attributed to postsynaptic adaptation [7]. With this adaptation included in our waveforms (Equation 4, as seen in the action potential duration in the second curve from the top of Fig. 5), the BFO memristive device measurements closely resemble the post-pre-post results of [7]. The measurement results in Fig. 7b show more depression than the biological data for the pre-post-pre triplet (upper left quadrant). This is because changes in resistance need some time to build up after a stimulating pulse: in the pre-post-pre case, the weight increase has not fully developed when it is overwritten by the second presynaptic pulse, which results in a weight decrease. This effect depends on the measured device and on the parameters of the stimulation waveforms (cf. Supplementary Material). To keep the stimulation waveforms as simple as possible, only postsynaptic adaptation has been included. However, it has been shown that presynaptic short-term plasticity also has a strong influence on long-term learning [19, 6]. With our modeling approach, a model of short-term plasticity can easily be connected to the stimulation waveforms by modulating the length of the presynaptic pulse. Along the same lines, the postsynaptic waveform can be shifted by a slowly changing voltage, analogous to the original LCP rule (cf. Eq. 1), to introduce a metaplastic regulation of weight potentiation and depression [6]. Together, these extensions open up an avenue for the seamless integration of different forms of plasticity in learning memristive devices.

3.3 Conclusion

Starting from a waveform-based general plasticity rule and a model of the memristive device, we have shown a direct way to go from these premises to biologically realistic learning in a BiFeO3 memristive device. Employing the LCP rule for memristive learning has several advantages. As a memristor is a two-terminal device, the separation of the learning rule into two waveforms in the LCP rule lends itself naturally to deployment in a passive array of memristors [1, 4]. In addition, this waveform-defined plasticity behaviour enables easy control of the STDP time windows, which is further aided by the excellent multi-level memristive programming capability of the BiFeO3 memristive devices. Only a very small number of memristors have demonstrated plasticity in actual devices at all [18, 1]. Among those, our highly configurable, finely grained learning curves are unique; other implementations exhibit statistical variations [1], can only assume a few discrete levels [18], or have device-inherent learning windows that cannot be adjusted [20]. This comes at the price that, in contrast to e.g. phase-change materials, BiFeO3 is not easily integrated on top of CMOS [8].
The waveform-defined plasticity of the LCP rule enables the explicit inclusion of short-term plasticity in long-term memristive learning, as shown for the triplet protocol. As the pre- and postsynaptic waveforms are generated in the CMOS neuron circuits below the memristive array [1], short-term plasticity can thus be added at little extra overall circuit cost and without modification of the memristive array itself. In contrast to our easily controlled short-term plasticity, the only previous work targeting memristive short-term plasticity employed intrinsic (i.e., non-controllable) device properties [20]. To the best of our knowledge, this is the first time triplets or other higher-order forms of plasticity have been shown for a physical memristive device. In a wider neuroscience context, waveform-defined plasticity as shown here could be seen as a general computational principle: synapses are not likely to measure time differences as in naive forms of STDP rules; they are more likely to react to local static [21] and dynamic [5] state variables. Some interesting predictions could be derived from this, e.g., STDP time constants that are linked to synaptic conductance changes or to the membrane time constant [22, 6]. These predictions could easily be verified experimentally.

Acknowledgments

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 269459 (Coronet).

References

[1] S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, "Nanoscale memristor device as synapse in neuromorphic systems," Nano Letters, vol. 10, no. 4, pp. 1297-1301, 2010.
[2] L. Chua and S. M. Kang, "Memristive devices and systems," Proceedings of the IEEE, vol. 64, no. 2, pp. 209-223, Feb. 1976.
[3] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D. Amit, "Spike-driven synaptic plasticity: Theory, simulation, VLSI implementation," Neural Computation, vol. 12, pp. 2227-2258, 2000.
[4] M. Laiho, E. Lehtonen, A. Russel, and P. Dudek, "Memristive synapses are becoming reality," The Neuromorphic Engineer, November 2010. [Online]. Available: http://www.inenews.org/view.php?source=003396-2010-11-26
[5] S. Dudek and M. Bear, "Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade," PNAS, vol. 89, pp. 4363-4367, 1992.
[6] C. Mayr and J. Partzsch, "Rate and pulse based plasticity governed by local synaptic state variables," Frontiers in Synaptic Neuroscience, vol. 2, pp. 1-28, 2010.
[7] R. Froemke and Y. Dan, "Spike-timing-dependent synaptic modification induced by natural spike trains," Nature, vol. 416, pp. 433-438, 2002.
[8] Y. Shuai, S. Zhou, D. Burger, M. Helm, and H. Schmidt, "Nonvolatile bipolar resistive switching in Au/BiFeO3/Pt," Journal of Applied Physics, vol. 109, no. 12, p. 124117, 2011. [Online]. Available: http://link.aip.org/link/?JAP/109/124117/1
[9] E. Bienenstock, L. Cooper, and P. Munro, "Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex," Journal of Neuroscience, vol. 2, pp. 32-48, 1982.
[10] W. Gerstner and W. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[11] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, "The missing memristor found," Nature, vol. 453, no. 7191, pp. 80-83, May 2008. [Online]. Available: http://dx.doi.org/10.1038/nature06932
[12] C. Wang, K.-J. Jin, Z.-
T. Xu, L. Wang, C. Ge, H.-B. Lu, H.-Z. Guo, M. He, and G.-Z. Yang, "Switchable diode effect and ferroelectric resistive switching in epitaxial BiFeO3 thin films," Applied Physics Letters, vol. 98, no. 19, p. 192901, 2011.
[13] Y. Shuai, S. Zhou, C. Wu, W. Zhang, D. Bürger, S. Slesazeck, T. Mikolajick, M. Helm, and H. Schmidt, "Control of rectifying and resistive switching behavior in BiFeO3 thin films," Applied Physics Express, vol. 4, no. 9, p. 095802, 2011. [Online]. Available: http://apex.jsap.jp/link?APEX/4/095802/
[14] J. J. Yang, M. D. Pickett, X. Li, D. A. A. Ohlberg, D. R. Stewart, and R. S. Williams, "Memristive switching mechanism for metal/oxide/metal nanodevices," Nature Nanotechnology, vol. 3, pp. 429-433, July 2008. [Online]. Available: http://dx.doi.org/10.1038/nnano.2008.160
[15] D. Querlioz, P. Dollfus, O. Bichler, and C. Gamrat, "Learning with memristive devices: How should we model their behavior?" in Nanoscale Architectures (NANOARCH), 2011 IEEE/ACM International Symposium on, June 2011, pp. 150-156.
[16] B. G. Streetman and S. K. Banerjee, Solid State Electronic Devices. Pearson Prentice Hall, 2006.
[17] G.-Q. Bi and M.-M. Poo, "Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type," Journal of Neuroscience, vol. 18, no. 24, pp. 10464-10472, 1998.
[18] F. Alibart, S. Pleutin, O. Bichler, C. Gamrat, T. Serrano-Gotarredona, B. Linares-Barranco, and D. Vuillaume, "A memristive nanoparticle/organic hybrid synapstor for neuroinspired computing," Advanced Functional Materials, vol. 22, no. 3, pp. 609-616, 2012. [Online]. Available: http://dx.doi.org/10.1002/adfm.201101935
[19] R. Froemke, I. Tsay, M. Raad, J. Long, and Y. Dan, "Contribution of individual spikes in burst-induced long-term synaptic modification," Journal of Neurophysiology, vol. 95, pp. 1620-1629, 2006.
[20] T. Ohno, T. Hasegawa, T. Tsuruoka, K. Terabe, J. Gimzewski, and M. Aono, "Short-term plasticity and long-term potentiation mimicked in single inorganic synapses," Nature Materials, vol. 10, pp. 591-595, 2011.
[21] A. Ngezahayo, M. Schachner, and A. Artola, "Synaptic activity modulates the induction of bidirectional synaptic changes in adult mouse hippocampus," The Journal of Neuroscience, vol. 20, no. 3, pp. 2451-2458, 2000.
[22] J.-P. Pfister, T. Toyoizumi, D. Barber, and W. Gerstner, "Optimal spike-timing dependent plasticity for precise action potential firing in supervised learning," Neural Computation, vol. 18, pp. 1309-1339, 2006.
3,972
4,596
Fused sparsity and robust estimation for linear models with unknown variance

Arnak S. Dalalyan, ENSAE-CREST-GENES, 92245 Malakoff Cedex, France, [email protected]
Yin Chen, University Paris Est, LIGM, 77455 Marne-la-Vallée, France, [email protected]

Abstract

In this paper, we develop a novel approach to the problem of learning sparse representations in the context of fused sparsity and unknown noise level. We propose an algorithm, termed Scaled Fused Dantzig Selector (SFDS), that accomplishes the aforementioned learning task by means of a second-order cone program. Special emphasis is put on the particular instance of fused sparsity corresponding to learning in the presence of outliers. We establish finite sample risk bounds and carry out an experimental evaluation on both synthetic and real data.

1 Introduction

Consider the classical problem of Gaussian linear regression¹:

    Y = Xβ* + σ*ξ,   ξ ~ N_n(0, I_n),    (1)

where Y ∈ R^n and X ∈ R^(n×p) are observed, in the neoclassical setting of a very high-dimensional unknown vector β*. Even if the ambient dimensionality p of β* is larger than n, it has proven possible to consistently estimate this vector under the sparsity assumption. The latter states that the number of nonzero elements of β*, denoted by s and called the intrinsic dimension, is small compared to the sample size n. The most famous methods for estimating sparse vectors, the Lasso and the Dantzig Selector (DS), rely on a convex relaxation of the ℓ0-norm penalty leading to a convex program that involves the ℓ1-norm of β. More precisely, for a given λ > 0, the Lasso and the DS [26, 4, 5, 3] are defined as

    β̂^L = argmin_{β ∈ R^p} { ½‖Y − Xβ‖₂² + λ‖β‖₁ }    (Lasso)
    β̂^DS = argmin_{β ∈ R^p} ‖β‖₁ subject to ‖Xᵀ(Y − Xβ)‖_∞ ≤ λ.    (DS)

The performance of these algorithms depends heavily on the choice of the tuning parameter λ. Several empirical and theoretical studies have emphasized that λ should be chosen proportionally to the noise standard deviation σ*. Unfortunately, in most applications the latter is unavailable. It is therefore vital to design statistical procedures that estimate β* and σ* in a joint fashion. This topic received special attention in recent years, cf. [10] and the references therein, with the introduction of computationally efficient and theoretically justified σ-adaptive procedures: the square-root Lasso [2] (a.k.a. scaled Lasso [24]) and ℓ1-penalized log-likelihood minimization [20].

In the present work, we are interested in the setting where β* is not necessarily sparse, but, for a known q × p matrix M, the vector Mβ* is sparse. We call this setting the "fused sparsity scenario".

¹ We denote by I_n the n × n identity matrix. For a vector v, we use the standard notation ‖v‖₁, ‖v‖₂ and ‖v‖_∞ for the ℓ1, ℓ2 and ℓ∞ norms, corresponding respectively to the sum of absolute values, the square root of the sum of squares, and the maximum of the absolute values of the coefficients of v.

The term "fused" sparsity, introduced by [27], originates from the case where Mβ is the discrete derivative of a signal β and the aim is to minimize the total variation; see [12, 19] for a recent overview and some asymptotic results. For general matrices M, tight risk bounds were proved in [14]. We adopt here this framework of a general M and aim at designing a computationally efficient procedure capable of handling an unknown noise level, for which we are able to provide theoretical guarantees along with empirical evidence of its good performance.
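As a point of reference for the σ-adaptive methods just mentioned, the following minimal sketch fits the square-root Lasso with the cvxpy modeling package (our choice for illustration; the normalization of the tuning parameter varies across references).

    import numpy as np
    import cvxpy as cp

    def sqrt_lasso(X, Y, lam):
        """Square-root Lasso [2, 24]: replacing the squared loss of the Lasso
        by the unsquared l2 loss makes the tuning parameter pivotal, i.e.,
        independent of the unknown noise level sigma*."""
        n, p = X.shape
        beta = cp.Variable(p)
        cp.Problem(cp.Minimize(cp.norm(Y - X @ beta, 2) + lam * cp.norm(beta, 1))).solve()
        sigma_hat = np.linalg.norm(Y - X @ beta.value) / np.sqrt(n)  # plug-in noise level
        return beta.value, sigma_hat

    # Example with the "universal" choice lam = sqrt(2 log p) used in Sec. 4.1.
    rng = np.random.default_rng(0)
    n, p, s = 200, 400, 5
    X = rng.standard_normal((n, p))
    beta_star = np.zeros(p); beta_star[:s] = 1.0
    Y = X @ beta_star + 0.5 * rng.standard_normal(n)
    beta_hat, sigma_hat = sqrt_lasso(X, Y, np.sqrt(2 * np.log(p)))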
This goal is attained by introducing a new procedure, termed the Scaled Fused Dantzig Selector (SFDS), which is closely related to the penalized maximum likelihood estimator but has some advantages in terms of computational complexity. We establish tight risk bounds for the SFDS, which are nearly as strong as those proved for the Lasso and the Dantzig Selector in the case of known σ*. We also show that robust estimation in linear models can be seen as a particular example of the fused sparsity scenario. Finally, we carry out a "proof of concept" experimental evaluation to show the potential of our approach.

2 Estimation under fused sparsity with unknown level of noise

2.1 Scaled Fused Dantzig Selector

We only consider the case rank(M) = q ≤ p, which is more relevant for the applications we have in mind (image denoising and robust estimation). Under this condition, one can find a (p − q) × p matrix N such that the augmented matrix M̄ = [Mᵀ Nᵀ]ᵀ has full rank. Let us denote by m_j the j-th column of the matrix M̄⁻¹, so that M̄⁻¹ = [m₁, ..., m_p]. We also introduce

    M̄⁻¹ = [M*, N*],   M* = [m₁, ..., m_q] ∈ R^(p×q),   N* = [m_{q+1}, ..., m_p] ∈ R^(p×(p−q)).

Given two positive tuning parameters λ and μ, we define the Scaled Fused Dantzig Selector (SFDS) (β̂, σ̂) as a solution to the following optimization problem:

    minimize Σ_{j=1}^{q} ‖Xm_j‖₂ |(Mβ)_j|   subject to
        |m_jᵀ Xᵀ(Xβ − Y)| ≤ λσ‖Xm_j‖₂,  ∀ j ≤ q;
        N*ᵀ Xᵀ(Xβ − Y) = 0;
        nμσ² + Yᵀ Xβ ≤ ‖Y‖₂².    (P1)

This estimator has several attractive properties: (a) it can be efficiently computed even for very large scale problems using second-order cone programming, (b) it is equivariant with respect to scale transformations both in the response Y and in the rows of M, and, finally, (c) it is closely related to the penalized maximum likelihood estimator. Let us give further details on these points.

2.2 Relation with the penalized maximum likelihood estimator

One natural way to approach the problem of estimating β* in our setup is to rely on the standard procedure of penalized log-likelihood minimization. If the noise distribution is Gaussian, ξ ~ N_n(0, I_n), the negative log-likelihood (up to irrelevant additive terms) is given by

    ℓ(Y, X; β, σ) = n log(σ) + ‖Y − Xβ‖₂² / (2σ²).

In the large-dimensional context we are concerned with, i.e., when p/n is not small, the maximum likelihood estimator is subject to overfitting and is of very poor quality. If it is plausible to expect that the data can be fitted sufficiently well by a vector β* such that, for some matrix M, only a small fraction of the elements of Mβ* are nonzero, then one can considerably improve the quality of estimation by adding a penalty term to the log-likelihood. However, the most appealing penalty, the number of nonzero elements of Mβ, leads to a nonconvex optimization problem which cannot be efficiently solved even for moderately large values of p. Instead, convex penalties of the form Σ_j λ_j |(Mβ)_j|, where the λ_j > 0 are some weights, have proven to provide highly accurate estimates at a relatively low computational cost. This corresponds to defining the estimator (β̂^PL, σ̂^PL) as the minimizer of the penalized log-likelihood

    ℓ̄(Y, X; β, σ) = n log(σ) + ‖Y − Xβ‖₂² / (2σ²) + Σ_{j=1}^{q} λ_j |(Mβ)_j|.

To ensure scale equivariance, the weights λ_j should be chosen inversely proportional to σ: λ_j = σ⁻¹ λ̄_j. This leads to the estimator

    (β̂^PL, σ̂^PL) = argmin_{β,σ} { n log(σ) + ‖Y − Xβ‖₂² / (2σ²) + Σ_{j=1}^{q} λ̄_j |(Mβ)_j| / σ }.
Although this problem can be cast [20] as a problem of convex minimization (by the change of parameters φ = β/σ and ρ = 1/σ), it does not belong to the standard categories of convex problems that can be solved by linear, second-order cone, or semidefinite programming. Furthermore, the smooth part of the objective function is not Lipschitz, which makes it impossible to directly apply most first-order optimization methods developed in recent years.

Our goal is to propose a procedure that is close in spirit to the penalized maximum likelihood but has the additional property of being computable by standard algorithms of second-order cone programming. To achieve this goal, we first remark that it can be useful to introduce a penalty term that depends exclusively on σ and that prevents the estimator of σ* from being either too large or too small. One can show that the only function (up to a multiplicative constant) that can serve as such a penalty without breaking scale equivariance is the logarithm. We therefore introduce an additional tuning parameter μ > 0 and look for a minimizer of the criterion

    nμ log(σ) + ‖Y − Xβ‖₂² / (2σ²) + Σ_{j=1}^{q} λ̄_j |(Mβ)_j| / σ.    (2)

If we make the change of variables φ₁ = Mβ/σ, φ₂ = Nβ/σ and ρ = 1/σ, we get a convex function, for which the first-order conditions [20] take the form

    m_jᵀ Xᵀ(Y − Xβ) = σ λ̄_j sign((Mβ)_j),    (3)
    N*ᵀ Xᵀ(Y − Xβ) = 0,    (4)
    (1/(nμ)) ( ‖Y‖₂² − Yᵀ Xβ ) = σ².    (5)

Thus, any minimizer of (2) should satisfy these conditions. Therefore, to simplify the optimization, we propose to replace the minimization of (2) by the minimization of the weighted ℓ1-norm Σ_j λ̄_j |(Mβ)_j| subject to constraints that are as close as possible to (3)-(5). The only problem here is that the constraints (3) and (5) are not convex. The "convexification" of these constraints leads to the procedure described in (P1). As we explain below, the particular choice of the λ̄_j is dictated by the desire to enforce the scale equivariance of the procedure.

2.3 Basic properties

A key feature of the SFDS is its scale equivariance. Indeed, one easily checks that if (β̂, σ̂) is a solution to (P1) for the inputs X, Y and M, then α(β̂, σ̂) is a solution to (P1) for the inputs X, αY and M, whatever the value of α ∈ R. This is equivariance with respect to scale changes in the response Y. Our method is also equivariant with respect to scale changes in M. More precisely, if (β̂, σ̂) is a solution to (P1) for the inputs X, Y and M, then (β̂, σ̂) is also a solution to (P1) for the inputs X, Y and DM, whatever the q × q diagonal matrix D. The latter property is important: if we believe that for a given matrix M the vector Mβ* is sparse, then this is also the case for the vector DMβ*, for any diagonal matrix D. Having a procedure whose output is independent of the choice of D is of significant practical importance, since it leads to a solution that is robust to small variations in the problem formulation.

The second attractive feature of the SFDS is that it can be computed by solving a convex optimization problem of second-order cone programming (SOCP) type. Recall that an SOCP is a constrained optimization problem that can be cast as the minimization, with respect to w ∈ R^d, of a linear function aᵀw under second-order conic constraints of the form ‖A_i w + b_i‖₂ ≤ c_iᵀ w + d_i, where the A_i are r_i × d matrices, the b_i ∈ R^(r_i) and c_i ∈ R^d are vectors, and the d_i are real numbers.
The problem (P1) belongs to this category, since it can be written as

    min (u₁ + ... + u_q)   subject to
        |m_jᵀ Xᵀ(Xβ − Y)| ≤ λσ‖Xm_j‖₂,  ∀ j = 1, ..., q;
        N*ᵀ Xᵀ(Xβ − Y) = 0;   ‖Xm_j‖₂ |(Mβ)_j| ≤ u_j;
        ( 4nμ‖Y‖₂² σ² + (Yᵀ Xβ)² )^{1/2} ≤ 2‖Y‖₂² − Yᵀ Xβ.

Note that all these constraints can be transformed into linear inequalities, except the last one, which is a second-order cone constraint. Problems of this type can be efficiently solved by various standard toolboxes such as SeDuMi [22] or TFOCS [1]; a minimal modeling sketch is given below, after the first comments on Theorem 2.1.

2.4 Finite sample risk bound

To provide theoretical guarantees for our estimator, we impose the by now usual assumption of restricted eigenvalues on a suitably chosen matrix. This assumption, stated in Definition 2.1 below, was introduced and thoroughly discussed by [3]; we also refer the interested reader to [28].

Definition 2.1. We say that an n × q matrix A satisfies the restricted eigenvalue condition RE(s, 1) if

    κ(s, 1) = min_{|J| ≤ s} min_{‖δ_{J^c}‖₁ ≤ ‖δ_J‖₁} ‖Aδ‖₂ / (√n ‖δ_J‖₂) > 0.

We say that A satisfies the strong restricted eigenvalue condition RE(s, s, 1) if

    κ(s, s, 1) = min_{|J| ≤ s} min_{‖δ_{J^c}‖₁ ≤ ‖δ_J‖₁} ‖Aδ‖₂ / (√n ‖δ_{J ∪ J₀}‖₂) > 0,

where J₀ is the subset of {1, ..., q} corresponding to the s largest (in absolute value) coordinates of δ.

For notational convenience, we assume that M is normalized in such a way that the diagonal elements of (1/n) M*ᵀ Xᵀ X M* are all equal to 1. This can always be achieved by multiplying M from the left by a suitably chosen positive definite diagonal matrix. Furthermore, we repeatedly use the projector² Π = XN*(N*ᵀ Xᵀ X N*)⁻¹ N*ᵀ Xᵀ onto the subspace of R^n spanned by the columns of XN*. We denote by r = rank(Π) the rank of this projector, which is typically very small compared to n ∧ p and never exceeds n ∧ (p − q).

Theorem 2.1. Fix a tolerance level δ ∈ (0, 1) and define λ = (2n log(q/δ))^{1/2}. Assume that the tuning parameters λ, μ > 0 satisfy

    μ ≤ 1 − r/n − (2/n) ( ((n − r) log(1/δ))^{1/2} + log(1/δ) ).    (6)

If the vector Mβ* is s-sparse and the matrix (I_n − Π)XM* satisfies the condition RE(s, 1) with some κ > 0, then, with probability at least 1 − 6δ, it holds that

    ‖M(β̂ − β*)‖₁ ≤ (4(σ̂ + σ*)s / κ²) (2 log(q/δ)/n)^{1/2} + (σ*/κ²) (2s log(1/δ)/n)^{1/2},    (7)
    ‖X(β̂ − β*)‖₂ ≤ (2(σ̂ + σ*)/κ) (2s log(q/δ))^{1/2} + σ* (8 log(1/δ) + r)^{1/2}.    (8)

If, in addition, (I_n − Π)XM* satisfies the condition RE(s, s, 1) with some κ > 0, then, with probability at least 1 − 6δ, we have

    ‖Mβ̂ − Mβ*‖₂ ≤ (4(σ̂ + σ*)/κ²) (2s log(q/δ)/n)^{1/2} + (σ*/κ) (2 log(1/δ)/n)^{1/2}.    (9)

Moreover, with probability at least 1 − 7δ, we have

    σ̂ ≤ σ* μ^{−1/2} + (λ ‖Mβ*‖₁ s^{1/2}) / (n μ^{1/2}) + σ* (2 log(1/δ)/n)^{1/2} + (σ* + ‖Mβ*‖₁) δ.    (10)

² Here and in the sequel, the inverse of a singular matrix is understood as the Moore-Penrose pseudoinverse.

Before looking at the consequences of these risk bounds in the particular case of robust estimation, let us present some comments highlighting the claims of Theorem 2.1. The first comment concerns the conditions on the tuning parameters λ and μ. It is interesting to observe that the roles of these parameters are clearly separated: λ controls the quality of estimating β*, while μ determines the quality of estimating σ*. One can also note that all the quantities entering the right-hand side of (6) are known, so it is not hard to choose λ and μ satisfying the conditions of Theorem 2.1.
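The following sketch encodes (P1) directly with the cvxpy package (an assumption made for illustration; the experiments in Sec. 4 use the SeDuMi solver). The inputs M, M_star and N_star are the matrices M, M* and N* of Sec. 2.1.

    import numpy as np
    import cvxpy as cp

    def sfds(X, Y, M, M_star, N_star, lam, mu):
        """Scaled Fused Dantzig Selector, program (P1). The first two
        constraints are linear in (beta, sigma); the last one is the
        convexified link between sigma and the data."""
        n, p = X.shape
        beta = cp.Variable(p)
        sigma = cp.Variable(nonneg=True)
        resid = X @ beta - Y
        XM = X @ M_star                        # columns X m_j, j = 1..q
        w = np.linalg.norm(XM, axis=0)         # weights ||X m_j||_2
        constraints = [
            cp.abs(XM.T @ resid) <= lam * sigma * w,
            N_star.T @ (X.T @ resid) == 0,
            n * mu * cp.square(sigma) + Y @ (X @ beta) <= Y @ Y,
        ]
        cp.Problem(cp.Minimize(w @ cp.abs(M @ beta)), constraints).solve()
        return beta.value, sigma.value

For the robust-estimation instance of Sec. 3, Mβ is simply the outlier indicator vector ω, so the objective reduces to a plain weighted ℓ1-norm.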
In practice, however, this theoretical choice of (λ, μ) may be too conservative, in which case it can be a better idea to rely on cross-validation.

The second remark is about the rates of convergence. According to (8), the estimation error measured in the mean prediction loss (1/n)‖X(β̂ − β*)‖₂² is of order s log(q)/n, which is known as the fast, or parametric, rate. The vector Mβ* is also estimated at a nearly parametric rate in both the ℓ1- and ℓ2-norms. To the best of our knowledge, this is the first work in which such fast rates are derived in the context of fused sparsity with an unknown noise level. With some extra work, one can check that if, for instance, μ = 1 and |σ* − 1| ≤ c n^{−1/2} for some constant c, then the estimator σ̂ also has a risk of order s n^{−1/2}. However, the price to pay for being adaptive with respect to the noise level is the presence of ‖Mβ*‖₁ in the bound on σ̂, which deteriorates the quality of estimation in the case of a large signal-to-noise ratio.

Even if Theorem 2.1 requires the noise distribution to be Gaussian, the proposed algorithm remains valid in a far broader context, and tight risk bounds can be obtained under more general conditions on the noise distribution. In fact, one can see from the proof that we only need confidence sets for some linear and quadratic functionals of ξ; such confidence sets can readily be obtained, for instance, in the case of bounded errors ξ_i using the Bernstein inequality. It is also worth mentioning that the proof of Theorem 2.1 is not a simple adaptation of the arguments used to prove analogous results for ordinary sparsity, but contains some qualitatively novel ideas. More precisely, the cornerstone of the proofs of risk bounds for the Dantzig selector [4, 3, 9] is that the true parameter β* is a feasible solution. In our case, this argument can no longer be used. Our proposal is then to specify another vector β̃ that simultaneously satisfies the following three conditions: Mβ̃ has the same sparsity pattern as Mβ*, β̃ is close to β*, and β̃ lies in the feasible set.

A last remark concerns the restricted eigenvalue conditions. They are somewhat cumbersome in this abstract setting, but simplify considerably in the concrete example of robust estimation considered in the next section. At a heuristic level, these conditions require the columns of XM* not to be too strongly correlated. Unfortunately, this condition fails for the matrices appearing in the problem of multiple change-point detection, which is an important particular instance of fused sparsity. There are workarounds to circumvent this limitation in that particular setting, see [17, 11]. The extension of such arguments to the case of unknown σ* is an open problem we intend to tackle in future work.

3 Application to robust estimation

This methodology can be applied to robust estimation, i.e., when we observe Y ∈ R^n and A ∈ R^(n×k) such that the relation

    Y_i = (Aβ*)_i + σ* ξ_i,   ξ_i iid ~ N(0, 1),

holds only for some indexes i ∈ I ⊂ {1, ..., n}, called inliers. The indexes not belonging to I will be referred to as outliers. The setting we are interested in is the one frequently encountered in computer vision [13, 25]: the dimensionality k of β* is small compared to n, but the presence of outliers causes the complete failure of the least squares estimator. In what follows, we use the standard assumption that the matrix (1/n)AᵀA has diagonal entries equal to one.
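In this setting the generic matrices of (P1) take a particularly simple form. Anticipating the construction derived in detail just below (the outliers are encoded by a vector ω ∈ R^n and the design is augmented to X = [√n I_n, A]), the reduction can be set up in a few lines; this is only a sketch, and the helper name is ours.

    import numpy as np

    def robust_design(A):
        """Recasts robust regression as fused sparsity: with phi = [omega; beta],
        X phi = sqrt(n) omega + A beta, M phi = omega (the sparse part) and
        N phi = beta (the unpenalized part)."""
        n, k = A.shape
        X = np.hstack([np.sqrt(n) * np.eye(n), A])
        M = np.hstack([np.eye(n), np.zeros((n, k))])
        N = np.hstack([np.zeros((k, n)), np.eye(k)])
        # Since [M; N] is the identity here, M_star = M.T and N_star = N.T in (P1).
        return X, M, N

Plugging these matrices into the SFDS sketch above yields exactly the robust estimator defined next.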
Following the ideas developed in [6, 7, 8, 18, 15], we introduce a new vector ω ∈ R^n that serves to characterize the outliers: if an entry ω_i of ω is nonzero, then the corresponding observation Y_i is an outlier. This leads to the model

    Y = Aβ* + √n ω* + σ*ξ = Xφ* + σ*ξ,   where X = [√n I_n, A] and φ* = [ω*; β*].

Thus, we have rewritten the problem of robust estimation in linear models as a problem of estimation in high dimension under the fused sparsity scenario. Indeed, we have X ∈ R^(n×(n+k)) and φ* ∈ R^(n+k), and we are interested in finding an estimator φ̂ of φ* for which ω̂ = [I_n 0_{n×k}]φ̂ contains as many zeros as possible. This means that we expect the number of outliers to be significantly smaller than the sample size. We are thus in the setting of fused sparsity with M = [I_n 0_{n×k}]. Setting N = [0_{k×n} I_k], we define the Scaled Robust Dantzig Selector (SRDS) as a solution (ω̂, β̂, σ̂) of the problem:

    minimize ‖ω‖₁   subject to
        √n ‖Aβ + √n ω − Y‖_∞ ≤ λσ,
        Aᵀ(Aβ + √n ω − Y) = 0,
        nμσ² + Yᵀ(Aβ + √n ω) ≤ ‖Y‖₂².    (P2)

Once again, this can be recast as an SOCP and solved with great efficiency by standard algorithms. Furthermore, the results of the previous section provide us with strong theoretical guarantees for the SRDS. To state the corresponding result, we need notation for the largest and smallest singular values of (1/√n)A, denoted by ν_max and ν_min respectively.

Theorem 3.1. Fix a tolerance level δ ∈ (0, 1) and define λ = (2n log(n/δ))^{1/2}. Assume that the tuning parameters λ, μ > 0 satisfy μ ≤ 1 − k/n − (2/n)( ((n − k) log(1/δ))^{1/2} + log(1/δ) ). Let Π denote the orthogonal projector onto the k-dimensional subspace of R^n spanned by the columns of A. If the vector ω* is s-sparse and the matrix √n(I_n − Π) satisfies the condition RE(s, 1) with some κ > 0, then, with probability at least 1 − 5δ, it holds that

    ‖ω̂ − ω*‖₁ ≤ (4(σ̂ + σ*)s / κ²) (2 log(n/δ)/n)^{1/2} + (σ*/κ²) (2s log(1/δ)/n)^{1/2},    (11)
    ‖(I_n − Π)(ω̂ − ω*)‖₂ ≤ (2(σ̂ + σ*)/κ) (2s log(n/δ)/n)^{1/2} + σ* (2 log(1/δ)/n)^{1/2}.    (12)

If, in addition, √n(I_n − Π) satisfies the condition RE(s, s, 1) with some κ > 0, then, with probability at least 1 − 6δ, we have

    ‖ω̂ − ω*‖₂ ≤ (4(σ̂ + σ*)/κ²) (2s log(n/δ)/n)^{1/2} + (σ*/κ) (2 log(1/δ)/n)^{1/2},
    ‖β̂ − β*‖₂ ≤ (1/ν_min) [ (4(σ̂ + σ*)/κ²) (2s log(n/δ)/n)^{1/2} + (σ*/κ) (2 log(1/δ)/n)^{1/2} ] + σ* ( √k + (2 log(1/δ))^{1/2} ) / (ν_min √n).

Moreover, with probability at least 1 − 7δ, the following inequality holds:

    σ̂ ≤ σ* μ^{−1/2} + (λ ‖ω*‖₁ s^{1/2}) / (n μ^{1/2}) + σ* (2 log(1/δ)/n)^{1/2} + (σ* + ‖ω*‖₁) δ.    (13)

All the comments made after Theorem 2.1, especially those concerning the tuning parameters and the rates of convergence, hold for the risk bounds of Theorem 3.1 as well. Furthermore, the restricted eigenvalue condition in the latter theorem is much simpler and deserves special attention. In particular, one can remark that the failure of RE(s, 1) for √n(I_n − Π) implies that there is a unit vector η ∈ Im(A) such that |η_(1)| + ... + |η_(n−s)| ≤ |η_(n−s+1)| + ... + |η_(n)|, where η_(k) stands for the k-th smallest (in absolute value) entry of η. To gain a better understanding of how restrictive this assumption is, let us consider the case where the rows a₁, ..., a_n of A are i.i.d. zero-mean Gaussian vectors. Since η ∈ Im(A), its coordinates η_i are also i.i.d. Gaussian random variables (they can be taken N(0, 1) due to the homogeneity of the inequality we are interested in). The inequality |η_(1)| + ... + |η_(n−s)| ≤ |η_(n−s+1)| + ... + |η_(n)|
can be rewritten as (1/n) Σ_i |η_i| ≤ (2/n) ( |η_(n−s+1)| + ... + |η_(n)| ). While the left-hand side of this inequality tends to E[|η₁|] > 0, the right-hand side is upper-bounded by (2s/n) max_i |η_i|, which is of order 2s (log n)^{1/2}/n. Therefore, if 2s (log n)^{1/2}/n is small, the condition RE(s, 1) is satisfied. This informal discussion can be made rigorous by studying the large deviations of the quantity max_{η ∈ Im(A)\{0}} ‖η‖_∞ / ‖η‖₁. A simple sufficient condition entailing RE(s, 1) for √n(I_n − Π) is presented in the following lemma.

Lemma 3.2. Let us set ζ_s(A) = inf_{u ∈ S^(k−1)} (1/n) Σ_{i=1}^{n} |a_iᵀu| − (2s/n)‖A‖_{2,∞}. If ζ_s(A) > 0, then √n(I_n − Π) satisfies both RE(s, 1) and RE(s, s, 1), with κ(s, 1) ≥ κ(s, s, 1) ≥ ζ_s(A) / ( ν_max² + ζ_s(A)² )^{1/2}.

Table 1: Comparison of our procedure SFDS with the (oracle) Lasso and the SqRL on a synthetic dataset. The average values (Ave) and standard deviations (StD) of the quantities |β̂ − β*|₂ and |σ̂ − σ*| over 500 trials are reported. They represent, respectively, the accuracy in estimating the regression vector and the noise level.

    (n, p, s*, σ*)        SFDS |β̂−β*|₂   SFDS |σ̂−σ*|   Lasso |β̂−β*|₂   SqRL |β̂−β*|₂   SqRL |σ̂−σ*|
                          Ave    StD      Ave    StD     Ave    StD       Ave    StD      Ave    StD
    (200, 400, 2, 0.5)    0.04   0.03     0.18   0.14    0.07   0.05      0.06   0.04     0.20   0.14
    (200, 400, 2, 1)      0.09   0.05     0.42   0.35    0.16   0.11      0.13   0.09     0.46   0.37
    (200, 400, 2, 2)      0.23   0.17     0.75   0.55    0.31   0.21      0.25   0.18     0.79   0.56
    (200, 400, 5, 0.5)    0.06   0.01     0.28   0.11    0.13   0.09      0.11   0.06     0.18   0.27
    (200, 400, 5, 1)      0.20   0.05     0.56   0.10    0.31   0.04      0.25   0.02     0.66   0.05
    (200, 400, 5, 2)      0.34   0.11     0.34   0.21    0.73   0.25      0.47   0.29     0.69   0.70
    (200, 400, 10, 0.5)   0.10   0.01     0.36   0.02    0.15   0.00      0.10   0.01     0.36   0.02
    (200, 400, 10, 1)     0.19   0.09     0.27   0.26    0.31   0.04      0.19   0.09     0.27   0.26
    (200, 400, 10, 2)     1.90   0.20     4.74   1.01    0.61   0.08      1.80   0.04     3.70   0.48

The proof of the lemma can be found in the supplementary material. One can note that the problem (P2) boils down to computing (ω̂, σ̂) as a solution to

    minimize ‖ω‖₁   subject to
        √n ‖(I_n − Π)(√n ω − Y)‖_∞ ≤ λσ,
        nμσ² + √n [(I_n − Π)Y]ᵀ ω ≤ ‖(I_n − Π)Y‖₂²,

and then setting β̂ = (AᵀA)⁻¹Aᵀ(Y − √n ω̂).

4 Experiments

For the empirical evaluation we use a synthetic dataset with a randomly drawn Gaussian design matrix X and the real-world dataset fountain-P11³, on which we apply our methodology for computing the fundamental matrices between consecutive images.

4.1 Comparative evaluation on synthetic data

We randomly generated an n × p matrix X with independent entries distributed according to the standard normal distribution. We then chose a vector β* ∈ R^p with exactly s nonzero elements, all equal to one, the indexes of these elements being chosen at random. Finally, the response Y ∈ R^n was computed by adding a random noise σ*ξ, with ξ ~ N_n(0, I_n), to the signal Xβ*. Once Y and X were available, we computed three estimators of the parameters using the standard sparsity penalization (in order to be able to compare our approach with the others): the SFDS, the Lasso and the square-root Lasso (SqRL). We used the "universal" tuning parameters for all these methods: (λ, μ) = (√(2n log p), 1) for the SFDS, λ = √(2 log p) for the SqRL, and λ = σ*√(2 log p) for the Lasso. Note that the latter is not really an estimator but rather an oracle, since it exploits knowledge of the true σ*; this is also why the accuracy in estimating σ* is not reported for the Lasso in Table 1. To reduce the well-known bias toward zero [4, 23], we performed a post-processing step for all three procedures.
It consisted in computing least squares estimators after removing all the covariates corresponding to vanishing coefficients of the estimator of β*. The results summarized in Table 1 show that the SFDS is competitive with the state-of-the-art methods and, a bit surprisingly, is sometimes more accurate than the oracle Lasso using the true variance in the penalization. We stress, however, that the SFDS is designed for being applied in, and has theoretical guarantees for, the broader setting of fused sparsity.

4.2 Robust estimation of the fundamental matrix

To provide a qualitative evaluation of the proposed methodology on real data, we applied the SRDS to the problem of fundamental matrix estimation in multiple-view geometry, which constitutes an essential step in almost all pipelines of 3D reconstruction [13, 25].

³ Available at http://cvlab.epfl.ch/~strecha/multiview/denseMVS.html

Table 2: Quantitative results on the fountain dataset: the estimated noise level σ̂, the number of detected outliers ‖ω̂‖₀, and the percentage of outliers 100‖ω̂‖₀/n for each of the ten image pairs.

    Pair           1     2     3     4     5     6     7     8     9     10    Average
    σ̂             0.13  0.13  0.13  0.17  0.16  0.17  0.20  0.18  0.17  0.11  0.15
    ‖ω̂‖₀          218   80    236   90    198   309   17    31    207   8     139.4
    100‖ω̂‖₀/n     1.30  0.46  1.37  0.52  1.13  1.84  0.12  0.19  1.49  1.02  0.94

Figure 1: Qualitative results on the fountain dataset. Top left: the values of ω̂_i for the first pair of images; there is a clear separation between outliers and inliers. Top right: the first pair of images and the matches classified as wrong by the SRDS. Bottom: the eleven images of the dataset.

In short, if we have two images I and I′ representing the same 3D scene, then there is a 3×3 matrix F, called the fundamental matrix, such that a point x = (x, y) in I matches a point x′ = (x′, y′) in I′ only if [x; y; 1]ᵀ F [x′; y′; 1] = 0. Clearly, F is defined up to a scale factor: if F₃₃ ≠ 0, one can assume that F₃₃ = 1. Thus, each pair x ↔ x′ of matching points in the images I and I′ yields a linear constraint on the eight remaining coefficients of F. Because of quantization and the presence of noise in the images, these linear relations are satisfied only up to some error. Thus, the estimation of F from a family of matching points {x_i ↔ x′_i; i = 1, ..., n} is a problem of linear regression. Typically, matches are computed by comparing local descriptors (such as SIFT [16]) and, for images of reasonable resolution, hundreds of matching points are found. The computation of the fundamental matrix would not be a problem in this context of large sample size / low dimension if the matching algorithms were perfectly correct. However, due to noise, repetitive structures and other factors, a non-negligible fraction of the detected matches are wrong (outliers). Eliminating these outliers and robustly estimating F are crucial steps for performing 3D reconstruction.

Here, we apply the SRDS to the estimation of F for the 10 pairs of consecutive images provided by the fountain dataset [21]; the 11 images are shown at the bottom of Fig. 1. Using SIFT descriptors, we found more than 17,000 point matches in most of the 10 pairs of images under consideration. The CPU time for computing each matrix using the SeDuMi solver [22] was about 7 seconds, despite this large dimensionality. The number of outliers and the estimated noise level for each pair of images are reported in Table 2. We also show in Fig. 1 the 218 outliers detected for the first pair of images. They are all indeed wrong correspondences, even those on the windows (the latter are due to the repetitive structure of the window).
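For completeness, here is a minimal sketch of how each SIFT match is turned into one row of the linear regression fed to the SRDS. The normalization F₃₃ = 1 follows the text above; the helper name is ours, and the columns of A should additionally be rescaled to satisfy the unit-diagonal assumption of Sec. 3.

    import numpy as np

    def epipolar_system(pts, pts_prime):
        """One row per match x <-> x': the epipolar constraint
        [x; y; 1]^T F [x'; y'; 1] = 0 with F33 = 1 gives A f = -1,
        where f stacks the eight remaining entries of F row by row."""
        x, y = pts[:, 0], pts[:, 1]
        xp, yp = pts_prime[:, 0], pts_prime[:, 1]
        A = np.column_stack([x * xp, x * yp, x, y * xp, y * yp, y, xp, yp])
        Y = -np.ones(len(pts))
        return A, Y  # feed (A, Y) to the SRDS of Sec. 3 to flag outlier matches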
5 Conclusion and perspectives

We have presented a new procedure, the SFDS, for the problem of learning linear models with an unknown noise level under the fused sparsity scenario. We showed that this procedure is inspired by the penalized maximum likelihood but has the advantage of being computable by solving a second-order cone program. We established tight, nonasymptotic theoretical guarantees for the SFDS, with special attention paid to robust estimation in linear models. The experiments we have carried out are very promising and support our theoretical results. In the future, we intend to generalize the theoretical study of the performance of the SFDS to the case of non-Gaussian errors ξ_i, as well as to investigate its power in variable selection. The extension to the case where the number of rows in M is larger than the number of columns is another interesting topic for future research.

References

[1] Stephen Becker, Emmanuel Candès, and Michael Grant. Templates for convex cone problems with applications to sparse signal recovery. Math. Program. Comput., 3(3):165-218, 2011.
[2] A. Belloni, Victor Chernozhukov, and L. Wang. Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika, to appear, 2012.
[3] Peter J. Bickel, Ya'acov Ritov, and Alexandre B. Tsybakov. Simultaneous analysis of lasso and Dantzig selector. Ann. Statist., 37(4):1705-1732, 2009.
[4] Emmanuel Candes and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. Ann. Statist., 35(6):2313-2351, 2007.
[5] Emmanuel J. Candès. The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, 346(9-10):589-592, 2008.
[6] Emmanuel J. Candès and Paige A. Randall. Highly robust error correction by convex programming. IEEE Trans. Inform. Theory, 54(7):2829-2840, 2008.
[7] Arnak S. Dalalyan and Renaud Keriven. L1-penalized robust estimation for a class of inverse problems arising in multiview geometry. In NIPS, pages 441-449, 2009.
[8] Arnak S. Dalalyan and Renaud Keriven. Robust estimation for an inverse problem arising in multiview geometry. J. Math. Imaging Vision, 43(1):10-23, 2012.
[9] Eric Gautier and Alexandre Tsybakov. High-dimensional instrumental variables regression and confidence sets. Technical report, arXiv:1105.2454, September 2011.
[10] Christophe Giraud, Sylvie Huet, and Nicolas Verzelen. High-dimensional regression with unknown variance. Submitted, arXiv:1109.5587v2 [math.ST].
[11] Z. Harchaoui and C. Lévy-Leduc. Multiple change-point estimation with a total variation penalty. J. Amer. Statist. Assoc., 105(492):1480-1493, 2010.
[12] Zaïd Harchaoui and Céline Lévy-Leduc. Catching change-points with lasso. In John Platt, Daphne Koller, Yoram Singer, and Sam Roweis, editors, NIPS. Curran Associates, Inc., 2007.
[13] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, June 2004.
[14] A. Iouditski, F. Kilinc Karzan, A. S. Nemirovski, and B. T. Polyak. On the accuracy of l1-filtering of signals with block-sparse structure. In NIPS 24, pages 1260-1268, 2011.
[15] S. Lambert-Lacroix and L. Zwald. Robust regression through the Huber's criterion and adaptive lasso penalty. Electron. J. Stat., 5:1015-1053, 2011.
[16] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[17] E. Mammen and S. van de Geer. Locally adaptive regression splines. Ann. Statist., 25(1):387-413, 1997.
[18] Nam H. Nguyen, Nasser M. Nasrabadi, and Trac D. Tran. Robust lasso with missing and grossly corrupted observations. In J. Shawe-Taylor, R. S. Zemel, P. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 1881-1889, 2011.
[19] A. Rinaldo. Properties and refinements of the fused lasso. Ann. Statist., 37(5B):2922-2952, 2009.
[20] Nicolas Städler, Peter Bühlmann, and Sara van de Geer. ℓ1-penalization for mixture regression models. TEST, 19(2):209-256, 2010.
[21] C. Strecha, W. von Hansen, L. Van Gool, P. Fua, and U. Thoennessen. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In Conference on Computer Vision and Pattern Recognition, pages 1-8, 2009.
[22] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw., 11/12(1-4):625-653, 1999.
[23] T. Sun and C.-H. Zhang. Comments on: ℓ1-penalization for mixture regression models. TEST, 19(2):270-275, 2010.
[24] T. Sun and C.-H. Zhang. Scaled sparse linear regression. arXiv:1104.4595, 2011.
[25] R. Szeliski. Computer Vision: Algorithms and Applications. Texts in Computer Science. Springer, 2010.
[26] Robert Tibshirani. Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B, 58(1):267-288, 1996.
[27] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B Stat. Methodol., 67(1):91-108, 2005.
[28] Sara A. van de Geer and Peter Bühlmann. On the conditions used to prove oracle results for the Lasso. Electron. J. Stat., 3:1360-1392, 2009.
A Conditional Multinomial Mixture Model for Superset Label Learning

Li-Ping Liu — EECS, Oregon State University, Corvallis, OR 97331, [email protected]
Thomas G. Dietterich — EECS, Oregon State University, Corvallis, OR 97331, [email protected]

Abstract

In the superset label learning problem (SLL), each training instance provides a set of candidate labels of which one is the true label of the instance. As in ordinary regression, the candidate label set is a noisy version of the true label. In this work, we solve the problem by maximizing the likelihood of the candidate label sets of the training instances. We propose a probabilistic model, the Logistic Stick-Breaking Conditional Multinomial Model (LSB-CMM), to do the job. The LSB-CMM is derived from the logistic stick-breaking process. It first maps data points to mixture components and then assigns to each mixture component a label drawn from a component-specific multinomial distribution. The mixture components can capture underlying structure in the data, which is very useful when the model is weakly supervised. This advantage comes at little cost, since the model introduces few additional parameters. Experimental tests on several real-world problems with superset labels show results that are competitive or superior to the state of the art. The discovered underlying structures also provide improved explanations of the classification predictions.

1 Introduction

In supervised classification, the goal is to learn a classifier from a collection of training instances, where each instance has a unique class label. However, in many settings it is difficult to obtain such precisely-labeled data. Fortunately, it is often possible to obtain a set of labels for each instance, where the correct label is one of the elements of the set. For example, captions on pictures (in newspapers, Facebook, etc.) typically identify all of the people in the picture but do not necessarily indicate which face belongs to each person. Imprecisely-labeled training examples can be created by detecting each face in the image and defining a label set containing all of the names mentioned in the caption. A similar case arises in bird song classification [2]. In this task, a field recording of multiple birds singing is divided into 10-second segments, and experts identify the species of all of the birds singing in each segment without localizing each species to a specific part of the spectrogram. These examples show that superset-labeled data are typically much cheaper to acquire than standard single-labeled data. If effective learning algorithms can be devised for superset-labeled data, then they would have wide application.

The superset label learning problem has been studied under two main formulations. In the multi-instance multi-label (MIML) formulation [15], the training data consist of pairs $(B_i, Y_i)$, where $B_i = \{x_{i,1}, \dots, x_{i,n_i}\}$ is a set of instances and $Y_i$ is a set of labels. The assumption is that for every instance $x_{i,j} \in B_i$, its true label $y_{i,j} \in Y_i$. The work of Jie et al. [9] and Briggs et al. [2] learns classifiers from such set-labeled bags.

In the superset label formulation (which has sometimes been confusingly called the "partial label" problem) [7, 10, 8, 12, 4, 5], each instance $x_n$ has a candidate label set $Y_n$ that contains the unknown true label $y_n$. This formulation ignores any bag structure and views each instance independently.
It is more general than the MIML formulation, since any MIML problem can be converted to a superset label problem (with loss of the bag information). Furthermore, the superset label formulation is natural in many applications that do not involve bags of instances. For example, in some applications annotators may be unsure of the correct label, so permitting them to provide a superset of the correct label avoids the risk of mislabeling. In this paper, we employ the superset label formulation. Other relevant work includes Nguyen et al. [12] and Cour et al. [5], who extend SVMs to handle superset-labeled data.

In the superset label problem, the label set $Y_n$ can be viewed as a corruption of the true label. The standard approach to learning with corrupted labels is to assume a generic noise process and incorporate it into the likelihood function. In standard supervised learning, it is common to assume that the observed label is sampled from a Bernoulli random variable whose most likely outcome is equal to the true label. In ordinary least-squares regression, the assumption is that the observed value is drawn from a Gaussian distribution whose mean is equal to the true value and whose variance is a constant $\sigma^2$. In the superset label problem, we will assume that the observed label set $Y_n$ is drawn from a set-valued distribution $p(Y_n \mid y_n)$ that depends only on the true label. When computing the likelihood, this will allow us to treat the true label as a latent variable that can be marginalized away.

When the label information is imprecise, the learning algorithm has to depend more on underlying structure in the data. Indeed, many semi-supervised learning methods [16] model the cluster structure of the training data explicitly or implicitly. This suggests that the underlying structure of the data should also play an important role in the superset label problem.

In this paper, we propose the Logistic Stick-Breaking Conditional Multinomial Model (LSB-CMM) for the superset label learning problem. The model has two components: the mapping component and the coding component. Given an input $x_n$, the mapping component maps $x_n$ to a region $k$. The coding component then generates the label according to a multinomial distribution associated with $k$. The mapping component is implemented by the Logistic Stick-Breaking Process (LSBP) [13], whose Bernoulli probabilities come from discriminative functions. The mapping and coding components are optimized simultaneously with the variational EM algorithm.

LSB-CMM addresses the superset label problem in several respects. First, the mapping component models the cluster structure with a set of regions. The fact that instances in the same region often have the same label is important for inferring the true label from noisy candidate label sets. Second, the regions do not directly correspond to classes. Instead, the number of regions is automatically determined by the data, and it can be much larger than the number of classes. Third, the results of the LSB-CMM model can be more easily interpreted than the approaches based on SVMs [5, 2]. The regions provide information about how the data are organized in the classification problem.

2 The Logistic Stick-Breaking Conditional Multinomial Model

The superset label learning problem seeks to train a classifier $f : \mathbb{R}^d \mapsto \{1, \dots, L\}$ on a given dataset $(\mathbf{x}, \mathbf{Y}) = \{(x_n, Y_n)\}_{n=1}^N$, where each instance $x_n \in \mathbb{R}^d$ has a candidate label set $Y_n \subseteq \{1, \dots, L\}$. The true labels $\mathbf{y} = \{y_n\}_{n=1}^N$ are not directly observed.
The only information is that the true label $y_n$ of instance $x_n$ is in the candidate set $Y_n$. The extra labels $\{l \mid l \neq y_n,\ l \in Y_n\}$ causing the ambiguity will be called the distractor labels. For any test instance $(x_t, y_t)$ drawn from the same distribution as $\{(x_n, y_n)\}_{n=1}^N$, the trained classifier $f$ should map $x_t$ to $y_t$ with high probability. When $|Y_n| = 1$ for all $n$, the problem is a supervised classification problem. We require $|Y_n| < L$ for all $n$; that is, every candidate label set must provide at least some information about the true label of the instance.

2.1 The Model

[Figure 1: The LSB-CMM. Square nodes are discrete, circle nodes are continuous, and double-circle nodes are deterministic.]

As stated in the introduction, the candidate label set is a noisy version of the true label. To train a classifier, we first need a likelihood function $p(Y_n \mid x_n)$. The key to our approach is to write this as $p(Y_n \mid x_n) = \sum_{y_n=1}^{L} p(Y_n \mid y_n)\, p(y_n \mid x_n)$, where each term is the product of the underlying true classifier, $p(y_n \mid x_n)$, and the noise model $p(Y_n \mid y_n)$. We then make the following assumption about the noise distribution:

Assumption: All labels in the candidate label set $Y_n$ have the same probability of generating $Y_n$, but no label outside of $Y_n$ can generate $Y_n$:

$$p(Y_n \mid y_n = l) = \begin{cases} \alpha(Y_n) & \text{if } l \in Y_n \\ 0 & \text{if } l \notin Y_n \end{cases} \qquad (1)$$

This assumption enforces three constraints. First, the set of labels $Y_n$ is conditionally independent of the input $x_n$ given $y_n$. Second, labels that do not appear in $Y_n$ have probability 0 of generating $Y_n$. Third, all of the labels in $Y_n$ have equal probability of generating $Y_n$ (symmetry). Note that these constraints do not imply that the training data are correctly labeled. That is, suppose that the most likely label for a particular input $x_n$ is $y_n = l$. Because $p(y_n \mid x_n)$ is a multinomial distribution, a different label $y_n = l'$ might be assigned to $x_n$ by the labeling process. Then this label is further corrupted by adding distractor labels to produce $Y_n$. Hence, it could be that $l \notin Y_n$. In short, in this model we have the usual "multinomial noise" in the labels, which is then further compounded by "superset noise". The third constraint can be criticized for being simplistic; we believe it can be replaced with a learned noise model in future work.

Given (1), we can marginalize away $y_n$ in the following optimization problem maximizing the likelihood of the observed candidate labels:

$$\hat f = \arg\max_f \sum_{n=1}^{N} \log \sum_{y_n=1}^{L} p(y_n \mid x_n; f)\, p(Y_n \mid y_n) = \arg\max_f \sum_{n=1}^{N} \log \sum_{y_n \in Y_n} p(y_n \mid x_n; f) + \sum_{n=1}^{N} \log(\alpha(Y_n)). \qquad (2)$$

Under the conditional independence and symmetry assumptions, the last term does not depend on $f$ and so can be ignored in the optimization. This result is consistent with the formulation in [10].

We propose the Logistic Stick-Breaking Conditional Multinomial Model to instantiate $f$ (see Figure 1). In LSB-CMM, we introduce a set of $K$ regions (mixture components) $\{1, \dots, K\}$. LSB-CMM has two components. The mapping component maps each instance $x_n$ to a region $z_n$, $z_n \in \{1, \dots, K\}$. The coding component then draws a label $y_n$ from the multinomial distribution indexed by $z_n$ with parameter $\theta_{z_n}$. We denote the region indexes of the training instances by $\mathbf{z} = (z_n)_{n=1}^N$. In the mapping component, we employ the Logistic Stick-Breaking Process (LSBP) [13] to model the instance-region relationship. LSBP is a modification of the Dirichlet Process (DP) [14].
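A minimal sketch of the superset likelihood in Eq. (2): since $\alpha(Y_n)$ drops out, training maximizes $\sum_n \log \sum_{y \in Y_n} p(y \mid x_n)$. The softmax parameterization of $p(y \mid x)$ below is our illustrative stand-in, not the LSB-CMM mapping/coding decomposition itself.

```python
import numpy as np

def superset_log_likelihood(W, X, candidate_sets):
    """W: (d, L) softmax weights; X: (N, d); candidate_sets: list of label sets."""
    scores = X @ W                                   # (N, L) class scores
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)        # p(y | x_n)
    ll = 0.0
    for n, Y_n in enumerate(candidate_sets):
        ll += np.log(sum(probs[n, l] for l in Y_n))  # log sum_{y in Y_n} p(y|x_n)
    return ll

# Toy usage: 3 classes; candidate sets contain the true label plus distractors.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
W = rng.standard_normal((4, 3))
Y_sets = [{0, 1}, {2}, {0, 2}, {1}, {1, 2}]
print(superset_log_likelihood(W, X, Y_sets))
```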
In LSBP, the sequence of Bernoulli probabilities are the outputs of a sequence of logistic functions instead of being random draws from a Beta distribution as in the Dirichlet process. The input to the $k$-th logistic function is the dot product of $x_n$ and a learned weight vector $w_k \in \mathbb{R}^{d+1}$. (The added dimension corresponds to a zeroth feature fixed to 1 to provide an intercept term.) To regularize these logistic functions, we posit that each $w_k$ is drawn from a Gaussian distribution $\mathrm{Normal}(0, \Sigma)$, where $\Sigma = \mathrm{diag}(\infty, \sigma^2, \dots, \sigma^2)$. This regularizes all terms in $w_k$ except the intercept. For each $x_n$, a sequence of probabilities $\{v_{nk}\}_{k=1}^K$ is generated from the logistic functions, where $v_{nk} = \mathrm{expit}(w_k^T x_n)$ and $\mathrm{expit}(u) = 1/(1 + \exp(-u))$ is the logistic function. We truncate $k$ at $K$ by setting $w_K = (+\infty, 0, \dots, 0)$ and thus $v_{nK} = 1$. Let $\mathbf{w}$ denote the collection of all $K$ vectors $w_k$. Given the probabilities $v_{n1}, \dots, v_{nK}$ computed from $x_n$, we choose the region $z_n$ according to a stick-breaking procedure:

$$p(z_n = k) = \pi_{nk} = v_{nk} \prod_{i=1}^{k-1} (1 - v_{ni}). \qquad (3)$$

Here we stipulate that the product is 1 when $k = 1$. Let $\pi_n = (\pi_{n1}, \dots, \pi_{nK})$ constitute the parameter of a multinomial distribution; then $z_n$ is drawn from this distribution.

In the coding component of LSB-CMM, we first draw $K$ $L$-dimensional multinomial probability vectors $\theta = \{\theta_k\}_{k=1}^K$ from the prior Dirichlet distribution with parameter $\alpha$. Then, for each instance $x_n$ with mixture $z_n$, its label $y_n$ is drawn from the multinomial distribution with $\theta_{z_n}$. In the traditional multi-class problem, $y_n$ is observed. However, in the SLL problem $y_n$ is not observed and $Y_n$ is generated from $y_n$. The generative process of the whole model is summarized below:

$$w_k \sim \mathrm{Normal}(0, \Sigma),\quad 1 \le k \le K-1; \qquad w_K = (+\infty, 0, \dots, 0) \qquad (4)$$
$$z_n \sim \mathrm{Mult}(\pi_n), \qquad \pi_{nk} = \mathrm{expit}(w_k^T x_n) \prod_{i=1}^{k-1} \bigl(1 - \mathrm{expit}(w_i^T x_n)\bigr) \qquad (5)$$
$$\theta_k \sim \mathrm{Dirichlet}(\alpha) \qquad (6)$$
$$y_n \sim \mathrm{Mult}(\theta_{z_n}) \qquad (7)$$
$$Y_n \sim \mathrm{Dist1}(y_n) \qquad (\text{Dist1 is some distribution satisfying } (1)) \qquad (8)$$

As shown in (2), the model needs to maximize the likelihood that each $y_n$ is in $Y_n$. After incorporating the priors, we can write the penalized maximum likelihood objective as

$$\max\ LL = \sum_{n=1}^{N} \log \Bigl( \sum_{y_n \in Y_n} p(y_n \mid x_n, \mathbf{w}, \alpha) \Bigr) + \log(p(\mathbf{w} \mid 0, \Sigma)). \qquad (9)$$

This cannot be solved directly, so we apply variational EM [1].

2.2 Variational EM

The hidden variables in the model are $\mathbf{y}$, $\mathbf{z}$, and $\theta$. For these hidden variables, we introduce the variational distribution $q(\mathbf{y}, \mathbf{z}, \theta \mid \hat\Phi, \hat\alpha)$, where $\hat\Phi = \{\hat\phi_n\}_{n=1}^N$ and $\hat\alpha = \{\hat\alpha_k\}_{k=1}^K$ are the parameters. Then we factorize $q$ as

$$q(\mathbf{z}, \mathbf{y}, \theta \mid \hat\Phi, \hat\alpha) = \prod_{n=1}^{N} q(z_n, y_n \mid \hat\phi_n) \prod_{k=1}^{K} q(\theta_k \mid \hat\alpha_k), \qquad (10)$$

where $\hat\phi_n$ is a $K \times L$ matrix and $q(z_n, y_n \mid \hat\phi_n)$ is a multinomial distribution in which $p(z_n = k, y_n = l) = \hat\phi_{nkl}$. This distribution is constrained by the candidate label set: if a label $l \notin Y_n$, then $\hat\phi_{nkl} = 0$ for any value of $k$. The distribution $q(\theta_k \mid \hat\alpha_k)$ is a Dirichlet distribution with parameter $\hat\alpha_k$.

After we set the distribution $q(\mathbf{z}, \mathbf{y}, \theta)$, our variational EM follows standard methods. The detailed derivation can be found in the supplementary materials [11]. Here we only show the final updating steps with some analysis. In the E step, the parameters of the variational distribution are updated as in (11) and (12):

$$\hat\phi_{nkl} \propto \begin{cases} \pi_{nk} \exp\bigl(E_{q(\theta_k \mid \hat\alpha_k)}[\log(\theta_{kl})]\bigr), & \text{if } l \in Y_n \\ 0, & \text{if } l \notin Y_n \end{cases} \qquad (11)$$

$$\hat\alpha_{kl} = \alpha_l + \sum_{n=1}^{N} \hat\phi_{nkl}. \qquad (12)$$

The update of $\hat\phi_n$ in (11) indicates the key difference between the LSB-CMM model and traditional clustering models. The formation of regions is directed by both instance similarities and class labels.
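A sketch of the logistic stick-breaking mapping of Eqs. (3)-(5): each region probability $\pi_{nk}$ is the $k$-th "stick" length produced by the logistic gates. The variable names (`W`, `X`) and the random weights are illustrative assumptions.

```python
import numpy as np

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

def stick_breaking_pi(W, X):
    """W: (K-1, d+1) logistic weights; X: (N, d) inputs. Returns (N, K) pi."""
    N = X.shape[0]
    X1 = np.hstack([np.ones((N, 1)), X])     # prepend the intercept feature
    V = expit(X1 @ W.T)                      # v_{nk}, k = 1..K-1
    V = np.hstack([V, np.ones((N, 1))])      # truncation: v_{nK} = 1
    # pi_{nk} = v_{nk} * prod_{i<k} (1 - v_{ni})
    remaining = np.cumprod(np.hstack([np.ones((N, 1)), 1.0 - V[:, :-1]]), axis=1)
    return V * remaining

rng = np.random.default_rng(0)
pi = stick_breaking_pi(rng.standard_normal((4, 3)), rng.standard_normal((5, 2)))
print(pi.sum(axis=1))  # each row sums to 1, as a stick-breaking vector should
```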
If the instance $x_n$ wants to join region $k$ (i.e., $\sum_l \hat\phi_{nkl}$ is large), then it must be similar to $w_k$, as well as to the instances in that region, in order to make $\pi_{nk}$ large. Simultaneously, its candidate labels must fit the "label flavor" of region $k$, where the "label flavor" means that region $k$ prefers labels having large values in $\hat\alpha_k$. The update of $\hat\alpha$ in (12) can be interpreted as having each instance $x_n$ vote for label $l$ in region $k$ with weight $\hat\phi_{nkl}$.

In the M step, we need to solve the maximization problem (13) for each $w_k$, $1 \le k \le K-1$. Note that $w_K$ is fixed. Each $w_k$ can be optimized separately. The optimization problem is similar to the problem of logistic regression and is also a concave maximization problem, which can be solved by any gradient-based method, such as BFGS:

$$\max_{w_k}\ -\frac{1}{2} w_k^T \Sigma^{-1} w_k + \sum_{n=1}^{N} \Bigl[ \bar\phi_{nk} \log(\mathrm{expit}(w_k^T x_n)) + \tilde\phi_{nk} \log(1 - \mathrm{expit}(w_k^T x_n)) \Bigr], \qquad (13)$$

where $\bar\phi_{nk} = \sum_{l=1}^{L} \hat\phi_{nkl}$ and $\tilde\phi_{nk} = \sum_{j=k+1}^{K} \bar\phi_{nj}$. Intuitively, $\bar\phi_{nk}$ is the probability that instance $x_n$ belongs to region $k$, and $\tilde\phi_{nk}$ is the probability that $x_n$ belongs to regions $\{k+1, \dots, K\}$. Therefore, the optimal $w_k$ discriminates instances in region $k$ against instances in regions $> k$.

2.3 Prediction

For a test instance $x_t$, we predict the label with maximum posterior probability. The test instance can be mapped to a region with $\mathbf{w}$, but the coding matrix $\theta$ is marginalized out in the EM. We use the variational distribution $p(\theta_k \mid \hat\alpha_k)$ as the prior of each $\theta_k$ and integrate out all the $\theta_k$'s. Given a test point $x_t$, the prediction is the label $l$ that maximizes the probability $p(y_t = l \mid x_t, \mathbf{w}, \hat\alpha)$, calculated as in (14). The detailed derivation is also in the supplementary materials [11].

$$p(y_t = l \mid x_t, \mathbf{w}, \hat\alpha) = \sum_{k=1}^{K} \pi_{tk} \frac{\hat\alpha_{kl}}{\sum_{l'} \hat\alpha_{kl'}}, \qquad (14)$$

where $\pi_{tk} = \mathrm{expit}(w_k^T x_t) \prod_{i=1}^{k-1} \bigl(1 - \mathrm{expit}(w_i^T x_t)\bigr)$. The test instance goes to region $k$ with probability $\pi_{tk}$, and its label is decided by the votes ($\hat\alpha_k$) in that region.

2.4 Complexity Analysis and Practical Issues

In the E step, for each region $k$, the algorithm iterates over all candidate labels of all instances, so the complexity is $O(NKL)$. In the M step, the algorithm solves $K-1$ separate optimization problems. Suppose each optimization problem takes $O(VNd)$ time, where $V$ is the number of BFGS iterations. Then the complexity is $O(KVNd)$. Since $V$ is usually larger than $L$, the overall complexity of one EM iteration is $O(KVNd)$. Suppose the EM steps converge within $m$ iterations, where $m$ is usually less than 50. Then the overall complexity is $O(mKVNd)$. The space complexity is $O(NK)$, since we only store the matrix $\sum_{l=1}^{L} \hat\phi_{nkl}$ and the matrix $\hat\alpha$. In prediction, the mapping phase requires $O(Kd)$ time to multiply $\mathbf{w}$ and the test instance. After the stick-breaking process, which takes $O(K)$ calculations, the coding phase requires $O(KL)$ calculations. Thus the overall time complexity is $O(K \max\{d, L\})$. Hence, the prediction time is comparable to that of logistic regression.

There are several practical issues that affect the performance of the model. Initialization: From the model design, we can expect that instances in the same region have the same label. Therefore, it is reasonable to initialize $\hat\alpha$ so that each region prefers only one label; that is, each $\hat\alpha_k$ has one element with a large value and all others with small values. We initialize $\pi$ to $\pi_{nk} = \frac{1}{K}$, so that all regions have equal probability of being chosen at the start. Initialization of these two variables is enough to begin the EM iterations.
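A sketch of the prediction rule in Eq. (14): route $x_t$ through the stick-breaking gates, then average the per-region Dirichlet "votes". The argument names are ours; `alpha_hat` is assumed to hold the $(K, L)$ matrix of variational Dirichlet parameters.

```python
import numpy as np

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

def predict_label(W, alpha_hat, x_t):
    """W: (K-1, d+1) gate weights; alpha_hat: (K, L) Dirichlet parameters."""
    x1 = np.concatenate([[1.0], x_t])           # intercept feature
    v = np.append(expit(W @ x1), 1.0)           # truncation: v_{tK} = 1
    remaining = np.cumprod(np.concatenate([[1.0], 1.0 - v[:-1]]))
    pi_t = v * remaining                         # Eq. (3) for the test point
    votes = alpha_hat / alpha_hat.sum(axis=1, keepdims=True)  # E[theta_k]
    posterior = pi_t @ votes                     # Eq. (14)
    return int(np.argmax(posterior))

rng = np.random.default_rng(0)
print(predict_label(rng.standard_normal((4, 3)),
                    rng.random((5, 3)) + 0.05, rng.standard_normal(2)))
```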
We find that such initialization works well for our model and is generally better than random initialization. Calculation of $E_{q(\theta_k \mid \hat\alpha_k)}[\log(\theta_{kl})]$ in (11): Although it has a closed-form solution, we encountered numerical issues, so we calculate it via Monte Carlo sampling. This does not change the complexity analysis above, since the training is dominated by the M step. Priors: We found that using a non-informative prior for $\mathrm{Dirichlet}(\alpha)$ worked best. From (12) and (14), we can see that when $\theta$ is marginalized, the distribution is non-informative when $\alpha$ is set to small values. We use $\alpha = 0.05$ in our experiments.

[Figure 2: Decision boundaries of LSB-CMM on a linearly-inseparable problem (three classes). Left: all data points have true labels. Right: labels of gray data points are corrupted.]

3 Experiments

In this section, we describe the results of several experiments we conducted to study the behavior of our proposed model. First, we experiment with a toy problem to show that our algorithm can solve problems with linearly-inseparable classes. Second, we perform controlled experiments on three synthetic datasets to study the robustness of LSB-CMM with respect to the degree of ambiguity of the label sets. Third, we experiment with three real-world datasets.

LSB-CMM Model: The LSB-CMM model has three parameters: $K$, $\sigma^2$, and $\alpha$. We find that the model is insensitive to $K$ if it is sufficiently large. We set $K = 10$ for the toy problems and $K = 5L$ for the other problems. $\alpha$ is set to 0.05 for all experiments. When the data is standardized, the regularization parameter $\sigma^2 = 1$ generally gives good results, so $\sigma^2$ is set to 1 in all superset label tasks.

Baselines: We compared the LSB-CMM model with three state-of-the-art methods. Supervised SVM: the SVM is always trained with the true labels. Its performance can be viewed as an upper bound on the performance of any SLL algorithm. LIBSVM [3] with an RBF kernel was run to construct a multi-class classifier in one-vs-one mode. One third of the training data was used to tune the $C$ parameter and the RBF kernel parameter $\gamma$. CLPL: CLPL [5] is a linear model that encourages large average scores of candidate labels. The model is insensitive to the $C$ parameter, so we set the $C$ value to 1000 (the default value in their code). SIM: SIM [2] minimizes the ranking loss of instances in a bag. In the controlled experiments and in one of the real-world problems, we could not compare SIM with LSB-CMM because of the lack of bag information. The $\lambda$ parameter is set to $10^{-8}$ based on the authors' recommendation.

3.1 A Toy Problem

In this experiment, we generate a linearly-inseparable SLL problem.
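As a point of reference, here is a minimal sketch of the supervised-SVM baseline as we read it (one-vs-one RBF SVM with $C$ and $\gamma$ tuned on a held-out third of the training data). scikit-learn stands in for LIBSVM, and the data, grid values, and split are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
y = rng.integers(0, 3, size=300)          # stand-in for true (not superset) labels

X_fit, X_tune, y_fit, y_tune = train_test_split(X, y, test_size=1/3, random_state=0)
grid = GridSearchCV(SVC(kernel="rbf", decision_function_shape="ovo"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}, cv=3)
grid.fit(X_tune, y_tune)                  # tune C and gamma on one third
clf = SVC(kernel="rbf", **grid.best_params_).fit(X_fit, y_fit)
print(clf.score(X_fit, y_fit))
```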
The data has two dimensions and six clusters drawn from six normal distributions with means at the corners of a hexagon. We assign a label to each cluster so that the problem is linearly-inseparable (see Figure 2). In the first task, we give the model the true labels. In the second task, we add a distractor label for two thirds of all instances (gray data points in the figure). The distractor label is randomly chosen from the two labels other than the true label. The decision boundaries found by LSB-CMM in both tasks are shown in Figure 2. We can see that LSB-CMM can successfully produce nonlinear decision boundaries for this problem. After injecting distractor labels, LSB-CMM still recovers the boundaries between classes. There is a minor change of the boundary at the edge of each cluster, while the main part of each cluster is classified correctly.

3.2 Controlled Experiments

We conducted controlled experiments on three UCI [6] datasets: segment (2310 instances, 7 classes), pendigits (10992 instances, 10 classes), and usps (9298 instances, 10 classes). Tenfold cross validation is performed on all three datasets. For each training instance, we add distractor labels with controlled probability. As in [5], we use $p$, $q$, and $\varepsilon$ to control the ambiguity level of the candidate label sets.

[Figure 3: Three regions learned by the model on usps.]

The roles and values of these three variables are as follows: $p$ is the probability that an instance has distractor labels ($p = 1$ for all controlled experiments); $q \in \{1, 2, 3, 4\}$ is the number of distractor labels; and $\varepsilon \in \{0.3, 0.7, 0.9, 0.95\}$ is the maximum probability that a distractor label co-occurs with the true label [5], also called the ambiguity degree. We have two settings for these three variables. In the first setting, we hold $q = 1$ and vary $\varepsilon$; that is, for each label $l$, we choose a specific label $l' \neq l$ as the (unique) distractor label with probability $\varepsilon$, or choose any other label with probability $1 - \varepsilon$. In the extreme case when $\varepsilon = 1$, $l'$ and $l$ always co-occur, and they cannot be distinguished by any classifier. In the second setting, we vary $q$ and pick the distractor labels randomly for each candidate label set.

The results are shown in Figure 4. Our LSB-CMM model significantly outperforms the CLPL approach. As the number of distractor labels increases, the performance of both methods goes down, but not by much. When the true label is combined with different distractor labels, disambiguation is easy; co-occurring distractor labels provide much less disambiguation. This explains why a large ambiguity degree hurts the performance of both methods. The small dataset (segment) suffers even more from a large ambiguity degree, because there are fewer data points that can "break" the strong correlation between the true label and the distractors.

To explore why the LSB-CMM model has good performance, we investigated the regions learned by the model. Recall that $\pi_{nk}$ is the probability that $x_n$ is sent to region $k$. In each region $k$, the representative instances have large values of $\pi_{nk}$. We examined all $\pi_{nk}$ from the model trained on the usps dataset with 3 random distractor labels. For each region $k$, we selected the 9 most representative instances. Figure 3 shows representative instances for three regions. These are all from class "2" but are written in different styles. This shows that the LSB-CMM model can discover sub-classes in the data.
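A sketch of the label-set corruption protocol described above: with probability $p$ an instance receives distractors, drawn either from a fixed co-occurring partner (ambiguity degree $\varepsilon$, setting 1) or as $q$ random labels (setting 2). The function name, the choice of partner label, and the variable names are our own illustration of the protocol in [5].

```python
import numpy as np

def make_candidate_sets(y, L, p=1.0, q=None, eps=None, rng=None):
    rng = rng or np.random.default_rng(0)
    sets = []
    for label in y:
        Y = {int(label)}
        if rng.random() < p:
            if eps is not None:                      # setting 1: q = 1, vary eps
                partner = (label + 1) % L            # assumed fixed co-occurring label
                if rng.random() < eps:
                    Y.add(int(partner))
                else:
                    others = [l for l in range(L) if l != label]
                    Y.add(int(rng.choice(others)))
            else:                                    # setting 2: vary q
                others = [l for l in range(L) if l != label]
                Y.update(int(l) for l in rng.choice(others, size=q, replace=False))
        sets.append(Y)
    return sets

print(make_candidate_sets(np.array([0, 1, 2]), L=5, q=2))
```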
In some applications, the whole class is not easy to discriminate from other classes, but each sub-class can sometimes be easily identified. In such cases, LSB-CMM will be very useful and can improve performance. Explaining the results via regions can also give a better understanding of the learned classifier. To analyze the performance of a classifier learned from data with either superset labels or fully observed labels, one traditional method is to compute the confusion matrix. While the confusion matrix can only reveal the relationships between classes, the mixture analysis can indicate precisely which sub-classes of a class are confused with which sub-classes of other classes. The regions can also help the user identify and define new classes as refinements of existing ones.

3.3 Real-World Problems

We apply our model to three real-world problems. 1) BirdSong dataset [2]: This contains 548 10-second bird song recordings. Each recording contains 1-40 syllables; in total there are 4998 syllables. Each syllable is described by 38 features. The labels of each recording are the bird species that were singing during that 10-second period, and these species become the candidate label set of each syllable in the recording. 2) MSRCv2 dataset: This dataset contains 591 images with 23 classes. The ground-truth segmentations (regions with labels) are given. The labels of all segmentations in an image are treated as candidate labels for each segmentation. Each segmentation is described by 48-dimensional gradient and color histograms. 3) Lost dataset [5]: This dataset contains 1122 faces, and each face has a true label and a set of candidate labels. Each face is described by 108 PCA components. Since the bag information (i.e., which faces are in the same scene) is missing, SIM is not compared to our model on this dataset.

[Figure 4: Classification performance on synthetic data (red: LSB-CMM; blue: CLPL) on (a) segment, (b) pendigits, (c) usps. The dot-dash lines are for different q values (number of distractor labels), shown on the top x-axis. The dashed lines are for different epsilon (ambiguity degree) values, shown on the bottom x-axis.]

Table 1: Classification accuracies for superset label problems (standard deviations in parentheses)

             LSB-CMM         SIM             CLPL            SVM
  BirdSong   0.715 (0.042)   0.589 (0.035)   0.637 (0.034)   0.790 (0.027)
  MSRCv2     0.459 (0.032)   0.454 (0.043)   0.411 (0.044)   0.673 (0.043)
  Lost       0.703 (0.058)   --              0.710 (0.045)   0.817 (0.038)

We run 10-fold cross validation on these three datasets. The BirdSong and MSRCv2 datasets are split by recordings/images, and the Lost dataset is split by faces. The classification accuracies are shown in Table 1. The accuracies of the three superset label learning algorithms are compared using the paired t-test at the 95% confidence level; values statistically indistinguishable from the best performance were shown in bold in the original table. Our LSB-CMM model outperforms the other two methods on the BirdSong dataset, and its performance is comparable to SIM on the MSRCv2 dataset and to CLPL on the Lost dataset.
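For completeness, a small sketch of the paired t-test comparison behind Table 1; SciPy and the per-fold accuracy arrays (made-up numbers) are our illustrative assumptions.

```python
from scipy.stats import ttest_rel

# Per-fold accuracies for two methods over the same 10 CV folds (made-up numbers).
acc_lsb = [0.72, 0.70, 0.74, 0.69, 0.73, 0.71, 0.70, 0.72, 0.75, 0.68]
acc_clpl = [0.64, 0.62, 0.66, 0.61, 0.65, 0.63, 0.62, 0.64, 0.67, 0.60]
t, pval = ttest_rel(acc_lsb, acc_clpl)
print(pval < 0.05)  # True -> the difference is significant at the 95% level
```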
It should be noted that the input features are very coarse, which means that the cluster structure of the data is not well maintained. The relatively low performance of the SVM confirms this. If the instances were described more precisely by finer features, one would expect our model to perform better in those cases as well.

4 Conclusions

This paper introduced the Logistic Stick-Breaking Conditional Multinomial Model to address the superset label learning problem. The mixture representation allows LSB-CMM to discover cluster structure that has predictive power for the superset labels in the training data. Hence, if two labels co-occur, LSB-CMM is not forced to choose one of them to assign to the training example but instead can create a region that maps to both of them. Nonetheless, each region does predict from a multinomial, so the model still ultimately seeks to predict a single label. Our experiments show that the performance of the model is either better than or comparable to state-of-the-art methods.

Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant No. 1125228. The code, as an R package, is available at http://web.engr.oregonstate.edu/~liuli/files/LSB-CMM_1.0.tar.gz.

References

[1] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[2] F. Briggs, X. F. Fern, and R. Raich. Rank-loss support instance machines for MIML instance annotation. In Proc. KDD, 2012.
[3] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Trans. on Intelligent Systems and Technology, 2(3):1-27, 2011.
[4] T. Cour, B. Sapp, C. Jordan, and B. Taskar. Learning from ambiguously labeled images. In Proc. CVPR, 2009.
[5] T. Cour, B. Sapp, and B. Taskar. Learning from partial labels. Journal of Machine Learning Research, 12:1225-1261, 2011.
[6] A. Frank and A. Asuncion. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml
[7] Y. Grandvalet. Logistic regression for partial labels. In Proc. IPMU, 2002.
[8] E. Hullermeier and J. Beringer. Learning from ambiguously labeled examples. In Proc. IDA-05, 6th International Symposium on Intelligent Data Analysis, Madrid, 2005.
[9] L. Jie and F. Orabona. Learning from candidate labeling sets. In Proc. NIPS, 2010.
[10] R. Jin and Z. Ghahramani. Learning with multiple labels. In Proc. NIPS, 2002.
[11] L.-P. Liu and T. Dietterich. A conditional multinomial mixture model for superset label learning (supplementary materials). http://web.engr.oregonstate.edu/~liuli/pdf/lsb_cmm_supp.pdf
[12] N. Nguyen and R. Caruana. Classification with partial labels. In Proc. KDD, 2008.
[13] L. Ren, L. Du, L. Carin, and D. B. Dunson. Logistic stick-breaking process. Journal of Machine Learning Research, 12:203-239, 2011.
[14] Y. W. Teh. Dirichlet processes. In Encyclopedia of Machine Learning. Springer, to appear.
[15] Z.-H. Zhou and M.-L. Zhang. Multi-instance multi-label learning with application to scene classification. In Advances in Neural Information Processing Systems 19, 2007.
[16] X. Zhu and A. B. Goldberg. Introduction to Semi-Supervised Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1-130, 2009.
A Linear Time Active Learning Algorithm for Link Classification*

Nicolò Cesa-Bianchi — Dipartimento di Informatica, Università degli Studi di Milano, Italy
Claudio Gentile — Dipartimento di Scienze Teoriche ed Applicate, Università dell'Insubria, Italy
Fabio Vitale — Dipartimento di Informatica, Università degli Studi di Milano, Italy
Giovanni Zappella — Dipartimento di Matematica, Università degli Studi di Milano, Italy

* This work was supported in part by the PASCAL2 Network of Excellence under EC grant 216886 and by "Dote Ricerca", FSE, Regione Lombardia. This publication only reflects the authors' views.

Abstract

We present very efficient active learning algorithms for link classification in signed networks. Our algorithms are motivated by a stochastic model in which edge labels are obtained through perturbations of an initial sign assignment consistent with a two-clustering of the nodes. We provide a theoretical analysis within this model, showing that we can achieve an optimal (to within a constant factor) number of mistakes on any graph $G = (V, E)$ such that $|E| = \Omega(|V|^{3/2})$ by querying $O(|V|^{3/2})$ edge labels. More generally, we show an algorithm that achieves optimality to within a factor of $O(k)$ by querying at most order of $|V| + (|V|/k)^{3/2}$ edge labels. The running time of this algorithm is at most of order $|E| + |V| \log |V|$.

1 Introduction

A rapidly emerging theme in the analysis of networked data is the study of signed networks. From a mathematical point of view, signed networks are graphs whose edges carry a sign representing the positive or negative nature of the relationship between the incident nodes. For example, in a protein network two proteins may interact in an excitatory or inhibitory fashion. The domain of social networks and e-commerce offers several examples of signed relationships: Slashdot users can tag other users as friends or foes, Epinions users can rate other users positively or negatively, and eBay users develop trust and distrust towards sellers in the network. More generally, two individuals that are related because they rate similar products on a recommendation website may agree or disagree in their ratings.

The availability of signed networks has stimulated the design of link classification algorithms, especially in the domain of social networks. Early studies of signed social networks date from the Fifties. For example, [8] and [1] model dislike and distrust relationships among individuals as (signed) weighted edges in a graph. The conceptual underpinning is provided by the theory of social balance, formulated as a way to understand the structure of conflicts in a network of individuals whose mutual relationships can be classified as friendship or hostility [9]. The advent of online social networks has revamped the interest in these theories and spurred a significant amount of recent work; see, e.g., [7, 11, 14, 3, 5, 2] and references therein.

Many heuristics for link classification in social networks are based on a form of social balance summarized by the motto "the enemy of my enemy is my friend". This is equivalent to saying that the signs on the edges of a social graph tend to be consistent with some two-clustering of the nodes. By consistency we mean the following: the nodes of the graph can be partitioned into two sets (the two clusters) in such a way that edges connecting nodes from the same set are positive, and edges connecting nodes from different sets are negative.
Although two-clustering heuristics do not require strict consistency to work, this is admittedly a rather strong inductive bias. Despite that, social network theorists and practitioners have found this to be a reasonable bias in many social contexts, and recent experiments with online social networks reported good predictive power for algorithms based on the two-clustering assumption [11, 13, 14, 3]. Finally, this assumption is also fairly convenient from the viewpoint of algorithmic design. In the case of undirected signed graphs $G = (V, E)$, the best performing heuristics exploiting the two-clustering bias are based on spectral decompositions of the signed adjacency matrix. Noticeably, these heuristics run in time $\Theta(|V|^2)$, and often require a similar amount of memory storage even on sparse networks, which makes them impractical on large graphs.

In order to obtain scalable algorithms with formal performance guarantees, we focus on the active learning protocol, where training labels are obtained by querying a desired subset of edges. Since the allocation of queries can match the graph topology, a wide range of graph-theoretic techniques can be applied to the analysis of active learning algorithms. In the recent work [2], a simple stochastic model for generating edge labels by perturbing some unknown two-clustering of the graph nodes was introduced. For this model, the authors proved that querying the edges of a low-stretch spanning tree of the input graph $G = (V, E)$ is sufficient to predict the remaining edge labels making a number of mistakes within a factor of order $(\log |V|)^2 \log\log |V|$ of the theoretical optimum. The overall running time is $O(|E| \ln |V|)$. This result leaves two main problems open: first, low-stretch trees are a powerful structure, but the algorithm to construct them is not easy to implement; second, the tree-based analysis of [2] does not generalize to query budgets larger than $|V| - 1$ (the edge set size of a spanning tree).

In this paper we introduce a different active learning approach for link classification that can accommodate a large spectrum of query budgets. We show that on any graph with $\Omega(|V|^{3/2})$ edges, a query budget of $O(|V|^{3/2})$ is sufficient to predict the remaining edge labels within a constant factor of the optimum. More generally, we show that a budget of at most order of $|V| + (|V|/k)^{3/2}$ queries is sufficient to make a number of mistakes within a factor of $O(k)$ of the optimum, with a running time of order $|E| + (|V|/k) \log(|V|/k)$. Hence, a query budget of $\Theta(|V|)$, of the same order as the algorithm based on low-stretch trees, achieves an optimality factor $O(|V|^{1/3})$ with a running time of just $O(|E|)$. At the end of the paper we also report on a preliminary set of experiments on medium-sized synthetic and real-world datasets, where a simplified algorithm suggested by our theoretical findings is compared against the best performing spectral heuristics based on the same inductive bias. Our algorithm seems to perform similarly to or better than these heuristics.

2 Preliminaries and notation

We consider undirected and connected graphs $G = (V, E)$ with unknown edge labeling $Y_{i,j} \in \{-1, +1\}$ for each $(i,j) \in E$. Edge labels can collectively be represented by the associated signed adjacency matrix $Y$, where $Y_{i,j} = 0$ whenever $(i,j) \notin E$. In the sequel, the edge-labeled graph $G$ will be denoted by $(G, Y)$. We define a simple stochastic model for assigning binary labels $Y$ to the edges of $G$.
This is used as a basis and motivation for the design of our link classification strategies. As we mentioned in the introduction, a good trade-off between accuracy and efficiency in link classification is achieved by assuming that the labeling is well approximated by a two-clustering of the nodes. Hence, our stochastic labeling model assumes that edge labels are obtained by perturbing an underlying labeling which is initially consistent with an arbitrary (and unknown) two-clustering. More formally, given an undirected and connected graph $G = (V, E)$, the labels $Y_{i,j} \in \{-1, +1\}$, for $(i,j) \in E$, are assigned as follows. First, the nodes in $V$ are arbitrarily partitioned into two sets, and the labels $Y_{i,j}$ are initially assigned consistently with this partition (within-cluster edges are positive and between-cluster edges are negative). Note that consistency is equivalent to the following multiplicative rule: for any $(i,j) \in E$, the label $Y_{i,j}$ is equal to the product of the signs on the edges of any path connecting $i$ to $j$ in $G$. This is in turn equivalent to saying that any simple cycle within the graph contains an even number of negative edges. Then, given a nonnegative constant $p < \frac{1}{2}$, labels are randomly flipped in such a way that $\Pr(Y_{i,j} \text{ is flipped}) \le p$ for each $(i,j) \in E$. We call this a $p$-stochastic assignment. Note that this model allows for correlations between flipped labels.

A learning algorithm in the link classification setting receives a training set of signed edges and, out of this information, builds a prediction model for the labels of the remaining edges. It is quite easy to prove a lower bound on the number of mistakes that any learning algorithm makes in this model.

Fact 1. For any undirected graph $G = (V, E)$, any training set $E_0 \subseteq E$ of edges, and any learning algorithm $A$ that is given the labels of the edges in $E_0$, the number $M$ of mistakes made by $A$ on the remaining $E \setminus E_0$ edges satisfies $\mathbb{E}\, M \ge p\, |E \setminus E_0|$, where the expectation is with respect to a $p$-stochastic assignment of the labels $Y$.

Proof. Let $Y$ be the following randomized labeling: first, edge labels are set consistently with an arbitrary two-clustering of $V$. Then, a set of $2p|E|$ edges is selected uniformly at random and the labels of these edges are set randomly (i.e., flipped or not flipped with equal probability). Clearly, $\Pr(Y_{i,j} \text{ is flipped}) = p$ for each $(i,j) \in E$; hence this is a $p$-stochastic assignment of the labels. Moreover, $E \setminus E_0$ contains in expectation $2p\,|E \setminus E_0|$ randomly labeled edges, on which $A$ makes $p\,|E \setminus E_0|$ mistakes in expectation.

In this paper we focus on active learning algorithms. An active learner for link classification first constructs a query set $E_0$ of edges, and then receives the labels of all edges in the query set. Based on this training information, the learner builds a prediction model for the labels of the remaining edges $E \setminus E_0$. We assume that the only labels ever revealed to the learner are those in the query set. In particular, no labels are revealed during the prediction phase. It is clear from Fact 1 that any active learning algorithm that queries the labels of at most a constant fraction of the total number of edges will make on average $\Omega(p|E|)$ mistakes.

We often write $V_G$ and $E_G$ to denote, respectively, the node set and the edge set of some underlying graph $G$. For any two nodes $i, j \in V_G$, $\mathrm{Path}(i,j)$ is any path in $G$ having $i$ and $j$ as terminals, and $|\mathrm{Path}(i,j)|$ is its length (number of edges).
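A sketch of the $p$-stochastic assignment described above: sign the edges consistently with a random two-clustering, then flip each label independently with probability $p$. Independent flips are one valid instance of the model (which also allows correlated flips); the function and variable names are our own.

```python
import random

def p_stochastic_labels(nodes, edges, p, seed=0):
    rng = random.Random(seed)
    cluster = {v: rng.randint(0, 1) for v in nodes}     # arbitrary two-clustering
    labels = {}
    for (i, j) in edges:
        y = +1 if cluster[i] == cluster[j] else -1      # sign consistent with clustering
        if rng.random() < p:
            y = -y                                       # perturbation
        labels[(i, j)] = y
    return labels

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(p_stochastic_labels(range(4), edges, p=0.1))
```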
The diameter $D_G$ of a graph $G$ is the maximum over pairs $i, j \in V_G$ of the length of the shortest path between $i$ and $j$. Given a tree $T = (V_T, E_T)$ in $G$ and two nodes $i, j \in V_T$, we denote by $d_T(i,j)$ the distance between $i$ and $j$ within $T$, i.e., the length of the (unique) path $\mathrm{Path}_T(i,j)$ connecting the two nodes in $T$. Moreover, $\pi_T(i,j)$ denotes the parity of this path, i.e., the product of the edge signs along it. When $T$ is a rooted tree, we denote by $\mathrm{Children}_T(i)$ the set of children of $i$ in $T$. Finally, given two disjoint subtrees $T', T'' \subseteq G$ with $V_{T'} \cap V_{T''} = \emptyset$, we let $E_G(T', T'') \equiv \{(i,j) \in E_G : i \in V_{T'},\ j \in V_{T''}\}$.

3 Algorithms and their analysis

In this section we introduce and analyze a family of active learning algorithms for link classification. The analysis is carried out under the $p$-stochastic assumption. As a warm-up, we start off by recalling the connection to the theory of low-stretch spanning trees (e.g., [4]), which turns out to be useful in the important special case when the active learner is afforded to query only $|V| - 1$ labels. Let $E_{\mathrm{flip}} \subseteq E$ denote the (random) subset of edges whose labels have been flipped in a $p$-stochastic assignment, and consider the following class of active learning algorithms, parameterized by an arbitrary spanning tree $T = (V_T, E_T)$ of $G$. The algorithms in this class use $E_0 = E_T$ as the query set. The label of any test edge $e' = (i,j) \notin E_T$ is predicted as the parity $\pi_T(e')$. Clearly enough, if a test edge $e'$ is predicted wrongly, then either $e' \in E_{\mathrm{flip}}$ or $\mathrm{Path}_T(e')$ contains at least one flipped edge. Hence, the number of mistakes $M_T$ made by our active learner on the set of test edges $E \setminus E_T$ can be deterministically bounded by

$$M_T \le |E_{\mathrm{flip}}| + \sum_{e' \in E \setminus E_T} \sum_{e \in E} \mathbb{I}\{e \in \mathrm{Path}_T(e')\}\, \mathbb{I}\{e \in E_{\mathrm{flip}}\}, \qquad (1)$$

where $\mathbb{I}\{\cdot\}$ denotes the indicator of the Boolean predicate at argument. A quantity which can be related to $M_T$ is the average stretch of a spanning tree $T$, which, for our purposes, reduces to

$$\frac{1}{|E|} \Bigl[\, |V| - 1 + \sum_{e' \in E \setminus E_T} |\mathrm{Path}_T(e')| \,\Bigr].$$

A stunning result of [4] shows that every connected, undirected and unweighted graph has a spanning tree with an average stretch of just $O(\log^2 |V| \log\log |V|)$. If our active learner uses a spanning tree with the same low stretch, then the following result holds.

Theorem 1 ([2]). Let $(G, Y) = ((V, E), Y)$ be a labeled graph with $p$-stochastically assigned labels $Y$. If the active learner queries the edges of a spanning tree $T = (V_T, E_T)$ with average stretch $O(\log^2 |V| \log\log |V|)$, then $\mathbb{E}\, M_T \le p|E| \cdot O(\log^2 |V| \log\log |V|)$.

We call the quantity multiplying $p|E|$ in the upper bound the optimality factor of the algorithm. Recall that Fact 1 implies that this factor cannot be smaller than a constant when the query set size is a constant fraction of $|E|$. Although low-stretch trees can be constructed in time $O(|E| \ln |V|)$, the algorithms are fairly complicated (we are not aware of available implementations), and the constants hidden in the asymptotics can be high. Another disadvantage is that we are forced to use a query set of small and fixed size $|V| - 1$. In what follows we introduce algorithms that overcome both limitations.

A key aspect in the analysis of prediction performance is the ability to select a query set so that each test edge creates a short circuit with a training path. This is quantified by $\sum_{e \in E} \mathbb{I}\{e \in \mathrm{Path}_T(e')\}$ in (1). We make this explicit as follows.
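A sketch of the spanning-tree predictor behind Eq. (1): query the labels of a spanning tree, then predict each test edge $(i,j)$ by the parity $\pi_T(i,j)$, i.e., the product of queried signs along $\mathrm{Path}_T(i,j)$. Storing the parity of each node's path to the root makes every prediction $O(1)$, since $\pi_T(i,j) = \mathrm{sign}(i)\cdot\mathrm{sign}(j)$ (the shared prefix cancels); this implementation shortcut is ours.

```python
from collections import defaultdict

def tree_parity_predictor(tree_edges, tree_labels, root):
    adj = defaultdict(list)
    for (i, j) in tree_edges:
        y = tree_labels[(i, j)]
        adj[i].append((j, y))
        adj[j].append((i, y))
    sign = {root: +1}
    stack = [root]
    while stack:                            # propagate root-to-node parities
        u = stack.pop()
        for v, y in adj[u]:
            if v not in sign:
                sign[v] = sign[u] * y
                stack.append(v)
    return lambda i, j: sign[i] * sign[j]   # pi_T(i, j)

# Toy usage on the path 0-1-2 with test edge (0, 2).
predict = tree_parity_predictor([(0, 1), (1, 2)],
                                {(0, 1): +1, (1, 2): -1}, root=0)
print(predict(0, 2))   # -1: the circuit 0-1-2-0 closes with parity -1
```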
Although low-stretch trees can be constructed in time O(|E| ln |V|), the algorithms are fairly complicated (we are not aware of available implementations), and the constants hidden in the asymptotics can be high. Another disadvantage is that we are forced to use a query set of small and fixed size |V| − 1. In what follows we introduce algorithms that overcome both limitations.

A key aspect in the analysis of prediction performance is the ability to select a query set so that each test edge creates a short circuit with a training path. This is quantified by Σ_{e ∈ E} I{e ∈ Path_T(e′)} in (1). We make this explicit as follows. Given a test edge (i, j) and a path Path(i, j) whose edges are queried edges, we say that we are predicting label Y_{i,j} using path Path(i, j). Since (i, j) closes Path(i, j) into a circuit, in this case we also say that (i, j) is predicted using the circuit.

Fact 2. Let (G, Y) = ((V, E), Y) be a labeled graph with p-stochastic assigned labels Y. Given query set E₀ ⊆ E, the number M of mistakes made when predicting test edges (i, j) ∈ E \ E₀ using training paths Path(i, j) whose length is uniformly bounded by ℓ satisfies 𝔼M ≤ ℓ p |E \ E₀|.

Proof. We have the chain of inequalities

    𝔼M ≤ Σ_{(i,j) ∈ E\E₀} [1 − (1 − p)^{|Path(i,j)|}] ≤ Σ_{(i,j) ∈ E\E₀} [1 − (1 − p)^ℓ] ≤ Σ_{(i,j) ∈ E\E₀} ℓ p = ℓ p |E \ E₀|.

For instance, if the input graph G = (V, E) has diameter D_G and the queried edges are those of a breadth-first spanning tree, which can be generated in O(|E|) time, then the above fact holds with |E₀| = |V| − 1 and ℓ = 2 D_G. Comparing to Fact 1 shows that this simple breadth-first strategy is optimal up to constant factors whenever G has a constant diameter. This simple observation is especially relevant in the light of the typical graph topologies encountered in practice, whose diameters are often small. This argument is at the basis of our experimental comparison (see Section 4). Yet, this mistake bound can be vacuous on graphs having a larger diameter. Hence, one may think of adding to the training spanning tree new edges so as to reduce the length of the circuits used for prediction, at the cost of increasing the size of the query set. A similar technique based on short circuits has been used in [2], the goal there being to solve the link classification problem in a harder adversarial environment. The precise trade-off between prediction accuracy (as measured by the expected number of mistakes) and fraction of queried edges is the main theoretical concern of this paper.

We now introduce an intermediate (and simpler) algorithm, called treeCutter, which improves on the optimality factor when the diameter D_G is not small. In particular, we demonstrate that treeCutter achieves a good upper bound on the number of mistakes on any graph such that |E| ≥ 3|V| + √|V|. This algorithm is especially effective when the input graph is dense, with an optimality factor between O(1) and O(√|V|). Moreover, the total time for predicting the test edges scales linearly with the number of such edges, i.e., treeCutter predicts edges in constant amortized time. Also, the space is linear in the size of the input graph. The algorithm (pseudocode given in Figure 1) is parameterized by a positive integer k ranging from 2 to |V|. The actual setting of k depends on the graph topology and the desired fraction of query set edges, and plays a crucial role in determining the prediction performance. Setting k ≥ D_G makes treeCutter reduce to querying only the edges of a breadth-first spanning tree of G; otherwise it operates in a more involved way by splitting G into smaller node-disjoint subtrees.

In a preliminary step (Line 1 in Figure 1), treeCutter draws an arbitrary breadth-first spanning tree T = (V_T, E_T). Then subroutine extractTreelet(T, k) is used in a do-while loop to split T into vertex-disjoint subtrees T′ whose height is k (one of them might have a smaller height). extractTreelet(T, k) is a very simple procedure that performs a depth-first visit of the tree T at argument. During this visit, each internal node may be visited several times (during backtracking steps).
We assign each node i a tag h_T(i) representing the height of the subtree of T rooted at i; h_T(i) can be recursively computed during the visit. After this assignment, if we have h_T(i) = k (or i is the root of T), we return the subtree T_i of T rooted at i. Then treeCutter removes (Line 6) T_i from T, along with all edges of E_T which are incident to nodes of T_i, and iterates until V_T gets empty. By construction, the diameter of the generated subtrees will not be larger than 2k. Let 𝒯 denote the set of these subtrees. For each T′ ∈ 𝒯, the algorithm queries all the labels of E_{T′}; each edge (i, j) ∈ E_G \ E_{T′} such that i, j ∈ V_{T′} is set to be a test edge, and label Y_{i,j} is predicted using Path_{T′}(i, j) (note that this coincides with Path_T(i, j), since T′ ⊆ T), that is, Ŷ_{i,j} = π_{T′}(i, j). Finally, for each pair of distinct subtrees T′, T″ ∈ 𝒯 such that there exists a node of V_{T′} adjacent to a node of V_{T″}, i.e., such that E_G(T′, T″) is not empty, we query the label of an arbitrarily selected edge (i′, i″) ∈ E_G(T′, T″) (Lines 8 and 9 in Figure 1). Each edge (u, v) ∈ E_G(T′, T″) whose label has not been previously queried is then part of the test set, and its label will be predicted as Ŷ_{u,v} ← π_{T′}(u, i′) · Y_{i′,i″} · π_{T″}(i″, v) (Line 11), that is, using the path obtained by concatenating Path_{T′}(u, i′) to edge (i′, i″) to Path_{T″}(i″, v).

treeCutter(k)
Parameter: k ≥ 2.
Initialization: 𝒯 ← ∅.
 1. Draw an arbitrary breadth-first spanning tree T of G
 2. Do
 3.   T′ ← extractTreelet(T, k), and query all labels in E_{T′}
 4.   𝒯 ← 𝒯 ∪ {T′}
 5.   For each i, j ∈ V_{T′}, predict Ŷ_{i,j} ← π_{T′}(i, j)
 6.   T ← T \ T′
 7. While (V_T ≠ ∅)
 8. For each T′, T″ ∈ 𝒯 : T′ ≠ T″
 9.   If E_G(T′, T″) ≠ ∅, query the label of an arbitrary edge (i′, i″) ∈ E_G(T′, T″)
10.   For each (u, v) ∈ E_G(T′, T″) \ {(i′, i″)}, with u, i′ ∈ V_{T′} and v, i″ ∈ V_{T″},
11.     predict Ŷ_{u,v} ← π_{T′}(u, i′) · Y_{i′,i″} · π_{T″}(i″, v)

Figure 1: treeCutter pseudocode.

extractTreelet(T, k)
Parameters: tree T, k ≥ 2.
 1. Perform a depth-first visit of T starting from the root.
 2. During the visit,
 3.   For each i ∈ V_T visited for the (1 + |Children_T(i)|)-th time (i.e., the last visit of i):
 4.     If i is a leaf, set h_T(i) ← 0
 5.     Else set h_T(i) ← 1 + max{h_T(j) : j ∈ Children_T(i)}
 6.     If h_T(i) = k or i is T's root, return the subtree rooted at i

Figure 2: extractTreelet pseudocode.

The following theorem¹ quantifies the number of mistakes made by treeCutter. The requirement on the graph density in the statement, i.e., |V| − 1 + |V|²/(2k²) + |V|/(2k) ≤ |E|/2, implies that the test set is not larger than the query set. This is a plausible assumption in active learning scenarios, and a way of adding meaning to the bounds.

Theorem 2. For any integer k ≥ 2, the number M of mistakes made by treeCutter on any graph G = (V, E) with |E| ≥ 2|V| − 2 + |V|²/k² + |V|/k satisfies 𝔼M ≤ min{4k + 1, 2D_G} p|E|, while the query set size is bounded by |V| − 1 + |V|²/(2k²) + |V|/(2k) ≤ |E|/2.

¹ Due to space limitations, long proofs are presented in the supplementary material.
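As a complement to the pseudocode in Figure 2, the following is a small runnable sketch of the splitting step performed by extractTreelet: a post-order traversal tags each node with the height of its residual subtree and cuts off a treelet as soon as that height reaches k. The dict-of-children tree representation and the function name are our own illustrative choices.

def split_into_treelets(children, root, k):
    treelets = []                 # roots of the extracted treelets
    def visit(i):
        hs = [visit(j) for j in children.get(i, [])]
        hs = [h for h in hs if h is not None]   # ignore detached subtrees
        h = 1 + max(hs) if hs else 0            # height of residual subtree
        if h == k or i == root:
            treelets.append(i)
            return None           # detach: parent no longer sees this subtree
        return h
    visit(root)
    return treelets

# toy example: a path of 7 nodes rooted at 0, split with k = 2
children = {i: [i + 1] for i in range(6)}
print(split_into_treelets(children, 0, k=2))   # prints [4, 1, 0]

On the toy path, the treelets rooted at 4 and 1 each have height 2, and the root is left alone; as in the analysis, every extracted treelet has diameter at most 2k.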
We now refine the simple argument leading to treeCutter, and present our active link classifier. The pseudocode of our refined algorithm, called starMaker, follows that of Figure 1 with the following differences: Line 1 is dropped (i.e., starMaker does not draw an initial spanning tree), and the call to extractTreelet in Line 3 is replaced by a call to extractStar. This new subroutine just selects the star T′ centered on the node of G having largest degree, and queries all labels of the edges in E_{T′}. The next result shows that this algorithm gets a constant optimality factor while using a query set of size O(|V|^{3/2}).

Theorem 3. The number M of mistakes made by starMaker on any given graph G = (V, E) with |E| ≥ 2|V| − 2 + 2|V|^{3/2} satisfies 𝔼M ≤ 5p|E|, while the query set size is upper bounded by |V| − 1 + |V|^{3/2} ≤ |E|/2.

Finally, we combine starMaker with treeCutter so as to obtain an algorithm, called treeletStar, that can work with query sets smaller than |V| − 1 + |V|^{3/2} labels. treeletStar is parameterized by an integer k and follows Lines 1-6 of Figure 1, creating a set 𝒯 of trees through repeated calls to extractTreelet. Lines 7-11 are instead replaced by the following procedure: a graph G′ = (V_{G′}, E_{G′}) is created such that (1) each node in V_{G′} corresponds to a tree in 𝒯, and (2) there exists an edge in E_{G′} if and only if the two corresponding trees of 𝒯 are connected by at least one edge of E_G. Then, extractStar is used to generate a set 𝒮 of stars of vertices of G′, i.e., stars of trees of 𝒯. Finally, for each pair of distinct stars S′, S″ ∈ 𝒮 connected by at least one edge in E_G, the label of an arbitrary edge in E_G(S′, S″) is queried. The remaining edges are all predicted.

Theorem 4. For any integer k ≥ 2 and for any graph G = (V, E) with |E| ≥ 2|V| − 2 + 2(|V|/(k−1) + 1)^{3/2}, the number M of mistakes made by treeletStar(k) on G satisfies 𝔼M = O(min{k, D_G}) p|E|, while the query set size is bounded by |V| − 1 + (|V|/(k−1) + 1)^{3/2} ≤ |E|/2.

Hence, even if D_G is large, setting k = |V|^{1/3} yields a O(|V|^{1/3}) optimality factor just by querying O(|V|) edges. On the other hand, a truly constant optimality factor is obtained by querying as few as O(|V|^{3/2}) edges (provided the graph has sufficiently many edges). As a direct consequence (and surprisingly enough), on graphs which are only moderately dense we need not observe too many edges in order to achieve a constant optimality factor.

It is instructive to compare the bounds obtained by treeletStar to the ones we can achieve by using the cccc algorithm of [2], or the low-stretch spanning trees given in Theorem 1. Because cccc operates within a harder adversarial setting, it is easy to show that Theorem 9 in [2] extends to the p-stochastic assignment model by replacing Δ₂(Y) with p|E| therein.² The resulting optimality factor is of order ((1−α)/α)^{2/3} √|V|, where α ∈ (0, 1] is the fraction of queried edges out of the total number of edges. A quick comparison to Theorem 4 reveals that treeletStar achieves a sharper mistake bound for any value of α. For instance, in order to obtain an optimality factor which is lower than √|V|, cccc has to query in the worst case a fraction of edges that goes to one as |V| → ∞. On top of this, our algorithms are faster and easier to implement (see Section 3.1).

Next, we compare to query sets produced by low-stretch spanning trees. A low-stretch spanning tree achieves a polylogarithmic optimality factor by querying |V| − 1 edge labels. The results in [4] show that we cannot hope to get a better optimality factor using a single low-stretch spanning tree combined with the analysis in (1). For a comparable amount Θ(|V|) of queried labels, Theorem 4 offers the larger optimality factor |V|^{1/3}. However, we can get a constant optimality factor by increasing the query set size to O(|V|^{3/2}). It is not clear how multiple low-stretch trees could be combined to get a similar scaling.
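Before turning to running times, a brief sketch of the contraction step at the heart of treeletStar may help: each treelet of 𝒯 becomes a super-node, adjacent super-node pairs are found by scanning E_G once, and one arbitrary edge per pair is queried (as in Lines 8-9 of Figure 1). All names here, including the tree_of map assigning each node to its treelet, are hypothetical.

from collections import defaultdict

def contract(edges, tree_of):
    # (treelet, treelet) -> list of edges of G joining the two treelets
    super_edges = defaultdict(list)
    for u, v in edges:
        tu, tv = tree_of[u], tree_of[v]
        if tu != tv:
            key = (min(tu, tv), max(tu, tv))
            super_edges[key].append((u, v))
    return super_edges

edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
tree_of = {0: 0, 1: 0, 2: 1, 3: 1}
for pair, bucket in contract(edges, tree_of).items():
    print(pair, "query:", bucket[0], "test:", bucket[1:])

For each adjacent pair, the first edge of the bucket plays the role of (i′, i″), and the remaining edges are test edges predicted through the corresponding circuits.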
3.1 Complexity analysis and implementation

We now compute bounds on time and space requirements for our three algorithms. Recall the different lower-bound conditions on the graph density that must hold to ensure that the query set size is not larger than the test set size. These were |E| ≥ 2|V| − 2 + |V|²/k² + |V|/k for treeCutter(k) in Theorem 2, |E| ≥ 2|V| − 2 + 2|V|^{3/2} for starMaker in Theorem 3, and |E| ≥ 2|V| − 2 + 2(|V|/(k−1) + 1)^{3/2} for treeletStar(k) in Theorem 4.

² This theoretical comparison is admittedly unfair, as cccc has been designed to work in a harder setting than p-stochastic. Unfortunately, we are not aware of any other general active learning scheme for link classification to compare with.

Theorem 5. For any input graph G = (V, E) which is dense enough to ensure that the query set size is no larger than the test set size, the total time needed for predicting all test labels is:
- O(|E|) for treeCutter(k), for all k;
- O(|E| + |V| log |V|) for starMaker;
- O(|E| + (|V|/k) log(|V|/k)) for treeletStar(k), for all k.
In particular, whenever k|E| = Ω(|V| log |V|) we have that treeletStar(k) works in constant amortized time. For all three algorithms, the space required is always linear in the input graph size |E|.

4 Experiments

In this preliminary set of experiments we only tested the predictive performance of treeCutter(|V|). This corresponds to querying only the edges of the initial spanning tree T and predicting all remaining edges (i, j) via the parity of Path_T(i, j). The spanning tree T used by treeCutter is a shortest-path spanning tree generated by a breadth-first visit of the graph (assuming all edges have unit length). As the choice of the starting node in the visit is arbitrary, we picked the highest-degree node in the graph. Finally, we run through the adjacency list of each node in random order, which we empirically observed to improve performance.

Our baseline is the heuristic ASymExp from [11] which, among the many spectral heuristics proposed there, turned out to perform best on all our datasets. With integer parameter z, ASymExp(z) predicts using a spectral transformation of the training sign matrix Y_train, whose only non-zero entries are the signs of the training edges. The label of edge (i, j) is predicted using the sign of [exp(Y_train(z))]_{i,j}. Here exp(Y_train(z)) = U_z exp(D_z) U_z^⊤, where U_z D_z U_z^⊤ is the spectral decomposition of Y_train containing only the z largest eigenvalues and their corresponding eigenvectors. Following [11], we ran ASymExp(z) with the values z = 1, 5, 10, 15. This heuristic uses the two-clustering bias as follows: expand exp(Y_train) in a series of powers Y_train^n. Then each (Y_train^n)_{i,j} is a sum of values of paths of length n between i and j. Each path has value 0 if it contains at least one test edge; otherwise its value equals the product of queried labels on the path edges. Hence, the sign of exp(Y_train) is the sign of a linear combination of path values, each corresponding to a prediction consistent with the two-clustering bias (compare this to the multiplicative rule used by treeCutter). Note that ASymExp and the other spectral heuristics from [11] all have running times of order Θ(|V|²).
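The spectral baseline is easy to state in code. The sketch below instantiates ASymExp(z) as described above, keeping the z largest eigenvalues of the symmetric training sign matrix; the toy matrix and the unobserved edge are illustrative, not data from the experiments.

import numpy as np

def asymexp_predict(Y_train, z, test_edges):
    w, U = np.linalg.eigh(Y_train)            # eigenvalues in ascending order
    idx = np.argsort(w)[-z:]                  # keep the z largest eigenvalues
    Uz, wz = U[:, idx], w[idx]
    E = Uz @ np.diag(np.exp(wz)) @ Uz.T       # exp(Y_train(z)) = U_z exp(D_z) U_z^T
    return {(i, j): int(np.sign(E[i, j])) for i, j in test_edges}

# toy 4-node example with one unobserved edge (2, 3)
Y = np.zeros((4, 4))
for i, j, y in [(0, 1, 1), (0, 2, -1), (1, 3, -1), (0, 3, -1)]:
    Y[i, j] = Y[j, i] = y
print(asymexp_predict(Y, z=2, test_edges=[(2, 3)]))

The eigendecomposition dominates the cost, which is consistent with the Θ(|V|²)-or-worse running times noted above.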
We performed a first set of experiments on synthetic signed graphs created from a subset of the USPS digit recognition dataset. We randomly selected 500 examples labeled "1" and 500 examples labeled "7" (these two classes are not straightforward to tell apart). Then, we created a graph using a k-NN rule with k = 100. The edges were labeled as follows: all edges incident to nodes with the same USPS label were labeled +1; all edges incident to nodes with different USPS labels were labeled −1. Finally, we randomly pruned the positive edges so as to achieve an unbalance of about 20% between the two classes.³ Starting from this edge label assignment, which is consistent with the two-clustering associated with the USPS labels, we generated a p-stochastic label assignment by flipping the labels of a random subset of the edges. Specifically, we used the three following synthetic datasets: DELTA0: no flippings (p = 0), 1,000 nodes and 9,138 edges; DELTA100: 100 randomly chosen labels of DELTA0 are flipped; DELTA250: 250 randomly chosen labels of DELTA0 are flipped.

We also used three real-world datasets:

MOVIELENS: A signed graph we created using Movielens ratings.⁴ We first normalized the ratings by subtracting from each user rating the average rating of that user. Then, we created a user-user matrix of cosine distance similarities. This matrix was sparsified by zeroing each entry smaller than 0.1 and removing all self-loops. Finally, we took the sign of each non-zero entry. The resulting graph has 6,040 nodes and 824,818 edges (12.6% of which are negative).

SLASHDOT: The biggest strongly connected component of a snapshot of the Slashdot social network,⁵ similar to the one used in [11]. This graph has 26,996 nodes and 290,509 edges (24.7% of which are negative).

EPINIONS: The biggest strongly connected component of a snapshot of the Epinions signed network,⁶ similar to the one used in [13, 12]. This graph has 41,441 nodes and 565,900 edges (26.2% of which are negative).

Slashdot and Epinions are originally directed graphs. We removed the reciprocal edges with mismatching labels (which turned out to be only a few), and considered the remaining edges as undirected.

³ This is similar to the class unbalance of real-world signed networks (see below).
⁴ www.grouplens.org/system/files/ml-1m.zip
⁵ snap.stanford.edu/data/soc-sign-Slashdot081106.html
⁶ snap.stanford.edu/data/soc-sign-epinions.html

The following table summarizes the key statistics of each dataset: Neg. is the fraction of negative edges, |V|/|E| is the fraction of edges queried by treeCutter(|V|), and Avgdeg is the average degree of the nodes of the network.

Dataset     |V|     |E|      Neg.    |V|/|E|   Avgdeg
DELTA0      1,000   9,138    21.9%   10.9%     18.2
DELTA100    1,000   9,138    22.7%   10.9%     18.2
DELTA250    1,000   9,138    23.5%   10.9%     18.2
SLASHDOT    26,996  290,509  24.7%   9.2%      21.6
EPINIONS    41,441  565,900  26.2%   7.3%      27.4
MOVIELENS   6,040   824,818  12.6%   0.7%      273.2

[Figure 3 here: six panels (DELTA0, DELTA100, DELTA250, MOVIELENS, SLASHDOT, EPINIONS) plotting F-measure (%) against training set size (%) for ASymExp(z) with z = 1, 5, 10, 15 and for treeCutter.]

Figure 3: F-measure against training set size for treeCutter(|V|) and ASymExp(z) with different values of z on both synthetic and real-world datasets. By construction, treeCutter never makes a mistake when the labeling is consistent with a two-clustering. So on DELTA0 treeCutter does not make mistakes whenever the training set contains at least one spanning tree. With the exception of EPINIONS, treeCutter outperforms ASymExp using a much smaller training set. We conjecture that ASymExp responds to the bias not as well as treeCutter, which on the other hand is less robust than ASymExp to bias violations (supposedly, the labeling of EPINIONS).
Our results are summarized in Figure 3, where we plot F-measure (preferable to accuracy due to the class unbalance) against the fraction of training (or query) set size. On all datasets but MOVIELENS, the training set size for ASymExp ranges across the values 5%, 10%, 25%, and 50%. Since MOVIELENS has a higher density, we decided to reduce those fractions to 1%, 3%, 5%, and 10%. treeCutter(|V|) uses a single spanning tree, and thus we only have a single query set size value. All results are averaged over ten runs of the algorithms. The randomness in ASymExp is due to the random draw of the training set. The randomness in treeCutter(|V|) is caused by the randomized breadth-first visit.

References

[1] Cartwright, D. and Harary, F. Structural balance: A generalization of Heider's theory. Psychological Review, 63(5):277–293, 1956.
[2] Cesa-Bianchi, N., Gentile, C., Vitale, F., and Zappella, G. A correlation clustering approach to link classification in signed networks. In Proceedings of the 25th Conference on Learning Theory (COLT 2012). To appear, 2012.
[3] Chiang, K., Natarajan, N., Tewari, A., and Dhillon, I. Exploiting longer cycles for link prediction in signed networks. In Proceedings of the 20th ACM Conference on Information and Knowledge Management (CIKM). ACM, 2011.
[4] Elkin, M., Emek, Y., Spielman, D.A., and Teng, S.-H. Lower-stretch spanning trees. SIAM Journal on Computing, 38(2):608–628, 2010.
[5] Facchetti, G., Iacono, G., and Altafini, C. Computing global structural balance in large-scale signed social networks. PNAS, 2011.
[6] Giotis, I. and Guruswami, V. Correlation clustering with a fixed number of clusters. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1167–1176. ACM, 2006.
[7] Guha, R., Kumar, R., Raghavan, P., and Tomkins, A. Propagation of trust and distrust. In Proceedings of the 13th International Conference on World Wide Web, pp. 403–412. ACM, 2004.
[8] Harary, F. On the notion of balance of a signed graph. Michigan Mathematical Journal, 2(2):143–146, 1953.
[9] Heider, F. Attitudes and cognitive organization. Journal of Psychology, 21:107–122, 1946.
[10] Hou, Y.P. Bounds for the least Laplacian eigenvalue of a signed graph. Acta Mathematica Sinica, 21(4):955–960, 2005.
[11] Kunegis, J., Lommatzsch, A., and Bauckhage, C. The Slashdot Zoo: Mining a social network with negative edges. In Proceedings of the 18th International Conference on World Wide Web, pp. 741–750. ACM, 2009.
[12] Leskovec, J., Huttenlocher, D., and Kleinberg, J. Trust-aware bootstrapping of recommender systems. In Proceedings of the ECAI 2006 Workshop on Recommender Systems, pp. 29–33. ECAI, 2006.
[13] Leskovec, J., Huttenlocher, D., and Kleinberg, J. Signed networks in social media. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, pp. 1361–1370. ACM, 2010.
[14] Leskovec, J., Huttenlocher, D., and Kleinberg, J. Predicting positive and negative links in online social networks. In Proceedings of the 19th International Conference on World Wide Web, pp. 641–650. ACM, 2010.
[15] Von Luxburg, U. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
On-line Reinforcement Learning Using Incremental Kernel-Based Stochastic Factorization

André M. S. Barreto, School of Computer Science, McGill University, Montreal, Canada, amsb@cs.mcgill.ca
Doina Precup, School of Computer Science, McGill University, Montreal, Canada, dprecup@cs.mcgill.ca
Joelle Pineau, School of Computer Science, McGill University, Montreal, Canada, jpineau@cs.mcgill.ca

Abstract

Kernel-based stochastic factorization (KBSF) is an algorithm for solving reinforcement learning tasks with continuous state spaces which builds a Markov decision process (MDP) based on a set of sample transitions. What sets KBSF apart from other kernel-based approaches is the fact that the size of its MDP is independent of the number of transitions, which makes it possible to control the trade-off between the quality of the resulting approximation and the associated computational cost. However, KBSF's memory usage grows linearly with the number of transitions, precluding its application in scenarios where a large amount of data must be processed. In this paper we show that it is possible to construct KBSF's MDP in a fully incremental way, thus freeing the space complexity of this algorithm from its dependence on the number of sample transitions. The incremental version of KBSF is able to process an arbitrary amount of data, which results in a model-based reinforcement learning algorithm that can be used to solve continuous MDPs in both off-line and on-line regimes. We present theoretical results showing that KBSF can approximate the value function that would be computed by conventional kernel-based learning with arbitrary precision. We empirically demonstrate the effectiveness of the proposed algorithm in the challenging three-pole balancing task, in which the ability to process a large number of transitions is crucial for success.

1 Introduction

The task of learning a policy for a sequential decision problem with continuous state space is a long-standing challenge that has attracted the attention of the reinforcement learning community for years. Among the many approaches that have been proposed to solve this problem, kernel-based reinforcement learning (KBRL) stands out for its good theoretical guarantees [1, 2]. KBRL solves a continuous state-space Markov decision process (MDP) using a finite model constructed based on sample transitions only. By casting the problem as a non-parametric approximation, it provides a statistically consistent way of approximating an MDP's value function. Moreover, since it comes down to the solution of a finite model, KBRL always converges to a unique solution. Unfortunately, the good theoretical properties of kernel-based learning come at a price: since the model constructed by KBRL grows with the amount of sample transitions, the number of operations performed by this algorithm quickly becomes prohibitively large as more data become available. Such a computational burden severely limits the applicability of KBRL to real reinforcement learning (RL) problems.

Realizing that, many researchers have proposed ways of turning KBRL into a more practical tool [3, 4, 5]. In this paper we focus on our own approach to leverage KBRL, an algorithm called kernel-based stochastic factorization (KBSF) [4]. KBSF uses KBRL's kernel-based strategy to perform a soft aggregation of the states of its MDP. By doing so, our algorithm is able to summarize the information contained in KBRL's model in an MDP whose size is independent of the number of sample transitions.
KBSF enjoys good theoretical guarantees and has shown excellent performance on several tasks [4]. The main limitation of the algorithm is the fact that, in order to construct its model, it uses an amount of memory that grows linearly with the number of sample transitions. Although this is a significant improvement over KBRL, it still hinders the application of KBSF in scenarios in which a large amount of data must be processed, such as in complex domains or in on-line reinforcement learning.

In this paper we show that it is possible to construct KBSF's MDP in a fully incremental way, thus freeing the space complexity of this algorithm from its dependence on the number of sample transitions. In order to distinguish it from its original, batch counterpart, we call this new version of our algorithm incremental KBSF, or iKBSF for short. As will be seen, iKBSF is able to process an arbitrary number of sample transitions. This results in a model-based RL algorithm that can be used to solve continuous MDPs in both off-line and on-line regimes.

A second important contribution of this paper is a theoretical analysis showing that it is possible to control the error in the value-function approximation performed by KBSF. In our previous experiments with KBSF, we defined the model used by this algorithm by clustering the sample transitions and then using the clusters' centers as the representative states in the reduced MDP [4]. However, we did not provide a theoretical justification for such a strategy. In this paper we fill this gap by showing that we can approximate KBRL's value function at any desired level of accuracy by minimizing the distance from a sampled state to the nearest representative state. Besides its theoretical interest, the bound is also relevant from a practical point of view, since it can be used in iKBSF to guide the on-line selection of representative states.

Finally, a third contribution of this paper is an empirical demonstration of the performance of iKBSF in a new, challenging control problem: the triple pole-balancing task, an extension of the well-known double pole-balancing domain. Here, iKBSF's ability to process a large number of transitions is crucial for achieving a high success rate, which cannot be easily replicated with batch methods.

2 Background

In reinforcement learning, an agent interacts with an environment in order to find a policy that maximizes the discounted sum of rewards [6]. As usual, we assume that such an interaction can be modeled as a Markov decision process (MDP, [7]). An MDP is a tuple M ≡ (S, A, P^a, r^a, γ), where S is the state space and A is the (finite) action set. In this paper we are mostly concerned with MDPs with continuous state spaces, but our strategy will be to approximate such models as finite MDPs. In a finite MDP the matrix P^a ∈ ℝ^{|S|×|S|} gives the transition probabilities associated with action a ∈ A and the vector r^a ∈ ℝ^{|S|} stores the corresponding expected rewards. The discount factor γ ∈ [0, 1) is used to give smaller weights to rewards received further in the future.

Consider an MDP M with continuous state space S ⊂ [0, 1]^d. Kernel-based reinforcement learning (KBRL) uses sample transitions to derive a finite MDP that approximates the continuous model [1, 2]. Let S^a = {(s_k^a, r_k^a, ŝ_k^a) | k = 1, 2, ..., n_a} be sample transitions associated with action a ∈ A, where s_k^a, ŝ_k^a ∈ S and r_k^a ∈ ℝ. Let φ : ℝ⁺ → ℝ⁺ be a Lipschitz continuous function and let k_τ(s, s′) be a kernel function defined as k_τ(s, s′) = φ(‖s − s′‖/τ), where ‖·‖ is a norm in ℝ^d and τ > 0.
Finally, define the normalized kernel function associated with action a as κ_τ^a(s, s_i^a) = k_τ(s, s_i^a) / Σ_{j=1}^{n_a} k_τ(s, s_j^a). The model constructed by KBRL has the following transition and reward functions:

    P̂^a(s′|s) = κ_τ^a(s, s_i^a) if s′ = ŝ_i^a, and 0 otherwise;    R̂^a(s, s′) = r_i^a if s′ = ŝ_i^a, and 0 otherwise.    (1)

Since only transitions ending in the states ŝ_i^a have a non-zero probability of occurrence, one can define a finite MDP M̂ composed solely of these n = Σ_a n_a states [2, 3]. After V̂*, the optimal value function of M̂, has been computed, the value of any state-action pair can be determined as Q(s, a) = Σ_{i=1}^{n_a} κ_τ^a(s, s_i^a) [r_i^a + γ V̂*(ŝ_i^a)], where s ∈ S and a ∈ A. Ormoneit and Sen [1] proved that, if n_a → ∞ for all a ∈ A and the widths of the kernels τ shrink at an "admissible" rate, the probability of choosing a suboptimal action based on Q(s, a) converges to zero.

Using dynamic programming, one can compute the optimal value function of M̂, but the time and space required to do so grow fast with the number of states n [7, 8]. Therefore, the use of KBRL leads to a dilemma: on the one hand, one wants to use as many transitions as possible to capture the dynamics of M, but on the other hand one wants to have an MDP M̂ of manageable size.

Kernel-based stochastic factorization (KBSF) provides a practical way of weighing these two conflicting objectives [4]. Our algorithm compresses the information contained in KBRL's model M̂ in an MDP M̄ whose size is independent of the number of transitions n. The fundamental idea behind KBSF is the "stochastic-factorization trick", which we now summarize. Let P ∈ ℝ^{n×n} be a transition-probability matrix and let P = DK be a factorization in which D ∈ ℝ^{n×m} and K ∈ ℝ^{m×n} are stochastic matrices. Then, swapping the factors D and K yields another transition matrix P̄ = KD that retains the basic topology of P, that is, the number of recurrent classes and their respective reducibilities and periodicities [9]. The insight is that, in some cases, one can work with P̄ instead of P; when m ≪ n, this replacement affects significantly the memory usage and computing time.

KBSF results from the application of the stochastic-factorization trick to M̂. Let S̄ ≡ {s̄₁, s̄₂, ..., s̄_m} be a set of representative states in S. KBSF computes matrices Ḋ^a ∈ ℝ^{n_a×m} and K̇^a ∈ ℝ^{m×n_a} with elements ḋ_ij^a = κ̄_τ̄(ŝ_i^a, s̄_j) and k̇_ij^a = κ_τ^a(s̄_i, s_j^a), where κ̄_τ̄ is defined as κ̄_τ̄(s, s̄_i) = k_τ̄(s, s̄_i) / Σ_{j=1}^{m} k_τ̄(s, s̄_j). The basic idea of the algorithm is to replace the MDP M̂ with M̄ ≡ (S̄, A, P̄^a, r̄^a, γ), where P̄^a = K̇^a Ḋ^a and r̄^a = K̇^a r^a (r^a ∈ ℝ^{n_a} is the vector composed of sample rewards r_i^a). Thus, instead of solving an MDP with n states, one solves a model with m states only. Let Ḋ ≡ [(Ḋ¹)ᵀ (Ḋ²)ᵀ ... (Ḋ^{|A|})ᵀ]ᵀ ∈ ℝ^{n×m} and let K̇ ≡ [K̇¹ K̇² ... K̇^{|A|}] ∈ ℝ^{m×n}. Based on Q̄* ∈ ℝ^{m×|A|}, the optimal action-value function of M̄, one can obtain an approximate value function for M̂ as ṽ = ΓḊQ̄*, where Γ is the "max" operator applied row-wise, that is, ṽ_i = max_a (ḊQ̄*)_{ia}. We have shown that the error in ṽ is bounded by

    ‖v̂* − ṽ‖∞ ≤ [1/(1−γ)] max_a ‖r̂^a − Ḋr̄^a‖∞ + [Ĉ/(1−γ)] max_{a,i} (1 − max_j ḋ_ij) + [C̄/(2(1−γ)²)] max_a ‖P̂^a − ḊK^a‖∞,    (2)

where ‖·‖∞ is the infinity norm, v̂* ∈ ℝⁿ is the optimal value function of KBRL's MDP, Ĉ = max_{a,i} r̂_i^a − min_{a,i} r̂_i^a, C̄ = max_{a,i} r̄_i^a − min_{a,i} r̄_i^a, and K^a is matrix K̇ with all elements equal to zero except for those corresponding to matrix K̇^a (see [4] for details).
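A toy numerical illustration of the stochastic-factorization trick may be helpful. The dimensions and random factors below are arbitrary, chosen only to show that swapping row-stochastic factors D and K yields a smaller matrix that is itself stochastic.

import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2                                  # in practice m << n
D = rng.random((n, m)); D /= D.sum(axis=1, keepdims=True)
K = rng.random((m, n)); K /= K.sum(axis=1, keepdims=True)

P = D @ K                                    # n x n transition matrix
P_bar = K @ D                                # m x m "swapped" model
assert np.allclose(P.sum(axis=1), 1) and np.allclose(P_bar.sum(axis=1), 1)
print(P_bar)

Solving the m-state model in place of the n-state one is exactly the saving KBSF exploits.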
3 Incremental kernel-based stochastic factorization

In the batch version of KBSF, described in Section 2, the matrices P̄^a and vectors r̄^a are determined using all the transitions in the corresponding sets S^a simultaneously. This has two undesirable consequences. First, the construction of the MDP M̄ requires an amount of memory of O(n_max m), where n_max = max_a n_a. Although this is a significant improvement over KBRL's memory usage, which is O(n_max²), in more challenging domains even a linear dependence on n_max may be impractical. Second, with batch KBSF the only way to incorporate new data into the model M̄ is to recompute the multiplication P̄^a = K̇^a Ḋ^a for all actions a for which there are new sample transitions available. Even if we ignore the issue of memory usage, this is clearly inefficient in terms of computation. In this section we present an incremental version of KBSF that circumvents these important limitations.

Suppose we split the set of sample transitions S^a in two subsets S₁ and S₂ such that S₁ ∩ S₂ = ∅ and S₁ ∪ S₂ = S^a. Without loss of generality, suppose that the sample transitions are indexed so that S₁ ≡ {(s_k^a, r_k^a, ŝ_k^a) | k = 1, 2, ..., n₁} and S₂ ≡ {(s_k^a, r_k^a, ŝ_k^a) | k = n₁ + 1, n₁ + 2, ..., n₁ + n₂ = n_a}. Let P̄^{S₁} and r̄^{S₁} be matrix P̄^a and vector r̄^a computed by KBSF using only the n₁ transitions in S₁ (if n₁ = 0, we define P̄^{S₁} = 0 ∈ ℝ^{m×m} and r̄^{S₁} = 0 ∈ ℝ^m for all a ∈ A). We want to compute P̄^{S₁∪S₂} and r̄^{S₁∪S₂} from P̄^{S₁}, r̄^{S₁}, and S₂, without using the set of sample transitions S₁. We start with the transition matrices P̄^a. We know that

    p̄_ij^{S₁} = Σ_{t=1}^{n₁} k̇_it^a ḋ_tj^a
              = Σ_{t=1}^{n₁} [ k_τ(s̄_i, s_t^a) / Σ_{l=1}^{n₁} k_τ(s̄_i, s_l^a) ] · [ k_τ̄(ŝ_t^a, s̄_j) / Σ_{l=1}^{m} k_τ̄(ŝ_t^a, s̄_l) ]
              = [ 1 / Σ_{l=1}^{n₁} k_τ(s̄_i, s_l^a) ] Σ_{t=1}^{n₁} k_τ(s̄_i, s_t^a) k_τ̄(ŝ_t^a, s̄_j) / Σ_{l=1}^{m} k_τ̄(ŝ_t^a, s̄_l).

To simplify the notation, define w_i^{S₁} = Σ_{l=1}^{n₁} k_τ(s̄_i, s_l^a), w_i^{S₂} = Σ_{l=n₁+1}^{n₁+n₂} k_τ(s̄_i, s_l^a), and c_ij^t = k_τ(s̄_i, s_t^a) k_τ̄(ŝ_t^a, s̄_j) / Σ_{l=1}^{m} k_τ̄(ŝ_t^a, s̄_l), with t ∈ {1, 2, ..., n₁ + n₂}. Then

    p̄_ij^{S₁∪S₂} = [1/(w_i^{S₁} + w_i^{S₂})] ( Σ_{t=1}^{n₁} c_ij^t + Σ_{t=n₁+1}^{n₁+n₂} c_ij^t ) = [1/(w_i^{S₁} + w_i^{S₂})] ( p̄_ij^{S₁} w_i^{S₁} + Σ_{t=n₁+1}^{n₁+n₂} c_ij^t ).

Now, defining b_ij^{S₂} = Σ_{t=n₁+1}^{n₁+n₂} c_ij^t, we have the simple update rule:

    p̄_ij^{S₁∪S₂} = [1/(w_i^{S₁} + w_i^{S₂})] ( b_ij^{S₂} + p̄_ij^{S₁} w_i^{S₁} ).    (3)

We can apply similar reasoning to derive an update rule for the rewards r̄_i^a. We know that

    r̄_i^{S₁} = [ 1 / Σ_{l=1}^{n₁} k_τ(s̄_i, s_l^a) ] Σ_{t=1}^{n₁} k_τ(s̄_i, s_t^a) r_t^a = (1/w_i^{S₁}) Σ_{t=1}^{n₁} k_τ(s̄_i, s_t^a) r_t^a.

Let h_i^t = k_τ(s̄_i, s_t^a) r_t^a, with t ∈ {1, 2, ..., n₁ + n₂}. Then

    r̄_i^{S₁∪S₂} = [1/(w_i^{S₁} + w_i^{S₂})] ( Σ_{t=1}^{n₁} h_i^t + Σ_{t=n₁+1}^{n₁+n₂} h_i^t ) = [1/(w_i^{S₁} + w_i^{S₂})] ( r̄_i^{S₁} w_i^{S₁} + Σ_{t=n₁+1}^{n₁+n₂} h_i^t ).

Defining e_i^{S₂} = Σ_{t=n₁+1}^{n₁+n₂} h_i^t, we have the following update rule:

    r̄_i^{S₁∪S₂} = [1/(w_i^{S₁} + w_i^{S₂})] ( e_i^{S₂} + r̄_i^{S₁} w_i^{S₁} ).    (4)

Since b_ij^{S₂}, e_i^{S₂}, and w_i^{S₂} can be computed based on S₂ only, we can discard the sample transitions in S₁ after computing P̄^{S₁} and r̄^{S₁}. To do that, we only have to keep the variables w_i^{S₁}. These variables can be stored in |A| vectors w^a ∈ ℝ^m, resulting in a modest memory overhead. Note that we can apply the ideas above recursively, further splitting the sets S₁ and S₂ in subsets of smaller size.
Thus, we have a fully incremental way of computing KBSF's MDP which requires almost no extra memory. Algorithm 1 shows a step-by-step description of how to update M̄ based on a set of sample transitions. Using this method to update its model, KBSF's space complexity drops from O(nm) to O(m²). Since the amount of memory used by KBSF is now independent of n, it can process an arbitrary number of sample transitions.

Algorithm 1: Update KBSF's MDP
Input: P̄^a, r̄^a, w^a for all a ∈ A; S^a for all a ∈ A
Output: Updated M̄ and w^a
for each a ∈ A do
    n_a ← |S^a|
    for t = 1, ..., n_a do: z_t ← Σ_{l=1}^{m} k_τ̄(ŝ_t^a, s̄_l)
    for i = 1, 2, ..., m do
        w′ ← Σ_{t=1}^{n_a} k_τ(s̄_i, s_t^a)
        for j = 1, 2, ..., m do
            b ← Σ_{t=1}^{n_a} k_τ(s̄_i, s_t^a) k_τ̄(ŝ_t^a, s̄_j) / z_t
            p̄_ij ← (b + p̄_ij w_i^a) / (w_i^a + w′)
        e ← Σ_{t=1}^{n_a} k_τ(s̄_i, s_t^a) r_t^a
        r̄_i^a ← (e + r̄_i^a w_i^a) / (w_i^a + w′)
        w_i^a ← w_i^a + w′

Instead of assuming that S₁ and S₂ are a partition of a fixed dataset S^a, we can consider that S₂ was generated based on the policy learned by KBSF using the transitions in S₁. Thus, Algorithm 1 provides a flexible framework for integrating learning and planning within KBSF. A general description of the incremental version of KBSF is given in Algorithm 2. iKBSF updates the model M̄ and the value function Q̄ at fixed intervals t_m and t_v, respectively. When t_m = t_v = n, we recover the batch version of KBSF; when t_m = t_v = 1, we have an on-line method which stores no sample transitions.

Algorithm 2: Incremental KBSF (iKBSF)
Input: representative states s̄_i, i = 1, 2, ..., m; interval t_m to update the model; interval t_v to update the value function; total number of sample transitions n
Output: approximate value function Q̃(s, a)
Q̄ ← arbitrary matrix in ℝ^{m×|A|}
P̄^a ← 0 ∈ ℝ^{m×m}, r̄^a ← 0 ∈ ℝ^m, w^a ← 0 ∈ ℝ^m, for all a ∈ A
for t = 1, 2, ..., n do
    select a based on Q̃(s_t, a) = Σ_{i=1}^{m} κ̄_τ̄(s_t, s̄_i) q̄_ia
    execute a in s_t and observe r_t and ŝ_t
    S^a ← S^a ∪ {(s_t, r_t, ŝ_t)}
    if (t mod t_m = 0) then
        add new representative states to M̄ using S^a
        update M̄ and w^a using Algorithm 1 and S^a
        S^a ← ∅ for all a ∈ A
    if (t mod t_v = 0), update Q̄

Note that Algorithm 2 also allows for the inclusion of new representative states in the model M̄. Using Algorithm 1 this is easy to do: given a new representative state s̄_{m+1}, it suffices to set w_{m+1}^a = 0, r̄_{m+1}^a = 0, and p̄_{m+1,j} = p̄_{j,m+1} = 0 for j = 1, 2, ..., m + 1 and all a ∈ A. Then, in the following applications of Eqns (3) and (4), the dynamics of M̄ will naturally reflect the existence of state s̄_{m+1}.
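To make Algorithm 1 concrete, here is a minimal single-action sketch of the incremental update implementing rules (3) and (4). The Gaussian kernel, the toy representative states, and the batch contents are illustrative choices, not the paper's experimental setup.

import numpy as np

def k_tau(x, y, tau):
    # illustrative Gaussian kernel k_tau(x, y)
    return np.exp(-np.linalg.norm(x - y) ** 2 / tau ** 2)

def update_model(P, r, w, rep, batch, tau, tau_bar):
    # P (m x m), r (m,), w (m,): current model; batch: list of (s, reward, s_next)
    m = len(rep)
    for i in range(m):
        w_new, b, e = 0.0, np.zeros(m), 0.0
        for s, rew, s_next in batch:
            ks = k_tau(rep[i], s, tau)
            kb = np.array([k_tau(s_next, rep[l], tau_bar) for l in range(m)])
            w_new += ks
            b += ks * kb / kb.sum()       # accumulates c_ij^t over the batch
            e += ks * rew                 # accumulates h_i^t
        denom = w[i] + w_new
        P[i] = (b + P[i] * w[i]) / denom  # rule (3)
        r[i] = (e + r[i] * w[i]) / denom  # rule (4)
        w[i] = denom

rep = [np.array([0.0]), np.array([1.0])]  # two toy representative states
P, r, w = np.zeros((2, 2)), np.zeros(2), np.zeros(2)
batch = [(np.array([0.1]), 1.0, np.array([0.9])),
         (np.array([0.8]), 0.0, np.array([0.2]))]
update_model(P, r, w, rep, batch, tau=0.5, tau_bar=0.5)
print(P, r, w, sep="\n")

After the first call the rows of P sum to one, and repeated calls with further batches reproduce exactly the model that a single batch computation over all transitions would give.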
4 Theoretical Results

Our previous experiments with KBSF suggest that, at least empirically, the algorithm's performance improves as m → n [4]. In this section we present theoretical results that confirm this property. The results below are particularly useful for iKBSF because they provide practical guidance towards where and when to add new representative states.

Suppose we have a fixed set of sample transitions S^a. We will show that, if we are free to define the representative states, then we can use KBSF to approximate KBRL's solution to any desired level of accuracy. To be more precise, let d̄ ≡ max_{a,i} min_j ‖ŝ_i^a − s̄_j‖, that is, d̄ is the maximum distance from a sampled state ŝ_i^a to the closest representative state. We will show that, by minimizing d̄, we can make ‖v̂* − ṽ‖∞ as small as desired (cf. Eqn (2)). Let s̃^a ≡ ŝ_k^a with k = argmax_i min_j ‖ŝ_i^a − s̄_j‖, and let s̄^a ≡ s̄_h where h = argmin_j ‖s̃^a − s̄_j‖; that is, s̃^a is the sampled state in S^a whose distance to the closest representative state is maximal, and s̄^a is the representative state that is closest to s̃^a. Using these definitions, we can select the pair (s̃^a, s̄^a) that maximizes ‖s̃^a − s̄^a‖: s̃ ≡ s̃^b and s̄ ≡ s̄^b, where b = argmax_a ‖s̃^a − s̄^a‖. Obviously, ‖s̃ − s̄‖ = d̄. We make the following simple assumptions: (i) s̃^a and s̄^a are unique for all a ∈ A; (ii) ∫₀^∞ φ(x) dx ≡ L_φ < ∞; (iii) φ(x) ≥ φ(y) if x < y; (iv) there exist A_φ, λ_φ > 0 and B_φ ≥ 0 such that A_φ exp(−x) ≤ φ(x) ≤ λ_φ A_φ exp(−x) if x ≥ B_φ. Assumption (iv) implies that the kernel function φ will eventually decay exponentially.

We start by introducing the following definition:

Definition 1. Given α ∈ (0, 1] and s, s′ ∈ S, the α-radius of k_τ with respect to s and s′ is defined as ρ(k_τ, s, s′, α) = max{x ∈ ℝ⁺ | φ(x/τ) = α k_τ(s, s′)}.

The existence of ρ(k_τ, s, s′, α) is guaranteed by assumptions (ii) and (iii) and the fact that φ is continuous [1]. To provide some intuition on the meaning of the α-radius of k_τ, suppose that φ is strictly decreasing and let c = φ(‖s − s′‖/τ). Then, there is an s″ ∈ S such that φ(‖s − s″‖/τ) = αc. The radius of k_τ in this case is ‖s − s″‖. It should be thus obvious that ρ(k_τ, s, s′, α) ≥ ‖s − s′‖. We can show that ρ has the following properties (proved in the supplementary material):

Property 1. If ‖s − s′‖ < ‖s − s″‖, then ρ(k_τ, s, s′, α) ≤ ρ(k_τ, s, s″, α).
Property 2. If α < α′, then ρ(k_τ, s, s′, α) > ρ(k_τ, s, s′, α′).
Property 3. For α ∈ (0, 1) and ε > 0, there is a δ > 0 such that ρ(k_τ, s, s′, α) − ‖s − s′‖ < ε if τ < δ.

We now introduce a notion of dissimilarity between two states s, s′ ∈ S which is induced by a specific set of sample transitions S^a and the choice of kernel function:

Definition 2. Given θ > 0, the θ-dissimilarity between s and s′ with respect to κ_τ^a is defined as ζ(κ_τ^a, s, s′, θ) = Σ_{k=1}^{n_a} |κ_τ^a(s, s_k^a) − κ_τ^a(s′, s_k^a)| if ‖s − s′‖ ≤ θ, and ζ(κ_τ^a, s, s′, θ) = 0 otherwise.

The parameter θ defines the volume of the ball within which we want to compare states. As we will see, this parameter links Definitions 1 and 2. Note that ζ(κ_τ^a, s, s′, θ) ∈ [0, 2]. It is possible to show that ζ satisfies the following property (see the supplementary material):

Property 4. For θ > 0 and ε > 0, there is a δ > 0 such that ζ(κ_τ^a, s, s′, θ) < ε if ‖s − s′‖ < δ.

Definitions 1 and 2 allow us to enunciate the following result:

Lemma 1. For any α ∈ (0, 1] and any t ≥ m − 1, let θ^a = ρ(k_τ̄, s̃^a, s̄^a, α/t), let ζ̄^a = max_{i,j} ζ(κ_τ^a, ŝ_i^a, s̄_j, θ^a), and let ζ_max^a = max_{i,j} ζ(κ_τ^a, ŝ_i^a, s̄_j, ∞). Then

    ‖P̂^a − ḊK^a‖∞ ≤ [1/(1 + α)] ζ̄^a + [α/(1 + α)] ζ_max^a.    (5)

Proof. See supplementary material.

Since ζ̄^a ≤ ζ_max^a, one might think at first that the right-hand side of Eqn (5) decreases monotonically as α → 0. This is not necessarily true, though, because ζ̄^a → ζ_max^a as α → 0 (see Property 2).
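Definition 2 is straightforward to compute. The sketch below evaluates the θ-dissimilarity under an exponential kernel, an illustrative choice consistent with assumption (iv); the sampled states and parameter values are toy inputs.

import numpy as np

def zeta(s, s2, samples, tau, theta):
    # theta-dissimilarity between s and s2 w.r.t. the normalized kernels
    if np.linalg.norm(s - s2) > theta:
        return 0.0
    k  = np.array([np.exp(-np.linalg.norm(s  - x) / tau) for x in samples])
    k2 = np.array([np.exp(-np.linalg.norm(s2 - x) / tau) for x in samples])
    return np.abs(k / k.sum() - k2 / k2.sum()).sum()

samples = [np.array([x]) for x in (0.0, 0.3, 0.7, 1.0)]
print(zeta(np.array([0.4]), np.array([0.5]), samples, tau=0.2, theta=1.0))

The value shrinks as the two states approach each other, which is the content of Property 4.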
DKa ? < ?. From Lemma 1 we know that, for any set of m ? n representative states, and for any ? ? (0, 1], the following must hold: max kPa ? DKa k? ? (1 + ?)?1 ?? + ?(1 + ?)?1 ?MAX , a where ?MAX = maxa,i,s ?(k? , s?ai , s, ?) and ?? = maxa ??a = maxa,i, j ?(??a , s?ai , s? j , ? a ), with ? a = ?(k?? , s?a? , s?a? , ?/(n ? 1)). Note that ?MAX is independent of the representative states. Define ? such that ?/(1 + ?)?MAX < ?. We have to show that, if we define the representative states in such a way that d? is small enough, and set ?? accordingly, then we can make ?? < (1 ? ?)? ? ??MAX ? ? ? . From Property 4 we know that there is a ?1 > 0 such that ?? < ? ? if ? a < ?1 for all a ? A. From Property 1 we know that ? a ? ?(k?? , s?? , s?? , ?/(n ? 1)) for all a ? A. From Property 3 we know that, for any ? ? > 0, there is a ? ? > 0 such that ?(k?? , s?? , s?? , ?/(n ? 1)) < d? + ? ? if ?? < ? ? . Therefore, if ? It remains to show that there d? < ?1 , we can take any ? ? < ?1 ? d? to have an upper bound ? ? for ?. a is a ? > 0 such that mini max j di j > 1 ? ? if ?? < ? . Recalling that d?iaj = k?? (s?ai , s? j )/?m k=1 k?? (s?i , s?k ), a a a a a let h = argmax j k?? (s?i , s? j ), and let yi = k?? (s?i , s?h ) and y?i = max j6=h k?? (s?i , s? j ). Then, for any i,  max j d?iaj = yai / yai + ? j6=h k?? (s?ai , s? j ) ? yai /(yai + (m ? 1)y?ai ). From Assump. (i) and Prop. 3 we know that there is a ?ia > 0 such that yai > (m ? 1)(1 ? ?)y?ai /? if ?? < ?ia . Thus, by making ? = mina,i ?ia , we can guarantee that mini max j di j > 1 ? ?. If we take ?2 = min(? , ? ? ), the result follows. Proposition 1 tells us that, regardless of the specific reinforcement-learning problem at hand, if the distances between sampled states and the respective nearest representative states are small enough, then we can make KBSF?s approximation of KBRL?s value function as accurate as desired by setting ?? to a small value. How small d? and ?? should be depends on the particular choice of kernel k? and on the characteristics of the sets of transitions Sa . Of course, a fixed number m of representative states imposes a minimum possible value for d? , and if this value is not small enough decreasing ?? may actually hurt the approximation. Again, the optimal value for ?? in this case is problem-dependent. Our result supports the use of a local approximation based on representative states spread over the state space S. This is in line with the quantization strategies used in batch-mode kernel-based reinforcement learning to define the states s? j [4, 5]. In the case of on-line learning, we have to adaptively define the representative states s? j as the sample transitions come in. One can think of several ways of doing so [10]. In the next section we show a simple strategy for adding representative states which is based on the theoretical results presented in this section. 5 Empirical Results We now investigate the empirical performance of the incremental version of KBSF. We start with a simple task in which iKBSF is contrasted with batch KBSF. Next we exploit the scalability of iKBSF to solve a difficult control task that, to the best of our knowledge, has never been solved before. We use the ?puddle world? problem as a proof of concept [11]. In this first experiment we show that iKBSF is able to recover the model that would be computed by its batch counterpart. In order to do so, we applied Algorithm 2 to the puddle-world task using a random policy to select actions. 
Figure 1a shows the result of such an experiment when we vary the parameters t_m and t_v. Note that the case in which t_m = t_v = 8000 corresponds to the batch version of KBSF. As expected, the performance of KBSF decision policies improves gradually as the algorithm goes through more sample transitions, and in general the intensity of the improvement is proportional to the amount of data processed. More important, the performance of the decision policies after all sample transitions have been processed is essentially the same for all values of t_m and t_v, which shows that iKBSF can be used as a tool to circumvent KBSF's memory demand (which is linear in n). Thus, if one has a batch of sample transitions that does not fit in the available memory, it is possible to split the data in chunks of smaller sizes and still get the same value-function approximation that would be computed if the entire data set were processed at once. As shown in Figure 1b, there is only a small computational overhead associated with such a strategy (this results from unnormalizing and normalizing the elements of P̄^a and r̄^a several times through update rules (3) and (4)).

[Figure 1 here: (a) Performance (return against number of sample transitions) and (b) Run times (seconds against number of sample transitions), with one curve per value of t_m = t_v ∈ {1000, 2000, 4000, 8000}.]

Figure 1: Results on the puddle-world task averaged over 50 runs. iKBSF used 100 representative states evenly distributed over the state space and t_m = t_v (see legends). Sample transitions were collected by a random policy. The agents were tested on two sets of states surrounding the "puddles": a 3 × 3 grid over [0.1, 0.3] × [0.3, 0.5] and the four states {0.1, 0.3} × {0.9, 1.0}.

But iKBSF is more than just a tool for avoiding the memory limitations associated with batch learning. We illustrate this fact with a more challenging RL task. Pole balancing has a long history as a benchmark problem because it represents a rich class of unstable systems [12, 13, 14]. The objective in this task is to apply forces to a wheeled cart moving along a limited track in order to keep one or more poles hinged to the cart from falling over [15]. There are several variations of the problem with different levels of difficulty; among them, balancing two poles at the same time is particularly hard [16]. In this paper we raise the bar, and add a third pole to the pole-balancing task. We performed our simulations using the parameters usually adopted with the double pole task, except that we added a third pole with the same length and mass as the longer pole [15]. This results in a problem with an 8-dimensional state space S.

In our experiments with the double-pole task, we used 200 representative states and 10⁶ sample transitions collected by a random policy [4]. Here we start our experiment with triple pole-balancing using exactly the same configuration, and then we let KBSF refine its model M̄ by incorporating more sample transitions through update rules (3) and (4). Specifically, we used Algorithm 2 with a 0.3-greedy policy, t_m = t_v = 10⁶, and n = 10⁷. Policy iteration was used to compute Q̄* at each value-function update. As for the kernels, we adopted Gaussian functions with widths τ = 100 and τ̄ = 1 (to improve efficiency, we used a KD-tree to only compute the 50 largest values of k_τ(s̄_i, ·) and the 10 largest values of k_τ̄(ŝ_i^a, ·)).
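The on-line criterion for growing the model, described next, reduces to a simple kernel threshold test. A sketch follows, with the threshold 0.01 as in the experiment and all other values illustrative.

import numpy as np

def maybe_add_representative(s_hat, rep, tau_bar, thresh=0.01):
    # add s_hat as a new representative state if every current one is too far
    k = [np.exp(-np.linalg.norm(s_hat - x) ** 2 / tau_bar ** 2) for x in rep]
    if not rep or max(k) < thresh:
        rep.append(s_hat)   # new rows/columns of the model start at zero
        return True
    return False

rep = [np.array([0.0])]
for s in (np.array([0.05]), np.array([2.0])):
    print(maybe_add_representative(s, rep, tau_bar=0.5), len(rep))

The nearby state is absorbed by the existing representative state, while the distant one triggers the growth step of Algorithm 2, exactly the mechanism that keeps d̄ small as required by Proposition 1.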
Representative states were added to the model on-line every time the agent encountered a sample state ŝ_i^a for which k_τ̄(ŝ_i^a, s̄_j) < 0.01 for all j ∈ {1, 2, ..., m} (this corresponds to setting the maximum allowed distance d̄ from a sampled state to the closest representative state).

We compare iKBSF with fitted Q-iteration using an ensemble of 30 trees generated by Ernst et al.'s extra-trees algorithm [17]. We chose this algorithm because it has shown excellent performance in both benchmark and real-world reinforcement-learning tasks [17, 18].¹ Since this is a batch-mode learning method, we used its result on the initial set of 10⁶ sample transitions as a baseline for our empirical evaluation. To build the trees, the number of cut-directions evaluated at each node was fixed at dim(S) = 8, and the minimum number of elements required to split a node, denoted here by η_min, was first set to 1000 and then to 100. The algorithm was run for 50 iterations, with the structure of the trees fixed after the 10th iteration.

¹ Another reason for choosing fitted Q-iteration was that some of the most natural competitors of iKBSF have already been tested on the simpler double pole-balancing task, with disappointing results [19, 4].

As shown in Figure 2a, both fitted Q-iteration and batch KBSF perform poorly in the triple pole-balancing task, with average success rates below 55%. This suggests that the amount of data used
The good performance of iKBSF on the triple pole-balancing task is especially impressive when we recall that the decision policies were evaluated on a set of test states representing all possible directions of inclination of the three poles. In order to achieve the same level of performance with KBSF, approximately 2 GB of memory would be necessary, even using sparse kernels, whereas iKBSF used less than 0.03 GB of memory. To conclude, observe in Figure 2c how the number of representative states m grows as a function of the number of sample transitions processed by KBSF. As expected, in the beginning of the learning process m grows fast, reflecting the fact that some relevant regions of the state space have not been visited yet. As more and more data come in, the number of representative states starts to stabilize.

6 Conclusion

This paper presented two contributions, one practical and one theoretical. The practical contribution is iKBSF, the incremental version of KBSF. iKBSF retains all the nice properties of its precursor: it is simple, fast, and enjoys good theoretical guarantees. However, since its memory complexity is independent of the number of sample transitions, iKBSF can be applied to datasets of any size, and it can also be used on-line. To show how iKBSF's ability to process large amounts of data can be useful in practice, we used the proposed algorithm to learn how to simultaneously balance three poles, a difficult control task that had never been solved before. As for the theoretical contribution, we showed that KBSF can approximate KBRL's value function at any level of accuracy by minimizing the distance between sampled states and the closest representative state. This supports the quantization strategies usually adopted in kernel-based RL, and also offers guidance towards where and when to add new representative states in on-line learning.

Acknowledgments

The authors would like to thank Amir-massoud Farahmand for helpful discussions regarding this work. Funding for this research was provided by the National Institutes of Health (grant R21 DA019800) and the NSERC Discovery Grant program.

References

[1] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 49(2-3):161-178, 2002.
[2] D. Ormoneit and P. Glynn. Kernel-based reinforcement learning in average-cost problems. IEEE Transactions on Automatic Control, 47(10):1624-1636, 2002.
[3] N. Jong and P. Stone. Kernel-based models for reinforcement learning in continuous state spaces. In Proceedings of the International Conference on Machine Learning (ICML) Workshop on Kernel Machines and Reinforcement Learning, 2006.
[4] A. M. S. Barreto, D. Precup, and J. Pineau. Reinforcement learning using kernel-based stochastic factorization. In Advances in Neural Information Processing Systems (NIPS), pages 720-728, 2011.
[5] B. Kveton and G. Theocharous. Kernel-based reinforcement learning on representative states. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 124-131, 2012.
[6] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[7] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., 1994.
[8] M. L. Littman, T. L. Dean, and L. P. Kaelbling. On the complexity of solving Markov decision problems. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), pages 394-402, 1995.
[9] A. M. S. Barreto and M. D. Fragoso.
Computing the stationary distribution of a finite Markov chain through stochastic factorization. SIAM Journal on Matrix Analysis and Applications, 32:1513-1523, 2011.
[10] Y. Engel, S. Mannor, and R. Meir. The kernel recursive least-squares algorithm. IEEE Transactions on Signal Processing, 52:2275-2285, 2003.
[11] R. S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Advances in Neural Information Processing Systems (NIPS), pages 1038-1044, 1996.
[12] D. Michie and R. Chambers. BOXES: An experiment in adaptive control. Machine Intelligence 2, pages 125-133, 1968.
[13] C. W. Anderson. Learning and Problem Solving with Multilayer Connectionist Systems. PhD thesis, Computer and Information Science, University of Massachusetts, 1986.
[14] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 13:834-846, 1983.
[15] F. J. Gomez. Robust non-linear control through neuroevolution. PhD thesis, The University of Texas at Austin, 2003.
[16] A. P. Wieland. Evolving neural network controllers for unstable systems. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), pages 667-673, 1991.
[17] D. Ernst, P. Geurts, and L. Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503-556, 2005.
[18] D. Ernst, G. B. Stan, J. Gonçalves, and L. Wehenkel. Clinical data based optimal STI strategies for HIV: a reinforcement learning approach. In Proceedings of the IEEE Conference on Decision and Control (CDC), pages 124-131, 2006.
[19] F. Gomez, J. Schmidhuber, and R. Miikkulainen. Efficient non-linear control through neuroevolution. In Proceedings of the European Conference on Machine Learning (ECML), pages 654-662, 2006.
Performance Measures for Associative Memories that Learn and Forget

Anthony Kuh
Department of Electrical Engineering
University of Hawaii at Manoa
Honolulu, HI 96822

ABSTRACT

Recently, many modifications to the McCulloch/Pitts model have been proposed where both learning and forgetting occur. Given that the network never saturates (ceases to function effectively due to an overload of information), the learning updates can continue indefinitely. For these networks, we need to introduce performance measures in addition to the information capacity to evaluate the different networks. We mathematically define quantities such as the plasticity of a network, the efficacy of an information vector, and the probability of network saturation. From these quantities we analytically compare different networks.

1. Introduction

Work has recently been undertaken to quantitatively measure the computational aspects of network models that exhibit some of the attributes of neural networks. The McCulloch/Pitts model discussed in [1] was one of the earliest neural network models to be analyzed. Some computational properties of what we call a Hopfield Associative Memory Network (HAMN), similar to the McCulloch/Pitts model, were discussed by Hopfield in [2]. The HAMN can be measured quantitatively by defining and evaluating the information capacity as [2-6] have shown, but this network fails to exhibit more complex computational capabilities that neural networks have, due to its simplified structure. The HAMN belongs to a class of networks which we call static. In static networks the learning and recall procedures are separate. The network first learns a set of data and, after learning is complete, recall occurs. In dynamic networks, as opposed to static networks, updated learning and associative recall are intermingled and continual. In many applications, such as adaptive communication systems, image processing, and speech recognition, dynamic networks are needed to adaptively learn the changing information data. This paper formally develops and analyzes some dynamic models for neural networks. Some existing models [7-10] are analyzed, new models are developed, and measures are formulated for evaluating the performance of different dynamic networks.

In [2-6], the asymptotic information capacity of the HAMN is defined and evaluated. In [4-5], this capacity is found by first assuming that the information vectors (IVs) to be stored have components that are chosen randomly and independently of all other components in all IVs. The information capacity then gives the maximum number of IVs that can be stored in the HAMN such that IVs can be recovered with high probability during retrieval. At or below capacity, the network with high probability successfully recovers the desired IVs. Above capacity, the network quickly degrades and eventually fails to recover any of the desired IVs. This phenomenon is sometimes referred to as the "forgetting catastrophe" [10]. In this paper we will refer to this phenomenon as network saturation. There are two ways to avoid this phenomenon. The first method involves learning a limited number of IVs such that this number is below capacity. After this learning takes place, no more learning is allowed. Once learning has stopped, the network does not change (defined as static) and therefore lacks many of the interesting computational capabilities that adaptive learning and neural network models have.

© American Institute of Physics 1988
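To make the saturation phenomenon concrete, the following minimal sketch (our illustration, not from the paper) stores random ±1 information vectors in a Hopfield-style network with outer-product (Hebbian) learning and checks recall with a synchronous update; with m well below roughly N/(4 log N) the stored vectors are fixed points, while pushing m far above that level makes recall fail for essentially all of them.

```python
import numpy as np

rng = np.random.default_rng(0)

def store(ivs):
    """Outer-product (Hebbian) storage of +/-1 vectors; zero diagonal."""
    w = ivs.T @ ivs / ivs.shape[1]
    np.fill_diagonal(w, 0.0)
    return w

def recall_ok(w, v):
    """Recall succeeds if v is a fixed point of one synchronous update."""
    return np.array_equal(np.where(w @ v >= 0, 1.0, -1.0), v)

n = 64
for m in (3, 30):                       # below and far above ~n/(4 log n)
    ivs = rng.choice([-1.0, 1.0], size=(m, n))
    w = store(ivs)
    good = sum(recall_ok(w, v) for v in ivs)
    print(f"m = {m:2d}: {good}/{m} IVs are fixed points")
```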
The second method is to incorporate some type of forgetting mechanism in the learning structure, so that the information stored in the network can never exceed capacity. This type of network would be able to adapt to the changing statistics of the IVs, and the network would only be able to recall the most recently learned IVs. This paper focuses on analyzing dynamic networks that adaptively learn new information and do not exhibit network saturation phenomena, by selectively forgetting old data. The emphasis is on developing simple models, and much of the analysis is performed on a dynamic network that uses a modified Hebbian learning rule.

Section 2 introduces and qualitatively discusses a number of network models that are classified as dynamic networks. This section also defines some pertinent measures for evaluating dynamic network models. These measures include the plasticity of a network, the probability of network saturation, and the efficacy of stored IVs. A network with no plasticity cannot learn, and a network with high plasticity has interconnection weights that exhibit large changes. The efficacy of a stored IV as a function of time is another important parameter, as it is used in determining the rate at which a network forgets information. In section 3, we mathematically analyze a simple dynamic network referred to as the Attenuated Linear Updated Learning (ALUL) network, which uses linear updating and a modified Hebbian rule. Quantities introduced in section 2 are analytically determined for the ALUL network. By adjusting the attenuation parameter of the ALUL network, the forgetting factor is adjusted. It is shown that the optimal capacity for a large ALUL network in steady state, defined by (2.13, 3.1), is a factor of e less than the capacity of a HAMN. This is the tradeoff that must be paid for having dynamic capabilities. We also conjecture that no other network can perform better than this network when a worst-case criterion is used. Finally, section 4 discusses further directions for this work, along with possible applications in adaptive signal processing.

2. Dynamic Associative Memory Networks

The network models discussed in this paper are based on the concept of associative memory. Associative memories are composed of a collection of interconnected elements that have data storage capabilities. Like other memory structures, there are two operations that occur in associative memories. In the learning operation (referred to as a write operation for conventional memories), information is stored in the network structure. In the recall operation (referred to as a read operation for conventional memories), information is retrieved from the memory structure. Associative memories recall information on the basis of data content rather than by a specific address. The models that we consider will have learning and recall operations that are updated in discrete time, with the activation state X(j) consisting of N cells that take on the values {-1, 1}.

2.1. Dynamic Network Measures

General associative memory networks are described by two sets of equations. If we let X(j) be the activation state at time j and W(k) be the weight matrix or interconnection state at time k, then the activation or recall equation is described by

X(j+1) = f(X(j), W(k)),  j ≥ 0, k ≥ 0, X(0) = X  (2.1)

where X is the data probe vector used for recall.
The learning algorithm or interconnection equation is described by

W(k+1) = g(V(i), 0 ≤ i < k, W(0))  (2.2)

where {V(i)} are the information vectors (IVs) to be stored and W(0) is the initial state of the interconnection matrix. Usually the learning-algorithm time scale is much longer than the recall-equation time scale, so that W in (2.1) can be considered time invariant. Often (2.1) is viewed as the equation governing short-term memory and (2.2) as the equation governing long-term memory.

From the Hebbian hypothesis we note that the data probe vectors should have an effect on the interconnection matrix W. If a number of data probe vectors recall an IV V(i), the strength of recall of the IV V(i) should be increased by appropriate modification of W. If another IV is never recalled, it should gradually be forgotten by again adjusting terms of W. Following the analysis in [4,5], we assume that all components of the IVs introduced are independent and identically distributed Bernoulli random variables, with the probability of a 1 or -1 being chosen equal to 1/2.

Our analysis focuses on learning algorithms. Before describing some dynamic learning algorithms, we present some definitions. A network is defined as dynamic if, given some period of time, the rate of change of W is nonzero. In addition, we will primarily discuss networks where learning is gradual and updated at discrete times, as shown in (2.2). By gradual, we mean networks where each update usually consists of one IV being learned and/or forgotten. IVs that have been introduced recently should have a high probability of recovery. The probability of recall for one IV should also be a monotonically decreasing function of time, given that the IV is not repeated. The networks that we consider should also have a relatively low probability of network saturation.

Quantitatively, we let e(k,l,i) be the event that an IV introduced at time l can be recovered at time k with a data probe vector which is at Hamming distance i from the desired IV. The efficacy of network recovery is then given as p(k,l,i) = Pr(e(k,l,i)). In the analysis performed, we say a vector V can recover V(l) if V(l) = Δ(V), where Δ(·) is a synchronous activation update of all cells in the network. The capacity for dynamic networks is then given by

C(k,i,ε) = max m ∋ Pr(Γ({e(k,l,i), 0 ≤ l < k}) = m) > 1 − ε,  0 ≤ i < N/2  (2.3)

where Γ(X) gives the cardinality of the number of events that occur in the set X. Closely related to the capacity of a network is network saturation. Saturation occurs when the network is overloaded with IVs such that few or none of the IVs can be successfully recovered. When a network at time 0 starts to learn IVs, at some time l < k we have that C(l,i,ε) > C(k,i,ε). For k > l the network saturation probability is defined by S(k,m), where S describes the probability that the network cannot recover m IVs.

Another important measure in analyzing the performance of dynamic networks is the plasticity of the interconnections of the weight matrix W. Following definitions that are similar to [10], define

h(k) = Σ_{i≠j} VAR{W_{i,j}(k) − W_{i,j}(k−1)} / (N(N−1))  (2.4)

as the incremental synaptic intensity and

H(k) = Σ_{i≠j} VAR{W_{i,j}(k)} / (N(N−1))  (2.5)

as the cumulative synaptic intensity. From these definitions we can define the plasticity of the network as

P(k) = h(k) / H(k)  (2.6)

When network plasticity is zero, the network does not change and no learning takes place. When plasticity is high, the network interconnections exhibit large changes.
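As a quick illustration of definitions (2.4)-(2.6), the snippet below (our own sketch, not code from the paper) estimates h(k), H(k), and P(k) from a collection of simulated weight-matrix trajectories, using off-diagonal entries only; `trajs` has shape (runs, time, N, N).

```python
import numpy as np

def plasticity(trajs):
    """Estimate h(k), H(k), and P(k) = h/H from weight trajectories.

    trajs: array (runs, T, N, N) of interconnection matrices W(k).
    Variances are taken across runs, averaged over off-diagonal entries.
    Sketch of definitions (2.4)-(2.6) only.
    """
    runs, T, N, _ = trajs.shape
    off = ~np.eye(N, dtype=bool)
    H = trajs[:, :, off].var(axis=0).mean(axis=1)      # (2.5), one value per k
    dW = trajs[:, 1:] - trajs[:, :-1]
    h = dW[:, :, off].var(axis=0).mean(axis=1)         # (2.4), per k >= 1
    P = h / np.maximum(H[1:], 1e-12)                   # (2.6)
    return h, H, P
```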
When analyzing dynamic networks we are often interested in whether the network reaches a steady state. We say a dynamic network reaches steady state if

lim_{k→∞} H(k) = H  (2.7)

where H is a finite nonzero constant. If the IVs have stationary statistics, and given that the learning operations are time invariant, then if a network reaches steady state we have that

lim_{k→∞} P(k) = P  (2.8)

where P is a finite constant. It is also easily verified from (2.6) that if the plasticity converges to a nonzero constant in a dynamic network, then given the above conditions on the IVs and the learning operations, the network will eventually reach steady state. Let us also define the synaptic state at time k for activation state V as

s(k, V) = W(k)V  (2.9)

From the synaptic state we can define the SNR of V, which in section 3 is shown to be closely related to the efficacy of an IV and the capacity of the network:

SNR(k, V, i) = (E(s_i(k, V)))² / VAR(s_i(k, V))  (2.10)

Another quantity that is important in measuring dynamic networks is the complexity of implementation. Quantities dealing with network complexity are discussed in [12], and this paper focuses on networks that are memoryless. A network is memoryless if (2.2) can be expressed in the following form:

W(k+1) = g#(W(k), V(k))  (2.11)

Networks that are not memoryless have the disadvantage that all IVs need to be saved during all learning updates. The complexity of implementation is greatly increased in terms of space complexity, and very likely increased in terms of time complexity.

2.2. Examples of Dynamic Associative Memory Networks

The previous subsection discussed some quantities with which to measure dynamic networks. This subsection discusses some examples of dynamic associative memory networks and qualitatively discusses advantages and disadvantages of the different networks. All the networks considered have the memoryless property. The first network that we discuss is described by the following difference equation:

W(k+1) = a(k)W(k) + b(k)L(V(k))  (2.12)

with W(0) being the initial value of the weights before any learning has taken place. Networks with these learning rules will be labeled Linear Updated Learning (LUL) networks; in addition, if 0 < a(k) < 1 for k ≥ 0, the network is labeled an Attenuated Linear Updated Learning (ALUL) network. We will primarily deal with ALUL networks where 0 < a(k) < 1 and b(k) do not depend on the position in W. This model is a specialized version of Grossberg's Passive Decay LTM equation discussed in [11]. If the learning algorithm is of the correlation type, then

L(V(k)) = V(k)V(k)^T − I  (2.13)

This learning scheme has similarities to the marginalist learning schemes introduced in [10]. One of the key parameters in the ALUL network is the value of the attenuation coefficient a. From simulations and intuition we know that if the attenuation coefficient is too high, the network will saturate, and if the attenuation parameter is too low, the network will forget all but the most recently introduced IVs. Fig. 1 uses Monte Carlo methods to show a plot of the number of IVs recoverable in a 64-cell network when a = 1 (the HAMN) as a function of the learning time scale. From this figure we clearly see that network saturation is exhibited, and for times k ≥ 25 no IVs are recoverable with high probability. Section 3 further analyzes the ALUL network and derives the values of the different measures introduced in section 2.1.
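The ALUL update (2.12)-(2.13) is a one-line modification of Hebbian storage, sketched below on top of the recall test used earlier (again our illustration, with a and b held constant): old IVs decay geometrically while new ones are written at full strength, so the network tracks a sliding window of roughly the most recent 1/(1−a) vectors instead of saturating.

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, b, T = 64, 0.9, 1.0, 200

def recall_ok(w, v):
    return np.array_equal(np.where(w @ v >= 0, 1.0, -1.0), v)

w = np.zeros((n, n))
ivs = []
for k in range(T):
    v = rng.choice([-1.0, 1.0], size=n)
    ivs.append(v)
    w = a * w + b * (np.outer(v, v) - np.eye(n))  # ALUL update (2.12)-(2.13)

# Only the most recently stored IVs remain recoverable.
recent = sum(recall_ok(w, v) for v in ivs[-10:])
old = sum(recall_ok(w, v) for v in ivs[:10])
print(f"recovered {recent}/10 most recent IVs, {old}/10 oldest IVs")
```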
Another learning scheme, called bounded learning (BL), can be described by

L(V(k)) = V(k)V(k)^T − I if F(W(k)) ≤ A, and 0 otherwise  (2.14)

By setting the attenuation parameter a = 1 and letting

F(W(k)) = max_{i,j} W_{i,j}(k)  (2.15)

this is identical to the learning-with-bounds scheme discussed in [10]. Unfortunately, there is a serious drawback to this model. If A is too large, the network will saturate with high probability. If A is set such that the probability of network saturation is low, then the network has the characteristic of not learning for almost all values of k > k(A) = min l ∋ F(W(l)) ≥ A. Therefore we have that the efficacy of network recovery, p(k,l,0), is approximately 0 for all k ≥ l ≥ k(A). In order for the BL scheme to be classified as dynamic learning, the attenuation parameter a must have values between 0 and 1. This learning scheme is then just a more complex version of the learning scheme derived from (2.12, 2.13). Let us qualitatively analyze the learning scheme when a and b are constant. There are two cases to consider. When A > H, the network is not affected by the bounds and behaves as the ALUL network. When A < H, the network accepts IVs until the bound is reached. When the bound is reached, the network waits until the values of the interconnection matrix have attenuated to the prescribed levels, where learning can continue. If A is judiciously chosen, BL with a < 1 provides a means for a network to avoid saturation. By holding an IV until H(k) < A, it is not too difficult to show that this learning scheme is equivalent to an ALUL network with b(k) time varying.

A third learning scheme, called refresh learning (RL), can be described by (2.12) with b(k) = 1, W(0) = 0, and

a(k) = 1 − δ(k mod l)  (2.16)

This learning scheme learns a set of IVs and periodically refreshes the weighting matrix so that all interconnections are 0. RL can be classified as dynamic learning, but learning is not gradual during the periodic refresh cycle. Another problem with this learning scheme is that the efficacy of the IVs depends on where during the period they were learned. IVs learned late in a period are quickly forgotten, whereas IVs learned early in a period have a longer time in which they are recoverable.

In all the learning schemes introduced, the network has both learning and forgetting capabilities. A network introduced in [7,8] separates the learning and forgetting tasks by using the standard HAMN algorithm to learn IVs and a random selective forgetting algorithm to unlearn excess information. The algorithm, which we call random selective forgetting (RSF), can be described formally as follows:

W(k+1) = Y(k) + L(V(k))  (2.17)

where

Y(k) = W(k) − μ(k) [ Σ_{i=1}^{n(F(W(k)))} V(k,i)V(k,i)^T − n(F(W(k))) I ]  (2.18)

Each of the vectors V(k,i) is obtained by choosing a random vector V in the same manner IVs are chosen, and letting V be the initial state of the HAMN with interconnection matrix W(k). The recall operation described by (2.1) is repeated until the activation has settled into a local minimum state; V(k,i) is then assigned this state. μ(k) is the rate at which the randomly selected local-minimum-energy states are forgotten, F(W(k)) is given by (2.15), and n(X) is a nonnegative integer-valued function that is a monotonically increasing function of X. The analysis of the RSF algorithm is difficult, because the energy manifold that describes the energy of each activation state and the updates allowable for (2.1) must be well understood.
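Before turning to the energy-based view of RSF, note that the bounded-learning and refresh-learning rules are easy to state as variants of the linear update (2.12); the sketch below is our own illustration of (2.14)-(2.16), with `A` the bound and `period` the refresh interval, not code from the paper.

```python
import numpy as np

def bl_step(w, v, a, b, A):
    """Bounded learning (2.14): learn only while the max weight is within A."""
    learn = w.max() <= A
    L = np.outer(v, v) - np.eye(len(v)) if learn else 0.0
    return a * w + b * L

def rl_step(w, v, k, period):
    """Refresh learning (2.16): zero the matrix every `period` steps (a = 0),
    otherwise keep a = 1, then store the new IV."""
    a = 0.0 if k % period == 0 else 1.0
    return a * w + (np.outer(v, v) - np.eye(len(v)))
```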
There is a simple transformation between the weighting matrix and the energy of an activation state, given by

E(X(k)) = −(1/2) Σ_i Σ_j W_{i,j} X_i(k) X_j(k),  k > 0  (2.19)

but aggregately analyzing all local-minimum-energy activation states is complex. Through computer simulations and simplified assumptions, [7,8] have come up with a qualitative explanation of the RSF algorithm based on an eigenvalue approach.

3. Analysis of the ALUL Network

Section 2 focused on defining properties and analytical measures for dynamic AMNs, along with presenting some examples of learning algorithms for dynamic AMNs. This section focuses on the analysis of one of the simpler algorithms, the ALUL network. From (2.12) we have that the time-invariant ALUL network can be described by the following interconnection state equation:

W(k+1) = aW(k) + bL(V(k))  (3.1)

where a and b are nonnegative real numbers. Many of the measures introduced in section 2 can easily be determined for the ALUL network. To calculate the incremental synaptic intensity h(k) and the cumulative synaptic intensity H(k), let the initial condition of the interconnection state W_{i,j}(0) be independent of all other interconnection states and independent of all IVs. If E[W_{i,j}(0)] = 0 and VAR[W_{i,j}(0)] = γ, then h(k) and H(k) can be computed directly from (3.1), giving (3.2) and (3.3). In steady state, when a < 1, we have that

P = 2(1 − a)  (3.4)

From this simple relationship between the attenuation parameter a and the plasticity measure P, we can directly relate plasticity to other measures such as the capacity of the network. We define the steady-state capacity as C(i,ε) = lim_{k→∞} C(k,i,ε) for networks where steady state exists. To analytically determine the capacity, first assume that s(k, V(j)) = s(k−j) is a jointly Gaussian random vector. Further assume that s_i(l) for 1 ≤ i ≤ N, 1 ≤ l ≤ m are all independent and identically distributed. Then for N sufficiently large, with f(a) = a^{2(k−j−1)}(1 − a²) and

SNR(k, V(j)) = SNR(k−j) = (N−1) f(a) / (1 − f(a)) = c(a) log N,  j < k  (3.5)

we have that

p(k,j,0) ≈ 1 − N^{1−c(a)/2} / √(2π c(a) log N),  j < k  (3.6)

Given a, we first find the largest m = k − j > 0 for which lim_{N→∞} p(k,j,0) = 1; note that this requires c(a) ≥ 2. By letting c(a) = 2, the maximum m is attained when

f(a) / (1 − f(a)) = 2 log N / N  (3.7)

Solving for m, we get

m = (1/2) · log[ 2 log N / ((N + 2 log N)(1 − a²)) ] / log a + 1  (3.8)

It is also possible to find the value of a that maximizes m. If we let ε = 1 − a², then

m ≈ (1/ε) · log[ (N + 2 log N) ε / (2 log N) ]  (3.9)

m is at a maximum value when ε = 2e log N / N, or when m ≈ N / (2e log N). This corresponds to a ≈ (2m−1)/(2m). Note that this is a factor of e less than the maximum number of IVs allowable in a static HAMN [4,5] such that one of the IVs is recoverable. By following the analysis in [5], the independence and Gaussian assumptions used earlier can be removed; the arguments involve results from exchangeability theory and normal approximation theory.

A similar and somewhat more cumbersome analysis can be performed to show that the maximum steady-state capacity is achieved when a ≈ (2m−1)/(2m), and is given by

lim_{N→∞} C(k,0,ε) = N / (4e log N)  (3.10)

This again is a factor of e less than the maximum number of IVs allowable in a static HAMN [4,5] such that all IVs are recoverable. Fig. 2 shows a Monte Carlo simulation of the number of IVs recoverable in a 64-cell network versus the learning time scale for a varying between .5 and .99. We can see that the network reaches approximate steady state when k ≥ 35. The maximum capacity is achieved when a ≈ .9, and the capacity is around 5.
This is slightly more than the theoretical value predicted by the analysis just shown, as a comparison with Fig. 1 indicates. For smaller simulations conducted with larger networks, the simulated capacity was closer to the predicted value. From the simulations and the analysis we observe that when a is too small, IVs are forgotten at too high a rate, and when a is too high, network saturation occurs.

Using the same arguments, it is possible to analyze the capacity of the network and the efficacy of IVs when k is small. Assuming zero initial conditions and a ≈ (2m−1)/(2m), we can summarize the learning behavior of the ALUL network. The learning behavior can be divided into three phases. In the first phase, for k < N/(4e log N), all IVs are remembered and the characteristics of the network are similar to the HAMN below saturation. In the second phase some IVs are forgotten, as the rate of forgetting becomes nonzero. During this phase the maximum capacity is reached, as shown in Fig. 2. At this capacity the network cannot dynamically recall all IVs, so the network starts to forget more information than it receives. This continues until steady state is reached, where the learning and forgetting rates are equal. If initial conditions are nonzero, the network starts in phase 1, or at the beginning of phase 2 if H(k) is below the value corresponding to the maximum capacity, and at the end of phase 2 for larger H(k).

The calculation of the network saturation probabilities S(k,m) is trivial for large networks once the capacity curves have been found. When m ≤ C(k,0,ε), then S(k,m) ≈ 0; otherwise S(k,m) ≈ 1.

Before leaving this section, let us briefly examine ALUL networks where a(k) and b(k) are time varying. An example of a time-varying network is the marginalist learning scheme introduced in [10]. The network is defined by fixing the value of the SNR, SNR(k, k−1, i) = D(N), for all k. This value is fixed by setting a = 1 and varying b. Since VAR s_i(k, V(k−1)) is a monotonically increasing function of k, b(k) must also be a monotonically increasing function of k. It is not too difficult to show that when k is large, the marginalist learning scheme is equivalent to the steady-state ALUL network defined by (3.1). The argument is based on noting that the steady-state SNR depends not on the update time, but on the difference between the update time and when the IV was stored, as is the case with the marginalist learning scheme. The optimal value of D(N), giving the highest capacity, is obtained when

b(k+1) = (2m / (2m−1)) b(k), where m = N / (4e log N)  (3.11)

If performance is defined by a worst-case criterion, with the criterion being

J(l,N) = min(C(k,0,ε), k ≥ l)  (3.12)

then we conjecture that for l large, no ALUL network as defined in (2.12, 2.13) can have a larger J(l,N) than the optimal ALUL network defined by (3.1). If we consider average capacity, we note that the RL network has an average capacity of N/(8 log N), which is larger than that of the optimal ALUL network defined by (3.1). However, for most envisioned applications a worst-case criterion is a more accurate measure of performance than a criterion based on average capacity.

4. Summary

This paper has introduced a number of simple dynamic neural network models and defined several measures to evaluate the performance of these models. All parameters for the steady-state ALUL network described by (3.1) were evaluated, and the attenuation parameter a giving the largest capacity was found. This capacity was found to be a factor of e less than the static HAMN capacity.
Furthermore, we conjectured that under a worst-case performance criterion no LUL network could perform better than the optimal ALUL network defined by (3.1). Finally, a number of other dynamic models, including BL, RL, and marginalist learning, were shown to be equivalent to ALUL networks under certain conditions.

The network models considered in this paper all have binary vector-valued activation states and may be too simplistic to be considered in many signal processing applications. By generalizing the analysis to more complicated models with analog vector-valued activation states and continuous-time updating, it may be possible to use these generalized models in speech and image processing. A specific example would be a controller for a moving robot. The generalized network models would learn the input data by adaptively changing the interconnections of the network. Old data would be forgotten, and data that was repeatedly being recalled would be reinforced. These network models could also be used when the input data statistics are nonstationary.

References

[1] W. S. McCulloch and W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity", Bulletin of Mathematical Biophysics, 5, 115-133, 1943.
[2] J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", Proc. Natl. Acad. Sci. USA 79, 2554-2558, 1982.
[3] Y. S. Abu-Mostafa and J. M. St. Jacques, "The Information Capacity of the Hopfield Model", IEEE Trans. Inform. Theory, vol. IT-31, 461-464, 1985.
[4] R. J. McEliece, E. C. Posner, E. R. Rodemich and S. S. Venkatesh, "The Capacity of the Hopfield Associative Memory", IEEE Trans. Inform. Theory, vol. IT-33, 461-482, 1987.
[5] A. Kuh and B. W. Dickinson, "Information Capacity of Associative Memories", to be published, IEEE Trans. Inform. Theory.
[6] D. J. Amit, H. Gutfreund, and H. Sompolinsky, "Spin-Glass Models of Neural Networks", Phys. Rev. A, vol. 32, 1007-1018, 1985.
[7] J. J. Hopfield, D. I. Feinstein, and R. G. Palmer, "'Unlearning' has a Stabilizing Effect in Collective Memories", Nature, vol. 304, 158-159, 1983.
[8] R. J. Sasiela, "Forgetting as a Way to Improve Neural-Net Behavior", AIP Conference Proceedings 151, 386-392, 1986.
[9] J. D. Keeler, "Basins of Attraction of Neural Network Models", AIP Conference Proceedings 151, 259-265, 1986.
[10] J. P. Nadal, G. Toulouse, J. P. Changeux, and S. Dehaene, "Networks of Formal Neurons and Memory Palimpsests", Europhysics Letters, Vol. 1, 535-542, 1986.
[11] S. Grossberg, "Nonlinear Neural Networks: Principles, Mechanisms, and Architectures", Neural Networks, in press.
[12] S. S. Venkatesh and D. Psaltis, "Information Storage and Retrieval in Two Associative Nets", California Institute of Technology, Pasadena, Dept. of Elect. Eng., preprint, 1986.

Fig. 1: "HAMN Capacity" (N = 64, 1024 trials). Average number of IVs recoverable versus update time for a = 1.
Fig. 2: "ALUL Capacity" (N = 64, 1024 trials). Number of IVs recoverable versus update time for a = .5, .7, .90, .95, .99.
MODELS WANTED: MUST FIT DIMENSIONS OF SLEEP AND DREAMING*

J. Allan Hobson, Adam N. Mamelak† and Jeffrey P. Sutton‡
Laboratory of Neurophysiology and Department of Psychiatry
Harvard Medical School
74 Fenwood Road, Boston, MA 02115

Abstract

During waking and sleep, the brain and mind undergo a tightly linked and precisely specified set of changes in state. At the level of neurons, this process has been modeled by variations of Volterra-Lotka equations for cyclic fluctuations of brainstem cell populations. However, neural network models based upon rapidly developing knowledge of the specific population connectivities and their differential responses to drugs have not yet been developed. Furthermore, only the most preliminary attempts have been made to model across states. Some of our own attempts to link rapid eye movement (REM) sleep neurophysiology and dream cognition using neural network approaches are summarized in this paper.

1 INTRODUCTION

New models are needed to test the closely linked neurophysiological and cognitive theories that are emerging from recent scientific studies of sleep and dreaming. This section describes four separate but related levels of analysis at which modeling may be applied and outlines some of the desirable features of such models in terms of the burgeoning data of sleep and dream science. In the subsequent sections, we review our own preliminary efforts to develop models at some of the levels discussed.

*Based, in part, upon an invited address by J.A.H. at NIPS, Denver, Dec. 2, 1991 and, in part, upon a review paper by J.P.S., A.N.M. and J.A.H. published in Psychiatric Annals.
†Currently in the Department of Neurosurgery, University of California, San Francisco, CA 94143
‡Also in the Center for Biological Information Processing, Whitaker College, E25-201, Massachusetts Institute of Technology, Cambridge, MA 02139

1.1 THE INDIVIDUAL NEURON

Existing models or "neuromines" faithfully represent membrane properties but ignore the dynamic biochemical changes that alter neural excitability over the long term. This is particularly important in the modeling of state control, where the crucial neurons appear to act more like hormone pumps than like simple electrical transducers. Put succinctly, we need models that consider the biochemical or "wet" aspects of nerve cells, as well as the "dry" or electrical aspects (cf. McKenna et al., in press).

1.2 NEURAL POPULATION INTERACTIONS

To mimic the changes in excitability of the modulatory neurons which control sleep and dreaming, new models are needed which incorporate both the engineering principles of oscillators and the biological principles of time-keeping. The latter principle is especially relevant in determining the dramatically variable long-period time constants that are observed within and across species. For example, we need to equip population models borrowed from field biology (McCarley and Hobson, 1975) with specialized properties of "wet" neurons, as mentioned in section 1.1.

1.3 COGNITIVE CONSEQUENCES OF MODULATION OF NEURAL NETWORKS

To understand the state-dependent changes in cognition, such as those that distinguish waking and dreaming, a potentially fruitful approach is to mimic the known effects of neuromodulation and examine the information processing properties of neural networks. For example, if the input-output fidelity of networks can be altered by changing their mode (see Sutton et al., this volume), we might be better able to understand the changes in both instantaneous associative properties and long-term plasticity alterations that occur in sleep and dreaming. We might thus trap the brain-mind into revealing its rules for making moment-to-moment cross-correlations of its data and for changing the content and status of its storage in memory.

1.4 STATE-DEPENDENT CHANGES IN COGNITION

At the highest level of analysis, psychological data, even that obtained from the introspection of waking and dreaming subjects, need to be more creatively reduced with a view to modeling the dramatic alterations that occur with changes in brain state. As an example, consider the instability of orientation of dreaming, where times, places, persons and actions change without notice. Short of mastering the thorny problem of generating narrative text from a data base, and thus synthesizing an artificial dream, we need to formulate rules and measures for categorizing constancy and transformations (Sutton and Hobson, 1991). Such an approach is a
For example, if the input-output fidelity of networks can be altered by changing their mode (see Sutton et al., this volume), we might be better able to understand the changes in both instantaneous associative properties and long term plasticity alterations that occur in sleep and dreaming. We might thus trap the brain-mind into revealing its rules for making moment-to-moment crosscorrelations of its data and for changing the content and status of its storage in memory. 1.4 STATE-DEPENDENT CHANGES IN COGNITION At the highest level of analysis, psychological data, even that obtained from the introspection of waking and dreaming subjects, need to be more creatively reduced with a view to modeling the dramatic alterations that occur with changes in brain state. As an example, consider the instability of orientation of dreaming, where times, places, persons and actions change without notice. Short of mastering the thorny problem of generating narrative text from a data base, and thus synthesizing an artificial dream, we need to formulate rules and measures of categorizing constancy and transformations (Sutton and Hobson, 1991). Such an approach is a Models Wanted: Must Fit Dimensions of Sleep and Dreaming means of further refining the algorithms of cognition itself, an effort which is now limited to simple activation models that cannot change mode. An important characteristic of the set of new models that are proposed is that each level informs, and is informed by, the other levels. This nested, interlocking feature is represented in figure 1. It should be noted that any erroneous assumptions made at level 1 will have effects at levels 2 and 3 and these will, in turn, impede our capacity to integrate levels 3 and 4. Level 4 models can and should thus proceed with a degree ofindependence from levels 1, 2 and 3. Proceeding from level 1 upward is the "bottom-up" approach, while proceeding from level 4 downward is the "topdown" approach. We like to think it might be possible to take both approaches in our work while according equal respect to each. LEVEL SCHEMA IV COGNITIVE STATES (eg. dream plot sequences) A-.B III MODULATION OF NETWORKS ~~ (eg. hippocampus, cortex) Il NEURAL POPULATIONS (eg. pontine brainstem) I SINGLE NEURONS (eg. NE, 5HT, ACh neurons) FEATURES <C--+D E--+F ,~ ~ (~2~ t?(7 ~ variable associative and learning states modulation of 1-0 processing variable timeconstant oscillator wet hormonal aspects Figure 1: Four levels at which modeling innovations are needed to provide more realistic simulations of brain-mind states such as waking and dreaming. See text for discussion. 5 6 Hobson, Marnelak, and Sutton 2 STATES OF WAKING AND SLEEPING The states of waking and sleeping, including REM and non-REM (NREM) sleep, have characteristic behavioral, neuronal, polygraphic and psychological features that span all four levels. These properties are summarized in figures 2 and 3. Changes occurring within and between different levels are affected by the sleepwake or circadian cycle and by the relative shifts in brain chemistry. 
Figure 2: (a) States of waking and NREM and REM sleeping in humans. Characteristic behavioral, polygraphic (EMG, EEG, EOG) and psychological features are shown for each state: sensation and perception (vivid, externally generated; dull or absent; vivid, internally generated), thought (logical, progressive; logical, perseverative; illogical, bizarre), and movement (continuous, voluntary; episodic, involuntary; commanded but inhibited). (b) Ultradian sleep cycle of NREM and REM sleep shown in detailed sleep-stage graphs of 3 subjects (time in hours). (c) REM sleep periodograms of 15 subjects. From Hobson and Steriade (1986), with permission.

2.1 CIRCADIAN RHYTHMS

The circadian cycle has been studied mathematically using oscillator and other non-linear dynamical models to capture features of sleep-wake rhythms (Moore-Ede and Czeisler, 1984; figure 2). Shorter (ultradian) and longer (infradian) rhythms, relative to the circadian rhythm, have also been examined. In general, oscillators are used to couple neural, endocrine and other pathways important in controlling a variety of functions, such as periods of rest and activity, energy conservation and thermoregulation. The oscillators can be sensitive to external cues or zeitgebers, such as light and daily routines, and there is a strong linkage between the circadian clock and the NREM-REM sleep oscillator.

2.2 RECIPROCAL INTERACTION MODEL

In the 1970s, a brainstem oscillator became identified that was central to regulating sleeping and waking. Discrete cell populations in the pons that were most active during waking, less active in NREM sleep and silent during REM sleep were found to contain the monoamines norepinephrine (NE) and serotonin (5HT). Among the many cell populations that became active during REM sleep, but were generally quiescent otherwise, were cells associated with acetylcholine (ACh) release.
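As a minimal sketch of the kind of two-population limit-cycle dynamics invoked by the reciprocal interaction model (detailed in figure 3 below), the following integrates Lotka-Volterra-style equations for an excitatory cholinergic population x and an inhibitory monoaminergic population y; the coefficients are arbitrary illustrative values, not fitted parameters from the model.

```python
import numpy as np

def reciprocal_interaction(x0=0.5, y0=1.0, a=1.0, b=1.0, c=1.0, d=1.0,
                           dt=0.001, steps=20000):
    """Euler integration of a Lotka-Volterra-style oscillator.

    x: cholinergic (excitatory) activity; y: monoaminergic (inhibitory) activity.
    dx/dt = a*x - b*x*y,  dy/dt = -c*y + d*x*y.  Illustrative sketch only.
    """
    x, y = x0, y0
    xs, ys = [], []
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = reciprocal_interaction()
print("cholinergic activity oscillates between",
      round(xs.min(), 3), "and", round(xs.max(), 3))
```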
Because the neural population model shown in figure 3 uses the limited passive membrane type of neuromine discussed in the introduction, the resulting oscillator has a time-constant in the millisecond range, not even close to the real range of minutes to hours that characterize the sleep-dream cycle (figure 2). As such, the model is clearly incapable of realistically representing the long-term dynamic properties that characterize interacting neuromodulatory populations. To surmount this limitation, two modifications are possible: one is to remodel the individual neuromines equipping them with mathematics describing up and down regulation of receptors and intracellular biochemistry that results in long-term changes in synaptic efficacy (c/. McKenna et al., in press); another is to model the longer time constants of the sleep cycle in terms of protein transport times between the two populations in brainstems of realistically varying width (c/. Hobson and Steriade, 1986). 3 NEUROCOGNITIVE ASPECTS OF WAKING, SLEEPING AND DREAMING Since the discovery that REM sleep is correlated with dreaming, significant advances have been made in understanding both the neural and cognitive processes occurring in different states of the sleep-wake cycle. During waking, wherein the brain is in a state of relative aminergic dominance, thought content and cognition display consistency and continuity. NREM sleep mentation is typically characterized by ruminative thoughts void of perceptual vividness or emotional tone. Within this state, the aminergic and cholinergic systems are more evenly balanced than in either the wake or REM sleep states. As previously noted, REM sleep is a state associated with relative cholinergic activation. Its mental status manifestations include graphic, emotionally charged and formally bizarre images encompassing visual hallucinations and delusions. 3.1 ACTIVATION.SYNTHESIS MODEL The activation-synthesis hypothesis (Hobson and McCarley, 1977) was the first account of dream mentation based on the neurophysiological state of REM sleep. It considered factors present at levels 3 and 4, according to the scheme in section 1, and attempted to bridge these two levels. In the model, cholinergic activation and reciprocal monoaminergic disinhibition of neural networks in REM sleep generated the source of dream formation. However, the details of how neural networks might actually synthesize information in the REM sleep state was not specified. Models Wanted: Must Fit Dimensions of Sleep and Dreaming 3.2 NEURAL NETWORK MODELS Several neural network models have subsequently been proposed that also attempt to bridge levels 3 and 4 (for example, Crick and Mitchison, 1983). Recently, Mamelak and Hobson (1989) have suggested a neurocognitive model of dream bizarreness that extends the activation-synthesis hypothesis. In the model, the monoaminergic withdrawal in sleep relative to waking leads to a decrease in the signal-to-noise ratio in neural networks (figure 4). When this is coupled with phasic cholinergic excitation of the cortex, via brainstem ponto-geniculo-occipital (PGO) cell firing (figure 5), cognitive information becomes altered and discontinuous. A central premise of the model is that the monoamines and acetylcholine function as neuromodulators, which modify ongoing activity in networks, without actually supplying afferent input information. Implementation of the Mamelak and Hobson model as a temporal sequencing network is described by Sutton et al. in this volume. 
Computer simulations demonstrate how changes in modulation similar to some monoaminergic and cholinergic effects can completely alter the way information is collectively sequenced within the same network. This occurs even in the absence of plastic changes in the weights connecting the artificial neurons. Incorporating plasticity, which generally involves neuromodulators such as the monoamines, is a logical next step. This would build important level 1 features into a level 3-4 model and potentially provide useful insight into some state-dependent learning operations.

[Figure 4 appears here: panel (a) shows a brain schematic; panel (b) plots firing probability against relative membrane potential for several values of Q; panels (c) and (d) show spike rasters and network outputs for Q = 2 and Q = 10.]

Figure 4: (a) Monoaminergic innervation of the brain is widespread. (b) Plot of the neuron firing probability as a function of the relative membrane potential for various values of monoaminergic modulation (parameterized by Q). Higher (lower) modulation is correlated with smaller (larger) Q values. (c) Neuron firing when subjected to supra- and sub-threshold inputs of +10 mV and -10 mV, respectively, for Q = 2 and Q = 10. (d) For a given input, the repertoire of network outputs generally increases as Q increases. From Mamelak and Hobson (1989), with permission.

[Figure 5 appears here: panel (a) shows a brain schematic; panel (b) shows unit recordings aligned with lateral geniculate (LGB) traces.]

Figure 5: (a) Cholinergic input from the brainstem to the thalamus and cortex is widespread. (b) Unit recordings from PGO burst cells in the pons are correlated with PGO waves recorded in the lateral geniculate bodies (LGB) of the thalamus.

4 CONCLUSION

After discussing four levels at which new models are needed, we have outlined some preliminary efforts at modeling states of waking and sleeping. We suggest that this area of research is ripe for the development of integrative models of brain and mind.

Acknowledgements

Supported by NIH grant MH 13,923, the HMS/MMHC Research & Education Fund, the Livingston, Dupont-Warren and McDonnell-Pew Foundations, DARPA under ONR contract N00014-85-K-0124, the Sloan Foundation and Whitaker College.

References

Crick F, Mitchison G (1983) The function of dream sleep. Nature 304: 111-114.
Hobson JA, McCarley RW (1977) The brain as a dream-state generator: An activation-synthesis hypothesis of the dream process. Am J Psych 134: 1335-1368.
Hobson JA, Steriade M (1986) Neuronal basis of behavioral state control. In: Mountcastle VB (ed) Handbook of Physiology - The Nervous System, Vol IV. Bethesda: Am Physiol Soc, 701-823.
Mamelak AN, Hobson JA (1989) Dream bizarreness as the cognitive correlate of altered neuronal behavior in REM sleep. J Cog Neurosci 1(3): 201-222.
McCarley RW, Hobson JA (1975) Neuronal excitability over the sleep cycle: A structural and mathematical model. Science 189: 58-60.
McKenna T, Davis J, Zornetzer (eds) In press. Single Neuron Computation. San Diego: Academic.
Moore-Ede MC, Czeisler CA (eds) (1984) Mathematical Models of the Circadian Sleep-Wake Cycle. New York: Raven.
Sutton JP, Hobson JA (1991) Graph theoretical representation of dream content and discontinuity. Sleep Research 20: 164.
Compressive neural representation of sparse, high-dimensional probabilities

Xaq Pitkow
Department of Brain and Cognitive Sciences
University of Rochester
Rochester, NY 14607
[email protected]

Abstract

This paper shows how sparse, high-dimensional probability distributions could be represented by neurons with exponential compression. The representation is a novel application of compressive sensing to sparse probability distributions rather than to the usual sparse signals. The compressive measurements correspond to expected values of nonlinear functions of the probabilistically distributed variables. When these expected values are estimated by sampling, the quality of the compressed representation is limited only by the quality of sampling. Since the compression preserves the geometric structure of the space of sparse probability distributions, probabilistic computation can be performed in the compressed domain. Interestingly, functions satisfying the requirements of compressive sensing can be implemented as simple perceptrons. If we use perceptrons as a simple model of feedforward computation by neurons, these results show that the mean activity of a relatively small number of neurons can accurately represent a high-dimensional joint distribution implicitly, even without accounting for any noise correlations. This comprises a novel hypothesis for how neurons could encode probabilities in the brain.

1 Introduction

Behavioral evidence shows that animal behaviors are often influenced not only by the content of sensory information but also by its uncertainty. Different theories have been proposed about how neuronal populations could represent this probabilistic information [1, 2]. Here we propose a new theory of how neurons could represent probability distributions, based on the burgeoning field of "compressive sensing."

An arbitrary probability distribution over multiple variables has a parameter count that is exponential in the number of variables. Representing these probabilities can therefore be prohibitively costly. One common approach is to use graphical models to parameterize the distribution in terms of a smaller number of interactions. Here I consider an alternative approach. In many cases of interest, only a few unknown states have high probabilities while the rest have negligible ones; such a distribution is called "sparse". I will show that sufficiently sparse distributions can be described by a number of parameters that is merely linear in the number of variables.

Until recently, it was generally thought that encoding of sparse signals required dense sampling at a rate greater than or equal to signal bandwidth. However, recent findings prove that it is possible to fully characterize a signal at a rate limited not by its bandwidth but by its information content [3, 4, 5, 6], which can be much smaller. Here I apply such compression to sparse probability distributions over binary variables, which are, after all, just signals with some particular properties.

In most applications of compressive sensing, the ultimate goal is to reconstruct the original signal efficiently. Here, we do not wish to reconstruct the signal at all. Instead, we use the guarantees that the signal could be reconstructed to ensure that the signal is accurately represented by its compressed version. Below, when we do reconstruct it is only to show that our method actually works in practice.
We don't expect that the brain needs to explicitly reconstruct a probability distribution in some canonical mathematical representation in order to gain the advantages of probabilistic reasoning.

Traditional compressive sensing considers signals that live in an N-dimensional space but have only S nonzero coordinates in some basis. We say that such a signal is S-sparse. If we were told the location of the nonzero entries, then we would need only S measurements to characterize their coefficients and thus the entire signal. But even if we don't know where those entries are, it still takes little more than S linear measurements to perfectly reconstruct the signal. Furthermore, those measurements can be fixed in advance without any knowledge of the structure of the signal. Under certain conditions, these excellent properties can be guaranteed [3, 4, 5].

The basic mathematical setup of compressive sensing is as follows. Assume that an N-dimensional signal s has S nonzero coefficients. We make M linear measurements y of this signal by applying the M × N matrix A:

    y = As                                                                  (1)

We would then like to recover the original signal s from these measurements. Under conditions on the measurement matrix A described below, the original can be found perfectly by computing the vector with minimal ℓ1 norm that reproduces the measurements,

    ŝ = argmin_{s′} ‖s′‖_{ℓ1}   such that   As′ = y = As                    (2)

The ℓ1 norm is usually used instead of ℓ0 because (2) can be solved far more efficiently [3, 4, 5, 7].

Compressive sensing is generally robust to two deviations from this ideal setup. First, target signals may not be strictly S-sparse. However, they may be "compressible" in the sense that they are well approximated by an S-sparse signal. Signals whose rank-ordered coefficients fall off at least as fast as rank^{-1} satisfy this property [4]. Second, measurements may be corrupted by noise with bounded amplitude ε. Under these conditions, the error of the ℓ1-reconstructed signal ŝ is bounded by the error of the best S-sparse approximation s_S plus a term proportional to the measurement noise:

    ‖ŝ − s‖_{ℓ2} ≤ C0 ‖s_S − s‖_{ℓ2} / √S + C1 ε                            (3)

for some constants C0 and C1 [8].

Several conditions on A have been used in compressive sensing to guarantee good performance [4, 6, 9, 10, 11]. Modulo various nuances, they all essentially ensure that most or all relevant sparse signals lie sufficiently far from the null space of A: It would be impossible to recover signals in the null space since their measurements are all zero and cannot therefore be distinguished. The most commonly used condition is the Restricted Isometry Property (RIP), which says that A preserves ℓ2 norms of all S-sparse vectors within a factor of 1 ± δ_S that depends on the sparsity,

    (1 − δ_S) ‖s‖_{ℓ2} ≤ ‖As‖_{ℓ2} ≤ (1 + δ_S) ‖s‖_{ℓ2}                     (4)

If A satisfies the RIP with small enough δ_S, then ℓ1 recovery is guaranteed to succeed. For random matrices whose elements are independent and identically distributed Gaussian or Bernoulli variates, the RIP holds as long as the number of measurements M satisfies

    M ≥ C S log(N/S)                                                        (5)

for some constant C that depends on δ_S [8]. No other recovery method, however intractable, can perform substantially better than this [8].
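As a concrete illustration of the setup in (1)-(2), the following sketch recovers a sparse vector by iterative soft thresholding applied to the relaxed objective 0.5·‖As − y‖² + λ‖s‖₁. The dimensions, sparsity level, penalty and step size are illustrative choices, not values used in this paper.

    import numpy as np

    # Minimal sketch of l1 recovery: an i.i.d. Gaussian sensing matrix A,
    # an S-sparse signal s_true, M linear measurements y = A s_true (eq. 1),
    # and iterative soft thresholding toward the l1-minimizing solution.
    rng = np.random.default_rng(0)
    N, M, S = 256, 64, 5
    A = rng.standard_normal((M, N)) / np.sqrt(M)
    s_true = np.zeros(N)
    s_true[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
    y = A @ s_true

    lam = 1e-3
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/Lipschitz constant of the gradient
    s = np.zeros(N)
    for _ in range(2000):
        g = A.T @ (A @ s - y)                   # gradient of the quadratic term
        z = s - step * g
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

    print("recovery error:", np.linalg.norm(s - s_true))

With M well above S log(N/S), as guaranteed by (5), the recovered s matches s_true up to the small bias introduced by the penalty.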
2 Compressing sparse probability distributions

Compressive sensing allows us to use far fewer resources to accurately represent high-dimensional objects if they are sufficiently sparse. Even if we don't ultimately intend to reconstruct the signal, the reconstruction theorem described above (3) ensures that we have implicitly represented all the relevant information. This compression proves to be extremely useful when representing multivariate joint probability distributions, whose size is exponentially large even for the simplest binary states.

Consider the signal to be a probability distribution over an n-dimensional binary vector x ∈ {−1, +1}^n, which I will write sometimes as a function p(x) and sometimes as a vector p indexed by the binary state x. I assume p is sparse in the canonical basis of delta-functions on each state, δ_{x,x′}. The dimensionality of this signal is N = 2^n, which for even modest n can be so large it cannot be represented explicitly.

The measurement matrix A for probability vectors has size M × 2^n. Each row corresponds to a different measurement, indexed by i. Each column corresponds to a different binary state x. This column index x ranges over all possible binary vectors of length n, in some conventional sequence. For example, if n = 3 then the column index would take the 8 values

    x ∈ {−−− ; −−+ ; −+− ; −++ ; +−− ; +−+ ; ++− ; +++}

Each element of the measurement matrix, A_i(x), can be viewed as a function applied to the binary state. When this matrix operates on a probability distribution p(x), the result y is a vector of M expectation values of those functions, with elements

    y_i = A_i p = Σ_x A_i(x) p(x) = ⟨A_i(x)⟩_{p(x)}                          (6)

For example, if A_i(x) = x_i then y_i = ⟨x_i⟩_{p(x)} measures the mean of x_i drawn from p(x). For suitable measurement matrices A, we are guaranteed accurate reconstruction of S-sparse probability distributions as long as the number of measurements is

    M ≥ O(S log N/S) = O(Sn − S log S)                                       (7)

The exponential size of the probability vector, N = 2^n, is cancelled by the logarithm. For distributions with a fixed sparseness S, the required number of measurements per variable, M/n, is then independent of the number of variables.¹

In many cases of interest it is impractical to calculate these expectation values directly: Recall that the probabilities may be too expensive to represent explicitly in the first place. One remedy is to draw T samples x_t from the distribution p(x), and use a sum over these samples to approximate the expectation values,

    y_i ≈ (1/T) Σ_t A_i(x_t),    x_t ∼ p(x)                                  (8)

The probability p̂(x) estimated from T samples has errors with variance p(x)(1 − p(x))/T, which is bounded by 1/4T. This allows us to use the performance limits from robust compressive sensing, which according to (3) creates an error in the reconstructed probabilities that is bounded by

    ‖p̂ − p‖_{ℓ2} ≤ C0 ‖p_S − p‖_{ℓ2} + C1/√T                                 (9)

where p_S is a vector with the top S probabilities preserved and the rest set to zero. Strictly speaking, (3) applies to bounded errors, whereas here we have a bounded variance but possibly large errors. To ensure accurate reconstruction, we can choose the constant C1 large enough that errors larger than some threshold (say, 10 standard deviations) have a negligible probability.

¹ Depending on the problem, the number of significant nonzero entries S may grow with the number of variables. This growth may be fast (e.g. the number of possible patterns grows as e^n) or slow (e.g. the number of possible translations of a given pattern grows only as n).
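The sampling estimate (8) is easy to sketch in code. Here the sparse toy distribution, its support, and the use of random threshold functions (of the kind introduced in section 2.1 below) as the measurement functions are all illustrative assumptions.

    import numpy as np

    # Minimal sketch of equation (8): estimate y_i = <A_i(x)> by averaging
    # A_i over T samples drawn from a sparse distribution on binary states.
    rng = np.random.default_rng(1)
    n, S, T = 10, 4, 1000
    support = rng.choice(2 ** n, S, replace=False)   # states with nonzero probability
    probs = rng.dirichlet(np.ones(S))                # probabilities on the support

    def state_vector(index):
        bits = (index >> np.arange(n)) & 1           # binary expansion of the state index
        return 2.0 * bits - 1.0                      # map {0,1} -> {-1,+1}

    samples = rng.choice(support, size=T, p=probs)   # T samples x_t ~ p(x)

    M = 30
    W = rng.standard_normal((M, n))                  # random weights for threshold functions
    X = np.stack([state_vector(s) for s in samples])
    y_hat = np.mean(np.sign(X @ W.T), axis=0)        # y_i ~ (1/T) sum_t A_i(x_t)

    print(y_hat[:5])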
2.1 Measurements by random perceptrons

In compressive sensing it is common to use a matrix with independent Bernoulli-distributed random values, A_i(x) ∼ B(1/2), which guarantees A satisfies the RIP [12]. Each row of this matrix represents all possible outputs of an arbitrarily complicated Boolean function of the n binary variables x. Biological neural networks would have great difficulty computing such arbitrary functions in a simple manner. However, neurons can easily compute a large class of simpler Boolean functions, the perceptrons. These are simple threshold functions of a weighted average of the input

    A_i(x) = sgn( Σ_j W_ij x_j − θ_i )                                       (10)

where W is an M × n matrix. Here I take W to have elements drawn randomly from a standard normal distribution, W_ij ∼ N(0, 1), and call the resultant functions "random perceptrons". An example measurement matrix for random perceptrons is shown in Figure 1. These functions are readily implemented by individual neurons, where x_j is the instantaneous activity of neuron j, W_ij is the synaptic weight between neurons i and j, and the sgn function approximates a spiking threshold at θ.

[Figure 1 appears here: an M × 2^n array of ±1 values, with measurement index i on the vertical axis and state vector x on the horizontal axis.]

Figure 1: Example measurement matrix A_i(x) for M = 100 random perceptrons applied to all 2^9 possible binary vectors of length n = 9.

The step nonlinearity sgn is not essential, but some type of nonlinearity is: Using a purely linear function of the states, A = Wx, would result in measurements y = Ap = W⟨x⟩. This provides at most n linearly independent measurements of p(x), even when M > n. In most cases this is not enough to adequately capture the full distribution. Nonlinear A_i(x) allow a greater number of linearly independent measurements of p(x). Although the dimensionality of W is merely M × n, which is much smaller than the 2^n-dimensional space of probabilities, (10) can generate O(2^{n²}) distinct perceptrons [13]. By including an appropriate threshold, a perceptron can assign any individual state x a positive response and assign a negative response to every other state. This shows that random perceptrons generate the canonical basis and can thus span the space of possible p(x). In what follows, I assume that θ = 0 for simplicity.

In the Appendix I prove that random perceptrons with zero threshold satisfy the requirements for compressive sensing in the limit of large n. Present research is directed toward deriving the condition number of these measurement matrices for finite n, in order to provide rigorous bounds on the number of measurements required in practice. Below I present empirical evidence that even a small number of random perceptrons largely preserves the information about sparse distributions.

3 Experiments

3.1 Fidelity of compressed sparse distributions

To test random perceptrons in compressive sensing of probabilities, I generated sparse distributions using small Boltzmann machines [14], and compressed them using random perceptrons driven by samples from the Boltzmann machine. Performance was then judged by comparing ℓ1 reconstructions to the true distributions, which are exactly calculable for modest n.

In a Boltzmann Machine, binary states x occur with probabilities given by the Boltzmann distribution with energy function E(x),

    p(x) ∝ e^{−E(x)},    E(x) = −b^⊤ x − x^⊤ J x                             (11)

determined by biases b and pairwise couplings J. Sampling from this distribution can be accomplished by running Glauber dynamics [15], at each time step turning a unit on with probability p(x_i = +1 | x_{\i}) = 1/(1 + e^{ΔE}), where ΔE = E(x_i = +1, x_{\i}) − E(x_i = −1, x_{\i}). Here x_{\i} is the vector of all components of x except the ith. For simulations I distinguished between two types of units, hidden and visible, x = (h, v).
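The Glauber sampler is a few lines of code. This sketch uses b = 0 and the coupling scale from the experiments below (J_ij ∼ N(0, 1/3)); the network size and sweep count are illustrative.

    import numpy as np

    # Minimal sketch of Glauber dynamics for the Boltzmann distribution (11)
    # with b = 0, using the flip rule p(x_i = +1 | rest) = 1/(1 + exp(DeltaE)).
    rng = np.random.default_rng(2)
    n = 10
    J = rng.normal(0.0, np.sqrt(1.0 / 3.0), size=(n, n))
    J = (J + J.T) / 2.0          # symmetric couplings
    np.fill_diagonal(J, 0.0)

    def glauber_sample(J, sweeps=100):
        x = rng.choice([-1.0, 1.0], size=J.shape[0])
        for _ in range(sweeps):
            for i in range(J.shape[0]):
                field = J[i] @ x         # sum_j J_ij x_j; the diagonal is zero
                # For E(x) = -x^T J x with symmetric J,
                # DeltaE = E(x_i=+1) - E(x_i=-1) = -4 * field.
                delta_e = -4.0 * field
                x[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(delta_e)) else -1.0
        return x

    samples = np.stack([glauber_sample(J) for _ in range(200)])
    print(samples.mean(axis=0))

Passing each sample through sgn(Wh) with a fixed random W, and averaging over samples, yields exactly the compressive measurements of equation (8).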
On each trial I first generated a sample of all units according to (11). I then fixed only the visible units and allowed the hidden units to fluctuate according to the conditional probability p(h|v) to be represented. This probability is given again by the Boltzmann distribution, now with energy function

    E(h|v) = −(b_h + J_hv v)^⊤ h − h^⊤ J_hh h                                (12)

All bias terms b were set to zero, and all pairwise couplings J were random draws from a zero-mean normal distribution, J_ij ∼ N(0, 1/3). Experiments used n hidden and n visible units, with n ∈ {8, 10, 12}. This distribution of couplings produced sparse posterior distributions whose rank-ordered probabilities fell faster than rank^{-1} and were thus compressible [4].

The compression was accomplished by passing the hidden unit activities h through random perceptrons a with weights W, according to a = sgn(Wh). These perceptron activities fluctuate along with their inputs. The mean activity of these perceptron units compressively senses the probability distribution according to (8). This process of sampling and then compressing a Boltzmann distribution can be implemented by the simple neural network shown in Figure 2.

[Figure 2 appears here: left, the network architecture (Inputs v → feedforward J_vh → Samplers h with recurrent J_hh → feedforward W → Perceptrons a); right, the activity of each population as a function of time.]

Figure 2: Compressive sensing of a probability distribution by model neurons. Left: a neural architecture for generating and then encoding a sparse, high-dimensional probability distribution. Right: activity of each population of neurons as a function of time. Sparse posterior probability distributions are generated by a Boltzmann Machine with visible units v (Inputs), hidden units h (Samplers), feedforward couplings J_vh from visible to hidden units, and recurrent connections between hidden units J_hh. The visible units' activities are fixed by an input. The hidden units are stochastic, and sample from a probability distribution p(h|v). The samples are recoded by feedforward weights W to random perceptrons a. The mean activity y of the time-dependent perceptron responses captures the sparse joint distribution of the hidden units.

We are not ultimately interested in reconstruction of the large, sparse distribution, but rather the distribution's compressed representation. Nonetheless, reconstruction is useful to show that the information has been preserved. I reconstruct sparse probabilities using nonnegative ℓ1 minimization with measurement constraints [16, 17], minimizing

    ‖p‖_{ℓ1} + λ ‖Ap − y‖²_{ℓ2}                                              (13)

where λ is a regularization parameter that was set to 2T in all simulations.

Reconstructions were quite good, as shown in Figure 3. Even with far fewer measurements than signal dimensions, reconstruction accuracy is limited only by the sampling of the posterior. Enough random perceptrons do not lose any available information.

In the context of probability distributions, ℓ1 reconstruction has a serious flaw: All distributions have the same ℓ1 norm: ‖p‖_{ℓ1} = Σ_x p(x) = 1! To minimize the ℓ1 norm, therefore, the estimate will not be a probability distribution. Nonetheless, the individual probabilities of the most significant states are accurately reconstructed, and only the highly improbable states are set to zero. Figure 3B shows that the shortfall is small: ℓ1 reconstruction recovers over 90% of the total probability mass.
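The nonnegative objective (13) admits a particularly simple solver sketch: on the orthant p ≥ 0 the ℓ1 term is just the sum of entries, so projected gradient steps suffice. This is a stand-in for the solvers cited in [16, 17], with an illustrative iteration count.

    import numpy as np

    # Minimal sketch of minimizing ||p||_1 + lam * ||A p - y||^2 over p >= 0
    # (equation 13) by projected gradient descent.
    def reconstruct(A, y, lam, iters=5000):
        N = A.shape[1]
        p = np.zeros(N)
        step = 1.0 / (2.0 * lam * np.linalg.norm(A, 2) ** 2)  # 1/Lipschitz of the smooth term
        for _ in range(iters):
            grad = 2.0 * lam * A.T @ (A @ p - y) + 1.0  # +1 is the gradient of sum(p)
            p = np.maximum(p - step * grad, 0.0)        # project onto p >= 0
        return p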
3.2 Preserving computationally important relationships

There is value in being able to compactly represent these high-dimensional objects. However, it would be especially useful to perform probabilistic computations using these representations, such as marginalization and evidence integration. Since marginalization is a linear operation on the probability distribution, this is readily implementable in the linearly compressed domain. In contrast, evidence integration is a multiplicative process acting in the canonical basis, so this operation will be more complicated after the linear distortions of compressive measurement A. Nonetheless, such computations should be feasible as long as the informative relationships are preserved in the compressed space: Similar distributions should have similar compressive representations, and dissimilar distributions should have dissimilar compressive representations. In fact, that is precisely the guarantee of compressive sensing: topological properties of the underlying space are preserved in the compressive domain [18].

[Figure 3 appears here: (A) reconstructed and true probabilities over states x; (B) a histogram of the summed reconstructed probability (.9, .99, .999); (C) scatter plots of reconstruction versus truth for measurements M ∈ {20, 80, 320} and samples T ∈ {10², 10³, 10⁴}; (D) reconstruction error (MSE) versus measurement ratio M/n ∈ {2, ..., 32} for n = 8, 10, 12, with the sampling-error floor marked.]

Figure 3: Reconstruction of sparse posteriors from random perceptron measurements. (A) A sparse posterior distribution over 10 nodes in a Boltzmann machine is sampled 1000 times, fed to 50 random perceptrons, and reconstructed by nonnegative ℓ1 minimization. (B) A histogram of the sum of reconstructed probabilities reveals the small shortfall from a proper normalization of 1. (C) Scatter plots show reconstructions versus true probabilities. Each box uses different numbers of compressive measurements M and numbers of samples T. (D) With increasing numbers of compressive measurements, the mean squared reconstruction error falls to 1/T = 10⁻³, the limit imposed by finite sampling.

Figure 4 illustrates how not only are individual sparse distributions recoverable despite significant compression, but the topology of the set of all such distributions is retained. For this experiment, an input x is drawn from a dictionary of input patterns X ⊂ {+1, −1}^n. Each pattern in X is a translation of a single binary template x⁰ whose elements are generated by thresholding a noisy sinusoid (Figure 4A): x⁰_j = sgn[4 sin(2πj/n) + ξ_j] with ξ_j ∼ N(0, 1). On each trial, one of these possible patterns is drawn randomly with equal probability 1/|X|, and then is measured by a noisy process that randomly flips bits with a probability α = 0.35 to give a noisy pattern r. This process induces a posterior distribution over the possible input patterns

    p(x|r) = (1/Z) p(x) Π_i p(r_i | x_i) = (1/Z) p(x) α^{h(x,r)} (1 − α)^{n − h(x,r)}      (14)

where h(x, r) is the Hamming distance between x and r. This posterior is nonzero for all patterns in the dictionary. The noise level and the similarities between the dictionary elements together control the sparseness. 1000 trials of this process generates samples from the set of all possible posterior distributions.

Just as the underlying set of inputs has a translation symmetry, the set of all possible posterior distributions has a cyclic permutation symmetry. This symmetry can be revealed by a nonlinear embedding [19] of the set of posteriors into two dimensions (Figure 4B). Compressive sensing of these posteriors by 10 random perceptrons produces a much lower-dimensional embedding that preserves this symmetry.
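The generative process and posterior of equation (14) can be sketched directly; the random seed and print-out are of course illustrative.

    import numpy as np

    # Minimal sketch of equation (14): a dictionary of cyclic translations of
    # a thresholded noisy sinusoid, a noisy observation with flip probability
    # alpha, and the induced posterior over patterns (uniform prior).
    rng = np.random.default_rng(3)
    n, alpha = 100, 0.35
    template = np.sign(4.0 * np.sin(2.0 * np.pi * np.arange(n) / n)
                       + rng.standard_normal(n))
    X = np.stack([np.roll(template, k) for k in range(n)])  # the dictionary

    true_idx = rng.integers(n)
    flips = rng.random(n) < alpha
    r = X[true_idx] * np.where(flips, -1.0, 1.0)            # noisy pattern

    h = np.sum(X != r, axis=1)                               # Hamming distances h(x, r)
    log_post = h * np.log(alpha) + (n - h) * np.log(1.0 - alpha)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                                       # p(x | r)

    print("posterior mass at true pattern:", post[true_idx])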
Figure 4C shows the same nonlinear embedding algorithm applied to the reduced representation, and one sees the same topological pattern. In compressive sensing, similarity is measured by Euclidean distance. When applied to probability distributions it will be interesting to examine instead how well information-geometric measures like the Kullback-Leibler divergence are preserved under this dimensionality reduction [20].

[Figure 4 appears here: (A) panels (i)-(iv) illustrating the true pattern x, the posterior, the noisy pattern r, and the true pattern index over the 100 possible patterns X; (B) nonlinear embedding of posterior distributions (N = 100); (C) nonlinear embedding of compressed posteriors (M = 10).]

Figure 4: Nonlinear embeddings of a family of probability distributions with a translation symmetry. (A) The process of generating posterior distributions: (i) A set of 100 possible patterns is generated as cyclic translations of a binary pattern (only 9 shown). With uniform probability, one of these patterns is selected (ii), and a noisy version is obtained by randomly flipping bits with probability 0.35 (iii). From such noisy patterns, an observer can infer posterior probability distributions over possible inputs (iv). (B) The set of posteriors from 1000 iterations of this process is nonlinearly mapped [19] from 100 dimensions to 2 dimensions. Each point represents one posterior and is colored according to the actual pattern from which the noisy observations were made. The permutation symmetry of this process is revealed as a circle in this mapping. (C) This circular structure is retained even after each posterior is compressed into the mean output of 10 random perceptrons.

4 Discussion

Probabilistic inference appears to be essential for both animals and machines to perform well on complex tasks with natural levels of ambiguity, but it remains unclear how the brain represents and manipulates probability. Present population models of neural inference either struggle with high-dimensional distributions [1] or encode them by hard-to-measure high-order correlations [2]. Here I have proposed an alternative mechanism by which the brain could efficiently represent probabilities: random perceptrons. In this model, information about probabilities is compressed and distributed in neural population activity.
Surprisingly, more structured connections cannot allow a network with the same computational elements to encode distributions with substantially fewer neurons, since compressive sensing is already nearly optimal [8]. On the other hand, some representational structure may make it easier to perform computations later. Note that unknown randomness is not an impediment to further processing, as reconstruction can be performed even without explicit knowledge of random perceptron measurement matrix [23]. Even in the most convenient representations, inference is generally intractable and requires approximation. Since compressive sensing preserves the essential geometric relationships of the signal space, learning and inference based on these relationships may be no harder after the compression, and could even be more efficient due to the reduced dimensionality. Biologically plausible mechanisms for implementing probabilistic computations in the compressed representation is important work for the future. Appendix: Asymptotic orthogonality of random perceptron matrix To evaluate the quality of the compressive sensing matrix A, we need to ensure that S-sparse vectors are not projected to zero by the action of A. Here I show that the random ? perceptrons are asymptotically well-conditioned: A?>A? ? I for large n and M , where A? = A/ M . This ensures that distinct inputs yield distinct measurements. 7 First I compute the mean and variance of the mean inner product hCxx0 iW between columns of A? for a given pair of states x 6= x0 . For compactness I will write wi for the ith row of the perceptron weight matrix W . Angle brackets h iW indicate averages over random perceptron weights Wij ? N (0, 1). We find DX E 1 X hCxx0 iW = A?i (x)A?i (x0 ) = hsgn (wi ?x) sgn (wi ?x0 )iW (15) i i M W and since the different wi are independent, this implies that hCxx0 iW = hsgn (wi ?x) sgn (wi ?x0 )iW (16) The n-dimensional half-space in W where sgn (wi ? x) = +1 intersects with the corresponding half-space for x0 in a wedge-shaped region with an angle of ? = cos?1 (x ? x0 /kxk`2 kx0 k`2 ). This angle is related to the Hamming distance h = h(x, x0 ): ?(h) = cos?1 (x ? x0 /n) = cos?1 (1 ? 2h/n) (17) hCxx0 iW =P [ sgn (wi ?x) = sgn (wi ?x0 )] ? P [ sgn (wi ?x) 6= sgn (wi ?x0 )] (18) The signs of wi ?x and wi ?x0 agree within this wedge region and its reflection about W = 0, and disagree in the supplementary wedges. The mean inner product is therefore =1 ? 2 ? ?(h) The variance of Cxx0 caused by variability in W is given by 2 2 Vxx0 = Cxx ? hCxx0 iW 0 W E E XD XD = A?2i (x)A?2i (x0 ) + A?i (x)A?i (x0 )A?j (x)A?j (x0 ) W i=j * = X i 2 W i6=j sgn (wi ?x) sgn (wi ?x0 ) M M 2 (19) (20) ? 2 hCxx0 iW (21) + X  sgn (wi ?x) sgn (wi ?x0 ) 2 2 ? ? + ? hCxx0 iW (22) M M W W i6=j M2 ? M 1 2 + (1 ? 2?(h)/?)2 ? hCxx0 iW (23) = M M2   2 1 = 1 ? 1 ? ?2 ?(h(x, x0 )) (24) M This variance falls with M , so for large numbers of measurements M the inner products between columns concentrates around the various state-dependent mean values (19). Next I consider the diversity of inner products for different pairs (x, x0 ) of binary state vectors. I take the limit of large M so that the diversity is dominated by variations over the particular pairs, rather than by variations over measurements. The mean inner product depends only on the Hamming 0 distance h between  x and x , which for sparse signals with random support has a binomial distribun ?n tion, p(h) = h 2 with mean n/2 and variance n/4. 
Designating by an overbar the average over randomly chosen states x and x0 , the mean C and variance ?C 2 of the inner product are C = hCxx0 iW = 1 ? ?2 cos?1 (1 ? 2h n )=0  2 ?C n 16 4 ?C 2 = ?h2 = = 2 ?h 4 ? 2 n2 ? n (25) (26) This proves that in the limit of large n and M , different columns of the random perceptron measurement matrix have inner products that concentrate around 0. The matrix of inner products is thus orthonormal almost surely, A?>A? ? I. Consequently, with enough measurements the random perceptrons asymptotically provide an isometry. Future work will investigate how the measurement matrix behaves for finite n and M , which will determine the number of measurements required in practice to capture a signal of a given sparseness. Acknowledgments Thanks to Alex Pouget, Jeff Beck, Shannon Starr, and Carmelita Navasca for helpful conversations. 8 References [1] Ma W, Beck J, Latham P, Pouget A (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9: 1432?8. [2] Berkes P, Orb?an G, Lengyel M, Fiser J (2011) Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science 331: 83?7. [3] Cand`es E, Romberg J, Tao T (2006) Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory 52: 489?509. [4] Cand`es E, Tao T (2006) Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory 52: 5406?5425. [5] Donoho D (2006) Compressed sensing. IEEE Transactions on Information Theory 52: 1289?1306. [6] Cand`es E, Plan Y (2011) A probabilistic and RIPless theory of compressed sensing. IEEE Transactions on Information Theory 57: 7235?7254. [7] Donoho DL, Maleki A, Montanari A (2009) Message-passing algorithms for compressed sensing. Proc Natl Acad Sci USA 106: 18914?9. [8] Cand`es E, Wakin M (2008) An introduction to compressive sampling. Signal Processing Magazine 25: 21?30. [9] Kueng R, Gross D (2012) RIPless compressed sensing from anisotropic measurements. Arxiv preprint arXiv:12051423 . [10] Calderbank R, Howard S, Jafarpour S (2010) Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property. Selected Topics in Signal Processing 4: 358?374. [11] Gurevich S, Hadani R (2009) Statistical rip and semi-circle distribution of incoherent dictionaries. arXiv cs.IT. [12] Mendelson S, Pajor A, Tomczak-Jaegermann N (2006) Uniform uncertainty principle for Bernoulli and subgaussian ensembles. arXiv math.ST. [13] Irmatov A (2009) Bounds for the number of threshold functions. Discrete Mathematics and Applications 6: 569?583. [14] Ackley D, Hinton G, Sejnowski T (1985) A learning algorithm for Boltzmann machines. Cognitive Science 9: 147?169. [15] Glauber RJ (1963) Time-dependent statistics of the Ising model. Journal of Mathematical Physics 4: 294?307. [16] Yang J, Zhang Y (2011) Alternating direction algorithms for L1 problems in compressive sensing. SIAM Journal on Scientific Computing 33: 250?278. [17] Zhang Y, Yang J, Yin W (2010) YALL1: Your ALgorithms for L1. CAAM Technical Report : TR09-17. [18] Baraniuk R, Cevher V, Wakin MB (2010) Low-dimensional models for dimensionality reduction and signal recovery: A geometric perspective. Proceedings of the IEEE 98: 959?971. [19] der Maaten LV, Hinton G (2008) Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research 9: 2579?2605. 
Acknowledgments

Thanks to Alex Pouget, Jeff Beck, Shannon Starr, and Carmelita Navasca for helpful conversations.

References

[1] Ma W, Beck J, Latham P, Pouget A (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9: 1432-8.
[2] Berkes P, Orbán G, Lengyel M, Fiser J (2011) Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science 331: 83-7.
[3] Candès E, Romberg J, Tao T (2006) Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory 52: 489-509.
[4] Candès E, Tao T (2006) Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory 52: 5406-5425.
[5] Donoho D (2006) Compressed sensing. IEEE Transactions on Information Theory 52: 1289-1306.
[6] Candès E, Plan Y (2011) A probabilistic and RIPless theory of compressed sensing. IEEE Transactions on Information Theory 57: 7235-7254.
[7] Donoho DL, Maleki A, Montanari A (2009) Message-passing algorithms for compressed sensing. Proc Natl Acad Sci USA 106: 18914-9.
[8] Candès E, Wakin M (2008) An introduction to compressive sampling. Signal Processing Magazine 25: 21-30.
[9] Kueng R, Gross D (2012) RIPless compressed sensing from anisotropic measurements. arXiv preprint arXiv:1205.1423.
[10] Calderbank R, Howard S, Jafarpour S (2010) Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property. Selected Topics in Signal Processing 4: 358-374.
[11] Gurevich S, Hadani R (2009) Statistical RIP and semi-circle distribution of incoherent dictionaries. arXiv cs.IT.
[12] Mendelson S, Pajor A, Tomczak-Jaegermann N (2006) Uniform uncertainty principle for Bernoulli and subgaussian ensembles. arXiv math.ST.
[13] Irmatov A (2009) Bounds for the number of threshold functions. Discrete Mathematics and Applications 6: 569-583.
[14] Ackley D, Hinton G, Sejnowski T (1985) A learning algorithm for Boltzmann machines. Cognitive Science 9: 147-169.
[15] Glauber RJ (1963) Time-dependent statistics of the Ising model. Journal of Mathematical Physics 4: 294-307.
[16] Yang J, Zhang Y (2011) Alternating direction algorithms for L1 problems in compressive sensing. SIAM Journal on Scientific Computing 33: 250-278.
[17] Zhang Y, Yang J, Yin W (2010) YALL1: Your ALgorithms for L1. CAAM Technical Report TR09-17.
[18] Baraniuk R, Cevher V, Wakin MB (2010) Low-dimensional models for dimensionality reduction and signal recovery: A geometric perspective. Proceedings of the IEEE 98: 959-971.
[19] van der Maaten L, Hinton G (2008) Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research 9: 2579-2605.
[20] Carter KM, Raich R, Finn WG, Hero AO (2011) Information-geometric dimensionality reduction. IEEE Signal Process Mag 28: 89-99.
[21] Olshausen BA, Field DJ (1996) Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381: 607-9.
[22] Stephens GJ, Mora T, Tkačik G, Bialek W (2008) Thermodynamics of natural images. arXiv q-bio.NC.
[23] Isely G, Hillar CJ, Sommer FT (2010) Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication. arXiv q-bio.NC.
Newton-Like Methods for Sparse Inverse Covariance Estimation

Figen Oztoprak
Sabanci University
[email protected]

Peder A. Olsen
IBM, T. J. Watson Research Center
[email protected]

Jorge Nocedal
Northwestern University
[email protected]

Steven J. Rennie
IBM, T. J. Watson Research Center
[email protected]

Abstract

We propose two classes of second-order optimization methods for solving the sparse inverse covariance estimation problem. The first approach, which we call the Newton-LASSO method, minimizes a piecewise quadratic model of the objective function at every iteration to generate a step. We employ the fast iterative shrinkage thresholding algorithm (FISTA) to solve this subproblem. The second approach, which we call the Orthant-Based Newton method, is a two-phase algorithm that first identifies an orthant face and then minimizes a smooth quadratic approximation of the objective function using the conjugate gradient method. These methods exploit the structure of the Hessian to efficiently compute the search direction and to avoid explicitly storing the Hessian. We also propose a limited memory BFGS variant of the orthant-based Newton method. Numerical results, including comparisons with the method implemented in the QUIC software [1], suggest that all the techniques described in this paper constitute useful tools for the solution of the sparse inverse covariance estimation problem.

1 Introduction

Covariance selection, first described in [2], has come to refer to the problem of estimating a normal distribution that has a sparse inverse covariance matrix P, whose non-zero entries correspond to edges in an associated Gaussian Markov Random Field [3]. A popular approach to covariance selection is to maximize an ℓ1-penalized log likelihood objective [4]. This approach has also been applied to related problems, such as sparse multivariate regression with covariance estimation [5], and covariance selection under a Kronecker product structure [6]. In this paper, we consider the same objective function as in these papers, and present several Newton-like algorithms for minimizing it. Following [4, 7, 8], we state the problem as

    P* = argmax_{P ≻ 0}  log det(P) − trace(SP) − λ ‖vec(P)‖₁                (1)

where λ is a (fixed) regularization parameter,

    S = (1/N) Σ_{i=1}^{N} (x_i − μ)(x_i − μ)^⊤                               (2)

is the empirical sample covariance, μ is known, the x_i ∈ R^n are assumed to be independent, identically distributed samples, and vec(P) defines a vector in R^{n²} obtained by stacking the columns of P. We recast (1) as the minimization problem

    min_{P ≻ 0}  F(P) := L(P) + λ ‖vec(P)‖₁                                  (3)

where L is the negative log likelihood function

    L(P) = −log det(P) + trace(SP).                                           (4)

The convex problem (3) has a unique solution P* that satisfies the optimality conditions [7]

    S − [P*]⁻¹ + λZ* = 0,                                                     (5)

where

    Z*_ij = 1             if P*_ij > 0
    Z*_ij = −1            if P*_ij < 0
    Z*_ij ∈ [−1, 1]       if P*_ij = 0.

We note that Z* solves the dual problem

    Z* = argmax_{‖vec(Z)‖_∞ ≤ 1, S+λZ ≻ 0}  U(Z),    U(Z) = −log det(S + λZ) + n.   (6)

The main contribution of this paper is to propose two classes of second-order methods for solving problem (3). The first class employs a piecewise quadratic model in the step computation, and can be seen as a generalization of the sequential quadratic programming method for nonlinear programming [9]; the second class minimizes a smooth quadratic model of F over a chosen orthant face in R^{n²}. We argue that both types of methods constitute useful tools for solving the sparse inverse covariance matrix estimation problem.
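The objective (3)-(4) and the optimality condition (5) are simple to evaluate numerically, as in the following sketch; this is an illustration of the formulas above, not code from the paper's implementation.

    import numpy as np

    # Minimal sketch of F(P) = -logdet(P) + trace(SP) + lam*||vec(P)||_1 and
    # of a residual that is zero exactly when (5) holds for some subgradient Z.
    def objective(P, S, lam):
        sign, logdet = np.linalg.slogdet(P)
        assert sign > 0, "P must be positive definite"
        return -logdet + np.trace(S @ P) + lam * np.abs(P).sum()

    def optimality_residual(P, S, lam):
        G = S - np.linalg.inv(P)                 # gradient of the smooth part L
        R = np.where(P != 0,
                     G + lam * np.sign(P),       # nonzero entries: G + lam*sign(P) = 0
                     np.maximum(np.abs(G) - lam, 0.0))  # zero entries: |G| <= lam
        return np.max(np.abs(R))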
An overview of recent work on the sparse inverse covariance estimation problem is given in [10, 11]. First-order methods proposed include block-coordinate descent approaches, such as COVSEL [4, 8] and GLASSO [12], greedy coordinate descent, known as SINCO [13], projected subgradient methods PSM [14], first order optimal gradient ascent [15], and the alternating linearization method ALM [16]. Second-order methods include the inexact interior point method IPM proposed in [17], and the coordinate relaxation method described in [1] and implemented in the QUIC software. It is reported in [1] that QUIC is significantly faster than the ALM, GLASSO, PSM, SINCO and IPM methods. We compare the algorithms presented in this paper to the method implemented in QUIC.

2 Newton Methods

We can define a Newton iteration for problem (1) by constructing a quadratic, or piecewise quadratic, model of F using first and second derivative information. It is well known [4] that the derivatives of the log likelihood function (4) are given by

    g := L′(P) = vec(S − P⁻¹)   and   H := L″(P) = (P⁻¹ ⊗ P⁻¹),              (7)

where ⊗ denotes the Kronecker product. There are various ways of using these quantities to define a model of F, and each gives rise to a different Newton-like iteration.

In the Newton-LASSO Method, we approximate the objective function F at the current iterate P_k by the piecewise quadratic model

    q_k(P) = L(P_k) + g_k^⊤ vec(P − P_k) + ½ vec^⊤(P − P_k) H_k vec(P − P_k) + λ ‖vec(P)‖₁,   (8)

where g_k = L′(P_k), and similarly for H_k. The trial step of the algorithm is computed as a minimizer of this model, and a backtracking line search ensures that the new iterate lies in the positive definite cone and decreases the objective function F. We note that the minimization of q_k is often called the LASSO problem [18] in the case when the unknown is a vector.

It is advantageous to perform the minimization of (8) in a reduced space; see e.g. [11] and the references therein. Specifically, at the beginning of the k-th iteration we define the set F_k of (free) variables that are allowed to move, and the active set A_k. To do so, we compute the steepest descent direction for the function F, which is given by −(g_k + λZ_k), where

    (Z_k)_ij = 1                    if (P_k)_ij > 0
    (Z_k)_ij = −1                   if (P_k)_ij < 0
    (Z_k)_ij = −1                   if (P_k)_ij = 0 and [g_k]_ij > λ          (9)
    (Z_k)_ij = 1                    if (P_k)_ij = 0 and [g_k]_ij < −λ
    (Z_k)_ij = −(1/λ) [g_k]_ij      if (P_k)_ij = 0 and |[g_k]_ij| ≤ λ.

The sets F_k, A_k are obtained by considering a small step along this steepest descent direction, as this guarantees descent in q_k(P). For variables satisfying the last condition in (9), a small perturbation of P_ij will not decrease the model q_k. This suggests defining the active and free sets of variables at iteration k as

    A_k = {(i, j) : (P_k)_ij = 0 and |[g_k]_ij| ≤ λ},
    F_k = {(i, j) : (P_k)_ij ≠ 0 or |[g_k]_ij| > λ}.                          (10)
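The phase-I computation (9)-(10) translates directly into array operations, as in the following sketch.

    import numpy as np

    # Minimal sketch of (9)-(10): the orthant indicator Z_k, the active/free
    # partition, and the minimum-norm (steepest descent) direction -(G + lam*Z).
    def phase_one(P, G, lam):
        Z = np.sign(P)                                        # cases 1-2 of (9)
        zero = (P == 0)
        Z = np.where(zero & (G > lam), -1.0, Z)               # case 3
        Z = np.where(zero & (G < -lam), 1.0, Z)               # case 4
        Z = np.where(zero & (np.abs(G) <= lam), -G / lam, Z)  # case 5
        active = zero & (np.abs(G) <= lam)                    # A_k of (10)
        free = ~active                                        # F_k of (10)
        steepest_descent = -(G + lam * Z)                     # zero on the active set
        return Z, free, steepest_descent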
The algorithm minimizes the model q_k over the set of free variables. Let us define p_F = vec(P)_F to be the free variables, and let p_F^k = vec_F(P_k) denote their value at the current iterate — and similarly for other quantities. Let us also define H_F^k to be the matrix obtained by removing from H_k the columns and rows corresponding to the active variables (with indices in A_k). The reduced model is given by

    q_F(P) = L(P_k) + (g_F^k)^⊤ (p_F − p_F^k) + ½ (p_F − p_F^k)^⊤ H_F^k (p_F − p_F^k) + λ ‖p_F‖₁.   (11)

The search direction d is defined by

    d = [ d_F ]  =  [ p̂_F − p_F^k ]                                           (12)
        [ d_A ]     [      0      ]

where p̂_F is the minimizer of (11). The algorithm performs a line search along the direction D = mat(d), where the operator mat satisfies mat(vec(D)) = D. The line search starts with the unit steplength and backtracks, if necessary, to obtain a new iterate P_{k+1} that satisfies the sufficient decrease condition and positive definiteness (checked using a Cholesky factorization):

    F(P_{k+1}) − F(P_k) < σ (q_F(P_{k+1}) − q_F(P_k))   and   P_{k+1} ≻ 0,    (13)

where σ ∈ (0, 1).

It is suggested in [1] that coordinate descent is the most effective iteration for solving the LASSO problem (11). We claim, however, that other techniques merit careful investigation. These include gradient projection [19] and iterative shrinkage thresholding algorithms, such as ISTA [20] and FISTA [21]. In section 3 we describe a Newton-LASSO method that employs the FISTA iteration. Convergence properties of the Newton-LASSO method that rely on the exact solution of the LASSO problem (8) are given in [22]. In practice, it is more efficient to solve problem (8) inexactly, as discussed in section 6. The convergence properties of inexact Newton-LASSO methods will be the subject of a future study.

The Orthant-Based Newton method computes steps by solving a smooth quadratic approximation of F over an appropriate orthant — or more precisely, over an orthant face in R^{n²}. The choice of this orthant face is made, as before, by computing the steepest descent direction of F, and is characterized by the matrix Z_k in (9). Specifically, the first four conditions in (9) identify an orthant in R^{n²} where variables are allowed to move, while the last condition in (9) determines the variables to be held at zero. Therefore, the sets of free and active variables are defined as in (10). If we define z_F = vec_F(Z), then the quadratic model of F over the current orthant face is given by

    Q_F(P) = L(P_k) + g_F^⊤ (p_F − p_F^k) + ½ (p_F − p_F^k)^⊤ H_F (p_F − p_F^k) + λ z_F^⊤ p_F.   (14)

The minimizer is p̂_F = p_F^k − H_F^{−1}(g_F + λz_F), and the step of the algorithm is given by

    d = [ d_F ]  =  [ p̂_F − p_F^k ]                                           (15)
        [ d_A ]     [      0      ]

If p_F^k + d lies outside the current orthant, we project it onto this orthant and perform a backtracking line search to obtain the new iterate P_{k+1}, as discussed in section 4.

The orthant-based Newton method therefore moves from one orthant face to another, taking advantage of the fact that F is smooth in every orthant in R^{n²}. In Figure 1 we compare the two methods discussed so far.

The optimality conditions (5) show that P* is diagonal when λ ≥ |S_ij| for all i ≠ j, and given by (diag(S) + λI)⁻¹. This suggests that a good choice for the initial value (for any value of λ > 0) is

    P_0 = (diag(S) + λI)⁻¹.                                                   (16)
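The globalization ingredients common to both methods can be sketched as follows: the starting point (16), and a backtracking line search that checks positive definiteness with a Cholesky factorization, in the spirit of condition (13). Here `model_drop` stands for the (negative) model reduction, and σ and the iteration cap are illustrative constants.

    import numpy as np

    def initial_point(S, lam):
        return np.diag(1.0 / (np.diag(S) + lam))     # P0 = (diag(S) + lam*I)^{-1}

    def is_positive_definite(P):
        try:
            np.linalg.cholesky(P)
            return True
        except np.linalg.LinAlgError:
            return False

    def backtrack(F, P, D, model_drop, sigma=0.5, max_halvings=30):
        f0 = F(P)
        alpha = 1.0
        for _ in range(max_halvings):
            trial = P + alpha * D
            if is_positive_definite(trial) and F(trial) - f0 < sigma * alpha * model_drop:
                return trial
            alpha *= 0.5
        return P   # fall back to the current iterate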
Numerical experiments indicate that this choice is advantageous for all methods considered. A popular orthant-based method for the case when the unknown is a vector is OWL [23]; see also [11]. Rather than using the Hessian (7), OWL employs a quasi-Newton approximation to minimize the reduced quadratic, and applies an alignment procedure to ensure descent. However, for reasons that are difficult to justify, the OWL step employs the reduced inverse Hessian (as opposed to the inverse of the reduced Hessian), and this can give steps of poor quality. We have dispensed with the alignment as it is not needed in our experiments. The convergence properties of orthant-based Newton methods are the subject of a future study (we note in passing that the convergence proof given in [23] is not correct).

3 A Newton-LASSO Method with FISTA Iteration

Let us consider a particular instance of the Newton-LASSO method that employs the Fast Iterative Shrinkage Thresholding Algorithm (FISTA) [21] to solve the reduced subproblem (11). We recall that for the problem

min_{x ∈ R^{n²}} f(x) + λ‖x‖₁,   (17)

where f is a smooth convex quadratic function, the ISTA iteration [20] is given by

x_i = S_{λ/c}( x̂_i − (1/c) ∇f(x̂_i) ),   (18)

where c is a constant such that cI − f''(x) ⪰ 0, and the FISTA acceleration is given by

x̂_{i+1} = x_i + ((t_i − 1)/t_{i+1}) (x_i − x_{i−1}),   (19)

where x̂_1 = x_0, t_1 = 1, and t_{i+1} = (1 + √(1 + 4t_i²))/2. Here S_λ denotes the soft thresholding operator given by

(S_λ(y))_i = 0 if |y_i| ≤ λ, and y_i − λ sign(y_i) otherwise.

We can apply the ISTA iteration (18) to the reduced quadratic in (11) starting from x_0 = vec_{F_k}(X_0) (which is not necessarily equal to p_k = vec_{F_k}(P_k)). Substituting in the expressions for the first and second derivative in (7) gives

x_i = S_{λ/c}( vec_{F_k}(X_i) − (1/c)[ g_{kF_k} + H_{kF_k} vec_{F_k}(X_i − P_k) ] )
    = S_{λ/c}( vec_{F_k}(X_i) − (1/c) vec_{F_k}(S − 2P_k⁻¹ + P_k⁻¹ X_i P_k⁻¹) ),

where the constant c should satisfy c > 1/(λ_min(P_k))². The FISTA acceleration step is given by (19). Let x̂ denote the free-variables part of the (approximate) solution of (11) obtained by the FISTA iteration. Phase I of the Newton-LASSO-FISTA method selects the free and active sets, F_k, A_k, as indicated by (10). Phase II applies the FISTA iteration to the reduced problem (11), and sets P_{k+1} ← mat([x̂; 0]). The computational cost of K iterations of the FISTA algorithm is O(Kn³).
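A minimal sketch of the resulting inner solver follows. It implements (18)-(19) with the gradient written in matrix form, vec_F(S − 2P⁻¹ + P⁻¹XP⁻¹); the function and variable names are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_threshold(Y, tau):
    """Elementwise soft thresholding operator S_tau."""
    return np.sign(Y) * np.maximum(np.abs(Y) - tau, 0.0)

def newton_lasso_fista_step(S, P, lam, free, num_iters=20):
    """Approximately minimize the reduced model (11) with FISTA.

    free : boolean mask of F_k; entries outside F_k are held at zero.
    Returns the trial matrix mat([x_hat; 0]).
    """
    Pinv = np.linalg.inv(P)
    c = 1.0 / np.linalg.eigvalsh(P).min() ** 2    # bound on the largest Hessian eigenvalue
    X = P.copy()
    X[~free] = 0.0
    Y, t = X.copy(), 1.0                          # x_hat_1 = x_0, t_1 = 1
    for _ in range(num_iters):
        G = S - 2.0 * Pinv + Pinv @ Y @ Pinv      # gradient of the quadratic model at Y
        X_new = soft_threshold(Y - G / c, lam / c)  # ISTA step (18)
        X_new[~free] = 0.0                        # keep active variables at zero
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        Y = X_new + ((t - 1.0) / t_new) * (X_new - X)  # acceleration (19)
        X, t = X_new, t_new
    return X
```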
4 An Orthant-Based Newton-CG Method

We now consider an orthant-based Newton method in which a quadratic model of F is minimized approximately using the conjugate gradient (CG) method. This approach is attractive since, in addition to the usual advantages of CG (optimal Krylov iteration, flexibility), each CG iteration can be efficiently computed by exploiting the structure of the Hessian matrix in (7).

Phase I of the orthant-based Newton-CG method computes the matrix Z_k given in (9), which is used to identify an orthant face in R^{n²}. Variables satisfying the last condition in (9) are held at zero and their indices are assigned to the set A_k, while the rest of the variables are assigned to F_k and are allowed to move according to the signs of Z_k: variables with (Z_k)_ij = 1 must remain non-negative, and variables with (Z_k)_ij = −1 must remain non-positive. Having identified the current orthant face, phase II of the method constructs the quadratic model Q_F in the free variables, and computes an approximate solution by means of the conjugate gradient method, as described in Algorithm 1.

Algorithm 1: CG method for minimizing the reduced model Q_F.
input: gradient g, orthant indicator z, current iterate P_0, maximum steps K, residual tolerance ε_r, and the regularization parameter λ.
output: approximate Newton direction d = cg(P_0, g, z, λ, K).

  n = size(P_0, 1), G = mat(g), Z = mat(z);
  A = {(i, j) : [P_0]_ij = 0 and |G_ij| ≤ λ};
  B = P_0⁻¹, X_0 = 0_{n×n}, x_0 = vec(X_0);
  R_0 = −(G + λZ), [R_0]_A ← 0;
  k = 0, q_0 = r_0 = vec(R_0);
  while k ≤ min(n², K) and ‖r_k‖ > ε_r do
    Q_k = reshape(q_k, n, n);
    Y_k = B Q_k B, [Y_k]_A ← 0, y_k = vec(Y_k);
    α_k = (r_k^T r_k)/(q_k^T y_k);
    x_{k+1} = x_k + α_k q_k;
    r_{k+1} = r_k − α_k y_k;
    β_k = (r_{k+1}^T r_{k+1})/(r_k^T r_k);
    q_{k+1} = r_{k+1} + β_k q_k;
    k ← k + 1;
  end
  return d = x_{k+1}

The search direction of the method is given by D = mat(d), where d denotes the output of Algorithm 1. If the trial step P_k + D lies in the current orthant, it is the optimal solution of (14). Otherwise, there is at least one index such that (i, j) ∈ A_k and [L'(P_k + D)]_ij ∉ [−λ, λ], or (i, j) ∈ F_k and (P_k + D)_ij Z_ij < 0. In this case, we perform a projected line search to find a point in the current orthant that yields a decrease in F. Let Π(·) denote the orthogonal projection onto the orthant face defined by Z_k, i.e.,

Π(P_ij) = P_ij if sign(P_ij) = sign((Z_k)_ij), and 0 otherwise.   (20)

The line search computes a steplength α_k to be the largest member of the sequence {1, 1/2, ..., 1/2^i, ...} such that

F(Π(P_k + α_k D)) ≤ F(P_k) + σ ∇̃F(P_k)^T (Π(P_k + α_k D) − P_k),   (21)

where σ ∈ (0, 1) is a given constant and ∇̃F denotes the minimum norm subgradient of F. The new iterate is defined as P_{k+1} = Π(P_k + α_k D).

The conjugate gradient method requires computing matrix-vector products involving the reduced Hessian, H_kF. For our problem, we have

H_kF (p_F − p_kF) = [ H_k [p_F − p_kF; 0] ]_F = vec_F( P_k⁻¹ mat([p_F − p_kF; 0]) P_k⁻¹ ).   (22)

The second line follows from the identity (A ⊗ B) vec(C) = vec(BCA^T). The cost of performing K steps of the CG algorithm is O(Kn³) operations, and K = n² steps is needed to guarantee an exact solution. Our practical implementation computes a small number of CG steps relative to n, K = O(1), and as a result the search direction is not an accurate approximation of the true Newton step. However, such inexact Newton steps achieve a good balance between the computational cost and the quality of the direction.
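The key to making CG cheap is the Kronecker identity behind (22): a Hessian-vector product never forms the n² × n² matrix H_k. A small sketch of this product, in our own notation, is:

```python
import numpy as np

def reduced_hessian_vecprod(Pinv, v_free, free):
    """Apply H_kF to a free-space vector using (22): vec_F(P^-1 mat(v) P^-1).

    Pinv   : inverse of the current iterate P_k (n x n)
    v_free : values of the free entries; free : boolean mask (n x n)
    Cost is two n x n matrix products, O(n^3), instead of O(n^4).
    """
    V = np.zeros_like(Pinv)
    V[free] = v_free            # embed [v_F; 0] as a matrix
    Y = Pinv @ V @ Pinv         # (P^-1 kron P^-1) vec(V) = vec(P^-1 V P^-1)
    return Y[free]              # restrict back to the free variables
```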
5 Quasi-Newton Methods

The methods considered so far employ the exact Hessian of the likelihood function L, but one can also approximate it using (limited memory) quasi-Newton updating. At first glance it may not seem promising to approximate a complicated Hessian like (7) in this manner, but we will see that quasi-Newton updating is indeed effective, provided that we store matrices using the compact limited memory representations [9]. Let us consider an orthant-based method that minimizes the quadratic model (14), where H_F is replaced by a limited memory BFGS matrix, which we denote by B_F. This matrix is not formed explicitly, but is defined in terms of the difference pairs

y_k = g_{k+1} − g_k,  s_k = vec(P_{k+1} − P_k).   (23)

It is shown in [24, eq. (5.11)] that the minimizer of the model Q_F is given by

p̂_F = p_F + B_F⁻¹(−λz_F − g_F)
    = p_F + (1/θ)(−λz_F − g_F) + (1/θ²) R_F^T W (I − (1/θ) M W^T R_F R_F^T W)⁻¹ M W^T R_F (−λz_F − g_F).   (24)

Here R_F is a matrix consisting of the set of unit vectors that span the subspace of free variables, θ is a scalar, W is an n² × 2t matrix containing the t most recent correction pairs (23), and M is a 2t × 2t matrix formed by inner products between the correction pairs. The total cost of computing the minimizer p̂_F is 2t²|F| + 4t|F| operations, where |F| is the cardinality of F. Since the memory parameter t in the quasi-Newton updating scheme is chosen to be a small number, say between 5 and 20, the cost of computing the subspace minimizer (24) is quite affordable. A similar approach was taken in [25] for the related constrained inverse covariance sparsity problem. We have noted above that OWL, which is an orthant-based quasi-Newton method, does not correctly approximate the minimizer (24). We note also that quasi-Newton updating can be employed in Newton-LASSO methods, but we do not discuss this approach here for the sake of brevity.

6 Numerical Experiments

We generated test problems by first creating a random sparse inverse covariance matrix¹, Σ⁻¹, and then sampling data to compute a corresponding non-sparse empirical covariance matrix S. The dimensions, sparsity, and conditioning of the test problems are given along with the results in Table 2. For each data set, we solved problem (3) with λ values in the range [0.01, 0.5]. The number of samples used to compute the sample covariance matrix was 10n.

¹http://www.cmap.polytechnique.fr/~aspremon/CovSelCode.html, [7]

The algorithms we tested are listed in Table 1. With the exception of C:QUIC, all of these algorithms were implemented in MATLAB. Here NL and OBN are abbreviations for the methods in Figure 1. NL-Coord is a MATLAB implementation of the QUIC algorithm that follows the C version [1] faithfully.

Algorithm  | Description
NL-FISTA   | Newton-LASSO-FISTA method
NL-Coord   | Newton-LASSO method using coordinate descent
OBN-CG-K   | Orthant-based Newton-CG method with a limit of K CG iterations
OBN-CG-D   | OBN-CG-K with K = 5 initially and increased by 1 every 3 iterations
OBN-LBFGS  | Orthant-based quasi-Newton method (see section 5)
ALM*       | Alternating linearization method [26]
C:QUIC     | The C implementation of QUIC given in [1]

Table 1: Algorithms tested. *For ALM, the termination criterion was changed to the ℓ∞ norm and the value of ABSTOL was set to 10⁻⁶ to match the stopping criteria of the other algorithms.

We have also used the original C implementation of QUIC and refer to it as C:QUIC. For the Alternating Linearization Method (ALM) we utilized the MATLAB software available at [26], which implements the first-order method described in [16]. The NL-FISTA algorithm terminated the FISTA iteration when the minimum norm subgradient of the LASSO subproblem q_F became less than 1/10 of the minimum norm subgradient of F at the previous step.

Let us compare the computational cost of the inner iteration techniques used in the Newton-like methods discussed in this paper. (i) Applying K steps of the FISTA iteration requires O(Kn³) operations. (ii) Coordinate descent, as implemented in [1], requires O(Kn|F|) operations for K coordinate descent sweeps through the set of free variables. (iii) Applying K_CG iterations of the CG method costs O(K_CG n³) operations.

The algorithms were terminated when either 10n iterations were executed or the minimum norm subgradient of F was sufficiently small, i.e. when ‖∇̃F(P)‖∞ ≤ 10⁻⁶. The time limit of each run was set to 5000 seconds.
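As an illustration of this experimental setup, one way to generate such a test instance is sketched below. The construction details (sparsity pattern, diagonal shift) are our assumptions, since the paper defers to the external generator cited in the footnote.

```python
import numpy as np

def make_test_problem(n, density=0.02, num_samples_factor=10, seed=0):
    """Create a random sparse inverse covariance and an empirical covariance S."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n, n)) * (rng.random((n, n)) < density)
    Theta = (A + A.T) / 2.0                       # sparse symmetric candidate
    # shift the diagonal so Theta (the true inverse covariance) is positive definite
    Theta += (0.1 - np.linalg.eigvalsh(Theta).min()) * np.eye(n)
    Sigma = np.linalg.inv(Theta)
    X = rng.multivariate_normal(np.zeros(n), Sigma, size=num_samples_factor * n)
    S = np.cov(X, rowvar=False, bias=True)        # non-sparse empirical covariance
    return Theta, S
```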
The results presented in Table 2 show that the ALM method was never the fastest algorithm, but nonetheless outperformed some second-order methods when the solution was less sparse. The numbers in bold indicate the fastest MATLAB implementation for each problem. As for the other methods, no algorithm appears to be consistently superior to the others, and the best choice may depend on problem characteristics. The Newton-LASSO method with coordinate descent (NL-Coord) is the most efficient when the sparsity level is below 1%, but the methods introduced in this paper, NL-FISTA, OBN-CG and OBN-LBFGS, seem more robust and efficient for problems that are less sparse. Based on these results, OBN-LBFGS appears to be the best choice as a universal solver for the covariance selection problem. The C implementation of the QUIC algorithm is roughly five times faster than its MATLAB counterpart (NL-Coord). C:QUIC was best in the two sparsest conditions, but not in the two densest conditions. We expect that optimized C implementations of the presented algorithms will also be significantly faster. Note also that the crude strategy for dynamically increasing the number of CG steps in OBN-CG-D was effective, and we expect it could be further improved. Our focus in this paper has been on exploring optimization methods and ideas rather than implementation efficiency. However, we believe the observed trends will hold even for highly optimized versions of all tested algorithms.

Problem: n = 500, Card(Σ⁻¹) = 2.4%
λ            0.5            0.1            0.05           0.01
card(P*)     0.74%          7.27%          11.83%         32.48%
cond(P*)     8.24           27.38          51.01          118.56
             iter / time    iter / time    iter / time    iter / time
NL-FISTA     8 / 5.71       10 / 22.01     11 / 37.04     12 / 106.27
NL-Coord     21 / 3.86      49 / 100.63    66 / 279.69    103 / 1885.89
OBN-CG-5     15 / 4.07      97 / 26.24     257 / 70.91    1221 / 373.63
OBN-CG-D     12 / 3.88      34 / 15.41     65 / 43.29     189 / 275.29
OBN-LBFGS    47 / 5.37      178 / 21.92    293 / 38.23    519 / 84.13
ALM          445 / 162.96   387 / 152.76   284 / 115.11   574 / 219.80
C:QUIC       16 / 0.74      41 / 15.62     58 / 35.64     100 / 206.42

Problem: n = 500, Card(Σ⁻¹) = 20.1%
card(P*)     0.21%          14.86%         25.66%         47.33%
cond(P*)     3.39           16.11          32.27          99.49
NL-FISTA     4 / 1.25       19 / 13.12     15 / 34.53     13 / 100.90
NL-Coord     4 / 0.42       14 / 19.69     21 / 71.51     55 / 791.84
OBN-CG-5     3 / 0.83       27 / 7.36      101 / 28.40    795 / 240.90
OBN-CG-D     3 / 0.84       15 / 5.22      31 / 14.14     176 / 243.55
OBN-LBFGS    9 / 1.00       82 / 11.42     155 / 23.04    455 / 78.33
ALM          93 / 35.75     78 / 32.98     149 / 61.35    720 / 292.43
C:QUIC       6 / 0.19       13 / 3.79      21 / 11.91     56 / 103.58

Problem: n = 1000, Card(Σ⁻¹) = 3.5%
card(P*)     0.18%          6.65%          13.19%         25.03%
cond(P*)     6.22           18.23          39.59          132.13
NL-FISTA     7 / 28.20      9 / 106.79     12 / 203.07    12 / 801.79
NL-Coord     9 / 5.23       24 / 225.59    36 / 951.23    >5000
OBN-CG-5     9 / 15.34      51 / 87.73     108 / 198.17   1103 / 2026.26
OBN-CG-D     8 / 15.47      21 / 51.99     39 / 132.38    171 / 1584.14
OBN-LBFGS    34 / 18.27     111 / 80.02    178 / 111.49   548 / 384.30
ALM          247 / 617.63   252 / 639.49   186 / 462.34   734 / 1826.29
C:QUIC       10 / 2.38      22 / 46.14     34 / 186.87    72 / 1445.17

Problem: n = 1000, Card(Σ⁻¹) = 11%
card(P*)     0.10%          8.18%          18.38%         36.34%
cond(P*)     4.20           11.75          26.75          106.34
NL-FISTA     4 / 9.03       7 / 72.21      10 / 156.46    22 / 554.08
NL-Coord     4 / 2.23       12 / 79.71     19 / 408.62    49 / 4837.46
OBN-CG-5     3 / 4.70       20 / 35.85     47 / 83.42     681 / 1778.88
OBN-CG-D     3 / 4.61       12 / 26.87     27 / 78.98     148 / 2055.44
OBN-LBFGS    8 / 4.29       67 / 40.31     124 / 82.51    397 / 297.90
ALM          113 / 283.99   99 / 255.79    106 / 267.02   577 / 1448.83
C:QUIC       6 / 1.18       11 / 17.42     19 / 90.62     52 / 1100.72

Problem: n = 2000, Card(Σ⁻¹) = 1%
card(P*)     0.13%          1.75%          4.33%          14.68%
cond(P*)     7.41           23.71          46.54          134.54
NL-FISTA     8 / 264.94     10 / 1039.08   10 / 1490.37   >5000
NL-Coord     14 / 54.33     34 / 1178.07   >5000          >5000
OBN-CG-5     13 / 187.41    78 / 896.24    203 / 2394.95  >5000
OBN-CG-D     9 / 127.11     27 / 532.15    43 / 1038.26   >5000
OBN-LBFGS    41 / 115.13    155 / 497.31   254 / 785.36   610 / 2163.12
ALM          >5000          >5000          >5000          >5000
C:QUIC       11 / 18.07     17 / 183.53    40 / 818.54    >5000

Problem: n = 2000, Card(Σ⁻¹) = 18.7%
card(P*)     0.05%          1.49%          10.51%         31.68%
cond(P*)     2.32           4.72           17.02          79.61
NL-FISTA     P* = P0        7 / 153.18     9 / 694.93     12 / 2852.86
NL-Coord     P* = P0        7 / 71.55      13 / 1152.86   >5000
OBN-CG-5     P* = P0        6 / 71.54      21 / 250.11    397 / 4766.69
OBN-CG-D     P* = P0        6 / 75.82      13 / 188.93    110 / 5007.83
OBN-LBFGS    P* = P0        26 / 78.34     71 / 232.23    318 / 1125.67
ALM          52 / 874.22    76 / 1262.83   106 / 1800.67  >5000
C:QUIC       8 / 10.35      8 / 24.65      13 / 256.90    33 / 3899.68

Table 2: Results for 5 Newton-like methods and the QUIC, ALM method. An entry of >5000 indicates that the 5000-second time limit was reached; P* = P0 indicates that the initial point (16) was already optimal.

References

[1] C. J. Hsieh, M. A. Sustik, P. Ravikumar, and I. S. Dhillon. Sparse inverse covariance matrix estimation using quadratic approximation. Advances in Neural Information Processing Systems (NIPS), 24, 2011.
[2] A. P. Dempster. Covariance selection. Biometrics, 28:157-175, 1972.
[3] J. D. Picka. Gaussian Markov random fields: theory and applications. Technometrics, 48(1):146-147, 2006.
[4] O. Banerjee, L. El Ghaoui, A. d'Aspremont, and G. Natsoulis. Convex optimization techniques for fitting sparse Gaussian graphical models. In ICML, pages 89-96. ACM, 2006.
[5] A. J. Rothman, E. Levina, and J. Zhu. Sparse multivariate regression with covariance estimation. Journal of Computational and Graphical Statistics, 19(4):947-962, 2010.
[6] T. Tsiligkaridis and A. O. Hero III. Sparse covariance estimation under Kronecker product structure. In ICASSP 2012 Proceedings, pages 3633-3636, Kyoto, Japan, 2012.
[7] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning Research, 9:485-516, 2008.
[8] A. d'Aspremont, O. Banerjee, and L. El Ghaoui. First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and Applications, 30(1):56-66, 2008.
[9] J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research. 1999.
[10] I. Rish and G. Grabarnik. ELEN E6898 Sparse Signal Modeling (Spring 2011): Lecture 7, Beyond LASSO: Other Losses (Likelihoods). https://sites.google.com/site/eecs6898sparse2011/, 2011.
[11] S. Sra, S. Nowozin, and S. J. Wright. Optimization for Machine Learning. MIT Press, 2011.
[12] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical LASSO. Biostatistics, 9(3):432, 2008.
[13] K. Scheinberg and I. Rish. SINCO: a greedy coordinate ascent method for sparse inverse covariance selection problem. Technical report, IBM RC24837, 2009.
[14] J. Duchi, S. Gould, and D. Koller. Projected subgradient methods for learning sparse Gaussians. In Proc. of the Conf. on Uncertainty in AI. Citeseer, 2008.
[15] Z. Lu. Smooth optimization approach for sparse covariance selection. Arxiv preprint arXiv:0904.0687, 2009.
[16] K. Scheinberg, S. Ma, and D. Goldfarb. Sparse inverse covariance selection via alternating linearization methods. Arxiv preprint arXiv:1011.0097, 2010.
[17] L. Li and K. C. Toh. An inexact interior point method for L1-regularized sparse covariance selection. Mathematical Programming Computation, 2(3):291-315, 2010.
[18] R. Tibshirani. Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society B, 58(1):267-288, 1996.
[19] B. T. Polyak. The conjugate gradient method in extremal problems. U.S.S.R. Computational Mathematics and Mathematical Physics, 9:94-112, 1969.
[20] I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413-1457, 2004.
[21] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[22] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1):387-423, 2009.
[23] G. Andrew and J. Gao. Scalable training of L1-regularized log-linear models. In ICML, pages 33-40. ACM, 2007.
[24] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190-1208, 1995.
[25] J. Dahl, V. Roychowdhury, and L. Vandenberghe. Maximum likelihood estimation of Gaussian graphical models: numerical implementation and topology selection. UCLA Preprint, 2005.
[26] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Matlab scripts for alternating direction method of multipliers. Technical report, http://www.stanford.edu/~boyd/papers/admm/, 2012.
Bayesian Pedigree Analysis using Measure Factorization

Bonnie Kirkpatrick
Computer Science Department, University of British Columbia
[email protected]

Alexandre Bouchard-Côté
Statistics Department, University of British Columbia
[email protected]

Abstract

Pedigrees, or family trees, are directed graphs used to identify sites of the genome that are correlated with the presence or absence of a disease. With the advent of genotyping and sequencing technologies, there has been an explosion in the amount of data available, both in the number of individuals and in the number of sites. Some pedigrees number in the thousands of individuals. Meanwhile, analysis methods have remained limited to pedigrees of < 100 individuals, which limits analyses to many small independent pedigrees. Disease models, such as those used for the linkage analysis log-odds (LOD) estimator, have similarly been limited. This is because linkage analysis was originally designed with a different task in mind, that of ordering the sites in the genome, before there were technologies that could reveal the order. LODs are difficult to interpret and nontrivial to extend to consider interactions among sites. These developments and difficulties call for the creation of modern methods of pedigree analysis. Drawing from recent advances in graphical model inference and transducer theory, we introduce a simple yet powerful formalism for expressing genetic disease models. We show that these disease models can be turned into accurate and computationally efficient estimators. The technique we use for constructing the variational approximation has potential applications to inference in other large-scale graphical models. This method allows inference on larger pedigrees than previously analyzed in the literature, which improves disease site prediction.

1 Introduction

Finding genetic correlates of disease is a long-standing important problem with potential contributions to diagnostics and treatment of disease. The pedigree model for inheritance is one of the best defined models in biology, and it has been an area of active statistical and biological research for over a hundred years. The most commonly used method to analyze genetic correlates of disease is quite old. After Mendel introduced, in 1866, the basic model for the inheritance of genomic sites [1], Sturtevant was the first, in 1913, to provide a method for ordering the sites of the genome [2]. The method of Sturtevant became the foundation for linkage analysis with pedigrees [3, 4, 5, 6]. The problem can be thought of in Sturtevant's framework as that of finding the position of a disease site relative to a map of existing sites. This is the log-odds (LOD) estimator for linkage analysis, which is a likelihood ratio test, described in more detail below.

The genomic data available now is quite different than the type of data available when LOD was initially developed. Genomic sites are becoming considerably denser in the genome, and technologies allow us to interrogate the genome for the position of sites [7]. Additionally, most current pedigree analysis methods are exponential either in the number of sites or in the number of individuals. This produces a limit on the size of the pedigrees under consideration to around < 100 individuals.
This is in contrast to the size of pedigrees being collected: for example, the work of [8] includes a connected human pedigree containing 13 generations and 1623 individuals, and the work of [9] includes a connected non-human data set containing thousands of breeding dogs. Apart from the issues of pedigree size, the LOD value is difficult to interpret, since there are few models for the distribution of the statistic. These developments and difficulties call for the creation of modern methods of pedigree analysis.

In this work, we propose a new framework for expressing genetic disease models. The key component of our models, the Haplotype-Phenotype Transducer (HPT), draws from recent advances in graphical model inference and transducer theory [10], and provides a simple and flexible formalism for building genetic disease models. The output of inference over HPT models is a posterior distribution over disease sites, which is easier to interpret than LOD scores. The cost of this modeling flexibility is that the graphical model corresponding to the inference problem is larger and has more loops than traditional pedigree graphical models. Our solution to this challenge is based on the observation that the difficult graphical model can be covered by a collection of tractable forest graphical models. We use a method based on measure factorization [11] to efficiently combine these approximations. Our approach is applicable to other dense graphical models, and we show that empirically it gives accurate approximations in dense graphical models containing millions of nodes as well as short and long cycles. Our approximation can be refined by adding more trees in the forest, with a cost linear in the number of forests used in the cover. We show that considerable gains in accuracy can be obtained this way. In contrast, methods such as [12] can suffer from an exponential increase in running time when larger clusters are considered.

Our framework can be specialized to create analogues of classical penetrance disease models [13]. We focus on these special cases here to compare our method with classical ones. Our experiments show that even for these simpler cases, our approach can achieve significant gains in disease site identification accuracy compared to the most commonly used method, Merlin's implementation of LOD scores [3, 5]. Moreover, our inference method allows us to perform experiments on unprecedented pedigree sizes, well beyond the capacity of Merlin and other pedigree analysis tools typically used in practice.

While graphical models have played an important role in the development of pedigree analysis methods [14, 15], only recently were variational methods applied to the problem [6]. However, this previous work is based on the same graphical model as classical LOD methods, while ours significantly differs. Most current work on more advanced disease models has focused on a very different type of data, population data, for genome-wide association studies (GWAS) [16]. Similarly, state-of-the-art work on the related task of imputation generally makes similar population assumptions [17].

2 Background

Every individual has two copies of each chromosome: one copy is a collage of the mother's two chromosomes, while the other is a collage of the father's two chromosomes. The point at which the copying of the chromosomes switches from one of the grand-maternal (grand-paternal) chromosomes to the other is called a recombination breakpoint.
A site is a particular position in the genome at which we can obtain measurable values. For the purposes of this paper, an allele is the nucleotide at a particular site on a particular chromosome. A haplotype is the sequence of alleles that appear together on the same chromosome. If we had complete data, we would know the positions of all of the haplotypes, all of the recombination breakpoints, as well as which allele came from which parent. This information is not obtainable from any known experiment. Instead, we have genotype data, which is the set of nucleotides that appear in an individual's genome at a particular site. Given that the genotype is a set, it is unordered, and we do not know which allele came from which parent. All of this and the recombination breakpoints must be inferred. An example is given in the Supplement.

A pedigree is a directed acyclic graph with individuals as nodes, where boxes are males and circles are females, and edges directed downward from parent to child. Every individual must have either no parents or one parent of each gender. The individuals without parents in the graph are called founders, and the individuals with parents are non-founders. The pedigree encodes a set of relationships that constrain the allowed inheritance options. These inheritance options define a probability distribution which is investigated during pedigree analysis.

Assume a single-site disease model, where a diploid genotype, G_D, determines the affection status (phenotype), P ∈ {'h', 'd'}, according to the penetrance probabilities:

f_2 = P(P = 'd' | G_D = 11),  f_1 = P(P = 'd' | G_D = 10),  f_0 = P(P = 'd' | G_D = 00).

Here the disease site usually has a disease allele, 1, that confers greater risk of having the disease. For convenience, we denote the penetrance vector as f = (f_2, f_1, f_0). Let the pedigree model for n individuals be specified by a pedigree graph, a disease model f, and the minor allele frequency, μ, for a single site of interest, k. Let P = (P_1, P_2, ..., P_n) be a vector containing the affection status of each individual. Let G = (G_1, G_2, ..., G_n) be the genotype data for each individual. Between the disease site and site k, we model the per chromosome, per generation recombination fraction, θ, which is the frequency with which recombinations occur between those two sites. Other sites linked to k can contribute to our estimate via their arrangement in a single first-order Markov chain, with some sites falling to the left of the disease site and others to the right of the site of interest.

Previous work has shown that given a pedigree model, affection data, and genotype data, we can estimate θ. We define the likelihood as L(θ) = P(P = p, G = g | θ, f, μ), where θ is the recombination probability between the disease site and the first site, μ are the founder allele frequencies, and f are the penetrance probabilities. To test for linkage between the disease site and the other sites, we maximize the likelihood to obtain the optimal recombination fraction θ̂ = argmax_θ L(θ)/L(1/2). The test we use is the likelihood ratio test, where the null hypothesis is that of no linkage (θ = 1/2). Generally referred to as the log-odds score (or LOD score), the log of this likelihood ratio is log L(θ̂) − log L(1/2).
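To make the single-site model concrete, the toy sketch below evaluates penetrances and turns likelihood values into a LOD score. The likelihood itself is left abstract, since computing it requires pedigree peeling (e.g. Elston-Stewart); `pedigree_log_likelihood` is a hypothetical placeholder.

```python
import numpy as np

# Penetrance vectors f = (f2, f1, f0): P(affected | 2, 1 or 0 disease alleles).
RECESSIVE = (0.95, 0.05, 0.05)
DOMINANT = (0.95, 0.95, 0.05)

def penetrance(f, num_disease_alleles):
    """P(P = 'd' | genotype carries the given number of disease alleles)."""
    return f[2 - num_disease_alleles]

def lod_score(pedigree_log_likelihood, thetas=np.linspace(0.01, 0.49, 49)):
    """log L(theta_hat) - log L(1/2), maximized over candidate recombination fractions."""
    best = max(pedigree_log_likelihood(t) for t in thetas)
    return best - pedigree_log_likelihood(0.5)
```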
3 Methods

In this section, we describe our model for inferring relationships between phenotypes and genotyped pedigree datasets. We start by giving a high-level description of the generative process.

The first step in this generative process consists in sampling a collection of disease model (DM) variables, which encode putative relationships between the genetic sites and the observed phenotypes. There is one disease model variable for each site, s, and to a first approximation, D_s can be thought of as taking values zero or one, depending on whether site s is the closest to the primary genetic factor involved in a disease (a more elaborate example is presented in the Supplement). We use C to denote the set of values D_s can take.

The second generative step consists in sampling the chromosomes or haplotypes of a collection of related individuals. We denote these variables by H_{i,s,x}, where, from now on, i is used to index individuals, s, to index sites, and x ∈ {'father', 'mother'}, to index chromosome parental origin. For SNP data, the set of values H that H_{i,s,x} can take generally contains two elements (alleles). A related variable, the inheritance variable R_{i,s,x}, will be sampled jointly with the H_{i,s,x}'s to keep track of the grand-parental origin of each chromosome segment. See Figure 1(a) for a factor graph representation of the random variables.

Finally, the phenotype P_i, which we assume is taken from a finite set P, can be sampled for each individual i in the pedigree. We will define the distribution of P_i conditionally on the haplotype of the individual in question, H_i, and on the global disease model D. Note that variables with missing indices are used to denote random vectors or matrices, for example D = (D_1, ..., D_S), where S denotes the number of sites. To summarize this high-level view of the process, and to introduce notations for the distributions involved:

D ∼ DM(·),  R_i ∼ Recomb(·) for all i.
; D, Hi ), where HPT stands for a Haplotype-Phenotype Transducer. We now turn to the description of these distributions, starting with the most important one, HPT( ? ; D, Hi ). Formally, this distribution on phenotypes is derived from a weighted automaton, where we view the vectors D and Hi as an input string of length S, the s-th character of which is the triplet (Ds , Hi,s,?father? , Hi,s,?mother? ). We view each of the sampled phenotypes as a length-one output from a weighted transducer given the input D, Hi . Longer outputs could potentially be used for more complex phenotypes or diseases. To illustrate this construction, we show that classical, Mendelian models such as recessive phenotypes are a special case of this formalism. We also make two simplifications to facilitate exposition: first, that the disease site is one of the observed sites, and second, that the disease allele is the less frequent (minor) allele (we show in the Supplement a slightly more complicated transducer that does not make these assumptions). Under the two above assumptions, we claim that the state diagrams in Figure 1(b) specify an HPT transducer for a recessive disease model. Each oval corresponds to a hidden transducer state, and the annotation inside the oval encodes the tuple of input symbols that the corresponding state consumes. The emission is depicted on top of the states, with for example ?d?: 1.0 denotes that a disease indicator is emitted with weight one. We use ?h? for the non-disease (healthy) indicator, and  for the null emission. The probability mass function of the HPT is defined as: P z?ZHPT (h,c?p) HPT(p; c, h) = P z 0 ?ZHPT (h,c??) wHPT (z) wHPT (z 0 ) , where h ? HS , c ? C S , p ? P, and ZHPT (h, c ? p) denotes the set of valid paths in the space Z of hidden states. The valid paths are sequences of hidden states (depicted by black circles in Figure 1(b)) starting at the source and ending at the sink, consuming c, h and emitting p along the way. The star in the denominator of the above equation is used to denote unconstrained emissions. 4 In other words, the denominator is the normalization of the weighted transducer [10]. The set of valid paths is implicitly encoded in the transition diagram of the transducer, and the weight function wHPT : Z ? ? [0, ?) can similarly be compactly represented by only storing weights for individual transitions and multiplying them to get a path weight. The set of valid paths along with their weights can be thought of as encoding a parametric disease model. For example, with a recessive disease, shown in Figure 1(b), we can see that if the transducer is at the site of the disease (encoded as the current symbol in c being equal to 1) then only an input homozygous haplotype ?AA? will lead to an output disease phenotype ?d.? This formalism gives a considerable amount of flexibility to the modeler, who can go beyond simple Mendelian disease models by constructing different transducers. The DM distribution is defined using the same machinery as for the HPT distribution. We show in Figure 1(b) a weighted automaton that encodes the prior that exactly one site is involved in the disease, with an unknown, uniformly distributed location in the genome. The probability mass function of the distribution is given by: P z?ZDM (?c) wDM (z) z 0 ?ZDM (??) wDM (z 0 ) DM(c) = P , where ZDM (? c) and ZDM (? ?) are direct analogues to the HPT case, with the difference being that no input is read in the DM case. 
The last distribution in our model, Recomb, is standard, but we present it in the new light of the transducer formalism. Refer to Figure 1(b) for an example based on the standard recombination model derived from the marginals of a Poisson process. We use the analogous notation: P Recomb(r) = P 4 z?ZRecomb (?r) wRecomb (z) z 0 ?ZRecomb (??) wRecomb (z 0 ) . Computational Aspects Probabilistic inference in our model is computationally challenging: the variables L, H alone induce a loopy graph [18], and the addition of the variables D, P introduces more loops as well as deterministic constraints, which further complicates the situation. After explaining in more detail the graphical model of interest, we discuss in this section the approximation algorithm that we have used to infer haplotypes, disease loci, and other disease statistics. We show in Figure 1(a) the factor graph obtained after turning the observed variables (genotypes and phenotypes) into potentials (we show a more detailed version in the Supplement). We have also taken the pointwise product of potentials whenever possible (in the case of the transducer potentials, how this pointwise product is implemented is discussed in [10]). Note that our graphical model has more cycles than standard pedigree graphical models [19]; even if we assumed the sites to be independent and the pedigree to be acyclic, our graphical model would still be cyclic. Our inference method is based on the following observation: if we kept only one subtype of factors in the Supplement, say only those connected to the recombination variables R, then inference could be done easily. More precisely, inference would reduce to a collection of small, standard HMMs inference problems, which can be done using existing software. Similarly, by covering the pedigree graph with a collection of subtrees, and removed the factors for disease and recombination, we can get a collection of acyclic pedigrees, one for each site, and hence a tractable problem (the sum-product algorithm in this case is called the Elston-Stewart algorithm [14] in the pedigree literature). We are therefore in a situation where we have several restricted views on our graphical model yielding efficiently solved subproblems. How to combine the solutions of these tractable subproblems is the question we address in the remainder of this section. The most common way this is approached, in pedigrees [20] and elsewhere [21], is via block Gibbs sampling. However, block Gibbs sampling does not apply readily to our model. The main difficulty arises when attempting to resample D: because of the deterministic constraints that arise even in 5 the simplest disease model, it is necessary to sample D in a block also containing a large subset of R and H. However this cannot be done efficiently since D is connected to all individuals in the pedigree. More formally, the difficulty is that some of the components we wish to resample are b-acyclic (barely acyclic) [22]. Another method, closer to ours, is the EP algorithm of [23], which however considers a single tree approximant, while we can accommodate several at once. As we show in the empirical section, it is advantageous to do so in pedigrees. An important feature that we will exploit in the development of method is the forest cover property of the tractable subproblems: we view each tractable subproblem as a subgraph of the initial factor graph, and ask that the union of these subgraph coincides with the original factor graph. 
Previous variational approaches have been proposed to exploit such forest covers. The most wellknown example, the structured mean field approximation, is unfortunately non-trivial to optimize in the b-acyclic case [22]. Tree reweighted belief propagation [24] has an objective function derived from a forest distribution, however the corresponding algorithms are based on local message passing rather than large subproblems. We propose an alternative based on the measure factorization framework [11]. As we will see, this yields an easy to implement variation approximation that can efficiently exploit arbitrary forest cover approximations. Since the measure factorization interpretation of our approach is not specific to pedigrees, we present it in the context of a generic factor graph over a discrete space, viewed as an exponential family with sufficient statistics ?, log normalization A, and parameters ?: P(X = x) = exp {h?(x), ?i ? A(?)} . (1) To index the factors, we use ? ? F = {1, ..., F }, and v to index the V variables in the factor graph. We start by reparameterizing the exponential family in terms of a larger vector y of variables. P Let us also denote the number of nodes connected to factor ? by n? . This vector y has N = ? n? components, each corresponding to a pair containing a factor and a node index attached to it, and denoted by y?,v . The reparameterization is given by:  Y Y P(Y = y) = exp h?(y), ?i ? A0 (?) 1[y?,v = y?0 ,v ]. ?,?0 ?F (2) v Because of the indicator variables in the right hand side of Equation 2, the set of y?s with P(Y = y) > 0 is in bijection with the set of x?s with P(X = x) > 0. It is therefore well-defined to overload the variable ? in the same equation. Similarly, we have that A0 = A. This reparameterization is inspired by the auxiliary variables used to construct the sampler of Swendsen-Wang [25]. Next, suppose that the sets F1 , . . . , FK form a forest cover of the factor graph, Fk ? F. Then, for k ? {1, . . . , K}, we build as follows the super-partitions required for the measure factorization to apply (as defined in [11]): Ak (?) = X exp {h?(y), ?i} y Y Y ?,?0 ?Fk v 1[y?,v = y?0 ,v ]. (3) Note that computing each Ak is tractable: it corresponds to computing the normalization of one of the forest covering the graphical model. Similarly, gradients of Ak can be computed as the moments of a tree shaped graphical model. Also, the product over k of the base measures in Equation 3 is equal to the base measure of Equation 2. We have therefore constructed a valid measure factorization. With this construction in hand, it is then easy to apply the measure factorization framework to get a principled way for the different subproblem views to exchange messages [11]. 5 Experiments We did two sets of experiments. Haplotype reconstructions were used to assess the quality of the variational approximation. Disease predictions were used to validate the HPT disease model. Simulations. Pedigree graphs were simulated using a Wright-Fisher model [26]. In this model there is a fixed number of male individuals, n, and female individuals, n, per generation, making the population size 2n. The pedigree is built starting from the oldest generation. Each successively more recent generation is built by having each individual in that generation choose uniformly at random one female parent and one male parent. Notice that this process allows inbreeding. 6 (b) Recombination Factors (c) Recombination Parameter ? ? ? ? ? ? ? ? 5 10 15 ? ? 0.00005 0.0005 0.005 0.28 ? 0.28 ? false true ? 
[Figure 2: three panels, (a) Forest-Cover Factors, (b) Recombination Factors (legend: false/true), (c) Recombination Parameter (legend: 0.00005, 0.0005, 0.005), each plotting the haplotype metric (roughly 0.15-0.28) against the number of iterations (0-15).]

Figure 2: The pedigree was generated with the following parameters: number of generations 20 and n = 15, which resulted in a pedigree with 424 individuals, 197 marriage nodes, and 47 founders. We simulated 1000 markers. The metric used for all panels is the haplotype reconstruction metric. Panel (a) shows the effect of removing factors from the forest cover of the pedigree, where the lines are labeled with the number of factors that each experiment contains. Panel (b) shows the effect of removing the recombination factor (false) or using it (true). Together, panels (a-b) show that having more factors helps inference. Panel (c) shows the effect of an incorrect recombination parameter on inference. The correct parameter, with which the data was generated, is line 0.0005. Two incorrect parameters are shown, 0.00005 and 0.005. This panel shows that the recombination parameter can be off by an order of magnitude and the haplotype reconstruction is robust.

Genotype data were simulated in the simulated pedigree graph. The founder haplotypes were drawn from an empirical distribution (see Supplement for details). The recombination parameters used for inheritance are given in the Supplement. We then simulated the inheritance and recombination process to obtain the haplotypes of the descendants using the external program [27]. We used two distributions for the founder haplotypes, corresponding to two data sets. Individuals with missing data were sampled, where each individual either has all their genetic data missing or not. A random 50% of the non-founder individuals have missing data. An independent 50% of individuals have missing phenotypes for the disease prediction comparison.

Haplotype Reconstruction. For the haplotype reconstruction, the inference being scored is, for each individual, the maximum a posteriori haplotype predicted by the marginal haplotype distribution. These haplotypes are not necessarily Mendelian consistent, meaning that it is possible for a child to have an allele on the maternal haplotype that could not be inherited from the mother according to the mother's marginal distribution. However, transforming the posterior distribution over haplotypes into a set of globally consistent haplotypes is somewhat orthogonal to the methods in this paper, and there exist methods for this task [28].

The goal of this comparison is threefold: 1) to see if adding more factors improves inference, 2) to see if more iterations of the measure factorization algorithm help, and 3) to see if there is robustness of the results to the recombination parameters. Synthetic founder haplotypes were simulated; see the Supplement for details. Each experiment was replicated 10 times, where for each replicate the founder haplotypes were sampled with a different random seed. We computed a metric ρ, which is a normalized count of the number of sites that differ between the held-out haplotype and the predicted haplotype. See the Supplement for details.
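For reference, the haplotype metric just described, together with the rank-based disease-prediction metric used later for Table 1, can be written down in a few lines. This reading of the metric definitions is ours; the exact normalizations are in the paper's Supplement.

```python
import numpy as np

def haplotype_error(h_true, h_pred):
    """Normalized count of sites where the predicted and held-out haplotypes differ."""
    h_true, h_pred = np.asarray(h_true), np.asarray(h_pred)
    return np.mean(h_true != h_pred)

def disease_rank_metric(site_scores, true_site):
    """Normalized rank of the true disease site in the score-sorted site list
    (0 means the true site was ranked first)."""
    order = np.argsort(site_scores)[::-1]        # best-scoring sites first
    return np.where(order == true_site)[0][0] / len(site_scores)
```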
Figure 2 shows the results for the haplotype reconstruction. Panels (a) and (b) show that adding more factors helps inference accuracy. Panel (c) shows that inference accuracy is robust to an incorrect recombination parameter.

Disease Prediction. For disease prediction, the inference being scored is the ranking of the sites given by our Bayesian method as compared with LOD estimates computed by Merlin [3]. The disease models we consider are recessive, f = (0.95, 0.05, 0.05), and dominant, f = (0.95, 0.95, 0.05). The disease site is one of the sites chosen uniformly at random. The goal of this comparison is to see whether our disease model performs at least as well as the LOD estimator used by Merlin.

Generations / Leaves / Individuals   f2, f1, f0         HPT: mean ρ (SD)   LOD [3]: mean ρ (SD)
3 / 8 / 22                           0.95, 0.05, 0.05   0.08 (0.09)        0.25 (0.20)
3 / 10 / 25                          0.95, 0.05, 0.05   0.07 (0.09)        0.52 (0.44)
3 / 12 / 34                          0.95, 0.05, 0.05   0.04 (0.04)        0.45 (0.23)
3 / 4 / 16                           0.95, 0.05, 0.05   0.04 (0.05)        0.27 (0.31)
3 / 5 / 20                           0.95, 0.05, 0.05   0.08 (0.09)        0.35 (0.31)
3 / 6 / 24                           0.95, 0.05, 0.05   0.14 (0.16)        0.20 (0.22)
5 / 100 / 418                        0.95, 0.05, 0.05   1e-3 (2e-3)        out of memory
5 / 200 / 882                        0.95, 0.05, 0.05   4e-4 (1e-3)        out of memory
5 / 300 / 1276                       0.95, 0.05, 0.05   6e-4 (1e-3)        out of memory
3 / 8 / 22                           0.95, 0.95, 0.05   0.14 (0.15)        0.22 (0.23)
3 / 10 / 25                          0.95, 0.95, 0.05   0.11 (0.14)        0.33 (0.40)
3 / 12 / 34                          0.95, 0.95, 0.05   0.12 (0.22)        0.22 (0.16)

Table 1: This table gives the performance of our method and Merlin for recessive and dominant diseases as measured by the disease prediction metric. The sizes of the simulated pedigrees are given in the first columns, the disease model f = (f2, f1, f0) in the next column, and the performance of our method (HPT) and that of Merlin (LOD) in the final columns. In all instances, our method outperforms Merlin, sometimes by an order of magnitude. Results suggest that the standard deviation of our method is smaller than that of Merlin. Notably, Merlin cannot even analyze the largest pedigrees, because Merlin does exact inference.

The founder haplotypes were taken from the phased haplotypes of the JPT+CHB HapMap [29] populations; see the Supplement for details. Each experiment was replicated 10 times, where for each replicate the founder haplotypes were sampled with a different random seed. We computed a metric ρ, which is roughly the rank of the disease site in the sorted list of predictions given by each method.

Table 1 compares the performance of our method against that of Merlin. In every case our method has better accuracy. The results suggest that our method has a lower standard deviation. Within each delineated row of the table, the mean ρ are not comparable because the pedigrees might be of different complexities. Between delineated rows of the table, we can compare the effect of pedigree size, and we observe that larger pedigrees aid in disease site prediction. Indeed, the largest pedigree of 1276 individuals reaches an accuracy of 6e-4. This pedigree is the largest pedigree that we know of being analyzed in the literature.

6 Discussion

This paper introduces a new disease model and a new variational inference method, which are applied to find a Bayesian solution to the disease-site correlation problem. This is in contrast to traditional linkage analysis, where a likelihood ratio statistic is computed to find the position of the disease site relative to a map of existing sites. Instead, our approach is to use a Haplotype-Phenotype Transducer to obtain a posterior for the probability of each site to be the disease site. This approach is well-suited to modern data which is very dense in the genome. Particularly with sequencing data, it is likely that either the disease site or a nearby site will be observed. Our method performs well in practice both for genotype prediction and for disease site prediction.
In the presence of missing data, where for some individuals the whole genome is missing, our method is able to infer the missing genotypes with high accuracy. As compared with the LOD linkage analysis method, our method was better able to predict the disease site when one observed site was responsible for the disease.

References

[1] G. Mendel. Experiments in plant-hybridisation. In English Translation and Commentary by R. A. Fisher, J. H. Bennett, ed. Oliver and Boyd, Edinburgh 1965, 1866.
[2] A. H. Sturtevant. The linear arrangement of six sex-linked factors in Drosophila, as shown by their mode of association. Journal of Experimental Zoology, 14:43-59, 1913.
[3] G. R. Abecasis, S. S. Cherny, W. O. Cookson, et al. Merlin: rapid analysis of dense genetic maps using sparse gene flow trees. Nature Genetics, 30:97-101, 2002.
[4] M. Silberstein, A. Tzemach, N. Dovgolevsky, M. Fishelson, A. Schuster, and D. Geiger. On-line system for faster linkage analysis via parallel execution on thousands of personal computers. American Journal of Human Genetics, 78(6):922-935, 2006.
[5] D. Geiger, C. Meek, and Y. Wexler. Speeding up HMM algorithms for genetic linkage analysis via chain reductions of the state space. Bioinformatics, 25(12):i196, 2009.
[6] C. A. Albers, M. A. R. Leisink, and H. J. Kappen. The cluster variation method for efficient linkage analysis on extended pedigrees. BMC Bioinformatics, 7(S-1), 2006.
[7] M. L. Metzker. Sequencing technologies: the next generation. Nat Rev Genet, 11(1):31-46, January 2010.
[8] M. Abney, C. Ober, and M. S. McPeek. Quantitative-trait homozygosity and association mapping and empirical genome wide significance in large, complex pedigrees: Fasting serum-insulin level in the Hutterites. American Journal of Human Genetics, 70(4):920-934, 2002.
[9] N. B. Sutter et al. A single IGF1 allele is a major determinant of small size in dogs. Science, 316(5821):112-115, 2007.
[10] M. Mohri. Handbook of Weighted Automata, chapter 6. Monographs in Theoretical Computer Science. Springer, 2009.
[11] A. Bouchard-Côté and M. I. Jordan. Variational inference over combinatorial spaces. In Advances in Neural Information Processing Systems 23 (NIPS), 2010.
[12] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Bethe free energy, Kikuchi approximations and belief propagation algorithms. In Advances in Neural Information Processing Systems (NIPS), 2001.
[13] E. M. Wijsman. Penetrance. John Wiley & Sons, Ltd, 2005.
[14] R. C. Elston and J. Stewart. A general model for the analysis of pedigree data. Human Heredity, 21:523-542, 1971.
[15] E. S. Lander and P. Green. Construction of multilocus genetic linkage maps in humans. Proceedings of the National Academy of Science, 84(5):2363-2367, 1987.
[16] J. Marchini, P. Donnelly, and L. R. Cardon. Genome-wide strategies for detecting multiple loci that influence complex diseases. Nat. Genet., 37(4):413-417, 2005.
[17] Y. W. Teh, C. Blundell, and L. T. Elliott. Modelling genetic variations with fragmentation-coagulation processes. In Advances in Neural Information Processing Systems, 2011.
[18] A. Piccolboni and D. Gusfield. On the complexity of fundamental computational problems in pedigree analysis. Journal of Computational Biology, 10(5):763-773, 2003.
[19] S. L. Lauritzen and N. A. Sheehan. Graphical models for genetic analysis. Statistical Science, 18(4):489-514, 2003.
[20] A. Thomas, A. Gutin, V. Abkevich, and A. Bansal. Multilocus linkage analysis by blocked Gibbs sampling. Statistics and Computing, 10(3):259-269, July 2000.
[21] G. O. Roberts and S. K. Sahu. Updating schemes, correlation structure, blocking and parameterization for the Gibbs sampler. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 59(2):291-317, 1997.
K. Sahu. Updating schemes, correlation structure, blocking and parameterization for the Gibbs sampler. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 59(2):291-317, 1997.
[22] A. Bouchard-Côté and M. I. Jordan. Optimization of structured mean field objectives. In Proceedings of the Twenty-Fifth Annual Conference on Uncertainty in Artificial Intelligence (UAI-09), pages 67-74, Corvallis, Oregon, 2009. AUAI Press.
[23] T. Minka and Y. Qi. Tree-structured approximations by expectation propagation. In Advances in Neural Information Processing Systems (NIPS), 2003.
[24] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. In AISTATS, 2003.
[25] R. H. Swendsen and J.-S. Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58:86-88, Jan 1987.
[26] J. Wakeley. Coalescent Theory: An Introduction. Roberts & Company Publishers, 1st edition, June 2008.
[27] B. Kirkpatrick, E. Halperin, and R. M. Karp. Haplotype inference in complex pedigrees. Journal of Computational Biology, 17(3):269-280, 2010.
[28] C. A. Albers, T. Heskes, and H. J. Kappen. Haplotype inference in general pedigrees using the cluster variation method. Genetics, 177(2):1101-1116, October 2007.
[29] The International HapMap Consortium. The international HapMap project. Nature, 426:789-796, 2003.
Provable ICA with Unknown Gaussian Noise, with Implications for Gaussian Mixtures and Autoencoders

Sanjeev Arora*  Rong Ge*  Ankur Moitra†  Sushant Sachdeva*

* {arora, rongge, sachdeva}@cs.princeton.edu. Department of Computer Science, Princeton University, Princeton, NJ 08540. Research supported by NSF grants CCF-0832797, CCF-1117309 and a Simons Investigator Grant.
† [email protected]. School of Mathematics, Institute for Advanced Study, Princeton, NJ 08540. Research supported in part by NSF grant No. DMS-0835373 and by an NSF Computing and Innovation Fellowship.

Abstract

We present a new algorithm for Independent Component Analysis (ICA) which has provable performance guarantees. In particular, suppose we are given samples of the form y = Ax + η, where A is an unknown n × n matrix, x is a random variable whose components are independent and have a fourth moment strictly less than that of a standard Gaussian random variable, and η is an n-dimensional Gaussian random variable with unknown covariance Σ. We give an algorithm that provably recovers A and Σ up to an additive ε, and whose running time and sample complexity are polynomial in n and 1/ε. To accomplish this, we introduce a novel "quasi-whitening" step that may be useful in other contexts in which the covariance of the Gaussian noise is not known in advance. We also give a general framework for finding all local optima of a function (given an oracle for approximately finding just one), and this is a crucial step in our algorithm, one that has been overlooked in previous attempts, and allows us to control the accumulation of error when we find the columns of A one by one via local search.

1 Introduction

We present an algorithm (with rigorous performance guarantees) for a basic statistical problem. Suppose η is an independent n-dimensional Gaussian random variable with an unknown covariance matrix Σ and A is an unknown n × n matrix. We are given samples of the form y = Ax + η, where x is a random variable whose components are independent and have a fourth moment strictly less than that of a standard Gaussian random variable. The most natural case is when x is chosen uniformly at random from {+1, −1}^n, although our algorithms work even in the more general case above. Our goal is to reconstruct an additive approximation to the matrix A and the covariance matrix Σ, running in time and using a number of samples that is polynomial in n and 1/ε, where ε is the target precision (see Theorem 1.1).

This problem arises in several research directions within machine learning: Independent Component Analysis (ICA), Deep Learning, Gaussian Mixture Models (GMM), etc. We describe these connections next, along with known results (focusing on algorithms with provable performance guarantees, since that is our goal). Most obviously, the above problem can be seen as an instance of Independent Component Analysis (ICA) with unknown Gaussian noise. ICA has an illustrious history with applications ranging from econometrics, to signal processing, to image segmentation. The goal generally involves finding a linear transformation of the data so that the coordinates are as independent as possible [1, 2, 3]. This is often accomplished by finding directions in which the projection is "non-Gaussian" [4]. Clearly, if the data point y is generated as Ax (i.e., with no noise η added), then applying the linear transformation A^{-1} to the data results in samples A^{-1}y whose coordinates are independent. This restricted case was considered by Comon [1] and Frieze, Jerrum and Kannan [5], and their goal was to recover an
additive approximation to A efficiently and using a polynomial number of samples. (We will later note a gap in their reasoning, albeit fixable by our methods. See also recent papers by Anandkumar et al. and Hsu and Kakade [6, 7], which do not use local search and avoid this issue.) To the best of our knowledge, there are currently no known algorithms with provable guarantees for the more general case of ICA with Gaussian noise (this is especially true if the covariance matrix is unknown, as in our problem), although many empirical approaches are known (e.g., [8]; the issue of "empirical" vs. "rigorous" is elaborated upon after Theorem 1.1).

The second view of our problem is as a concisely described Gaussian Mixture Model. Our data is generated as a mixture of 2^n identical Gaussian components (with an unknown covariance matrix) whose centers are the points {Ax : x ∈ {−1, 1}^n}, and all mixing weights are equal. Notice that this mixture of 2^n Gaussians admits a concise description using O(n^2) parameters. The problem of learning Gaussian mixtures has a long history, and the popular approach in practice is to use the EM algorithm [9], though it has no worst-case guarantees (the method may take a very long time to converge, and worse, may not always converge to the correct solution). An influential paper of Dasgupta [10] initiated the program of designing algorithms with provable guarantees, which was improved in a sequence of papers [11, 12, 13, 14]. But in the current setting, it is unclear how to apply any of the above algorithms (including EM), since the trivial application would keep track of exponentially many parameters, one for each component. Thus, new ideas seem necessary to achieve polynomial running time.

The third view of our problem is as a simple form of autoencoding [15]. This is a central notion in Deep Learning, where the goal is to obtain a compact representation of a target distribution using a multilayered architecture, where a complicated function (the target) can be built up by composing layers of a simple function (called the autoencoder [16]). The main tenet is that there are interesting functions which can be represented concisely using many layers, but would need a very large representation if a "shallow" architecture were used instead. This is most useful for functions that are "highly varying" (i.e., cannot be compactly described by piecewise linear functions or other "simple" local representations). Formally, it is possible to represent, using just (say) n^2 parameters, some distributions with 2^n "varying parts" or "interesting regions." The Restricted Boltzmann Machine (RBM) is an especially popular autoencoder in Deep Learning, though many others have been proposed. However, to the best of our knowledge, there has been no successful attempt to give a rigorous analysis of Deep Learning. Concretely, if the data is indeed generated using the distribution represented by an RBM, then do the popular algorithms for Deep Learning [17] learn the model parameters correctly and in polynomial time? Clearly, if the running time were actually found to be exponential in the number of parameters, then this would erode some of the advantages of the compact representation. How is Deep Learning related to our problem?
As noted by Freund and Haussler [18] many years ago, an RBM with real-valued visible units (the version that seems more amenable to theoretical analysis) is precisely a mixture of exponentially many standard Gaussians. It is parametrized by an n × m matrix A and a vector θ ∈ R^n. It encodes a mixture of n-dimensional standard Gaussians centered at the points {Ax : x ∈ {−1, 1}^m}, where the mixing weight of the Gaussian centered at Ax is exp(‖Ax‖₂² + θ · x). This is of course reminiscent of our problem. Formally, our algorithm can be seen as a nonlinear autoencoding scheme analogous to an RBM but with uniform mixing weights. Interestingly, the algorithm that we present here looks nothing like the approaches favored traditionally in Deep Learning, and may provide an interesting new perspective.

1.1 Our results and techniques

We give a provable algorithm for ICA with unknown Gaussian noise. We have not made an attempt to optimize the quoted running time of this model, but we emphasize that this is in fact the first algorithm with provable guarantees for this problem, and moreover we believe that in practice our algorithm will run almost as fast as the usual ICA algorithms, which are its close relatives.

Theorem 1.1 (Main, Informally). There is an algorithm that recovers the unknown A and Σ up to additive error ε in each entry in time that is polynomial in n, ‖A‖₂, ‖Σ‖₂, 1/ε, 1/λ_min(A), where ‖·‖₂ denotes the operator norm and λ_min(·) denotes the smallest eigenvalue.

The classical approach for ICA (initiated in Comon [1] and Frieze, Jerrum and Kannan [5]) is for the noiseless case in which y = Ax. The first step is whitening, which applies a suitable linear transformation that makes the variance the same in all directions, thus reducing to the case where A is a rotation matrix. Given samples y = Rx where R is a rotation matrix, the rows of R can be found in principle by computing the vectors u that are local minima of E[(u · y)^4]. Subsequently, a number of works (see e.g. [19, 20]) have focused on giving algorithms that are robust to noise. A popular approach is to use the fourth order cumulant (as an alternative to the fourth order moment) as a method for "denoising," or any one of a number of other functionals whose local optima reveal interesting directions. However, theoretical guarantees of these algorithms are not well understood.

The above procedures in the noise-free model can almost be made rigorous (i.e., provably polynomial running time and number of samples), except for one subtlety: it is unclear how to use local search to find all optima in polynomial time. In practice, one finds a single local optimum, projects to the subspace orthogonal to it, and continues recursively on a lower-dimensional problem. However, a naive implementation of this idea is unstable, since approximation errors can accumulate badly, and to the best of our knowledge no rigorous analysis has been given prior to our work. (This is not a technicality: in some similar settings the errors are known to blow up exponentially [21].) One of our contributions is a modified local search that avoids this potential instability and finds all local optima in this setting (Section 4.2).

Our major new contribution, however, is dealing with noise that is an unknown Gaussian. This is an important generalization, since many methods used in ICA are quite unstable to noise (and a wrong estimate for the covariance could lead to bad results). Here, we no longer need to assume we know even rough estimates for the covariance.
Moreover, in the context of Gaussian Mixture Models this generalization corresponds to learning a mixture of many Gaussians where the covariance of the components is not known in advance. We design new tools for denoising and especially whitening in this setting. Denoising uses the fourth order cumulant instead of the fourth moment used in [5], and whitening involves a novel use of the Hessian of the cumulant. Even then, we cannot reduce to the simple case y = Rx as above, and are left with a more complicated functional form (see "quasi-whitening" in Section 2). Nevertheless, we can reduce to an optimization problem that can be solved via local search, and which remains amenable to a rigorous analysis. The results of the local optimization step can then be used to simplify the complicated functional form and recover A as well as the noise Σ. We defer many of our proofs to the supplementary material section, due to space constraints.

In order to avoid cluttered notation, we have focused on the case in which x is chosen uniformly at random from {−1, +1}^n, although our algorithm and analysis work under the more general conditions that the coordinates of x are (i) independent and (ii) have a fourth moment that is less than three (the fourth moment of a Gaussian random variable). In this case, the functional P(u) (see Lemma 2.2) will take the same form but with weights depending on the exact value of the fourth moment for each coordinate. Since we already carry through an unknown diagonal matrix D throughout our analysis, this generalization only changes the entries on the diagonal, and the same algorithm and proof apply.

2 Denoising and quasi-whitening

As mentioned, our approach is based on the fourth order cumulant. The cumulants of a random variable are the coefficients of the Taylor expansion of the logarithm of the characteristic function [22]. Let κ_r(X) be the r-th cumulant of a random variable X. We make use of:

Fact 2.1. (i) If X has mean zero, then κ_4(X) = E[X^4] − 3E[X^2]^2. (ii) If X is Gaussian with mean μ and variance σ^2, then κ_1(X) = μ, κ_2(X) = σ^2, and κ_r(X) = 0 for all r > 2. (iii) If X and Y are independent, then κ_r(X + Y) = κ_r(X) + κ_r(Y).

The crux of our technique is to look at the following functional, where y is the random variable Ax + η whose samples are given to us. Let u ∈ R^n be any vector. Then

P(u) = −κ_4(u^T y).

Note that for any u we can compute P(u) reasonably accurately by drawing a sufficient number of samples of y and taking an empirical average. Furthermore, since x and η are independent, and η is Gaussian, the next lemma is immediate. We call it "denoising" since it allows us empirical access to some information about A that is uncorrupted by the noise η.

Lemma 2.2 (Denoising Lemma). P(u) = 2 Σ_{i=1}^n (u^T A)_i^4.

The intuition is that P(u) = −κ_4(u^T Ax), since the fourth cumulant does not depend on the additive Gaussian noise, and then the lemma follows from completing the square.

2.1 Quasi-whitening via the Hessian of P(u)

In prior works on ICA, whitening refers to reducing to the case where y = Rx for some rotation matrix R. Here we give a technique to reduce to the case where y = RDx + η′, where η′ is some other Gaussian noise (still unknown), R is a rotation matrix, and D is a diagonal matrix that depends upon A. We call this quasi-whitening. Quasi-whitening suffices for us since local search using the objective function κ_4(u^T y) will give us (approximations to) the rows of RD, from which we will be able to recover A.

Quasi-whitening involves computing the Hessian of P(u), which, recall, is the matrix of all second-order partial derivatives of P(u). Throughout this section, we will denote the Hessian operator by H. In matrix form, the Hessian of P(u) is

∂²P(u)/(∂u_i ∂u_j) = 24 Σ_{k=1}^n A_{i,k} A_{j,k} (A_k · u)²;   H(P(u)) = 24 Σ_{k=1}^n (A_k · u)² A_k A_k^T = A D_A(u) A^T,

where A_k is the k-th column of the matrix A (we use subscripts to denote the columns of matrices throughout the paper). D_A(u) is the following diagonal matrix:

Definition 2.3. Let D_A(u) be the diagonal matrix in which the k-th entry is 24(A_k · u)².

Of course, the exact Hessian of P(u) is unavailable, and we will instead compute an empirical approximation P̂(u) to P(u) (given many samples from the distribution), and we will show that the Hessian of P̂(u) is a good approximation to the Hessian of P(u).

Definition 2.4. Given 2N samples y_1, y′_1, y_2, y′_2, ..., y_N, y′_N of the random variable y, let

P̂(u) = −(1/N) Σ_{i=1}^N (u^T y_i)^4 + (3/N) Σ_{i=1}^N (u^T y_i)² (u^T y′_i)².

Our first step is to show that the expectation of the Hessian of P̂(u) is exactly the Hessian of P(u). In fact, since the expectation of P̂(u) is exactly P(u) (and since P̂(u) is an analytic function of the samples and of the vector u), we can interchange the Hessian operator and the expectation operator. Roughly, one can imagine the expectation operator as an integral over the possible values of the random samples, and as is well known in analysis, one can differentiate under the integral provided that all functions are suitably smooth over the domain of integration.

Claim 2.5. E_{y,y′}[−(u^T y)^4 + 3(u^T y)²(u^T y′)²] = P(u).

This claim follows immediately from the definition of P(u), and since y and y′ are independent.

Lemma 2.6. H(P(u)) = E_{y,y′}[H(−(u^T y)^4 + 3(u^T y)²(u^T y′)²)].

Next, we compute the two terms inside the expectation:

Claim 2.7. H((u^T y)^4) = 12(u^T y)² yy^T.

Claim 2.8. H((u^T y)²(u^T y′)²) = 2(u^T y′)² yy^T + 2(u^T y)² y′(y′)^T + 4(u^T y)(u^T y′)(y(y′)^T + y′y^T).

Let λ_min(A) denote the smallest eigenvalue of A. Our analysis also requires bounds on the entries of D_A(u_0):

Claim 2.9. If u_0 is chosen uniformly at random, then with high probability, for all i,

(min_{i=1}^n ‖A_i‖₂²) n^{−4} ≤ (D_A(u_0))_{i,i} ≤ (max_{i=1}^n ‖A_i‖₂²) (log n)/n.

Lemma 2.10. If u_0 is chosen uniformly at random and, furthermore, we are given 2N = poly(n, 1/ε, 1/λ_min(A), ‖A‖₂, ‖Σ‖₂) samples of y, then with high probability we will have that

(1 − ε) A D_A(u_0) A^T ⪯ H(P̂(u_0)) ⪯ (1 + ε) A D_A(u_0) A^T.

Lemma 2.11. Suppose that (1 − ε) A D_A(u_0) A^T ⪯ M̂ ⪯ (1 + ε) A D_A(u_0) A^T, and let M̂ = BB^T. Then there is a rotation matrix R* such that ‖B^{-1} A D_A(u_0)^{1/2} − R*‖_F ≤ ε√n.

The intuition is: if any of the singular values of B^{-1} A D_A(u_0)^{1/2} are outside the range [1 − ε, 1 + ε], we can find a unit vector x where the quadratic forms x^T A D_A(u_0) A^T x and x^T M̂ x are too far apart (which contradicts the condition of the lemma). Hence the singular values of B^{-1} A D_A(u_0)^{1/2} can all be set to one without changing the Frobenius norm of B^{-1} A D_A(u_0)^{1/2} too much, and this yields a rotation matrix.
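The denoising and quasi-whitening steps above translate directly into a few lines of NumPy. The sketch below is ours (function names included): empirical_P implements Definition 2.4, empirical_hessian assembles the Hessian term by term from Claims 2.7 and 2.8, and quasi_whitener factors it as BB^T as in Lemma 2.10. The arrays y and y2 hold the two independent sample batches as rows.

```python
import numpy as np

def empirical_P(u, y, y2):
    """P-hat(u) = -mean((u.y)^4) + 3 * mean((u.y)^2 * (u.y')^2), Definition 2.4."""
    a, b = y @ u, y2 @ u
    return -np.mean(a ** 4) + 3.0 * np.mean(a ** 2 * b ** 2)

def empirical_hessian(u, y, y2):
    """Hessian of P-hat at u, assembled term by term from Claims 2.7-2.8."""
    a, b = y @ u, y2 @ u                                  # u^T y_i and u^T y'_i
    N = y.shape[0]
    H = -12.0 * (y * (a ** 2)[:, None]).T @ y / N         # H of -(u^T y)^4
    H += 6.0 * (y * (b ** 2)[:, None]).T @ y / N          # 3 * 2 (u^T y')^2 y y^T
    H += 6.0 * (y2 * (a ** 2)[:, None]).T @ y2 / N        # 3 * 2 (u^T y)^2 y' y'^T
    C = (y * (a * b)[:, None]).T @ y2 / N
    H += 12.0 * (C + C.T)                                 # 3 * 4 (u^T y)(u^T y')(y y'^T + y' y^T)
    return H

def quasi_whitener(u0, y, y2):
    """Return B with B @ B.T equal to the empirical Hessian H(P-hat(u0)),
    so that B^{-1} y quasi-whitens the samples (Lemma 2.10)."""
    H = empirical_hessian(u0, y, y2)
    w, V = np.linalg.eigh((H + H.T) / 2)    # symmetrize; PSD up to sampling noise
    w = np.clip(w, 1e-12, None)
    return V @ np.diag(np.sqrt(w)) @ V.T    # symmetric square root as B
```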
Quasi-whitening involves computing the Hessian of P (u), which recall is the matrix of all 2nd order partial derivatives of P (u). Throughout this section, we will denote the Hessian operator by H. In matrix form, the Hessian of P (u) is n n X X ?2 P (u) = 24 Ai,k Aj,k (Ak ? u)2 ; H(P (U )) = 24 (Ak ? u)2 Ak ATk = ADA (u)AT ?ui ?uj k=1 k=1 where Ak is the k-th column of the matrix A (we use subscripts to denote the columns of matrices throught the paper). DA (u) is the following diagonal matrix: Definition 2.3. Let DA (u) be a diagonal matrix in which the k th entry is 24(Ak ? u)2 . Of course, the exact Hessian of P (u) is unavailable and we will instead compute an empirical approximation Pb(u) to P (u) (given many samples from the distribution), and we will show that the Hessian of Pb(u) is a good approximation to the Hessian of P (u). 0 Definition 2.4. Given 2N samples y1 , y10 , y2 , y20 ..., yN , yN of the random variable y, let N N 3 X T 2 T 0 2 ?1 X T 4 (u yi ) + (u yi ) (u yi ) . Pb(u) = N i=1 N i=1 Our first step is to show that the expectation of the Hessian of Pb(u) is exactly the Hessian of P (u). In fact, since the expectation of Pb(u) is exactly P (u) (and since Pb(u) is an analytic function of the samples and of the vector u), we can interchange the Hessian operator and the expectation operator. Roughly, one can imagine the expectation operator as an integral over the possible values of the random samples, and as is well-known in analysis, one can differentiate under the integral provided that all functions are suitably smooth over the domain of integration. Claim 2.5. Ey,y0 [?(uT y)4 + 3(uT y)2 (uT y 0 )2 ] = P (u) This claim follows immediately from the definition of P (u), and since y and y 0 are independent. Lemma 2.6. H(P (u)) = Ey,y0 [H(?(uT y)4 + 3(uT y)2 (uT y 0 )2 )] Next, we compute the two terms inside the expectation: Claim 2.7. H((uT y)4 ) = 12(uT y)2 yy T Claim 2.8. H((uT y)2 (uT y 0 )2 ) = 2(uT y 0 )2 yy T + 2(uT y)2 y 0 (y 0 )T + 4(uT y)(uT y 0 )(y(y 0 )T + (y 0 )y T ) Let ?min (A) denote the smallest eigenvalue of A. Our analysis also requires bounds on the entries of DA (u0 ): Claim 2.9. If u0 is chosen uniformly at random then with high probability for all i, n log n n min kAi k22 n?4 ? DA (u0 ))i,i ? max kAi k22 i=1 i=1 n Lemma 2.10. If u0 is chosen uniformly at random and furthermore we are given 2N = poly(n, 1/, 1/?min (A), kAk2 , k?k2 ) samples of y, then with high probability we will have that (1 ? )ADA (u0 )AT  H(Pb(u0 ))  (1 + )ADA (u0 )AT . c  (1 + )ADA (u0 )AT , and let M c = BB T . Lemma 2.11. Suppose that (1 ? )ADA (u0 )AT  M ? ? ?1 1/2 ? Then there is a rotation matrix R such that kB ADA (u0 ) ? R kF ? n. The intuition is: if any of the singular values of B ?1 ADA (u0 )1/2 are outside the range [1 ? , 1 + ], cx are too far apart we can find a unit vector x where the quadratic forms xT ADA (u0 )AT x and xT M (which contradicts the condition of the lemma). Hence the singular values of B ?1 ADA (u0 )1/2 can all be set to one without changing the Froebenius norm of B ?1 ADA (u0 )1/2 too much, and this yields a rotation matrix. 4 3 Our algorithm (and notation) In this section we describe our overall algorithm. It uses as a blackbox the denoising and quasiwhitening already described above, as well as a routine for computing all local maxima of some ?well-behaved? functions which is described later in Section 4. Notation: Placing a hat over a function corresponds to an empirical approximation that we obtain from random samples. 
Step 1: Pick a random u_0 ∈ R^n and estimate the Hessian H(P̂(u_0)). Compute B such that H(P̂(u_0)) = BB^T. Let D = D_A(u_0) be the diagonal matrix defined in Definition 2.3.

Step 2: Take 2N samples y_1, y_2, ..., y_N, y′_1, y′_2, ..., y′_N, and let

P̂′(u) = −(1/N) Σ_{i=1}^N (u^T B^{-1} y_i)^4 + (3/N) Σ_{i=1}^N (u^T B^{-1} y_i)² (u^T B^{-1} y′_i)²,

which is an empirical estimate of P′(u).

Step 3: Use the procedure ALLOPT(P̂′(u), δ, ε, δ′, ε′) of Section 4 to compute all n local maxima of the function P̂′(u).

Step 4: Let R̂ be the matrix whose rows are the n local maxima recovered in the previous step. Use the procedure RECOVER of Section 5 to find A and Σ.

Explanation: Step 1 uses the transformation B^{-1} computed in the previous section to quasi-whiten the data. Namely, we consider the sequence of samples z = B^{-1}y, which are therefore of the form R′Dx + η′, where η′ = B^{-1}η, D = D_A(u_0), and R′ is close to a rotation matrix R* (by Lemma 2.11). In Step 2 we look at κ_4(u^T z), which effectively denoises the new samples (see Lemma 2.2). Let P′(u) = −κ_4(u^T z) = −κ_4(u^T B^{-1} y) = −κ_4(u^T R′D^{-1/2} x). Step 2 estimates this function, obtaining P̂′(u). Then Step 3 tries to find local optima via local search. Ideally we would have liked access to the functional P*(u) = −κ_4(u^T R* x), since the procedure for finding local optima works only for true rotations. But since R′ and R* are close, we can make it work approximately with P̂′(u), and then in Step 4 use these local optima to finally recover A.

Theorem 3.1. Suppose we are given samples of the form y = Ax + η, where x is uniform on {+1, −1}^n, A is an n × n matrix, and η is an n-dimensional Gaussian random variable independent of x with unknown covariance matrix Σ. There is an algorithm that with high probability recovers Â with ‖Â − A Π diag(k_i)‖_F ≤ ε, where Π is some permutation matrix and each k_i ∈ {+1, −1}, and also recovers Σ̂ with ‖Σ̂ − Σ‖_F ≤ ε. Furthermore, the running time and number of samples needed are poly(n, 1/ε, ‖A‖₂, ‖Σ‖₂, 1/λ_min(A)).

Note that here we recover A up to a permutation of the columns and sign-flips. In general, this is all we can hope for, since the distribution of x is also invariant under these same operations. Also, the dependence of our algorithm on the various norms (of A and Σ) seems inherent, since our goal is to recover an additive approximation, and as we scale up A and/or Σ, this goal becomes a stronger relative guarantee on the error.
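A compact driver tying Steps 1-4 together might look as follows. This is a sketch under our own naming: it reuses empirical_P and quasi_whitener from the previous section, uses a simple finite-difference gradient for brevity (an analytic gradient of P̂′ is straightforward), and calls all_opt, which is sketched after Algorithm 2 below. The recovery step follows Algorithm 3.

```python
import numpy as np

def numerical_grad(f, u, h=1e-5):
    """Central-difference gradient; used only to keep the sketch short."""
    g = np.zeros_like(u)
    for i in range(len(u)):
        e = np.zeros_like(u); e[i] = h
        g[i] = (f(u + e) - f(u - e)) / (2 * h)
    return g

def provable_ica(y, y2, seed=0):
    """Steps 1-4 end to end; relies on quasi_whitener / empirical_P above
    and on all_opt, sketched after Algorithm 2 below."""
    n = y.shape[1]
    rng = np.random.default_rng(seed)
    u0 = rng.standard_normal(n); u0 /= np.linalg.norm(u0)
    B = quasi_whitener(u0, y, y2)                  # Step 1: H(P-hat(u0)) = B B^T
    Binv = np.linalg.inv(B)
    z, z2 = y @ Binv.T, y2 @ Binv.T                # Step 2: z = B^{-1} y
    f = lambda u: empirical_P(u, z, z2)
    R = all_opt(f, lambda u: numerical_grad(f, u), n).T   # Step 3: columns ~ maxima
    # Step 4 (Algorithm 3): D-hat_ii = (P-hat'(R_i)/2)^(-1/2), A-hat = B R D-hat^(-1/2)
    d = np.array([(max(f(R[:, i]), 1e-12) / 2.0) ** -0.5 for i in range(n)])
    A_hat = B @ R @ np.diag(d ** -0.5)
    Sigma_hat = y.T @ y / y.shape[0] - A_hat @ A_hat.T    # C-hat - A-hat A-hat^T
    return A_hat, Sigma_hat
```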
However, such an approach seems to accumulate errors badly (the additive error of the last local maxima found is exponentially larger than the error of the first). Rather, the crux of our analysis is a novel method for bounding how much the error can accumulate (by refining old estimates). 5 Algorithm 1. L OCAL OPT, Input:f (u), us , ?, ? Output: vector v 1. Set u ? us . 2. Maximize (via Lagrangian methods) Proj?u (?f (u))T ? + 12 ? T Proj?u (H(f (u)))? ? k?k22 0 1 2  ? f (u) ?u  ? T Subject to k?k2 ? ? and u ? = 0 3. Let ? be the solution, u ?= u+? ku+?k 4. If f (? u) ? f (u) + ?/2, set u ? u ? and Repeat Step 2 5. Else return u Our strategy is to first find a local maximum in the orthogonal subspace, then run the local optimization algorithm again (in the original n-dimensional space) to ?refine? the local maximum we have found. The intuition is that since we are already close to a particular local maxima, the local search algorithm cannot jump to some other local maxima (since this would entail going through a valley). 4.1 Finding one local maximum Throughout this section, we will assume that we are given oracle access to a function f (u) and its gradient and Hessian. The procedure is also given a starting point us , a search range ?, and a step size ?. For simplicity in notation we define the following projection operator. Definition 4.1. Proj?u (v) = v ? (uT v)u, Proj?u (M ) = M ? (uT M u)uuT . The basic step the algorithm is a modification of Newton?s method to find a local improvement that makes progress so long as the current point u is far from a local maxima. Notice that if we add a small vector to u, we do not necessarily preserve the norm of u. In order to have control over how the norm of u changes, during local optimization step the algorithm projects the gradient ?f and Hessian H(f ) to the space perpendicular to u. There is also an additional correction term ??/?u f (u) ? k?k2 /2. This correction term is necessary because the new vector we obtain is (u + ?)/ k(u + ?)k2 which is close to u ? k?k22 /2 ? u + ? + O(? 3 ). Step 2 of the algorithm is just maximizing a quadratic function and can be solved exactly using Lagrangian Multiplier method. To increase efficiency it is also acceptable to perform an approximate maximization step by taking ? to be either aligned with the gradient Proj?u ?f (u) or the largest eigenvector of Proj?u (H(f (u))). The algorithm is guaranteed to succeed in polynomial time when the function is Locally Improvable and Locally Approximable: Definition 4.2 ((?, ?, ?)-Locally Improvable). A function f (u) : Rn ? R is (?, ?, ?)-Locally Improvable, if for any u that is at least ? far from any local maxima, there is a u0 such that ku0 ? uk2 ? ? and f (u0 ) ? f (u) + ?. Definition 4.3 ((?, ?)-Locally Approximable). A function f (u) is locally approximable, if its third order derivatives exist and for any u and any direction v, the third order derivative of f at point u in the direction of v is bounded by 0.01?/? 3 . The analysis of the running time of the procedure comes from local Taylor expansion. When a function is Locally Approximable it is well approximated by the gradient and Hessian within a ? neighborhood. The following theorem from [5] showed that the two properties above are enough to guarantee the success of a local search algorithm even when the function is only approximated. Theorem 4.4 ([5]). If |f (u) ? f ? (u)| ? ?/8, the function f ? (u) is (?, ?, ?)-Locally Improvable, f (u) is (?, ?) 
Locally Approximable, then Algorithm 1 will find a vector v that is ρ-close to some local maximum. The running time is at most O((n² + T) · max f*/ε), where T is the time to evaluate the function f and its gradient and Hessian.

4.2 Finding all local maxima

Now we consider how to find all local maxima of a given function f*(u). The crucial condition that we need is that all local maxima are orthogonal (which is indeed true in our problem, and is morally true when using local search more generally in ICA). Note that this condition implies that there are at most n local maxima.¹ In fact, we will assume that there are exactly n local maxima. If we are given an exact oracle for f* and can compute exact local maxima, then we can find all local maxima
Using this lemma we see that the projected function is close to the projection of f ? in the span of the rest of local maxima: Lemma 4.8. Let g ? be the projection of f ? into the space spanned by the rest of local maxima, then |g ? (w) ? g(w)| ? ?/8 + ? 0 /20 ? ? 0 /8. 5 Local search on the fourth order cumulant Next, we prove that the fourth order cumulant P ? (u) satisfies the properties above. Then the algorithm given in the previous section will find all of the local maxima, which is the missing step in our 1 Technically, there are 2n local maxima since for each direction u that is a local maxima, so too is ?u but this is an unimportant detail for our purposes. 7 b  Output: A, b ? b Algorithm 3. R ECOVER, Input:B, Pb0 (u), R, b A (u) be a diagonal matrix whose ith entry is 1. Let D 1 2  bi ) Pb0 (R ?1/2 . b = BR bD b A (u)?1/2 . 2. Let A b= 3. Estimate C = E[yy T ] by taking O((kAk2 + k?k2 )4 n2 ?2 ) samples and let C b=C b?A bA bT 4. Let ? b b 5. Return A, ? 1 N PN i=1 yi yiT . main goal: learning a noisy linear transformation Ax + ? with unknown Gaussian noise. We first use a theorem from [5] to show that properties for finding one local maxima is satisfied. Also, for notational convenience we set di = 2DA (u0 )?2 i,i and let dmin and dmax denote the minimum and maximum values (bounds on these and their ratio follow from Claim 2.9). Using this Pn notation P ? (u) = i=1 di (uT Ri? )4 . ? Theorem 5.1 ([5]). When ? < dmin /10dmax n2 , the function P ? (u) is (3 n?, ?, P ? (u)? 2 /100) Locally Improvable and (?, dmin ? 2 /100n) Locally Approximable. Moreover, the local maxima of the function is exactly {?Ri? }. We then observe that given enough samples, the empirical mean Pb0 (u) is close to P ? (u). For concentration we require every degree four term zi zj zk zl has variance at most Z. Claim 5.2. Z = O(d2min ?min (A)8 k?k42 + d2min ). 0 , suppose columns of R0 = Lemma 5.3. Given 2N samples y1 , y2 , ..., yN , y10 , y20 , ..., yN ?1 1/2 ? B ADA (u0 ) are  close to the corresponding columns of R , with high probability the function Pb0 (u) is O(dmax n1/2  + n2 (N/Z log n)?1/2 ) close to the true function P ? (u). The other properties required by Theorem 4.6 are also satisfied: Lemma 5.4. For any ku ? u0 k2 ? r, |P ? (u) ? P ? (u0 )| ? 5dmax n1/2 r. All local maxima of P ? has attraction radius Rad ? dmin /100dmax . Applying Theorem 4.6 we obtain the following Lemma (the parameters are chosen so that all properties required are satisfied): Lemma 5.5. Let ? 0 = ?((dmin /dmax )2 ), ? = min{?n?1/2 , ?((dmin /dmax )4 n?3.5 )}, then the procedure R ECOVER (f, ?, dmin ? 2 /100n , ? 0 , dmin ? 02 /100n) finds vectors v1 , v2 , ..., vn , so that there is a permutation matrix ? and ki ? {?1} and for all i: kvi ? (R?Diag(ki ))?i k2 ? ?. b = [v1 , v2 , ..., vn ] we can use Algorithm 3 to find A and ?: After obtaining R b such that there is permutation matrix ? and ki ? {?1} with Theorem 5.6. Given a matrix R ? b b such that kA b ? A?Diag(ki )kF ? kRi ? ki (R ?)i k2 ? ? for all i, Algorithm 3 returns matrix A 2 3/2 2 3/2 O(? kAk2 n /?min (A)). If ? ? O(/ kAk2 n ?min (A)) ? min{1/ kAk2 , 1}, we also have b ? ?kF ? . k? Recall that the diagonal matrix DA (u) is unknown Pn (since it depends on A), but if we are given R? (or an approximation) and since P ? (u) = i=1 di (uT Ri? )4 , we can recover the matrix DA (u) approximately from computing P ? (Ri? ). Then given DA (u), we can recover A and ? and this completes the analysis of our algorithm. 
Conclusions ICA is a vast field with many successful techniques. Most rely on heuristic nonlinear optimization. An exciting question is: can we give a rigorous analysis of those techniques as well, just as we did for local search on cumulants? A rigorous analysis of deep learning ?say, an algorithm that provably learns the parameters of an RBM?is another problem that is wide open, and a plausible special case involves subtle variations on the problem we considered here. 8 References [1] P. Comon. Independent component analysis: a new concept? Signal Processing, pp. 287?314, 1994. 1, 1.1 [2] A. Hyvarinen, J. Karhunen, E. Oja. Independent Component Analysis. Wiley: New York, 2001. 1 [3] A. Hyvarinen, E. Oja. Independent component analysis: algorithms and applications. Neural Networks, pp. 411?430, 2000. 1 [4] P. J. Huber. Projection pursuit. Annals of Statistics pp. 435?475, 1985. 1 [5] A. Frieze, M. Jerrum, R. Kannan. Learning linear transformations. FOCS, pp. 359?368, 1996. 1, 1.1, 4.1, 4.4, 5, 5.1 [6] A. Anandkumar, D. Foster, D. Hsu, S. Kakade, Y. Liu. Two SVDs suffice: spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. Arxiv:abs/1203.0697, 2012. 1 [7] D. Hsu, S. Kakade. Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. Arxiv:abs/1206.5766, 2012. 1 [8] L. De Lathauwer; J. Castaing; J.-F. Cardoso, Fourth-Order Cumulant-Based Blind Identification of Underdetermined Mixtures, Signal Processing, IEEE Transactions on, vol.55, no.6, pp.2965-2973, June 2007 1 [9] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM Algorithm. Journal of the Royal Statistical Society Series B, pp. 1?38, 1977. 1 [10] S. Dasgupta. Learning mixtures of Gaussians. FOCS pp. 634?644, 1999. 1 [11] S. Arora and R. Kannan. Learning mixtures of separated nonspherical Gaussians. Annals of Applied Probability, pp. 69-92, 2005. 1 [12] M. Belkin and K. Sinha. Polynomial learning of distribution families. FOCS pp. 103?112, 2010. 1 [13] A. T. Kalai, A. Moitra, and G. Valiant. Efficiently learning mixtures of two Gaussians. STOC pp. 553-562, 2010. 1 [14] A. Moitra and G. Valiant. Setting the polynomial learnability of mixtures of Gaussians. FOCS pp. 93?102, 2010. 1 [15] G. Hinton, R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science pp. 504?507, 2006. 1 [16] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, pp. 1?127, 2009. 1 [17] G. E. Hinton. A Practical Guide to Training Restricted Boltzmann Machines, Version 1, UTML TR 2010-003, Department of Computer Science, University of Toronto, August 2010 1 [18] Y. Freund , D. Haussler. Unsupervised Learning of Distributions on Binary Vectors using Two Layer Networks University of California at Santa Cruz, Santa Cruz, CA, 1994 1 [19] S. Cruces, L. Castedo, A. Cichocki, Robust blind source separation algorithms using cumulants, Neurocomputing, Volume 49, Issues 14, pp 87-118, 2002. 1.1 [20] L., De Lathauwer; B., De Moor; J. Vandewalle. Independent component analysis based on higher-order statistics only Proceedings of 8th IEEE Signal Processing Workshop on Statistical Signal and Array Processing, 1996. 1.1 [21] S. Vempala, Y. Xiao. Structure from local optima: learning subspace juntas via higher order PCA. Arxiv:abs/1108.3329, 2011. 1.1 [22] M. Kendall, A. Stuart. The Advanced Theory of Statistics Charles Griffin and Company, 1958. 2 9
Minimizing Uncertainty in Pipelines*

Nilesh Dalvi
Facebook, Inc.
[email protected]

Aditya Parameswaran
Stanford University
[email protected]

Vibhor Rastogi
Google, Inc.
[email protected]

* This work was partly done when the authors were employed at Yahoo! Research.

Abstract

In this paper, we consider the problem of debugging large pipelines by human labeling. We represent the execution of a pipeline using a directed acyclic graph of AND and OR nodes, where each node represents a data item produced by some operator in the pipeline. We assume that each operator assigns a confidence to each of its output data. We want to reduce the uncertainty in the output by issuing queries to a human, where a query consists of checking if a given data item is correct. In this paper, we consider the problem of asking the optimal set of queries to minimize the resulting output uncertainty. We perform a detailed evaluation of the complexity of the problem for various classes of graphs. We give efficient algorithms for the problem for trees, and show that, for a general dag, the problem is intractable.

1 Introduction

In this paper, we consider the problem of debugging pipelines consisting of a set of data processing operators. There is a growing interest in building various web-scale automatic information extraction pipelines [9, 10, 14, 7], with operators such as clustering, extraction, classification, and deduplication. The operators are often based on machine-learned models, and they associate confidences with the data items they produce. At the end, we want to resolve the uncertainties of the final output tuples, i.e., figure out which of them are correct and which are incorrect. Given a fixed labeling budget, we can only inspect a subset of the output tuples. However, the output uncertainties are highly correlated, since different tuples share their lineage. Thus, inspecting a tuple also gives us information about the correctness of other tuples. In this paper, we consider the following interesting and non-trivial problem: given a budget of k tuples, choose the k tuples to inspect that minimize the total uncertainty in the output. We will formalize the notion of a data pipeline and uncertainty in Section 2. Here, we illustrate the problem using an example.

Example 1.1. Consider a simple hypothetical pipeline for extracting computer scientists from the Web that consists of two operators: a classifier that takes a webpage and determines if it is a page about computer science, and a name extractor that extracts names from a given webpage. Fig. 1 shows an execution of this pipeline. There are two webpages, w1 and w2, output by the classifier. The extractor extracts entities e1 and e2 from w1, and e3, e4 and e5 from w2. Each operator also gives a confidence with its output. In Fig. 1, the classifier attaches a probability of 0.9 and 0.8 to pages w1 and w2. Similarly, the extractor attaches a probability to each of the extractions e1 to e5. The probability that an operator attaches to a tuple is conditioned on the correctness of its input. Thus, the final probability of e1 is 0.8 × 0.9 = 0.72. Similarly, the final probabilities of e2 to e5 are 0.45, 0.8, 0.8 and 0.48, respectively. Note that the uncertainties are correlated; e.g., e3 and e4 are either both correct or both incorrect. We want to choose k tuples to inspect that minimize the total output uncertainty.

Figure 1: Pipeline Example. [Figure: classified pages w1 (0.9) and w2 (0.8); extractions e1 (0.8) and e2 (0.5) from w1, and e3 (1), e4 (1), e5 (0.6) from w2.]
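To make Example 1.1 concrete, the final probabilities can be computed by propagating operator confidences down the tree. The encoding below (dicts keyed by node names) is ours, not the paper's.

```python
# A minimal sketch of probability propagation in Example 1.1's tree.
conf = {"w1": 0.9, "w2": 0.8, "e1": 0.8, "e2": 0.5,
        "e3": 1.0, "e4": 1.0, "e5": 0.6}
parent = {"e1": "w1", "e2": "w1", "e3": "w2", "e4": "w2", "e5": "w2"}

def final_prob(node):
    """P(Y(node)=1): operator confidence times the parent's final probability."""
    p = conf[node]
    if node in parent:
        p *= final_prob(parent[node])
    return p

assert abs(final_prob("e1") - 0.72) < 1e-9   # 0.8 * 0.9
assert abs(final_prob("e5") - 0.48) < 1e-9   # 0.6 * 0.8
```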
Graph               | BEST-1  | INCR                                 | BEST-K
TREE(2)             | O(n)    | O(n)                                 | O(n) or OPEN (Weakly PTIME)
TREE                | O(n)    | O(log n) + O(n log n) preprocessing  | 2-approximate†: O(n log n)
DAG(2, ∧) or DAG(∧) | O(n³)   | O(n)                                 | OPEN (O(n^{k+1}))
DAG(2, ∨)           | O(n³)   | PP-hard (Probabilistic Polynomial)   | PP-hard, Hard to Approximate
DAG(∨)              | PP-hard | Hard to Approximate                  | PP-hard, Hard to Approximate
DAG                 | PP-hard | Hard to Approximate                  | PP-hard, Hard to Approximate

Table 1: Summary of Results; † twice the number of queries to achieve the same objective as optimal.

If all the data items were independent, we would have queried the most uncertain items, i.e., those having probability closest to 1/2. However, in the presence of correlations between the output tuples, the problem becomes non-trivial. For instance, let us revisit the first example with k = 1, i.e., we can inspect one tuple. Of the 5 output tuples, e5 is the most uncertain, since its probability 0.48 is closest to 1/2. However, one might argue that e3 (or e4) is a more informative item to query, since the extractor has full confidence in e3. Thus, e3 is correct iff w2 is correct (i.e., the classifier was correct on w2). Resolving e3 completely resolves the uncertainty in w2, which, in turn, completely resolves the uncertainty in e4 and reduces the uncertainty in e5. The argument holds even when the extractor confidence in e3 is less than 1 but still very high. In general, one can also query intermediate nodes in addition to the output tuples, and choosing the best node is non-trivial. In this paper, we consider the general setting of a data pipeline given by a directed acyclic graph that can capture both of the motivating scenarios. We define a measure of total uncertainty of the final output based on how close the probabilities are to either 0 or 1. We give efficient algorithms to find the set of data items to query that minimizes the total uncertainty of the output, both under interactive and batch settings.

1.1 Related Work

Our problem is an instance of active learning [27, 13, 12, 17, 2, 15, 5, 4, 3], since our goal is to infer probability values of the nodes being true in the DAG by asking for tags of example nodes. The metric that we use is similar to the square loss metric. However, our problem has salient differences. Unlike traditional active learning, where we want to learn the underlying probabilistic model from i.i.d. samples, in our problem we already know the underlying model and want to gain information about non-i.i.d. items with known correlations. This makes our setting novel and interesting.

Our DAG structure is a special case of Bayesian networks [6]. A lot is known about general Bayes-net inference [21]. For instance, MAP inference given evidence is NP^PP-complete [24] (approximate inference is NP-complete [1]); inferring whether the probability of a set of variables taking certain values given evidence about others is > 0 is NP-complete [8]; deciding whether it is > t is PP-complete [22], while finding its value is #P-complete [26]. However, these results do not apply to our problem setting. In our setting, we are given a set of non-i.i.d. items whose correlations are given by a Bayesian network with known structure and probabilities. We want to choose a subset of items, conditioned on which the uncertainty of the remaining items is minimized.

Our work is closely related to the field of active diagnosis [28, 19, 20], where the goal is to infer the state of unknown nodes in a network by selecting suitable "test probes". From this field, the most closely related work is that by Krause and Guestrin [19], which considers minimization of uncertainty in a Bayesian network. In that work, the goal is to identify a subset of variables in a graphical model that would minimize the joint uncertainty of a target set of variables. Their primary result is a proof of submodularity under suitable independence assumptions on the graphical model, which is then used to derive an approximation algorithm to pick variables. In our problem setting, submodularity does not hold, and hence the techniques do not apply. On the other hand, since our graphical model has a specific AND/OR structure, we are able to concretely study the complexity of the algorithms. Our work is also related to the work on graph search [23], where the goal is to identify hidden nodes while asking questions to humans. Since the target applications are different, the underlying model in that work is less general.

2 Problem Statement

Execution Graph: Let G be a directed acyclic graph (dag), where each node n in G has a label from the set {∧, ∨} and a probability p(n). We call such a graph a probabilistic and-or dag. We denote
From this field, the most closely related work is that by Krause and Guestrin [19], which considers minimization of uncertainty in a Bayesian network. In that work, the goal is to identify a subset of variables in a graphical model that would minimize the joint uncertainty of a target set of variables. Their primary result is a proof of submodularity under suitable independence assumptions on the graphical model, which is then used to derive an approximation algorithm to pick variables. In our problem setting submodularity does not hold, and hence the techniques do not apply. On the other hand, since our graphical model has a specific AND/OR structure, we are able to concretely study the complexity of the algorithms. Our work is also related to the work on graph search [23], where the goal is to identify hidden nodes while asking questions to humans. Since the target applications are different, the underlying model in that work is less general.

2 Problem Statement

Execution Graph: Let G be a directed acyclic graph (dag), where each node n in G has a label from the set {∧, ∨} and a probability p(n). We call such a graph a probabilistic and-or dag. We denote the class of such graphs as DAG. We represent the results of an execution of a pipeline of operators using a probabilistic and-or dag. The semantics of G ∈ DAG is as follows. Each node in G represents a data item. The parents of a node n, i.e., the set of nodes having an outgoing edge to n, denote the set of data items which were input to the instance of the operator that produced n. We use parent(n) to denote the parents of n. The probability p(n) denotes the probability that the data item n is correct conditioned on parent(n) being correct. If n has label ∧, then it requires all the parents to be correct. If n has label ∨, it requires at least one parent to be correct. We further assume that, conditioned on the parents being correct, nodes are correct independently. To state the semantics formally, we associate a set of independent Boolean random variables X(n) for each node n in G with probability p(n). We also associate another set of random variables Y(n), which denote whether the result at node n is correct (unconditionally). For a ∧ node, Y(n) is defined as: Y(n) = X(n) ∧ ⋀_{m∈parent(n)} Y(m). For a ∨ node, Y(n) is defined as: Y(n) = X(n) ∧ ⋁_{m∈parent(n)} Y(m). When G is a tree, i.e., all nodes have a single parent, the labels of nodes do not have any effect, since Y(n) is the same for both ∧ and ∨ nodes. In this case, we simply treat G as an unlabeled tree. For instance, Figure 1 denotes the (unlabeled) tree for the pipeline given in Example 1.1. Thus, probabilistic and-or dags provide a powerful formalism to capture data pipelines in practice, such as the one in Example 1.1.

Output Uncertainty: Let L denote the set of leaves of G, which represent the final output of the pipeline. We want all the final probabilities of L to be close to either 0 or 1, as the closer the probability is to 1/2, the more uncertain the correctness of the given node is. Let f(p) denote some measure of uncertainty of a random variable as a function of its probability p. Then, we define the total output uncertainty of the DAG as

    I = Σ_{n∈L} f(Pr(Y(n)))    (1)

Our results continue to hold when different n ∈ L are weighted differently, i.e., we use a weighted version of Eq. (1). We describe this simple extension in the extended technical report [11].
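To make these semantics concrete, the following brute-force sketch enumerates the independent X(n) to compute Pr(Y(n)) for every node of a tiny and-or dag and then evaluates the total uncertainty (1), using the metric f(p) = p(1 − p) adopted in Eq. (3) below. The graph, probabilities and dict encoding are illustrative; the enumeration is exponential and only meant to ground the definitions.

```python
# Brute-force sketch of the and-or semantics and the uncertainty in Eq. (1).
# Enumerates all assignments of the independent variables X(n); exponential
# in the number of nodes, so only for tiny illustrative graphs.
from itertools import product

# node -> (label, p(n), parents); labels "and"/"or" stand for the ∧/∨ labels
G = {"u": ("and", 0.9, []), "v": ("and", 0.8, []),
     "a": ("or", 0.7, ["u", "v"]), "b": ("and", 1.0, ["u", "v"])}
leaves = ["a", "b"]
order = ["u", "v", "a", "b"]          # any topological order

def eval_Y(x):
    """Compute Y(n) for all nodes given an assignment x of the X(n)."""
    Y = {}
    for n in order:
        label, _, par = G[n]
        if not par:
            Y[n] = x[n]
        elif label == "and":
            Y[n] = x[n] and all(Y[m] for m in par)
        else:  # "or" node
            Y[n] = x[n] and any(Y[m] for m in par)
    return Y

def pr_Y():
    """Pr(Y(n) = 1) for every node, by summing over all X assignments."""
    pr = {n: 0.0 for n in order}
    for bits in product([0, 1], repeat=len(order)):
        x = dict(zip(order, bits))
        w = 1.0
        for n in order:
            p = G[n][1]
            w *= p if x[n] else (1 - p)
        Y = eval_Y(x)
        for n in order:
            pr[n] += w * Y[n]
    return pr

f = lambda p: p * (1 - p)             # the L2 uncertainty metric used below
pr = pr_Y()
I = sum(f(pr[n]) for n in leaves)     # total output uncertainty, Eq. (1)
print(pr, I)
```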
Now, our goal is to query a set of nodes Q that minimizes the expected total output uncertainty conditioned on observing Q. We define this as follows. Let Q = {l1, l2, …, lk} be a set of nodes. Given v = (v1, …, vk) ∈ {0,1}^k, we use Q = v to denote the event Y(l_i) = v_i for each i. Then, define

    I(Q) = Σ_{v∈{0,1}^k} Pr(Q = v) Σ_{n∈L} f(Pr(Y(n) | Q = v))    (2)

The most basic version of our problem is the following.
Problem 1 (Best-1). Given a G ∈ DAG, find the node q that minimizes the expected uncertainty I({q}).
A more challenging question is the following:
Problem 2 (Best-k). Given a G ∈ DAG, find the set of nodes Q of size k that minimizes I(Q).
In addition to this, we also consider the incremental version of the problem, defined as follows. Suppose we have already issued a set of queries Q0 and obtained a vector v0 of their correctness values. Given a new set of queries, we define the conditioned uncertainty as I(Q | Q0 = v0) = Σ_v Pr(Q = v | Q0 = v0) Σ_{n∈L} f(Pr(Y(n) | Q = v ∧ Q0 = v0)). We also pose the following question:
Problem 3 (Incr). Given a G ∈ DAG, and a set of already issued queries Q0 with answer v0, find the best node q to query next that minimizes I({q} | Q0 = v0).
In this work, we use the uncertainty metric given by

    f(p) = p(1 − p)    (3)

Thus, f(p) is minimized when p is either 0 or 1, and is maximum at p = 1/2. Note that f(p) = 1/4 − (1/2 − p)². Hence, minimizing f(p) is equivalent to maximizing the squares of the differences of the probabilities from 1/2. We call this the L2 metric. There are other reasonable choices for the uncertainty metric, e.g., L1 or entropy. The actual choice of uncertainty metric is not important for our application. In the technical report [11], we show that using any of these different metrics, the resulting solutions are "similar" to each other. Our uncertainty objective function can be shown to satisfy some desirable properties, such as:
Theorem 2.1 (Information Never Hurts). For any sets of queries Q1, Q2, I(Q1) ≥ I(Q1 ∪ Q2).
Thus, expected uncertainty cannot increase with more queries. Further, the objective function I is neither sub-modular nor super-modular. These results continue to hold when f is replaced with other metrics (Sec. 6). Lastly, for the rest of the paper, we will assume that the query nodes Q are selected from only among the leaves of G. This is only to simplify the presentation. There is a simple reduction of the general problem to this problem, where we attach a new leaf node to every internal node and set their probabilities to 1. Thus, for any internal node, we can equivalently query the corresponding leaf node (we will need to use the weighted form of Eq. (1), described in the extended technical report [11], to ensure that the new leaf nodes have weight 0 in the objective function.)

3 Summary of main results

We first define classes of probabilistic and-or dags. Let DAG(∧) and DAG(∨) denote the subclasses of DAG where all the node labels are ∧ and ∨, respectively. Let DAG(2, ∧) and DAG(2, ∨) denote the subclasses where the dags are further restricted to depth 2. (We define the depth to be the number of nodes in the longest root-to-leaf directed path in the dag.) Similarly, we define the class TREE, where the dag is restricted to a tree, and TREE(d), consisting of depth-d trees. For trees, since each node has a single parent, the labels of the nodes do not matter. We start by defining relationships between the expressibility of each of these classes. Given any D1, D2 ∈ DAG, we say that D1 ≡
D2 if they have the same number of leaves and define the same joint probability distribution on the set of their leaves. Given two classes of dags C1 and C2, we say C1 ⊆ C2 if for all D1 ∈ C1 there is a D2 ∈ C2 s.t. D2 is polynomial in the size of D1 and D1 ≡ D2.
Theorem 3.1. The following relationships exist between the different classes:

    TREE(2) ⊆ TREE ⊆ DAG(2, ∧) = DAG(∧) ⊆ DAG(2, ∨) ⊆ DAG(∨) ⊆ DAG

Table 1 shows the complexity of the three problems defined in the previous section, for different classes of graphs. The parameter n is the number of nodes in the graph. While the problems are tractable, and in fact efficient, for trees, they become hard for general dags. Here, PP denotes the complexity class of probabilistic polynomial time algorithms. Unless P = NP, there are no PTIME algorithms for PP-hard problems. Further, for some of the problems, we can show that they cannot be approximated within a factor of 2^(n^(1−ε)) for any positive constant ε in PTIME.

4 Best-1 Problem

We start with the most basic problem: given a probabilistic DAG G, find the node to query that minimizes the resulting uncertainty. We first provide PTIME algorithms for TREE(2), TREE, DAG(∧), and DAG(2, ∨) (recall that, as we saw earlier, DAG(2, ∨) subsumes DAG(∧)). Subsequently, we show that finding the best node to query is intractable for DAG(∨) of depth greater than 2, and is thus intractable for DAG as well. For TREE and DAG(∧), the expression for Y(n) can be rewritten as the following: Y(n) = ⋀_{m∈anc(n)} X(m), where anc(n) denotes the set of ancestors of n, i.e., those nodes that have a directed path to n, including n itself. This "unrolled" formulation will allow us to compute the probabilities Pr(Y(x) = 1) easily.

4.1 TREE(2)

Consider a simple tree graph G with root r, having p(r) = p_r, and having children l1, …, ln with p(l_i) = p_i. Given a node x, let e_x denote the event Y(x) = 1, and ē_x denote the event Y(x) = 0. We want to find the leaf q that minimizes I({q}), where:

    I({q}) = Σ_{l∈L} [ Pr(e_q) f(Pr(e_l | e_q)) + Pr(ē_q) f(Pr(e_l | ē_q)) ]    (4)

By a slight abuse of notation, we will use I(q) to denote the quantity I({q}). It is easy to see the following (let l ≠ q):

    Pr(e_l | e_q) = p_l,  Pr(e_q) = p_r p_q,  Pr(e_l | ē_q) = p_r p_l (1 − p_q) / (1 − p_r p_q)

Substituting these expressions back in Eq. (4), and assuming f(p) = p(1 − p), we get the following:

    I(q) = Σ_{l∈L, l≠q} [ p_r p_q p_l (1 − p_l) + p_r p_l (1 − p_q) (1 − p_r p_l (1 − p_q)/(1 − p_r p_q)) ]

We observe that it is of the form

    F0(p_q, p_r) + F1(p_q, p_r) · Σ_l p_l + F2(p_q, p_r) · Σ_l p_l²    (5)

where F0, F1, F2 are small rational polynomials over p_r and p_q. This immediately gives us a linear-time algorithm to pick the best q. We first compute Σ_l p_l and Σ_l p_l², and then compute the objective function for all q in linear time.

Now we consider the case when G is any general tree with the set of leaves L. Recall that e_x is the event that Y(x) = 1. Denote the probability Pr(e_x) by P_x. Thus, P_x is the product of p(y) over all nodes y that are ancestors of x (including x itself). Given nodes x and y, let lca(x, y) denote the least common ancestor of x and y. Our objective is to find q ∈ L that minimizes Eq. (4). The following is immediate:

    Pr(e_q) = P_q,  Pr(e_l | e_q) = P_l / P_{lca(l,q)},  Pr(e_l | ē_q) = P_l (1 − P_q / P_{lca(l,q)}) / (1 − P_q)

However, if we directly plug this into Eq. (4), we don't get a simple form analogous to Eq. (5). Instead, we group all the leaves into equivalence classes based on their lowest common ancestor with q, as shown in Fig. 2.
Let a1, …, a_d be the set of ancestors of q. Consider all leaves in the set L_i such that their lowest common ancestor with q is a_i. Given a node x, let S(x) denote the sum of P_l² over all leaves l reachable from x. If we sum Eq. (4) over all leaves in L_i, we get the following expression:

    Σ_{l∈L_i} P_l − (S(a_i) − S(a_{i−1})) · (P_q + P_{a_i}² − 2 P_q P_{a_i}) / (P_{a_i}² (1 − P_q))

Define δ1(a_i) = S(a_i) − S(a_{i−1}) and δ2(a_i) = (S(a_i) − S(a_{i−1})) · (1 − 2 P_{a_i}) / P_{a_i}². We can write the above expression as:

    − (1/(1 − P_q)) δ1(a_i) − (P_q/(1 − P_q)) δ2(a_i) + Σ_{l∈L_i} P_l

[Figure 2: Equivalence Classes of Leaves (the ancestors a1, …, a_d of q induce the leaf classes L1, …, L_d).]

Summing these terms over all the ancestors of q, we get

    I(q) = − (1/(1 − P_q)) Σ_{a∈anc(q)} δ1(a) − (P_q/(1 − P_q)) Σ_{a∈anc(q)} δ2(a) + Σ_{l∈L} P_l

4.2 TREE

Our main observation is that we can compute I(q) for all leaves together in time linear in the size of G. First, using a single top-down dynamic program over the tree, we can compute P_x for all nodes x. Next, using a single bottom-up dynamic program over G, we can compute S(x) for all nodes x. In the third step, we compute δ1(x) and δ2(x) for all nodes in the tree. In the fourth step, we compute Σ_{a∈anc(x)} δ_i(a) for all nodes in the graph using another top-down dynamic program. Finally, we scan all the leaves and compute the objective function using the above expression. Each of the 5 steps runs in time linear in the size of the graph. Thus, we have
Theorem 4.1. Given a tree G with n nodes, we can compute the node q that minimizes I(q) in time O(n).

4.3 DAG(2, ∨)

We now consider DAG(2, ∨). As before, we want to find the best node q that minimizes I(q) as given by Eq. (4). However, the expressions for the probabilities Pr(e_q) and Pr(e_l | e_q) are more complex for DAG(2, ∨). First, note that P_l, i.e., the probability Pr(Y(l) = 1), is computed as follows: P_l = p(l) · (1 − Π_{x∈parent(l)} (1 − p(x))). The probability that at least one of the shared ancestors of l and q is true is: P_{l,q} = 1 − Π_{x∈parent(l)∩parent(q)} (1 − p(x)). And the probability that one of the unique ancestors of l is true is: P_{l\q} = 1 − Π_{x∈parent(l)\parent(q)} (1 − p(x)). Then, the following are immediate (the conditionals needed in Eq. (4) follow by Bayes' rule):

    Pr(e_q) = P_q
    Pr(e_q | e_l) = p(l) · p(q) · (P_{l,q} + (1 − P_{l,q}) · P_{l\q} · P_{q\l}) / P_l
    Pr(e_q | ē_l) = (P_q · (1 − p(l)) + p(l) · p(q) · (1 − P_{l,q}) · (1 − P_{l\q}) · P_{q\l}) / (1 − P_l)

Note that P_l, P_{l,q}, P_{l\q} can be computed for one l, q pair in time O(n), and thus for all l, q in time O(n³). Subsequently, finding the best candidate node requires O(n²) time, giving us an overall O(n³) algorithm to find the best node.
Theorem 4.2. Given G ∈ DAG(2, ∨) with n nodes, we can compute the q that minimizes I(q) in time O(n³).
Since every DAG(∧) can be converted into one in DAG(2, ∨) in O(n³) (see [11]), we get:
Theorem 4.3. Given G ∈ DAG(∧) with n nodes, we can compute the q that minimizes I(q) in time O(n³).

4.4 DAG(∨)

Theorem 4.4 (Hardness of Best-1 for DAG(∨)). The best-1 problem for DAG(∨) is PP-Hard.
We use a reduction from the decision version of the #P-Hard monotone-partitioned-2-DNF problem [25]. The proof can be found in the extended technical report [11]. Thus, the incremental and best-k problems for DAG(∨) are PP-Hard as well. As a corollary of Theorem 3.1 we have:
Theorem 4.5 (Hardness of Best-1 for DAG). The best-1 problem for DAG is PP-Hard.
This result immediately shows that the incremental and best-k problems for DAG are PP-Hard. However, we can actually prove a stronger result for DAG, i.e., that they are hard to approximate.
We use a weakly parsimonious reduction from the #P-Hard monotone-CNF problem. Note that, unlike the partitioned-2-DNF problem (used for the reduction above), which admits an FPRAS (Fully Polynomial Randomized Approximation Scheme) [18], monotone-CNF is known to be hard to approximate [26]. In our proof, we use the fact that repeated applications of an approximation algorithm for best-1 for DAG would lead to an approximation algorithm for monotone-CNF, which is known to be hard to approximate. This result is shown in the extended version [11].
Theorem 4.6 (Inapproximability for DAG). The best-1 problem for DAG is hard to approximate.

5 Incremental Node Selection

In this section, we consider the problem of picking the next best node to query after a set of nodes Q0 has already been queried. We let the vector v0 reflect their correctness values. We next pick a leaf node q that minimizes I({q} | Q0 = v0). Again, by slightly abusing notation, we will write this expression simply as I(q | Q0 = v0). In this section, we first consider TREE(2) and TREE. Recall from the previous section that the incremental problem is intractable for DAG(∨). Here, we prove that incremental picking is intractable for DAG(∧) itself.

5.1 TREE

We want to extend our analysis of Sec. 4 by replacing Pr(e_x) by Pr(e_x | Q0 = v0) and Pr(e_x | e_y) by Pr(e_x | e_y ∧ Q0 = v0). We will show that, conditioned on Q0 = v0, the resulting probability distribution of the leaves can again be represented using a tree. The new tree is constructed as follows. Given Q0 = v0, apply a sequence of transformations to G ∈ TREE, one for each q0 ∈ Q0. Suppose the value of q0 = 1. Then, for each ancestor a of q0, including itself, set p(a) = 1. If q0 = 0, then for each ancestor a, including itself, change its p(a) to p(a) · (1 − P_{q0}/P_a) / (1 − P_{q0}). Let all other probabilities remain the same.
Theorem 5.1. Let G′ be the tree as defined above. Then, I(q | Q0 = v0) on G is equal to I(q) on G′.
Thus, after each query, we can incorporate the new evidence by updating the probabilities of all the nodes along the path from the query node to the root. Thus, finding the next best node to query can still be computed in linear time.

5.2 TREE(2)

For G ∈ TREE(2), the above algorithm results in the following tree transformation. If a leaf q is queried and the result is 1, then p(r) and p(q) are set to 1. If the result is 0, p(q) is set to 0 and p(r) is set to p_r (1 − p_q) / (1 − p_r p_q).
Instead of using Eq. (5) to compute the next best node in linear time, we can devise a more efficient scheme. Suppose we are given all the leaf probabilities in sorted order (or we sort them initially). Then, we can subsequently compute the leaf q that minimizes Eq. (5) in O(log n) time: consider the rational polynomials F0, F1 and F2. For a fixed p_r, Σ_l p_l, and Σ_l p_l², this expression can be treated as a rational polynomial in a single variable p_q. If we take the derivative, the numerator is a quartic in p_q. Thus, it can have at most four roots. We can find the roots of a quartic using Ferrari's approach in constant time [16]. Using 4 binary searches, we can find the two p_q closest to each of these roots (giving us 8 candidates for p_q, plus two more, the smallest and the largest p_q), and evaluate I(q) for each of those 10 candidates. Thus, finding the best q takes O(log n) time. Now, given each new piece of evidence (i.e., the answer to each subsequent query), we can update the p_r probability and the sum Σ_l p_l² in constant time.
Given the new polynomial, we can find the new set of roots and, using the same technique as above, find the next best q in O(log n) time.
Theorem 5.2. If the p values of the leaf nodes are provided in sorted order, then, for a depth-2 tree, the next best node to query can be computed in O(log n) time.

5.3 DAG(∧)

For DAG(∧), while we can pick the best-1 node in O(n³) time, we have the surprising result that the problem of picking subsequent nodes becomes intractable. The intuition is that, unlike trees, after conditioning on a query node, the resulting distribution can no longer be represented using another dag. In particular, we show that given a set S of queried nodes, the problem of finding the next best node is intractable in the size of S. We use a reduction from the monotone-2-CNF problem.
Theorem 5.3 (PP-Hardness of Incr. for DAG(∧)). The incremental problem in DAG(∧) is PP-Hard.
Our reduction, shown in the extended technical report [11], is a weakly parsimonious reduction involving monotone-2-CNF, which is known to be hard to approximate; thus we have the following result:
Theorem 5.4 (Inapproximability for DAG(∧)). The incremental problem for DAG(∧) is hard to approximate.
The above result, along with Theorem 3.1, implies that DAG(2, ∨) is also PP-Hard.

6 Best-K

In this section, we consider the problem of picking the best k nodes to minimize uncertainty. Krause et al. [19] give a log n approximation algorithm for a similar problem under the condition of super-modularity: super-modularity states that the marginal decrease in uncertainty when adding a single query node to an existing set of query nodes decreases as the set becomes larger. Here, we show that the super-modularity property does not hold in our setting, even for the simplest case of TREE. In fact, for DAG(2, ∨), the problem is hard to approximate within a factor of O(2^(n^(1−ε))) for any ε > 0. We show that TREE(2) admits a weakly-polynomial exact algorithm and a polynomial approximation algorithm. For general trees, we leave the complexity problem open.

Picking Nodes Greedily: First, we show that picking greedily can be arbitrarily bad. Consider a tree with root having p(r) = 1/2. There are 2n leaves, half with p = 1 and the rest with p = 1/2. If we pick any leaf node with p = 1, the expected uncertainty is n/8. If we pick a node with p = 1/2, the expected uncertainty is 25n/16 − 4/16. Thus, if we sort nodes by their expected uncertainty, all the p = 1 nodes appear before all the p = 1/2 nodes. Consider the problem of picking the best n nodes. If we pick greedily based on their expected uncertainty, we pick all the p = 1 nodes. However, all of them are perfectly correlated. Thus, the expected uncertainty after querying all p = 1 nodes is still n/8. On the other hand, if we pick a single p = 1 node and n − 1 nodes with p = 1/2, the resulting uncertainty is a constant. Thus, picking nodes greedily can be O(n) worse than optimal.

Counter-example for super-modularity: Next we show an example of a graph in DAG(2, ∨) where super-modularity does not hold. Consider a G ∈ DAG(2, ∨) having two nodes u and v on the top layer and three nodes a, b, and c in the bottom layer. The labels of all nodes are ∨. Node u has an edge to a and b, while v has an edge to b and c. Let p(u) = 1/2, p(v) = 1/2, and p(a) = p(b) = p(c) = 1. Now consider the expected uncertainty I_c at node c. The super-modularity condition implies that I_c({b, a}) − I_c({b}) ≥ I_c({a}) − I_c({}) (since the marginal decrease in the expected uncertainty of c on picking an additional node a should be smaller for the larger set {b} than for the set {}). We show that this is violated. First note that Pr(Y(c) | Y(a)) is the same as Pr(Y(c)) (since Y(a) does not affect Y(v) and Y(c)). Thus the expected uncertainty at c is unaffected by conditioning on a alone, and thus I_c({a}) = I_c({}). On the other hand, if Y(b) = 0 and Y(a) = 1 then Y(c) = 0 (since Y(a) = 1 implies Y(u) = 1, which together with Y(b) = 0 implies Y(v) = 0 and Y(c) = 0). This can be used to show that, conditioned on Y(b), the expected uncertainty in c drops when conditioning on Y(a). Thus the term I_c({b, a}) − I_c({b}) is negative, while we showed that I_c({a}) − I_c({}) is 0. This violates the super-modularity condition.
The above example actually shows that super-modularity is violated on DAG(∨) for any choice of the metric f used in computing the expected uncertainty I, as long as f is monotonically decreasing away from 1/2. When f(p) = p(1 − p), we can show that super-modularity is violated even for trees, as stated in the proposition below.
Proposition 6.1. Let f(p) = p(1 − p) be the metric used in computing the expected uncertainty I. Then there exists a tree T ∈ TREE(d) such that for leaf nodes a, b, and c in T the following holds: I_c({b, a}) − I_c({b}) < I_c({a}) − I_c({}).

6.1 TREE(2)

We now consider the Best-k problem for TREE(2). As in Section 4, assume the root r with p(r) = p_r, while the leaves L = {l1, …, ln} have p(l_i) = p_i. Let B = Σ_{l∈L} p²(l). Given a set Q ⊆ L, define

    P(Q) = Π_{l∈Q} (1 − p(l)),  S1(Q) = Σ_{l∈Q} p(l)(1 − p(l)),  S2(Q) = Σ_{l∈Q} p²(l)

Lemma 6.2. The best set Q of size k is one that minimizes:

    I′(Q) = −S1(Q) + (B − S2(Q)) · (1 − p_r) / ((1 − p_r)/P(Q) + p_r)

(The details of this computation are shown in the extended technical report.) It is easy to check that the first term is minimized when Q consists of nodes with p(l) closest to 1/2, and the second term is minimized with nodes with p(l) closest to 1. Intuitively, the first term prefers nodes that are as uncertain as possible, while the second term prefers nodes that reveal as much about the root as possible. This immediately gives us a 2-approximation in the number of queries: by picking at most 2k nodes, the k closest to 1/2 and the k closest to 1, we can do at least as well as the optimal solution for best-k.
Exact weakly-polynomial time algorithm: Note also that as k increases, P(Q) → 0, and the second term vanishes. This also makes intuitive sense, since the second term prefers nodes that reveal more about the root, and once we use sufficiently many nodes to infer the correctness of the root, we do not gain anything from asking additional questions. Thus, we set a constant c*, depending on the p_i, such that if k < c*, we consider all possible choices of k queries, and if k ≥ c*, we may simply pick the k largest p_i, because the second term would be very small. We describe this algorithm, along with the proof, in the extended technical report [11].

6.2 DAG(∧)

Theorem 6.3 (PP-Hardness of Best-k for DAG(∧)). The best-k problem in DAG(∧) is PP-Hard.
The proof can be found in the extended technical report [11]. Our reduction is a weakly parsimonious reduction involving monotone-partitioned-2-CNF, which is known to be hard to approximate; thus we have the following result:
Theorem 6.4 (Inapproximability for DAG(∧)). The best-k problem for DAG(∧) is hard to approximate.
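To ground these definitions, the sketch below solves Best-k exactly on a tiny depth-2 tree by brute-force evaluation of Eq. (2), and compares the result with the 2k-query approximation just described (the k leaves closest to 1/2 together with the k closest to 1). Probabilities are illustrative and the enumeration is exponential, so this is only meant to make the definitions concrete.

```python
# Brute-force Best-k on a depth-2 tree (root r, leaves l1..ln) versus the
# 2-approximation described above. Tiny illustrative example only.
from itertools import combinations, product

p_r = 0.5
leaf_p = {"l1": 1.0, "l2": 1.0, "l3": 0.5, "l4": 0.55, "l5": 0.9}
leaves = list(leaf_p)
f = lambda q: q * (1 - q)

def joint():
    """Yield (weight, Y) over assignments of the independent X variables."""
    names = ["r"] + leaves
    probs = [p_r] + [leaf_p[l] for l in leaves]
    for bits in product([0, 1], repeat=len(names)):
        w = 1.0
        for b, pr in zip(bits, probs):
            w *= pr if b else (1 - pr)
        x = dict(zip(names, bits))
        Y = {l: x[l] and x["r"] for l in leaves}   # leaf true iff X(l) and root
        yield w, Y

def I(Q):
    """Expected output uncertainty after querying the leaves in Q, Eq. (2)."""
    total = 0.0
    for v in product([0, 1], repeat=len(Q)):
        pv = sum(w for w, Y in joint() if all(Y[q] == b for q, b in zip(Q, v)))
        if pv == 0.0:
            continue
        for n in leaves:
            pn = sum(w for w, Y in joint()
                     if Y[n] and all(Y[q] == b for q, b in zip(Q, v))) / pv
            total += pv * f(pn)
    return total

k = 2
exact = min(combinations(leaves, k), key=I)
by_half = sorted(leaves, key=lambda l: abs(leaf_p[l] - 0.5))[:k]
by_one = sorted(leaves, key=lambda l: abs(leaf_p[l] - 1.0))[:k]
approx = tuple(set(by_half) | set(by_one))          # at most 2k queries
print("exact best-%d:" % k, exact, I(exact))
print("2-approx set :", approx, I(approx))
```

The same harness can also be used to reproduce the greedy pitfall described above, since sets consisting only of p = 1 leaves are perfectly correlated.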
7 Conclusion

In this work, we performed a detailed complexity analysis for the problem of finding the optimal set of query nodes for various classes of graphs. We showed that for trees, most of the problems are tractable, and in fact quite efficient. For general dags, they become hard to even approximate. We leave open the complexity of the best-k problem for trees.

References
[1] Ashraf M. Abdelbar and Sandra M. Hedetniemi. Approximating MAPs for belief networks is NP-hard and other theorems. Artif. Intell., 102(1):21–38, June 1998.
[2] Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. J. Comput. Syst. Sci., 75(1):78–89, 2009.
[3] Kedar Bellare, Suresh Iyengar, Aditya Parameswaran, and Vibhor Rastogi. Active sampling for entity matching. In KDD, 2012.
[4] Alina Beygelzimer, Sanjoy Dasgupta, and John Langford. Importance weighted active learning. In ICML, page 7, 2009.
[5] Alina Beygelzimer, Daniel Hsu, John Langford, and Tong Zhang. Agnostic active learning without constraints. In NIPS, pages 199–207, 2010.
[6] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 1st edition, 2007.
[7] Philip Bohannon, Srujana Merugu, Cong Yu, Vipul Agarwal, Pedro DeRose, Arun Iyer, Ankur Jain, Vinay Kakade, Mridul Muralidharan, Raghu Ramakrishnan, and Warren Shen. Purple SOX extraction management system. SIGMOD Rec., 37:21–27, March 2009.
[8] Gregory F. Cooper. The computational complexity of probabilistic inference using Bayesian belief networks. Artif. Intell., 42(2-3):393–405, 1990.
[9] Nilesh Dalvi, Ravi Kumar, Bo Pang, Raghu Ramakrishnan, Andrew Tomkins, Philip Bohannon, Sathiya Keerthi, and Srujana Merugu. A web of concepts (keynote). In PODS, Providence, Rhode Island, USA, June 2009.
[10] Nilesh Dalvi, Ravi Kumar, and Mohamed A. Soliman. Automatic wrappers for large scale web extraction. PVLDB, 4(4):219–230, 2011.
[11] Nilesh Dalvi, Aditya Parameswaran, and Vibhor Rastogi. Minimizing uncertainty in pipelines. Technical report, Stanford Infolab, 2012.
[12] Sanjoy Dasgupta and John Langford. Tutorial summary: Active learning. In ICML, page 178, 2009.
[13] Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2-3):133–168, 1997.
[14] Pankaj Gulhane, Rajeev Rastogi, Srinivasan H. Sengamedu, and Ashwin Tengli. Exploiting content redundancy for web information extraction. PVLDB, 3(1):578–587, 2010.
[15] Steve Hanneke. A bound on the label complexity of agnostic active learning. In ICML, pages 353–360, 2007.
[16] Don Herbison-Evans. Solving quartics and cubics for graphics. 1994.
[17] Nikos Karampatziakis and John Langford. Online importance weight aware updates. In UAI, pages 392–399, 2011.
[18] Richard M. Karp and Michael Luby. Monte-Carlo algorithms for enumeration and reliability problems. In Proceedings of the 24th Annual Symposium on Foundations of Computer Science, pages 56–64, 1983.
[19] Andreas Krause and Carlos Guestrin. Near-optimal nonmyopic value of information in graphical models. In UAI, pages 324–331, 2005.
[20] Andreas Krause and Carlos Guestrin. Near-optimal observation selection using submodular functions. In AAAI, pages 1650–1654, 2007.
[21] J. Kwisthout. The Computational Complexity of Probabilistic Inference. Technical Report ICIS-R11003, Radboud University Nijmegen, April 2011.
[22] Michael L. Littman, Stephen M. Majercik, and Toniann Pitassi. Stochastic Boolean satisfiability. J. Autom. Reasoning, 27(3):251–296, 2001.
[23] A. Parameswaran, A. Das Sarma, H. Garcia-Molina, N. Polyzotis, and J. Widom. Human-assisted graph search: it's okay to ask questions. In VLDB, 2011.
[24] James D. Park and Adnan Darwiche. Complexity results and approximation strategies for MAP explanations. J. Artif. Intell. Res. (JAIR), 21:101–133, 2004.
[25] J. Scott Provan and Michael O. Ball. The complexity of counting cuts and of computing the probability that a graph is connected. SIAM J. Comput., 12(4):777–788, 1983.
[26] Dan Roth. On the hardness of approximate reasoning. Artif. Intell., 82(1-2):273–302, 1996.
[27] Burr Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009.
[28] Alice X. Zheng, Irina Rish, and Alina Beygelzimer. Efficient test selection in active diagnosis via entropy approximation. In UAI, pages 675–, 2005.
Learning as MAP Inference in Discrete Graphical Models

James Petterson (NICTA/ANU, Canberra, Australia, [email protected])
Xianghang Liu (NICTA/UNSW, Sydney, Australia, [email protected])
Tiberio S. Caetano (NICTA/ANU/University of Sydney, Canberra and Sydney, Australia, [email protected])

Abstract

We present a new formulation for binary classification. Instead of relying on convex losses and regularizers such as in SVMs, logistic regression and boosting, or instead non-convex but continuous formulations such as those encountered in neural networks and deep belief networks, our framework entails a non-convex but discrete formulation, where estimation amounts to finding a MAP configuration in a graphical model whose potential functions are low-dimensional discrete surrogates for the misclassification loss. We argue that such a discrete formulation can naturally account for a number of issues that are typically encountered in either the convex or the continuous non-convex approaches, or both. By reducing the learning problem to a MAP inference problem, we can immediately translate the guarantees available for many inference settings to the learning problem itself. We empirically demonstrate in a number of experiments that this approach is promising in dealing with issues such as severe label noise, while still having global optimality guarantees. Due to the discrete nature of the formulation, it also allows for direct regularization through cardinality-based penalties, such as the ℓ0 pseudo-norm, thus providing the ability to perform feature selection and trade off interpretability and predictability in a principled manner. We also outline a number of open problems arising from the formulation.

1 Introduction

A large fraction of the machine learning community concerns itself with the formulation of a learning problem as a single, well-defined optimization problem. This is the case for many popular techniques, including those associated with margin- or likelihood-based estimators, such as SVMs, logistic regression, boosting, CRFs and deep belief networks. Among these optimization-based frameworks for learning, two paradigms stand out: the one based on convex formulations (such as SVMs) and the one based on non-convex formulations (such as deep belief networks). The main argument in favor of convex formulations is that we can effectively decouple modeling from optimization, which has substantial theoretical and practical benefits. In particular, it is of great value in terms of reproducibility, modularity and ease of use. Coming from the other end, the main argument for non-convexity is that a convex formulation very often fails to capture fundamental properties of a real problem (e.g., see [1, 2] for examples of some fundamental limitations of convex loss functions).

The motivation for this paper starts from the observation that the above tension is not really between convexity and non-convexity, but between convexity and continuous non-convexity. Historically, the optimization-based approach to machine learning has been virtually synonymous with continuous optimization. Estimation in continuous parameter spaces in some cases allows for closed-form solutions (such as in least-squares regression); if not, we can resort to computing gradients (for smooth continuous functions) or subgradients (for non-smooth continuous functions), which give us a generic tool for finding a local optimum of an arbitrary continuous function (a global optimum if the continuous function is convex).
On the contrary, unless P = NP there is no general tool to efficiently optimize discrete functions. We suspect this is one of the reasons why machine learning has traditionally been formulated in terms of continuous optimization: it is indeed convenient to compute gradients or subgradients and delegate optimization to some off-the-shelf gradient-based algorithm. The formulation we introduce in this paper is non-convex, but discrete rather than continuous. By being non-convex we will attempt to capture some of the expressive power of continuous non-convex formulations (such as robustness to labeling noise), and by being discrete we will retain the ability of convex formulations to provide theoretical guarantees in optimization. There are highly non-trivial classes of non-convex discrete functions defined over exponentially large discrete spaces which can be optimized efficiently. This is, after all, the main topic of combinatorial optimization. Discrete functions factored over cliques of low-treewidth graphs can be optimized efficiently via dynamic programming [3]. Arbitrary submodular functions can be minimized in polynomial time [4]. Particular submodular functions can be optimized very efficiently using max-flow algorithms [5]. Discrete functions defined over other particular classes of graphs also have polynomial-time algorithms (planar graphs [6], perfect graphs [7]). And of course, although many discrete optimization problems are NP-hard, several have efficient constant-factor approximations [8]. In addition to all that, much progress has been made recently on developing tight LP relaxations for hard combinatorial problems [9]. Although all these discrete approaches have been widely used for solving inference problems in machine learning settings, we argue in this paper that they should also be used to solve estimation problems, or learning per se. The discrete approach does pose several new questions though, which we list at the end. Our contribution is to outline the overall framework in terms of a few key ideas and assumptions, as well as to empirically evaluate on real-world datasets particular model instances within the framework. Although these instances are very simple, they already display important desirable behavior that is missing in state-of-the-art estimators such as SVMs.

2 Desiderata

We want to rethink the problem of learning a linear binary classifier. In this section we list the features that we would like a general-purpose learning machine for this problem to possess. These features essentially guide the assumptions behind our framework.

Option to decouple modeling from optimization: As discussed in the introduction, this is the great appeal of convex formulations, and we would like to retain it. Note however that we want the option, not necessarily a mandate of always decoupling modeling from optimization. We want to be able to please the user who is not an optimization expert or doesn't have the time or resources to refine the optimizer, by having the option of requesting the learning machine to configure itself in a mode in which global optimization is guaranteed and the runtime of optimization is precisely predictable. However, we also want to please the user who is an expert, and is willing to spend a lot of time refining the optimizer, to achieve the best possible results regardless of training time considerations.
In our framework, we have the option to explore the spectrum between simpler models, for which we can generate precise estimates of the runtime of the whole algorithm, and more complex models, where we can focus on boosted performance at the expense of runtime predictability or a demand for expert-exclusive fine-tuning skills.

Option of Simplicity: This point is related to the previous one, but it's more general. The complexity of a learning algorithm is a great barrier to its dissemination, even if it promises exceptional results once properly implemented. Most users of machine learning are not machine learning experts themselves, and for them in particular the cost of getting a complex algorithm to work often outweighs the accuracy gains, especially if a reasonably good solution can be obtained with a very simple algorithm. For instance, in our framework the user has the option of reducing the learning algorithm to a series of matrix multiplications and lookup operations, while having a precise estimate of the total runtime of the algorithm and retaining good performance.

Robustness to label noise: SVMs are considered state-of-the-art estimators for binary classifiers, as are boosting and logistic regression. All of these optimize convex loss functions. However, when label noise is present, convex loss functions inflict arbitrarily large penalties on misclassifications because they are unbounded. In other words, in high label noise settings these convex loss functions become poor proxies for the 0/1 loss (the loss we really care about). This fundamental limitation of convex loss functions is well understood theoretically [1]. The fact that the loss function of interest is itself discrete is indeed a hint that maybe we should investigate discrete surrogates rather than continuous surrogates for the 0/1 loss: optimizing discrete functions over continuous spaces is hard, but not necessarily over discrete spaces. In our framework we directly address this issue.

Ability to achieve sparsity: Often we need to estimate sparse models. This can be for several reasons, including interpretability (being able to tell which are the "most important" features), efficiency (at prediction time we can only afford to use a limited number of features) or, importantly, for purely statistical reasons (constraining the solution to low-dimensional subspaces has a regularization effect). The standard convex approach uses ℓ1 regularization. However, the assumptions required for ℓ1-regularized models to actually be good proxies for the support cardinality function (the ℓ0 pseudo-norm) are very strong and in practice rarely met [10]. In fact, this has motivated an entire new line of work on structured sparsity, which tries to further regularize the solution so as to obtain better statistical properties in high dimensions [11, 12, 13]. This however comes at the price of more expensive optimization algorithms. Ideally we would like to regularize with ℓ0 directly; maybe this suggests the possibility of exploring an inherently discrete formulation? In our approach we have the ability to perform direct regularization via the ℓ0 pseudo-norm, or other scale-invariant regularizers.

Leverage the power of low-dimensional approximations: Machine learning folklore has it that the Naive Bayes assumption (features conditionally independent given the class label) often produces remarkably good classifiers. So a natural question is: is it really necessary to work directly in the original high-dimensional space, as SVMs do?
A key aspect of our framework is that we explicitly exploit the concept of composing a high-dimensional model from low-dimensional pieces. However, we go beyond the Naive Bayes assumption by constructing graphs that model dependencies between variables. By varying the properties of these graphs we can trade off model complexity and optimization efficiency in a straightforward manner.

3 Basic Setting

Much of current machine learning research studies estimators of the type

    argmin_{θ∈Θ} Σ_n ℓ(y^n, f(x^n; θ)) + λΩ(θ)    (1)

where {x^n, y^n} is a training set of inputs x ∈ X and outputs y ∈ Y, assumed sampled independently from an unknown probability measure P on X × Y. f : X → Y is a member of a given class of predictors parameterized by θ, Θ is a continuous space such as a Hilbert space, and ℓ as well as Ω are continuous and convex functions of θ. ℓ is a loss function which enforces a penalty whenever f(x^n) ≠ y^n, and therefore the first term in (1) measures the total loss incurred by predictor f on the training sample {x^n, y^n} under parameterization θ. Ω controls the complexity of θ so as to avoid overfitting, and λ trades off the importance of a good fit to the training set versus model parsimony, so that good generalization is hopefully achieved. Problem (1) is often called regularized empirical risk minimization, since the first term is the risk (expected loss) under the empirical distribution of the training data, and the second is a regularizer. This formulation is used for regression (Y continuous) as well as classification and structured prediction (Y discrete). Logistic Regression, Regularized Least-Squares
B can be potentially quite large, for example it can be in the hundreds. Random Projections. An instance x above in reality is not the raw feature vector but instead a random projection of it into a space of the same or higher dimension, i.e., we effectively apply X = RX 0 where X 0 is the original data matrix, R is a random matrix with entries drawn from N (0, 1) and X is the new data matrix. This often provides improved performance for our model due to the spreading of higher-order dependencies over lowerorder cliques (when mapping to a higher dimensional space) and also is motivated from a theoretical argument (section 6). In what follows x is the feature vector after the projection. Low-Dimensional Predictor. We will assume a standard linear predictor of the kind fc (x; ?) = argmax y hxc , ?c i = sign hxc , ?c i (4) y?{?1,1} In other words, we have a linear classifier that only considers the features in clique c.1 Low-Dimensional Discrete Surrogates for the 0/1 loss The low-dimensional discrete surrogate for the 0/1 loss is simply defined as the 0/1 loss incurred by predictor fc : `c (y; fc (x; ?)) = (1 ? yfc (x; ?))/2 (5) A key observation now is that fc and therefore `c can be computed in O(B k ) by full enumeration over the B k instantiations of ?c , where k is the size of clique c. In other words, the 0/1 loss constrained to the discretized subspace defined by clique c can be exactly and efficiently computed (for small cliques). Regularization. One critical technical issue is that linear predictors of the kind argmaxy h?(x, y), ?i are insensitive to scalings of ? [14]. Therefore, the loss ` will be such that `(y, f (x; ??)) = `(y, f (x; ?)) for ? 6= 0. This means that any regularizer that depends 1 For notational simplicity we assume an offset parameter is already included in ?c and a corresponding entry of 1 is appended to the vector xc . 4 on scale (such as `1 and `2 norms) is effectively meaningless since the minimization in (1) will drive ?(?) to 0 (as this doesn?t affect the loss). In other words, in such discrete setting we need a scale-invariant regularizer, such as the `0 pseudo-norm. Note that `0 is trivial to implement in this formulation, as we have enforced that the zero value must be included in the set of B values attainable by each ?i : X ?(?) = `0 (?) = 1?i 6=0 (6) i In addition, since this regularizer is additive on singletons ?i , it comes for free the fact that it does not contribute to the complexity of inference in the graphical model (i.e., it is a unary potential), which is a convenient property. P Nothing prevents us however from having group regularizers, for example of the form c?C0 ?c 1?c 6=0 . Again, we can trade-off model simplicity and optimization efficiency by controlling the size of the maximal clique in C0 . Final optimization Problem. After compiling the low-dimensional discrete proxies for the 0/1 loss (the functions lc ) and incorporating our regularizer, we can assemble the following optimization problem argmin ??? N XX `c (y n , fc (xn ; ?c )) + c?C n=1 | D X i=1 {z :=?N ?c (?c ) } ?1? 6=0 | {zi } (7) :=???i (?i ) which is a relaxation of (1) under all the above assumptions. The critical observation now is that (7) is a MAP inference problem in a discrete graphical model with clique set C, high-order clique potentials ?c (?c ) and unary potentials ?i (?i ) [15]. Therefore we can resort to the vast literature on inference in graphical models to find exact or approximate solutions for (7). 
For example, if G = (V, E) is a tree, then (7) can be solved exactly and efficiently using a dynamic programming algorithm that only requires matrix-vector multiplications in the (min, +) semiring, in addition to elementary lookup operations [3]. For more general graphs the problem (7) can become NP-hard, but even in that case there are several principled approaches that often find excellent solutions, such as those based on linear programming relaxations [9] for tightly outer-bounding the marginal polytope [16]. In the experimental section we explore several options for constructing G, from simply generating a random chain (where MAP inference can be solved efficiently by dynamic programming) to generating dense random graphs (where MAP inference requires a more sophisticated approach such as an LP relaxation). 5 Related Work The most closely related work we found is a recent paper by Potetz [17]. In a similar spirit to our approach, it also addresses the problem of estimating linear binary classifiers in a discrete formulation. However, instead of composing low-dimensional discrete surrogates of the 0/1 loss as we do, it instead uses a fully connected factor graph and performs inference by estimating the mean of the max-marginals rather than MAP. Inference is approached using message-passing, which for the fully connected graph reduces to an intractable knapsack problem. In order to obtain a tractable model, the problem is then relaxed to a linear multiple choice knapsack problem, which can be solved efficiently. All the experiments though are performed on very low-dimensional datasets2 and it is unclear how this approach would scale to high dimensionality while keeping a fully connected graph. 6 Analysis Here we sketch arguments supporting the assumptions driving our formulation. Obtaining a rigorous theoretical analysis is left as an open problem for future research. Our assumptions involve three approximations of the problem of 0/1 loss minimization. First, the discretization of the parameter space. Second, the computation of low-dimensional proxies for the 0/1 loss rather than attacking the 0/1 loss directly in the resulting discrete space. Finally, the use of a graph G = (V, E) which in general will be sparse, i.e., not fully connected. We now discuss each of these assumptions. 2 Seven datasets with dimensionalities 7,9,10,11,14,15 and 61. See [17]. 5 6.1 Discretization of the parameter space The explicit enforcement of a finite number of possible values for each parameter may seem at first a strong assumption. However, a key observation here is that we are restricting ourselves to linear predictors, which basically means that, for any sample, small perturbations of a random hyperplane will with high probability induce at most small changes in the 0/1 loss. Therefore there are good reasons to believe that indeed, for linear predictors, increasing binning has a diminishing returns behavior and after only a moderate amount of bins no much improvement can be obtained. This assumption is also used in [17]. 6.2 Low-dimensional proxies for the 0/1 loss This assumption can be justified using recent results stating that the margin is well-preserved under random projections to low-dimensional subspaces [18, 19]. For instance, Theorem 6 in [19] shows that the margin is preserved with high probability for embeddings with dimension only logarithmic on the sample size (a result similar in spirit to the Johnson-Lindenstrauss Lemma [20]). 
Since the (soft) margin upper bounds the 0/1 loss, this should also be preserved with at least equivalent guarantees.

6.3 Graph sparsity

This is apparently the strongest assumption. In our formulation, we impose conditional independence assumptions on the set of random variables used as features. There are two main observations. The first is that in real high-dimensional data the existence of (approximate) conditional independences is more of a rule than an exception. This is directly related to the fact that high-dimensional data usually inhabit low-dimensional manifolds or subspaces. In our case, we have a graph with the nodes representing different features, and this can be seen as a patching of low-dimensional subspaces, where each subspace is defined by one of the cliques in the graph. We do not address in this work how to optimally determine a subgraph, leaving that as an open problem in this framework. Rather, we show that even with random subgraphs, and in particular subgraphs as simple as chains, we can obtain models that have high accuracy and remarkable robustness to high degrees of label noise. The second observation is that nothing prevents us from using quite dense graphs and seeking approximate rather than exact MAP inference, say through LP relaxations [9]. Indeed we illustrate this possibility in the experimental section below.

7 Experiments

Settings. To evaluate our method (DISCRETE) for binary classification problems, we apply it to real-world datasets and compare it to linear Support Vector Machines (SVM), which are state-of-the-art estimators for linear classifiers. We note that although both use linear predictors, the model classes are not identical: since we use discretization, the set of hyperplanes our estimator will optimize over is strictly smaller. We run these algorithms on publicly available datasets from the UCI machine learning repository [21]. See Table 1 for the details of these datasets. For both algorithms, the only hyperparameter is the trade-off between the loss and the regularization term. We run 5-fold cross-validation for both methods to select the optimal hyperparameters. The number of bins used for discretization may affect the accuracy of DISCRETE. For the experiments, we fix it to 11, since for larger values there was negligible improvement (which supports our argument from Section 6.1).

Robustness to Label Noise. In the first experiment, we test the robustness of the different methods to increasing label noise. We first flip the labels of the training data with increasing probability from 0 to 0.4, and then run the algorithms on the noisy training data. The plots of the classification accuracy at each noise level are shown in Figure 1. For DISCRETE, we used as the graph G a random chain, i.e., the simplest possible option for a connected graph. In this case, optimization is straightforward via a Viterbi algorithm: a sequence of matrix-vector multiplications in the (min, +) semiring with trivial bookkeeping and subsequent lookups, which runs in O(B²D) since we have B states per variable and D variables. To assess the effect of randomization, we run on 20 random chains and plot both the average and the standard error obtained. The impact of randomization seems negligible.

[Figure 1: Comparison of the Discrete Method and Linear SVM on (a) GISETTE, (b) MNIST 5 vs 6, (c) A2A, (d) USPS 8 vs 9, (e) ISOLET, (f) ACOUSTIC.]

From Figure 1, DISCRETE demonstrates classification accuracy only slightly inferior to SVM in
From Figure 1, DISCRETE demonstrates classification accuracy only slightly inferior to SVM in the noiseless regime (i.e., when the hinge loss is a good proxy for the 0/1 loss). However, as soon as a significant amount of label noise is present, SVM degrades substantially while DISCRETE remains remarkably stable, delivering high accuracy even after flipping labels with 40% probability. We believe these are significant results given the truly elementary nature of the optimization procedure: the method is simple, fast, and the runtime can be predicted with high accuracy since there is a determined number of operations; $2(D - 1)$ messages are passed, each with worst-case runtime of $O(B^2)$ determined by the matrix-vector multiplication. Note in particular how this differs from continuous optimization settings, in which the analysis is in terms of rate of convergence rather than the precise number of discrete operations performed. It is also interesting to observe that for different values of the cross-validation parameter our algorithm runs in precisely the same amount of time, while for SVMs convergence will be much slower for small scalings of the regularizer, since the relative importance of the non-differentiable hinge loss over the strongly convex quadratic term increases. This experiment shows that even with the simplest setting of our formulation (random chains, which come with very fast and exact MAP inference) we can still obtain results that are close or similar to those obtained by the state-of-the-art linear SVM classifier in the noiseless case, and superior for high levels of label noise.

Figure 1: Comparison of the Discrete Method and Linear SVM. Panels: (a) GISETTE, (b) MNIST 5 vs 6, (c) A2A, (d) USPS 8 vs 9, (e) ISOLET, (f) ACOUSTIC.

Evaluation without Noise. As seen in Figure 1, in the noiseless (or small noise) regime SVM is often slightly superior to our random chain model. A natural question to ask is therefore how more complex graph topologies would perform. Here we run experiments on two other types of graphs: a random 2-chain (i.e., a random junction tree with cliques {i, i + 1, i + 2}) and a random k-regular graph, where k is set to be such that the resulting graph has 10% of the possible edges; sketches of all three constructions are given after the tables below. For the 2-chain, the optimization algorithm is exact inference via $(\min, +)$ message-passing, just as the Viterbi algorithm, but now applied to a larger clique, which increases the memory and runtime cost by $O(B)$. For the random graph, we obtain a more complex topology in which exact inference is intractable. In our experiments we used the approximate inference algorithm from [22], which solves an LP relaxation optimally and efficiently via the alternating direction method of multipliers, ADMM [23].

Table 1: Datasets used for the experiments in Figure 1
            GISETTE   MNIST   A2A     USPS   ISOLET   ACOUSTIC
# Train     6000      10205   2265    950    480      19705
# Test      1000      1134    30296   237    120      78823
# Features  5000      784     123     256    617      50

Table 2: Accuracy (%) of different methods for binary classification, without label noise. In this setting, the hinge loss used by SVM is an excellent proxy for the 0/1 loss. Yet, the proposed variants (top 3 rows) are still competitive on most datasets.
                GISETTE   MNIST   A2A     USPS    ISOLET   ACOUSTIC
random chain    89.23     93.79   82.55   97.51   100      76.01
random 2-chain  89        94.47   82.65   97.78   100      76.55
random graph    88.6      94.89   83.17   97.44   100      74.80
SVM             97.7      96.47   83.88   98.4    100      76.01
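For reference, a minimal sketch (ours; the use of networkx and the sizes are assumptions, not the authors' code) of how the three graph topologies compared above can be generated:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
D = 50  # number of feature nodes (illustrative)

# Random chain: permute the variables and connect neighbors.
order = rng.permutation(D)
chain = nx.Graph(zip(order[:-1], order[1:]))

# Random 2-chain: junction tree with cliques {i, i+1, i+2} over a random order.
two_chain = nx.Graph()
for i in range(D - 2):
    a, b, c = order[i], order[i + 1], order[i + 2]
    two_chain.add_edges_from([(a, b), (a, c), (b, c)])

# Random k-regular graph with roughly 10% of the possible edges:
# a k-regular graph has D*k/2 edges, so k is about 0.1 * (D - 1).
k = int(round(0.1 * (D - 1)))
if (k * D) % 2:          # networkx requires k * D to be even
    k += 1
regular = nx.random_regular_graph(k, D, seed=0)

print(chain.number_of_edges(), two_chain.number_of_edges(), regular.number_of_edges())
```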
8 Extensions and Open Problems

Clearly the results in this paper are only a first step in the direction proposed. Several questions arise from this formulation.

Theory. In section 6 we only sketched the reasons why we pursued the assumptions laid out in this paper. We did not present any rigorous quantitative arguments analyzing the limitations of our formulation. This is left as an open problem. However, we believe section 6 does point to the key ideas that will ultimately underlie a quantitative theory.

Extension to multi-class and structured prediction. In this work we only study binary classification problems. The extension to multi-class and structured prediction, as well as to other learning settings, is an open problem.

Adaptive binning. When discretizing the parameters, we used a fixed number of bins. This could be made more elaborate through the use of adaptive binning techniques that depend on the information content of each variable.

Informative graph construction. We only explored randomly generated graphs. The problem of selecting a graph topology in an informative way is highly relevant and is left open. For example, B-matching can be used to generate an informative regular graph [24]. This problem is essentially a manifold learning problem and there are several ways it could be approached. Existing work on supervised manifold learning is very relevant here.

Nonparametric extension. We considered only linear parametric models. It would be interesting to consider nonparametric models, where the discretization occurs at the level of parameters associated with each training instance (as in the dual formulation of SVMs).

9 Conclusion

We presented a discrete formulation for learning linear binary classifiers. Parameters associated with features of the linear model are discretized into bins, and low-dimensional discrete surrogates of the 0/1 loss restricted to small groups of features are constructed. This results in a data structure that can be seen as a graphical model, where regularized risk minimization can be performed via MAP inference. We sketched theoretical arguments supporting the assumptions underlying our proposal and presented empirical evidence that very simple, easily and quickly trainable models estimated with such a procedure can deliver results that are often comparable to those obtained by linear SVMs in noiseless scenarios, and superior under moderate to severe label noise.

Acknowledgements

We thank E. Bonilla, A. Defazio, D. García-García, S. Gould, J. McAuley, S. Nowozin, M. Reid, S. Sanner and B. Williamson for discussions. NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.

References

[1] P. M. Long and R. A. Servedio, "Random classification noise defeats all convex potential boosters," Machine Learning, vol. 78, no. 3, pp. 287-304, 2010.
[2] P. M. Long and R. A. Servedio, "Learning large-margin halfspaces with more malicious noise," in NIPS, 2011.
[3] S. M. Aji and R. J. McEliece, "The generalized distributive law," IEEE Trans. Inform. Theory, vol. 46, no. 2, pp. 325-343, 2000.
[4] B. Korte and J. Vygen, Combinatorial Optimization: Theory and Algorithms. Springer Publishing Company, Incorporated, 4th ed., 2007.
[5] V. Kolmogorov and R. Zabih, "What energy functions can be minimized via graph cuts?," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 147-159, 2004.
[6] A. Globerson and T. S. Jaakkola, "Approximate inference using planar graph decomposition,"
in Advances in Neural Information Processing Systems 19 (B. Schölkopf, J. Platt, and T. Hoffman, eds.), pp. 473-480, Cambridge, MA: MIT Press, 2007.
[7] T. Jebara, "Perfect graphs and graphical modeling," to appear in Tractability, Cambridge University Press, 2012.
[8] V. V. Vazirani, Approximation Algorithms. Springer, 2004.
[9] D. Sontag, Approximate Inference in Graphical Models using LP Relaxations. PhD thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2010.
[10] P. Zhao and B. Yu, "On model selection consistency of lasso," J. Mach. Learn. Res., vol. 7, pp. 2541-2563, Dec. 2006.
[11] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski, "Structured sparsity through convex optimization," technical report, HAL 00621245-v2, to appear in Statistical Science, 2012.
[12] J. Huang, T. Zhang, and D. Metaxas, "Learning with structured sparsity," in Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, (New York, NY, USA), pp. 417-424, ACM, 2009.
[13] F. R. Bach, "Structured sparsity-inducing norms through submodular functions," in NIPS, pp. 118-126, 2010.
[14] D. McAllester and J. Keshet, "Generalization bounds and consistency for latent structural probit and ramp loss," in Advances in Neural Information Processing Systems 24 (J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, eds.), pp. 2205-2212, 2011.
[15] D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[16] M. J. Wainwright and M. I. Jordan, Graphical Models, Exponential Families, and Variational Inference. Hanover, MA, USA: Now Publishers Inc., 2008.
[17] B. Potetz, "Estimating the Bayes point using linear knapsack problems," in ICML, pp. 257-264, 2011.
[18] M.-F. Balcan, A. Blum, and S. Vempala, "Kernels as features: On kernels, margins, and low-dimensional mappings," Machine Learning, vol. 65, no. 1, pp. 79-94, 2006.
[19] Q. Shi, C. Chen, R. Hill, and A. van den Hengel, "Is margin preserved after random projection?," in ICML, 2012.
[20] S. Dasgupta and A. Gupta, "An elementary proof of a theorem of Johnson and Lindenstrauss," Random Struct. Algorithms, vol. 22, pp. 60-65, Jan. 2003.
[21] A. Frank and A. Asuncion, "UCI machine learning repository," 2010.
[22] O. Meshi and A. Globerson, "An alternating direction method for dual MAP LP relaxation," in Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part II, ECML PKDD '11, (Berlin, Heidelberg), pp. 470-483, Springer-Verlag, 2011.
[23] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, 2011.
[24] T. Jebara, J. Wang, and S. Chang, "Graph construction and B-matching for semi-supervised learning," in ICML, 2009.
Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes

Michael Bryant and Erik B. Sudderth
Department of Computer Science, Brown University, Providence, RI
[email protected], [email protected]

Abstract

Variational methods provide a computationally scalable alternative to Monte Carlo methods for large-scale, Bayesian nonparametric learning. In practice, however, conventional batch and online variational methods quickly become trapped in local optima. In this paper, we consider a nonparametric topic model based on the hierarchical Dirichlet process (HDP), and develop a novel online variational inference algorithm based on split-merge topic updates. We derive a simpler and faster variational approximation of the HDP, and show that by intelligently splitting and merging components of the variational posterior, we can achieve substantially better predictions of test data than conventional online and batch variational algorithms. For streaming analysis of large datasets where batch analysis is infeasible, we show that our split-merge updates better capture the nonparametric properties of the underlying model, allowing continual learning of new topics.

1 Introduction

Bayesian nonparametric methods provide an increasingly important framework for unsupervised learning from structured data. For example, the hierarchical Dirichlet process (HDP) [1] provides a general approach to joint clustering of grouped data, and leads to effective nonparametric topic models. While nonparametric methods are best motivated by their potential to capture the details of large datasets, practical applications have been limited by the poor computational scaling of conventional Monte Carlo learning algorithms. Mean field variational methods provide an alternative, optimization-based framework for nonparametric learning [2, 3]. Aiming at larger-scale applications, recent work [4] has extended online variational methods [5] for the parametric, latent Dirichlet allocation (LDA) topic model [6] to the HDP. While this online approach can produce reasonable models of large data streams, we show that the variational posteriors of existing algorithms often converge to poor local optima. Multiple runs are usually necessary to show robust performance, reducing the desired computational gains. Furthermore, by applying a fixed truncation to the number of posterior topics or clusters, conventional variational methods limit the ability of purportedly nonparametric models to fully adapt to the data.

In this paper, we propose novel split-merge moves for online variational inference for the HDP (oHDP) which result in much better predictive performance. We validate our approach on two corpora, one with millions of documents. We also propose an alternative, direct assignment HDP representation which is faster and more accurate than the Chinese restaurant franchise representation used in prior work [4]. Additionally, the inclusion of split-merge moves during posterior inference allows us to dynamically vary the truncation level throughout learning. While conservative truncations can be theoretically justified for batch analysis of fixed-size datasets [2], our data-driven adaptation of the truncation level is far better suited to large-scale analysis of streaming data. Split-merge proposals have been previously investigated for Monte Carlo analysis of nonparametric models [7, 8, 9]. They have also been used for maximum likelihood and variational analysis of parametric models [10, 11, 12, 13]. These deterministic algorithms validate split-merge proposals by evaluating a batch objective on the entire dataset, an approach which is unexplored for nonparametric models and infeasible for online learning. We instead optimize the variational objective via stochastic gradient ascent, and split or merge based on only a noisy estimate of the variational lower bound. Over time, these local decisions lead to global estimates of the number of topics present in a given corpus. We review the HDP and conventional variational methods in Sec. 2, develop our novel split-merge procedure in Sec. 3, and evaluate on various document corpora in Sec. 4.
Figure 1: Directed graphical representation of a hierarchical Dirichlet process topic model, in which an unbounded collection of topics $\phi_k$ model the $N_j$ words in each of D documents. Topics occur with frequency $\pi_j$ in document j, and with frequency $\beta$ across the full corpus.

2 Variational Inference for Bayesian Nonparametric Models

2.1 Hierarchical Dirichlet processes

The HDP is a hierarchical nonparametric prior for grouped mixed-membership data. In its simplest form, it consists of a top-level DP and a collection of D bottom-level DPs (indexed by j) which share the top-level DP as their base measure: $G_0 \sim \mathrm{DP}(\gamma H)$, $G_j \sim \mathrm{DP}(\alpha G_0)$, $j = 1, \ldots, D$. Here, H is a base measure on some parameter space, and $\gamma > 0$, $\alpha > 0$ are concentration parameters. Using a stick-breaking representation [1] of the global measure $G_0$, the HDP can be expressed as
$$G_0 = \sum_{k=1}^{\infty} \beta_k \delta_{\phi_k}, \qquad G_j = \sum_{k=1}^{\infty} \pi_{jk} \delta_{\phi_k}.$$
The global weights $\beta$ are drawn from a stick-breaking distribution $\beta \sim \mathrm{GEM}(\gamma)$, and atoms are independently drawn as $\phi_k \sim H$. Each $G_j$ shares atoms with the global measure $G_0$, and the lower-level weights are drawn $\pi_j \sim \mathrm{DP}(\alpha\beta)$. For this direct assignment representation, the k indices for each $G_j$ index directly into the global set of atoms. To complete the definition of the general HDP, parameters $\theta_{jn} \sim G_j$ are then drawn for each observation n in group j, and observations are drawn $x_{jn} \sim F(\theta_{jn})$ for some likelihood family F. Note that $\theta_{jn} = \phi_{z_{jn}}$ for some discrete indicator $z_{jn}$.

In this paper we focus on an application of the HDP to modeling document corpora. The topics $\phi_k \sim \mathrm{Dirichlet}(\eta)$ are distributions on a vocabulary of W words. The global topic weights, $\beta \sim \mathrm{GEM}(\gamma)$, are still drawn from a stick-breaking prior. For each document j, document-specific topic frequencies are drawn $\pi_j \sim \mathrm{DP}(\alpha\beta)$. Then for each word index n in document j, a topic indicator is drawn $z_{jn} \sim \mathrm{Categorical}(\pi_j)$, and finally a word is drawn $w_{jn} \sim \mathrm{Categorical}(\phi_{z_{jn}})$.

2.2 Batch Variational Inference for the HDP

We use variational inference [14] to approximate the posterior of the latent variables $(\phi, \beta, \pi, z)$ (the topics, global topic weights, document-specific topic weights, and topic indicators, respectively) with a tractable distribution q, indexed by a set of free variational parameters. Appealing to mean field methods, our variational distribution is fully factorized, and is of the form
$$q(\phi, \beta, \pi, z \mid \lambda, \gamma, \zeta) = q(\beta) \prod_{k=1}^{\infty} q(\phi_k \mid \lambda_k) \prod_{j=1}^{D} \Big( q(\pi_j \mid \gamma_j) \prod_{n=1}^{N_j} q(z_{jn} \mid \zeta_{jn}) \Big), \qquad (1)$$
where D is the number of documents in the corpus and $N_j$ is the number of words in document j. Individual distributions are selected from appropriate exponential families:
$$q(\beta) = \delta_{\bar\beta}(\beta), \quad q(\phi_k \mid \lambda_k) = \mathrm{Dirichlet}(\phi_k \mid \lambda_k), \quad q(\pi_j \mid \gamma_j) = \mathrm{Dirichlet}(\pi_j \mid \gamma_j), \quad q(z_{jn}) = \mathrm{Categorical}(z_{jn} \mid \zeta_{jn}),$$
where $\delta_{\bar\beta}(\beta)$ denotes a degenerate distribution at the point $\bar\beta$.¹
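To illustrate the stick-breaking construction of Sec. 2.1, here is a minimal NumPy sketch (ours, not the paper's code; the truncation to K sticks is an illustrative assumption) of a truncated draw $\beta \sim \mathrm{GEM}(\gamma)$, with the leftover mass kept as a (K + 1)-th entry, mirroring the (K + 1)-dimensional representation of $\bar\beta$ used in the updates below:

```python
import numpy as np

def gem_stick_breaking(gamma, K, rng):
    """Truncated draw of global weights beta ~ GEM(gamma).

    Breaks a unit stick K times; the remaining mass is returned as the
    (K+1)-th entry, standing in for all unused topics.
    """
    v = rng.beta(1.0, gamma, size=K)                       # stick proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)))
    beta = np.empty(K + 1)
    beta[:K] = v * remaining[:K]     # beta_k = v_k * prod_{l<k} (1 - v_l)
    beta[K] = remaining[K]           # leftover mass of the infinite tail
    return beta

rng = np.random.default_rng(0)
beta = gem_stick_breaking(gamma=1.0, K=10, rng=rng)
print(beta, beta.sum())              # the weights sum to 1
```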
In our update derivations below, we use $\zeta_{jw}$ to denote the shared $\zeta_{jn}$ for all word tokens in document j of type w. Selection of an appropriate truncation strategy is crucial to the accuracy of variational methods for nonparametric models. Here, we truncate the topic indicator distributions by fixing $q(z_{jn} = k) = 0$ for $k > K$, where K is a threshold which varies dynamically in our later algorithms. With this assumption, the topic distributions with indices greater than K are conditionally independent of the observed data; we may thus ignore them and tractably update the remaining parameters with respect to the true, infinite model. A similar truncation has been previously used in the context of an otherwise more complex collapsed variational method [3]. Desirably, this truncation is nested, such that increasing K always gives potentially improved bounds, but does not require the computation of infinite sums, as in [16]. In contrast, approximations based on truncations of the stick-breaking topic frequency prior [2, 4] are not nested, and their artifactual placement of extra mass on the final topic K is less suitable for our split-merge online variational inference.

Via standard convexity arguments [14], we lower bound the marginal log likelihood of the observed data using the expected complete-data log likelihood and the entropy of the variational distribution,
$$\mathcal{L}(q) \overset{\mathrm{def}}{=} \mathbb{E}_q[\log p(\phi, \beta, \pi, z, w \mid \eta, \gamma, \alpha)] - \mathbb{E}_q[\log q(\phi, \pi, z \mid \lambda, \gamma, \zeta)]$$
$$= \mathbb{E}_q[\log p(w \mid z, \phi)] + \mathbb{E}_q[\log p(z \mid \pi)] + \mathbb{E}_q[\log p(\pi \mid \alpha\beta)] + \mathbb{E}_q[\log p(\beta \mid \gamma)] + \mathbb{E}_q[\log p(\phi \mid \eta)] - \mathbb{E}_q[\log q(z \mid \zeta)] - \mathbb{E}_q[\log q(\pi \mid \gamma)] - \mathbb{E}_q[\log q(\phi \mid \lambda)]$$
$$= \sum_{j=1}^{D} \Big\{ \mathbb{E}_q[\log p(w_j \mid z_j, \phi)] + \mathbb{E}_q[\log p(z_j \mid \pi_j)] + \mathbb{E}_q[\log p(\pi_j \mid \alpha\beta)] - \mathbb{E}_q[\log q(z_j \mid \zeta_j)] - \mathbb{E}_q[\log q(\pi_j \mid \gamma_j)] + \frac{1}{D}\big( \mathbb{E}_q[\log p(\beta \mid \gamma)] + \mathbb{E}_q[\log p(\phi \mid \eta)] - \mathbb{E}_q[\log q(\phi \mid \lambda)] \big) \Big\}, \qquad (2)$$
and maximize this quantity by coordinate ascent on the variational parameters. The expectations are with respect to the variational distribution. Each expectation depends on only a subset of the variational parameters; we leave off particular subscripts for notational clarity. Note that the expansion of the variational lower bound in (2) contains all terms inside a summation over documents. This is the key observation that allowed [5] to develop an online inference algorithm for LDA. A full expansion of the variational objective is given in the supplemental material. Taking derivatives of $\mathcal{L}(q)$ with respect to each of the variational parameters yields the following updates:
$$\zeta_{jwk} \propto \exp\{ \mathbb{E}_q[\log \phi_{kw}] + \mathbb{E}_q[\log \pi_{jk}] \} \qquad (3)$$
$$\gamma_{jk} \leftarrow \alpha \bar\beta_k + \textstyle\sum_{w=1}^{W} n_{w(j)}\, \zeta_{jwk} \qquad (4)$$
$$\lambda_{kw} \leftarrow \eta + \textstyle\sum_{j=1}^{D} n_{w(j)}\, \zeta_{jwk}. \qquad (5)$$
Here, $n_{w(j)}$ is the number of times word w appears in document j. The expectations in (3) are
$$\mathbb{E}_q[\log \pi_{jk}] = \psi(\gamma_{jk}) - \psi(\textstyle\sum_i \gamma_{ji}), \qquad \mathbb{E}_q[\log \phi_{kw}] = \psi(\lambda_{kw}) - \psi(\textstyle\sum_i \lambda_{ki}),$$
where $\psi(x)$ is the digamma function, the first derivative of the log of the gamma function. In evaluating our objective, we represent $\bar\beta$ as a (K + 1)-dimensional vector containing the probabilities of the first K topics, and the total mass of all other topics. While $\bar\beta$ cannot be optimized in closed form, it can be updated via gradient-based methods; we use a variant of L-BFGS. Drawing a parallel between variational inference and the expectation maximization (EM) algorithm, we label the document-specific updates of $(\zeta_j, \gamma_j)$ the E-step, and the corpus-wide updates of $(\lambda, \bar\beta)$ the M-step.

¹ We expect $\beta$ to have small posterior variance in large datasets, and using a point estimate $\bar\beta$ simplifies variational derivations for our direct assignment formulation. As empirically explored for the HDP-PCFG [15], updates to the global topic weights have much less predictive impact than improvements to topic distributions.
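A minimal NumPy/SciPy sketch (our own; the function names, initialization, and data layout are assumptions) of the coordinate-ascent updates (3)-(5), using type-level counts $n_{w(j)}$:

```python
import numpy as np
from scipy.special import digamma

def e_step_document(counts, lam, beta_bar, alpha, n_iter=50):
    """Updates (3)-(4) for one document.

    counts:   (W,) word-type counts n_w(j) for document j
    lam:      (K, W) topic Dirichlet parameters
    beta_bar: (K+1,) point estimate of the global topic weights
    Returns gamma (K,) and zeta (T, K) over the T types present in the doc.
    """
    K, W = lam.shape
    Elog_phi = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))  # (K, W)
    types = np.flatnonzero(counts)
    gam = alpha * beta_bar[:K] + counts.sum() / K                      # init
    for _ in range(n_iter):
        Elog_pi = digamma(gam) - digamma(gam.sum())                    # (K,)
        log_zeta = Elog_phi[:, types].T + Elog_pi                      # (T, K)
        log_zeta -= log_zeta.max(axis=1, keepdims=True)
        zeta = np.exp(log_zeta)
        zeta /= zeta.sum(axis=1, keepdims=True)                        # update (3)
        gam = alpha * beta_bar[:K] + counts[types] @ zeta              # update (4)
    return gam, zeta, types

def m_step(all_counts, all_zetas, all_types, eta, K, W):
    """Update (5): corpus-wide topic parameters from all documents."""
    lam = np.full((K, W), eta)
    for counts, zeta, types in zip(all_counts, all_zetas, all_types):
        lam[:, types] += (counts[types][:, None] * zeta).T
    return lam
```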
2.3 Online Variational Inference

Batch variational inference requires a full pass through the data at each iteration, making it computationally infeasible for large datasets and impossible for streaming data. To remedy this, we adapt and improve recent work on online variational inference algorithms [4, 5]. The form of the lower bound in (2), as a scaled expectation with respect to the document collection, suggests an online learning algorithm. Given a learning rate $\rho_t$ satisfying $\sum_{t=0}^{\infty} \rho_t = \infty$ and $\sum_{t=0}^{\infty} \rho_t^2 < \infty$, we can optimize the variational objective stochastically. Each update begins by sampling a "mini-batch" of documents S, of size |S|. After updating the mini-batch of document-specific parameters $(\zeta_j, \gamma_j)$ by iterating (3, 4), we update the corpus-wide parameters as
$$\lambda_{kw} \leftarrow (1 - \rho_t)\lambda_{kw} + \rho_t \hat\lambda_{kw}, \qquad (6)$$
$$\bar\beta_k \leftarrow (1 - \rho_t)\bar\beta_k + \rho_t \hat\beta_k, \qquad (7)$$
where $\hat\lambda_{kw}$ is a set of sufficient statistics for topic k, computed from a noisy estimate of (5):
$$\hat\lambda_{kw} = \eta + \frac{D}{|S|} \sum_{j \in S} n_{w(j)}\, \zeta_{jwk}. \qquad (8)$$
The candidate topic weights $\hat\beta$ are found via gradient-based optimization on S. The resulting inference algorithm is similar to conventional batch methods, but is applicable to streaming, big data.

3 Split-Merge Updates for Online Variational Inference

We develop a data-driven split-merge algorithm for online variational inference for the HDP, referred to as oHDP-SM. The algorithm dynamically expands and contracts the truncation level K by splitting and merging topics during specialized moves which are interleaved with standard online variational updates. The resulting model truly allows the number of topics to grow with the data. As such, we do not have to employ the technique of [4, 3] and other truncated variational approaches of setting K above the expected number of topics and relying on the inference to infer a smaller number. Instead, we initialize with small K and let the inference discover new topics as it progresses, similar to the approach used in [17]. One can see how this property would be desirable in an online setting, as documents seen after many inference steps may still create new topics.

3.1 Split: Creation of New Topics

Given the result of analyzing one mini-batch, $q^* = (\{\zeta_j, \gamma_j\}_{j=1}^{|S|}, \lambda, \bar\beta)$, and the corresponding value of the lower bound $\mathcal{L}(q^*)$, we consider splitting topic k into two topics $k', k''$.² The split procedure proceeds as follows: (1) initialize all variational posteriors to break symmetry between the new topics, using information from the data; (2) refine the new variational posteriors using a restricted iteration; (3) accept or reject the split via the change in variational objective value.

Initialize new variational posteriors. To break symmetry, we initialize the new topic posteriors $(\lambda_{k'}, \lambda_{k''})$ and topic weights $(\bar\beta_{k'}, \bar\beta_{k''})$ using sufficient statistics from the previous iteration:
$$\lambda_{k'} = (1 - \rho_t)\lambda_k, \quad \lambda_{k''} = \rho_t \hat\lambda_k, \quad \bar\beta_{k'} = (1 - \rho_t)\bar\beta_k, \quad \bar\beta_{k''} = \rho_t \hat\beta_k.$$
Intuitively, we expect the sufficient statistics to provide insight into how a topic was actually used during the E-step. The minibatch-specific parameters $\{\zeta_j, \gamma_j\}_{j=1}^{|S|}$ are then initialized as follows:
$$\zeta_{jwk'} = \omega_k \zeta_{jwk}, \quad \gamma_{jk'} = \omega_k \gamma_{jk}, \quad \zeta_{jwk''} = (1 - \omega_k)\zeta_{jwk}, \quad \gamma_{jk''} = (1 - \omega_k)\gamma_{jk},$$
with the weights defined as $\omega_k = \bar\beta_{k'}/(\bar\beta_{k'} + \bar\beta_{k''})$.

² Technically, we replace topic k with topic k′ and add k″ as a new topic. In practice, we found that the order of topics in the global stick-breaking distribution had little effect on overall algorithm performance.
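A minimal sketch (ours; the array layout and names are assumptions, and handling of the leftover-mass entry of $\bar\beta$ is omitted) of this symmetry-breaking split initialization:

```python
import numpy as np

def split_topic(lam, beta_bar, lam_hat_k, beta_hat_k, k, rho_t):
    """Replace topic k by k' and append k'' (symmetry-breaking split init).

    lam:        (K, W) topic Dirichlet parameters
    beta_bar:   (K,) point estimates of the global topic weights
    lam_hat_k:  (W,) minibatch sufficient statistics for topic k
    beta_hat_k: scalar candidate weight for topic k from the minibatch
    """
    lam_kp, lam_kpp = (1.0 - rho_t) * lam[k], rho_t * lam_hat_k
    b_kp, b_kpp = (1.0 - rho_t) * beta_bar[k], rho_t * beta_hat_k

    lam = np.vstack([lam, lam_kpp[None]])     # k'' appended as a new topic
    lam[k] = lam_kp                           # k' replaces the old topic k
    beta_bar = np.append(beta_bar, b_kpp)
    beta_bar[k] = b_kp

    omega = b_kp / (b_kp + b_kpp)             # used to divide zeta_jwk, gamma_jk
    return lam, beta_bar, omega
```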
Algorithm 1 Restricted iteration
1: initialize $(\lambda_\tau, \bar\beta_\tau)$ for $\tau \in \{k', k''\}$
2: for $j \in S$ do
3:   initialize $(\zeta_{j\tau}, \gamma_{j\tau})$ for $\tau \in \{k', k''\}$
4:   while not converged do
5:     update $(\zeta_{j\tau}, \gamma_{j\tau})$ for $\tau \in \{k', k''\}$ using (3, 4)
6:   end while
7:   update $(\lambda_\tau, \bar\beta_\tau)$ for $\tau \in \{k', k''\}$ using (6, 7)
8: end for

Restricted iteration. After initializing the variational parameters for the new topics, we update them through a restricted iteration of online variational inference. The restricted iteration consists of restricted analogues of both the E-step and the M-step, where all parameters except those for the new topics are held constant. This procedure is similar to, and inspired by, the "partial E-step" for split-merge EM [10] and restricted Gibbs updates for split-merge MCMC methods [7]. All values of $\zeta_{jw\tau}$ and $\gamma_{j\tau}$, $\tau \notin \{k', k''\}$, remain unchanged. It is important to note that even though these values are not updated, they are still used in the calculations for both the variational expectation of $\pi_j$ and the normalization of $\zeta$. In particular,
$$\zeta_{jwk'} = \frac{\exp\{\mathbb{E}_q[\log \phi_{k'w}] + \mathbb{E}_q[\log \pi_{jk'}]\}}{\sum_{\tau \in T} \exp\{\mathbb{E}_q[\log \phi_{\tau w}] + \mathbb{E}_q[\log \pi_{j\tau}]\}}, \qquad \mathbb{E}_q[\log \pi_{jk'}] = \psi(\gamma_{jk'}) - \psi(\textstyle\sum_{k \in T} \gamma_{jk}),$$
where T is the original set of topics, minus k, plus k′ and k″. The expected log word probabilities $\mathbb{E}_q[\log \phi_{k'w}]$ and $\mathbb{E}_q[\log \phi_{k''w}]$ are computed using the newly updated $\lambda$ values.

Evaluate Split Quality. Let $\zeta^{\mathrm{split}}$ for minibatch S be $\zeta$ as defined above, but with $\zeta_{jwk}$ replaced by the $\zeta_{jwk'}$ and $\zeta_{jwk''}$ learned in the restricted E-step. Let $\gamma^{\mathrm{split}}$, $\lambda^{\mathrm{split}}$ and $\bar\beta^{\mathrm{split}}$ be defined similarly. Now we have a new model state $q^{\mathrm{split}(k)} = (\{\zeta_j^{\mathrm{split}}, \gamma_j^{\mathrm{split}}\}_{j=1}^{|S|}, \lambda^{\mathrm{split}}, \bar\beta^{\mathrm{split}})$. We calculate $\mathcal{L}(q^{\mathrm{split}(k)})$, and if $\mathcal{L}(q^{\mathrm{split}(k)}) > \mathcal{L}(q^*)$, we update the model state $q^* \leftarrow q^{\mathrm{split}(k)}$, accepting the split. If $\mathcal{L}(q^{\mathrm{split}(k)}) < \mathcal{L}(q^*)$, then we go back and test another split, until all splits are tested. In practice we limit the maximum number of allowed splits in each iteration to a small constant. If we wish to allow the model to expand the number of topics more quickly, we can increase this number. Finally, it is important to note that all aspects of the split procedure are driven by the data: the new topics are initialized using data-driven proposals, refined by re-running the variational E-step, and accepted based on an unbiased estimate of the change in the variational objective.

3.2 Merge: Removal of Redundant Topics

Consider a candidate merge of two topics, k′ and k″, into a new topic k. For batch variational methods, it is straightforward to determine whether such a merge will increase or decrease the variational objective by combining all parameters for all documents,
$$\zeta_{jwk} = \zeta_{jwk'} + \zeta_{jwk''}, \quad \gamma_{jk} = \gamma_{jk'} + \gamma_{jk''}, \quad \lambda_k = \lambda_{k'} + \lambda_{k''}, \quad \bar\beta_k = \bar\beta_{k'} + \bar\beta_{k''},$$
and computing the difference in the variational objective before and after the merge. Because many terms cancel, computing this bound change is fairly inexpensive, but it can still be computationally infeasible to consider all pairs of topics for large K. Instead, we identify potential merge candidates by looking at the sample covariance of the $\gamma_j$ vectors across the corpus (or minibatch). Topics with positive covariance above a certain threshold have the quantitative effects of their merge evaluated. Intuitively, if there are two copies of a topic, or a topic is split into two pieces, they should tend to be used together, and therefore have positive covariance.
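A minimal sketch (ours; names and the covariance threshold are assumptions) of the covariance-based candidate selection and parameter combination just described:

```python
import numpy as np

def merge_candidates(gam_batch, thresh=0.0):
    """Propose merges: topic pairs whose document weights co-vary positively.

    gam_batch: (n_docs, K) matrix of per-document Dirichlet parameters gamma_j.
    Returns index pairs (k1, k2) with sample covariance above `thresh`.
    """
    cov = np.cov(gam_batch.T)                 # (K, K) sample covariance
    K = cov.shape[0]
    return [(a, b) for a in range(K) for b in range(a + 1, K) if cov[a, b] > thresh]

def merge_topics(lam, beta_bar, gam, k1, k2):
    """Combine topics k1, k2 into one topic; the parameters simply add."""
    keep = [i for i in range(lam.shape[0]) if i not in (k1, k2)]
    lam = np.vstack([lam[keep], (lam[k1] + lam[k2])[None]])
    beta_bar = np.append(beta_bar[keep], beta_bar[k1] + beta_bar[k2])
    gam = np.hstack([gam[:, keep], (gam[:, k1] + gam[:, k2])[:, None]])
    return lam, beta_bar, gam
```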
For consistency in notation, we call the model state with topics k′ and k″ merged $q^{\mathrm{merge}(k',k'')}$. Combining this merge procedure with the previous split proposals leads to the online variational method of Algorithm 2. In an online setting, we can only compute unbiased noisy estimates of the true difference in the variational objective; split or merge moves that increase the expected variational objective are not guaranteed to do so for the objective evaluated over the entire corpus. The uncertainty associated with the online method can be mitigated to some extent by using large minibatches. Confidence intervals for the expected change in the variational objective can be computed, and might be useful in a more sophisticated acceptance rule. Note that our usage of a nested family of variational bounds is key to the accuracy and stability of our split-merge acceptance rules.

Algorithm 2 Online variational inference for the HDP + split-merge
1: initialize $(\lambda, \bar\beta)$
2: for t = 1, 2, . . . do
3:   for $j \in$ minibatch S do
4:     initialize $(\zeta_j, \gamma_j)$
5:     while not converged do
6:       update $(\zeta_j, \gamma_j)$ using (3, 4)
7:     end while
8:   end for
9:   for pairs of topics $\{k', k''\} \in K \times K$ with $\mathrm{Cov}(\gamma_{jk'}, \gamma_{jk''}) > 0$ do
10:    if $\mathcal{L}(q^{\mathrm{merge}(k',k'')}) > \mathcal{L}(q)$ then
11:      $q \leftarrow q^{\mathrm{merge}(k',k'')}$
12:    end if
13:  end for
14:  update $(\lambda, \bar\beta)$ using (6, 7)
15:  for k = 1, 2, . . . , K do
16:    compute $\mathcal{L}(q^{\mathrm{split}(k)})$ via restricted iteration
17:    if $\mathcal{L}(q^{\mathrm{split}(k)}) > \mathcal{L}(q)$ then
18:      $q \leftarrow q^{\mathrm{split}(k)}$
19:    end if
20:  end for
21: end for

4 Experimental Results

To demonstrate the effectiveness of our split-merge moves, we compare three algorithms: batch variational inference (bHDP), online variational inference without split-merge (oHDP), and online variational inference with split-merge (oHDP-SM). On the NIPS corpus we also compare these three methods to collapsed Gibbs sampling (CGS) and the CRF-style oHDP model (oHDP-CRF) proposed by [4].³ We test the models on one synthetic and two real datasets:

Bars. A 20-topic bars dataset of the type introduced in [18], where topics can be viewed as bars on a 10 × 10 grid. The vocabulary size is 100, with a training set of 2000 documents and a test set of 200 documents, 250 words per document.

NIPS. 1,740 documents from the Neural Information Processing Systems conference proceedings, 1988-2000. The vocabulary size is 13,649, and there are 2.3 million tokens in total. We randomly divide the corpus into a 1,392-document training set and a 348-document test set.

New York Times. The New York Times Annotated Corpus⁴ consists of over 1.8 million articles appearing in the New York Times between 1987 and 2007. The vocabulary is pruned to 8,000 words. We hold out a randomly selected subset of 5,000 test documents, and use the remainder for training.

All values of K given for oHDP-SM models are initial values; the actual truncation levels fluctuate during inference. While the truncation level K is different from the actual number of topics assigned non-negligible mass, the split-merge model tends to merge away unused topics, so these numbers are usually fairly close. Hyperparameters are initialized to consistent values across all algorithms and datasets, and learned via Newton-Raphson updates (or, in the case of CGS, resampled). We use the same learning rate across all online algorithms. As suggested by [4], we set $\rho_t = (\tau + t)^{-\kappa}$ where $\tau = 1$, $\kappa = 0.5$. Empirically, we found that slower learning rates could result in greatly reduced performance, across all models and datasets.

³ For CGS we use the code available at http://www.gatsby.ucl.ac.uk/~ywteh/research/npbayes/npbayesr21.tgz, and for oHDP-CRF we use the code at http://www.cs.princeton.edu/~chongw/software/onlinehdp.tar.gz.
⁴ http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC2008T19
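A minimal sketch (ours; names are illustrative) of this learning-rate schedule and the stochastic M-step updates (6)-(8) it drives:

```python
import numpy as np

def rho(t, tau=1.0, kappa=0.5):
    """Learning rate rho_t = (tau + t)^(-kappa). Note kappa in (0.5, 1]
    satisfies the Robbins-Monro conditions of Sec. 2.3; the kappa = 0.5
    used in the experiments above sits at the boundary."""
    return (tau + t) ** (-kappa)

def online_m_step(lam, beta_bar, stats, beta_hat, eta, D, batch_size, t):
    """Blend noisy minibatch estimates into the corpus-wide parameters.

    stats: (K, W) array of sum_{j in S} n_w(j) * zeta_jwk for the minibatch.
    beta_hat: candidate global weights fit on the minibatch.
    """
    r = rho(t)
    lam_hat = eta + (D / batch_size) * stats          # update (8)
    lam = (1.0 - r) * lam + r * lam_hat               # update (6)
    beta_bar = (1.0 - r) * beta_bar + r * beta_hat    # update (7)
    return lam, beta_bar
```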
To compare algorithm performance, we use per-word held-out likelihood, similarly to the metrics of [3, 19, 4]. We randomly split each test document in $\mathcal{D}_{\mathrm{test}}$ into 80%-20% pieces, $w_{j1}$ and $w_{j2}$. Then, using $\bar\phi$ as the variational expectation of the topics from training, we learn $\bar\pi_j$ on $w_{j1}$ and approximate the probability of $w_{j2}$ as $\prod_{w \in w_{j2}} \sum_k \bar\pi_{jk} \bar\phi_{kw}$. The overall test metric is then
$$E = \frac{\sum_{j \in \mathcal{D}_{\mathrm{test}}} \sum_{w \in w_{j2}} \log \sum_k \bar\pi_{jk} \bar\phi_{kw}}{\sum_{j \in \mathcal{D}_{\mathrm{test}}} |w_{j2}|}.$$

4.1 Bars

For the bars data, we initialize eight oHDP-SM runs with K = {2, 5, 10, 20, 40, 50, 80, 100}, eight runs of oHDP with K = 20, and eight runs with K = 50. As seen in Figure 2(a), the oHDP algorithm converges to local optima, while the oHDP-SM runs all converge to the global optimum. More importantly, all split-merge methods converge to the correct number of topics, while oHDP uses either too few or too many topics. Note that the data-driven split-merge procedure allows splitting and merging of topics to mostly cease once the inference has converged (Figure 2(d)).

4.2 NIPS

We compare oHDP-SM, oHDP, bHDP, oHDP-CRF, and CGS in Figure 2. Shown are two runs of oHDP-SM with K = {100, 300}, two runs each of oHDP and bHDP with K = {300, 1000}, and one run each of oHDP-CRF and CGS with K = 300. All the runs displayed are the best runs from a larger sample of trials. Since oHDP and bHDP will use only a subset of topics under the truncation, setting K much higher results in comparable numbers of topics as oHDP-SM. We set |S| = 200 for the online algorithms, and run all methods for approximately 40 hours of CPU time. The non-split-merge methods reach poor local optima relatively quickly, while the split-merge algorithms continue to improve. Notably, both oHDP-CRF and CGS perform much worse than any of our methods. It appears that the CRF model performs very poorly for small datasets, and CGS reaches a mode quickly but does not mix between modes. Even though the split-merge algorithms improve in part by adding topics, they are using their topics much more effectively (Figure 2(h)). We speculate that, for the NIPS corpus especially, the reason that models achieve better predictive likelihoods with more topics is due to the bursty properties of text data [20]. Figure 3 illustrates the topic refinement and specialization which occurs in successful split proposals.

4.3 New York Times

As batch variational methods and samplers are not feasible for such a large dataset, we compare two runs of oHDP with K = {300, 500} to a run of oHDP-SM with K = 200 initial topics. We also use a larger minibatch size of |S| = 10,000; split-merge acceptance decisions can sometimes be unstable with overly small minibatches. Figure 2(c) shows an inherent problem with oHDP for very large datasets: when truncated to K = 500, the algorithm uses all of its available topics and exhibits overfitting. For oHDP-SM, however, predictive likelihood improves over a substantially longer period and overfitting is greatly reduced.
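For concreteness, a minimal sketch (ours; the data layout is an assumption) of the held-out metric E defined at the start of this section:

```python
import numpy as np

def per_word_heldout(docs_w2, pi_bar, phi_bar):
    """Per-word held-out log likelihood E.

    docs_w2: list of (W,) count arrays for the 20% document halves w_j2
    pi_bar:  (n_docs, K) expected document topic weights, fit on w_j1
    phi_bar: (K, W) expected topic-word distributions from training
    """
    total_ll, total_words = 0.0, 0
    for j, counts in enumerate(docs_w2):
        types = np.flatnonzero(counts)
        word_probs = pi_bar[j] @ phi_bar[:, types]     # sum_k pi_jk * phi_kw
        total_ll += counts[types] @ np.log(word_probs)
        total_words += counts.sum()
    return total_ll / total_words
```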
5 Discussion

We have developed a novel split-merge online variational algorithm for the hierarchical DP. This approach leads to more accurate models and better predictive performance, as well as a model that is able to adapt the number of topics more freely than conventional approximations based on fixed truncations. Our moves are similar in spirit to split-merge samplers, but by evaluating their quality stochastically using streaming data, we can rapidly adapt model structure to large-scale datasets. While many papers have tried to improve conventional mean field methods via higher-order variational expansions [21], local optima can make the resulting algorithms compare unfavorably to Monte Carlo methods [3]. Here we pursue the complementary goal of more robust, scalable optimization of simple variational objectives. Generalization of our approach to more complex hierarchies of DPs, or to basic DP mixtures, is feasible. We believe similar online learning methods will prove effective for the combinatorial structures of other Bayesian nonparametric models.

Acknowledgments

We thank Dae Il Kim for his assistance with the experimental results.

Figure 2: Trace plots of heldout likelihood and number of topics used, in nine panels (a)-(i) spanning the Bars, NIPS, and New York Times corpora. Across all datasets, common color indicates common algorithm, while for NIPS and New York Times, line type indicates different initializations. Top: Test log likelihood for each dataset. Middle: Number of topics used per iteration. Bottom: A plot of per-word log likelihood against number of topics used. Note particularly plot (h), where for every cardinality of used topics shown, there is a split-merge method outperforming a conventional method.
Figure 3: The evolution of a split topic, shown as top-word lists at 40,000-document intervals from 40,000 to 240,000 documents (final top words: "patterns, pattern, cortex, responses, types" for the first topic; "neuronal, dendritic, postsynaptic, fire, cortex" for the second). The left column shows the topic directly prior to the split. After 240,000 more documents have been analyzed, subtle differences become apparent: the top topic covers terms relating to general neuronal behavior, while the bottom topic deals more specifically with neuron firing.

References

[1] Y. W. Teh, M. Jordan, and M. Beal. Hierarchical Dirichlet processes. JASA, 2006.
[2] D. Blei and M. Jordan. Variational methods for Dirichlet process mixtures. Bayesian Analysis, 1:121-144, 2005.
[3] Y. W. Teh, K. Kurihara, and M. Welling. Collapsed variational inference for HDP. NIPS, 2008.
[4] C. Wang, J. Paisley, and D. Blei. Online variational inference for the hierarchical Dirichlet process. AISTATS, 2011.
[5] M. Hoffman, D. Blei, and F. Bach. Online learning for latent Dirichlet allocation. NIPS, 2010.
[6] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[7] S. Jain and R. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13:158-182, 2004.
[8] D. B. Dahl. Sequentially-allocated merge-split sampler for conjugate and nonconjugate Dirichlet process mixture models. Technical report, Texas A&M University, 2005.
[9] C. Wang and D. Blei. A split-merge MCMC algorithm for the hierarchical Dirichlet process. ArXiv e-prints, January 2012.
[10] N. Ueda, R. Nakano, Z. Ghahramani, and G. Hinton. SMEM algorithm for mixture models. Neural Computation, 2000.
[11] K. Kurihara and M. Welling. Bayesian K-means as a "Maximization-Expectation" algorithm. SIAM Conference on Data Mining (SDM06), 2006.
[12] N. Ueda and Z. Ghahramani. Bayesian model search for mixture models based on optimizing variational bounds. Neural Networks, 15, 2002.
[13] Z. Ghahramani and M. Beal. Variational inference for Bayesian mixtures of factor analysers. NIPS, 2000.
[14] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. Introduction to variational methods for graphical models. Machine Learning, 1999.
[15] P. Liang, S. Petrov, D. Klein, and M. Jordan. The infinite PCFG using hierarchical Dirichlet processes. Empirical Methods in Natural Language Processing, 2007.
[16] K. Kurihara, M. Welling, and N. Vlassis. Accelerated variational Dirichlet process mixtures. NIPS, 2007.
[17] D. Blei and C. Wang. Variational inference for the nested Chinese restaurant process. NIPS, 2009.
[18] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS, 101:5228-5235, 2004.
[19] A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh. On smoothing and inference for topic models. UAI, 2009.
[20] G. Doyle and C. Elkan. Accounting for word burstiness in topic models. ICML, 2009.
[21] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1:1-305, 2008.
Fiedler Random Fields: A Large-Scale Spectral Approach to Statistical Network Modeling

Antonino Freno, Mikaela Keller, Marc Tommasi
INRIA Lille – Nord Europe
40 avenue Halley – Bât. A – Park Plaza
59650 Villeneuve d'Ascq (France)
{antonino.freno, mikaela.keller, marc.tommasi}@inria.fr

Abstract

Statistical models for networks have typically been committed to strong prior assumptions concerning the form of the modeled distributions. Moreover, the vast majority of currently available models are explicitly designed for capturing some specific graph properties (such as power-law degree distributions), which makes them unsuitable for application to domains where the behavior of the target quantities is not known a priori. The key contribution of this paper is twofold. First, we introduce the Fiedler delta statistic, based on the Laplacian spectrum of graphs, which allows us to dispense with any parametric assumption concerning the modeled network properties. Second, we use the defined statistic to develop the Fiedler random field model, which allows for efficient estimation of edge distributions over large-scale random networks. After analyzing the dependence structure involved in Fiedler random fields, we estimate them over several real-world networks, showing that they achieve a much higher modeling accuracy than other well-known statistical approaches.

1 Introduction

Arising from domains as diverse as bioinformatics and web mining, large-scale data exhibiting network structure are becoming increasingly available. Network models are commonly used to represent the relations among data units and their structural interactions. Recent studies, especially targeted at social network modeling, have focused on random graph models of those networks. In the simplest form, a random network is a configuration of binary random variables Xuv such that the value of Xuv stands for the presence or absence of a link between nodes u and v in the network. The general idea underlying random graph modeling is that network configurations are generated by a stochastic process governed by specific probability laws, so that different models correspond to different families of distributions over graphs. The simplest random graph model is the Erdős–Rényi (ER) model [1], which assumes that the probability of observing a link between two nodes in a given graph is constant for any pair of nodes in that graph, and is independent of which other edges are being observed. In preferential attachment models [2], the probability of linking to any specified node in a graph is proportional to the degree of that node, leading to "rich get richer" effects. Small-world models [3] try instead to capture phenomena often observed in real networks, such as small diameters and high clustering coefficients. An attempt to model potentially complex dependencies between graph edges in the form of Gibbs-Boltzmann distributions is made by exponential random graph (ERG) models [4], which subsume the ER model as a special case. Finally, a recent attempt at modeling real networks through a stochastic generative process is made by Kronecker graphs [5], which try to capture phenomena such as heavy-tailed degree distributions and shrinking diameter properties while paying attention to the temporal dynamics of network growth.

(Author footnote: Université Charles de Gaulle – Lille 3, Domaine Universitaire du Pont de Bois – BP 60149, 59653 Villeneuve d'Ascq, France.)
While some of these models behave better than others in terms of computational tractability, one basic limitation affecting all of them is a parametric assumption concerning the probability laws underlying the observed network properties. In other words, currently available models of network structure assume that the shape of the probability distribution generating the network is known a priori. For example, typical formulations of ERG models assume that the building blocks of real networks are given by such structures as k-stars and k-triangles, with different weights assigned to different structures, whereas the preferential attachment model is committed to the assumption that the observed degree distributions obey a power law. In such frameworks, estimating the model from data reduces to fitting the model parameters, where the parametric form of the target distribution is fixed a priori. Clearly, in order for such models to deliver accurate estimates of the distributions at hand, their prior assumptions concerning the behavior of the target quantities must be satisfied by the given data. Unfortunately, this is something that we can rarely assess a priori. To date, the knowledge we have concerning large-scale real-world networks does not allow us to assess whether any particular parametric assumption captures the target generative process in depth, although some observed network properties may happen to be modeled fairly well.

The aim of this paper is twofold. On the one hand, we take a first step toward nonparametric modeling of random networks by developing a novel network statistic, which we call the Fiedler delta statistic. The Fiedler delta function allows us to model different graph properties at once in an extremely compact form. This statistic is based on the spectral analysis of the graph, and in particular on the smallest non-zero eigenvalue of the Laplacian matrix, which is known as the Fiedler value [6, 7]. On the other hand, we use the Fiedler delta statistic to define a Boltzmann distribution over graphs, leading to the Fiedler random field (FRF) model. Roughly speaking, for each binary edge variable Xuv, potentials in an FRF are functions of the change induced in the Fiedler value by flipping the value of Xuv, where the spectral decomposition is restricted to a suitable subgraph incident to nodes u, v. The intuition is that the information encapsulated in the Fiedler delta for Xuv gives a measure of the role of Xuv in determining the algebraic connectivity of its neighborhood. As a first step in the theoretical analysis of FRFs, we prove that these models allow us to capture edge correlations at any distance within a given neighborhood, hence defining a fairly general class of conditional independence structures over networks.

The paper is organized as follows. Sec. 2 reviews some theoretical background concerning the Laplacian spectrum of graphs. FRFs are then introduced in Sec. 3, where we also analyze their dependence structure and present an efficient approach for learning them from data. To avoid unwarranted prior assumptions concerning the statistical behavior of the Fiedler delta, potentials are modeled by non-linear functions, which we estimate from data by minimizing a contrastive divergence objective. FRFs are evaluated experimentally in Sec. 4, showing that they are well suited for large-scale estimation problems over real-world networks, while Sec. 5 draws some conclusions and sketches a few directions for further work.
2 Graphs, Laplacians, and eigenvalues

Let G = (V, E) be an undirected graph with n nodes. In the following we assume that the graph is unweighted, with adjacency matrix A. The degree du of a node u ∈ V is defined as the number of connections of u to other nodes, that is, du = |{v: {u, v} ∈ E}|. Accordingly, the degree matrix D of a graph G is the diagonal matrix with the vertex degrees d1, ..., dn on the diagonal. The main tools exploited by the random graph model proposed here are the graph Laplacian matrices. Different graph Laplacians have been defined in the literature. In this work, we consistently use the unnormalized graph Laplacian, given by L = D − A. Some basic facts related to the unnormalized Laplacian matrix can be summarized as follows [7]:

Proposition 1. The unnormalized graph Laplacian L of an undirected graph G has the following properties: (i) L is symmetric and positive semi-definite; (ii) the smallest eigenvalue of L is 0; (iii) L has n non-negative, real-valued eigenvalues 0 = λ1 ≤ ... ≤ λn; (iv) the multiplicity of the eigenvalue 0 of L equals the number of connected components in the graph, that is, λ1 = 0 and λ2 > 0 if and only if G is connected.

In the following, the (algebraic) multiplicity of an eigenvalue λi will be denoted by M(λi, G). If the graph has one single connected component, then M(0, G) = 1, and the second smallest eigenvalue λ2(G) > 0 is called, in this case, the Fiedler eigenvalue. The Fiedler eigenvalue provides insight into several graph properties: when there is a nontrivial spectral gap, i.e. λ2(G) is clearly separated from 0, the graph has good expansion properties, stronger connectivity, and rapid convergence of random walks in the graph. For example, it is known that λ2(G) ≤ e(G), where e(G) is the edge connectivity of the graph (i.e. the size of the smallest edge cut whose removal makes the graph disconnected [7]). Notice that if the graph has more than one connected component, then λ2(G) will also be equal to zero, thus implying that the graph is not connected. Without loss of generality, we abuse the term Fiedler eigenvalue to denote the smallest eigenvalue different from zero, regardless of the number of connected components. In this paper, by Fiedler value we mean the eigenvalue λk+1(G), where k = M(0, G).

For any pair of nodes u and v in a graph G = (V, E), we define two corresponding graphs G+uv and G−uv in the following way: G+uv = (V, E ∪ {{u, v}}) and G−uv = (V, E \ {{u, v}}). Clearly, we have that either G+uv = G or G−uv = G. A basic property concerning the Laplacian eigenvalues of G+uv and G−uv is the following [7, 8, 9]:

Lemma 1. If G+uv and G−uv are two graphs with n nodes, such that {u, v} ⊆ V, G+uv = (V, E ∪ {{u, v}}), and G−uv = (V, E \ {{u, v}}), then we have that: (i) Σ_{i=1}^{n} [λi(G+uv) − λi(G−uv)] = 2; (ii) λi(G+uv) ≥ λi(G−uv) for any i such that 1 ≤ i ≤ n.
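Lemma 1 is easy to check numerically. The following sketch (our own illustration, not part of the paper) builds the unnormalized Laplacian with numpy and verifies both properties on a small random graph and an arbitrary node pair:

    import numpy as np

    def unnormalized_laplacian(A):
        # L = D - A for a symmetric 0/1 adjacency matrix with zero diagonal.
        return np.diag(A.sum(axis=1)) - A

    def laplacian_spectrum(A):
        # Eigenvalues in ascending order (L is symmetric).
        return np.linalg.eigvalsh(unnormalized_laplacian(A))

    rng = np.random.default_rng(0)
    n = 6
    A = (rng.random((n, n)) < 0.4).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                               # symmetric, no self-loops
    u, v = 0, 1
    A_plus, A_minus = A.copy(), A.copy()
    A_plus[u, v] = A_plus[v, u] = 1.0         # G+_uv: edge forced in
    A_minus[u, v] = A_minus[v, u] = 0.0       # G-_uv: edge forced out
    lam_p = laplacian_spectrum(A_plus)
    lam_m = laplacian_spectrum(A_minus)
    print(np.sum(lam_p - lam_m))              # property (i): equals 2
    print(np.all(lam_p >= lam_m - 1e-9))      # property (ii): True

Property (i) holds because adding one edge raises the trace of L, and hence the eigenvalue sum, by exactly 2.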
3 Fiedler random fields

Fiedler random fields are introduced in Sec. 3.1, while in Secs. 3.2–3.3 we discuss their dependence structure and explain how to estimate them from data, respectively.

3.1 Probability distribution

Using the notions reviewed above, we define the Fiedler delta function Δλ2 in the following way:

Definition 1. Given a graph G, let k = M(0, G+uv). Then

    Δλ2(u, v, G) = λk+1(G+uv) − λk+1(G−uv)   if Xuv = 1,
    Δλ2(u, v, G) = λk+1(G−uv) − λk+1(G+uv)   otherwise.    (1)

In other words, Δλ2(u, v, G) is the variation in the Fiedler eigenvalue of the graph Laplacian that would result from flipping the value of Xuv in G. Concerning the range of the Fiedler delta function, we can easily prove the following proposition:

Proposition 2. For any graph G = (V, E) and any pair of nodes {u, v} such that Xuv = 1, we have that 0 ≤ Δλ2(u, v, G) ≤ 2.

Proof. Let k = M(0, G). The proposition follows straightforwardly from Lemma 1, given that Δλ2(u, v, G) = λk+1(G) − λk+1(G−uv).

We now proceed to define FRFs. Given a graph G = (V, E), for each (unordered) pair of nodes {u, v} such that u ≠ v, we take Xuv to denote a binary random variable such that Xuv = 1 if {u, v} ∈ E, and Xuv = 0 otherwise. Since the graph is undirected, Xuv = Xvu. We also say that a subgraph GS of G with edge set ES is incident to Xuv if {u, v} ⊆ ∪_{e ∈ ES} e. Then:

Definition 2. Given a graph G, let XG denote the set of random variables defined on G, i.e. XG = {Xuv : u ≠ v ∧ {u, v} ⊆ V}. For any Xuv ∈ XG, let Guv be a subgraph of G which is incident to Xuv, and let φuv be a two-place real-valued function with parameter vector θ. We say that the probability distribution of XG is a Fiedler random field if it factorizes as

    P(XG | θ) = (1 / Z(θ)) exp( Σ_{Xuv ∈ XG} φuv(Xuv, Δλ2(u, v, Guv); θ) )    (2)

where Z(θ) is the partition function.

In other words, an FRF is a Gibbs-Boltzmann distribution over graphs, with potential functions defined for each node pair {u, v} along with some neighboring subgraph Guv. In particular, in order to model the dependence of each variable Xuv on Guv, potentials take as arguments both the value of Xuv and the Fiedler delta corresponding to {u, v} in Guv. The idea is to treat the Fiedler delta statistic as a (real-valued) random variable defined over subgraph configurations, and to exploit this random variable as a compact representation of those configurations. This means that the dependence structure of an FRF is fixed by the particular choice of subgraphs Guv, so that the set XGuv \ {Xuv} makes Xuv independent of XG \ XGuv. Three fundamental questions are then the following. First, how do we fix the subgraph Guv for each pair of nodes {u, v}? Second, how do we choose a shape for the potential functions, so as to fully exploit the information contained in the Fiedler delta while avoiding unwarranted assumptions concerning their parametric form? Third, how does the Fiedler delta statistic behave with respect to the Markov dependence property for random graphs? One basic result related to the third question is presented in Sec. 3.2, while Sec. 3.3 will address the first two points.
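Definition 1 translates directly into code. The sketch below is ours: the tolerance used to count zero eigenvalues, and the sign convention for the Xuv = 0 branch, follow our reading of Eq. (1).

    import numpy as np

    def fiedler_delta(A, u, v, tol=1e-9):
        # Delta-lambda_2(u, v, G) per Definition 1 (tol is our choice).
        A_plus, A_minus = A.copy(), A.copy()
        A_plus[u, v] = A_plus[v, u] = 1.0
        A_minus[u, v] = A_minus[v, u] = 0.0
        def spectrum(M):
            return np.linalg.eigvalsh(np.diag(M.sum(axis=1)) - M)
        lam_p, lam_m = spectrum(A_plus), spectrum(A_minus)
        k = int(np.sum(lam_p < tol))           # k = M(0, G+_uv)
        delta = lam_p[k] - lam_m[k]            # lambda_{k+1}(G+) - lambda_{k+1}(G-)
        return delta if A[u, v] == 1 else -delta

    # On a triangle, removing one edge turns lambda_2 = 3 into lambda_2 = 1,
    # so the delta for any existing edge is 2 (the upper bound of Prop. 2).
    A = np.ones((3, 3)) - np.eye(3)
    print(fiedler_delta(A, 0, 1))              # 2.0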
3.2 Dependence structure

We first recall the definition of Markov dependence for random graphs [10]. Let N(Xuv) denote the set {Xwz : {w, z} ∈ E ∧ |{w, z} ∩ {u, v}| = 1}. Then:

Definition 3. A random graph G is said to be a Markov graph (or to have a Markov dependence structure) if, for any pair of variables Xuv and Xwz in G such that {u, v} ∩ {w, z} = ∅, we have that P(Xuv | Xwz, N(Xuv)) = P(Xuv | N(Xuv)).

Based on Def. 3, we say that the dependence structure of a random graph G is non-Markovian if, for disjoint pairs of nodes {u, v} and {w, z}, it does not imply that P(Xuv | Xwz, N(Xuv)) = P(Xuv | N(Xuv)), i.e. if it is consistent with the inequality P(Xuv | Xwz, N(Xuv)) ≠ P(Xuv | N(Xuv)). We can then prove the following proposition:

Proposition 3. There exist Fiedler random fields with non-Markovian dependence structure.

Proof sketch. Consider a graph G = (V, E) such that V = {u, v, w, z} and E = {{u, v}, {v, w}, {w, z}, {u, z}}. The proof relies on the following result [6]: if graphs G1 and G2 are, respectively, a path and a circuit of size n, then λ2(G1) = 2(1 − cos(π/n)) and λ2(G2) = 2(1 − cos(2π/n)). Since adding exactly one edge to a path of size 4 can yield a circuit of the same size, this property allows us to derive analytic forms for the Fiedler delta statistic in such graphs, showing that there exist parameterizations of φuv such that φuv(Xuv, Δλ2(u, v, G); θ) ≠ φuv(Xuv, Δλ2(u, v, GS); θ). This means that the dependence structure of G is non-Markovian.(1)

Note that the proof of Prop. 3 can be straightforwardly generalized to the dependence between two variables Xuv and Xwz in circuits/paths of arbitrary size n, since the expression used for the Fiedler eigenvalues of such graphs holds for any n. This fact suggests that FRFs allow us to model edge correlations at virtually any distance within G, provided that each subgraph Guv is chosen in such a way as to encompass the relevant correlation.

(1) For a complete proof, see the supplementary material.
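The closed forms used in the proof sketch are easy to confirm numerically; the snippet below (our own check, not from the paper) compares λ2 of a path and a circuit of size n = 4 against the formulas from [6]:

    import numpy as np

    def lambda2(A):
        # Second-smallest Laplacian eigenvalue (the Fiedler eigenvalue
        # for a connected graph).
        L = np.diag(A.sum(axis=1)) - A
        return np.linalg.eigvalsh(L)[1]

    n = 4
    path = np.zeros((n, n))
    for i in range(n - 1):
        path[i, i + 1] = path[i + 1, i] = 1
    circuit = path.copy()
    circuit[0, n - 1] = circuit[n - 1, 0] = 1    # close the path into a circuit

    print(lambda2(path), 2 * (1 - np.cos(np.pi / n)))         # both ~0.586
    print(lambda2(circuit), 2 * (1 - np.cos(2 * np.pi / n)))  # both 2.0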
3.3 Model estimation

The problem of learning an FRF from an observed network can be split into the task of estimating the potential functions once the network distribution has been factorized into a particular set of subgraphs, and the task of factorizing the distribution through a suitable set of subgraphs, which corresponds to estimating the dependence structure of the FRF. Here we focus on the problem of learning the FRF potentials, while suggesting a heuristic way to fix the dependence structure of the model. In order to estimate the FRF potentials, we need to specify on the one hand a suitable architecture for such functions, and on the other hand the objective function that we want to optimize. As a preliminary step, we tested experimentally a variety of shapes for the potential functions. The tests indicated the importance of avoiding limiting assumptions concerning the form of the potentials, which motivated us to model them by a feed-forward multilayer perceptron (MLP), due to its well-known capabilities of approximating functions of arbitrary shape [12]. In particular, throughout the applications described in this paper we use a simple MLP architecture with one hidden layer and hyperbolic tangent activation functions. Therefore, our parameter vector θ simply consists of the weights of the chosen MLP architecture. Notice that, as far as the estimation of potentials is concerned, any regression model offering approximation capabilities analogous to the MLP family could be used as well. Here, the only requirement is to avoid unwarranted prior assumptions with respect to the shape of the potential functions. In this respect, we take our approach to be genuinely nonparametric, since it does not require the parametric form of the target functions to be specified a priori in order to estimate them accurately.

Concerning the learning objective, the main difficulty we want to avoid is the complexity of computing the partition function involved in the Gibbs-Boltzmann distribution. The approach we adopt to this aim is to minimize a contrastive divergence objective [13]. If G = (V, E) is the network that we want to fit our model to, and Guv = (Vuv, Euv) is a subgraph of G such that {u, v} ⊆ Vuv, let G̃uv denote the graph that we obtain by resampling the value of Xuv in Guv according to the conditional distribution P̂(Xuv | xGuv \ {xuv}; θ) predicted by our model. In other words, G̃uv is the result of performing just one iteration of Gibbs sampling on Xuv using θ, where the configuration xGuv of Guv is used to initialize the (single-step) Markov chain. Then, our goal is to minimize the function ℓCD(θ; G), given by:

    ℓCD(θ; G) = log( (1 / Z(θ)) exp( Σ_{Xuv ∈ XG} φ(x̃uv, Δλ2(u, v, G̃uv); θ) ) ) − log P̂(xG | θ)
              = Σ_{Xuv ∈ XG} [ φ(x̃uv, Δλ2(u, v, G̃uv); θ) − φ(xuv, Δλ2(u, v, Guv); θ) ]    (3)

where φ is the function computed by our MLP architecture. The appeal of contrastive divergence learning is that, while it does not require computing the partition function, it is known to converge to points which are very close to maximum-likelihood solutions [14].

If we want our learning objective to be usable in the large-scale setting, then it is not feasible to sum over all node pairs {u, v} in the network, since the number of such pairs grows quadratically with |V|. In this respect, a straightforward approach for scaling to very large networks consists in sampling n objects from the set of all possible pairs of nodes, taking care that the sample contains a good balance between linked and unlinked pairs. Another issue we need to address concerns the way we sample a suitable set of subgraphs Gu1v1, ..., Gunvn for the selected pairs of nodes. Although different sampling techniques could be used in principle [15], our goal is to model correlations between each variable Xuv and some neighboring region Guv in G. Such a neighborhood should be large enough to make Δλ2(u, v, Guv) sufficiently informative with respect to the overall network, but also small enough to keep the spectral decomposition of Guv computationally tractable. Therefore, in order to sample Guv, we propose to draw Vuv by performing k "snowball waves" on G [16], using u and v as seeds, and then setting Euv to be the edge set induced by Vuv in G (see Algorithm 1 for the details). In this way, we can empirically tune the hyperparameter k in order to trade off the informativeness of Guv for the tractability of its spectral decomposition, where it is known that the complexity of computing Δλ2(u, v, Guv) is cubic in the number of nodes in Guv [17].

Algorithm 1 SampleSubgraph: Sampling a neighboring subgraph for a given node pair
Input: Undirected graph G = (V, E); node pair {u, v}; number k of snowball waves.
Output: Undirected graph Guv = (Vuv, Euv).
SampleSubgraph(G, {u, v}, k):
1. Vuv = {u, v}
2. for i = 1 to k
3.   Vuv = Vuv ∪ ⋃_{w ∈ Vuv} {z ∈ V : {w, z} ∈ E}
4. Euv = {{w, z} ∈ E : {w, z} ⊆ Vuv}
5. return (Vuv, Euv)

Once our training set D = ((xu1v1, Gu1v1), ..., (xunvn, Gunvn)) has been sampled, we learn the MLP weights by minimizing the objective ℓCD(θ; D), which we obtain from ℓCD(θ; G) by restricting the summation in Eq. 3 to the elements of D. Minimization is performed by iterative gradient descent, using standard backpropagation for updating the MLP weights.
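Algorithm 1 is short enough to transcribe directly. A plain-Python sketch follows (representing edges as tuples rather than two-element sets is our own choice):

    def sample_subgraph(V, E, u, v, k):
        # Direct transcription of Algorithm 1: k snowball waves from seeds u, v.
        adj = {x: set() for x in V}
        for w, z in E:
            adj[w].add(z)
            adj[z].add(w)
        V_uv = {u, v}
        for _ in range(k):
            V_uv = V_uv | {z for w in V_uv for z in adj[w]}
        E_uv = {(w, z) for (w, z) in E if w in V_uv and z in V_uv}
        return V_uv, E_uv

    # Example on a 6-node path graph, one snowball wave around the pair (0, 1):
    V = range(6)
    E = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)}
    print(sample_subgraph(V, E, 0, 1, k=1))
    # ({0, 1, 2}, {(0, 1), (1, 2)})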
4 Experimental evaluation

In order to investigate the empirical behavior of FRFs as models of large-scale networks, we design two groups of experiments (in link prediction and graph generation respectively), using collaboration networks drawn from the arXiv e-print repository (http://snap.stanford.edu/data/index.html), where nodes represent scientists and edges represent paper coauthorships. Some basic network statistics are reported in Table 1.

Link prediction. In the first kind of experiments, given a random network G = (V, E), our goal is to measure the accuracy of FRFs at estimating the conditional distribution of variables Xuv given the configuration of neighboring subgraphs Guv of G. This can be seen as a link prediction problem where only local information (given by Guv) can be used for predicting the presence of a link {u, v}. At the same time, we want to understand whether the overall network size (in terms of the number of nodes) has an impact on the number of training examples that will be necessary for FRFs to converge to stable prediction accuracy. Recall that FRFs are trained on a data sample D = ((xu1v1, Gu1v1), ..., (xunvn, Gunvn)), where n ≪ |V|(|V| − 1)/2. Given this, converging to stable predictions for values of n which do not depend on |V| is a crucial requirement for achieving large-scale applicability. We sample our training set D by first drawing n node pairs from V in such a way that linked and unlinked pairs from G are equally represented in D, and then extracting the corresponding subgraphs Guivi by Algorithm 1 using one snowball wave. We then learn our model from D as described in Sec. 3.3. In all the experiments reported in this work, the number of hidden units in our MLP architecture is set to 5. A test set T containing m objects ((xu1v1, GS1), ..., (xumvm, GSm)) is also sampled from G so that T ∩ D = ∅, where the pairs {ui, vi} in T are drawn uniformly at random from V × V. Predictions are derived from the learned model by first computing the conditional probability of observing a link for each pair of nodes {uj, vj} in T, and then making a decision on the presence/absence of links by thresholding the predicted probability (where the threshold is tuned by cross-validation). Prediction accuracy is measured by averaging the recognition accuracies for linked and unlinked pairs in T respectively (where |T| = 10,000). In Fig. 1, the accuracy of FRFs on the test set is plotted against a growing size n of D (where 12 ≤ n ≤ 48). Interestingly, the number of training examples required for the accuracy curve to stabilize does not seem to depend at all on the overall network size. Indeed, the fastest convergence is achieved for the average-sized and the second largest networks, i.e. HepPh and AstroPh respectively. Notice how a training sample containing an extremely small percentage of node pairs is sufficient for our learning approach to converge to stable prediction accuracy. This result encourages us to think of FRFs as a convenient modeling option for the large-scale setting.

[Figure 1: Prediction accuracy of FRFs on the arXiv networks (GrQc, 5,242 nodes; HepTh, 9,877 nodes; HepPh, 12,008 nodes; AstroPh, 18,772 nodes; CondMat, 23,133 nodes) for a growing training set size. x-axis: training set size, 10 to 50; y-axis: prediction accuracy on the test set, 0.45 to 0.95.]
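For concreteness, the evaluation protocol just described, balanced accuracy over linked and unlinked pairs with a cross-validated decision threshold, can be sketched as follows (the candidate threshold grid is our own choice; the paper only says the threshold is tuned by cross-validation):

    import numpy as np

    def balanced_accuracy(y_true, p_link, threshold):
        # Average of the recognition accuracies for linked and unlinked pairs.
        y_pred = (p_link >= threshold).astype(int)
        acc_linked = np.mean(y_pred[y_true == 1] == 1)
        acc_unlinked = np.mean(y_pred[y_true == 0] == 0)
        return 0.5 * (acc_linked + acc_unlinked)

    def tune_threshold(y_val, p_val, grid=np.linspace(0.05, 0.95, 19)):
        # Pick the threshold maximizing balanced accuracy on held-out pairs.
        return max(grid, key=lambda t: balanced_accuracy(y_val, p_val, t))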
Besides assessing whether the network size affects the number of training samples needed to accurately learn FRFs, we want to evaluate the usefulness of the dependence structure involved in our model in predicting the conditional distributions of edges given their neighboring subgraphs. That is, we want to ascertain whether the effort of modeling the conditional independence structure of the overall network through the FRF formalism is justified by a suitable gain in prediction accuracy with respect to statistical models that do not focus explicitly on such dependence structure. To this aim, we compare FRFs to two popular statistical models for large-scale networks, namely the Watts–Strogatz (WS) and the Barabási–Albert (BA) models [3, 2]. The WS formalism is mainly aimed at modeling the short-diameter property often observed in real-world networks. Interestingly, the degree distribution of WS networks can be expressed in closed form in terms of two parameters, related to the average degree distribution and a network rewiring process respectively [18]. On the other hand, the BA model is aimed at explaining the emergence of power-law degree distributions, where such distributions can be expressed in terms of an adaptive parameter [19]. The parameters of both the WS and the BA model can be estimated by standard maximum-likelihood approaches and then used to predict conditional edge distributions, exploiting information from the degrees observed in the given subgraphs [20, 21]. The ER model is not considered in this group of experiments, since the involved independence assumption makes it unusable (i.e. equivalent to random guessing) for the purposes of conditional estimation tasks. On the other hand, ERG models are not suitable for application to the large-scale setting. We tried them out using edge, k-star and k-triangle statistics [4], and the tests confirmed this point. Although the prohibitive cost of fitting the models and computing the involved feature functions could in principle be overcome by sampling strategies similar to the ones we employ for FRFs, the potentials used in ERGs become numerically unstable in the large-scale setting, leading to numerical representation issues for which we are not aware of any off-the-shelf solution.

Accuracy values for the different models are reported in Table 1. FRFs dramatically outperform the other two models on all networks. Since neither the BA nor the WS model shows relevant improvements over simple random guessing, this result clearly suggests that exploiting the dependence structure involved in network edge configurations is crucial to accurately predict the presence/absence of links.

Table 1: Edge prediction results on the arXiv networks. General network statistics are also reported, where CCG and DG stand for average clustering coefficient and network diameter respectively.

    Dataset    |V|      |E|       CCG   DG    BA       FRF      WS
    AstroPh    18,772   396,160   0.63  14    50.98%   89.97%   50.14%
    CondMat    23,133   186,936   0.63  15    50.15%   91.62%   56.71%
    GrQc        5,242    28,980   0.52  17    52.57%   91.14%   53.72%
    HepPh      12,008   237,010   0.61  13    51.61%   86.57%   54.33%
    HepTh       9,877    51,971   0.47  17    58.33%   92.25%   50.30%

Graph generation. A second group of experiments is aimed at assessing whether the FRFs learned on the arXiv networks can be considered plausible models of the degree distribution (DD) and the clustering coefficient distribution (CC) observed in each network [15]. To this aim, we use the estimated FRF models to generate artificial graphs of various sizes, using Gibbs sampling, and then we compare the DD and CC observed in the artificial graphs with those estimated on the whole networks. For scale-free networks such as the ones considered here, the BA model is known to be the most accurate model currently available with respect to DD. On the other hand, for CC both BA and WS are known to be more realistic models than ER random graphs. Therefore, we compare the graphs generated by FRFs to those generated by the BA, ER, and WS models for the same networks.
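The paper does not spell out the Gibbs sampling schedule used for generation, so the following is only a generic sketch: a systematic-sweep Gibbs sampler over the edge variables, where cond_prob stands in for whatever conditional the learned FRF induces (replaced here by a placeholder), and the initialization and sweep count are our own choices.

    import numpy as np

    def gibbs_generate(n, cond_prob, sweeps, rng):
        # Generate an n-node graph by Gibbs sampling the edge variables X_uv.
        # cond_prob(A, u, v) must return P(X_uv = 1 | rest), e.g. an FRF
        # conditional computed from the Fiedler delta of a neighboring subgraph.
        A = np.zeros((n, n))
        for _ in range(sweeps):
            for u in range(n):
                for v in range(u + 1, n):
                    p = cond_prob(A, u, v)
                    A[u, v] = A[v, u] = float(rng.random() < p)
        return A

    # Example with a trivial conditional (independent edges, p = 0.1):
    rng = np.random.default_rng(0)
    G = gibbs_generate(20, lambda A, u, v: 0.1, sweeps=5, rng=rng)
    print(G.sum() / 2)   # number of edges drawn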
The distance in DD and CC between the artificial graphs on the one hand and the corresponding real network on the other hand is measured using the Kolmogorov-Smirnov D-statistic, following common use in graph mining research [15]. Here we only plot results for the CondMat and HepTh networks, noticing that the results we collected on the other arXiv networks lend themselves to the same interpretation as the ones displayed in Fig. 2. Values are averaged over 100 samples for each considered graph size, where the standard deviation is typically in the order of 10⁻². The outcome motivates the following considerations. Concerning DD, FRFs are able to improve (at least slightly) on the accuracy of the state-of-the-art BA model, while they are very close to that model with respect to the clustering coefficient. In all cases, both BA and FRFs prove to be far more accurate than ER or WS, where the only advantage of using WS is limited to improving CC over ER. These results are particularly encouraging, since they show how the nonparametric approach motivating the FRF model allows us to accurately estimate network properties (such as DD) that are not aimed for explicitly in the model design. This suggests that the Fiedler delta statistic is a promising direction for building generative models capable of capturing different network properties through a unified approach.

[Figure 2: D-statistic values for DD and CC on the CondMat (a–b) and HepTh (c–d) networks. Each panel plots the D-statistic of the BA, ER, FRF, and WS models against artificial graph size (40 to 160 nodes).]

5 Conclusions and future work

The main motivation inspiring this work was the observation that statistical modeling of networks cries for genuinely nonparametric estimation, because of the inaccuracy often resulting from unwarranted parametric assumptions. In this respect, we showed how the Fiedler delta statistic offers a powerful building block for designing a nonparametric estimator, which we developed in the form of the FRF model. Since here we only applied FRFs to collaboration networks, which are typically scale-free, an important option for future work is to assess the flexibility of FRFs in modeling networks from different families. In the second place, since we only addressed the problem of learning the dependence structure of FRFs in a heuristic way, a stimulating direction for further research consists in designing clever techniques for learning the structure of FRFs, e.g. considering the use of alternative subgraph sampling techniques. Finally, we would like to assess the possibility of modeling networks through mixtures of FRFs, so as to fit different network regions (with possibly conflicting properties) through specialized components of the mixture.

Acknowledgments

This work has been supported by the French National Research Agency (ANR-09-EMER-007). The authors are grateful to Gemma Garriga, Rémi Gilleron, Liva Ralaivola, and Michal Valko for their useful suggestions and comments.

References

[1] P. Erdős and A. Rényi, "On Random Graphs, I," Publicationes Mathematicae Debrecen, vol. 6, pp. 290–297, 1959.

[2] A.-L. Barabási and R. Albert, "Emergence of scaling in random networks," Science, vol. 286, pp.
509–512, 1999.

[3] D. J. Watts and S. H. Strogatz, "Collective dynamics of 'small-world' networks," Nature, vol. 393, pp. 440–442, 1998.

[4] T. A. B. Snijders, P. E. Pattison, G. L. Robins, and M. S. Handcock, "New Specifications for Exponential Random Graph Models," Sociological Methodology, vol. 36, pp. 99–153, 2006.

[5] J. Leskovec, D. Chakrabarti, J. Kleinberg, C. Faloutsos, and Z. Ghahramani, "Kronecker graphs: An approach to modeling networks," Journal of Machine Learning Research, vol. 11, pp. 985–1042, 2010.

[6] M. Fiedler, "Algebraic connectivity of graphs," Czechoslovak Mathematical Journal, vol. 23, pp. 298–305, 1973.

[7] B. Mohar, "The Laplacian Spectrum of Graphs," in Graph Theory, Combinatorics, and Applications (Y. Alavi, G. Chartrand, O. R. Oellermann, and A. J. Schwenk, eds.), pp. 871–898, Wiley, 1991.

[8] W. N. Anderson and T. D. Morley, "Eigenvalues of the Laplacian of a graph," Linear and Multilinear Algebra, vol. 18, pp. 141–145, 1985.

[9] D. M. Cvetković, M. Doob, and H. Sachs, eds., Spectra of Graphs: Theory and Application. New York (NY): Academic Press, 1979.

[10] O. Frank and D. Strauss, "Markov Graphs," Journal of the American Statistical Association, vol. 81, pp. 832–842, 1986.

[11] J. Besag, "Spatial Interaction and the Statistical Analysis of Lattice Systems," Journal of the Royal Statistical Society, Series B, vol. 36, pp. 192–236, 1974.

[12] K. Hornik, "Approximation capabilities of multilayer feedforward networks," Neural Networks, vol. 4, no. 2, pp. 251–257, 1991.

[13] G. E. Hinton, "Training Products of Experts by Minimizing Contrastive Divergence," Neural Computation, vol. 14, no. 8, pp. 1771–1800, 2002.

[14] M. Á. Carreira-Perpiñán and G. E. Hinton, "On Contrastive Divergence Learning," in Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS 2005), pp. 33–40, 2005.

[15] J. Leskovec and C. Faloutsos, "Sampling from large graphs," in Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2006), pp. 631–636, 2006.

[16] E. D. Kolaczyk, Statistical Analysis of Network Data: Methods and Models. New York (NY): Springer, 2009.

[17] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, eds., Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. Philadelphia (PA): SIAM, 2000.

[18] A. Barrat and M. Weigt, "On the properties of small-world network models," The European Physical Journal B, vol. 13, pp. 547–560, 2000.

[19] R. Albert and A.-L. Barabási, "Statistical mechanics of complex networks," Reviews of Modern Physics, vol. 74, pp. 47–97, 2002.

[20] M. E. J. Newman, "Clustering and preferential attachment in growing networks," Physical Review E, vol. 64, p. 025102, 2001.

[21] A. Barabási, H. Jeong, Z. Néda, E. Ravasz, A. Schubert, and T. Vicsek, "Evolution of the social network of scientific collaborations," Physica A, vol. 311, pp. 590–614, 2002.
A systematic approach to extracting semantic information from functional MRI data

Francisco Pereira
Siemens Corporation, Corporate Technology
Princeton, NJ 08540
[email protected]

Matthew Botvinick
Princeton Neuroscience Institute and Department of Psychology
Princeton University
Princeton, NJ 08540
[email protected]

Abstract

This paper introduces a novel classification method for functional magnetic resonance imaging datasets with tens of classes. The method is designed to make predictions using information from as many brain locations as possible, instead of resorting to feature selection, and does this by decomposing the pattern of brain activation into differently informative sub-regions. We provide results over a complex semantic processing dataset showing that the method is competitive with state-of-the-art feature selection, and also suggest how the method may be used to perform group or exploratory analyses of complex class structure.

1 Introduction

Functional Magnetic Resonance Imaging (fMRI) is a technique used in psychological experiments to measure the blood oxygenation level throughout the brain, which is a proxy for neural activity; this measurement is called brain activation. The data resulting from such an experiment is a 3D grid of cells named voxels covering the brain (on the order of tens of thousands, usually), measured over time as tasks are performed, thus yielding one time series per voxel (collected every 1-2 seconds and yielding hundreds to thousands of points). In a typical experiment, brain activation is measured during a task of interest, e.g. reading words, and during a related control condition, e.g. reading nonsense words, with the goal of identifying brain locations where the two differ. The most common analysis technique for doing this, statistical parametric mapping [4], tests each voxel individually by regressing its time series on a predicted time series determined by the task contrast of interest. This fit is scored and thresholded at a given statistical significance level to yield a brain image with clusters of voxels that respond very differently to the two tasks (colloquially, these are the images that show parts of the brain that "light up"). Note, however, that for both tasks there are many other processes taking place in tandem with this task-contrasting activation: visual processing to read the words, attentional processing due to task demands, etc. The output of this process for a given experiment is a set of 3D coordinates of all the voxel clusters that appear reliably across all the subjects in a study. This result is easy to interpret, since there is a lot of information about what processes each brain area may be involved in. The coordinates are comparable across studies, and thus result reproducibility is also an expectation.

In recent years, there has been increasing awareness of the fact that there is information in the entire pattern of brain activation and not just in saliently active locations. Classifiers have been the tool of choice for capturing this information, and have been used to make predictions ranging from what stimulus a subject is seeing to what kind of object they are thinking about or what decision they will make [12] [14] [8]. The most common situation is to have an example correspond to the average brain image during one or a few performances of the task of interest, with voxels as the features, and we will discuss various issues with this scenario in mind.
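As a rough illustration of the voxelwise fitting step described above, here is a deliberately simplified stand-in (ours, assuming scipy is available): real statistical parametric mapping uses a general linear model with a hemodynamic response function and nuisance regressors, whereas this sketch just scores the correlation of each voxel's time series with a predicted task time series.

    import numpy as np
    from scipy import stats

    def voxelwise_task_fit(data, regressor):
        # data: (T, n_voxels) voxel time series; regressor: (T,) predicted
        # task response. Returns per-voxel correlations and p-values.
        T, n_vox = data.shape
        r = np.array([stats.pearsonr(data[:, j], regressor)[0]
                      for j in range(n_vox)])
        t = r * np.sqrt((T - 2) / (1 - r ** 2))
        p = 2 * stats.t.sf(np.abs(t), df=T - 2)
        return r, p

    # Thresholding p at a significance level yields the clusters that "light up".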
The goal of this work is generally not (just) classification accuracy per se, even in diagnostic applications, but understanding where the information used to classify is present. If only two conditions are being contrasted, this is relatively straightforward, as information is, at its simplest, a difference in activation of a voxel in the two conditions. It is thus possible to look at the magnitudes of the weights a classifier puts on voxels across the brain and locate the voxels with the largest weights(1); given that there are typically two to three orders of magnitude more voxels than examples, though, classifiers are usually trained on a selection of voxels rather than the entire activation pattern. Often, this means the best accuracy is obtained using few voxels, from all across the brain, and that different voxels will be chosen in different cross-validation folds; this presents a problem for interpretability of the locations in question. One approach to this problem is to regularize classifiers so that they include as many informative voxels as possible [2], thus identifying localizable clusters of voxels that may overlap across folds. A different approach is to cross-validate classifiers over small sections of the grid covering the brain, known as searchlights [10]. This can be used to produce a map of the cross-validated accuracy in the searchlight around each voxel, taking advantage of the pattern of activation across all the voxels contained in it. Such a map can then be thresholded to leave only locations where accuracy is significantly above chance.

While these approaches have been used successfully many times over the last decade, they will become progressively less useful in the face of the increasing commonality of datasets with tens to hundreds of stimuli, and a correspondingly high number of experimental conditions. Knowing the location of a voxel does not suffice to interpret what it is doing, as it could be very different from stimulus to stimulus (rather than just active or not, as in the two-condition situation). It is also likely that no small brain region will allow for a searchlight classifier capable of distinguishing between all possible conditions at the spatial resolution of fMRI; hence defining a searchlight size or shape is a trade-off: including more voxels makes it harder to locate information or to train a classifier (the number of features increases while the number of examples remains constant), whereas excluding voxels reduces the number of distinctions that can be made. This paper introduces a method to address all of these issues while still yielding an interpretable, whole-brain classifier. The method starts by learning how to decompose the pattern of activation across the brain into sub-patterns of activation; then it learns a whole-brain classifier in terms of the presence and absence of certain sub-patterns; finally, it combines the classifier and pattern information to generate brain maps indicating which voxels belong to informative patterns and what kind of information they contain. This method is partially based on the notion of pattern feature introduced in an earlier paper by us [15], but has been developed much further so as to dispense with most parameters and allow the creation of spatial maps usable for group or exploratory analyses, as will be discussed later.

2 Data and Methods

2.1 Data

The grid covering the brain contains on the order of tens of thousands of voxels, measured over time as tasks are performed, every 1-2 seconds, yielding hundreds to thousands of 3D images per experiment. During an experiment a given task is performed a certain number of times (trials), and often the images collected during one trial are collapsed or averaged together, giving us one 3D image that can be clearly labeled with what happened in that trial, e.g. what stimulus was being seen or what decision a subject made. Although the grid covers the entire head, only a fraction of its voxels contain cortex in a typical subject; hence we only consider these voxels as features.

(1) Interpretation is more complicated if nonlinear classifiers are being used [6], [17], but this is far less common.
2 2.1 Data and Methods Data The grid covering the brain contains on the order of tens of thousands voxels, measured over time as tasks are performed, every 1-2 seconds, yielding hundreds to thousands of 3D images per experiment. During an experiment a given task is performed a certain number of times ? trials ? and often the images collected during one trial are collapsed or averaged together, giving us one 3D image that can be clearly labeled with what happened in that trial, e.g. what stimulus was being seen or what decision a subject made. Although the grid covers the entire head, only a fraction of its voxels contain cortex in a typical subject; hence we only consider these voxels as features. 1 Interpretation is more complicated if nonlinear classifiers are being used [6], [17], but this is far less common 2 A searchlight is a small section of the 3D grid, in our case a 27 = 3 ? 3 ? 3 voxel cube. Analyses using searchlights generally entail computing a statistic [10] or cross-validating a classifier over the dataset containing just those voxels [16], and do so for the searchlight around each voxel in the brain, covering it in its entirety. The intuition for this is that individual voxels are very noisy features, and an effect observed across a group of voxels is more trustworthy. In the experiment performed to obtain our dataset 2 [13], subjects observed a word and a line drawing of an item, displayed on a screen for 3 seconds and followed by 8 seconds of a blank screen. The items named/depicted belonged to one of 12 categories: animals, body parts, buildings, building parts, clothing, furniture, insects, kitchen, man-made objects, tools, vegetables and vehicles. The experimental task was to think about the item and its properties while it was displayed. There were 5 different exemplars of each of the 12 categories and 6 experimental epochs. In each epoch all 60 exemplars were shown in random order without repetition, and all epochs had the same exemplars. During an experiment the task repeated a total of 360 times, and a 3D image of the fMRI-measured brain activation acquired every second. Each example for classification purposes is the average image during a 4 second span while the subject was thinking about the item shown a few seconds earlier (a period which contains the peak of the signal during the trial; the dataset thus contains 360 examples, as many as there were trials. The voxel size was 3 ? 3 ? 5 mm, with the number of voxels being between 20000 and 21000 depending on which of the 9 subjects was considered. The features in each example are voxels, and the example labels are the category of the item being shown in the trial each example came from. 1 2 for each classi?cation task, cross-validate a classi?er in all of the searchlights searchlight: - a 3x3x3 voxel cube - one centered around each voxel in cortex - overlapping test the result at each searchlight, which yields a binary signi?cance image e.g. animals vs insects searchlight accuracy 0.54 0.76 0.61 0.83 0.55 0.46 0.90 3 image as a vector of voxels result signi?cant this is done for all 66 pairwise classi?cation tasks and adjacent searchlights supporting similar pairwise distinctions are clustered together using modularity 5 animals vs insects ... animals vs tools ... ... vegetables vs vehicles ... ... ... 
[Figure 1 flowchart, summarized: (1) for each pairwise classification task, cross-validate a classifier in all of the searchlights (a 3x3x3 voxel cube, one centered around each voxel in cortex, overlapping); (2) test the accuracy result at each searchlight, yielding a binary significance image (e.g. animals vs insects); (3) do this for all 66 pairwise classification tasks; (4) rearrange the binary significance vector of each searchlight into a binary confusion matrix over categories; (5) cluster adjacent searchlights supporting similar pairwise distinctions using modularity.]

Figure 1: Construction of data-driven searchlights.

2.2 Method

The goal of the experiment our dataset comes from is to understand how a certain semantic category is represented throughout the brain (e.g. do "Insects" and "Animals" share part of their representation because both kinds of things are alive?). Intuitively, there is information in a given location if at least two categories can be distinguished by looking at their respective patterns of activation there; otherwise, the pattern of activation is noise or common to all categories. Our method is based upon this intuition, and comprises three stages:
Aggregate adjacent searchlights Examining each small searchlight makes sense if we consider that, a priori, we don?t know where the information is or how big a pattern of activation would have to be considered (with some exceptions, notably areas that respond to faces, houses or body parts, see [9] for a review). That said, if the same categories are distinguishable in two adjacent searchlights ? which overlap ? then it is reasonable to assume that all their voxels put together would still be able to make the same distinctions. Doing this repeatedly allows us to find data-driven searchlights, not bound by shape or size assumptions. At the same time we would like to constrain data-driven searchlights to the boundaries of known, large, anatomically determined regions of interest (ROI), both for computational efficiency and for interpretability, as will be described later. At the start of the aggregation process, each searchlight is by itself and has an associated binary information vector with 66 entries corresponding to which pairs of classes can be distinguished in its surrounding searchlight (part 3 of Figure 1). For each searchlight we compute the similarity of its information vector with those of all its neighbours, which yields a 3D grid similarity graph. We then take the portion of the graph corresponding to each ROI in the AAL brain atlas [19], and use modularity [1] to divide it into a number of clusters of adjacent searchlights supporting similar distinctions, as shown in panel 5 of Figure 1. After this is done for all ROIs we obtain a partition of the brain into a few hundred clusters, the data-driven searchlights. Figure 2 depicts the granularity of a typical clustering across multiple brain slices of one of the participants. The similarity measure between two P vectors vi and vj is obtained by computing the number of 1-entries present in both vectors, pairs AND(vi , vj ), the number of 1-entries present in only one P of them, pairs XOR(vi , vj ) and then the measure 4 Figure 2: Data-driven searchlights for participant P1 (brain slices range from inferior to superior). similarity(vi , vj ) = P P pairs AND(vi , vj ) ? pairs P pairs AND(vi , vj ) XOR(vi ,vj ) 2 The measure was chosen because it peaks at 1, if the two vectors match exactly, and decreases ? possibly into negative values ? if there are mismatches; it will tolerate more mismatches if there are more distinctions being made. It will also deem sparse vectors similar as long as there are vew few mismatches. The number of entries present in only one is divided by 2 so that the differences do not get twice the weight of the similarities. The centroid for each cluster encodes the pairs of categories that can be distinguished in that datadriven searchlight. The centroid is obtained by combining the binary information vectors for each of the searchlights in it using a soft-AND function, and is itself a binary information vector. A given entry is 1 ? the respective pair of categories is distinguishable ? if it is 1 in at least q% of the cluster members (where q is the false discovery rate used earlier to threshold the binary image for that pair of categories). 2.2.2 Generation of pattern features from each data-driven searchlight voxels examples 1 clusters (across class pairs) clusters (across all examples) training data singular vectors pattern features SVD cluster 1 ... 3 animals vs insects animals vs tools vegetables vs vehicles 2 cluster 2 ... cluster 3 body parts vs buildings animals vs insects ... 
2.2.2 Generation of pattern features from each data-driven searchlight

Figure 3: Construction of pattern detectors and pattern features from data-driven searchlights.

Construct two-way classifiers from each data-driven searchlight. Each data-driven searchlight has a set of pairs of categories that can be distinguished in it. This indicates that there are particular patterns of activation across its voxels which are characteristic of one or more categories, and absent in others. We can leverage this to convert the pattern of activation across the brain into a series of sub-patterns, one from each data-driven searchlight. For each data-driven searchlight, and for each pairwise category distinction in its information vector, we train a classifier using examples of the two categories and just the voxels in the searchlight (a linear SVM with λ = 1, [3]); these will be pattern detectors, outputting a probability estimate for the prediction (which we transform to the [−1, 1] range), as shown in part 1 of Figure 3.

Use two-way classifiers to generate pattern features. The set of pattern detectors learned from each data-driven searchlight can be applied to any example, not just the ones from the categories that were used to learn them. The output of each pattern detector is then viewed as representing the degree to which the detector thinks that either of the patterns it is sensitive to is present. For each data-driven searchlight, we apply all of its detectors to all the examples in the training set, over the voxels belonging to the searchlight, as illustrated in part 2 of Figure 3. The output of each detector across all examples becomes a new, synthetic pattern feature. The number of these pattern features varies per searchlight, as does the number of searchlights per subject, but at the end we will typically have between 10K and 20K of them.

Note that there may be multiple classifiers for a given cluster which produce very similar outputs (e.g. one that captured a pattern present in all animate object categories versus one present in all inanimate object ones); these will be highly correlated and redundant. We address this by using Singular Value Decomposition (SVD, [7]) to reduce the dimensionality of the matrix of pattern features to the same as the number of examples (180), keeping all singular vectors; this is shown in part 3 of Figure 3. The detectors and the SVD transformation matrix learned from the training set are also applied to the test set. A sketch of this feature-generation step follows.
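The following Python sketch shows the detector-and-feature construction under stated assumptions: we map the paper's λ = 1 to C = 1 in scikit-learn's SVC, the helper names are hypothetical, and the SVD comment only outlines the reduction step.

```python
import numpy as np
from sklearn.svm import SVC

def fit_pattern_detectors(X_train, y_train, cluster_voxels, centroid, pair_labels):
    """Train one linear-SVM pattern detector per distinguishable category
    pair of one data-driven searchlight (centroid entry == 1)."""
    detectors = []
    for k, (a, b) in enumerate(pair_labels):
        if centroid[k] != 1:
            continue
        keep = np.isin(y_train, [a, b])
        clf = SVC(kernel="linear", C=1.0, probability=True)
        clf.fit(X_train[np.ix_(keep, cluster_voxels)], y_train[keep])
        detectors.append(clf)
    return detectors

def pattern_features(X, cluster_voxels, detectors):
    """Apply every detector of one searchlight to all examples, mapping
    the class probability to the [-1, 1] range."""
    cols = [2.0 * d.predict_proba(X[:, cluster_voxels])[:, 1] - 1.0
            for d in detectors]
    return np.column_stack(cols)

# Redundant detectors are then compressed with SVD, keeping n_examples
# components: with F of shape (n_examples, n_features),
#   U, S, Vt = np.linalg.svd(F, full_matrices=False)
# U * S is the reduced training representation, and Vt maps test-set
# pattern features into the same space.
```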
2.2.3 Classification and impact maps for each class

Figure 4: The process of going from the weights of a one-versus-rest category classifier over a low-dimensional pattern feature representation to the impact of each voxel in that classification.

Given the low-dimensional pattern feature dataset, we train a one-versus-rest classifier (a linear SVM with λ = 1, [3]) for each category; these are then applied to each example in the test set, with the label prediction corresponding to the class with the highest class probability.

The classifiers can also be used to determine the extent to which each data-driven searchlight was responsible for correctly predicting each class. A one-versus-rest category classifier consists of a vector of 180 weights, which can be converted into an equivalent classifier over pattern features by inverting the SVD, as shown in part 1 of Figure 4. The impact of each pattern feature in correctly predicting this category can be calculated by multiplying each weight by the values taken by the corresponding pattern feature over examples in the category, and averaging across all examples; this is shown in part 2 of Figure 4. These pattern-feature impact values can then be aggregated by the data-driven searchlight they came from, yielding a net impact value for that searchlight. This is the value that is propagated to each voxel in the data-driven searchlight (part 3 of Figure 4) in order to generate an impact map (a code sketch of this computation is given in Section 3.2 below).

3 Experiments and Discussion

3.1 Classification

Our goal in this experiment is to determine whether transforming the data from voxel features to pattern features preserves information, and how competitive the results are with a classifier combined with voxel selection. In all experiments we use a split-half cross-validation loop, where the halves contain examples from even and odd epochs, respectively, 180 examples in each (15 per category). If cross-validation inside a split-half training set is required, we use leave-one-epoch-out cross-validation.

Baseline. We contrasted experimental results obtained with our method with a baseline of classification using voxel selection. The scoring criterion used to rank each voxel was the accuracy of an LDA classifier (same as described above) using the 3 × 3 × 3 searchlight around each voxel to do 12-category classification. The number of voxels to use was selected by nested cross-validation inside the training set (possible choices were 50, 100, 200, 400, 800, 1200, 1600, 2000, 4000, 8000, 16000 or all voxels). The classifier used was a linear SVM (λ = 1, [3]), same as the whole-brain classifier in our method.

Results. The results are shown in the first line of Table 1; across subjects, our method is better than voxel selection, with the p-value of a sign-test of this being < 0.01. It is substantially better than a classifier using all the voxels in the brain directly. Whereas the accuracy is above chance (0.08) for all subjects, it is rather low for some. There are at least two factors responsible for this. The first is that some classes give rise to very similar patterns of activation (e.g. "buildings" and "building parts"), and hence examples in these classes are confusable (confusion matrices bear this out). The second factor is that subjects vary in their ability to stay focused on the task and avoid stray thoughts or remembering other parts of the experiment; hence examples may not belong to the class corresponding to the label, or even to any class at all. [13] also points out that accuracy is correlated with a subject's ability to stay still during the experiment.

Table 1: Classification accuracy for the 9 subjects using our method, as well as two baselines.

                              P1    P2    P3    P4    P5    P6    P7    P8    P9
our method                    0.54  0.34  0.33  0.42  0.15  0.19  0.22  0.21  0.16
baseline (voxel selection)    0.53  0.33  0.24  0.34  0.14  0.16  0.21  0.20  0.15
baseline (using all voxels)   0.31  0.21  0.19  0.27  0.13  0.09  0.14  0.13  0.15
#voxels selected (fold 1)     1200  400   200   1600  800   800   800   400   2000
#voxels selected (fold 2)     800   200   100   800   50    8000  100   1200  100

3.2 Impact maps

Figure 5: Average example for categories "tool" and "building" in participant P1 (slices ordered from inferior to superior; red is activation above the image mean, blue below).
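Before examining the maps themselves, here is a minimal sketch of the impact computation from Section 2.2.3. We assume the pattern-feature matrix F was reduced as F ≈ U S Vt with the classifier trained on U S, so its weights map back through Vt; all argument names are hypothetical.

```python
import numpy as np

def impact_map(w_svd, Vt, F, cat_mask, feature_cluster, n_clusters, cluster_voxels):
    """Voxelwise impact of a one-vs-rest classifier for one category.

    w_svd: (180,) classifier weights over singular-vector features.
    Vt: (180, n_features) SVD right factor; inverts the reduction.
    F: (n_examples, n_features) pattern features; cat_mask selects the
       examples of the category of interest.
    feature_cluster: (n_features,) cluster id of each pattern feature.
    cluster_voxels: list mapping cluster id -> voxel indices.
    """
    w_feat = w_svd @ Vt                              # weights over pattern features
    # impact of a pattern feature: weight * its mean value in the category
    feat_impact = w_feat * F[cat_mask].mean(axis=0)
    # aggregate per data-driven searchlight
    cluster_impact = np.bincount(feature_cluster, weights=feat_impact,
                                 minlength=n_clusters)
    # propagate each cluster's net impact to its voxels
    n_voxels = max(v.max() for v in cluster_voxels) + 1
    vox = np.zeros(n_voxels)
    for c in range(n_clusters):
        vox[cluster_voxels[c]] = cluster_impact[c]
    return vox
```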
As described in Section 2.2.3, an impact map can be produced for each category, showing the extent to which each data-driven searchlight helped classify that category correctly. In order to understand better how impact works, consider two categories, "tools" and "buildings", where we know where some of the information resides (for "tools" around the central sulcus, visible on the right of the slices to the right; for "buildings" around the parahippocampal gyrus, visible on the lower side of the slices to the left). Figure 5 shows the average example for the two categories; note how similar the two examples are across the slices, indicating that most activation is shared between the two categories. The impact maps for the same participant in Figure 6 show that much of the common activation is eliminated, and that the areas known to be informative are assigned high impact in their respective maps.

Figure 6: Impact map for categories "tool" and "building" in participant P1.

Figure 7: Average impact map for categories "tool" and "building" across the nine participants.

Impact is positive regardless of whether activation in each voxel involved is above or below the mean of the image; the activation of each voxel influences the classifier only in the context of its neighbours in each data-driven searchlight. Note, also, that unlike a simple one-vs-rest classifier or searchlight map, the notion of impact can accommodate the situation where the same location is useful, with either different or the same pattern of activation, for two separate classes (rather than have it be downweighted relative to others that might be unique to that particular class). Finally, consider that impact maps can be averaged across subjects, as shown in Figure 7, or undergo t-tests or a more complex second-level group analysis. A more exploratory analysis can be performed by considering locations that are high impact for every participant and, through their data-driven searchlights, examining the corresponding cluster centroids to get a complete picture of which subsets of the classes can be distinguished there (similar to the bottom-up process in part 5 of Figure 1, but now done top-down and given a cross-validated classification result and impact value).

References

[1] V.D. Blondel, J.L. Guillaume, R. Lambiotte, and E. Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, (10):1-12, 2008.
[2] Melissa K. Carroll, Guillermo A. Cecchi, Irina Rish, Rahul Garg, and A. Ravishankar Rao. Prediction and interpretation of distributed neural activity with sparse models. NeuroImage, 44(1):112-22, January 2009.
[3] C.C. Chang and C.J. Lin. LIBSVM: a library for support vector machines. Technical report, 2001.
[4] Karl J. Friston, John Ashburner, Stefan J. Kiebel, Thomas E. Nichols, and W.D. Penny. Statistical Parametric Mapping: The Analysis of Functional Brain Images. Academic Press, 2006.
[5] Christopher R. Genovese, Nicole A. Lazar, and Thomas Nichols. Thresholding of statistical maps in functional neuroimaging using the false discovery rate. NeuroImage, 15(4):870-8, 2002.
[6] Stephen José Hanson, Toshihiko Matsuka, and James V. Haxby. Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: is there a "face" area? NeuroImage, 23(1):156-66, 2004.
[7] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer-Verlag, 2001.
[8] J. Haynes and G. Rees. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience, 7(7):523-34, 2006.
[9] Marcel Adam Just, Vladimir L. Cherkassky, Sandesh Aryal, and Tom M. Mitchell. A neurosemantic theory of concrete noun representation based on the underlying brain codes. PLoS ONE, 5(1):e8622, 2010.
[10] N. Kriegeskorte, R. Goebel, and P. Bandettini. Information-based functional brain mapping. Proceedings of the National Academy of Sciences, 103(10):3863, 2006.
[11] John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273-306, 2005.
[12] T.M. Mitchell, R. Hutchinson, R.S. Niculescu, F. Pereira, X. Wang, M. Just, and S. Newman. Learning to decode cognitive states from brain images. Machine Learning, 57(1/2):145-175, October 2004.
[13] T.M. Mitchell, S.V. Shinkareva, A. Carlson, K. Chang, V.L. Malave, R.A. Mason, and M.A. Just. Predicting human brain activity associated with the meanings of nouns. Science, 320(5880):1191-5, 2008.
[14] K.A. Norman, S.M. Polyn, G.J. Detre, and J.V. Haxby. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9):424-30, 2006.
[15] F. Pereira and M. Botvinick. Classification of functional magnetic resonance imaging data using informative pattern features. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '11), page 940, 2011.
[16] F. Pereira and M. Botvinick. Information mapping with pattern classifiers: a comparative study. NeuroImage, 56(2):835-850, 2011.
[17] Peter Mondrup Rasmussen, Kristoffer Hougaard Madsen, Torben Ellegaard Lund, and Lars Kai Hansen. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps. NeuroImage, 55(3):1120-31, April 2011.
[18] Juliane Schäfer and Korbinian Strimmer. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology, 4:Article32, January 2005.
[19] N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou, F. Crivello, O. Etard, N. Delcroix, B. Mazoyer, and M. Joliot. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage, 15(1):273-89, 2002.
Bayesian models for Large-scale Hierarchical Classification

Siddharth Gopal, Bing Bai, Yiming Yang, Alexandru Niculescu-Mizil
[email protected], [email protected], {bing,alex}@nec-labs.com
Carnegie Mellon University; NEC Laboratories America, Princeton

Abstract

A challenging problem in hierarchical classification is to leverage the hierarchical relations among classes for improving classification performance. An even greater challenge is to do so in a manner that is computationally feasible for large-scale problems. This paper proposes a set of Bayesian methods to model hierarchical dependencies among class labels using multivariate logistic regression. Specifically, the parent-child relationships are modeled by placing a hierarchical prior over the children nodes, centered around the parameters of their parents, thereby encouraging classes nearby in the hierarchy to share similar model parameters. We present variational algorithms for tractable posterior inference in these models, and provide a parallel implementation that can comfortably handle large-scale problems with hundreds of thousands of dimensions and tens of thousands of classes. We run a comparative evaluation on multiple large-scale benchmark datasets that highlights the scalability of our approach and shows improved performance over the other state-of-the-art hierarchical methods.

1 Introduction

With the tremendous growth of data, providing a multi-granularity conceptual view using hierarchical classification (HC) has become increasingly important. The large taxonomies for web page categorization at the Yahoo! Directory and the Open Directory Project, and the International Patent Taxonomy, are examples of widely used hierarchies. The large hierarchical structures present both challenges and opportunities for statistical classification research. Instead of focusing on individual classes in isolation, we need to address joint training and inference based on the hierarchical dependencies among the classes. Moreover, this has to be done in a computationally efficient and scalable manner, as many real-world HC problems are characterized by large taxonomies and high dimensionality.

In this paper, we investigate a Bayesian framework for leveraging the hierarchical class structure. The Bayesian framework is a natural fit for this problem, as it can seamlessly capture the idea that the models at the lower levels of the hierarchy are specializations of models at the ancestor nodes. We define a hierarchical Bayesian model where the prior distribution for the parameters at a node is a Gaussian centered at the parameters of the parent node. This prior encourages the parameters of nodes that are close in the hierarchy to be similar, thereby enabling propagation of information across the hierarchical structure and leading to inductive transfer (sharing statistical strength) among the models corresponding to the different nodes. The strength of the Gaussian prior, and hence the amount of information sharing between nodes, is controlled by its covariance parameter, which is also learned from the data.

Modelling the covariance structures gives us the flexibility to incorporate different ways of sharing information in the hierarchy. For example, consider a hierarchical organization of all animals with two sub-topics, mammals and birds. By placing feature-specific variances, the model can learn that the sub-topic parameters are more similar along common features like 'eyes' and 'claw', and less similar in other sub-topic-specific features like 'feathers' and 'tail', etc.
As another example, the model can incorporate children-specific covariances that allow some sub-topic parameters to be less similar to their parent and some to be more similar; e.g., the sub-topic whales is quite distinct from its parent mammals compared to its siblings felines and primates. Formulating such constraints in non-Bayesian large-margin approaches is not as easy, and to our knowledge has not been done before in the context of hierarchical classification. Other advantages of a fully Bayesian treatment are that there is no reliance on cross-validation, the outputs have a probabilistic interpretation, and it is easy to incorporate prior domain knowledge.

Our approach shares similarity with the correlated Multinomial logit [18] (corrMNL) in taking a Bayesian approach to model the hierarchical class structure, but improves over it in two significant aspects: scalability and the setting of hyperparameters. Firstly, corrMNL uses slower MCMC sampling for inference, making it difficult to scale to problems with more than a few hundred features and a few hundred nodes in the hierarchy. By modelling the problem as a Hierarchical Bayesian Logistic Regression (HBLR), we are able to vastly improve the scalability by 1) developing variational methods for faster inference, 2) introducing even faster algorithms (partial MAP) to approximate the variational inference at an insignificant cost in classification accuracy, and 3) parallelizing the inference. The approximate variational inference (1 plus 2) reduces the computation time by several orders of magnitude (750x) over MCMC, and the parallel implementation in a Hadoop cluster [4] further improves the time almost linearly in the number of processors. These enabled us to comfortably conduct joint posterior inference for hierarchical logistic regression models with tens of thousands of categories and hundreds of thousands of features.

Secondly, a difficulty with Bayesian approaches, which has been largely side-stepped in [18], is that, when expressed in full generality, they leave many hyperparameters open to subjective input from the user. Typically, these hyperparameters need to be set carefully, as they control the amount of regularization in the model, and traditional techniques such as Empirical Bayes or cross-validation encounter difficulties in achieving this. For instance, Empirical Bayes requires the maximization of the marginal likelihood, which is difficult to compute in hierarchical logistic models [9] in general, and cross-validation requires reducing the number of free parameters for computational reasons, potentially losing the flexibility to capture the desired phenomena. In contrast, we propose a principled way to set the hyperparameters directly from data using an approximation to the observed Fisher information matrix. Our proposed technique can be easily used to set a large number of hyperparameters without losing model tractability and flexibility.

To evaluate the proposed techniques, we run a comprehensive empirical study on several large-scale hierarchical classification problems. The results show that our approach is able to leverage the class hierarchy and obtain a significant performance boost over leading non-Bayesian hierarchical classification methods, as well as consistently outperform flat methods that do not use the hierarchy information.

Other Related Work: Most of the previous work in HC has primarily used large-margin discriminative methods.
Some of the early works in HC [10, 14] use the hierarchical structure to decompose the classification problem into sub-problems recursively along the hierarchy and allocate a classifier at each node. The hierarchy is used to partition the training data into node-specific subsets, and classifiers at each node are trained independently without using the hierarchy any further. Many approaches have been proposed to better utilize the hierarchical structure. For instance, in [22, 1], the output of the lower-level classifiers was used as additional features for the instance at the top-level classifiers. Smoothing the estimated parameters in naive Bayes classifiers along each path from the root to a leaf node has been tried in [17]. [20, 6] proposed large-margin discriminative methods where the discriminant function at each node takes contributions from all nodes along the path to the root node, and the parameters are jointly learned to minimize a global loss over the hierarchy. Recently, enforcing orthogonality constraints between parent and children classifiers was shown to achieve state-of-the-art performance [23].

2 The Hierarchical Bayesian Logistic Regression (HBLR) Framework

Define a hierarchy as a set of nodes $Y = \{1, 2, \ldots\}$ with the parent relationship $\pi : Y \to Y$, where $\pi(y)$ is the parent of node $y \in Y$. Let $D = \{(x_i, t_i)\}_{i=1}^{N}$ denote the training data, where $x_i \in \mathbb{R}^d$ is an instance and $t_i \in T$ is a label, where $T \subset Y$ is the set of leaf nodes in the hierarchy, labeled from 1 to $|T|$. We assume that each instance is assigned to one of the leaf nodes in the hierarchy. Let $C_y$ be the set of all children of $y$.

For each node $y \in Y$, we associate a parameter vector $w_y$ which has a Gaussian prior. We set the mean of the prior to the parameter of the parent node, $w_{\pi(y)}$. Different constraints on the covariance matrix of the prior correspond to different ways of propagating information across the hierarchy. In what follows, we consider three alternate ways to model the covariance matrix, which we call the M1, M2 and M3 variants of HBLR. In the M1 variant, all the siblings share the same spherical covariance matrix. Formally, the generative model for M1 is

$$\text{M1:} \quad w_{\mathrm{root}} \sim N(w_0, \Sigma_0), \qquad \alpha_{\mathrm{root}} \sim \Gamma(a_0, b_0)$$
$$w_y \mid w_{\pi(y)}, \alpha_{\pi(y)} \sim N\big(w_{\pi(y)}, \Sigma_{\pi(y)}\big) \;\; \forall y, \qquad \alpha_y \sim \Gamma(a_y, b_y) \;\; \forall y \notin T$$
$$t \mid x \sim \mathrm{Multinomial}\big(p_1(x), p_2(x), \ldots, p_{|T|}(x)\big) \;\; \forall (x, t) \in D, \qquad p_i(x) = \frac{\exp(w_i^\top x)}{\sum_{t' \in T} \exp(w_{t'}^\top x)} \qquad (1)$$

The parameters of the root node are drawn using user-specified parameters $w_0, \Sigma_0, a_0, b_0$. Each non-leaf node $y \notin T$ has its own $\alpha_y$ drawn from a Gamma with the shape and inverse-scale parameters specified by $a_y$ and $b_y$. Each $w_y$ is drawn from the Normal with mean $w_{\pi(y)}$ and covariance matrix $\Sigma_{\pi(y)} = \alpha_{\pi(y)}^{-1} I$. The class labels are drawn from a Multinomial whose parameters are a soft-max transformation of the $w_y$'s at the leaf nodes. This model leverages the class hierarchy information by encouraging the parameters of closely related nodes (parents, children and siblings) to be more similar to each other than those of distant ones in the hierarchy. Moreover, by using a different inverse-variance parameter $\alpha_y$ for each node, the model has the flexibility to adapt the degree of similarity between the parameters (i.e. parent and children nodes) on a per-family basis. For instance, it can learn that sibling nodes which are higher in the hierarchy (e.g. mammals and birds) are generally less similar compared to sibling nodes lower in the hierarchy (e.g. chimps and orangutans). A small sampling sketch of this generative process is given below.
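The following Python sketch performs ancestral sampling from the M1 generative model in (1). It is not the authors' code: the tree representation, all function names, and the shape/inverse-scale convention (NumPy's Gamma takes a scale, so we pass 1/b) are assumptions of ours.

```python
import numpy as np

def sample_m1(tree, d, w0, sigma0, a, b, rng):
    """Ancestral sampling from M1. tree maps each node to its children
    (leaves are the classes T); a, b map non-leaf nodes to Gamma
    hyperparameters; rng is a numpy Generator."""
    w = {"root": rng.multivariate_normal(w0, sigma0)}
    alpha = {"root": rng.gamma(a["root"], 1.0 / b["root"])}  # scale = 1/b
    stack = ["root"]
    while stack:
        y = stack.pop()
        for c in tree.get(y, []):
            # child weights centered at the parent, spherical covariance
            w[c] = rng.multivariate_normal(w[y], np.eye(d) / alpha[y])
            if tree.get(c):                  # non-leaf: draw its own alpha
                alpha[c] = rng.gamma(a[c], 1.0 / b[c])
                stack.append(c)
    return w

def class_probs(w_leaves, x):
    """Soft-max over leaf scores, as in p_i(x) of equation (1)."""
    scores = np.array([wl @ x for wl in w_leaves])
    scores -= scores.max()                   # numerical stability
    p = np.exp(scores)
    return p / p.sum()
```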
Although this model is equivalent to the corrMNL proposed in [18], the hierarchical logistic regression formulation is different from corrMNL and has a distinct advantage: the parameters can be decoupled. As we shall see in Section 3, this enables the use of scalable and parallelizable variational inference algorithms. In contrast, in corrMNL the soft-max parameters are modeled as a sum of contributions along the path from a leaf to the root node. This introduces two layers of dependencies between the parameters in the corrMNL model (inside the normalization constant, as well as along the path from the leaves to the root node), which makes it less amenable to efficient variational inference. Even if one were to develop a variational approach for the corrMNL parameterization, it would be slower and not efficient for parallelization.

Although the M1 approach is rational, one may argue that it would be beneficial to allow the diagonal elements of the covariance matrix $\Sigma_{\pi(y)}$ to be feature-specific instead of uniform. In our previous example with sub-topics mammals and birds, we may want $w_{\mathrm{mammals}}$ and $w_{\mathrm{birds}}$ to be commonly close to their parent in some dimensions (e.g., in some common features like 'eyes', 'breathe' and 'claw') but not in other dimensions (e.g., in bird-specific features like 'feathers' or 'beak'). We accommodate this by replacing the prior $\alpha_y$ with a per-feature $\alpha_y^{(i)}$ for every feature $i$. This form of setting the prior is referred to as Automatic Relevance Determination (ARD) and forms the basis of several works such as Sparse Bayesian Learning [19], Relevance Vector Machines [3], etc. For the HC problem, we define the M2 variant of the HBLR approach as:

$$\text{M2:} \quad w_y \mid w_{\pi(y)}, \alpha_{\pi(y)} \sim N\big(w_{\pi(y)}, \Sigma_{\pi(y)}\big) \;\; \forall y, \qquad \alpha_y^{(i)} \sim \Gamma\big(a_y^{(i)}, b_y^{(i)}\big), \; i = 1, \ldots, d, \;\; \forall y \notin T$$
$$\text{where } \Sigma_{\pi(y)}^{-1} = \mathrm{diag}\big(\alpha_{\pi(y)}^{(1)}, \alpha_{\pi(y)}^{(2)}, \ldots, \alpha_{\pi(y)}^{(d)}\big)$$

Yet another extension of the M1 model would be to allow each node to have its own covariance matrix for the Gaussian prior over $w_y$, not shared with its siblings. This enables the model to learn how much the individual children nodes differ from the parent node. For example, consider the topic mammals and its two sub-topics whales and carnivores; the sub-topic whales is very distinct from a typical mammal and is more of an 'outlier' topic. Such mismatches are very typical in hierarchies, especially in cases where there is not enough training data and an entire subtree of topics is collapsed into a single node. M3 aims to cope with such differences:

$$\text{M3:} \quad w_y \mid w_{\pi(y)}, \alpha_y \sim N\big(w_{\pi(y)}, \Sigma_y\big) \;\; \forall y, \qquad \alpha_y \sim \Gamma(a_y, b_y) \;\; \forall y \notin T$$

Note that the only difference between M3 and M1 is that M3 uses $\Sigma_y = \alpha_y^{-1} I$ instead of $\Sigma_{\pi(y)}$ in the prior for $w_y$. In our experiments we found that M3 consistently outperformed the other variants, suggesting that such effects are important to model in HC. Although it would be natural to extend M3 by placing ARD priors instead of the uniform $\alpha_y$, we do not expect to see better performance, due to the difficulty of learning a large number of parameters. Preliminary experiments confirmed our suspicions, so we did not explore this direction further. The three covariance structures are contrasted in the sketch below.
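A compact side-by-side view of the three prior covariances, as a Python sketch; the argument names are ours and the function is purely illustrative.

```python
import numpy as np

def prior_covariance(variant, d, alpha_parent, alpha_parent_vec, alpha_self):
    """Prior covariance of w_y under the three HBLR variants.
    M1: parent's shared spherical covariance;
    M2: parent's feature-specific (ARD) diagonal;
    M3: the node's own spherical covariance."""
    if variant == "M1":
        return np.eye(d) / alpha_parent
    if variant == "M2":
        return np.diag(1.0 / alpha_parent_vec)   # one alpha per feature
    if variant == "M3":
        return np.eye(d) / alpha_self
    raise ValueError(variant)
```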
3 Inference for HBLR

In this section we present the inference method for M2, which is the harder case; the procedure can be easily extended to M1 and M3 (complete derivations are presented in the extended version located at http://www.cs.cmu.edu/~sgopal1). The posterior of M2 is given by

$$p(W, \alpha \mid D) \propto p(D \mid W, \alpha)\, p(W, \alpha) \propto \prod_{(x,t) \in D} \frac{\exp(w_t^\top x)}{\sum_{t' \in T} \exp(w_{t'}^\top x)} \; \prod_{y \in Y} p\big(w_y \mid w_{\pi(y)}, \Sigma_{\pi(y)}\big) \; \prod_{y \in Y \setminus T} \prod_{i=1}^{d} p\big(\alpha_y^{(i)} \mid a_y^{(i)}, b_y^{(i)}\big) \qquad (2)$$

A closed-form solution for the posterior is not possible due to the non-conjugacy between the logistic likelihood and the Gaussian prior; we therefore resort to variational methods to compute the posterior. However, variational methods are themselves computationally intractable in high-dimensional scenarios due to the requirement of a matrix inversion, which is computationally intensive. We therefore also explore much faster approximation schemes, such as partial MAP inference, which are highly scalable. Finally, we show that the resulting approximate variational inference procedure can be parallelized in a map-reduce framework to tackle large-scale problems that would be impossible to solve on a single processor.

3.1 Variational Inference

Starting with a simple factored form for the posterior, we seek a distribution q which is closest in KL divergence to the true posterior p. We use independent Gaussian $q(w_y)$ and Gamma $q(\alpha_y)$ posterior distributions per node as the factored representation:

$$q(W, \alpha) = \prod_{y \in Y \setminus T} q(\alpha_y) \prod_{y \in Y} q(w_y) \equiv \prod_{y \in Y \setminus T} \prod_{i=1}^{d} \Gamma\big(\cdot \mid \tau_y^{(i)}, \upsilon_y^{(i)}\big) \; \prod_{y \in Y} N\big(\cdot \mid \mu_y, \Psi_y\big)$$

In order to tackle the non-conjugacy inside $p(D \mid W, \alpha)$ in (2), we use a suitable lower bound on the soft-max normalization constant proposed by [5]: for any $\beta \in \mathbb{R}$ and $\xi_k \in [0, \infty)$,

$$\log\Big(\sum_k e^{g_k}\Big) \leq \beta + \sum_k \left[ \frac{g_k - \beta - \xi_k}{2} + \lambda(\xi_k)\big((g_k - \beta)^2 - \xi_k^2\big) + \log\big(1 + e^{\xi_k}\big) \right], \qquad \lambda(\xi) = \frac{1}{2\xi}\left[\frac{1}{1 + e^{-\xi}} - \frac{1}{2}\right]$$

where $\beta$ and the $\xi_k$ are variational parameters which we can optimize to get the tightest possible bound. For every $(x, y)$ we introduce variational parameters $\gamma_x$ and $\xi_{xy}$. We now derive an EM algorithm that computes the posterior in the E-step and maximizes the variational parameters in the M-step.

Variational E-Step. The local variational parameters are fixed, and the posterior for a parameter is computed by matching the log-likelihood of the posterior with the expectation of the log-likelihood under the rest of the parameters. The parameters are updated as

$$\Psi_y^{-1} = \mathbb{I}(y \in T) \sum_{(x,t) \in D} 2\lambda(\xi_{xy})\, x x^\top + \mathrm{diag}\Big(\frac{\tau_{\pi(y)}}{\upsilon_{\pi(y)}}\Big) + |C_y|\, \mathrm{diag}\Big(\frac{\tau_y}{\upsilon_y}\Big) \qquad (3)$$

$$\mu_y = \Psi_y \left[ \mathbb{I}(y \in T) \sum_{(x,t) \in D} \Big(\mathbb{I}(t = y) - \frac{1}{2} + 2\lambda(\xi_{xy})\,\gamma_x\Big) x + \mathrm{diag}\Big(\frac{\tau_{\pi(y)}}{\upsilon_{\pi(y)}}\Big) \mu_{\pi(y)} + \mathrm{diag}\Big(\frac{\tau_y}{\upsilon_y}\Big) \sum_{c \in C_y} \mu_c \right]$$

$$\tau_y^{(i)} = a_y^{(i)} + \frac{|C_y|}{2}, \qquad \upsilon_y^{(i)} = b_y^{(i)} + \frac{1}{2} \sum_{c \in C_y} \Big[ \Psi_y^{(i,i)} + \Psi_c^{(i,i)} + \big(\mu_y^{(i)} - \mu_c^{(i)}\big)^2 \Big] \qquad (4)$$

Variational M-Step. We keep the parameters of the posterior distribution fixed and maximize the variational parameters $\xi_{xy}, \gamma_x$ (refer to [5] for detailed M-step derivations):

$$\xi_{xy}^2 = x^\top \mathrm{diag}(\Psi_y)\, x + \big(\gamma_x - \mu_y^\top x\big)^2, \qquad \gamma_x = \frac{\frac{1}{2}\big(\frac{1}{2}|T| - 1\big) + \sum_{y \in T} \lambda(\xi_{xy})\, \mu_y^\top x}{\sum_{y \in T} \lambda(\xi_{xy})}$$

A small code sketch of these local-parameter updates follows.
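A vectorized Python sketch of the M-step above. This is only our reading of the updates, not the authors' implementation: the matrix shapes, the diagonal treatment of $\Psi_y$, and the continuity value $\lambda(0) = 1/8$ are all our assumptions.

```python
import numpy as np

def lam(xi):
    """lambda(xi) from Bouchard's soft-max bound; lambda(0) = 1/8 by continuity."""
    xi = np.asarray(xi, dtype=float)
    out = np.full_like(xi, 0.125)
    nz = xi != 0
    out[nz] = (1.0 / (1.0 + np.exp(-xi[nz])) - 0.5) / (2.0 * xi[nz])
    return out

def m_step(X, mu, psi_diag, gamma):
    """Update the local parameters xi_{xy} and gamma_x.

    X: (n, d) instances; mu: (|T|, d) leaf posterior means;
    psi_diag: (|T|, d) diagonal posterior variances; gamma: (n,)."""
    scores = X @ mu.T                                     # (n, |T|)
    xi = np.sqrt((X ** 2) @ psi_diag.T + (gamma[:, None] - scores) ** 2)
    L = lam(xi)
    gamma = (0.5 * (0.5 * mu.shape[0] - 1) + (L * scores).sum(axis=1)) / L.sum(axis=1)
    return xi, gamma
```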
py ) y?T The posterior can be calculated as before, by introducing variational parameters ??xy , ??x and matching the log likelihoods. Substituting q?(l, W) in (5), we see that the predictive distribution is given by q?(l) and the target class label is given by arg maxy?T p?y . 3.2 Partial MAP Inference In most applications, the requirement for a matrix inversion in step (3) could be demanding. In such scenarios, we split the inference into two stages, first calculating the posterior of wy using MAP solution, and second calculating the posterior of ?y . In the first stage, we find the MAP estimate wymap and then use laplace approximation to approximate the posterior using a separate Normal distribution for each dimension, thereby leading to a diagonal covariance matrix. Note that due to the laplace approximation, wymap and the posterior mean ?y coincide. ? = wymap = arg max W X (?(i,i) )?1 = y ??(y) 1 ? (wy ? w?(y) )> diag( )(wy ? w?(y) ) + log p(D|W, ?) 2 ??(y) y?T X (6) x(i) pxy (1 ? pxy )x(i) (x,t)?Dy where pxy is the probability that training instance x is labeled as y. The arg max in (6) can be computed for all ?y at the same time using optimization techniques like LBFGS [13]. For the second stage, parameters ?y and ?y are updated using (4). Full MAP inference is also possible by performing an alternating maximization between wy , ?y but we do not recommend it as there is no gain in scalability compared to partial MAP Inference and it loses the posterior distribution of ?y . 3.3 Parallelization For large hierarchies, it might be impractical to learn the parameters of all classes, or even store them in memory, on a single machine. We therefore, devise a parallel memory-efficient implementation scheme for our partial MAP Inference. There are 4 sets of parameters that are updated {?y , ?y , ?y , ?y }. The ?y , ?y , ?y can be updated in parallel for each node using (3),(4). For ?, the optimization step in (6) is not easy to parallelize since the w?s are coupled together inside the soft-max function. To make it parallelizable we replace the soft-max function in (1) with multiple binary logistic functions (one for each terminal node), which removes the coupling of parameters inside the log-normalization constant. The optimization can now be done in parallel by making the following observations - firstly note that the optimization problem in (6) is concave maximation, therefore any order of updating the variables reaches the same unique maximum. Secondly, note that the interactions between the wy ?s are only through the parent and child nodes. By fixing the parameters of the parent and children, the parameter wy of a node can be optimized independently of the rest of the hierarchy. One simple way to parallelize is to traverse the hierarchy level by level, optimize the parameters at each level in parallel, and iterate until convergence. A better way that achieves a larger degree of parallelization is to iteratively optimize the odd and even levels - if we fix the parameters at the odd levels, the parameters of parents and the children of all nodes at even levels are fixed, and the wy ?s at all even levels can be optimized in parallel. The same goes for optimizing the odd level parameters. To aid convergence we interleave the ?, ? updates with the ?, ? updates and warm-start with the previous value of ?y . In practice, for the larger hierarchies we observed speedups linear in the number of processors. 
Note that convergence follows from viewing this procedure as block coordinate ascent on a concave differentiable function [15]. We tested our parallelization framework on a cluster running map-reduce based Hadoop 20.2 with 64 worker nodes, each with 8 cores and 16GB RAM. We used the Accumulo 1.4 key-value store for fast retrieve-update of the $w_y$'s. On this hardware, our experiments on the largest dataset, with 15358 class labels and 347256 features, took just 38 minutes. Although the map-reduce framework is not a requirement, it is a ubiquitous paradigm in distributed computing, and having an implementation compatible with it is a definite advantage.

Table 1: Dataset Statistics

Dataset       #Training  #Testing  #Class-Labels  #Leaf-labels  Depth  #Features
CLEF          10000      1006      87             63            4      89
NEWS20        11260      7505      27             20            3      53975
LSHTC-small   4463       1858      1563           1139          6      51033
LSHTC-large   93805      34905     15358          12294         6      347256
IPC           46324      28926     552            451           4      541869

4 Setting prior parameters

The parameters $w_0, \Sigma_0$ represent the overall mean and covariance structure for the $w_y$. We set $w_0 = 0$ and $\Sigma_0 = I$ because of their minimal effect on the rest of the parameters. The $a_y^{(i)}, b_y^{(i)}$ are variance components such that $b_y^{(i)}/a_y^{(i)}$ represents the expected variance of $w_y^{(i)}$. Typically, choosing these parameters is difficult before seeing the data. The traditional way to overcome this is to learn $\{a_y, b_y\}$ from the data using Empirical Bayes. Unfortunately, in our proposed model one cannot do this, as each $\{a_y, b_y\}$ is associated with a single $\alpha_y$; generally, we need more than one sample value to learn the prior parameters effectively [7]. We therefore resort to a data-dependent way of setting these parameters, using an approximation to the observed Fisher information matrix. We first derive it for a simpler model and then extend it to a hierarchy. Consider the following binary logistic model with unknown w, and let the Fisher information matrix be $I$ and the observed Fisher information be $\hat{I}$:

$$Y \mid x \sim \mathrm{Bernoulli}\Big(\frac{\exp(w^\top x)}{1 + \exp(w^\top x)}\Big), \qquad I = \mathbb{E}\big[p(x)(1 - p(x))\, x x^\top\big], \qquad \hat{I} = \sum_{(x,t) \in D} \hat{p}(x)\big(1 - \hat{p}(x)\big)\, x x^\top$$

It is well known that $I^{-1}$ is the asymptotic covariance of the MLE estimator of w, so a reasonable guess for the covariance of a Gaussian prior over w could be the observed $\hat{I}^{-1}$ from a dataset D. The problem with $\hat{I}^{-1}$ is that we do not have a good estimate $\hat{p}(x)$ for a given x, as we have exactly one sample for each x; that is, each instance x is labeled exactly once with certainty, so $\hat{p}(x)(1 - \hat{p}(x))$ will always be zero. We therefore approximate $\hat{p}(x)$ by the sample prior probability independent of x, i.e. $\hat{p}(x) = \hat{p} = \sum_{(x,t) \in D} t / |D|$. Now the prior on the covariance of $w_y$ can be set such that the expected covariance is $\hat{I}^{-1}$. To extend this to HC, we need to handle multiple classes, which can be done by estimating $\hat{I}(y)^{-1}$ for each $y \in T$, as well as multiple levels, which can be done by recursively setting $a_y, b_y$ as follows:

$$\big(a_y^{(i)}, b_y^{(i)}\big) = \begin{cases} \Big(\sum_{c \in C_y} a_c^{(i)}, \; \sum_{c \in C_y} b_c^{(i)}\Big) & \text{if } y \notin T \\[4pt] \Big(1, \; \hat{I}(y)^{-1\,(i,i)}\Big) & \text{if } y \in T \end{cases}$$

where $\hat{I}(y)$ is the observed Fisher information matrix for class label y. This way of setting the priors is similar to the method proposed in [12]; the key differences are in approximating $p(x)(1 - p(x))$ from the data rather than using $p(x) = \frac{1}{2}$, and in the extension to handle multiple classes as well as hierarchies. A code sketch of this recursion is given below.
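A small NumPy sketch of this recursive prior-setting. The diagonal treatment of $\hat{I}(y)$ (inverting elementwise rather than inverting the full matrix), the epsilon guard, and all names are our simplifications, not the paper's exact procedure.

```python
import numpy as np

def leaf_prior(X, y_bin):
    """Diagonal of the observed Fisher information for one leaf class,
    with p(x) approximated by the class prior p_hat (Section 4)."""
    p_hat = y_bin.mean()
    fisher_diag = p_hat * (1.0 - p_hat) * np.sum(X ** 2, axis=0)
    a = np.ones_like(fisher_diag)
    b = 1.0 / np.maximum(fisher_diag, 1e-12)   # expected variance = b/a
    return a, b

def propagate_priors(node, children, leaves, X, Y):
    """Set (a_y, b_y): leaves from the data, internal nodes as sums over
    their children, computed bottom-up by recursion."""
    if node in leaves:
        return leaf_prior(X, (Y == node).astype(float))
    parts = [propagate_priors(c, children, leaves, X, Y) for c in children[node]]
    a = np.sum([p[0] for p in parts], axis=0)
    b = np.sum([p[1] for p in parts], axis=0)
    return a, b
```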
We also tried other popular strategies, such as setting improper gamma priors $\Gamma(\epsilon, \epsilon)$, $\epsilon \to 0$, widely used in many ARD works (which is equivalent to using type-2 ML for the $\alpha$'s if one uses variational methods [2]), and Empirical Bayes using a single a and b (as well as other Empirical Bayes variants). Neither worked well, the former being too sensitive to the value of $\epsilon$ (which is in agreement with the observations made by [11]) and the latter constraining the model by using a single a and b. We do not discuss this any further due to lack of space.

5 Experiments and Results

Throughout our experiments, we used 4 popular benchmark datasets (Table 1) with the recommended train-test splits: CLEF [8], NEWS20 (http://people.csail.mit.edu/jrennie/20Newsgroups/), LSHTC-{small,large} (http://lshtc.iit.demokritos.gr/), and IPC (http://www.wipo.int/classifications/ipc/en/support/).

First, to evaluate the speed advantage of the variational inference, we compare the full variational ({M1,M2,M3}-var) and partial MAP ({M1,M2,M3}-map) inference for the three variants of HBLR (code available at http://www.cs.cmu.edu/~sgopal1) to the MCMC sampling based inference of CorrMNL [18]. For CorrMNL, we used the implementation provided by the authors (http://www.ics.uci.edu/~babaks/Site/Codes.html). We performed sampling for 2500 iterations, with 1000 for burn-in.
We tested Oneversus rest Binary logistic Regressions (BLR), Multiclass Logistic Regression (MLR), One-versus Rest Binary SVMs (BSVM), and Multiclass SVM (MSVM) [21]. For all competing approaches, we tune the regularization parameter using 5 fold CV with a range of values from 10?5 to 105 . For the HBLR models, we used partial MAP Inference because full variational is not scalable to high dimensions. The IPC and LSHTC-large are very large datasets so we are unable to test any method other than our parallel implementation of HBLR, and BLR, BSVM which can be trivially parallelized. Although TD can be parallelized we did not pursue this since TD did not achieve competitive performance on the other datasets. Parallelizing the other methods is not obvious and has not been discussed in previous literature to the best of our knowledge. Table 3 summarizes the results obtain by the different methods. The performance was measured using the standard macro-F1 and micro-F1 measures [14]. The significance tests are performed using sign-test for Micro-F1 and a wilcoxon rank test on the Macro-F1 scores. For every data collection, each method is compared to the best performing method on that dataset. The null hypothesis is that there is no significanct difference between the two systems being compared, the alternative is that the best-performing-method is better. Among M1,M2 and M3, the performance of M3 seems to be consistently better than M1, followed by M2. Although M2 is more expressive than M1, the benefit of a better model seems to be offset by the difficulty in learning a large number of parameters. Comparing to the other hierarchical baselines, M3 achieves significantly higher performance on all datasets, showing that the Bayesian approach is able to leverage the information provided in the class hierarchy. Among the baselines, we find that the average performance of HSVM is higher than the TD, OT. This can be partially explained by noting that both OT and TD are greedy topdown classification methods and any error made in the top level classifications propagates down to 7 Table 3: Macro-F1 and Micro-F1 on the 4 datasets. Bold faced number indicate best performing method. The results of the significance tests are denoted * for a p-value less than 5% and ? for p-value less than 1%. {M1,M2,M3}-map Hierarchical methods Flat methods M1 M2 M3 HSVM OT TD BLR MLR BSVM MSVM CLEF Macro-f1 55.53? 54.76? 59.65 57.23* 37.12? 32.32? 53.26? 54.76? 48.59? 54.33? Micro-f1 80.88* 80.25* 81.41 79.72? 73.84? 70.11? 79.92? 80.52? 77.53? 80.02? NEWS20 Macro-f1 81.54 80.91* 81.69 80.04? 81.20 80.86* 82.17 81.82 82.32 81.73 Micro-f1 82.24* 81.54* 82.56* 80.79* 81.98* 81.20? 82.97 82.56* 83.10 82.47* LSHTC-small Macro-f1 28.81? 25.81? 30.81 21.95? 19.45? 20.01? 28.12? 28.38* 28.62* 28.34* Micro-f1 45.48 43.31? 46.03 39.66? 37.12? 38.48? 44.94? 45.20 45.21* 45.62 LSHTC-large Macro-f1 28.32* 24.93? 28.76 27.91* 27.89* Micro-f1 43.98 43.11? 44.05 43.98 44.03 IPC Macro-f1 50.43? 47.45? 51.06 48.29? 45.71? Micro-f1 55.80* 54.22? 56.02 55.03? 53.12? the leaf node; in contrast to HSVM which uses an exhaustive search over all labels. However, the result of OT do not seem to support the conclusions in [23]. We hypothesize two reasons - firstly, the orthogonality condition which is assumed in OT does not hold in general, secondly, unlike [23] we use cross-validation to set the underlying regularization parameters rather than setting them arbitrarily to 1 (which was used in [23]). 
Surprisingly, the hierarchical baselines (HSVM,TD and OT) experience a very large drop in performance on LSHTC-small when compared to the flat baselines, indicating that the hierarchy information actually mislead these methods rather than helping them. In contrast, M3 is consistently better than the flat baselines on all datasets except NEWS20. In particular, M3 performs significantly better on the largest datasets, especially in Macro-F1 , showing that even very large class hierarchies can convey very useful information, and highlighting the importance of having a scalable, parallelizable hierarchical classification algorithm. To further establish the importance of modeling the hierarchy, we test our approach under scenarios when the number of training examples is limited. We expect the hierarchy to be most useful in such cases as it enables of sharing of information between class parameters. To verify this, we progressively increased the number of training examples per class-label on the CLEF dataset and compared M3-map with the other best performing methods. Figure 1 reports the results of M3map, MLR, BSVM, MSVM averaged over 20 runs. The results shows that M3-map is significantly better than the other methods especially when the number of examples is small. For instance, when there is exactly one training example per class, M3-map achieves a whopping 10% higher MicroF1 and a 2% higher Macro-F1 than the next best method. We repeated the same experiments on the NEWS20 dataset but however did not find an improved performance even with limited training examples suggesting that the hierarchical methods are not able to leverage the hierarchical structure of NEWS20. 6 Conclusion In this paper, we presented the HBLR approach to hierarchical classification, focusing on scalable ways to leverage hierarchical dependencies among classes in a joint framework. Using a Gaussian prior with informative mean and covariance matrices, along with fast variational methods, and a practical way to set hyperparameters, HBLR significantly outperformed other popular HC methods on multiple benchmark datasets. We hope this study provides useful insights into how hierarchical relationships can be successfully leveraged in large-scale HC. In future, we would like to adapt this approach to equivalent non-bayesian large-margin discriminative counterparts. ACKNOWLDEGMENTS: This work is supported, in part, by the NEC Laboratories America, Princeton under ?NEC Labs Data Management University Awards? and the National Science Foundation (NSF) under grant IIS 1216282. A major part of work was accomplished while the first author was interning at NEC Labs, Princeton. 8 References [1] P.N. Bennett and N. Nguyen. Refined experts: improving classification in large taxonomies. In SIGIR, 2009. [2] C.M. Bishop. Pattern recognition and machine learning. [3] C.M. Bishop and M.E. Tipping. Bayesian regression and classification. 2003. [4] D. Borthakur. The hadoop distributed file system: Architecture and design. Hadoop Project Website, 11:21, 2007. [5] G. Bouchard. Efficient bounds for the softmax function. 2007. [6] L. Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In CIKM, pages 78?87. ACM, 2004. [7] George Casella. Empirical bayes method - a tutorial. Technical report. [8] I. Dimitrovski, D. Kocev, L. Suzana, and S. D?zeroski. Hierchical annotation of medical images. In IMIS, 2008. [9] C.B. Do, C.S. Foo, and A.Y. Ng. Efficient multiple hyperparameter learning for log-linear models. 
[10] S. Dumais and H. Chen. Hierarchical classification of web content. In SIGIR, 2000.
[11] A. Gelman. Prior distributions for variance parameters in hierarchical models. Bayesian Analysis, 2006.
[12] R.E. Kass and R. Natarajan. A default conjugate prior for variance components in generalized linear mixed models. Bayesian Analysis, 2006.
[13] D.C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503-528, 1989.
[14] T.Y. Liu, Y. Yang, H. Wan, H.J. Zeng, Z. Chen, and W.Y. Ma. Support vector machines classification with a very large-scale taxonomy. ACM SIGKDD, pages 36-43, 2005.
[15] Z.Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7-35, 1992.
[16] D.J.C. MacKay. The evidence framework applied to classification networks. Neural Computation, 1992.
[17] A. McCallum, R. Rosenfeld, T. Mitchell, and A.Y. Ng. Improving text classification by shrinkage in a hierarchy of classes. In ICML, pages 359-367, 1998.
[18] B. Shahbaba and R.M. Neal. Improving classification when a class hierarchy is available using a hierarchy-based prior. Bayesian Analysis, 2(1):221-238, 2007.
[19] M.E. Tipping. Sparse Bayesian learning and the relevance vector machine. JMLR, 1:211-244, 2001.
[20] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6(2):1453, 2006.
[21] J. Weston and C. Watkins. Multi-class support vector machines. Technical report, 1998.
[22] G.R. Xue, D. Xing, Q. Yang, and Y. Yu. Deep classification in large-scale text hierarchies. In SIGIR, pages 619-626. ACM, 2008.
[23] D. Zhou, L. Xiao, and M. Wu. Hierarchical classification via orthogonal transfer. Technical report, MSR-TR-2011-54, 2011.
Recognizing Overlapping Hand-Printed Characters by Centered-Object Integrated Segmentation and Recognition

Gale L. Martin* & Mosfeq Rashid
MCC
Austin, Texas 78759 USA
(* Also with Eastman Kodak Company)

Abstract

This paper describes an approach, called centered object integrated segmentation and recognition (COISR), for integrating object segmentation and recognition within a single neural network. The application is hand-printed character recognition. Two versions of the system are described. One uses a backpropagation network that scans exhaustively over a field of characters and is trained to recognize whether it is centered over a single character or between characters. When it is centered over a character, the net classifies the character. The approach is tested on a dataset of hand-printed digits. Very low error rates are reported. The second version, COISR-SACCADE, avoids the need for exhaustive scans. The net is trained as before, but also is trained to compute ballistic 'eye' movements that enable the input window to jump from one character to the next.

The common model of visual processing includes multiple, independent stages. First, filtering operations act on the raw image to segment or isolate and enhance to-be-recognized clumps. These clumps are normalized for factors such as size, and sometimes simplified further through feature extraction. The results are then fed to one or more classifiers. The operations prior to classification simplify the recognition task. Object segmentation restricts the number of features considered for classification to those associated with a single object, and enables normalization to be applied at the individual object level. Without such pre-processing, recognition may be an intractable problem. However, a weak point of this sequential stage model is that recognition and segmentation decisions are often inter-dependent. Not only does a correct recognition decision depend on first making a correct segmentation decision, but a correct segmentation decision often depends on first making a correct recognition decision. This is a particularly serious problem in character recognition applications. OCR systems use intervening white space and related features to segment a field of characters into individual characters, so that classification can be accomplished one character at a time. This approach fails when characters touch each other or when an individual character is broken up by intervening white space. Some means of integrating the segmentation and recognition stages is needed. This paper describes an approach, called centered object integrated segmentation and recognition (COISR), for integrating character segmentation and recognition within one neural network.

[Figure 1: The COISR Exhaustive Scan Approach (the net's output over time as its input window scans across the field).]

The general approach builds on previous work in pre-segmented character recognition (LeCun, Boser, Denker, Henderson, Howard, Hubbard, & Jackel, 1990; Martin & Pittman, 1990) and on the sliding window conception used in neural network speech applications, such as NETtalk (Sejnowski & Rosenberg, 1986) and Time Delay Neural Networks (Waibel, Sawai, & Shikano, 1988). Two versions of the approach are described. In both cases, a net is trained to recognize what is centered in its input window as it slides along a character field. The window size is chosen to be large enough to include more than one character.
1 COISR VERSION 1: EXHAUSTIVE SCAN

As shown in Figure 1, the net is trained on an input window and a target output vector representing what is in the center of the window. The top half of the figure shows the net's input window scanning successively across the field. Sometimes the middle of the window is centered on a character, and sometimes it is centered on a point between two characters. The target output vector consists of one node per category, and one node corresponding to a NOT-CENTERED condition. This latter node has a high target activation value when the input window is not centered over any character. A temporal stream of output vectors is created (shown at the bottom half of the figure) as the net scans the field. There is no need to explicitly segment characters, during training or testing, because recognition is defined as identifying what is in the center of the scanning window. The net learns to extract regularities in the shapes of individual characters even when those regularities occur in the context of overlapping and broken characters. The final stage of processing involves parsing the temporal stream generated as the net scans the field to yield an ASCII string of recognized characters.

1.1 IMPLEMENTATION DETAILS

The COISR approach was tested using the National Institute of Standards and Technology (NIST) database of hand-printed digit fields, using fields 6-30 of the form, which correspond to five different fields of length 2, 3, 4, 5, or 6 digits each. The training data included roughly 80,000 digits (800 forms, 20,000 fields), and came from forms labeled f0000-f0499 and f1500-f1799 in the dataset. The test data consisted of roughly 20,000 digits (200 forms, 5,000 fields) and came from forms labeled f1800-f1899 and f2000-f2099 in the dataset. The large test set was used because considerable variations in test scores occurred with smaller test set sizes. The samples were scanned at a 300 pixel/inch resolution. Each field image was preprocessed to eliminate the white space and box surrounding the digit field. Each field was then size normalized with respect to the vertical height of the digit field to a vertical height of 20 pixels. Since the input is size normalized to the vertical height of the field of characters, the actual number of characters in the constant-width input window of 36 pixels varies depending on the height-to-width ratio of each character. The scan rate was a 3-pixel increment across the field.

A key design principle of the present approach is that highly accurate integrated segmentation and recognition requires training on both the shapes of characters and their positions within the input window. The field images used for training were labeled with the horizontal center positions of each character in the field. The human labeler simply pointed at the horizontal center of each digit in sequence with a mouse cursor and clicked on a mouse button. The horizontal position of each character was then paired with its category label (0-9) in a data file. The labeling process is not unlike a human reading teacher using a pointer to indicate the position of each character as he or she reads aloud the sequence of characters making up the word or sentence. During testing this position information is not used. Position information about character centers is used to generate target output values for each possible position of the input window as it scans a field of characters.
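A minimal sketch of this target construction is given below (our own illustrative Python, not the original implementation; the array shapes, function names, and ramp width are assumptions). It approximates the linear-ramp profile described next.

```python
import numpy as np

def make_targets(window_centers, char_centers, char_labels, n_classes=10,
                 default_half_gap=10.0):
    """Build an (n_positions, n_classes + 1) array of target outputs.

    Hypothetical re-implementation of the COISR target scheme: node k
    (k < n_classes) should be high when the window is centered on a digit
    of class k; the extra NOT-CENTERED node is high midway between
    characters.  Targets fall off linearly with distance from the nearest
    labeled character center (a simplified version of the trapezoidal
    profile described in the text).  `char_centers` is assumed sorted.
    """
    targets = np.zeros((len(window_centers), n_classes + 1))
    gaps = np.diff(char_centers)
    half_gap = 0.5 * gaps.min() if len(gaps) > 0 else default_half_gap
    for i, w in enumerate(window_centers):
        d = np.abs(np.asarray(char_centers) - w)
        j = int(np.argmin(d))
        centered = float(np.clip(1.0 - d[j] / half_gap, 0.0, 1.0))
        targets[i, char_labels[j]] = centered
        targets[i, n_classes] = 1.0 - centered   # NOT-CENTERED target
    return targets
```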
When the center position of a window is close to the center of a character, the target value of that character's output node is set at the maximum, with the target value of the NOT-CENTERED node set at the minimum. The activation values of all other characters' output nodes are set at the minimum. When the center position of a window is close to the half-way point between two character centers, the target values of all character output nodes are set to the minimum and the target value of the NOT-CENTERED node is set to a maximum. Between these two extremes, the target values vary linearly with distance, creating a trapezoidal function.

The neural network is a 2-hidden-layer backpropagation network, with local, shared connections in the first hidden layer, and local connections in the second hidden layer (see Figure 2). The first hidden layer consists of 2016 nodes, or more specifically 18 independent groups of 112 (16x7) nodes, with each group having local, shared connections to the input layer. The local, overlapping receptive fields of size 6x8 are offset by 2 pixels, such that the region covered by each group of nodes spans the input layer. The second hidden layer consists of 180 nodes, having local, but NOT shared receptive fields of size 6x3. The output layer consists of 11 nodes, with each of these nodes connected to all of the nodes in the 2nd hidden layer. The net has a total of 2927 nodes (including input and output nodes), and 157,068 connections. In a feedforward (non-learning) mode on a DEC 5000 workstation, in which the net is scanning a field of digits, the system processes about two digits per second, which includes image pre-processing and the necessary number of feedforward passes on the net.

[Figure 2: Architecture for the COISR-Exhaustive Scan approach (output layer, two hidden layers, and a 36-pixel-wide input layer).]

As the net scans horizontally, the activation values of the 11 output nodes create a trace as shown in Figure 1. To convert this to an ASCII string corresponding to the digits in the field, the state of the NOT-CENTERED node is monitored continuously. When its activation value falls below a threshold, a summing process begins for each of the other nodes, and ends when the activation value of the NOT-CENTERED node exceeds the threshold. At this point the system decides that the input window has moved off of a character. The system then classifies the character on the basis of which output node has the highest summed activation for the position just passed over.

1.2 GENERALIZATION PERFORMANCE

As shown in Figure 3, the COISR technique achieves very low field-based error rates, particularly for a single-classifier system.

[Figure 3: Field-based test error rate (%) as a function of the percentage of rejections, for field sizes of 2-6 digits.]

The error rates are field-based in the sense that if the network mis-classifies one character in the field, the entire field is considered as mis-classified.
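The stream-decoding procedure described above can be sketched as follows (illustrative code of our own; the actual threshold value and any normalization are not specified in the original). It also records the margin between the best and runner-up summed activations, which the rejection mechanism discussed next can threshold.

```python
import numpy as np

def decode_stream(outputs, not_centered_idx=10, threshold=0.5):
    """Parse the temporal stream of output vectors into digits.

    outputs: array of shape (n_positions, 11) of per-position activations,
    with column `not_centered_idx` holding the NOT-CENTERED node.
    Returns (digits, margins); each margin is the best-minus-runner-up
    summed activation for one decoded character, usable for rejection.
    """
    digits, margins = [], []
    summing, sums = False, None
    for out in outputs:
        if out[not_centered_idx] < threshold:      # window is over a character
            if not summing:
                summing, sums = True, np.zeros(len(out) - 1)
            sums += np.delete(out, not_centered_idx)
        elif summing:                              # window just moved off a character
            order = np.argsort(sums)
            digits.append(int(order[-1]))
            margins.append(float(sums[order[-1]] - sums[order[-2]]))
            summing = False
    if summing:                                    # field ended on a character
        order = np.argsort(sums)
        digits.append(int(order[-1]))
        margins.append(float(sums[order[-1]] - sums[order[-2]]))
    return digits, margins
```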
Error rates pertain to the fields remaining after rejection. Rejections are based on placing a threshold on the acceptable distance between the highest and the next highest running activation total. In this way, by varying the threshold, the error rate can be traded off against the percentage of rejections. Since the reported data apply to fields, the threshold applies to the smallest distance value found across all of the characters in the field. Figure 4 provides examples, from the test set, of fields that the COISR network correctly classifies.

[Figure 4: Test set examples of touching and broken characters correctly recognized.]

The COISR technique is a success in the sense that it does something that conventional character recognition systems cannot do. It robustly recognizes character fields containing touching, overlapping, and broken characters. One problem with the approach, however, lies with the exhaustive nature of the scan. The components needed to recognize a character in a given location are essentially replicated across the length of the to-be-classified input field, at the degree of resolution necessary to recognize the smallest and closest characters. While this has not presented any real difficulties for the present system, which processes 2 characters per second, it is likely to be troublesome when extensions are made to two-dimensional scans and larger vocabularies. A rough analogy with respect to human vision would be to require that all of the computational resources needed for recognizing objects at one point on the retina be replicated for each resolvable point on the retina. This design carries the notion of a compound eye to the ridiculous extreme.

2 COISR VERSION 2: SACCADIC SCAN

Taking a cue from natural vision systems, the second version of the COISR system uses a saccadic scan. The system is trained to make ballistic eye movements, so that it can effectively jump from character to character and over blank areas. This version is similar to the exhaustive scan version in the sense that a backprop net is trained to recognize when its input window is centered on a character, and if so, to classify the character. In addition, the net is trained for navigation control (Pomerleau, 1991). At each point in a field of characters, the net is trained to estimate the distance to the next character on the right, and to estimate the degree to which the center-most character is off-center. The trained net accommodates variations in character width, spacing between characters, writing styles, and other factors. At run-time, the system uses the computed character classification and distances to navigate along a character field. If the character classification judgment, for a given position, has a high degree of certainty, the system accesses the next-character distance computed by the net for the current position and executes the jump. If the system gets off-track, so that a character cannot be recognized with a high degree of certainty, it makes a corrective saccade by accessing the off-center character distance computed by the net for the current position. This action corresponds to making a second attempt to center the character within the input window. The primary advantage of this approach, over the exhaustive scan, is improved efficiency, as illustrated in Figure 5.
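The run-time control loop just described can be sketched as follows (an illustrative reconstruction of ours; the confidence measure, the distance encodings returned by the net, and the small-step fallback are our assumptions, not details from the original system).

```python
def saccadic_read(net, field, start_x, end_x, confidence_threshold=0.9):
    """Read a character field with ballistic jumps instead of an
    exhaustive scan.  `net(field, x)` is assumed to return a list of
    class scores for the window centered at x, plus the net's estimates
    of the distance to the next character and of the current character's
    off-center offset."""
    x, digits = start_x, []
    retried = False
    while x < end_x:
        scores, next_char_dist, off_center_dist = net(field, x)
        confidence = max(scores)
        if confidence >= confidence_threshold:
            digits.append(scores.index(confidence))
            x += max(1, int(next_char_dist))    # ballistic jump to the next character
            retried = False
        elif not retried:
            x += int(off_center_dist)           # corrective saccade: re-center, retry
            retried = True
        else:
            x += 1                              # assumed fallback step to get unstuck
            retried = False
    return digits
```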
The scanning input windows are shown at the top of the figure, for each approach, and each character-containing input window, shown below the scanned image for each approach, corresponds to a forward pass of the net. The exhaustive scan version requires about 4 times as many forward passes as the saccadic scan version. Greater improvements in efficiency can be achieved with wider input windows and images containing more blank areas. The system is still under development, but its accuracy approaches that of the exhaustive scan system.

[Figure 5: Number of forward passes for the saccadic and exhaustive scan systems.]

3 COMPARISONS & CONCLUSIONS

In comparing accuracy rates between different OCR systems, one relevant factor that should be reported is the number of classifiers used. For a given system, increasing the number of classifiers typically reduces error rates but increases processing time. The low error rates reported here for the COISR-Exhaustive Scan approach come from a single classifier operating at 2 characters per second on a general-purpose workstation. Most OCR systems employ multiple classifiers. For example, at the NIPS workshops this year, Jonathan Hull described the University of Buffalo zip code recognition system, which contains five classifiers and requires about one minute to process a character. Keeler and Rumelhart, at this conference, also described a two-classifier neural net system for NIST digit recognition. The fact that the COISR approach achieved quite low error rates with a single classifier indicates that the approach is a promising one.

Clearly, another relevant factor in comparing systems is the ability to recognize touching and broken characters, since this is a dominant stumbling block for current OCR systems. Conventional systems can be altered to achieve integrated segmentation and recognition in limited cases, but this involves a lot of hand-crafting and a significant amount of time-consuming iterative processing (Fenrich, 1991). Essentially, multiple segmenters are used, and classification is performed for each such possible segmentation. The final segmentation and recognition decisions can thus be inter-dependent, but only at the cost of computing multiple segmentations and, correspondingly, multiple classification decisions. The approach breaks down as the number of possible segmentations increases, as would occur for example if individual characters are broken or touching in multiple places or if multiple letters in a sequence are connected. The COISR system does not appear to have this problem.

The NIPS conference this year has included a number of other neural net approaches to integrated segmentation and recognition in OCR domains. Two approaches similar to the COISR-Exhaustive Scan system were those described by Faggin and by Keeler and Rumelhart. All three achieve integrated segmentation and recognition by convolving a neural network over a field of characters. Faggin described an analog hardware implementation of a neural-network-based OCR system that receives as input a window that slides along the machine-print digit field at the bottom of bank checks. Keeler and Rumelhart described a self-organizing integrated segmentation and recognition (SOISR) system. Initially, it is trained on characters that have been pre-segmented by a labeler effectively drawing a box around each.
Then, in subsequent training, a net with these pre-trained weights is duplicated repetitively across the extent of a fixed-width input field, and is further trained on examples of entire fields that contain connecting or broken characters. All three approaches have the weakness, described previously, of performing essentially exhaustive scans or convolutions over the to-be-classified input field. This complaint is not necessarily directed at the specific applications dealt with at this year's NIPS conference, particularly if operating at the high levels of efficiency described by Faggin. Nor is the complaint directed at tasks that only require the visual system to focus on a few small clusters or fields in the larger, otherwise blank input field. In these cases, low-resolution filters may be sufficient to efficiently remove blank areas and enable efficient integrated segmentation and recognition. However, we use as an example the saccadic scanning behavior of human vision in tasks such as reading this paragraph. In such cases, which require high-resolution sensitivity across a large, dense image and classification of a very large vocabulary of symbols, it seems clear that other, more flexible and efficient scanning mechanisms will be necessary. This high-density image domain is the focus of the COISR-Saccadic Scan approach, which integrates not only the segmentation and recognition of characters, but also control of the navigational aspects of vision.

Acknowledgements

We thank Lori Barski, John Canfield, David Chapman, Roger Gaborski, Jay Pittman, and Dave Rumelhart for helpful discussions and/or development of supporting image handling and network software. I also thank Jonathan Martin for help with the position labeling.

References

Fenrich, R. Segmentation of automatically located handwritten words. Paper presented at the International Workshop on Frontiers in Handwriting Recognition, Chateau de Bonas, France, 23-27 September 1991.

Keeler, J. D., Rumelhart, David E., & Leow, Wee-Kheng. Integrated segmentation and recognition of hand-printed numerals. In R. P. Lippmann, John E. Moody, David S. Touretzky (Eds.), Advances in Neural Information Processing Systems 3, pp. 557-563. San Mateo, CA: Morgan Kaufmann, 1991.

LeCun, Y., Boser, B., Denker, J., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. Handwritten digit recognition with a backpropagation network. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.

Martin, G. L., & Pittman, J. A. Recognizing hand-printed letters and digits. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.

Pomerleau, D. A. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3, 1991, 88-97.

Rumelhart, D. (1989). Learning and generalization in multi-layer networks. Presentation given at the NATO Advanced Research Workshop on Neuro Computing: Algorithms, Architectures and Applications, Les Arcs, France, February 1989.

Sejnowski, T. J., & Rosenberg, C. R. (1986). NETtalk: a parallel network that learns to read aloud. Johns Hopkins University Electrical Engineering and Computer Science Technical Report JHU/EECS-86/01.

Waibel, A., Sawai, H., & Shikano, K. (1988). Modularity and scaling in large phonemic neural networks. ATR Interpreting Telephony Research Laboratories Technical Report TR-I-0034.
A Better Way to Pretrain Deep Boltzmann Machines

Geoffrey Hinton
Department of Computer Science
University of Toronto
hinton@cs.toronto.edu

Ruslan Salakhutdinov
Department of Statistics and Computer Science
University of Toronto
rsalakhu@utstat.toronto.edu

Abstract

We describe how the pretraining algorithm for Deep Boltzmann Machines (DBMs) is related to the pretraining algorithm for Deep Belief Networks and we show that under certain conditions, the pretraining procedure improves the variational lower bound of a two-hidden-layer DBM. Based on this analysis, we develop a different method of pretraining DBMs that distributes the modelling work more evenly over the hidden layers. Our results on the MNIST and NORB datasets demonstrate that the new pretraining algorithm allows us to learn better generative models.

1 Introduction

A Deep Boltzmann Machine (DBM) is a type of binary pairwise Markov Random Field with multiple layers of hidden random variables. Maximum likelihood learning in DBMs, and other related models, is very difficult because of the hard inference problem induced by the partition function [3, 1, 12, 6]. Multiple layers of hidden units make learning in DBMs far more difficult [13]. Learning meaningful DBM models, particularly when modelling high-dimensional data, relies on the heuristic greedy pretraining procedure introduced by [7], which is based on learning a stack of modified Restricted Boltzmann Machines (RBMs). Unfortunately, unlike the pretraining algorithm for Deep Belief Networks (DBNs), the existing procedure lacks a proof that adding additional layers improves the variational bound on the log-probability that the model assigns to the training data.

In this paper, we first show that under certain conditions, the pretraining algorithm improves a variational lower bound of a two-layer DBM. This result gives a much deeper understanding of the relationship between the pretraining algorithms for Deep Boltzmann Machines and Deep Belief Networks. Using this understanding, we introduce a new pretraining procedure for DBMs and show that it allows us to learn better generative models of handwritten digits and 3D objects.

2 Deep Boltzmann Machines (DBMs)

A Deep Boltzmann Machine is a network of symmetrically coupled stochastic binary units. It contains a set of visible units $v \in \{0,1\}^D$, and a series of layers of hidden units $h^{(1)} \in \{0,1\}^{F_1}$, $h^{(2)} \in \{0,1\}^{F_2}$, ..., $h^{(L)} \in \{0,1\}^{F_L}$. There are connections only between units in adjacent layers. Consider a DBM with three hidden layers, as shown in Fig. 1, left panel. The probability that the DBM assigns to a visible vector $v$ is:

$$P(v; \theta) = \frac{1}{Z(\theta)} \sum_{h} \exp\Big( \sum_{ij} W^{(1)}_{ij} v_i h^{(1)}_j + \sum_{jl} W^{(2)}_{jl} h^{(1)}_j h^{(2)}_l + \sum_{lm} W^{(3)}_{lm} h^{(2)}_l h^{(3)}_m \Big), \quad (1)$$

[Figure 1: Left: Deep Belief Network (DBN) and Deep Boltzmann Machine (DBM). The top two layers of a DBN form an undirected graph and the remaining layers form a belief net with directed, top-down connections. For a DBM, all the connections are undirected. Right: Pretraining a DBM with three hidden layers consists of learning a stack of RBMs that are then composed to create a DBM. The first and last RBMs in the stack need to be modified by using asymmetric weights.]

where $h = \{h^{(1)}, h^{(2)}, h^{(3)}\}$ is the set of hidden units, and $\theta$
$= \{W^{(1)}, W^{(2)}, W^{(3)}\}$ are the model parameters, representing visible-to-hidden and hidden-to-hidden symmetric interaction terms.* Setting $W^{(2)} = 0$ and $W^{(3)} = 0$ recovers the Restricted Boltzmann Machine (RBM) model.

(* We omit the bias terms for clarity of presentation.)

Approximate Learning: Exact maximum likelihood learning in this model is intractable, but efficient approximate learning of DBMs can be carried out by using mean-field inference to estimate data-dependent expectations, and an MCMC-based stochastic approximation procedure to approximate the model's expected sufficient statistics [7]. In particular, consider approximating the true posterior $P(h|v; \theta)$ with a fully factorized approximating distribution over the three sets of hidden units:

$$Q(h|v; \mu) = \prod_{j=1}^{F_1} q(h^{(1)}_j|v) \prod_{l=1}^{F_2} q(h^{(2)}_l|v) \prod_{k=1}^{F_3} q(h^{(3)}_k|v),$$

where $\mu = \{\mu^{(1)}, \mu^{(2)}, \mu^{(3)}\}$ are the mean-field parameters with $q(h^{(l)}_i = 1) = \mu^{(l)}_i$ for $l = 1, 2, 3$. In this case, we can write down the variational lower bound on the log-probability of the data, which takes a particularly simple form:

$$\log P(v; \theta) \ge v^\top W^{(1)} \mu^{(1)} + \mu^{(1)\top} W^{(2)} \mu^{(2)} + \mu^{(2)\top} W^{(3)} \mu^{(3)} - \log Z(\theta) + H(Q), \quad (2)$$

where $H(\cdot)$ is the entropy functional. Learning proceeds by finding the value of $\mu$ that maximizes this lower bound for the current value of model parameters $\theta$, which results in a set of mean-field fixed-point equations. Given the variational parameters $\mu$, the model parameters $\theta$ are then updated to maximize the variational bound using stochastic approximation (for details see [7, 11, 14, 15]).

3 Pretraining Deep Boltzmann Machines

The above learning procedure works quite poorly when applied to DBMs that start with randomly initialized weights. Hidden units in higher layers are very under-constrained so there is no consistent learning signal for their weights. To alleviate this problem, [7] introduced a layer-wise pretraining algorithm based on learning a stack of "modified" Restricted Boltzmann Machines (RBMs).

The idea behind the pretraining algorithm is straightforward. When learning parameters of the first layer "RBM", the bottom-up weights are constrained to be twice the top-down weights (see Fig. 1, right panel). Intuitively, using twice the weights when inferring the states of the hidden units $h^{(1)}$ compensates for the initial lack of top-down feedback. Conversely, when pretraining the last "RBM" in the stack, the top-down weights are constrained to be twice the bottom-up weights. For all the intermediate RBMs the weights are halved in both directions when composing them to form a DBM, as shown in Fig. 1, right panel. This heuristic pretraining algorithm works surprisingly well in practice. However, it is solely motivated by the need to end up with a model that has symmetric weights, and does not provide any useful insights into what is happening during the pretraining stage. Furthermore, unlike the pretraining algorithm for Deep Belief Networks (DBNs), it lacks a proof that each time a layer is added to the DBM, the variational bound improves.

3.1 Pretraining Algorithm for Deep Belief Networks

We first briefly review the pretraining algorithm for Deep Belief Networks [2], which will form the basis for developing a new pretraining algorithm for Deep Boltzmann Machines. Consider pretraining a two-layer DBN using a stack of RBMs. After learning the first RBM in the stack, we can write the generative model as: $p(v; W^{(1)}) = \sum_{h^{(1)}} p(v|h^{(1)}; W^{(1)})\, p(h^{(1)}; W^{(1)})$.
The second RBM in the stack attempts to replace the prior $p(h^{(1)}; W^{(1)})$ by a better model $p(h^{(1)}; W^{(2)}) = \sum_{h^{(2)}} p(h^{(1)}, h^{(2)}; W^{(2)})$, thus improving the fit to the training data. More formally, for any approximating distribution $Q(h^{(1)}|v)$, the DBN's log-likelihood has the following variational lower bound on the log probability of the training data $\{v_1, ..., v_N\}$:

$$\sum_{n=1}^{N} \log P(v_n) \ge \sum_n E_{Q(h^{(1)}|v_n)}\big[ \log P(v_n|h^{(1)}; W^{(1)}) \big] - \sum_n KL\big( Q(h^{(1)}|v_n) \,\|\, P(h^{(1)}; W^{(1)}) \big).$$

We set $Q(h^{(1)}|v_n; W^{(1)}) = P(h^{(1)}|v_n; W^{(1)})$, which is the true factorial posterior of the first-layer RBM. Initially, when $W^{(2)} = W^{(1)\top}$, $Q(h^{(1)}|v_n)$ defines the DBN's true posterior over $h^{(1)}$, and the bound is tight. Maximizing the bound with respect to $W^{(2)}$ only affects the last KL term in the above equation, and amounts to maximizing:

$$\frac{1}{N} \sum_{n=1}^{N} \sum_{h^{(1)}} Q(h^{(1)}|v_n; W^{(1)}) \log P(h^{(1)}; W^{(2)}). \quad (3)$$

This is equivalent to training the second-layer RBM with vectors drawn from $Q(h^{(1)}|v; W^{(1)})$ as data. Hence, the second RBM in the stack learns a better model of the mixture over all $N$ training cases, $\frac{1}{N}\sum_n Q(h^{(1)}|v_n; W^{(1)})$, called the "aggregated posterior". This scheme can be extended to training higher-layer RBMs. Observe that during the pretraining stage the whole prior of the lower-layer RBM is replaced by the next RBM in the stack. This leads to the hybrid Deep Belief Network model, with the top two layers forming a Restricted Boltzmann Machine, and the lower layers forming a directed sigmoid belief network (see Fig. 1, left panel).

3.2 A Variational Bound for Pretraining a Two-layer Deep Boltzmann Machine

Consider a simple two-layer DBM with tied weights $W^{(2)} = W^{(1)\top}$, as shown in Fig. 2a:

$$P(v; W^{(1)}) = \frac{1}{Z(W^{(1)})} \sum_{h^{(1)}, h^{(2)}} \exp\big( v^\top W^{(1)} h^{(1)} + h^{(1)\top} W^{(1)\top} h^{(2)} \big). \quad (4)$$

Similar to DBNs, for any approximate posterior $Q(h^{(1)}|v)$, we can write a variational lower bound on the log probability that this DBM assigns to the training data:

$$\sum_{n=1}^{N} \log P(v_n) \ge \sum_n E_{Q(h^{(1)}|v_n)}\big[ \log P(v_n|h^{(1)}; W^{(1)}) \big] - \sum_n KL\big( Q(h^{(1)}|v_n) \,\|\, P(h^{(1)}; W^{(1)}) \big). \quad (5)$$

The key insight is to note that the model's marginal distribution over $h^{(1)}$ is the product of two identical distributions, one defined by an RBM composed of $h^{(1)}$ and $v$, and the other defined by an identical RBM composed of $h^{(1)}$ and $h^{(2)}$ [8]:

$$P(h^{(1)}; W^{(1)}) = \frac{1}{Z(W^{(1)})} \underbrace{\Big( \sum_{v} e^{v^\top W^{(1)} h^{(1)}} \Big)}_{\text{RBM with } h^{(1)} \text{ and } v} \underbrace{\Big( \sum_{h^{(2)}} e^{h^{(2)\top} W^{(1)} h^{(1)}} \Big)}_{\text{RBM with } h^{(1)} \text{ and } h^{(2)}}. \quad (6)$$

[Figure 2: Left: Pretraining a Deep Boltzmann Machine with two hidden layers. a) The DBM with tied weights. b) The second RBM with two sets of replicated hidden units, which will replace half of the 1st RBM's prior. c) The resulting DBM with modified second hidden layer. Right: The DBM with tied weights is trained to model the data using one-step contrastive divergence.]

The idea is to keep one of these two RBMs and replace the other by the square root of a better prior $P(h^{(1)}; W^{(2)})$. In particular, another RBM with two sets of replicated hidden units and tied weights, $P(h^{(1)}; W^{(2)}) = \sum_{h^{(2a)}, h^{(2b)}} P(h^{(1)}, h^{(2a)}, h^{(2b)}; W^{(2)})$, is trained to be a better model of the aggregated variational posterior $\frac{1}{N}\sum_n Q(h^{(1)}|v_n; W^{(1)})$ of the first model (see Fig. 2b). By initializing $W^{(2)} = W^{(1)\top}$, the second-layer RBM has exactly the same prior over $h^{(1)}$ as the original DBM.
If the RBM is trained by maximizing the log likelihood objective:

$$\sum_n \sum_{h^{(1)}} Q(h^{(1)}|v_n) \log P(h^{(1)}; W^{(2)}), \quad (7)$$

then we obtain:

$$\sum_n KL\big( Q(h^{(1)}|v_n) \,\|\, P(h^{(1)}; W^{(2)}) \big) \le \sum_n KL\big( Q(h^{(1)}|v_n) \,\|\, P(h^{(1)}; W^{(1)}) \big). \quad (8)$$

Similar to Eq. 6, the distribution over $h^{(1)}$ defined by the second-layer RBM is also the product of two identical distributions. Once the two RBMs are composed to form a two-layer DBM model (see Fig. 2c), the marginal distribution over $h^{(1)}$ is the geometric mean of the two probability distributions $P(h^{(1)}; W^{(1)})$, $P(h^{(1)}; W^{(2)})$ defined by the first and second-layer RBMs:

$$P(h^{(1)}; W^{(1)}, W^{(2)}) = \frac{1}{Z(W^{(1)}, W^{(2)})} \Big( \sum_{v} e^{v^\top W^{(1)} h^{(1)}} \Big) \Big( \sum_{h^{(2)}} e^{h^{(1)\top} W^{(2)} h^{(2)}} \Big). \quad (9)$$

Based on Eqs. 8, 9, it is easy to show that the variational lower bound of Eq. 5 improves because replacing half of the prior by a better model reduces the KL divergence from the variational posterior:

$$\sum_n KL\big( Q(h^{(1)}|v_n) \,\|\, P(h^{(1)}; W^{(1)}, W^{(2)}) \big) \le \sum_n KL\big( Q(h^{(1)}|v_n) \,\|\, P(h^{(1)}; W^{(1)}) \big). \quad (10)$$

Due to the convexity of asymmetric divergence, this is guaranteed to improve the variational bound of the training data by at least half as much as fully replacing the original prior.

This result highlights a major difference between DBNs and DBMs. The procedure for adding an extra layer to a DBN replaces the full prior over the previous top layer, whereas the procedure for adding an extra layer to a DBM only replaces half of the prior. So in a DBM, the weights of the bottom-level RBM perform much more of the work than in a DBN, where the weights are only used to define the last stage of the generative process $P(v|h^{(1)}; W^{(1)})$. This result also suggests that adding layers to a DBM will give diminishing improvements in the variational bound, compared to adding layers to a DBN. This may explain why DBMs with three hidden layers typically perform worse than DBMs with two hidden layers [7, 8]. On the other hand, the disadvantage of the pretraining procedure for Deep Belief Networks is that the top-layer RBM is forced to do most of the modelling work. This may also explain the need to use a large number of hidden units in the top-layer RBM [2]. There is, however, a way to design a new pretraining algorithm that would spread the modelling work more equally across all layers, hence bypassing the shortcomings of the existing pretraining algorithms for DBNs and DBMs.

[Figure 3: Left ("Replacing 2/3 of the Prior"): Pretraining a Deep Boltzmann Machine with two hidden layers. a) The DBM with tied weights. b) The second-layer RBM is trained to model 2/3 of the 1st RBM's prior. c) The resulting DBM with modified second hidden layer. Right ("Practical Implementation"): The corresponding practical implementation of the pretraining algorithm that uses asymmetric weights.]

3.3 Controlling the Amount of Modelling Work done by Each Layer

Consider a slightly modified two-layer DBM with two groups of replicated 2nd-layer units, $h^{(2a)}$ and $h^{(2b)}$, and tied weights (see Fig. 3a). The model's marginal distribution over $h^{(1)}$ is the product of three identical RBM distributions, defined by $h^{(1)}$ and $v$, $h^{(1)}$ and $h^{(2a)}$, and $h^{(1)}$ and $h^{(2b)}$:

$$P(h^{(1)}; W^{(1)}) = \frac{1}{Z(W^{(1)})} \Big( \sum_{v} e^{v^\top W^{(1)} h^{(1)}} \Big) \Big( \sum_{h^{(2a)}} e^{h^{(2a)\top} W^{(1)} h^{(1)}} \Big) \Big( \sum_{h^{(2b)}} e^{h^{(2b)\top} W^{(1)} h^{(1)}} \Big).$$

During the pretraining stage, we keep one of these RBMs and replace the other two by a better prior $P(h^{(1)}; W^{(2)})$.
To do so, similar to Sec. 3.2, we train another RBM, but with three sets of hidden units and tied weights (see Fig. 3b). When we combine the two RBMs into a DBM, the marginal distribution over $h^{(1)}$ is the geometric mean of three probability distributions: one defined by the first-layer RBM, and the remaining two defined by the second-layer RBMs:

$$P(h^{(1)}; W^{(1)}, W^{(2)}) = \frac{1}{Z(W^{(1)}, W^{(2)})} P(h^{(1)}; W^{(1)})\, P(h^{(1)}; W^{(2)})\, P(h^{(1)}; W^{(2)})$$
$$= \frac{1}{Z(W^{(1)}, W^{(2)})} \Big( \sum_{v} e^{v^\top W^{(1)} h^{(1)}} \Big) \Big( \sum_{h^{(2a)}} e^{h^{(1)\top} W^{(2)} h^{(2a)}} \Big) \Big( \sum_{h^{(2b)}} e^{h^{(1)\top} W^{(2)} h^{(2b)}} \Big).$$

In this DBM, 2/3 of the first RBM's prior over the first hidden layer has been replaced by the prior defined by the second-layer RBM. The variational bound on the training data is guaranteed to improve by at least 2/3 as much as fully replacing the original prior. Hence in this slightly modified DBM model, the second layer performs 2/3 of the modelling work compared to the first layer. Clearly, controlling the number of replicated hidden groups allows us to easily control the amount of modelling work left to the higher layers in the stack.

3.4 Practical Implementation

So far, we have made the assumption that we start with a two-layer DBM with tied weights. We now specify how one would train this initial set of tied weights $W^{(1)}$. Let us consider the original two-layer DBM in Fig. 2a with tied weights. If we knew the initial state vector $h^{(2)}$, we could train this DBM using one-step contrastive divergence (CD) with mean-field reconstructions of both the visible states $v$ and the top-layer states $h^{(2)}$, as shown in Fig. 2, right panel. Instead, we simply set the initial state vector $h^{(2)}$ to be equal to the data, $v$. Using mean-field reconstructions for $v$ and $h^{(2)}$, one-step CD is exactly equivalent to training a modified "RBM" with only one hidden layer but with bottom-up weights that are twice the top-down weights, as defined in the original pretraining algorithm (see Fig. 1, right panel). This way of training the simple DBM with tied weights is unlikely to maximize the likelihood objective, but in practice it produces surprisingly good models that reconstruct the training data well.

When learning the second RBM in the stack, instead of maintaining a set of replicated hidden groups, it will often be convenient to approximate CD learning by training a modified RBM with one hidden layer but with asymmetric bottom-up and top-down weights. For example, consider pretraining a two-layer DBM, in which we would like to split the modelling work between the 1st and 2nd-layer RBMs as 1/3 and 2/3. In this case, we train the first-layer RBM using one-step CD, but with the bottom-up weights constrained to be three times the top-down weights (see Fig. 3, right panel). The conditional distributions needed for CD learning take the form:

$$P(h^{(1)}_j = 1|v) = \frac{1}{1 + \exp(-\sum_i 3 W^{(1)}_{ij} v_i)}, \qquad P(v_i = 1|h^{(1)}) = \frac{1}{1 + \exp(-\sum_j W^{(1)}_{ij} h^{(1)}_j)}.$$

Conversely, for the second modified RBM in the stack, the top-down weights are constrained to be 3/2 times the bottom-up weights. The conditional distributions take the form:

$$P(h^{(2)}_l = 1|h^{(1)}) = \frac{1}{1 + \exp(-\sum_j 2 W^{(2)}_{jl} h^{(1)}_j)}, \qquad P(h^{(1)}_j = 1|h^{(2)}) = \frac{1}{1 + \exp(-\sum_l 3 W^{(2)}_{jl} h^{(2)}_l)}.$$

Note that this second-layer modified RBM simply approximates the proper RBM with three sets of replicated $h^{(2)}$ groups. In practice, this simple approximation works well compared to training a proper RBM, and is much easier to implement.
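To make this concrete, here is a minimal sketch of one-step CD for the first-layer modified RBM with bottom-up weights tripled relative to the top-down weights (our own illustrative NumPy code, not the authors' implementation; biases, the learning-rate schedule, and momentum are omitted).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_asymmetric(v_data, W, lr=0.05, bottom_up_factor=3.0, rng=np.random):
    """One step of CD learning for a modified RBM whose bottom-up
    weights are `bottom_up_factor` times the top-down weights W.
    v_data: (batch, D) binary data; W: (D, F1) shared parameter."""
    # up pass with scaled weights: P(h = 1 | v) = sigmoid(3 v W)
    h_prob = sigmoid(bottom_up_factor * v_data @ W)
    h_sample = (rng.uniform(size=h_prob.shape) < h_prob).astype(float)
    # mean-field reconstruction of v with the unscaled top-down weights
    v_recon = sigmoid(h_sample @ W.T)
    # hidden probabilities given the reconstruction
    h_recon = sigmoid(bottom_up_factor * v_recon @ W)
    # contrastive divergence update of the shared parameter
    W += lr * (v_data.T @ h_prob - v_recon.T @ h_recon) / v_data.shape[0]
    return W
```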
When combining the RBMs into a two-layer DBM, we end up with $W^{(1)}$ and $2W^{(2)}$ in the first and second layers, each performing 1/3 and 2/3 of the modelling work respectively:

$$P(v; \theta) = \frac{1}{Z(\theta)} \sum_{h^{(1)}, h^{(2)}} \exp\big( v^\top W^{(1)} h^{(1)} + h^{(1)\top} 2W^{(2)} h^{(2)} \big). \quad (11)$$

Parameters of the entire model can be generatively fine-tuned using the combination of the mean-field algorithm and the stochastic approximation algorithm described in Sec. 2.

4 Pretraining a Three-Layer Deep Boltzmann Machine

In the previous section, we showed that provided we start with a two-layer DBM with tied weights, we can train the second-layer RBM in a way that is guaranteed to improve the variational bound. For the DBM with more than two layers, we have not been able to develop a pretraining algorithm that is guaranteed to improve a variational bound. However, the results of Sec. 3 suggest that using simple modifications when pretraining a stack of RBMs would allow us to approximately control the amount of modelling work done by each layer.

Consider learning a 3-layer DBM, in which each layer is forced to perform approximately 1/3 of the modelling work. This can easily be accomplished by learning a stack of three modified RBMs. Similar to the two-layer model, we train the first-layer RBM using one-step CD, but with the bottom-up weights constrained to be three times the top-down weights (see Fig. 4). Two-thirds of this RBM's prior will be modelled by the 2nd and 3rd-layer RBMs.

[Figure 4: Layer-wise pretraining of a 3-layer Deep Boltzmann Machine.]

For the second modified RBM in the stack, we use $4W^{(2)}$ bottom-up and $3W^{(2)}$ top-down. Note that we are using $4W^{(2)}$ bottom-up, as we are expecting to replace half of the second RBM prior by a third RBM, hence splitting the remaining 2/3 of the work equally between the top two layers. If we were to pretrain only a two-layer DBM, we would use $2W^{(2)}$ bottom-up and $3W^{(2)}$ top-down, as discussed in Sec. 3.2. For the last RBM in the stack, we use $2W^{(3)}$ bottom-up and $4W^{(3)}$ top-down. When combining the three RBMs into a three-layer DBM, we end up with symmetric weights $W^{(1)}$, $2W^{(2)}$, and $2W^{(3)}$ in the first, second, and third layers, with each layer performing 1/3 of the modelling work:

$$P(v; \theta) = \frac{1}{Z(\theta)} \sum_{h} \exp\big( v^\top W^{(1)} h^{(1)} + h^{(1)\top} 2W^{(2)} h^{(2)} + h^{(2)\top} 2W^{(3)} h^{(3)} \big). \quad (12)$$
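As an illustration of what this composition yields in practice, the following sketch (ours, with assumed shapes and names; it is not from the paper) composes the pretrained stack into the DBM of Eq. 12 and runs the mean-field fixed-point updates of Sec. 2 to obtain the variational parameters used in fine-tuning and bound estimation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def compose_dbm(W1, W2, W3):
    """Compose the pretrained weights into the 3-layer DBM of Eq. 12:
    {W1, 2*W2, 2*W3}."""
    return {"W1": W1, "W2": 2.0 * W2, "W3": 2.0 * W3}

def mean_field(dbm, v, n_iters=30):
    """Fixed-point mean-field updates for (mu1, mu2, mu3) given data v.
    Each hidden layer receives input from both adjacent layers."""
    W1, W2, W3 = dbm["W1"], dbm["W2"], dbm["W3"]
    mu1 = sigmoid(v @ W1)            # crude bottom-up initialization
    mu2 = sigmoid(mu1 @ W2)
    mu3 = sigmoid(mu2 @ W3)
    for _ in range(n_iters):
        mu1 = sigmoid(v @ W1 + mu2 @ W2.T)
        mu2 = sigmoid(mu1 @ W2 + mu3 @ W3.T)
        mu3 = sigmoid(mu2 @ W3)
    return mu1, mu2, mu3
```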
Note that compared to the original algorithm, it requires almost no extra work and can be easily integrated into existing code. Extensions to training DBMs with more layers is trivial. As we show in our experimental results, this pretraining can improve the generative performance of Deep Boltzmann Machines. 5 Experimental Results In our experiments we used the MNIST and NORB datasets. During greedy pretraining, each layer was trained for 100 epochs using one-step contrastive divergence. Generative fine-tuning of the full DBM model, using mean-field together with stochastic approximation, required 300 epochs. In order to estimate the variational lower-bounds achieved by different pretraining algorithms, we need to estimate the global normalization constant. Recently, [10] demonstrated that Annealed Importance Sampling (AIS) can be used to efficiently estimate the partition function of an RBM. We adopt AIS in our experiments as well. Together with variational inference this will allow us to obtain good estimates of the lower bound on the log-probability of the training and test data. 5.1 MNIST The MNIST digit dataset contains 60,000 training and 10,000 test images of ten handwritten digits (0 to 9), with 28?28 pixels. In our first experiment, we considered a standard two-layer DBM with 500 and 1000 hidden units2 , and used two different algorithms for pretraining it. The first pretraining algorithm, which we call DBM-1/2-1/2, is the original algorithm for pretraining DBMs, as introduced by [7] (see Fig. 1). Here, the modelling work between the 1st and 2nd -layer RBMs is split equally. The second algorithm, DBM-1/3-2/3, uses a modified pretraining procedure of Sec. 3.4, so that the second RBM in the stack ends up doing 2/3 of the modelling work compared to the 1st -layer RBM. Results are shown in Table 1. Prior to the global generative fine-tuning, the estimate of the lower bound on the average test log-probability for DBM-1/3-2/3 was ?108.65 per test case, compared to ?114.32 achieved by the standard pretraining algorithm DBM-1/2-1/2. The large difference of about 7 nats shows that leaving more of the modelling work to the second layer, which has a larger number of hidden units, substantially improves the variational bound. After the global generative fine-tuning, DBM-1/3-2/3 achieves a lower bound of ?83.43, which is better compared to ?84.62, achieved by DBM-1/2-1/2. This is also lower compared to the lower bound of ?85.97, achieved by a carefully trained two-hidden-layer Deep Belief Network [10]. In our second experiment, we pretrained a 3-layer Deep Boltzmann Machine with 500, 500, and 1000 hidden units. The existing pretraining algorithm, DBM-1/2-1/4-1/4, approximately splits the modelling between three RBMs in the stack as 1/2, 1/4, 1/4, so the weights in the 1st -layer RBM perform half of the work compared to the higher-level RBMs. On the other hand, the new pretraining procedure (see Alg. 1), which we call DBM-1/3-1/3-1/3, splits the modelling work equally across all three layers. 2 These architectures have been considered before in [7, 9], which allows us to provide a direct comparison. 7 Table 1: MNIST: Estimating the lower bound on the average training and test log-probabilities for two DBMs: one with two layers (500 and 1000 hidden units), and the other one with three layers (500, 500, and 1000 hidden units). Results are shown for various pretraining algorithms, followed by generative fine-tuning. 
Table 1: MNIST: Estimating the lower bound on the average training and test log-probabilities for two DBMs: one with two layers (500 and 1000 hidden units), and the other one with three layers (500, 500, and 1000 hidden units). Results are shown for various pretraining algorithms, followed by generative fine-tuning.

                                 Pretraining           Generative Fine-Tuning
                                Train      Test          Train      Test
2 layers   DBM-1/2-1/2        −113.32   −114.32         −83.61    −84.62
           DBM-1/3-2/3        −107.89   −108.65         −82.83    −83.43
3 layers   DBM-1/2-1/4-1/4    −116.74   −117.38         −84.49    −85.10
           DBM-1/3-1/3-1/3    −107.12   −107.65         −82.34    −83.02

Table 2: NORB: Estimating the lower bound on the average training and test log-probabilities for two DBMs: one with two layers (1000 and 2000 hidden units), and the other one with three layers (1000, 1000, and 2000 hidden units). Results are shown for various pretraining algorithms, followed by generative fine-tuning.

                                 Pretraining           Generative Fine-Tuning
                                Train      Test          Train      Test
2 layers   DBM-1/2-1/2        −640.94   −643.87        −598.13   −601.76
           DBM-1/3-2/3        −633.21   −636.65        −593.76   −597.23
3 layers   DBM-1/2-1/4-1/4    −641.87   −645.06        −598.98   −602.84
           DBM-1/3-1/3-1/3    −632.75   −635.14        −592.87   −596.11

Table 1 shows that DBM-1/3-1/3-1/3 achieves a lower bound on the average test log-probability of −107.65, improving upon DBM-1/2-1/4-1/4's bound of −117.38. The difference of about 10 nats further demonstrates that during the pretraining stage, it is rather crucial to push more of the modelling work to the higher layers. After generative fine-tuning, the bound on the test log-probabilities for DBM-1/3-1/3-1/3 was −83.02, so with the new pretraining procedure, the three-hidden-layer DBM performs slightly better than the two-hidden-layer DBM. With the original pretraining procedure, the 3-layer DBM achieves a bound of −85.10, which is worse than the bound of −84.62 achieved by the 2-layer DBM, as reported by [7, 9].

5.2 NORB

The NORB dataset [4] contains images of 50 different 3D toy objects, with 10 objects in each of five generic classes: cars, trucks, planes, animals, and humans. Each object is photographed from different viewpoints and under various lighting conditions. The training set contains 24,300 stereo image pairs of 25 objects, 5 per class, while the test set contains 24,300 stereo pairs of the remaining, different 25 objects. From the training data, 4,300 were set aside for validation. To deal with raw pixel data, we followed the approach of [5] by first learning a Gaussian-binary RBM with 4000 hidden units, and then treating the activities of its hidden layer as preprocessed binary data.

Similar to the MNIST experiments, we trained two Deep Boltzmann Machines: one with two layers (1000 and 2000 hidden units), and the other one with three layers (1000, 1000, and 2000 hidden units). Table 2 reveals that for both DBMs, the new pretraining achieves much better variational bounds on the average test log-probability. Even after the global generative fine-tuning, Deep Boltzmann Machines pretrained using the new algorithm improve upon standard DBMs by at least 5 nats.

6 Conclusion

In this paper we provided a better understanding of how the pretraining algorithms for Deep Belief Networks and Deep Boltzmann Machines are related, and used this understanding to develop a different method of pretraining. Unlike many of the existing pretraining algorithms for DBNs and DBMs, the new procedure can distribute the modelling work more evenly over the hidden layers. Our results on the MNIST and NORB datasets demonstrate that the new pretraining algorithm allows us to learn much better generative models.

Acknowledgments

This research was funded by NSERC, Early Researcher Award, and gifts from Microsoft and Google. G.H. and R.S. are fellows of the Canadian Institute for Advanced Research.

References
[1] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009.
[2] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[3] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring strategies for training deep neural networks. Journal of Machine Learning Research, 10:1–40, 2009.
[4] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In CVPR (2), pages 97–104, 2004.
[5] V. Nair and G. E. Hinton. Implicit mixtures of restricted Boltzmann machines. In Advances in Neural Information Processing Systems, volume 21, 2009.
[6] M. A. Ranzato. Unsupervised Learning of Feature Hierarchies. PhD thesis, New York University, 2009.
[7] R. R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 12, 2009.
[8] R. R. Salakhutdinov and G. E. Hinton. An efficient learning procedure for Deep Boltzmann Machines. Neural Computation, 24:1967–2006, 2012.
[9] R. R. Salakhutdinov and H. Larochelle. Efficient learning of deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 13, 2010.
[10] R. R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the International Conference on Machine Learning, volume 25, pages 872–879, 2008.
[11] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML. ACM, 2008.
[12] M. Welling and G. E. Hinton. A new learning algorithm for mean field Boltzmann machines. Lecture Notes in Computer Science, 2415, 2002.
[13] M. Welling and C. Sutton. Learning in Markov random fields with contrastive free energies. In International Workshop on AI and Statistics (AISTATS 2005), 2005.
[14] L. Younes. On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates, March 17, 2000.
[15] A. L. Yuille. The convergence of contrastive divergences. In Advances in Neural Information Processing Systems, 2004.
Gradient Weights help Nonparametric Regressors

Samory Kpotufe*, Max Planck Institute for Intelligent Systems, [email protected]
Abdeslam Boularias, Max Planck Institute for Intelligent Systems, [email protected]

Abstract

In regression problems over $\mathbb{R}^d$, the unknown function $f$ often varies more in some coordinates than in others. We show that weighting each coordinate $i$ with the estimated norm of the $i$th derivative of $f$ is an efficient way to significantly improve the performance of distance-based regressors, e.g. kernel and $k$-NN regressors. We propose a simple estimator of these derivative norms and prove its consistency. Moreover, the proposed estimator is efficiently learned online.

1 Introduction

In regression problems over $\mathbb{R}^d$, the unknown function $f$ might vary more in some coordinates than in others, even though all coordinates might be relevant. How much $f$ varies with coordinate $i$ can be captured by the norm $\|f_i'\|_{1,\mu} = \mathbb{E}_X |f_i'(X)|$ of the $i$th derivative $f_i' = e_i^\top \nabla f$ of $f$. A simple way to take advantage of the information in $\|f_i'\|_{1,\mu}$ is to weight each coordinate proportionally to an estimate of $\|f_i'\|_{1,\mu}$. The intuition, detailed in Section 2, is that the resulting data space behaves as a low-dimensional projection to coordinates with large norm $\|f_i'\|_{1,\mu}$, while maintaining information about all coordinates. We show that such weighting can be learned efficiently, both in batch-mode and online, and can significantly improve the performance of distance-based regressors in real-world applications. In this paper we focus on the distance-based methods of kernel and $k$-NN regression.

For distance-based methods, the weights can be incorporated into a distance function of the form $\rho(x, x') = \sqrt{(x - x')^\top \mathbf{W} (x - x')}$, where each element $W_i$ of the diagonal matrix $\mathbf{W}$ is an estimate of $\|f_i'\|_{1,\mu}$. This is not metric learning [1, 2, 3, 4], where the best $\rho$ is found by optimizing over a sufficiently large space of possible metrics. Clearly metric learning can only yield better performance, but the optimization over a larger space will result in heavier preprocessing time, often $O(n^2)$ on datasets of size $n$. Yet, preprocessing time is especially important in many modern applications where both training and prediction are done online (e.g. robotics, finance, advertisement, recommendation systems). Here we do not optimize over a space of metrics, but rather estimate a single metric $\rho$ based on the norms $\|f_i'\|_{1,\mu}$. Our metric $\rho$ is efficiently obtained, can be estimated online, and still significantly improves the performance of distance-based regressors.

To estimate $\|f_i'\|_{1,\mu}$, one does not need to estimate $f_i'$ well everywhere, just well on average. While many elaborate derivative estimators exist (see e.g. [5]), we have to keep in mind our need for a fast but consistent estimator of $\|f_i'\|_{1,\mu}$. We propose a simple estimator $W_i$ which averages the differences along $i$ of an estimator $f_{n,h}$ of $f$. More precisely (see Section 3), $W_i$ has the form $\mathbb{E}_n |f_{n,h}(X + te_i) - f_{n,h}(X - te_i)| / 2t$, where $\mathbb{E}_n$ denotes the empirical expectation over a sample $\{X_i\}_1^n$. $W_i$ can therefore be updated online at the cost of just two estimates of $f_{n,h}$. In this paper $f_{n,h}$ is a kernel estimator, although any regression method might be used in estimating $\|f_i'\|_{1,\mu}$.

(* Currently at Toyota Technological Institute Chicago, and affiliated with the Max Planck Institute.)

[Figure 1 panels: (a) SARCOS robot, joint 7. (b) Parkinson's. (c) Telecom.]
Figure 1: Typical gradient weights $\{W_i \approx \|f_i'\|_{1,\mu}\}_{i \in [d]}$ for some real-world datasets.

We prove in Section 4 that, under mild conditions, $W_i$ is a consistent estimator of the unknown norm $\|f_i'\|_{1,\mu}$. Moreover we prove finite-sample convergence bounds to help guide the practical tuning of the two parameters $t$ and $h$.

Most related work

As we mentioned above, metric learning is closest in spirit to the gradient-weighting approach presented here, but our approach is different from metric learning in that we do not search a space of possible metrics, but rather estimate a single metric based on gradients. This is far more time-efficient and can be implemented in online applications which require fast preprocessing. There exist many metric learning approaches, mostly for classification and few for regression (e.g. [1, 2]). The approaches of [1, 2] for regression are meant for batch learning. Moreover [1] is limited to Gaussian-kernel regression, and [2] is tuned to the particular problem of age estimation. For the problem of classification, the metric-learning approaches of [3, 4] are meant for online applications, but cannot be used in regression.

In the case of kernel regression and local polynomial regression, multiple bandwidths can be used, one for each coordinate [6]. However, tuning $d$ bandwidth parameters requires searching a $d \times d$ grid, which is impractical even in batch mode. The method of [6] alleviates this problem, however only in the particular case of local linear regression. Our method applies to any distance-based regressor.

Finally, the ideas presented here are related to recent notions of nonparametric sparsity where it is assumed that the target function is well approximated by a sparse function, i.e. one which varies little in most coordinates (e.g. [6, 7]). Here we do not need sparsity; instead we only need the target function to vary in some coordinates more than in others. Our approach therefore works even in cases where the target function is far from sparse.

2 Technical motivation

In this section, we motivate the approach by considering the ideal situation where $W_i = \|f_i'\|_{1,\mu}$. Let's consider regression on $(\mathcal{X}, \rho)$, where the input space $\mathcal{X} \subset \mathbb{R}^d$ is connected. The prediction performance of a distance-based estimator (e.g. kernel or $k$-NN) is well known to be the sum of its variance and its bias [7]. Regression on $(\mathcal{X}, \rho)$ decreases variance while keeping the bias controlled.

Regression variance decreases on $(\mathcal{X}, \rho)$: The variance of a distance-based estimate $f_n(x)$ is inversely proportional to the number of samples (and hence the mass) in a neighborhood of $x$ (see e.g. [8]). Let's therefore compare the masses of $\rho$-balls and Euclidean balls. Suppose some weights largely dominate others; for instance in $\mathbb{R}^2$, let $\|f_2'\|_{1,\mu} \gg \|f_1'\|_{1,\mu}$. A ball $B_\rho$ in $(\mathcal{X}, \rho)$ then takes an ellipsoidal shape, which we contrast with a Euclidean ball inscribed inside. [Illustration: an ellipsoidal $\rho$-ball, elongated along $e_1$, with a Euclidean ball inside.] Relative to a Euclidean ball, a ball $B_\rho$ of similar radius (after accounting for the scale change induced by $\rho$ on the space $\mathcal{X}$) has more mass in the direction $e_1$ in which $f$ varies least. This intuition is made more precise in Lemma 1 below, which is proved in the appendix. Essentially, let $R \subset [d]$ be the set of coordinates with larger weights $W_i$; then the mass of balls $B_\rho$ behaves like the mass of balls in $\mathbb{R}^{|R|}$. Thus, effectively, regression in $(\mathcal{X}, \rho)$ has variance nearly as small as that for regression in the lower-dimensional space $\mathbb{R}^{|R|}$. Note that the assumptions on the marginal $\mu$ in the lemma statement are verified for instance when $\mu$ has a continuous lower-bounded density on $\mathcal{X}$. For simplicity we let $(\mathcal{X}, \|\cdot\|)$ have diameter 1.

Lemma 1 (Mass of $\rho$-balls). Consider any $R \subset [d]$ such that $\max_{i \notin R} W_i < \min_{i \in R} W_i$. Suppose $\mathcal{X} \subset \frac{1}{\sqrt{d}}[0,1]^d$, and the marginal $\mu$ satisfies on $(\mathcal{X}, \|\cdot\|)$, for some $C_1, C_2$: $\forall x \in \mathcal{X}, \forall r > 0$, $C_1 r^d \le \mu(B(x, r)) \le C_2 r^d$. Let $\kappa \triangleq \max_{i \in R} W_i / \min_{i \in R} W_i$, $\epsilon_R \triangleq \max_{i \notin R} \sqrt{d W_i / \min_{i \in R} W_i}$, and let $\rho(\mathcal{X}) \triangleq \sup_{x, x' \in \mathcal{X}} \rho(x, x')$. Then for any $\epsilon \rho(\mathcal{X}) > 2\epsilon_R$,
$$\mu\big(B_\rho(x, \epsilon \rho(\mathcal{X}))\big) \ge C (2\kappa)^{-|R|} \epsilon^{|R|},$$
where $C$ is independent of $\epsilon$.

Ideally we would want $|R| \ll d$ and $\epsilon_R \approx 0$, which corresponds to a sparse metric.

Regression bias remains bounded on $(\mathcal{X}, \rho)$: The bias of distance-based regressors is controlled by the smoothness of the unknown function $f$ on $(\mathcal{X}, \rho)$, i.e. how much $f$ might differ for two close points. Turning back to our earlier example in $\mathbb{R}^2$, some points $x'$ that were originally far from $x$ along $e_1$ might now be included in the estimate $f_n(x)$ on $(\mathcal{X}, \rho)$. Intuitively, this should not add bias to the estimate since $f$ does not vary much in $e_1$. We have the following lemma.

Lemma 2 (Change in Lipschitz smoothness for $f$). Suppose each derivative $f_i'$ is bounded on $\mathcal{X}$ by $|f_i'|_{\sup}$. Assume $W_i > 0$ whenever $|f_i'|_{\sup} > 0$. Denote by $R$ the largest subset of $[d]$ such that $|f_i'|_{\sup} > 0$ for $i \in R$. We have for all $x, x' \in \mathcal{X}$,
$$|f(x) - f(x')| \le \left( \sum_{i \in R} \frac{|f_i'|_{\sup}}{\sqrt{W_i}} \right) \rho(x, x').$$

Applying the above lemma with $W_i = 1$, we see that in the original Euclidean space, the variation in $f$ relative to the distance between points $x, x'$ is of the order $\sum_{i \in R} |f_i'|_{\sup}$. This variation in $f$ is now increased in $(\mathcal{X}, \rho)$ by a factor of $1/\sqrt{\inf_{i \in R} \|f_i'\|_{1,\mu}}$ in the worst case. In this sense, the space $(\mathcal{X}, \rho)$ maintains information about all relevant coordinates. In contrast, information is lost under a projection of the data in the likely scenario that all or most coordinates are relevant.

Finally, note that if all weights were close, the space $(\mathcal{X}, \rho)$ is essentially equivalent to the original $(\mathcal{X}, \|\cdot\|)$, and we likely neither gain nor lose in performance, as confirmed by experiments. However, we observed that in practice, even when all coordinates are relevant, the gradient weights vary sufficiently (Figure 1) to yield significant performance gains for distance-based regressors.

3 Estimating $\|f_i'\|_{1,\mu}$

In all that follows we are given $n$ i.i.d. samples $(\mathbf{X}, \mathbf{Y}) = \{(X_i, Y_i)\}_{i=1}^n$ from some unknown distribution with marginal $\mu$. The marginal $\mu$ has support $\mathcal{X} \subset \mathbb{R}^d$ while the output $Y \in \mathbb{R}$.

The kernel estimate at $x$ is defined using any kernel $K(u)$, positive on $[0, 1/2]$, and 0 for $u > 1$. If $B(x, h) \cap \mathbf{X} = \emptyset$, $f_{n,h}(x) = \mathbb{E}_n Y$; otherwise
$$f_{n,\hat\rho,h}(x) = \sum_{i=1}^n \frac{K(\hat\rho(x, X_i)/h)}{\sum_{j=1}^n K(\hat\rho(x, X_j)/h)}\, Y_i = \sum_{i=1}^n w_i(x) Y_i, \qquad (1)$$
for some metric $\hat\rho$ and a bandwidth parameter $h$.

For the kernel regressor $f_{n,h}$ used to learn the metric $\rho$ below, $\hat\rho$ is the Euclidean metric. In the analysis we assume the bandwidth for $f_{n,h}$ is set as $h \propto \left( \log^2(n/\delta)/n \right)^{1/d}$, given a confidence parameter $0 < \delta < 1$. In practice we would learn $h$ by cross-validation, but for the analysis we only need to know the existence of a good setting of $h$.

The metric is defined as
$$W_i \triangleq \mathbb{E}_n\!\left[ \frac{|f_{n,h}(X + te_i) - f_{n,h}(X - te_i)|}{2t} \cdot \mathbf{1}\{A_{n,i}(X)\} \right] = \mathbb{E}_n\!\left[ \Delta_{t,i} f_{n,h}(X) \cdot \mathbf{1}\{A_{n,i}(X)\} \right], \qquad (2)$$
where $A_{n,i}(X)$ is the event that enough samples contribute to the estimate $\Delta_{t,i} f_{n,h}(X)$. For the consistency result, we assume the following setting:
$$A_{n,i}(X) \equiv \left\{ \min_{s \in \{-t, t\}} \mu_n\big(B(X + se_i, h/2)\big) \ge \alpha_n \right\}, \quad \text{where } \alpha_n \triangleq \frac{2d \ln 2n + \ln(4/\delta)}{n}.$$
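As a concrete illustration of (1) and (2), here is a minimal numpy sketch of the gradient-weight estimator with a box kernel. It is a simplification, not the paper's implementation: the acceptance event $A_{n,i}$ is replaced by a plain minimum-count check rather than the $\alpha_n$ mass threshold above, and $h$, $t$ are fixed instead of cross-validated.

```python
import numpy as np

def box_kernel_regress(X, Y, x, h):
    """f_{n,h}(x): box-kernel average of Y over B(x, h); falls back to the
    global mean when the ball is empty, as in Eq. (1)."""
    near = np.linalg.norm(X - x, axis=1) <= h
    if not near.any():
        return Y.mean(), 0
    return Y[near].mean(), int(near.sum())

def gradient_weights(X, Y, h, t, min_count=2):
    """W_i from Eq. (2): average of |f(X+te_i) - f(X-te_i)|/(2t) over the
    sample, with a simplified acceptance event A_{n,i} (at least min_count
    points must support each of the two shifted estimates)."""
    n, d = X.shape
    W = np.zeros(d)
    for i in range(d):
        shift = np.zeros(d)
        shift[i] = t
        total = 0.0
        for x in X:
            fp, cp = box_kernel_regress(X, Y, x + shift, h)
            fm, cm = box_kernel_regress(X, Y, x - shift, h)
            if cp >= min_count and cm >= min_count:   # indicator 1{A_{n,i}}
                total += abs(fp - fm) / (2 * t)
        W[i] = total / n
    return W

# toy check: f varies strongly in coordinate 0, weakly in 1, not at all in 2
rng = np.random.RandomState(0)
X = rng.rand(400, 3)
Y = 5 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.randn(400)
print(gradient_weights(X, Y, h=0.15, t=0.1))   # expect W[0] >> W[1] > W[2]
```

Since each update touches only two kernel estimates, the same computation can be maintained online as new samples arrive, which is the cost claim made in the introduction.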
4 Consistency of the estimator $W_i$ of $\|f_i'\|_{1,\mu}$

4.1 Theoretical setup

4.1.1 Marginal $\mu$

Without loss of generality we assume $\mathcal{X}$ has bounded diameter 1. The marginal is assumed to have a continuous density on $\mathcal{X}$ and has mass everywhere on $\mathcal{X}$: $\forall x \in \mathcal{X}, \forall h > 0$, $\mu(B(x, h)) \ge C_\mu h^d$. This is for instance the case if $\mu$ has a lower-bounded density on $\mathcal{X}$. Under this assumption, for samples $X$ in dense regions, $X \pm te_i$ is also likely to be in a dense region.

4.1.2 Regression function and noise

The output $Y \in \mathbb{R}$ is given as $Y = f(X) + \eta(X)$, where $\mathbb{E}\,\eta(X) = 0$. We assume the following general noise model: $\forall \delta > 0$ there exists $c > 0$ such that $\sup_{x \in \mathcal{X}} \mathbb{P}_{Y|X=x}(|\eta(x)| > c) \le \delta$. We denote by $C_Y(\delta)$ the infimum over all such $c$. For instance, suppose $\eta(X)$ has an exponentially decreasing tail; then $\forall \delta > 0$, $C_Y(\delta) \le O(\ln 1/\delta)$. A last assumption on the noise is that the variance of $(Y \mid X = x)$ is upper-bounded by a constant $\sigma_Y^2$ uniformly over all $x \in \mathcal{X}$.

Define the $\tau$-envelope of $\mathcal{X}$ as $\mathcal{X} + B(0, \tau) \triangleq \{z \in B(x, \tau),\ x \in \mathcal{X}\}$. We assume there exists $\tau$ such that $f$ is continuously differentiable on the $\tau$-envelope $\mathcal{X} + B(0, \tau)$. Furthermore, each derivative $f_i'(x) = e_i^\top \nabla f(x)$ is upper bounded on $\mathcal{X} + B(0, \tau)$ by $|f_i'|_{\sup}$ and is uniformly continuous on $\mathcal{X} + B(0, \tau)$ (this is automatically the case if the support $\mathcal{X}$ is compact).

4.1.3 Parameters varying with $t$

Our consistency results are expressed in terms of the following distributional quantities. For $i \in [d]$, define the $(t, i)$-boundary of $\mathcal{X}$ as $\partial_{t,i}(\mathcal{X}) \triangleq \{x : \{x + te_i, x - te_i\} \not\subset \mathcal{X}\}$. The smaller the mass $\mu(\partial_{t,i}(\mathcal{X}))$ at the boundary, the better we approximate $\|f_i'\|_{1,\mu}$. The second type of quantity is $\epsilon_{t,i} \triangleq \sup_{x \in \mathcal{X},\, s \in [-t, t]} |f_i'(x) - f_i'(x + se_i)|$.

Since $\mu$ has a continuous density on $\mathcal{X}$ and $\nabla f$ is uniformly continuous on $\mathcal{X} + B(0, \tau)$, we automatically have $\mu(\partial_{t,i}(\mathcal{X})) \xrightarrow{t \to 0} 0$ and $\epsilon_{t,i} \xrightarrow{t \to 0} 0$.

4.2 Main theorem

Our main theorem bounds the error in estimating each norm $\|f_i'\|_{1,\mu}$ with $W_i$. The main technical hurdles are in handling the various sample inter-dependencies introduced by both the estimates $f_{n,h}(X)$ and the events $A_{n,i}(X)$, and in analyzing the estimates at the boundary of $\mathcal{X}$.

Theorem 1. Let $t + h \le \tau$, and let $0 < \delta < 1$. There exist $C = C(\mu, K(\cdot))$ and $N = N(\mu)$ such that the following holds with probability at least $1 - 2\delta$. Define $A(n) \triangleq Cd \cdot \log(n/\delta) \cdot C_Y^2(\delta/2n) \cdot \sigma_Y^2 / \log^2(n/\delta)$. Let $n \ge N$; we have for all $i \in [d]$:
$$\left| W_i - \|f_i'\|_{1,\mu} \right| \le \frac{1}{t}\left( \sqrt{\frac{A(n)}{n h^d}} + h \sum_{i \in [d]} |f_i'|_{\sup} \right) + 2 |f_i'|_{\sup} \left( \sqrt{\frac{\ln 2d/\delta}{n}} + \mu(\partial_{t,i}(\mathcal{X})) \right) + \epsilon_{t,i}.$$

The bound suggests setting $t$ in the order of $h$ or larger. We need $t$ to be small in order for $\mu(\partial_{t,i}(\mathcal{X}))$ and $\epsilon_{t,i}$ to be small, but $t$ needs to be sufficiently large (relative to $h$) for the estimates $f_{n,h}(X + te_i)$ and $f_{n,h}(X - te_i)$ to differ sufficiently so as to capture the variation in $f$ along $e_i$.

The theorem immediately implies consistency for $t \xrightarrow{n \to \infty} 0$, $h \xrightarrow{n \to \infty} 0$, $h/t \xrightarrow{n \to \infty} 0$, and $(n/\log n)\, h^d t^2 \xrightarrow{n \to \infty} \infty$. This is satisfied for many settings, for example $t \propto \sqrt{h}$ and $h \propto 1/\log n$.

4.3 Proof of Theorem 1

The main difficulty in bounding $|W_i - \|f_i'\|_{1,\mu}|$ is in circumventing certain dependencies: both quantities $f_{n,h}(X)$ and $A_{n,i}(X)$ depend not just on $X \in \mathbf{X}$, but on other samples in $\mathbf{X}$, and thus introduce inter-dependencies between the estimates $\Delta_{t,i} f_{n,h}(X)$ for different $X \in \mathbf{X}$.

To handle these dependencies, we carefully decompose $|W_i - \|f_i'\|_{1,\mu}|$, $i \in [d]$, starting with:
$$\left| W_i - \|f_i'\|_{1,\mu} \right| \le \left| W_i - \mathbb{E}_n |f_i'(X)| \right| + \left| \mathbb{E}_n |f_i'(X)| - \|f_i'\|_{1,\mu} \right|. \qquad (3)$$

The following simple lemma bounds the second term of (3).

Lemma 3. With probability at least $1 - \delta$, we have for all $i \in [d]$,
$$\left| \mathbb{E}_n |f_i'(X)| - \|f_i'\|_{1,\mu} \right| \le |f_i'|_{\sup} \cdot \sqrt{\frac{\ln 2d/\delta}{n}}.$$

Proof.
Apply a Chernoff bound, and a union bound on $i \in [d]$.

Now the first term of equation (3) can be further bounded as
$$\left| W_i - \mathbb{E}_n |f_i'(X)| \right| \le \left| W_i - \mathbb{E}_n\big[|f_i'(X)| \cdot \mathbf{1}\{A_{n,i}(X)\}\big] \right| + \mathbb{E}_n\big[|f_i'(X)| \cdot \mathbf{1}\{\bar{A}_{n,i}(X)\}\big]$$
$$\le \left| W_i - \mathbb{E}_n\big[|f_i'(X)| \cdot \mathbf{1}\{A_{n,i}(X)\}\big] \right| + |f_i'|_{\sup} \cdot \mathbb{E}_n \mathbf{1}\{\bar{A}_{n,i}(X)\}. \qquad (4)$$

We will bound each term of (4) separately. The next lemma bounds the second term of (4). It is proved in the appendix. The main technicality in this lemma is that, for any $X$ in the sample $\mathbf{X}$, the event $\bar{A}_{n,i}(X)$ depends on other samples in $\mathbf{X}$.

Lemma 4. Let $\partial_{t,i}(\mathcal{X})$ be defined as in Section 4.1.3. For $n \ge n(\delta)$, with probability at least $1 - 2\delta$, we have for all $i \in [d]$,
$$\mathbb{E}_n \mathbf{1}\{\bar{A}_{n,i}(X)\} \le \sqrt{\frac{\ln 2d/\delta}{n}} + \mu(\partial_{t,i}(\mathcal{X})).$$

It remains to bound $\left| W_i - \mathbb{E}_n[|f_i'(X)| \cdot \mathbf{1}\{A_{n,i}(X)\}] \right|$. To this end we need to bring in $f$ through the following quantities:
$$\widetilde{W}_i \triangleq \mathbb{E}_n\!\left[ \frac{|f(X + te_i) - f(X - te_i)|}{2t} \cdot \mathbf{1}\{A_{n,i}(X)\} \right] = \mathbb{E}_n\big[ \Delta_{t,i} f(X) \cdot \mathbf{1}\{A_{n,i}(X)\} \big],$$
and, for any $x \in \mathcal{X}$, $\tilde{f}_{n,h}(x) \triangleq \mathbb{E}_{\mathbf{Y}|\mathbf{X}} f_{n,h}(x) = \sum_i w_i(x) f(x_i)$.

$\widetilde{W}_i$ is easily related to $\mathbb{E}_n[|f_i'(X)| \cdot \mathbf{1}\{A_{n,i}(X)\}]$. This is done in Lemma 5 below. The quantity $\tilde{f}_{n,h}(x)$ is needed when relating $W_i$ to $\widetilde{W}_i$.

Lemma 5. Define $\epsilon_{t,i}$ as in Section 4.1.3. With probability at least $1 - \delta$, we have for all $i \in [d]$,
$$\left| \widetilde{W}_i - \mathbb{E}_n\big[|f_i'(X)| \cdot \mathbf{1}\{A_{n,i}(X)\}\big] \right| \le \epsilon_{t,i}.$$

Proof. We have $f(x + te_i) - f(x - te_i) = \int_{-t}^{t} f_i'(x + se_i)\, ds$ and therefore
$$2t\,(f_i'(x) - \epsilon_{t,i}) \le f(x + te_i) - f(x - te_i) \le 2t\,(f_i'(x) + \epsilon_{t,i}).$$
It follows that $\left| \frac{1}{2t} |f(x + te_i) - f(x - te_i)| - |f_i'(x)| \right| \le \epsilon_{t,i}$, and therefore
$$\left| \widetilde{W}_i - \mathbb{E}_n\big[|f_i'(X)| \cdot \mathbf{1}\{A_{n,i}(X)\}\big] \right| \le \mathbb{E}_n \left| \frac{1}{2t} |f(X + te_i) - f(X - te_i)| - |f_i'(X)| \right| \le \epsilon_{t,i}.$$

It remains to relate $W_i$ to $\widetilde{W}_i$. We have
$$2t \left| W_i - \widetilde{W}_i \right| = 2t \left| \mathbb{E}_n\big[ (\Delta_{t,i} f_{n,h}(X) - \Delta_{t,i} f(X)) \cdot \mathbf{1}\{A_{n,i}(X)\} \big] \right| \le 2 \max_{s \in \{-t, t\}} \mathbb{E}_n\big[ |f_{n,h}(X + se_i) - f(X + se_i)| \cdot \mathbf{1}\{A_{n,i}(X)\} \big]$$
$$\le 2 \max_{s \in \{-t, t\}} \mathbb{E}_n\big[ |f_{n,h}(X + se_i) - \tilde{f}_{n,h}(X + se_i)| \cdot \mathbf{1}\{A_{n,i}(X)\} \big] \qquad (5)$$
$$+\ 2 \max_{s \in \{-t, t\}} \mathbb{E}_n\big[ |\tilde{f}_{n,h}(X + se_i) - f(X + se_i)| \cdot \mathbf{1}\{A_{n,i}(X)\} \big]. \qquad (6)$$

We first handle the bias term (6) in the next lemma, which is given in the appendix.

Lemma 6 (Bias). Let $t + h \le \tau$. We have for all $i \in [d]$, and all $s \in \{t, -t\}$:
$$\mathbb{E}_n\big[ |\tilde{f}_{n,h}(X + se_i) - f(X + se_i)| \cdot \mathbf{1}\{A_{n,i}(X)\} \big] \le h \sum_{i \in [d]} |f_i'|_{\sup}.$$

The variance term in (5) is handled in the lemma below. The proof is given in the appendix.

Lemma 7 (Variance terms). There exists $C = C(\mu, K(\cdot))$ such that, with probability at least $1 - 2\delta$, we have for all $i \in [d]$, and all $s \in \{-t, t\}$:
$$\mathbb{E}_n\big[ |f_{n,h}(X + se_i) - \tilde{f}_{n,h}(X + se_i)| \cdot \mathbf{1}\{A_{n,i}(X)\} \big] \le \sqrt{\frac{Cd \cdot \log(n/\delta)\, C_Y^2(\delta/2n)\, \sigma_Y^2}{n (h/2)^d}}.$$

The next lemma summarizes the above results:

Lemma 8. Let $t + h \le \tau$ and let $0 < \delta < 1$. There exists $C = C(\mu, K(\cdot))$ such that the following holds with probability at least $1 - 2\delta$. Define $A(n)$ as in Theorem 1. We have
$$\left| W_i - \mathbb{E}_n\big[|f_i'(X)| \cdot \mathbf{1}\{A_{n,i}(X)\}\big] \right| \le \frac{1}{t}\left( \sqrt{\frac{A(n)}{n h^d}} + h \sum_{i \in [d]} |f_i'|_{\sup} \right) + \epsilon_{t,i}.$$

Proof. Apply Lemmas 5, 6 and 7, in combination with equations (5) and (6).

To complete the proof of Theorem 1, apply Lemmas 8 and 3 in combination with equations (3) and (4).

Table 1: Normalized mean square prediction errors and average prediction time per point (in milliseconds). The top two tables are for KR vs KR-ρ and the bottom two for k-NN vs k-NN-ρ.

              Barrett j1   Barrett j5   SARCOS j1    SARCOS j5
KR error      0.50 ± 0.02  0.50 ± 0.03  0.16 ± 0.02  0.14 ± 0.02
KR-ρ error    0.38 ± 0.03  0.35 ± 0.02  0.14 ± 0.02  0.12 ± 0.01
KR time       0.39 ± 0.02  0.37 ± 0.01  0.28 ± 0.05  0.23 ± 0.03
KR-ρ time     0.41 ± 0.03  0.38 ± 0.02  0.32 ± 0.05  0.23 ± 0.02

              Housing      Concrete     Wine Quality Parkinson's  Telecom      Ailerons
KR error      0.37 ± 0.08  0.42 ± 0.05  0.75 ± 0.03  0.38 ± 0.03  0.30 ± 0.02  0.40 ± 0.02
KR-ρ error    0.25 ± 0.06  0.37 ± 0.03  0.75 ± 0.02  0.34 ± 0.03  0.23 ± 0.02  0.39 ± 0.02
KR time       0.10 ± 0.01  0.14 ± 0.02  0.19 ± 0.02  0.30 ± 0.03  0.15 ± 0.01  0.20 ± 0.01
KR-ρ time     0.11 ± 0.01  0.14 ± 0.01  0.19 ± 0.02  0.30 ± 0.03  0.16 ± 0.01  0.21 ± 0.01

              Barrett j1   Barrett j5   SARCOS j1    SARCOS j5
k-NN error    0.41 ± 0.02  0.40 ± 0.02  0.08 ± 0.01  0.08 ± 0.01
k-NN-ρ error  0.29 ± 0.01  0.30 ± 0.02  0.07 ± 0.01  0.07 ± 0.01
k-NN time     0.21 ± 0.04  0.16 ± 0.03  0.13 ± 0.01  0.13 ± 0.01
k-NN-ρ time   0.13 ± 0.04  0.16 ± 0.03  0.14 ± 0.01  0.13 ± 0.01

              Housing      Concrete     Wine Quality Parkinson's  Telecom      Ailerons
k-NN error    0.28 ± 0.09  0.40 ± 0.04  0.73 ± 0.04  0.22 ± 0.01  0.13 ± 0.02  0.37 ± 0.01
k-NN-ρ error  0.22 ± 0.06  0.38 ± 0.03  0.72 ± 0.03  0.20 ± 0.01  0.17 ± 0.02  0.34 ± 0.01
k-NN time     0.08 ± 0.01  0.10 ± 0.01  0.15 ± 0.01  0.14 ± 0.01  0.16 ± 0.02  0.12 ± 0.01
k-NN-ρ time   0.08 ± 0.01  0.11 ± 0.01  0.15 ± 0.01  0.15 ± 0.01  0.15 ± 0.01  0.11 ± 0.01

[Figure 2: Normalized mean square prediction error over 2000 points for varying training sizes. Results are shown for k-NN and kernel regression (KR), with and without the metric ρ. Panels: (a) SARCOS, joint 7, with KR; (b) Ailerons with KR; (c) Telecom with KR; (d) SARCOS, joint 7, with k-NN; (e) Ailerons with k-NN; (f) Telecom with k-NN.]

5 Experiments

5.1 Data description

We present experiments on several real-world regression datasets. The first two datasets describe the dynamics of 7 degrees of freedom of robotic arms, Barrett WAM and SARCOS [9, 10]. The input points are 21-dimensional and correspond to samples of the positions, velocities, and accelerations of the 7 joints. The output points correspond to the torque of each joint. The far joints (1, 5, 7) correspond to different regression problems and are the only results reported. Expectedly, results for the other joints are similarly good.

The other datasets are taken from the UCI repository [11] and from [12]. The concrete strength dataset (Concrete Strength) contains 8-dimensional input points, describing age and ingredients of concrete; the output points are the compressive strength. The wine quality dataset (Wine Quality) contains 11-dimensional input points corresponding to the physicochemistry of wine samples; the output points are the wine quality. The ailerons dataset (Ailerons) is taken from the problem of flying an F16 aircraft. The 5-dimensional input points describe the status of the aeroplane, while the goal is to predict the control action on the ailerons of the aircraft. The housing dataset (Housing) concerns the task of predicting housing values in areas of Boston; the input points are 13-dimensional.
The Parkinson's Telemonitoring dataset (Parkinson's) is used to predict the clinician's Parkinson's disease symptom score using biomedical voice measurements represented by 21-dimensional input points. We also consider a telecommunication problem (Telecom), wherein the 47-dimensional input points and the output points describe the bandwidth usage in a network. For all datasets we normalize each coordinate with its standard deviation from the training data.

5.2 Experimental setup

To learn the metric, we set $h$ by cross-validation on half the training points, and we set $t = h/2$ for all datasets. Note that in practice we might want to also tune $t$ in the range of $h$ for even better performance than reported here. The event $A_{n,i}(X)$ is set to reject the gradient estimate $\Delta_{t,i} f_{n,h}(X)$ at $X$ if no sample contributed to one of the estimates $f_{n,h}(X \pm te_i)$.

In each experiment, we compare kernel regression in the Euclidean metric space (KR) and in the learned metric space (KR-ρ), where we use a box kernel for both. Similar comparisons are made using k-NN and k-NN-ρ. All methods are implemented using a fast neighborhood search procedure, namely the cover-tree of [13], and we also report the average prediction times so as to confirm that, on average, time performance is not affected by using the metric. The parameter $k$ in k-NN/k-NN-ρ, and the bandwidth in KR/KR-ρ, are learned by cross-validation on half of the training points. We try the same range of $k$ (from 1 to $5 \log n$) for both k-NN and k-NN-ρ. We try the same range of bandwidth/space-diameter (a grid of step size 0.02 from 1 down to 0.02) for both KR and KR-ρ; this is done efficiently by starting with a log search to detect a smaller range, followed by a grid search on that smaller range.

Table 1 shows the normalized Mean Square Errors (nMSE), where the MSE on the test set is normalized by the variance of the test output. We use 1000 training points in the robotic datasets; 2000 training points in the Telecom, Parkinson's, Wine Quality, and Ailerons datasets; 730 training points in Concrete Strength; and 300 in Housing. We used 2000 test points in all of the problems, except for Concrete (300 points) and Housing (200 points). Averages over 10 random experiments are reported. For the larger datasets (SARCOS, Ailerons, Telecom) we also report the behavior of the algorithms, with and without the metric, as the training size $n$ increases (Figure 2).
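For concreteness, here is a minimal sketch of the k-NN-ρ predictor used in these comparisons: the learned weights enter only through the squared distance, so any nearest-neighbor routine applies unchanged (the experiments use cover-trees; brute force is shown here, and the weight vector below is a stand-in for learned gradient weights).

```python
import numpy as np

def knn_rho(X_train, Y_train, x, W, k):
    """k-NN regression under the learned metric:
    rho(x, x')^2 = sum_i W_i * (x_i - x'_i)^2."""
    d2 = (((X_train - x) ** 2) * W).sum(axis=1)
    return Y_train[np.argsort(d2)[:k]].mean()

rng = np.random.RandomState(0)
X = rng.rand(500, 3)
Y = 5 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.randn(500)
W = np.array([5.0, 0.5, 0.05])     # stand-in for learned gradient weights
x = np.array([0.3, 0.7, 0.2])
print(knn_rho(X, Y, x, W, k=5), "vs truth", 5 * x[0] + 0.5 * x[1])
```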
5.3 Discussion of results

From the results in Table 1 we see that on virtually all datasets the metric helps improve the performance of the distance-based regressors, even though we did not tune $t$ to the particular problem (recall $t = h/2$ for all experiments). The only exceptions are Wine Quality, where the learned weights are nearly uniform, and Telecom with k-NN. We noticed that the Telecom dataset has a lot of outliers, and this probably explains the discrepancy, besides the fact that we did not attempt to tune $t$. Also notice that the error of k-NN is already low for small sample sizes, making it harder to outperform. However, as shown in Figure 2, for larger training sizes k-NN-ρ gains on k-NN. The rest of the results in Figure 2, where we vary $n$, are self-descriptive: gradient weighting clearly improves the performance of the distance-based regressors.

We also report the average prediction times in Table 1. We see that running the distance-based methods with gradient weights does not affect estimation time. Last, remember that the metric can be learned online at the cost of only $2d$ times the average kernel estimation time reported.

6 Final remarks

Gradient weighting is simple to implement, computationally efficient in batch-mode and online, and most importantly improves the performance of distance-based regressors on real-world applications. In our experiments, most or all coordinates of the data are relevant, yet some coordinates are more important than others. This is sufficient for gradient weighting to yield gains in performance. We believe there is yet room for improvement given the simplicity of our current method.

References
[1] Kilian Q. Weinberger and Gerald Tesauro. Metric learning for kernel regression. Journal of Machine Learning Research - Proceedings Track, 2:612–619, 2007.
[2] Bo Xiao, Xiaokang Yang, Yi Xu, and Hongyuan Zha. Learning distance metric for regression by semidefinite programming with application to human age estimation. In Proceedings of the 17th ACM International Conference on Multimedia, pages 451–460, 2009.
[3] Shai Shalev-Shwartz, Yoram Singer, and Andrew Y. Ng. Online and batch learning of pseudo-metrics. In ICML, pages 743–750. ACM Press, 2004.
[4] Jason V. Davis, Brian Kulis, Prateek Jain, Suvrit Sra, and Inderjit S. Dhillon. Information-theoretic metric learning. In ICML, pages 209–216, 2007.
[5] W. Härdle and T. Gasser. On robust kernel estimation of derivatives of regression functions. Scandinavian Journal of Statistics, pages 233–240, 1985.
[6] J. Lafferty and L. Wasserman. Rodeo: Sparse nonparametric regression in high dimensions. Arxiv preprint math/0506342, 2005.
[7] L. Rosasco, S. Villa, S. Mosci, M. Santoro, and A. Verri. Nonparametric sparsity and regularization. http://arxiv.org/abs/1208.2572, 2012.
[8] L. Gyorfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution Free Theory of Nonparametric Regression. Springer, New York, NY, 2002.
[9] S. Kpotufe. k-NN regression adapts to local intrinsic dimension. NIPS, 2011.
[10] Duy Nguyen-Tuong, Matthias W. Seeger, and Jan Peters. Model learning with local Gaussian process regression. Advanced Robotics, 23(15):2015–2034, 2009.
[11] Duy Nguyen-Tuong and Jan Peters. Incremental online sparsification for model learning in real-time robot control. Neurocomputing, 74(11):1859–1867, 2011.
[12] A. Frank and A. Asuncion. UCI machine learning repository. http://archive.ics.uci.edu/ml. University of California, Irvine, School of Information and Computer Sciences, 2012.
[13] Luis Torgo. Regression datasets. http://www.liaad.up.pt/~ltorgo. University of Porto, Department of Computer Science, 2012.
[14] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbors. ICML, 2006.
Multi-criteria Anomaly Detection using Pareto Depth Analysis

Ko-Jen Hsiao, Kevin S. Xu, Jeff Calder, and Alfred O. Hero III
University of Michigan, Ann Arbor, MI, USA 48109
{coolmark,xukevin,jcalder,hero}@umich.edu

Abstract

We consider the problem of identifying patterns in a data set that exhibit anomalous behavior, often referred to as anomaly detection. In most anomaly detection algorithms, the dissimilarity between data samples is calculated by a single criterion, such as Euclidean distance. However, in many cases there may not exist a single dissimilarity measure that captures all possible anomalous patterns. In such a case, multiple criteria can be defined, and one can test for anomalies by scalarizing the multiple criteria using a linear combination of them. If the importance of the different criteria are not known in advance, the algorithm may need to be executed multiple times with different choices of weights in the linear combination. In this paper, we introduce a novel non-parametric multi-criteria anomaly detection method using Pareto depth analysis (PDA). PDA uses the concept of Pareto optimality to detect anomalies under multiple criteria without having to run an algorithm multiple times with different choices of weights. The proposed PDA approach scales linearly in the number of criteria and is provably better than linear combinations of the criteria.

1 Introduction

Anomaly detection is an important problem that has been studied in a variety of areas and used in diverse applications including intrusion detection, fraud detection, and image processing [1, 2]. Many methods for anomaly detection have been developed using both parametric and non-parametric approaches. Non-parametric approaches typically involve the calculation of dissimilarities between data samples. For complex high-dimensional data, multiple dissimilarity measures corresponding to different criteria may be required to detect certain types of anomalies. For example, consider the problem of detecting anomalous object trajectories in video sequences. Multiple criteria, such as dissimilarity in object speeds or trajectory shapes, can be used to detect a greater range of anomalies than any single criterion.

In order to perform anomaly detection using these multiple criteria, one could first combine the dissimilarities using a linear combination. However, in many applications, the importance of the criteria are not known in advance. It is difficult to determine how much weight to assign to each dissimilarity measure, so one may have to choose multiple weights using, for example, a grid search. Furthermore, when the weights are changed, the anomaly detection algorithm needs to be re-executed using the new weights.

In this paper we propose a novel non-parametric multi-criteria anomaly detection approach using Pareto depth analysis (PDA). PDA uses the concept of Pareto optimality to detect anomalies without having to choose weights for different criteria. Pareto optimality is the typical method for defining optimality when there may be multiple conflicting criteria for comparing items. An item is said to be Pareto-optimal if there does not exist another item that is at least as good in all of the criteria and strictly better in at least one. An item that is Pareto-optimal is optimal in the usual sense under some combination, not necessarily linear, of the criteria. Hence, PDA is able to detect anomalies under multiple combinations of the criteria without explicitly forming these combinations.
[Figure 1: Left: Illustrative example with 40 training samples (blue x's) and 2 test samples (red circle and triangle) in $\mathbb{R}^2$. Center: Dyads for the training samples (black dots) along with the first 20 Pareto fronts (green lines) under two criteria: $|\Delta x|$ and $|\Delta y|$. The Pareto fronts induce a partial ordering on the set of dyads. Dyads associated with the test sample marked by the red circle concentrate around shallow fronts (near the lower left of the figure). Right: Dyads associated with the test sample marked by the red triangle concentrate around deep fronts.]

The PDA approach involves creating dyads corresponding to dissimilarities between pairs of data samples under all of the dissimilarity measures. Sets of Pareto-optimal dyads, called Pareto fronts, are then computed. The first Pareto front (depth one) is the set of non-dominated dyads. The second Pareto front (depth two) is obtained by removing these non-dominated dyads, i.e. peeling off the first front, and recomputing the first Pareto front of those remaining. This process continues until no dyads remain. In this way, each dyad is assigned to a Pareto front at some depth (see Fig. 1 for illustration). Nominal and anomalous samples are located near different Pareto front depths; thus computing the front depths of the dyads corresponding to a test sample can discriminate between nominal and anomalous samples.

The proposed PDA approach scales linearly in the number of criteria, which is a significant improvement compared to selecting multiple weights via a grid search, which scales exponentially in the number of criteria. Under the assumption that the multi-criteria dyads can be modeled as realizations from a smooth $K$-dimensional density, we provide a mathematical analysis of the behavior of the first Pareto front. This analysis shows in a precise sense that PDA can outperform a test that uses a linear combination of the criteria. Furthermore, this theoretical prediction is experimentally validated by comparing PDA to several state-of-the-art anomaly detection algorithms in two experiments involving both synthetic and real data sets.

The rest of this paper is organized as follows. We discuss related work in Section 2. In Section 3 we provide an introduction to Pareto fronts and present a theoretical analysis of the properties of the first Pareto front. Section 4 relates Pareto fronts to the multi-criteria anomaly detection problem, which leads to the PDA anomaly detection algorithm. Finally, we present two experiments in Section 5 to evaluate the performance of PDA.

2 Related work

Several machine learning methods utilizing Pareto optimality have previously been proposed; an overview can be found in [3]. These methods typically formulate machine learning problems as multi-objective optimization problems where finding even the first Pareto front is quite difficult. These methods differ from our use of Pareto optimality because we consider multiple Pareto fronts created from a finite set of items, so we do not need to employ sophisticated methods in order to find these fronts. Hero and Fleury [4] introduced a method for gene ranking using Pareto fronts that is related to our approach. The method ranks genes, in order of interest to a biologist, by creating Pareto fronts of the data samples, i.e. the genes.
In this paper, we consider Pareto fronts of dyads, which correspond to dissimilarities between pairs of data samples rather than the samples themselves, and use the distribution of dyads in Pareto fronts to perform multi-criteria anomaly detection rather than ranking.

Another related area is multi-view learning [5, 6], which involves learning from data represented by multiple sets of features, commonly referred to as "views". In such a case, training in one view helps to improve learning in another view. The problem of view disagreement, where samples take different classes in different views, has recently been investigated [7]. The views are similar to criteria in our problem setting. However, in our setting, different criteria may be orthogonal and could even give contradictory information; hence there may be severe view disagreement. Thus training in one view could actually worsen performance in another view, so the problem we consider differs from multi-view learning. A similar area is that of multiple kernel learning [8], which is typically applied to supervised learning problems, unlike the unsupervised anomaly detection setting we consider.

Finally, many other anomaly detection methods have previously been proposed. Hodge and Austin [1] and Chandola et al. [2] both provide extensive surveys of different anomaly detection methods and applications. Nearest neighbor-based methods are closely related to the proposed PDA approach. Byers and Raftery [9] proposed to use the distance between a sample and its kth-nearest neighbor as the anomaly score for the sample; similarly, Angiulli and Pizzuti [10] and Eskin et al. [11] proposed to use the sum of the distances between a sample and its k nearest neighbors. Breunig et al. [12] used an anomaly score based on the local density of the k nearest neighbors of a sample. Hero [13] and Sricharan and Hero [14] introduced non-parametric adaptive anomaly detection methods using geometric entropy minimization, based on random k-point minimal spanning trees and bipartite k-nearest neighbor (k-NN) graphs, respectively. Zhao and Saligrama [15] proposed an anomaly detection algorithm k-LPE using local p-value estimation (LPE) based on a k-NN graph. These k-NN anomaly detection schemes only depend on the data through the pairs of data points (dyads) that define the edges in the k-NN graphs.

All of the aforementioned methods are designed for single-criterion anomaly detection. In the multi-criteria setting, the single-criterion algorithms must be executed multiple times with different weights, unlike the PDA anomaly detection algorithm that we propose in Section 4.

3 Pareto depth analysis

The PDA method proposed in this paper utilizes the notion of Pareto optimality, which has been studied in many application areas in economics, computer science, and the social sciences, among others [16]. We introduce Pareto optimality and define the notion of a Pareto front.

Consider the following problem: given $n$ items, denoted by the set $S$, and $K$ criteria for evaluating each item, denoted by functions $f_1, \ldots, f_K$, select $x \in S$ that minimizes $[f_1(x), \ldots, f_K(x)]$. In most settings, it is not possible to identify a single item $x$ that simultaneously minimizes $f_i(x)$ for all $i \in \{1, \ldots, K\}$. A minimizer can be found by combining the $K$ criteria using a linear combination of the $f_i$'s and finding the minimum of the combination.
Different choices of (nonnegative) weights in the linear combination could result in different minimizers; a set of items that are minimizers under some linear combination can then be created by using a grid search over the weights, for example.

A more powerful approach involves finding the set of Pareto-optimal items. An item $x$ is said to strictly dominate another item $x'$ if $x$ is no greater than $x'$ in each criterion and $x$ is less than $x'$ in at least one criterion. This relation can be written as $x \succ x'$ if $f_i(x) \le f_i(x')$ for each $i$ and $f_i(x) < f_i(x')$ for some $i$. The set of Pareto-optimal items, called the Pareto front, is the set of items in $S$ that are not strictly dominated by another item in $S$. It contains all of the minimizers that are found using linear combinations, but also includes other items that cannot be found by linear combinations. Denote the Pareto front by $F_1$, which we call the first Pareto front. The second Pareto front can be constructed by finding items that are not strictly dominated by any of the remaining items, which are members of the set $S \setminus F_1$. More generally, define the $i$th Pareto front by
$$F_i = \text{Pareto front of the set } S \setminus \left( \bigcup_{j=1}^{i-1} F_j \right).$$
For convenience, we say that a Pareto front $F_i$ is deeper than $F_j$ if $i > j$.
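The successive fronts can be computed by repeatedly peeling off non-dominated points. A minimal numpy sketch is shown below (quadratic-time, adequate for illustration; faster non-dominated sorting algorithms exist):

```python
import numpy as np

def strictly_dominates(a, b):
    """a strictly dominates b: a <= b in every criterion, < in at least one."""
    return np.all(a <= b) and np.any(a < b)

def pareto_fronts(points):
    """Peel successive Pareto fronts (minimization) off the rows of
    `points`; returns a list of index lists, fronts[0] being F_1."""
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(strictly_dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

rng = np.random.RandomState(0)
Y = rng.rand(100, 2)
fronts = pareto_fronts(Y)
print([len(f) for f in fronts[:5]])   # shallow fronts are small
```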
Pareto points in F \ L are a result of non-convexities in the Pareto front. We study two kinds of non-convexities: those induced by the geometry of the domain of Y_1, ..., Y_n, and those induced by randomness. We first consider the geometry of the domain. Let Ω ⊂ ℝ^d be bounded and open with a smooth boundary ∂Ω and suppose the density f vanishes outside of Ω. For a point z ∈ ∂Ω we denote by ν(z) = (ν_1(z), ..., ν_d(z)) the unit inward normal to ∂Ω. For T ⊂ ∂Ω, define T_h ⊂ Ω by T_h = {z + tν(z) | z ∈ T, 0 < t ≤ h}. Given h > 0 it is not hard to see that all Pareto-optimal points will almost surely lie in (∂Ω)_h for large enough n, provided the density f is strictly positive on (∂Ω)_h. Hence it is enough to study the asymptotics of E|F_{T_h}| for T ⊂ ∂Ω and h > 0.

Theorem 1. Let f ∈ C^1(Ω̄) with inf_Ω f > 0. Let T ⊂ ∂Ω be open and connected such that

    \inf_{z \in T} \min(\nu_1(z), \ldots, \nu_d(z)) \ge \delta > 0,

and {y ∈ Ω̄ : y ≤ x coordinatewise} = {x} for x ∈ T. Then for h > 0 sufficiently small, we have

    E|F_{T_h}| = \gamma n^{\frac{d-1}{d}} + O\big(n^{\frac{d-2}{d-1}}\big) \quad \text{as } n \to \infty,

where

    \gamma = d^{-1} (d!)^{1/d} \, \Gamma(d^{-1}) \int_T f(z)^{\frac{d-1}{d}} \big(\nu_1(z) \cdots \nu_d(z)\big)^{1/d} \, dz.

The proof of Theorem 1 is postponed to Section 1 of the supplementary material. Theorem 1 shows asymptotically how many Pareto points are contributed on average by the segment T ⊂ ∂Ω. The number of points contributed depends only on the geometry of ∂Ω through the direction of its normal vector ν and is otherwise independent of the convexity of ∂Ω. Hence, by using Pareto methods, we will identify significantly more Pareto-optimal points than linear scalarization when the geometry of ∂Ω includes non-convex regions. For example, if T ⊂ ∂Ω is non-convex (see left panel of Figure 2) and satisfies the hypotheses of Theorem 1, then for large enough n, all Pareto points in a neighborhood of T will be unattainable by scalarization. Quantitatively, if f ≥ C on T, then

    E|F \setminus L| \ge \gamma n^{\frac{d-1}{d}} + O\big(n^{\frac{d-2}{d-1}}\big) \quad \text{as } n \to \infty,

where γ ≥ d^{-1}(d!)^{1/d} Γ(d^{-1}) |T| δ C^{(d-1)/d} and |T| is the (d−1)-dimensional Hausdorff measure of T. It has recently come to our attention that Theorem 1 appears in a more general form in an unpublished manuscript of Baryshnikov and Yukich [19].

We now study non-convexities in the Pareto front which occur due to inherent randomness in the samples. We show that, even in the case where Ω is convex, there are still numerous small-scale non-convexities in the Pareto front that can only be detected by Pareto methods. We illustrate this in the case of the Pareto box problem for d = 2.

[Figure 2: Left: Non-convexities in the Pareto front induced by the geometry of the domain Ω (Theorem 1). Right: Non-convexities due to randomness in the samples (Theorem 2). In each case, the larger points are Pareto-optimal, and the large black points cannot be obtained by scalarization.]

Theorem 2. Let Y_1, ..., Y_n be independent and uniformly distributed on [0, 1]^2. Then

    \tfrac{1}{2} \ln n + O(1) \le E|L| \le \tfrac{5}{6} \ln n + O(1), \quad \text{as } n \to \infty.

The proof of Theorem 2 is also postponed to Section 1 of the supplementary material. A proof that E|F| = ln n + O(1) as n → ∞ can be found in [17]. Hence Theorem 2 shows that, asymptotically and in expectation, only between 1/2 and 5/6 of the Pareto-optimal points can be obtained by linear scalarization in the Pareto box problem.
Experimentally, we have observed that the true fraction of points is close to 0.7. This means that at least 1/6 (and likely more) of the Pareto points can only be obtained via Pareto methods even when Ω is convex. Figure 2 gives an example of the sets F and L from the two theorems.

4 Multi-criteria anomaly detection

Assume that a training set X_N = {X_1, ..., X_N} of nominal data samples is available. Given a test sample X, the objective of anomaly detection is to declare X to be an anomaly if X is significantly different from samples in X_N. Suppose that K > 1 different evaluation criteria are given. Each criterion is associated with a measure for computing dissimilarities. Denote the dissimilarity between X_i and X_j computed using the measure corresponding to the lth criterion by d_l(i, j). We define a dyad by

    D_{ij} = [d_1(i, j), \ldots, d_K(i, j)]^\top \in \mathbb{R}^K_+, \qquad i \in \{1, \ldots, N\}, \; j \in \{1, \ldots, N\} \setminus i.

Each dyad D_{ij} corresponds to a connection between samples X_i and X_j. Therefore, there are in total C(N, 2) = N(N−1)/2 different dyads. For convenience, denote the set of all dyads by D and the space of all dyads, ℝ^K_+, by 𝒟. By the definition of strict dominance in Section 3, a dyad D_{ij} strictly dominates another dyad D_{i′j′} if d_l(i, j) ≤ d_l(i′, j′) for all l ∈ {1, ..., K} and d_l(i, j) < d_l(i′, j′) for some l. The first Pareto front F_1 corresponds to the set of dyads from D that are not strictly dominated by any other dyads from D. The second Pareto front F_2 corresponds to the set of dyads from D \ F_1 that are not strictly dominated by any other dyads from D \ F_1, and so on, as defined in Section 3. Recall that we refer to F_i as a deeper front than F_j if i > j.

4.1 Pareto fronts of dyads

For each sample X_n, there are N − 1 dyads corresponding to its connections with the other N − 1 samples. Define the set of N − 1 dyads associated with X_n by D_n. If most dyads in D_n are located at shallow Pareto fronts, then the dissimilarities between X_n and the other N − 1 samples are small under some combination of the criteria. Thus, X_n is likely to be a nominal sample. This is the basic idea of the proposed multi-criteria anomaly detection method using PDA.

We construct Pareto fronts F_1, ..., F_M of the dyads from the training set, where the total number of fronts M is the required number of fronts such that each dyad is a member of a front. When a test sample X is obtained, we create new dyads corresponding to connections between X and training samples, as illustrated in Figure 1. Similar to many other anomaly detection methods, we connect each test sample to its k nearest neighbors. k could be different for each criterion, so we denote k_i as the choice of k for criterion i. We create s = Σ_{i=1}^K k_i new dyads, which we denote by the set D^new = {D^new_1, D^new_2, ..., D^new_s}, corresponding to the connections between X and the union of the k_i nearest neighbors in each criterion i.

Algorithm 1 PDA anomaly detection algorithm.
Training phase:
1: for l = 1 to K do
2:   Calculate pairwise dissimilarities d_l(i, j) between all training samples X_i and X_j
3: Create dyads D_{ij} = [d_1(i, j), ..., d_K(i, j)] for all pairs of training samples
4: Construct Pareto fronts on the set of all dyads until each dyad is in a front
Testing phase:
1: nb ← [ ] {empty list}
2: for l = 1 to K do
3:   Calculate dissimilarities between test sample X and all training samples in criterion l
4:   nb_l ← k_l nearest neighbors of X
5:   nb ← [nb, nb_l] {append neighbors to list}
6: Create s new dyads D^new_i between X and training samples in nb
7: for i = 1 to s do
8:   Calculate depth e_i of D^new_i
9: Declare X an anomaly if v(X) = (1/s) Σ_{i=1}^s e_i > σ
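The following sketch mirrors the training phase of Algorithm 1. It reuses the `pareto_fronts` helper from the sketch in Section 3.1, and the squared per-dimension differences used in the example are only one possible choice of dissimilarity functions (any nonnegative d_l would do).

```python
import numpy as np

def build_dyads(X, dissimilarities):
    """Stack the K pairwise dissimilarities into dyads D_ij in R^K_+."""
    N = len(X)
    dyads, pairs = [], []
    for i in range(N):
        for j in range(i + 1, N):
            dyads.append([d(X[i], X[j]) for d in dissimilarities])
            pairs.append((i, j))
    return np.asarray(dyads), pairs

def train_pda(X, dissimilarities):
    """Training phase of Algorithm 1: construct Pareto fronts on all dyads."""
    dyads, pairs = build_dyads(X, dissimilarities)
    fronts = pareto_fronts(dyads)        # peeling routine from Section 3.1
    depth_of = np.empty(len(dyads), dtype=int)
    for depth, front in enumerate(fronts, start=1):
        depth_of[front] = depth          # 1-based index of the front holding each dyad
    return dyads, pairs, fronts, depth_of

# Example with the simulated setup of Section 5.1: squared per-dimension
# differences as the K = 4 criteria (an illustrative choice).
rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 4))
crits = [lambda x, y, l=l: (x[l] - y[l]) ** 2 for l in range(4)]
dyads, pairs, fronts, depth_of = train_pda(X, crits)
print(len(fronts), "fronts over", len(dyads), "dyads")
```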
In other words, we create a dyad between X and X_j if X_j is among the k_i nearest neighbors of X in any criterion i. (If a training sample is one of the k_i nearest neighbors in multiple criteria, then multiple copies of the dyad corresponding to the connection between the test sample and the training sample are created.) We say that D^new_i is below a front F_l if D^new_i ≺ D_l for some D_l ∈ F_l, i.e. D^new_i strictly dominates at least a single dyad in F_l. Define the depth of D^new_i by

    e_i = \min\{l \mid D^{\text{new}}_i \text{ is below } F_l\}.

Therefore if e_i is large, then D^new_i will be near deep fronts, and the distance between X and the corresponding training sample is large under all combinations of the K criteria. If e_i is small, then D^new_i will be near shallow fronts, so the distance between X and the corresponding training sample is small under some combination of the K criteria.

4.2 Anomaly detection using depths of dyads

In k-NN based anomaly detection algorithms such as those mentioned in Section 2, the anomaly score is a function of the k nearest neighbors to a test sample. With multiple criteria, one could define an anomaly score by scalarization. From the probabilistic properties of Pareto fronts discussed in Section 3.1, we know that Pareto methods identify more Pareto-optimal points than linear scalarization methods and significantly more Pareto-optimal points than a single weight for scalarization. This motivates us to develop a multi-criteria anomaly score using Pareto fronts. (Theorems 1 and 2 require i.i.d. samples, but dyads are not independent. However, there are O(N^2) dyads, and each dyad is only dependent on O(N) other dyads. This suggests that the theorems should also hold for the non-i.i.d. dyads, and this is supported by experimental results presented in Section 2 of the supplementary material.)

We start with the observation from Figure 1 that dyads corresponding to a nominal test sample are typically located near shallower fronts than dyads corresponding to an anomalous test sample. Each test sample is associated with s new dyads, where the ith dyad D^new_i has depth e_i. For each test sample X, we define the anomaly score v(X) to be the mean of the e_i's, which corresponds to the average depth of the s dyads associated with X. Thus the anomaly score can be easily computed and compared to the decision threshold σ using the test

    v(X) = \frac{1}{s} \sum_{i=1}^{s} e_i \; \underset{H_0}{\overset{H_1}{\gtrless}} \; \sigma.

Pseudocode for the PDA anomaly detector is shown in Algorithm 1. In Section 3 of the supplementary material we provide details of the implementation as well as an analysis of the time complexity and a heuristic for choosing the k_i's that performs well in practice. Both the training time and the time required to test a new sample using PDA are linear in the number of criteria K.
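Continuing the sketch above, the testing phase computes the depths e_i and the score v(X); the linear scan over fronts is for clarity only (see Section 3 of the supplementary material for implementation details).

```python
import numpy as np

def anomaly_score(x, X, dissimilarities, dyads, fronts, k=None):
    """Testing phase of Algorithm 1: mean Pareto depth of the s new dyads."""
    K = len(dissimilarities)
    k = k if k is not None else [5] * K
    # dissimilarities between the test sample and every training sample
    d_new = np.array([[d(x, X[j]) for d in dissimilarities] for j in range(len(X))])
    neighbors = []
    for l in range(K):                   # union (with copies) of per-criterion k-NN
        neighbors.extend(np.argsort(d_new[:, l])[: k[l]].tolist())
    depths = []
    for j in neighbors:
        e = len(fronts) + 1              # deeper than every training front
        for m, front in enumerate(fronts, start=1):
            if any(np.all(d_new[j] <= dyads[f]) and np.any(d_new[j] < dyads[f])
                   for f in front):      # e = min{l : new dyad is below F_l}
                e = m
                break
        depths.append(e)
    return float(np.mean(depths))        # declare an anomaly if this exceeds sigma

x_test = rng.uniform(size=4)
x_test[0] += 1.0                         # anomalous in the first dimension
print(anomaly_score(x_test, X, crits, dyads, fronts, k=[3, 3, 3, 3]))
```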
To handle multiple criteria, other anomaly detection methods, such as the ones mentioned in Section 2, need to be re-executed multiple times using different (non-negative) linear combinations of the K criteria. If a grid search is used for selection of the weights in the linear combination, then the required computation time would be exponential in K. Such an approach presents a computational problem unless K is very small. Since PDA scales linearly with K, it does not encounter this problem.

5 Experiments

We compare the PDA method with four other nearest neighbor-based single-criterion anomaly detection algorithms mentioned in Section 2. For these methods, we use linear combinations of the criteria with different weights selected by grid search to compare performance with PDA.

5.1 Simulated data with four criteria

First we present an experiment on a simulated data set. The nominal distribution is given by the uniform distribution on the hypercube [0, 1]^4. The anomalous samples are located just outside of this hypercube. There are four classes of anomalous distributions. Each class differs from the nominal distribution in one of the four dimensions; the distribution in the anomalous dimension is uniform on [1, 1.1]. We draw 300 training samples from the nominal distribution followed by 100 test samples from a mixture of the nominal and anomalous distributions with a 0.05 probability of selecting any particular anomalous distribution. The four criteria for this experiment correspond to the squared differences in each dimension. If the criteria are combined using linear combinations, the combined dissimilarity measure reduces to weighted squared Euclidean distance.

The different methods are evaluated using the receiver operating characteristic (ROC) curve and the area under the curve (AUC). The mean AUCs (with standard errors) over 100 simulation runs are shown in Table 1(a). A grid of six points between 0 and 1 in each criterion, corresponding to 6^4 = 1296 different sets of weights, is used to select linear combinations for the single-criterion methods. Note that PDA is the best performer, outperforming even the best linear combination.

Table 1: AUC comparison of different methods for both experiments. Best AUC is shown in bold. PDA does not require selecting weights so it has a single AUC. The median and best AUCs (over all choices of weights selected by grid search) are shown for the other four methods. PDA outperforms all of the other methods, even for the best weights, which are not known in advance.

(a) Four-criteria simulation (± standard error)

    Method     AUC              AUC (Median)     AUC (Best)
    PDA        0.948 ± 0.002
    k-NN                        0.848 ± 0.004    0.919 ± 0.003
    k-NN sum                    0.854 ± 0.003    0.916 ± 0.003
    k-LPE                       0.847 ± 0.004    0.919 ± 0.003
    LOF                         0.845 ± 0.003    0.932 ± 0.003

(b) Pedestrian trajectories

    Method     AUC     AUC (Median)    AUC (Best)
    PDA        0.915
    k-NN               0.883           0.906
    k-NN sum           0.894           0.911
    k-LPE              0.893           0.908
    LOF                0.839           0.863

5.2 Pedestrian trajectories

We now present an experiment on a real data set that contains thousands of pedestrians' trajectories in an open area monitored by a video camera [20]. Each trajectory is approximated by a cubic spline curve with seven control points [21]. We represent a trajectory with l time samples by

    T = \begin{bmatrix} x_1 & x_2 & \cdots & x_l \\ y_1 & y_2 & \cdots & y_l \end{bmatrix},

where [x_t, y_t]^\top denotes a pedestrian's position at time step t.

[Figure 3: Left: ROC curves (false positive rate vs. true positive rate) for PDA and the attainable region for k-LPE over 100 choices of weights, showing PDA, k-LPE with the best AUC weight, k-LPE with the worst AUC weight, and the attainable region of k-LPE. PDA outperforms k-LPE even under the best choice of weights. Right: A subset of the dyads for the training samples (walking speed dissimilarity vs. shape dissimilarity) along with the first 100 Pareto fronts. The fronts are highly non-convex, partially explaining the superior performance of PDA.]

We use two criteria for computing the dissimilarity between trajectories.
The first criterion is to compute the dissimilarity in walking speed. We compute the instantaneous speed at all time steps along each trajectory by finite differencing, i.e. the speed of trajectory T at time step t is given by

    \sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2}.

A histogram of speeds for each trajectory is obtained in this manner. We take the dissimilarity between two trajectories to be the squared Euclidean distance between their speed histograms. The second criterion is to compute the dissimilarity in shape. For each trajectory, we select 100 points, uniformly positioned along the trajectory. The dissimilarity between two trajectories T and T′ is then given by the sum of squared Euclidean distances between the positions of T and T′ over all 100 points.

The training sample for this experiment consists of 500 trajectories, and the test sample consists of 200 trajectories. Table 1(b) shows the performance of PDA as compared to the other algorithms using 100 uniformly spaced weights for linear combinations. Notice that PDA has higher AUC than the other methods under all choices of weights for the two criteria. For a more detailed comparison, the ROC curve for PDA and the attainable region for k-LPE (the region between the ROC curves corresponding to weights resulting in the best and worst AUCs) is shown in Figure 3 along with the first 100 Pareto fronts for PDA. k-LPE performs slightly better at low false positive rate when the best weights are used, but PDA performs better in all other situations, resulting in higher AUC. Additional discussion on this experiment can be found in Section 4 of the supplementary material.

6 Conclusion

In this paper we proposed a new multi-criteria anomaly detection method. The proposed method uses Pareto depth analysis to compute the anomaly score of a test sample by examining the Pareto front depths of dyads corresponding to the test sample. Dyads corresponding to an anomalous sample tended to be located at deeper fronts compared to dyads corresponding to a nominal sample. Instead of choosing a specific weighting or performing a grid search on the weights for different dissimilarity measures, the proposed method can efficiently detect anomalies in a manner that scales linearly in the number of criteria. We also provided a theorem establishing that the Pareto approach is asymptotically better than using linear combinations of criteria. Numerical studies validated our theoretical predictions of PDA's performance advantages on simulated and real data.

Acknowledgments

We thank Zhaoshi Meng for his assistance in labeling the pedestrian trajectories. We also thank Daniel DeWoskin for suggesting a fast algorithm for computing Pareto fronts in two criteria. This work was supported in part by ARO grant W911NF-09-1-0310.

References

[1] V. J. Hodge and J. Austin (2004). A survey of outlier detection methodologies. Artificial Intelligence Review 22(2):85–126.
[2] V. Chandola, A. Banerjee, and V. Kumar (2009). Anomaly detection: A survey. ACM Computing Surveys 41(3):1–58.
[3] Y. Jin and B. Sendhoff (2008). Pareto-based multiobjective machine learning: An overview and case studies. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 38(3):397–415.
[4] A. O. Hero III and G. Fleury (2004). Pareto-optimal methods for gene ranking. The Journal of VLSI Signal Processing 38(3):259–275.
[5] A. Blum and T. Mitchell (1998). Combining labeled and unlabeled data with co-training.
In Proceedings of the 11th Annual Conference on Computational Learning Theory.
[6] V. Sindhwani, P. Niyogi, and M. Belkin (2005). A co-regularization approach to semi-supervised learning with multiple views. In Proceedings of the Workshop on Learning with Multiple Views, 22nd International Conference on Machine Learning.
[7] C. M. Christoudias, R. Urtasun, and T. Darrell (2008). Multi-view learning in the presence of view disagreement. In Proceedings of the Conference on Uncertainty in Artificial Intelligence.
[8] M. Gönen and E. Alpaydın (2011). Multiple kernel learning algorithms. Journal of Machine Learning Research 12(Jul):2211–2268.
[9] S. Byers and A. E. Raftery (1998). Nearest-neighbor clutter removal for estimating features in spatial point processes. Journal of the American Statistical Association 93(442):577–584.
[10] F. Angiulli and C. Pizzuti (2002). Fast outlier detection in high dimensional spaces. In Proceedings of the 6th European Conference on Principles of Data Mining and Knowledge Discovery.
[11] E. Eskin, A. Arnold, M. Prerau, L. Portnoy, and S. Stolfo (2002). A geometric framework for unsupervised anomaly detection: Detecting intrusions in unlabeled data. In Applications of Data Mining in Computer Security. Kluwer: Norwell, MA.
[12] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander (2000). LOF: Identifying density-based local outliers. In Proceedings of the ACM SIGMOD International Conference on Management of Data.
[13] A. O. Hero III (2006). Geometric entropy minimization (GEM) for anomaly detection and localization. In Advances in Neural Information Processing Systems 19.
[14] K. Sricharan and A. O. Hero III (2011). Efficient anomaly detection using bipartite k-NN graphs. In Advances in Neural Information Processing Systems 24.
[15] M. Zhao and V. Saligrama (2009). Anomaly detection with score functions based on nearest neighbor graphs. In Advances in Neural Information Processing Systems 22.
[16] M. Ehrgott (2000). Multicriteria optimization. Lecture Notes in Economics and Mathematical Systems 491. Springer-Verlag.
[17] O. Barndorff-Nielsen and M. Sobel (1966). On the distribution of the number of admissible points in a vector random sample. Theory of Probability and its Applications 11(2):249–269.
[18] Z.-D. Bai, L. Devroye, H.-K. Hwang, and T.-H. Tsai (2005). Maxima in hypercubes. Random Structures Algorithms 27(3):290–309.
[19] Y. Baryshnikov and J. E. Yukich (2005). Maximal points and Gaussian fields. Unpublished. URL http://www.math.illinois.edu/~ymb/ps/by4.pdf.
[20] B. Majecka (2009). Statistical models of pedestrian behaviour in the Forum. Master's thesis, University of Edinburgh.
[21] R. R. Sillito and R. B. Fisher (2008). Semi-supervised learning for anomalous trajectory detection. In Proceedings of the 19th British Machine Vision Conference.
A Neural Autoregressive Topic Model

Stanislas Lauly
Département d'informatique
Université de Sherbrooke
[email protected]

Hugo Larochelle
Département d'informatique
Université de Sherbrooke
[email protected]

Abstract

We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.

1 Introduction

In order to leverage the large amount of available unlabeled text, a lot of research has been devoted to developing good probabilistic models of documents. Such models are usually embedded with latent variables or topics, whose role is to capture salient statistical patterns in the co-occurrence of words within documents. The most popular model is latent Dirichlet allocation (LDA) [1], a directed graphical model in which each word is a sample from a mixture of global word distributions (shared across documents) and where the mixture weights vary between documents. In this context, the word multinomial distributions (mixture components) correspond to the topics and a document is represented as the parameters (mixture weights) of its associated distribution over topics. Once trained, these topics have been found to extract meaningful groups of semantically related words and the (approximately) inferred topic mixture weights have been shown to form a useful representation for documents.

More recently, Salakhutdinov and Hinton [2] proposed an alternative undirected model, the Replicated Softmax which, instead of representing documents as distributions over topics, relies on a binary distributed representation of the documents. The latent variables can then be understood as topic features: they do not correspond to normalized distributions over words, but to unnormalized factors over words. A combination of topic features generates a word distribution by multiplying these factors and renormalizing. They show that the Replicated Softmax allows for very efficient inference of a document's topic feature representation and outperforms LDA both as a generative model of documents and as a method for representing documents in an information retrieval setting.

While inference of a document representation is efficient in the Replicated Softmax, one of its disadvantages is that the complexity of its learning update scales linearly with the vocabulary size V, i.e. the number of different words that are observed in a document. The factor responsible for this
complexity is the conditional distribution of the words given the latent variables, which corresponds to a V-way multinomial logistic regression. In a realistic application scenario, V will usually be in the 100 000's.

[Figure 1: (Left) Illustration of NADE. Colored lines identify the connections that share parameters and v̂_i is a shorthand for the autoregressive conditional p(v_i | v_{<i}). The observations v_i are binary. (Center) Replicated Softmax model. Each multinomial observation v_i is a word. Connections between each multinomial observation v_i and hidden units are shared. (Right) DocNADE, our proposed model. Connections between each multinomial observation v_i and hidden units are also shared, and each conditional p(v_i | v_{<i}) is decomposed into a tree of binary logistic regressions.]

The Replicated Softmax is in fact a generalization of the restricted Boltzmann machine (RBM). The RBM is an undirected graphical model with binary observed and latent variables organized in a bipartite graph. The Replicated Softmax instead has multinomial (softmax) observed variables and shares (replicates) across all observed variables the parameters between an observed variable and all latent variables. A good alternative to the RBM is the neural autoregressive distribution estimator (NADE) [3]. It is similar to an autoencoder neural network, in that it takes as input a vector of observations and outputs a vector of the same size. However, the connectivity of NADE has been specifically chosen so as to make it a proper generative model for vectors of binary observations. More specifically, NADE outputs the conditional probabilities of each observation given the other observations to its left in the vector. Taking the product of all these conditional probabilities thus yields a proper joint probability over the whole input vector of observations. One advantage of NADE is that computing the parameter gradient of the data negative log-likelihood requires no approximation (unlike in an RBM). Also, unlike in the RBM, NADE does not require a symmetric connectivity, i.e. the weights going in and out of its hidden units can be different.

In this work, we describe DocNADE, a neural network topic model that is similarly inspired by the Replicated Softmax. From the Replicated Softmax, we derive an efficient approach for computing the hidden units of the network. As for the computation of the distribution of words given the hidden units, our feed-forward neural network approach leaves us free to use other conditionals than the V-way multinomial logistic regression implied by the Replicated Softmax. In particular, we instead opt for a hierarchy of binary logistic regressions, organized in a binary tree where each leaf corresponds to a word of the vocabulary. This allows us to obtain a complexity of computing the probability of an observed word scaling sublinearly with V. Our experiments show that DocNADE is competitive both as a generative model of documents and as a learning algorithm for extracting meaningful representations of documents.

2 Neural Autoregressive Distribution Estimation

We start with the description of the original NADE. NADE is a generative model over vectors of binary observations v ∈ {0, 1}^D. Through the probability chain rule, it decomposes p(v) = Π_{i=1}^D p(v_i | v_{<i}) and computes all p(v_i | v_{<i}) using the feed-forward architecture

    h_i(\mathbf{v}_{<i}) = \mathrm{sigm}(c + W_{:,<i} \mathbf{v}_{<i}), \qquad p(v_i = 1 \mid \mathbf{v}_{<i}) = \mathrm{sigm}(b_i + V_{i,:} h_i(\mathbf{v}_{<i})) \qquad (1)

for i ∈ {1, ..., D}, where sigm(x) = 1/(1 + exp(−x)), W ∈ ℝ^{H×D} and V ∈ ℝ^{D×H} are connection parameter matrices, b ∈ ℝ^D and c ∈ ℝ^H are bias parameter vectors, v_{<i} is the subvector [v_1, ..., v_{i−1}]^⊤ and W_{:,<i} is a matrix made of the i−1 first columns of W. This architecture corresponds to a neural network with several parallel hidden layers h_i(v_{<i}) and tied weighted connections between v_i and each hidden unit h_{ij}(v_{<i}). Figure 1 gives an illustration. Though each p(v_i = 1 | v_{<i}) requires the computation of its own hidden layer h_i(v_{<i}), the tied weights allow computing them all in O(DH), where H is the size of each hidden layer h_i(v_{<i}). Equation 1 provides all the necessary conditionals to compute p(v) = Π_i p(v_i | v_{<i}). The parameters {b, c, W, V} can then be learned by minimizing the negative log-likelihood with stochastic gradient descent.
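A minimal numpy sketch of Equation 1's O(DH) computation follows; the parameter values are random placeholders and training is omitted. The running activation accumulates c + Σ_{k<i} W_{:,k} v_k so each hidden layer costs O(H) instead of O(iH).

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def nade_log_likelihood(v, W, V, b, c):
    """log p(v) for a binary vector v under Equation (1), in O(DH).

    W: (H, D), V: (D, H), b: (D,), c: (H,).
    """
    D = len(v)
    a = c.copy()                          # c + sum_{k<i} W[:, k] v_k
    log_p = 0.0
    for i in range(D):
        h_i = sigm(a)                     # hidden layer h_i(v_{<i})
        p_i = sigm(b[i] + V[i] @ h_i)     # p(v_i = 1 | v_{<i})
        log_p += np.log(p_i if v[i] == 1 else 1.0 - p_i)
        a += W[:, i] * v[i]               # extend the observed prefix
    return log_p

rng = np.random.default_rng(0)
H, D = 4, 10
W, V = rng.normal(size=(H, D)), rng.normal(size=(D, H))
b, c = rng.normal(size=D), rng.normal(size=H)
v = rng.integers(0, 2, size=D)
print(nade_log_likelihood(v, W, V, b, c))
```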
The connectivity behind NADE (i.e. the presence of a separate hidden layer h_i(v_{<i}) for each p(v_i = 1 | v_{<i}), with weight sharing) was directly inspired by the RBM. An RBM is an undirected graphical model in which latent binary variables h interact with the observations v through an energy function E(v, h), converted into a distribution over v as follows:

    E(\mathbf{v}, \mathbf{h}) = -\mathbf{h}^\top W \mathbf{v} - \mathbf{b}^\top \mathbf{v} - \mathbf{c}^\top \mathbf{h}, \qquad p(\mathbf{v}) = \sum_{\mathbf{h}} \exp(-E(\mathbf{v}, \mathbf{h})) / Z, \qquad (2)

where Z is known as the partition function and ensures that p(v) is a valid distribution and sums to 1. Computing the conditional p(v_i = 1 | v_{<i}) in an RBM is generally intractable but can be approximated through mean-field inference. Mean-field inference approximates the full conditional p(v_i, v_{>i}, h | v_{<i}) as a product of independent Bernoulli distributions q(v_k = 1 | v_{<i}) = τ_k(i) and q(h_j = 1 | v_{<i}) = μ_j(i). To find the values of the variational parameters τ_k(i), μ_j(i) that minimize the KL-divergence with p(v_i, v_{>i}, h | v_{<i}), the following message passing equations are applied until convergence, for k ∈ {i, ..., D} and j ∈ {1, ..., H} (see Larochelle and Murray [3] for the derivation):

    \mu_j(i) \leftarrow \mathrm{sigm}\Big(c_j + \sum_{k \ge i} W_{jk} \tau_k(i) + \sum_{k < i} W_{jk} v_k\Big), \qquad \tau_k(i) \leftarrow \mathrm{sigm}\Big(b_k + \sum_j W_{jk} \mu_j(i)\Big). \qquad (3)

The variational parameter q(v_i = 1 | v_{<i}) = τ_i(i) can then be used to approximate p(v_i = 1 | v_{<i}). NADE is derived from the application of each message passing equation only once (with τ_k(i) initialized to 0), but compensates by untying the weights between each equation and training the truncation directly to fit the available data. The end result is thus the feed-forward architecture of Equation 1. The relationship between the RBM and NADE is important, as it specifies an effective way of sharing the hidden layer parameters across the conditionals p(v_i = 1 | v_{<i}). In fact, other choices not inspired by the RBM have proven less successful (see Bengio and Bengio [4] and Larochelle and Murray [3] for a discussion).

3 Replicated Softmax

Documents can't be easily modeled by the RBM for two reasons: words are not binary but multinomial observations, and documents may contain a varying number of words. An observation vector v is now a sequence of word indices v_i taking values in {1, ..., V}, while the size D of v can vary. To address these issues, Salakhutdinov and Hinton [2] proposed the Replicated Softmax model, which uses the following energy function:

    E(\mathbf{v}, \mathbf{h}) = -D\, \mathbf{c}^\top \mathbf{h} + \sum_{i=1}^{D} \big(-\mathbf{h}^\top W_{:,v_i} - b_{v_i}\big) = -D\, \mathbf{c}^\top \mathbf{h} - \mathbf{h}^\top W n(\mathbf{v}) - \mathbf{b}^\top n(\mathbf{v}), \qquad (4)

where W_{:,v_i} is the v_i-th column vector of matrix W and n(v) is a vector of size V containing the word count of each word in the vocabulary. Notice that this energy shares its connection parameters across different positions i in v. Figure 1 provides an illustration. Notice also that the larger v is,
Notice also that the larger v is, 3 the more important the terms summed over i in the energy will be. Hence, the hidden bias term c> h is multiplied by D to maintain a certain balance between all terms. QD Q In this model, the conditional across layers p(v|h) = i=1 p(vi |h) and p(h|v) = j p(hj |v) factorize and are such that: X exp(bw + h> W:,w ) p(hj = 1|v) = sigm(Dcj + Wjvi ) p(vi = w|h) = P (5) > 0 0 w0 exp(bw + h W:,w ) i The normalized exponential in p(vi = w|h) is known as the softmax nonlinearity. We see that, given a value of the topic features h, the distribution each word vi in the document can be understood as the > normalized product of multinomial topic factors exp(hj Wj,: ) and exp(b), as opposed to a mixture of multinomial topic distributions. The gradient of the negative log-likelihood of a single training document vt with respect to any parameter ? has the simple form     ? ? log p(vt ) ? ? t = EEh|vt E(v , h) ? EEv,h E(v, h) . (6) ?? ?? ?? Computing the last expectation exactly is too expensive, hence the contrastive divergence [5] approximation is used: the expectation over v is replaced by a point estimate at a so-called ?negative? sample, obtained from K steps of blocked Gibbs sampling based on Equation 5 initialized at vt . Once a negative sample is obtained Equation 6 can be estimated and used with stochastic gradient descent training. Unfortunately, computing p(vi = w|h) to sample the words during Gibbs sampling is linear in V and H, where V tends to be quite large. Fortunately, given h, it needs to be computed only once before sampling all D words in v. However, when h is re-sampled, p(vi = w|h) must be recomputed. Hence, the computation of p(vi = w|h) is usually the most expensive component of the learning update: sampling the hidden layer given v is only in O(DH), and repeatably sampling from the softmax multinomial distribution can be in O(V ). This makes for a total complexity in O(KV H + DH) of the learning update. 4 Document NADE More importantly for the context of this paper, it can be shown that mean-field inference of p(vi = w|v<i ) in the Replicated Softmax corresponds to the following message passing equations, for k ? {i, . . . , D}, j ? {1, . . . , H} and w ? {1, . . . , V }: ? ? V XX X ?j (i) ? sigm ?D cj + Wjw0 ?kw0 (i) + Wjvk ? , (7) k?i w0 =1 ?kw (i) ? P exp(bw + P 0 w0 exp(bw + Wjw ?j (i)) P . 0 j Wjw ?j (i)) j k<i (8) Following the derivation of NADE, we can truncate the application of these equations to obtain a feed-forward architecture providing an estimate of p(vi = w|v<i ) through ?iw (i) for all i. Specifically, if we consider a single iteration of message passing with ?kw0 (i) initialized to 0, we untie the parameter weight matrix between each equation into two separate matrices W and V and remove the multiplication by D of the hidden bias, we obtain the following feed-forward architecture: ! X exp(bw + Vw,: hi (v<i )) (9) hi (v<i ) = sigm c + W:,vk , p(vi = w|v<i ) = P 0 0 w0 exp(bw + Vw ,: hi (v<i )) k<i for i ? {1, . . . , D}. In words, the probability of the ith word vi is based on a position dependent hidden layer hi (v<i ) which extracts a representation out of all previous words v<i . This latent representation is efficient to compute, as it consists simply in a linear transformation followed by an element-wise sigmoidal nonlinearity. Unlike in the Replicated Softmax, we have found that multiplying the hidden bias by D was not necessary and, in fact, slightly hampered the performance of the model, so we opted for its removal. 
To obtain the probability of the next word v_{i+1}, one must first compute the hidden layer

    h_{i+1}(\mathbf{v}_{<i+1}) = \mathrm{sigm}\Big(c + \sum_{k < i+1} W_{:,v_k}\Big) = \mathrm{sigm}\Big(W_{:,v_i} + c + \sum_{k < i} W_{:,v_k}\Big), \qquad (10)

which is efficiently computed by reusing the previous linear transformation c + Σ_{k<i} W_{:,v_k} and adding W_{:,v_i}. With this procedure, we see that computing all hidden layers h_i(v_{<i}) is in O(DH).

Computing the softmax nonlinearity of each p(v_i = w | v_{<i}) in Equation 9 requires time linear in V, which we would like to avoid. Fortunately, unlike in the Replicated Softmax, we are not tied to the use of a large softmax nonlinearity to model probabilities over words. In the literature on neural probabilistic language models, the large softmax over words is often replaced by a probabilistic tree model in which each path from the root to a leaf corresponds to a word [6, 7]. The probabilities of each left/right transition in the tree are modeled by a set of binary logistic regressors and the probability of a given word is then obtained by multiplying the probabilities of each left/right choice of the associated tree path. Specifically, let l(v_i) be the sequence of tree nodes on the path from the root to the word v_i and let π(v_i) be the sequence of binary left/right choices for each of those nodes (e.g. l(v_i)_1 will always be the root of the tree and π(v_i)_1 will be 0 if the word leaf node is in its left subtree or 1 otherwise). Let matrix V now be the matrix containing the logistic regression weights V_{l(v_i)_m,:} of each tree node l(v_i)_m as its rows and b_{l(v_i)_m} be its bias. The probability p(v_i = w | v_{<i}) is now computed from hidden layer h_i(v_{<i}) as follows:

    p(v_i = w \mid \mathbf{v}_{<i}) = \prod_{m=1}^{|\pi(v_i)|} p(\pi(v_i)_m \mid \mathbf{v}_{<i}), \qquad p(\pi(v_i)_m = 1 \mid \mathbf{v}_{<i}) = \mathrm{sigm}\big(b_{l(v_i)_m} + V_{l(v_i)_m,:} h_i(\mathbf{v}_{<i})\big). \qquad (11)

The conditionals of Equation 11 let us compute p(v) = Π_i p(v_i | v_{<i}) for any document and the parameters {b, c, W, V} can be learned by minimizing the negative data log-likelihood with stochastic gradient descent. Once the model is trained, it can be used to extract a representation from a new document v* by computing the value of its hidden layer after observing all of its words, which we note h(v*) = sigm(c + Σ_i W_{:,v*_i}).

For a full binary tree of all V words, computing Equation 11 will involve O(log(V)) binary logistic regressions. In our experiments, we used a randomly generated full binary tree with V leaves, each assigned to a unique word of the vocabulary. An even better option would be to derive the tree using Huffman coding, which would reduce the average path lengths even more. Since the computation of each logistic regression is in O(H) and there are D words in a document, the complexity of computing all p(v_i = w | v_{<i}) given the hidden layers is in O(log(V) DH). The total complexity of computing p(v) and updating the parameters under the model is therefore O(log(V) DH + DH). When compared to the complexity O(KVH + DH) of the Replicated Softmax, this is quite competitive. (In our experiments, a single training pass of DocNADE on the 20 Newsgroups and RCV1-v2 data sets (see Section 6.1 for details) took on average 13 seconds and 726 seconds respectively. On the other hand, for K = 1 Gibbs sampling steps, our implementation of the Replicated Softmax requires 28 seconds and 4945 seconds. For K = 5, running time increases even more, to 60 seconds and 11000 seconds.) Indeed, Salakhutdinov and Hinton [2] suggest gradually increasing K from 1 to 25, which is larger than log(V) for a very large vocabulary of one million words. Also, the number of words in a document D will usually be much smaller than the vocabulary size V.

The final model, which we refer to as Document NADE (DocNADE), is illustrated in Figure 1. A pseudocode for computing p(v) and the parameter learning gradients for a given document is provided in the supplementary material and our code is available here: http://www.dmi.usherb.ca/~larocheh/code/DocNADE.zip.
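The following sketch puts Equations 9–11 together for a toy vocabulary; the hard-coded binary tree for V = 4 and the random parameter values are placeholder assumptions (no training is shown).

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def docnade_log_likelihood(words, W, c, tree_nodes, tree_bits, b_tree, V_tree):
    """log p(v) for a word-index sequence under Equations (9)-(11).

    W: (H, V) word-to-hidden weights; c: (H,) hidden bias.
    tree_nodes[w], tree_bits[w]: node indices l(w) and left/right choices
    pi(w) on the root-to-leaf path for word w, precomputed from a binary
    tree over the vocabulary. b_tree: (n_nodes,); V_tree: (n_nodes, H).
    """
    a = c.copy()                       # accumulates c + sum_{k<i} W[:, v_k]
    log_p = 0.0
    for w in words:
        h = sigm(a)                    # hidden layer h_i(v_{<i}), O(H) per word
        for node, bit in zip(tree_nodes[w], tree_bits[w]):
            p_right = sigm(b_tree[node] + V_tree[node] @ h)
            log_p += np.log(p_right if bit == 1 else 1.0 - p_right)
        a += W[:, w]                   # Equation (10): incremental update
    return log_p

# Toy full binary tree over V = 4 words: root (node 0), two internal
# children (nodes 1 and 2), with one word at each of the four leaves.
tree_nodes = {0: [0, 1], 1: [0, 1], 2: [0, 2], 3: [0, 2]}
tree_bits  = {0: [0, 0], 1: [0, 1], 2: [1, 0], 3: [1, 1]}
rng = np.random.default_rng(0)
H, V, n_nodes = 5, 4, 3
W = rng.normal(size=(H, V))
c = np.zeros(H)
b_tree = np.zeros(n_nodes)
V_tree = rng.normal(size=(n_nodes, H))
print(docnade_log_likelihood([2, 0, 3], W, c, tree_nodes, tree_bits, b_tree, V_tree))
```

By construction, the per-position probabilities multiply down the tree path and sum to one over the vocabulary, so the softmax of Equation 9 is never materialized.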
4.1 Training from bags of word counts

So far, we have assumed that the ordering of the words in the document was known. However, document data sets often take the form of sets of word-count vectors in which the original word order, which is required by DocNADE to specify the sequence of conditionals p(v_i | v_{<i}), has been lost. One solution is to assume the following generative story: first, a seed document ṽ is sampled from DocNADE and, finally, a random permutation of its words is taken to produce the observed document v. This translates into the following probability distribution:

    p(\mathbf{v}) = \sum_{\tilde{\mathbf{v}} \in \mathcal{V}(\mathbf{v})} p(\mathbf{v} \mid \tilde{\mathbf{v}}) \, p(\tilde{\mathbf{v}}) = \sum_{\tilde{\mathbf{v}} \in \mathcal{V}(\mathbf{v})} \frac{1}{|\mathcal{V}(\mathbf{v})|} \, p(\tilde{\mathbf{v}}), \qquad (12)

where p(ṽ) is modeled by DocNADE and V(v) is the set of all documents ṽ with the same word count vector n(v) = n(ṽ). This distribution is a mixture over all possible permutations that could have generated the original document v.

Now, we can use the fact that sampling uniformly from V(v) can be done solely on the basis of the word counts of v, by randomly sampling words without replacement from those word counts. Therefore, we can train DocNADE on those generated word sequences, as if they were the original documents from which the word counts were extracted. While this is only an approximation of true maximum likelihood learning on the original documents, we've found it to work well in practice.

This approach of training DocNADE can be understood as learning a model that is good at predicting which new words should be inserted in a document at any position, while maintaining its general semantics. The model is therefore learning not to insert "intruder" words, i.e. words that do not belong with the others. After training, a document's learned representation h(v) should contain valuable information to identify intruder words for this document. It's interesting to note that the detection of such intruder words has been used previously as a task in user studies to evaluate the quality of the topics learned by LDA, though at the level of single topics and not whole documents [8].
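Sampling a word sequence consistent with a count vector, as used in this training procedure, reduces to a random permutation of repeated word indices; a minimal sketch:

```python
import numpy as np

def sample_word_sequence(counts, rng):
    """Draw one permuted word sequence consistent with a count vector n(v).

    counts[w] is the number of occurrences of word w; sampling words
    without replacement from the counts is equivalent to picking a
    uniformly random permutation of the original document's words.
    """
    words = np.repeat(np.arange(len(counts)), counts)
    return rng.permutation(words)

rng = np.random.default_rng(0)
counts = np.array([2, 0, 1, 3])           # toy vocabulary of 4 words
print(sample_word_sequence(counts, rng))  # some permutation of [0, 0, 2, 3, 3, 3]
```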
One potential advantage of having a proper generative model under which p(v) can be computed exactly is it becomes possible to do Bayesian learning of the parameters, even on a large scale, using recent online Bayesian inference approaches [13, 14]. 6 Experiments We present two quantitative comparison of DocNADE with the Replicated Softmax. The first compares the performance of DocNADE as a generative model, while the later evaluates whether DocNADE hidden layer can be used as a meaningful representation for documents. Following Salakhutdinov and Hinton [2], we use a hidden layer size of H = 50 in all experiments. A validation set is always set aside to perform model selection of other hyper-parameters, such as the learning rate and the number of learning passes over the training set (based on early stopping). We also tested the use of a hidden layer hyperbolic tangent nonlinearity tanh(x) = (exp(x) ? exp(?x))/(exp(x) + exp(?x)) instead of the sigmoid and always used the best option based on the validation set performance. We end this section with a qualitative inspection of the implicit word representation and topic-features learned by DocNADE. 6 Data Set 20 Newsgroups RCV1-v2 LDA (50) LDA (200) 1091 1437 1058 1142 Replicated Softmax (50) 953 988 DocNADE (50) 896 742 DocNADE St. Dev 6.9 4.5 Table 1: Test perplexity per word for LDA with 50 and 200 latent topics, Replicated Softmax with 50 topics and DocNADE with 50 topics. The results for LDA and Replicated Softmax were taken from Salakhutdinov and Hinton [2]. 6.1 Generative Model Evaluation We first evaluated DocNADE?s performance as a generative model of documents. We performed our evaluation on the 20 Newsgroups and the Reuters Corpus Volume I (RCV1-v2) data sets and we followed the same evaluation as in Salakhutdinov and Hinton [2]: word counts were replaced by log(1 + ni ) rounded to the closest integer and a subset of 50 test documents (2193 words for 20 Newsgroups, 4716 words for RCV1-v2) were used to estimate the test perplexity per word P exp(? N1 t |v1t | log p(vt )). The vocabulary size for 20 Newsgroups was 2000 and 10 000 for RCV1-v2. We used the version of DocNADE that trains from document word counts. To approximate the e corresponding distribution p(v) of Equation 12, we sample a single permuted word sequence v from the word counts. This might seem like a crude approximation, but, as we?ll see, the value of p(e v) tends not to vary a lot across different random permutations of the words. P Instead of minimizing the average document negative log-likelihood ? N1 t log p(vt ), we also P considered minimizing a version normalized by each document?s size ? N1 t |v1t | log p(vt ), though the difference in performance between both ended up not being large. For 20 newsgroups, the model with the best perplexity on the validation set used a learning rate of 0.001, sigmoid hidden activation and optimized the average document negative log-likelihood (non-normalized). For RCV1-v2, a learning rate of 0.1, with sigmoid hidden activation and optimization of the objective normalized by each document?s size performed best. The results are reported in Table 1. A comparison is made with LDA using 50 or 200 topics and the Replicated Softmax with 50 topics. The results for LDA and Replicated Softmax were taken from Salakhutdinov and Hinton [2]. We see that DocNADE achieves lower perplexity than both models. On RCV1-v2, DocNADE reaches a perplexity that is almost half that of LDA with 50 topics. 
We also provide the standard deviation of the perplexity obtained by repeating 100 times the calculation of the perplexity on the test set using different permuted word sequences ṽ. We see that it is fairly small, which confirms that the value of p(ṽ) does not vary a lot across different permutations. This is consistent with the observation made by Larochelle and Murray [3] that results are stable with respect to the choice of ordering for the conditionals p(v_i | v_{<i}).

6.2 Document Retrieval Evaluation

We also evaluated the quality of the document representation h(v) learned by DocNADE in an information retrieval task using the 20 Newsgroups data set and its label information. In this context, all test documents were each used as queries and compared to a fraction of the closest documents in the original training set. Similarity between documents is computed using the cosine angle between document representations. We then compute the average number of retrieved training documents sharing the same label as the query (precision), and so for different fractions of retrieved documents.

For learning, we set aside 1000 documents for validation. For model selection, we used the validation set as the query set and used the average precision at 0.02% retrieved documents as the performance measure. We used only the training objective normalized by the document size and set the maximum number of training passes to 973 (approximately 10 million parameter updates). The best learning rate was 0.01, with tanh hidden activation. Notice that the labels are not used during training.

Since Salakhutdinov and Hinton [2] showed that it strictly outperforms LDA on this problem, we only compare to the Replicated Softmax. We performed stochastic gradient descent based on the contrastive divergence approximation during 973 training passes, and so for different learning rates. As recommended in Salakhutdinov and Hinton [2], we gradually increased the number of Gibbs sampling steps K from 1 to 25, but also tried increasing it only to 5 or maintaining it at K = 1. Optionally, we also used mean-field inference for the first few training passes. The best combination of these choices was selected based on validation performance.

[Figure 2: (Left) Information retrieval task results on the 20 Newsgroups data set. The error bars correspond to the standard errors. (Right) "Hidden unit topics": illustration of some topics learned by DocNADE. A topic i is visualized by picking the 10 words w with strongest connection W_{iw}. The four illustrated topics are: jesus, atheism, christianity, christ, athos, atheists, bible, christians, sin, atheist; shuttle, orbit, lunar, spacecraft, nasa, space, launch, saturn, billion, satellite; season, players, nhl, league, braves, playoffs, rangers, hockey, pitching, team; encryption, escrow, pgp, crypto, nsa, rutgers, clipper, secure, encrypted, keys.]

Table 2: The five nearest neighbors in the word representation space learned by DocNADE.

    weapons    medical     companies     define        israel         book         windows
    weapon     treatment   demand        defined       israeli        reading      dos
    shooting   medecine    commercial    definition    israelis       read         microsoft
    firearms   patients    agency        refer         arab           books        version
    assault    process     company       make          palestinian    relevent     ms
    armed      studies     credit        examples      arabs          collection   pc

The final results are presented in Figure 2. We see that DocNADE compares favorably with the Replicated Softmax. DocNADE is never outperformed by the Replicated Softmax and outperforms it for the intermediate retrieval fractions.
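A minimal sketch of the retrieval protocol, assuming document representations h(v) have already been extracted; the small epsilon guard and the random placeholder data are our additions for illustration:

```python
import numpy as np

def retrieval_precision(H_train, y_train, h_query, y_query, fraction):
    """Precision at a retrieval fraction using cosine similarity of h(v)."""
    sims = (H_train @ h_query) / (
        np.linalg.norm(H_train, axis=1) * np.linalg.norm(h_query) + 1e-12)
    m = max(1, int(fraction * len(H_train)))
    retrieved = np.argsort(-sims)[:m]         # most similar training documents
    return float(np.mean(y_train[retrieved] == y_query))

rng = np.random.default_rng(0)
H_train = rng.normal(size=(100, 50))          # placeholder representations
y_train = rng.integers(0, 20, size=100)       # placeholder newsgroup labels
print(retrieval_precision(H_train, y_train, rng.normal(size=50), 3, 0.02))
```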
6.3 Qualitative Inspection of Learned Representations

Since topic models are often used for the exploratory analysis of unlabeled text, we looked at whether meaningful semantics were captured by DocNADE. First, to inspect the nature of topics modeled by the hidden units, we looked at the words with strongest positive connections to each hidden unit, i.e. the words w that have the largest values of W_{i,w} for the ith hidden unit. Figure 2 shows four topics extracted this way that could be understood as topics about religion, space, sports and security, which are label (sub)categories in 20 Newsgroups. We can also extract word representations, by using the columns W_{:,w} as the vector representation of each word w. Table 2 shows the five nearest neighbors of some selected words in this space, confirming that the word representations are meaningful. In the supplementary material, we also provide 2D visualizations of these representations based on t-SNE [15], for 20 Newsgroups and RCV1-v2.

7 Conclusion

We have proposed DocNADE, an unsupervised neural network topic model of documents, and have shown that it is a competitive model both as a generative model and as a document representation learning algorithm. Its training has the advantageous property of scaling sublinearly with the vocabulary size. Since the early work on topic modeling, research on the subject has progressed by developing Bayesian algorithms for topic modeling, by exploiting labeled data and by incorporating more structure within the latent topic representation. We feel like this is a plausible and most natural course to follow for future research.

Acknowledgment

We thank Ruslan Salakhutdinov for providing us with the data sets used in the experiments. This work was supported by NSERC and Google.

References

[1] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3(4-5):993–1022, 2003.
[2] Ruslan Salakhutdinov and Geoffrey Hinton. Replicated Softmax: an Undirected Topic Model. In Advances in Neural Information Processing Systems 22 (NIPS 2009), pages 1607–1614, 2009.
[3] Hugo Larochelle and Iain Murray. The Neural Autoregressive Distribution Estimator. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS 2011), volume 15, pages 29–37, Ft. Lauderdale, USA, 2011. JMLR W&CP.
[4] Yoshua Bengio and Samy Bengio. Modeling High-Dimensional Discrete Data with Multi-Layer Neural Networks. In Advances in Neural Information Processing Systems 12 (NIPS 1999), pages 400–406. MIT Press, 2000.
[5] Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[6] Frederic Morin and Yoshua Bengio. Hierarchical Probabilistic Neural Network Language Model. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AISTATS 2005), pages 246–252. Society for Artificial Intelligence and Statistics, 2005.
[7] Andriy Mnih and Geoffrey E. Hinton. A Scalable Hierarchical Distributed Language Model. In Advances in Neural Information Processing Systems 21 (NIPS 2008), pages 1081–1088, 2009.
[8] Jonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, and David Blei. Reading Tea Leaves: How Humans Interpret Topic Models. In Advances in Neural Information Processing Systems 22 (NIPS 2009), pages 288–296, 2009.
[9] Jacob Eisenstein, Amr Ahmed, and Eric P. Xing. Sparse Additive Generative Models of Text.
In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), pages 1041–1048. Omnipress, 2011.
[10] George E. Dahl, Ryan P. Adams, and Hugo Larochelle. Training Restricted Boltzmann Machines on Word Observations. In Proceedings of the 29th International Conference on Machine Learning (ICML 2012), 2012.
[11] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), pages 513–520. Omnipress, 2011.
[12] Yann Dauphin, Xavier Glorot, and Yoshua Bengio. Large-Scale Learning of Embeddings with Reconstruction Sampling. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), pages 945–952. Omnipress, 2011.
[13] Max Welling and Yee Whye Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), pages 681–688. Omnipress, 2011.
[14] Sungjin Ahn, Anoop Korattikara, and Max Welling. Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring. In Proceedings of the 29th International Conference on Machine Learning (ICML 2012), 2012.
[15] Laurens van der Maaten and Geoffrey E. Hinton. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008. URL http://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf.
Monte Carlo Methods for Maximum Margin Supervised Topic Models

Qixia Jiang*, Jun Zhu*, Maosong Sun, and Eric P. Xing
Department of Computer Science & Technology, Tsinghua National TNList Lab, State Key Lab of Intelligent Tech. & Sys., Tsinghua University, Beijing 100084, China
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
{qixia,dcszj,sms}@mail.tsinghua.edu.cn; epxing@cs.cmu.edu

Abstract

An effective strategy for exploiting supervising side information to discover predictive topic representations is to impose discriminative constraints, induced by such information, on the posterior distributions under a topic model. This strategy has been adopted by a number of supervised topic models, such as MedLDA, which employs max-margin posterior constraints. However, unlike likelihood-based supervised topic models, whose posterior inference can be carried out using the Bayes rule, the max-margin posterior constraints have made Monte Carlo methods infeasible, or at least not directly applicable, thereby limiting the choice of inference algorithms to variational approximations with strict mean-field assumptions. In this paper, we develop two efficient Monte Carlo methods under much weaker assumptions for max-margin supervised topic models, based on an importance sampler and a collapsed Gibbs sampler, respectively, in a convex dual formulation. We report thorough experimental results that compare our approach favorably against existing alternatives in both accuracy and efficiency.

1 Introduction

Topic models, such as Latent Dirichlet Allocation (LDA) [3], have shown great promise in discovering latent semantic representations of large collections of text documents. In order to fit data better, LDA has been successfully extended in various ways. One notable extension is supervised topic models, which were developed to incorporate supervising side information for discovering predictive latent topic representations. Representative methods include supervised LDA (sLDA) [2, 12], discriminative LDA (DiscLDA) [8], and max-entropy discrimination LDA (MedLDA) [16]. MedLDA differs from its counterpart supervised topic models by imposing discriminative constraints (i.e., max-margin constraints) directly on the desired posterior distributions, instead of defining a normalized likelihood model as in sLDA and DiscLDA. Such topic models with max-margin posterior constraints have shown superior performance in various settings [16, 14, 13, 9]. However, their constrained formulations, especially when using soft margin constraints for inseparable practical problems, make it infeasible, or at least hard,¹ to directly apply Monte Carlo (MC) methods [10], which have been widely used in the posterior inference of likelihood-based models, such as the collapsed Gibbs sampling methods for LDA [5]. Previous inference methods for models with max-margin posterior constraints have relied exclusively on variational methods [7], usually with a strict mean-field assumption. Although factorized variational methods often find faster approximate solutions, they can be inaccurate or yield overly compact results [1].

* indicates equal contributions from these authors.
¹ Rejection sampling can be applied when the constraints are hard, e.g., for separable problems, but it would be inefficient when the sample space is large.
In this paper, we develop efficient Monte Carlo methods for max-margin supervised topic models, which we believe are crucial for a highly scalable implementation and for further performance enhancement of this class of models. Specifically, we first provide a new and equivalent formulation of MedLDA as a regularized Bayesian model with max-margin posterior constraints, based on Zellner's interpretation of the Bayes rule as a learning model [15] and the recent development of regularized Bayesian inference [17]. This interpretation is arguably more natural than the original formulation of MedLDA as hybrid max-likelihood and max-margin learning, where the log-likelihood is approximated by a variational upper bound for computational tractability. Then, we deal with the set of soft max-margin constraints using convex duality methods and derive the optimal solutions of the desired posterior distributions. To effectively reduce the size of the sampling space, we develop two samplers, namely an importance sampler and a collapsed Gibbs sampler [4, 1], with a much weaker assumption on the desired posterior distribution than the mean-field methods in [16]. We note that the work [11] presents a duality method to handle moment-matching constraints in maximum entropy models. Our work extends their results to learning topic models, which have nontrivially structured latent variables, and also uses the general soft margin constraints.

2 Latent Dirichlet Allocation

LDA [3] is a hierarchical Bayesian model that posits each document as an admixture of $K$ topics, where each topic $\Phi_k$ is a multinomial distribution over a $V$-word vocabulary. For document $d$, its topic proportion $\theta_d$ is a multinomial distribution drawn from a Dirichlet prior. Let $w_d = \{w_{dn}\}_{n=1}^{N}$ denote the words appearing in document $d$. For the $n$-th word $w_{dn}$, a topic assignment $z_{dn} = k$ is drawn from $\theta_d$ and $w_{dn}$ is drawn from $\Phi_k$. In short, the generative process of $d$ is

    $\theta_d \sim \mathrm{Dir}(\alpha), \quad z_{dn} = k \sim \mathrm{Mult}(\theta_d), \quad w_{dn} \sim \mathrm{Mult}(\Phi_k),$    (1)

where $\mathrm{Dir}(\cdot)$ is a Dirichlet distribution and $\mathrm{Mult}(\cdot)$ is a multinomial. For fully-Bayesian LDA, the topics are also random samples drawn from a Dirichlet prior, i.e., $\Phi_k \sim \mathrm{Dir}(\beta)$.

Let $W = \{w_d\}_{d=1}^D$ denote all the words in a corpus with $D$ documents, and define $z_d = \{z_{dn}\}_{n=1}^N$, $Z = \{z_d\}_{d=1}^D$, $\Theta = \{\theta_d\}_{d=1}^D$. The goal of LDA is to infer the posterior distribution

    $p(\Theta, Z, \Phi \mid W, \alpha, \beta) = \dfrac{p_0(\Theta, Z, \Phi \mid \alpha, \beta)\, p(W \mid \Theta, Z, \Phi)}{p(W \mid \alpha, \beta)}.$    (2)

Since inferring the true posterior distribution is intractable, researchers must resort to variational [3] or Monte Carlo [5] approximations. Although both approaches have shown success in various scenarios, they have complementary advantages: variational methods (e.g., mean-field) are generally more efficient, while MC methods can obtain more accurate estimates.

3 MedLDA: a supervised topic model with max-margin constraints

MedLDA extends LDA by integrating max-margin learning into the procedure of discovering latent topic representations, in order to learn latent representations that are good for predicting class labels or rating scores of a document. Empirically, MedLDA and its various extensions [14, 13, 9] have demonstrated promise in learning more discriminative topic representations. The original MedLDA was designed as hybrid max-likelihood and max-margin learning, where the intractable log-likelihood is approximated by a variational bound. To derive our sampling methods, we present a new interpretation of MedLDA from the perspective of regularized Bayesian inference [17].
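For concreteness, the generative process (1) can be simulated in a few lines. This is an illustrative sketch of our own (not from the paper); the function name and the use of symmetric scalar hyper-parameters are assumptions:

```python
import numpy as np

def sample_lda_corpus(D, N, K, V, alpha, beta, rng=None):
    """Draw a synthetic corpus from the LDA generative process of Eq. (1).

    D documents of N words each, K topics, vocabulary of size V;
    alpha and beta are symmetric Dirichlet hyper-parameters.
    """
    rng = rng or np.random.default_rng(0)
    Phi = rng.dirichlet(beta * np.ones(V), size=K)     # topics, Phi_k ~ Dir(beta)
    Theta = rng.dirichlet(alpha * np.ones(K), size=D)  # proportions, theta_d ~ Dir(alpha)
    Z = np.array([rng.choice(K, size=N, p=Theta[d]) for d in range(D)])
    W = np.array([[rng.choice(V, p=Phi[Z[d, n]]) for n in range(N)]
                  for d in range(D)])
    return W, Z, Theta, Phi
```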
3.1 Bayesian inference as a learning model

As shown in Eq. (2), Bayesian inference is an information processing rule that projects the prior $p_0$ and empirical evidence to a post-data posterior distribution via the Bayes rule. It is the core of likelihood-based supervised topic models [2, 12]. A fresh interpretation of Bayesian inference was given by Zellner [15], which leads to our novel interpretation of MedLDA. Specifically, Zellner showed that the posterior distribution given by the Bayes rule is the solution of an optimization problem. For instance, the posterior $p(\Theta, Z, \Phi \mid W)$ of LDA is equivalent to the optimum solution of

    $\min_{p(\Theta,Z,\Phi)\in\mathcal{P}} \; \mathrm{KL}[p(\Theta,Z,\Phi)\,\|\,p_0(\Theta,Z,\Phi)] - \mathbb{E}_p[\log p(W \mid \Theta,Z,\Phi)],$    (3)

where $\mathrm{KL}(q\|p)$ is the Kullback-Leibler divergence from $q$ to $p$, and $\mathcal{P}$ is the space of probability distributions. We will use $L(p(\Theta,Z,\Phi))$ to denote this objective function.

3.2 MedLDA: a regularized Bayesian model

For brevity, we consider the classification model. Let $\mathcal{D} = \{(w_d, y_d)\}_{d=1}^D$ be a given fully-labeled training set, where the response variable $Y$ takes values from a finite set $\mathcal{Y} = \{1, \dots, M\}$. MedLDA consists of two parts. The first part is an LDA likelihood model for describing input documents; as in previous work, we use the partial² likelihood model for $W$. The second part is a mechanism for incorporating the supervising signal. Since our goal is to discover latent representations $Z$ that are good for classification, one natural solution is to connect $Z$ directly to our ultimate goal. MedLDA achieves this by building a classification model on $Z$. A good candidate classification model is the family of max-margin methods, which avoid defining a normalized likelihood model [12].

Formally, let $\eta$ denote the parameters of the classification model. To make the model fully Bayesian, we also treat $\eta$ as random. Then, we want to infer the joint posterior distribution $p(\eta, \Theta, Z, \Phi \mid \mathcal{D})$. For classification, MedLDA defines the discrimination function

    $F(y; w) = \mathbb{E}_{p(\eta, z \mid w)}[F(y, \eta, z; w)], \qquad F(y, \eta, z; w) = \eta^\top f(y, \bar{z}),$    (4)

where $\bar{z}$ is a $K$-dimensional vector whose element $\bar{z}_k$ equals $\frac{1}{N}\sum_{n=1}^N \mathbb{I}(z_n = k)$, and $\mathbb{I}(x)$ is an indicator function that equals 1 when $x$ is true and 0 otherwise; $f(y, \bar{z})$ is an $MK$-dimensional vector whose elements from $(y-1)K+1$ to $yK$ are $\bar{z}$ and all others are zero; and $\eta$ is an $MK$-dimensional vector concatenating $M$ class-specific sub-vectors. With the above definitions, a natural prediction rule is

    $\hat{y} = \arg\max_y F(y; w),$    (5)

and we would like to "regularize" the properties of the latent topic representations to make them suitable for a classification task. One way to achieve that goal is to take the optimization view of the Bayes theorem and impose the following max-margin constraints on problem (3):

    $F(y_d; w_d) - F(y; w_d) \ge \ell_d(y) - \xi_d, \quad \forall y \in \mathcal{Y}, \; \forall d,$    (6)

where $\ell_d(y)$ is a non-negative cost function that penalizes wrong predictions, and $\xi = \{\xi_d\}_{d=1}^D$ are non-negative slack variables for inseparable cases. Let $L(p) = \mathrm{KL}(p\,\|\,p_0(\eta,\Theta,Z,\Phi)) - \mathbb{E}_p[\log p(W \mid Z, \Phi)]$ and $\Delta f(y, \bar{z}_d) = f(y_d, \bar{z}_d) - f(y, \bar{z}_d)$. Then, we define soft-margin MedLDA as solving

    $\min_{p(\eta,\Theta,Z,\Phi)\in\mathcal{P},\,\xi} \; L(p(\eta,\Theta,Z,\Phi)) + \frac{C}{D}\sum_{d=1}^D \xi_d$
    s.t. $\mathbb{E}_p[\eta^\top \Delta f(y, \bar{z}_d)] \ge \ell_d(y) - \xi_d, \;\; \xi_d \ge 0, \;\; \forall d, \forall y,$    (7)

where the prior is $p_0(\eta,\Theta,Z,\Phi) = p_0(\eta)\,p_0(\Theta,Z,\Phi)$. With the above discussion, we can see that MedLDA is an instance of regularized Bayesian models [17]. Also, problem (7) can be equivalently written as

    $\min_{p(\eta,\Theta,Z,\Phi)\in\mathcal{P}} \; L(p(\eta,\Theta,Z,\Phi)) + C\,\mathcal{R}(p(\eta,\Theta,Z,\Phi)),$    (8)

where $\mathcal{R}(p) = \frac{1}{D}\sum_d \max_y \big(\ell_d(y) - \mathbb{E}_p[\eta^\top \Delta f(y, \bar{z}_d)]\big)$ is the hinge loss, an upper bound on the prediction error on the training data.
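To make the feature map in Eqs. (4)-(5) concrete, here is a small sketch (our own illustration, using 0-based class indices) of $f(y, \bar{z})$ and the prediction rule:

```python
import numpy as np

def feature_map(y, zbar, M):
    """f(y, zbar): an MK-vector that places zbar in the block for class y."""
    K = zbar.shape[0]
    f = np.zeros(M * K)
    f[y * K:(y + 1) * K] = zbar       # classes indexed 0..M-1 here
    return f

def predict(eta, zbar, M):
    """Prediction rule (5): argmax_y E[eta]^T f(y, zbar)."""
    K = zbar.shape[0]
    scores = eta.reshape(M, K) @ zbar  # one score per class
    return int(np.argmax(scores))
```

Because $f(y, \bar{z})$ is block-sparse, the score for class $y$ depends only on the $y$-th sub-vector of $\eta$, which is what the `reshape` in `predict` exploits.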
4 Monte Carlo methods for MedLDA

As in other variants of topic models, it is intractable to solve problem (7), or the equivalent problem (8), directly. Previous solutions resort to variational mean-field approximations. It is easy to show that the variational EM method in [16] is a coordinate descent algorithm for solving problem (7) with the additional fully-factorized mean-field constraint

    $p(\eta,\Theta,Z,\Phi) = p(\eta)\Big(\prod_d p(\theta_d) \prod_n p(z_{dn})\Big) \prod_k p(\Phi_k).$    (9)

Below, we present two MC sampling methods that solve the MedLDA problem with much weaker constraints on $p$, and which can therefore be expected to produce more accurate solutions. Specifically, we only assume $p(\eta,\Theta,Z,\Phi) = p(\eta)\,p(\Theta,Z,\Phi)$. The general procedure is then to solve problem (8) by alternately performing the following two steps.

² A full likelihood model on both $W$ and $Y$ can be defined as in [12], but its normalization constant (a function of $Z$) could make the problem hard to solve.

Estimate $p(\eta)$: Given $p(\Theta,Z,\Phi)$, the subproblem (in an equivalent constrained form) is to solve

    $\min_{p(\eta),\,\xi} \; \mathrm{KL}(p(\eta)\,\|\,p_0(\eta)) + \frac{C}{D}\sum_{d=1}^D \xi_d$
    s.t. $\mathbb{E}_p[\eta]^\top \Delta f(y, \mathbb{E}[\bar{z}_d]) \ge \ell_d(y) - \xi_d, \;\; \xi_d \ge 0, \;\; \forall d, \forall y.$    (10)

By using the Lagrangian method with multipliers $\lambda$, we have the optimum posterior distribution

    $p(\eta) \propto p_0(\eta)\, e^{\eta^\top \sum_{d=1}^D \sum_y \lambda_d^y \Delta f(y, \mathbb{E}[\bar{z}_d])}.$    (11)

For the prior $p_0$, for simplicity, we choose the standard normal, i.e., $p_0(\eta) = \mathcal{N}(0, I)$. In this case $p(\eta) = \mathcal{N}(\kappa, I)$, and the dual problem is

    $\max_{\lambda} \; -\frac{1}{2}\kappa^\top\kappa + \sum_{d=1}^D \sum_y \lambda_d^y \ell_d(y)$
    s.t. $\sum_y \lambda_d^y \in \big[0, \tfrac{C}{D}\big], \;\; \forall d,$    (12)

where $\kappa = \sum_{d=1}^D \sum_y \lambda_d^y \Delta f(y, \mathbb{E}[\bar{z}_d])$. Note that $\kappa$ is the posterior mean of the classifier parameters $\eta$, and the element $\kappa_{yk}$ represents the contribution of topic $k$ in classifying a data point to category $y$. This problem is the dual of a multi-class SVM [6], and we can solve it (or its primal form) efficiently using existing high-performance SVM learners. We denote the optimum solution of this problem by $(p^*(\eta), \kappa^*, \lambda^*, \xi^*)$.
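The $p(\eta)$ step is thus just a multi-class SVM on the expected topic proportions $\mathbb{E}[\bar{z}_d]$. As a rough, hedged stand-in for the cutting-plane dual solver used in the experiments (and ignoring the non-0/1 cost $\ell_d(y)$, which an off-the-shelf solver does not expose), one could call scikit-learn's Crammer-Singer implementation; this sketch is ours, not the authors' implementation:

```python
import numpy as np
from sklearn.svm import LinearSVC

def estimate_p_eta(E_zbar, y, C):
    """Approximate step 1: infer p(eta) = N(kappa, I) from current E[zbar_d].

    E_zbar : (D, K) matrix of expected topic proportions per document.
    y      : (D,) class labels.
    Returns a (M, K) weight matrix playing the role of kappa* in Eq. (12).
    """
    svm = LinearSVC(C=C, multi_class='crammer_singer', fit_intercept=False)
    svm.fit(E_zbar, y)
    return svm.coef_   # rows are the class-specific sub-vectors of kappa
```

The returned matrix is what feeds the max-margin adjustment terms $(\kappa^*_{y_d k} - \kappa^*_{yk})$ in the samplers below.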
Estimate $p(\Theta, Z, \Phi)$: Given $p(\eta)$, the subproblem (in an equivalent constrained form) is to solve

    $\min_{p(\Theta,Z,\Phi),\,\xi} \; L(p(\Theta,Z,\Phi)) + \frac{C}{D}\sum_{d=1}^D \xi_d$
    s.t. $(\kappa^*)^\top \Delta f(y, \mathbb{E}_p[\bar{z}_d]) \ge \ell_d(y) - \xi_d, \;\; \xi_d \ge 0, \;\; \forall d, \forall y.$    (13)

Although in theory we could again solve this subproblem with Lagrangian dual methods, it would be hard to derive the dual objective function (if possible at all). Here, we use the same strategy as in [16]: we update $p(\Theta,Z,\Phi)$ for only one step with $\lambda$ fixed at $\lambda^*$ (the optimum solution of the previous step). It is easy to show that, by fixing $\lambda$ at $\lambda^*$, we obtain the optimum solution

    $p(\Theta,Z,\Phi) \propto p(W, Z, \Theta, \Phi)\, e^{(\kappa^*)^\top \sum_d \sum_y (\lambda_d^y)^* \Delta f(y, \bar{z}_d)}.$    (14)

The differences between MedLDA and LDA lie in the above posterior distribution. The first term is the same as the posterior of LDA (the evidence $p(W)$ can be absorbed into the normalization constant). The second term reflects the regularization effect of the max-margin posterior constraints, which is consistent with our intuition: for those data with non-zero Lagrange multipliers (i.e., data around the decision boundary or misclassified), the second term biases the model towards a new posterior distribution that favors more discriminative representations of these "hard" data points. The remaining problem is how to efficiently draw samples from $p(\Theta,Z,\Phi)$ and estimate the expectations $\mathbb{E}[\bar{z}]$, which are needed when learning the classification model, as accurately as possible. Below, we present two representative samplers: an importance sampler and a collapsed Gibbs sampler.

4.1 Importance sampler

To avoid dealing with the intractable normalization constant of $p(\Theta,Z,\Phi)$, one natural choice is importance sampling, which draws samples from a "simple" distribution and estimates expectations as weighted averages over these samples. However, directly applying importance sampling to $p(\Theta,Z,\Phi)$ may cause problems, since importance sampling suffers from severe limitations in large sample spaces. Alternatively, since the distribution in Eq. (14) has the factorized form $p(\Theta,Z,\Phi) = p_0(\Theta,\Phi)\,p(Z \mid \Theta,\Phi)$, another possible method is the ancestral sampling strategy: draw a sample $(\tilde\Theta, \tilde\Phi)$ from $p_0(\Theta,\Phi)$ and then draw samples from $p(Z \mid \tilde\Theta, \tilde\Phi)$. Although it is easy to draw a sample from the Dirichlet prior $p_0(\Theta,\Phi) = \mathrm{Dir}(\alpha)\,\mathrm{Dir}(\beta)$, it would require a large number of samples to obtain a robust estimate of the expectations $\mathbb{E}[Z]$. Below, we present one solution that reduces the sample space.

One feasible method is to collapse $(\Theta,\Phi)$ out and directly draw samples from the marginal distribution $p(Z)$. However, this introduces tight couplings between the elements of $Z$ and makes the number of samples needed to estimate the expectations grow exponentially with the dimensionality of $Z$ for an importance sampler. A practical sampler for this collapsed distribution is a Markov chain, which we present in the next section. Here, we instead propose to use the MAP estimate of $(\Theta, \Phi)$ as their "single sample"³ and proceed to draw samples of $Z$. Specifically, given $(\hat\Theta, \hat\Phi)$, we have the conditional distribution

    $p(Z \mid \hat\Theta, \hat\Phi) \propto p(W, Z \mid \hat\Theta, \hat\Phi)\, e^{(\kappa^*)^\top \sum_d \sum_y (\lambda_d^y)^* \Delta f(y, \bar{z}_d)} = \prod_{d=1}^{D} \prod_{n=1}^{N_d} p(z_{dn} \mid \hat\theta_d, \hat\Phi),$    (15)

where

    $p(z_{dn} = k \mid \hat\theta_d, \hat\Phi, w_{dn} = t) = \frac{1}{Z_{dn}}\, \hat\Phi_{kt}\, \hat\theta_{dk}\, e^{\frac{1}{N_d}\sum_y (\lambda_d^y)^* (\kappa^*_{y_d k} - \kappa^*_{yk})},$    (16)

$Z_{dn}$ is a normalization constant, and $\kappa^*_{yk}$ is the $[(y-1)K + k]$-th element of $\kappa^*$. The difference $(\kappa^*_{y_d k} - \kappa^*_{yk})$ represents the different contributions of topic $k$ in classifying $d$ to the true category $y_d$ versus a wrong category $y$. If the difference is positive, topic $k$ contributes to making a correct prediction for $d$; otherwise, it contributes to making a wrong prediction.

Then, we draw $J$ samples $\{z_{dn}^{(j)}\}_{j=1}^J$ from a proposal distribution $g(z)$ and compute the expectations

    $\mathbb{E}[\bar{z}_{dk}] = \frac{1}{N_d}\sum_{n=1}^{N_d} \mathbb{E}[z_{dn}], \qquad \mathbb{E}[z_{dn}] \approx \sum_{j=1}^J \frac{\omega_{dn}^j}{\sum_{j'=1}^J \omega_{dn}^{j'}}\, z_{dn}^{(j)},$    (17)

where the importance weight $\omega_{dn}^j$ is

    $\omega_{dn}^j = \prod_{k=1}^K \Big( \hat\theta_{dk}\, \hat\Phi_{k w_{dn}}\, e^{\frac{1}{N_d}\sum_y (\lambda_d^y)^* (\kappa^*_{y_d k} - \kappa^*_{yk})} \big/ g(k) \Big)^{\mathbb{I}(z_{dn}^{(j)} = k)}.$    (18)

With the $J$ samples, we update the MAP estimate $(\hat\Theta, \hat\Phi)$:

    $\hat\theta_{dk} \propto \frac{1}{J}\sum_{n=1}^{N_d}\sum_{j=1}^J \frac{\omega_{dn}^j}{\sum_{j'} \omega_{dn}^{j'}}\, \mathbb{I}(z_{dn}^{(j)} = k) + \alpha_k,$
    $\hat\Phi_{kt} \propto \frac{1}{J}\sum_{d=1}^D \sum_{n=1}^{N_d}\sum_{j=1}^J \frac{\omega_{dn}^j}{\sum_{j'} \omega_{dn}^{j'}}\, \mathbb{I}(z_{dn}^{(j)} = k)\, \mathbb{I}(w_{dn} = t) + \beta_t.$    (19)

The above two steps are repeated until convergence, initializing $(\hat\Theta, \hat\Phi)$ to be uniform, and the samples from the last iteration are used to estimate the expectation statistics needed in the problem of inferring $p(\eta)$.

³ This collapses the sample space of $(\Theta, \Phi)$ to a single point.
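A minimal sketch (ours, with our own names) of the self-normalized importance weights of Eqs. (16)-(18) for a single token, under the uniform proposal used in the experiments; `reg_dn` packs the max-margin exponent, which is assumed to be precomputed from $\lambda^*$ and $\kappa^*$:

```python
import numpy as np

def importance_weights(z_samples, theta_d, Phi, w_dn, reg_dn, g=None):
    """Normalized importance weights of Eq. (18) for one token.

    z_samples : (J,) sampled topic assignments for this token.
    theta_d   : (K,) current MAP estimate of the document's proportions.
    Phi       : (K, V) current MAP estimate of the topics.
    reg_dn    : (K,) max-margin adjustments,
                (1/N_d) * sum_y lambda*_{dy} (kappa*[y_d] - kappa*[y]).
    """
    K = Phi.shape[0]
    g = g if g is not None else np.full(K, 1.0 / K)   # uniform proposal
    target = theta_d * Phi[:, w_dn] * np.exp(reg_dn)  # unnormalized Eq. (16)
    w = target[z_samples] / g[z_samples]
    return w / w.sum()
```

With these weights, $\mathbb{E}[z_{dn}]$ in Eq. (17) is just the weighted average of the one-hot encodings of `z_samples`.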
4.2 Collapsed Gibbs sampler

As stated above, another way to effectively reduce the sample space is to integrate out the intermediate variables $(\Theta, \Phi)$ and build a Markov chain whose equilibrium distribution is the resulting marginal distribution $p(Z)$. We propose to use collapsed Gibbs sampling, which has been successfully used for LDA [5]. For MedLDA, we integrate out $(\Theta, \Phi)$ and get the marginalized posterior distribution

    $p(Z) = \frac{1}{\mathcal{Z}} \Big[\prod_{d=1}^D \frac{\Gamma(C_d + \alpha)}{\Gamma(\alpha)}\Big] \Big[\prod_{k=1}^K \frac{\Gamma(C_k + \beta)}{\Gamma(\beta)}\Big]\, e^{(\kappa^*)^\top \sum_d \sum_y (\lambda_d^y)^* \Delta f(y, \bar{z}_d)},$    (20)

where $\Gamma(x) = \frac{\prod_{i=1}^{\dim(x)} \Gamma(x_i)}{\Gamma(\sum_{i=1}^{\dim(x)} x_i)}$ for a vector $x$; $C_k^t$ is the number of times term $t$ is assigned to topic $k$ over the whole corpus, with $C_k = \{C_k^t\}_{t=1}^V$; and $C_d^k$ is the number of times terms are associated with topic $k$ within the $d$-th document, with $C_d = \{C_d^k\}_{k=1}^K$. We can also derive the transition probability of one variable $z_{dn}$ given the others, which we denote by $Z_\neg$:

    $p(z_{dn} = k \mid Z_\neg, W_\neg, w_{dn} = t) \propto \frac{C_{k,\neg n}^t + \beta_t}{\sum_{t'=1}^V C_{k,\neg n}^{t'} + \sum_{t'} \beta_{t'}}\, \big(C_{d,\neg n}^k + \alpha_k\big)\, e^{\frac{1}{N_d}\sum_y (\lambda_d^y)^* (\kappa^*_{y_d k} - \kappa^*_{yk})},$    (21)

where $C_{\cdot,\neg n}$ indicates that term $n$ is excluded from the corresponding document or topic counts. Again, we can see the difference between MedLDA and LDA (under collapsed Gibbs sampling) in the additional last term of Eq. (21), which is due to the max-margin posterior constraints. For those data on the margin or misclassified (with non-zero Lagrange multipliers), the last term is non-zero and acts as a regularizer that directly affects the topic assignments of these difficult data.

Then, we use the transition distribution in Eq. (21) to construct a Markov chain. After this Markov chain has converged (i.e., finished the burn-in stage), we draw $J$ samples $\{Z^{(j)}\}$ and estimate the expectation statistics

    $\mathbb{E}[\bar{z}_{dk}] = \frac{1}{N_d}\sum_{n=1}^{N_d} \mathbb{E}[z_{dn}], \qquad \mathbb{E}[z_{dn}] = \frac{1}{J}\sum_{j=1}^J z_{dn}^{(j)}.$    (22)

4.3 Prediction

To make predictions on unlabeled testing data using the prediction rule (5), we follow the approach adopted for variational MedLDA, which uses a point estimate of the topics $\Phi$ from training data and makes predictions based on it. Specifically, we use the MAP estimate $\hat\Phi$ to replace the distribution $p(\Phi)$. For the importance sampler, $\hat\Phi$ is computed as in Eq. (19). For the collapsed Gibbs sampler, an estimate of $\hat\Phi$ using the samples is $\hat\Phi_{kt} \propto \frac{1}{J}\sum_{j=1}^J C_k^t(j) + \beta_t$, where $C_k^t(j)$ is the number of times term $t$ is assigned to topic $k$ in the $j$-th sample.

Given a new document $w$ to be predicted: for the importance sampler, the importance weight is altered to $\omega_n^j = \prod_{k=1}^K \big(\theta_k \hat\Phi_{k w_n} / g(k)\big)^{\mathbb{I}(z_n = k)}$, and we approximate the expectation of $z$ as in Eq. (17). For the Gibbs sampler, we infer the latent components $z$ using the obtained $\hat\Phi$ via $p(z_n = k \mid z_{\neg n}) \propto \hat\Phi_{k w_n} (C_{\neg n}^k + \alpha_k)$, where $C_{\neg n}^k$ is the number of times the terms in document $w$ are assigned to topic $k$, with the $n$-th term excluded. We then approximate $\mathbb{E}[\bar{z}]$ as in Eq. (22).
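Before turning to the experiments, here is a compact sketch (our own, not the authors' code) of one collapsed Gibbs update following Eq. (21), with symmetric scalar hyper-parameters assumed and the max-margin term precomputed into `reg`:

```python
import numpy as np

def gibbs_step(d, n, z, docs, Cdk, Ckt, Ck, alpha, beta, reg, rng):
    """Resample z_{dn} from the conditional of Eq. (21).

    Cdk : (D, K) document-topic counts; Ckt : (K, V) topic-term counts;
    Ck  : (K,) topic totals; reg : (D, K) max-margin adjustments,
          reg[d, k] = (1/N_d) * sum_y lambda*_{dy} (kappa*[y_d, k] - kappa*[y, k]).
    """
    t = docs[d][n]
    k_old = z[d][n]
    Cdk[d, k_old] -= 1; Ckt[k_old, t] -= 1; Ck[k_old] -= 1   # exclude this token
    p = (Ckt[:, t] + beta) / (Ck + beta * Ckt.shape[1])      # term likelihood
    p *= (Cdk[d] + alpha) * np.exp(reg[d])                   # doc prior * margin term
    k_new = rng.choice(len(p), p=p / p.sum())
    Cdk[d, k_new] += 1; Ckt[k_new, t] += 1; Ck[k_new] += 1   # restore counts
    z[d][n] = k_new
```

For plain LDA, `reg` is identically zero and this reduces to the standard collapsed Gibbs update of [5].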
5 Experiments

We empirically evaluate the importance sampler and the Gibbs sampler for MedLDA (denoted iMedLDA and gMedLDA, respectively) on the 20 Newsgroups data set, with a standard list of stop words⁴ removed. This data set contains about 20K postings within 20 groups. Due to space limitations, we focus on the multi-class setting. We use the cutting-plane algorithm [6] to solve the multi-class SVM when inferring $p(\eta)$ and solving for the Lagrange multipliers $\lambda$ in MedLDA. For simplicity, we use the uniform proposal distribution $g$ in iMedLDA; in this case, we can globally draw $J$ (e.g., $3K$) samples $\{Z^{(j)}\}_{j=1}^J$ from $g(z)$ outside the iteration loop and only update the importance weights, to save time. For gMedLDA, we keep $J$ (e.g., 20) adjacent samples after the chain has converged to estimate the expectation statistics. To be fair, we use the same $C$ for the different MedLDA methods. The optimum $C$ is chosen via 5-fold cross-validation during the training procedure of fMedLDA from $\{a^2 : a = 1, \dots, 8\}$. We use symmetric Dirichlet priors for all LDA topic models, i.e., $\alpha = \alpha e_K$ and $\beta = \beta e_V$, where $e_n$ is an $n$-dimensional vector with every entry equal to 1. We declare a Markov chain converged when (1) it has run for a maximum number of iterations (e.g., 100), or (2) the relative change in its objective, i.e., $|L^{t+1} - L^t| / |L^t|$, is less than a tolerance threshold $\epsilon$ (e.g., $\epsilon = 10^{-4}$). We use the same strategy to judge whether the overall inference algorithm has converged.

We randomly select 7,505 documents from the whole set as the test set and use the rest as training data. We set the cost parameter $\ell_d(y)$ in problem (7) to 16, which produces better classification performance than the standard 0/1 cost [16]. To measure the sparsity of the latent representations of documents, we compute the average entropy over test documents, $\frac{1}{|D_t|}\sum_{d \in D_t} H(\theta_d)$. We also measure the sparsity of the inferred topic distributions $\Phi$ in terms of the average entropy over topics, i.e., $\frac{1}{K}\sum_{k=1}^K H(\Phi_k)$. All experiments are carried out on a PC with a 2.2GHz CPU and 3.6G of RAM. We report the mean and standard deviation for each model over 4 randomly initialized runs.

⁴ http://mallet.cs.umass.edu/

5.1 Performance with different topic numbers

This section compares gMedLDA and iMedLDA with baseline methods. MedLDA was previously shown to outperform sLDA for document classification; here, we focus on comparing the performance of MedLDA and LDA under different inference algorithms. Specifically, we compare with the LDA model that uses collapsed Gibbs sampling [5] (denoted gLDA) and the LDA model that uses fully-factorized variational methods [3] (denoted fLDA). For the LDA models, we discover the latent representations of the training documents and use them to build a multi-class SVM classifier. For MedLDA, we also report results using fully-factorized variational methods (denoted fMedLDA) as in [16]. Furthermore, fMedLDA and fLDA optimize the hyper-parameter $\alpha$ using the Newton-Raphson method [3], while gMedLDA, iMedLDA and gLDA determine $\alpha$ by 5-fold cross-validation. We tested a wide range of values of $\alpha$ (e.g., $10^{-16}$ to $10^3$) and found that the performance of iMedLDA degrades seriously when $\alpha$ is larger than $10^{-3}$. We therefore set $\alpha$ to $10^{-5}$ for iMedLDA and to 0.01 for the other topic models, as in the literature [5].

[Figure 1: Performance of multi-class classification of different topic models with different topic numbers on the 20 Newsgroups data set: (a) classification accuracy, (b) the average entropy of $\theta$ over test documents, and (c) the average entropy of the topic distributions $\Phi$.]

Fig. 1(a) shows the accuracy. We can see that the Monte Carlo methods generally outperform the fully-factorized mean-field methods, mainly because of their weaker factorization assumptions. The superior performance of iMedLDA over gMedLDA is probably because iMedLDA is more effective at dealing with sample sparsity issues; more insights are provided in Section 5.2.
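The entropy-based sparsity measures used in Figs. 1(b-c) are easy to compute once $\Theta$ or $\Phi$ has been inferred; a minimal sketch of ours:

```python
import numpy as np

def average_entropy(P):
    """Mean Shannon entropy of the rows of P, e.g. theta_d over the test
    documents or the topic distributions Phi_k; lower means sparser."""
    P = np.clip(P, 1e-12, None)          # guard against log(0)
    return float(np.mean(-np.sum(P * np.log(P), axis=1)))
```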
Fig. 1(b) shows the average entropy of the latent representations $\theta$ over test documents. We find that the entropies of gMedLDA and iMedLDA are smaller than those of gLDA and fLDA, especially for (relatively) large $K$. This implies that sampling methods for MedLDA can effectively concentrate the probability mass on just a few topics and thus discover more predictive topic representations. However, fMedLDA yields the smallest entropy, mainly because fully-factorized variational methods tend to produce overly compact results, e.g., sparse local optima.

Fig. 1(c) shows the average entropy of the topic distributions $\Phi$ over topics. We can see that gMedLDA improves the sparsity of $\Phi$ relative to fMedLDA. However, gMedLDA's entropy is larger than gLDA's: for the "hard" documents, the exponential component in Eq. (21) regularizes the conditional probability $p(z_{dn} \mid Z_\neg)$ and leads to a smoother estimate of $\Phi$. On the other hand, we find that iMedLDA has the largest entropy. This is probably because many of the samples (topic assignments) generated by the proposal distribution are "incorrect", yet the importance sampler still assigns weights to them; as a result, the inferred topic distributions are very dense and thus have a large entropy. Moreover, in the above experiments, we found that the Lagrange multipliers in MedLDA are very sparse (about 1% non-zeros for both iMedLDA and gMedLDA; about 1.5% for fMedLDA), much sparser than those of an SVM built on the raw input data (about 8% non-zeros).

5.2 Sensitivity analysis with respect to key parameters

Sensitivity to $\alpha$. Fig. 2(a) shows the classification performance of gMedLDA and iMedLDA for different values of $\alpha$. The performance of gMedLDA increases as $\alpha$ becomes large and remains stable once $\alpha$ exceeds 0.1. In contrast, the accuracy of iMedLDA decreases somewhat (especially for small $K$) when $\alpha$ becomes large, but is relatively stable when $\alpha$ is small (e.g., $\le 0.01$). This is probably because, with a finite number of samples, the Gibbs sampler tends to produce an overly sparse estimate of $\mathbb{E}[Z]$, and a slightly stronger prior helps deal with this sample-sparsity issue. In contrast, the importance sampler avoids the sparsity issue by using a uniform proposal distribution, which makes the samples cover all topic dimensions well; a small prior is then sufficient for good performance, and increasing the prior's strength can hurt.

Sensitivity to sample size $J$. For sampling methods, we always need to decide how many samples $J$ to keep to ensure sufficient statistical power. Fig. 2(b) shows the classification accuracy of both gMedLDA and iMedLDA for different sample sizes $J$, with $\alpha = 10^{-2}/K$ and $C = 16$.

[Figure 2: Sensitivity study of iMedLDA and gMedLDA: (a) classification accuracy with different $\alpha$ for different topic numbers, (b) classification accuracy with different sample sizes $J$, (c) classification accuracy with different convergence criteria $\epsilon$ for gMedLDA, and (d) classification accuracy of the different methods as a function of iterations when the topic number is 30.]

For gMedLDA, we tested different values of $J$ for training and prediction.
We found that the sample size in the training process has almost no influence on the prediction accuracy, even when it equals 1; hence, for efficiency, we set $J$ to 1 during training. The results show that gMedLDA is relatively stable when $J$ is larger than about 20 at prediction time. For iMedLDA, Fig. 2(b) shows that it becomes stable when the prediction sample size $J$ is larger than $3K$.

Sensitivity to the convergence criterion $\epsilon$. For gMedLDA, we have to judge whether a Markov chain has reached stationarity, and the relative change in the objective is a commonly used diagnostic. We study the influence of $\epsilon$. In this experiment, we do not bound the maximum number of iterations and allow the Gibbs sampler to run until the tolerance $\epsilon$ is reached. Fig. 2(c) shows the accuracy of gMedLDA for different values of $\epsilon$; gMedLDA is relatively insensitive to $\epsilon$. This is mainly because gMedLDA alternately updates the posterior distribution and the Lagrange multipliers, and thus runs Gibbs sampling many times, which compensates for any single Markov chain not having reached stationarity. On the other hand, small $\epsilon$ values can greatly slow convergence: for instance, when the topic number is 90, gMedLDA takes 11,986 seconds to train with $\epsilon = 10^{-4}$ but 1,795 seconds with $\epsilon = 10^{-2}$. These results imply that we can loosen the convergence criterion to speed up training while still obtaining a good model.

Sensitivity to iteration. Fig. 2(d) shows the classification accuracy of MedLDA under the various inference methods as a function of iteration, with the topic number set to 30. All the MedLDA variants converge quite quickly to good accuracy. Compared to fMedLDA, which uses mean-field variational inference, the two MedLDA models using Monte Carlo methods (i.e., iMedLDA and gMedLDA) are slightly faster to reach stable prediction performance.

5.3 Time efficiency

Although gMedLDA can obtain good results even with a loose convergence criterion $\epsilon$, as discussed in Sec. 5.2, we set $\epsilon$ to $10^{-4}$ for all methods in order to obtain a more objective comparison. Fig. 3 reports the total training time of the different models, which includes two phases: inferring the latent topic representations and training the SVMs.

[Figure 3: Training time (CPU-seconds) of the different models as a function of the number of topics.]

We find iMedLDA to be the most efficient, which it owes to (1) generating samples outside the iteration loop and reusing them for all iterations, and (2) using the MAP estimates to collapse the sample space of $(\Theta, \Phi)$ to a "single sample". In contrast, both gMedLDA and fMedLDA must iteratively update the variables or variational parameters. gMedLDA requires more time than fMedLDA but is comparable when $\epsilon$ is set to 0.01. Using the equivalent 1-slack formulation, about 76% of the training time is spent on inference for iMedLDA, and 90% for gMedLDA. For prediction, both iMedLDA and gMedLDA are slightly slower than fMedLDA.

6 Conclusions

We have presented two Monte Carlo methods for MedLDA, a supervised topic model that places max-margin constraints directly on the desired posterior distributions for discovering predictive latent topic representations. Our methods are based on a novel interpretation of MedLDA as a regularized Bayesian model and on a convex dual formulation for handling the soft-margin constraints.
Experimental results on the 20 Newsgroups data set show that the Monte Carlo methods are robust to hyper-parameters and yield very competitive results for such max-margin topic models.

Acknowledgements

Part of the work was done when QJ was visiting CMU. JZ and MS are supported by the National Basic Research Program of China (No. 2013CB329403 and 2012CB316301), the National Natural Science Foundation of China (No. 91120011, 61273023 and 61170196) and Tsinghua Initiative Scientific Research Program No. 20121088071. EX is supported by AFOSR FA95501010247, ONR N000140910758, NSF Career DBI-0546594 and an Alfred P. Sloan Research Fellowship.

References

[1] C.M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[2] D.M. Blei and J.D. McAuliffe. Supervised topic models. NIPS, pages 121–128, 2007.
[3] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[4] A. Gelman, J.B. Carlin, H.S. Stern, and D.B. Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, Boca Raton, FL, 2004.
[5] T.L. Griffiths and M. Steyvers. Finding scientific topics. Proc. of the National Academy of Sciences, pages 5228–5235, 2004.
[6] T. Joachims, T. Finley, and C.N.J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27–59, 2009.
[7] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, and L.K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[8] S. Lacoste-Julien, F. Sha, and M.I. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. NIPS, pages 897–904, 2009.
[9] D. Li, S. Somasundaran, and A. Chakraborty. A combination of topic models with max-margin learning for relation detection. In ACL TextGraphs-6 Workshop, 2011.
[10] R.Y. Rubinstein and D.P. Kroese. Simulation and the Monte Carlo Method, volume 707. Wiley-Interscience, 2008.
[11] E. Schofield. Fitting maximum-entropy models on large sample spaces. PhD thesis, Department of Computing, Imperial College London, 2006.
[12] C. Wang, D.M. Blei, and F.F. Li. Simultaneous image classification and annotation. CVPR, 2009.
[13] Y. Wang and G. Mori. Max-margin latent Dirichlet allocation for image classification and annotation. In BMVC, 2011.
[14] S. Yang, J. Bian, and H. Zha. Hybrid generative/discriminative learning for automatic image annotation. In UAI, 2010.
[15] A. Zellner. Optimal information processing and Bayes's theorem. American Statistician, pages 278–280, 1988.
[16] J. Zhu, A. Ahmed, and E.P. Xing. MedLDA: maximum margin supervised topic models for regression and classification. In ICML, pages 1257–1264, 2009.
[17] J. Zhu, N. Chen, and E.P. Xing. Infinite latent SVM for classification and multi-task learning. In NIPS, 2011.
Matrix reconstruction with the local max norm

Rina Foygel, Department of Statistics, Stanford University, rinafb@stanford.edu
Nathan Srebro, Toyota Technological Institute at Chicago, nati@ttic.edu
Ruslan Salakhutdinov, Dept. of Statistics and Dept. of Computer Science, University of Toronto, rsalakhu@utstat.toronto.edu

Abstract

We introduce a new family of matrix norms, the "local max" norms, generalizing existing methods such as the max norm, the trace norm (nuclear norm), and the weighted or smoothed weighted trace norms, which have been extensively used in the literature as regularizers for matrix reconstruction problems. We show that this new family can be used to interpolate between the (weighted or unweighted) trace norm and the more conservative max norm. We test this interpolation on simulated data and on the large-scale Netflix and MovieLens ratings data, and find improved accuracy relative to the existing matrix norms. We also provide theoretical results showing learning guarantees for some of the new norms.

1 Introduction

In the matrix reconstruction problem, we are given a matrix $Y \in \mathbb{R}^{n \times m}$ whose entries are only partly observed, and would like to reconstruct the unobserved entries as accurately as possible. Matrix reconstruction arises in many modern applications, including collaborative filtering (e.g. the Netflix prize) and image and video data, among others. This problem has often been approached using regularization with matrix norms that promote low-rank or approximately-low-rank solutions, including the trace norm (also known as the nuclear norm) and the max norm, as well as several adaptations of the trace norm described below. In this paper, we introduce a unifying family of norms that generalizes these existing matrix norms and that can be used to interpolate between the trace and max norms. We show that this family includes new norms, lying strictly between the trace and max norms, that give empirical and theoretical improvements over the existing norms. We give results allowing for large-scale optimization with norms from the new family. Some proofs are deferred to the Supplementary Materials.

Notation. Without loss of generality we take $n \le m$. We let $\mathbb{R}_+$ denote the nonnegative real numbers. For any $n \in \mathbb{N}$, let $[n] = \{1, \dots, n\}$, and define the simplex on $[n]$ as $\Delta_{[n]} = \{r \in \mathbb{R}_+^n : \sum_i r_i = 1\}$. We analyze situations where the locations of observed entries are sampled i.i.d. according to some distribution $p$ on $[n] \times [m]$. We write $p_{i\cdot} = \sum_j p_{ij}$ for the marginal probability of row $i$, and $p_{\mathrm{row}} = (p_{1\cdot}, \dots, p_{n\cdot}) \in \Delta_{[n]}$ for the marginal row distribution. We define $p_{\cdot j}$ and $p_{\mathrm{col}}$ similarly for the columns. For any matrix $M$, $M_{(i)}$ denotes its $i$-th row.

1.1 Trace norm and max norm

A common regularizer used in matrix reconstruction, and other matrix problems, is the trace norm $\|X\|_{\mathrm{tr}}$, equal to the sum of the singular values of $X$. This norm can also be defined via a factorization of $X$ [1]:

    $\frac{1}{\sqrt{nm}}\|X\|_{\mathrm{tr}} = \min_{AB^\top = X} \frac{1}{2}\Big(\frac{1}{n}\sum_i \|A_{(i)}\|_2^2 + \frac{1}{m}\sum_j \|B_{(j)}\|_2^2\Big),$    (1)

where the minimum is taken over factorizations of $X$ of arbitrary dimension; that is, the number of columns in $A$ and $B$ is unbounded. Note that we choose to scale the trace norm by $1/\sqrt{nm}$ in order to emphasize that we are averaging the squared row norms of $A$ and $B$. Regularization with the trace norm gives good theoretical and empirical results as long as the locations of observed entries are sampled uniformly (i.e., when $p$ is the uniform distribution on $[n] \times [m]$), and, under this assumption, can also be used to guarantee approximate recovery of an underlying low-rank matrix [1, 2, 3, 4].
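As a quick numerical illustration of Eq. (1), the following sketch (our own, not from the paper) computes the left-hand side from the singular values and evaluates the right-hand side objective for any particular factorization $X = AB^\top$, which always gives an upper bound:

```python
import numpy as np

def scaled_trace_norm(X):
    """(1 / sqrt(nm)) * sum of singular values, the left side of Eq. (1)."""
    n, m = X.shape
    return np.linalg.svd(X, compute_uv=False).sum() / np.sqrt(n * m)

def factorization_objective(A, B):
    """Right side of Eq. (1) for a factorization X = A @ B.T:
    half the sum of the averaged squared row norms of A and B."""
    return 0.5 * ((A**2).sum(axis=1).mean() + (B**2).sum(axis=1).mean())
```

For a balanced SVD-based factorization (splitting the singular values between $A$ and $B$ with the appropriate $n$-vs-$m$ scaling), the two quantities coincide, which is a handy sanity check of Eq. (1).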
The factorized definition of the trace norm (1) allows for an intuitive comparison with the max norm, defined as [1]:

    $\|X\|_{\max} = \min_{AB^\top = X} \frac{1}{2}\Big(\sup_i \|A_{(i)}\|_2^2 + \sup_j \|B_{(j)}\|_2^2\Big).$    (2)

We see that the max norm measures the largest row norms in the factorization, while the rescaled trace norm instead considers the average row norms. The max norm is therefore an upper bound on the rescaled trace norm, and can be viewed as a more conservative regularizer.

For the more general setting where $p$ may not be uniform, Foygel and Srebro [4] show that the max norm is still an effective regularizer (in particular, error bounds for the max norm are not affected by $p$). On the other hand, Salakhutdinov and Srebro [5] show that the trace norm is not robust to non-uniform sampling: regularizing with the trace norm may yield large error due to over-fitting on the rows and columns with high marginals. They obtain improved empirical results by placing a larger penalty on these over-represented rows and columns, as described next.

1.2 The weighted trace norm

To reduce overfitting on the rows and columns with high marginal probabilities under the distribution $p$, Salakhutdinov and Srebro propose regularizing with the $p$-weighted trace norm,

    $\|X\|_{\mathrm{tr}(p)} := \big\|\mathrm{diag}(p_{\mathrm{row}})^{1/2} \cdot X \cdot \mathrm{diag}(p_{\mathrm{col}})^{1/2}\big\|_{\mathrm{tr}}.$

If the row and the column of an entry to be observed are sampled independently (i.e., $p = p_{\mathrm{row}} \otimes p_{\mathrm{col}}$ is a product distribution), then the $p$-weighted trace norm can be used to obtain good learning guarantees even when $p_{\mathrm{row}}$ and $p_{\mathrm{col}}$ are non-uniform [3, 6]. However, for non-uniform non-product sampling distributions, even the $p$-weighted trace norm can yield poor generalization performance. To correct for this, Foygel et al. [6] suggest adding some "smoothing" to avoid under-penalizing the rows and columns with low marginal probabilities, and obtain improved empirical and theoretical results for matrix reconstruction using the smoothed weighted trace norm:

    $\|X\|_{\mathrm{tr}(\tilde{p})} := \big\|\mathrm{diag}(\tilde{p}_{\mathrm{row}})^{1/2} \cdot X \cdot \mathrm{diag}(\tilde{p}_{\mathrm{col}})^{1/2}\big\|_{\mathrm{tr}},$

where $\tilde{p}_{\mathrm{row}}$ and $\tilde{p}_{\mathrm{col}}$ denote smoothed row and column marginals, given by

    $\tilde{p}_{\mathrm{row}} = (1 - \tau) \cdot p_{\mathrm{row}} + \tau \cdot \mathbf{1}/n \quad \text{and} \quad \tilde{p}_{\mathrm{col}} = (1 - \tau) \cdot p_{\mathrm{col}} + \tau \cdot \mathbf{1}/m,$    (3)

for some choice of smoothing parameter $\tau$, which may be selected by cross-validation.¹ The smoothed empirically-weighted trace norm is also studied in [6], where $p_{i\cdot}$ is replaced with $\hat{p}_{i\cdot} = \frac{\#\text{ observations in row } i}{\#\text{ total observations}}$, the empirical marginal probability of row $i$ (and similarly for $p_{\cdot j}$). Using empirical rather than "true" weights yielded lower error in the experiments of [6], even when the true sampling distribution was uniform.

More generally, for any weight vectors $r \in \Delta_{[n]}$ and $c \in \Delta_{[m]}$ and a matrix $X \in \mathbb{R}^{n \times m}$, the $(r, c)$-weighted trace norm is given by

    $\|X\|_{\mathrm{tr}(r,c)} = \big\|\mathrm{diag}(r)^{1/2} \cdot X \cdot \mathrm{diag}(c)^{1/2}\big\|_{\mathrm{tr}}.$

Of course, we can easily obtain the existing methods of the uniform trace norm, the (empirically) weighted trace norm, and the smoothed (empirically) weighted trace norm as special cases of this formulation. Furthermore, the max norm is equal to a supremum over all possible weightings [7]:

    $\|X\|_{\max} = \sup_{r \in \Delta_{[n]},\, c \in \Delta_{[m]}} \|X\|_{\mathrm{tr}(r,c)}.$

¹ Our $\tau$ parameter here is equivalent to one minus the smoothing parameter of [6].
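The $(r, c)$-weighted trace norm and the smoothed marginals of Eq. (3) are simple to compute; a minimal sketch of ours:

```python
import numpy as np

def weighted_trace_norm(X, r, c):
    """(r, c)-weighted trace norm: || diag(r)^1/2 X diag(c)^1/2 ||_tr."""
    W = np.sqrt(r)[:, None] * X * np.sqrt(c)[None, :]
    return np.linalg.svd(W, compute_uv=False).sum()

def smoothed_marginals(p_row, tau):
    """Smoothed row marginals of Eq. (3); the same formula applies to
    the column marginals with n replaced by m."""
    n = p_row.shape[0]
    return (1 - tau) * p_row + tau / n
```

Setting `r` and `c` to uniform vectors recovers the standard trace norm (up to the $\sqrt{nm}$ rescaling), while sweeping `tau` traces out the family of smoothed weighted trace norms.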
?[m] , we define the (R, C)-norm of X: kXk(R,C) = sup r?R,c?C kXktr(r,c) . This gives a norm on matrices, except in the trivial case where, for some i or some j, ri = 0 for all r ? R or cj = 0 for all c ? C. We now show some existing and novel norms that can be obtained using local max norms. 2.1 Trace norm and max norm We can obtain the max norm by taking the largest possible R and C, i.e. kXkmax = kXk(?[n] ,?[m] ) , and similarly we can obtain the (r, c)-weighted trace norm by taking the singleton sets R = {r} and C = {c}. As discussed above, this includes the standard trace norm (when r and c are uniform), as well as the weighted, empirically weighted, and smoothed weighted trace norm. 2.2 Arbitrary smoothing When using the smoothed weighted max norm, we need to choose the amount of smoothing to apply to the marginals, that is, we need to choose ? in our definition of the smoothed row and column weights, as given in (3). Alternately, we could regularize simultaneously over all possible amounts of smoothing by considering the local max norm with R = {(1 ? ?) ? prow + ? ? 1/n : any ? ? [0, 1]} , and same for C. That is, R and C are line segments in the simplex?they are larger than any single point as for the uniform or weighted trace norm (or smoothed weighted trace norm for a fixed amount of smoothing), but smaller than the entire simplex as for the max norm. Connection to (?, ? )-decomposability 2.3 Hazan et al. [8] introduce a class of matrices defined by a property of (?, ? )-decomposability: a matrix X satisfies this property if there exists a factorization X = AB > (where A and B may have an arbitrary number of columns) such that2   X X 2 2 A(i) 2 + B(j) 2 ? ? . max max A(i) 2 , max B(j) 2 ? 2?, 2 2 i j i j Comparing with (1) and (2), we see that the ? and ? parameters essentially correspond to the max norm and trace norm, with the max norm being the minimal 2? ? such that the matrix is (? ? , ? )decomposable for some ? , and the trace norm being the minimal ? ? /2 such that the matrix is (?, ? ? )-decomposable for some ?. However, Hazan et al. go beyond these two extremes, and rely on balancing both ? and ? : they establish learning guarantees (in an adversarial online model, and ? thus also under an arbitrary sampling distribution p) which scale with ? ? ? . It may therefore be useful to consider a penalty function of the form: ? ? sX ?r 2 2 2 X 2 ? A(i) + B(j) Penalty(?,? ) (X) = min max A(i) 2 + max B(j) 2 ? . 2 2? i j X=AB > ? i j (4) 2 Hazan et al. state the property differently, but equivalently, in terms of a semidefinite matrix decomposition. 3 n 2 2 2 o 2 (Note that max maxi A(i) 2 , maxj B(j) 2 is replaced with maxi A(i) 2 + maxj B(j) 2 , ? for later convenience. This affects the value of the penalty function by at most a factor of 2.) This penalty function does not appear to be convex in X. However, the proposition below (proved in the Supplementary Materials) shows that we can use a (convex) local max norm penalty to compute a solution to any objective function with a penalty function of the form (4): b be the minimizer of a penalized loss function with this modified penalty, Proposition 1. Let X n o b := arg min Loss(X) + ? ? Penalty(?,? ) (X) , X X where ? ? 0 is some penalty parameter and Loss(?) is any convex function. Then, for some penalty parameter ? ? 0 and some t ? [0, 1], n o b = arg min Loss(X) + ? ? kXk X , where (R,C) X     t t R = r ? ?[n] : ri ? ?i and C = c ? ?[m] : cj ? ?j . 1 + (n ? 1)t 1 + (m ? 1)t We note that ? and t cannot be determined based on ? 
alone?they will depend on the properties of b the unknown solution X. Here the sets R and C impose a lower bound on each of the weights, and this lower bound can be used to interpolate between the max and trace norms: when t = 1, each ri is lower bounded by 1/n (and similarly for c ), i.e. R and C are singletons containing only the uniform weights and we j obtain the trace norm. On the other hand, when t = 0, the weights are lower-bounded by zero, and so any weight vector is allowed, i.e. R and C are each the entire simplex and we obtain the max norm. Intermediate values of t interpolate between the trace norm and max norm and correspond to different balances between ? and ? . 2.4 Interpolating between trace norm and max norm We next turn to an interpolation which relies on an upper bound, rather than a lower bound, on the weights. Consider   R = r ? ?[n] : ri ?  ?i and C? = c ? ?[n] : cj ? ? ?j , (5) for some  ? [1/n, 1] and ? ? [1/m, 1]. The (R , C? )-norm is then equal to the (rescaled) trace norm when we choose  = 1/n and ? = 1/m, and is equal to the max norm when we choose  = ? = 1. Allowing  and ? to take intermediate values gives a smooth interpolation between these two familiar norms, and may be useful in situations where we want more flexibility in the type of regularization. We can generalize this to an interpolation between the max norm and a smoothed weighted trace norm, which we will use in our experimental results. We consider two generalizations?for each one, we state a definition of R, with C defined analogously. The first is multiplicative:  1 R? (6) ?,? := r ? ?[n] : ri ? ? ? ((1 ? ?) ? pi? + ? ? /n) ?i , 1 where ? = 1 corresponds to choosing the singleton set R? ?,? = {(1 ? ?) ? prow + ? ? /n} (i.e. the smoothed weighted trace norm), while ? = ? corresponds to the max norm (for any choice of ?) since we would get R? ?,? = ?[n] . The second option for an interpolation is instead defined with an exponent: n o 1?? R?,? := r ? ?[n] : ri ? ((1 ? ?) ? pi? + ? ? 1/n) ?i . (7) Here ? = 0 will yield the singleton set corresponding to the smoothed weighted trace norm, while ? = 1 will yield R?,? = ?[n] , i.e. the max norm, for any choice of ?. We find the second (exponent) option to be more natural, because each of the row marginal bounds will reach 1 simultaneously when ? = 1, and hence we use this version in our experiments. On the other hand, the multiplicative version is easier to work with theoretically, and we use this in our learning guarantee in Section 4.2. If all of the row and column marginals satisfy some loose upper bound, then the two options will not be highly different. 4 3 Optimization with the local max norm One appeal of both the trace norm and the max norm is that they are both SDP representable [9, 10], and thus easily optimizable, at least in small scale problems. In the Supplementary Materials we show that the local max norm is also SDP representable, as long as the sets R and C can be written in terms of linear or semi-definite constraints?this includes all the examples we mention, where in all of them the sets R and C are specified in terms of simple linear constraints. However, for large scale problems, it is not practical to directly use SDP optimization approaches. 
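Before turning to the scalable alternative, note a useful computational fact (our own illustration, not part of the original development): when R and C are given by element-wise caps, as in (8) below, the supremum over weight vectors defining the norm is, for any fixed factorization, a linear program over a capped simplex and can be solved by a greedy fill. A minimal sketch in Python, with function names of our choosing:

```python
import numpy as np

def sup_weighted_sum(s, caps):
    """sup of sum_i r_i * s_i over {r : sum_i r_i = 1, 0 <= r_i <= caps_i}.

    A linear program over a capped simplex: the optimum pours mass onto
    the largest s_i first (fractional-knapsack greedy). Assumes the caps
    sum to at least 1, so the feasible set is nonempty.
    """
    mass, total = 1.0, 0.0
    for i in np.argsort(-s):
        take = min(caps[i], mass)
        total += take * s[i]
        mass -= take
        if mass <= 0:
            break
    return total

def factorized_objective(A, B, row_caps, col_caps):
    # The factorized objective for one fixed factorization X = A @ B.T:
    # (1/2) [ sup_{r in R} sum_i r_i ||A_(i)||^2 + sup_{c in C} sum_j c_j ||B_(j)||^2 ].
    sa = (A ** 2).sum(axis=1)  # squared row norms of A
    sb = (B ** 2).sum(axis=1)  # squared row norms of B
    return 0.5 * (sup_weighted_sum(sa, row_caps) + sup_weighted_sum(sb, col_caps))
```

With all caps equal to 1, the supremum picks out the single largest squared row norm and the objective reduces to the max norm form (2); with caps 1/n and 1/m the weights are forced to be uniform and it reduces to the rescaled trace norm form (1). Minimizing this quantity over factorizations recovers the (R, C)-norm, as Theorem 1 below makes precise.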
Instead, and especially for very large scale problems, an effective optimization approach for both the trace norm and the max norm is to use the factorized versions of the norms, given in (1) and (2), and to optimize the factorization directly (typically, only factorizations of some truncated dimensionality are used) [11, 12, 7]. As we show in Theorem 1 below, a similar factorization-optimization approach is also possible for any local max norm with convex R and C. We further give a simplified representation which is applicable when R and C are specified through element-wise upper bounds R ∈ R^n_+ and C ∈ R^m_+, respectively:

R = {r ∈ Δ_[n] : r_i ≤ R_i ∀i} and C = {c ∈ Δ_[m] : c_j ≤ C_j ∀j},   (8)

with 0 ≤ R_i ≤ 1, Σ_i R_i ≥ 1, 0 ≤ C_j ≤ 1, Σ_j C_j ≥ 1 to avoid triviality. This includes the interpolation norms of Section 2.4.

Theorem 1. If R and C are convex, then the (R, C)-norm can be calculated with the factorization

‖X‖_(R,C) = (1/2) inf_{AB^⊤ = X} ( sup_{r∈R} Σ_i r_i ‖A_(i)‖²₂ + sup_{c∈C} Σ_j c_j ‖B_(j)‖²₂ ).   (9)

In the special case when R and C are defined by (8), writing (x)₊ := max{0, x}, this simplifies to

‖X‖_(R,C) = (1/2) inf_{AB^⊤ = X; a,b ∈ R} { [ a + Σ_i R_i ( ‖A_(i)‖²₂ − a )₊ ] + [ b + Σ_j C_j ( ‖B_(j)‖²₂ − b )₊ ] }.

Proof sketch for Theorem 1. For convenience we will write r^{1/2} to mean diag(r)^{1/2}, and same for c. Using the trace norm factorization identity (1), we have

2 ‖X‖_(R,C) = 2 sup_{r∈R, c∈C} ‖ r^{1/2} · X · c^{1/2} ‖_tr = sup_{r∈R, c∈C} inf_{CD^⊤ = r^{1/2} X c^{1/2}} ( ‖C‖²_F + ‖D‖²_F )
= sup_{r∈R, c∈C} inf_{AB^⊤ = X} ( ‖ r^{1/2} A ‖²_F + ‖ c^{1/2} B ‖²_F ) ≤ inf_{AB^⊤ = X} ( sup_{r∈R} ‖ r^{1/2} A ‖²_F + sup_{c∈C} ‖ c^{1/2} B ‖²_F ),

where for the next-to-last step we set C = r^{1/2} A and D = c^{1/2} B, and the last step follows because sup inf ≤ inf sup always (weak duality). The reverse inequality holds as well (strong duality), and is proved in the Supplementary Materials, where we also prove the special-case result.

4 An approximate convex hull and a learning guarantee

In this section, we look for theoretical bounds on error for the problem of estimating unobserved entries in a matrix Y that is approximately low-rank. Our results apply for either uniform or non-uniform sampling of entries from the matrix. We begin with a result comparing the (R, C)-norm unit ball to a convex hull of rank-1 matrices, which will be useful for proving our learning guarantee.

4.1 Convex hull

To gain a better theoretical understanding of the (R, C)-norm, we first need to define corresponding vector norms on R^n and R^m. For any u ∈ R^n, let

‖u‖_R := sup_{r∈R} √( Σ_i r_i u_i² ) = sup_{r∈R} ‖ diag(r)^{1/2} u ‖₂.

We can think of this norm as a way to interpolate between the ℓ₂ and ℓ_∞ vector norms. For example, if we choose R = R_ε as defined in (5), then ‖u‖_R is equal to the root-mean-square of the ε^{-1} largest entries of u whenever ε^{-1} is an integer. Defining ‖v‖_C analogously for v ∈ R^m, we can now relate these vector norms to the (R, C)-norm on matrices.

Theorem 2. For any convex R ⊆ Δ_[n] and C ⊆ Δ_[m], the (R, C)-norm unit ball is bounded above and below by a convex hull as:

Conv{ uv^⊤ : ‖u‖_R = ‖v‖_C = 1 } ⊆ { X : ‖X‖_(R,C) ≤ 1 } ⊆ K_G · Conv{ uv^⊤ : ‖u‖_R = ‖v‖_C = 1 },

where K_G ≤ 1.79 is Grothendieck's constant, and implicitly u ∈ R^n, v ∈ R^m. This result is a nontrivial extension of Srebro and Shraibman [1]'s analysis for the max norm and the trace norm. They show that the statement holds for the max norm, i.e. when R = Δ_[n] and C = Δ_[m], and that the trace norm unit ball is exactly equal to the corresponding convex hull (see Corollary 2 and Section 3.2 in their paper, respectively).
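Before the proof sketch, here is a small self-contained numerical check of the vector-norm characterization used in Theorem 2 (our own illustration; the helper name is arbitrary):

```python
import numpy as np

def vector_R_norm(u, caps):
    """||u||_R = sup over the capped simplex {r : r_i <= caps_i} of
    sqrt(sum_i r_i u_i^2); the same greedy capped-simplex argument
    as for the matrix norm applies, on the squared entries of u."""
    s, mass, total = u ** 2, 1.0, 0.0
    for i in np.argsort(-s):
        take = min(caps[i], mass)
        total += take * s[i]
        mass -= take
        if mass <= 0:
            break
    return np.sqrt(total)

# For R = R_eps = {r : r_i <= eps}, the norm equals the root-mean-square
# of the 1/eps largest entries of u whenever 1/eps is an integer:
rng = np.random.default_rng(0)
u = rng.normal(size=10)
eps = 0.2                                   # 1/eps = 5
top5 = np.sort(u ** 2)[-5:]
assert np.isclose(vector_R_norm(u, np.full(10, eps)), np.sqrt(top5.mean()))
```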
Proof sketch for Theorem 2. To prove the first inclusion, given any X = uv^⊤ with ‖u‖_R = ‖v‖_C = 1, we apply the factorization result Theorem 1 to see that ‖X‖_(R,C) ≤ 1. Since the (R, C)-norm unit ball is convex, this is sufficient. For the second inclusion, we state a weighted version of Grothendieck's Inequality (proof in the Supplementary Materials):

sup{ ⟨Y, UV^⊤⟩ : U ∈ R^{n×k}, V ∈ R^{m×k}, ‖U_(i)‖₂ ≤ a_i ∀i, ‖V_(j)‖₂ ≤ b_j ∀j }
= K_G · sup{ ⟨Y, uv^⊤⟩ : u ∈ R^n, v ∈ R^m, |u_i| ≤ a_i ∀i, |v_j| ≤ b_j ∀j }.

We then apply this weighted inequality to the dual norm of the (R, C)-norm to prove the desired inclusion, as in Srebro and Shraibman [1]'s work for the max norm case (see Corollary 2 in their paper). Details are given in the Supplementary Materials.

4.2 Learning guarantee

We now give our main matrix reconstruction result, which provides error bounds for a family of norms interpolating between the max norm and the smoothed weighted trace norm.

Theorem 3. Let p be any distribution on [n] × [m]. Suppose that, for some γ ≥ 1, R ⊆ R×_{1/2,γ} and C ⊆ C×_{1/2,γ}, where these two sets are defined in (6). Let S = {(i_t, j_t) : t = 1, . . . , s} be a random sample of locations in the matrix drawn i.i.d. from p, where s ≥ n. Then, in expectation over the sample S,

Σ_{ij} p_ij |Y_ij − X̂_ij| ≤ inf_{‖X‖_(R,C) ≤ √k} Σ_{ij} p_ij |Y_ij − X_ij|  [approximation error]  +  O( √(kn/s) · (1 + √(log(n)/γ)) )  [excess error],

where X̂ = arg min_{‖X‖_(R,C) ≤ √k} Σ_{t=1}^{s} |Y_{i_t j_t} − X_{i_t j_t}|. Additionally, if we assume that s ≥ n log(n), then in the excess risk bound, we can reduce the term log(n) to √(log(n)).

Proof sketch for Theorem 3. The main idea is to use the convex hull formulation from Theorem 2 to show that, for any X with ‖X‖_(R,C) ≤ √k, there exists a decomposition X = X′ + X″ with ‖X′‖_max ≤ O(√k) and ‖X″‖_tr(p̃) ≤ O(√(k/γ)), where p̃ represents the smoothed marginals with smoothing parameter τ = 1/2 as in (3). We then apply known bounds on the Rademacher complexity of the max norm unit ball [1] and the smoothed weighted trace norm unit ball [6], to bound the Rademacher complexity of { X : ‖X‖_(R,C) ≤ √k }. This then yields a learning guarantee by Theorem 8 of Bartlett and Mendelson [13]. Details are given in the Supplementary Materials.

As special cases of this theorem, we can re-derive the existing results for the max norm and smoothed weighted trace norm. Specifically, choosing γ = ∞ gives us an excess error term of order √(kn/s) for the max norm, previously shown by [1], while setting γ = 1 yields an excess error term of order √(kn log(n)/s) for the smoothed weighted trace norm as long as s ≥ n log(n), as shown in [6].

What advantage does this new result offer over the existing results for the max norm and for the smoothed weighted trace norm? To simplify the comparison, suppose we choose γ = log²(n), and define R = R×_{1/2,γ} and C = C×_{1/2,γ}. Then, comparing to the max norm result (when γ = ∞), we see that the excess error term is the same in both cases (up to a constant), but the approximation error term may in general be much lower for the local max norm than for the max norm. Comparing next to the weighted trace norm (when γ = 1), we see that the excess error term is lower by a factor of log(n) for the local max norm. This may come at a cost of increasing the approximation error, but in general this increase will be very small. In particular, the local max norm result allows us to give a meaningful guarantee for a sample size s = Ω(kn), rather than requiring s ≥ Ω(kn log(n)) as for any trace norm result, but with a hypothesis class significantly richer than the max norm constrained class (though not as rich as the trace norm constrained class).

5 Experiments

We test the local max norm on simulated and real matrix reconstruction tasks, and compare its performance to the max norm, the uniform and empirically-weighted trace norms, and the smoothed empirically-weighted trace norm.

5.1 Simulations

We simulate n × n noisy matrices for n = 30, 60, 120, 240, where the underlying signal has rank k = 2 or k = 4, and we observe s = 3kn entries (chosen uniformly without replacement). We performed 50 trials for each of the 8 combinations of (n, k).

Data. For each trial, we randomly draw a matrix U ∈ R^{n×k} by drawing each row uniformly at random from the unit sphere in R^k. We generate V ∈ R^{m×k} similarly. We set Y = UV^⊤ + σ · Z, where the noise matrix Z has i.i.d. standard normal entries and σ = 0.3 is a moderate noise level. We also divide the n² entries of the matrix into sets S₀ ⊔ S₁ ⊔ S₂, which consist of s = 3kn training entries, s validation entries, and n² − 2s test entries, respectively, chosen uniformly at random.

Methods. We use the two-parameter family of norms defined in (7), but replacing the true marginals p_i· and p_·j with the empirical marginals p̂_i· and p̂_·j. For each ζ, τ ∈ {0, 0.1, . . . , 0.9, 1} and each penalty parameter value λ ∈ {2¹, 2², . . . , 2¹⁰}, we compute the fitted matrix

X̂ = arg min_X { Σ_{(i,j)∈S₀} (Y_ij − X_ij)² + λ · ‖X‖_(R_{ζ,τ}, C_{ζ,τ}) }.   (10)

(In fact, we use a rank-8 approximation to this optimization problem, as described in Section 3.) For each method, we use S₁ to select the best ζ, τ, and λ, with restrictions on ζ and/or τ as specified by the definition of the method (see Table 1), then report the error of the resulting fitted matrix on S₂.

Table 1: Matrix fitting for the five methods used in experiments.
Norm | Fixed parameters | Free parameters
Max norm | τ arbitrary; ζ = 1 | λ
(Uniform) trace norm | τ = 1; ζ = 0 | λ
Empirically-weighted trace norm | τ = 0; ζ = 0 | λ
Arbitrarily-smoothed emp.-wtd. trace norm | ζ = 0 | τ; λ
Local max norm | (none) | ζ; τ; λ

Results. The results for these simulations are displayed in Figure 1. We see that the local max norm results in lower error than any of the tested existing norms, across all the settings used.

[Figure 1: two panels plotting mean squared error per entry against matrix dimension n ∈ {30, 60, 120, 240}, for the methods Trace, Emp. trace, Smth. trace, Max, and Local max.] Figure 1: Simulation results for matrix reconstruction with a rank-2 (left) or rank-4 (right) signal, corrupted by noise. The plot shows per-entry squared error averaged over 50 trials, with standard error bars. For the rank-4 experiment, max norm error exceeded 0.20 for each n = 60, 120, 240 and is not displayed in the plot.

5.2 Movie ratings data

We next compare several different matrix norms on two collaborative filtering movie ratings datasets, the Netflix [14] and MovieLens [15] datasets. The sizes of the data sets, and the split of the ratings into training, validation and test sets³, are:

Dataset | # users | # movies | Training set | Validation set | Test set
Netflix | 480,189 | 17,770 | 100,380,507 | 100,000 | 1,408,395
MovieLens | 71,567 | 10,681 | 8,900,054 | 100,000 | 1,000,000

We test the local max norm given in (7) with τ ∈ {0, 0.05, 0.1, 0.15, 0.2} and ζ ∈ {0, 0.05, 0.1}. We also test ζ = 1 (the max norm; here τ is arbitrary) and τ = 1, ζ = 0 (the uniform trace norm). We follow the test protocol of [6], with a rank-30 approximation to the optimization problem (10). Table 2 shows root mean squared error (RMSE) for the experiments. For both the MovieLens and Netflix data, the local max norm with τ = 0.05 and ζ = 0.05 gives strictly better accuracy than any previously-known norm studied in this setting. (In practice, we can use a validation set to reliably select good values for the τ and ζ parameters⁴.) For the MovieLens data, the local max norm achieves RMSE of 0.7822, compared to 0.7831 achieved by the smoothed empirically-weighted trace norm with τ = 0.10, which gives the best result among the previously-known norms. For the Netflix dataset the local max norm achieves RMSE of 0.9090, improving upon the previous best result of 0.9095 achieved by the smoothed empirically-weighted trace norm [6].

Table 2: Root mean squared error (RMSE) results for estimating movie ratings on Netflix and MovieLens data using a rank 30 model. Setting ζ = 0 corresponds to the uniform or weighted or smoothed weighted trace norm (depending on τ), while ζ = 1 corresponds to the max norm for any τ value.

MovieLens (τ \ ζ) | 0.00 | 0.05 | 0.10 | 1.00
0.00 | 0.7852 | 0.7827 | 0.7838 | 0.7918
0.05 | 0.7836 | 0.7822 | 0.7842 | —
0.10 | 0.7831 | 0.7837 | 0.7846 | —
0.15 | 0.7833 | 0.7842 | 0.7854 | —
0.20 | 0.7842 | 0.7853 | 0.7866 | —
1.00 | 0.7997 | — | — | —

Netflix (τ \ ζ) | 0.00 | 0.05 | 0.10 | 1.00
0.00 | 0.9107 | 0.9092 | 0.9094 | 0.9131
0.05 | 0.9095 | 0.9090 | 0.9107 | —
0.10 | 0.9096 | 0.9098 | 0.9122 | —
0.15 | 0.9102 | 0.9111 | 0.9131 | —
0.20 | 0.9126 | 0.9344 | 0.9153 | —
1.00 | 0.9235 | — | — | —

6 Summary

In this paper, we introduce a unifying family of matrix norms, called the "local max" norms, that generalizes existing methods for matrix reconstruction, such as the max norm and trace norm. We examine some interesting sub-families of local max norms, and consider several different options for interpolating between the trace (or smoothed weighted trace) and max norms. We find norms lying strictly between the trace norm and the max norm that give improved accuracy in matrix reconstruction for both simulated data and real movie ratings data. We show that regularizing with any local max norm is fairly simple to optimize, and give a theoretical result suggesting improved matrix reconstruction using new norms in this family.

Acknowledgements R.F. is supported by NSF grant DMS-1203762. R.S. is supported by NSERC and Early Researcher Award.

³ For Netflix, the test set we use is their "qualification set", designed for a more uniform distribution of ratings across users relative to the training set. For MovieLens, we choose our test set at random from the available data.
⁴ To check this, we subsample half of the test data at random, and use it as a validation set to choose (τ, ζ) for each method (as specified in Table 1). We then evaluate error on the remaining half of the test data. For MovieLens, the local max norm gives an RMSE of 0.7820 with selected parameter values τ = ζ = 0.05, as compared to an RMSE of 0.7829 with selected smoothing parameter τ = 0.10 for the smoothed weighted trace norm. For Netflix, the local max norm gives an RMSE of 0.9093 with τ = ζ = 0.05, while the smoothed weighted trace norm gives an RMSE of 0.9098 with τ = 0.05. The other tested methods give higher error on both datasets.

References
[1] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. 18th Annual Conference on Learning Theory (COLT), pages 545–560, 2005.
[2] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057–2078, 2010.
[3] S. Negahban and M. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. arXiv:1009.2118, 2010.
[4] R. Foygel and N. Srebro. Concentration-based guarantees for low-rank matrix reconstruction. 24th Annual Conference on Learning Theory (COLT), 2011.
[5] R. Salakhutdinov and N. Srebro. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. Advances in Neural Information Processing Systems, 23, 2010.
[6] R. Foygel, R. Salakhutdinov, O. Shamir, and N. Srebro. Learning with the weighted trace-norm under arbitrary sampling distributions. Advances in Neural Information Processing Systems, 24, 2011.
[7] J. Lee, B. Recht, R. Salakhutdinov, N. Srebro, and J. Tropp. Practical large-scale optimization for max-norm regularization. Advances in Neural Information Processing Systems, 23, 2010.
[8] E. Hazan, S. Kale, and S. Shalev-Shwartz. Near-optimal algorithms for online matrix prediction. 25th Annual Conference on Learning Theory (COLT), 2012.
[9] M. Fazel, H. Hindi, and S. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings of the 2001 American Control Conference, volume 6, pages 4734–4739, 2002.
[10] N. Srebro, J.D.M. Rennie, and T.S. Jaakkola. Maximum-margin matrix factorization. Advances in Neural Information Processing Systems, 18, 2005.
[11] J.D.M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning, pages 713–719. ACM, 2005.
[12] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. Advances in Neural Information Processing Systems, 20:1257–1264, 2008.
[13] P. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[14] J. Bennett and S. Lanning. The Netflix Prize. In Proceedings of KDD Cup and Workshop, volume 2007, page 35. Citeseer, 2007.
[15] MovieLens Dataset. Available at http://www.grouplens.org/node/73. 2006.
Bandit Algorithms boost motor-task selection for Brain Computer Interfaces Joan Fruitet INRIA, Sophia Antipolis 2004 Route des Lucioles 06560 Sophia Antipolis, France [email protected] Alexandra Carpentier Statistical Laboratory, CMS Wilberforce Road, Cambridge CB3 0WB UK [email protected] R?emi Munos INRIA Lille - Nord Europe 40, avenue Halley 59000 Villeneuve d?ascq, France [email protected] Maureen Clerc INRIA, Sophia Antipolis 2004 Route des Lucioles 06560 Sophia Antipolis, France [email protected] Abstract Brain-computer interfaces (BCI) allow users to ?communicate? with a computer without using their muscles. BCI based on sensori-motor rhythms use imaginary motor tasks, such as moving the right or left hand, to send control signals. The performances of a BCI can vary greatly across users but also depend on the tasks used, making the problem of appropriate task selection an important issue. This study presents a new procedure to automatically select as fast as possible a discriminant motor task for a brain-controlled button. We develop for this purpose an adaptive algorithm, UCB-classif , based on the stochastic bandit theory. This shortens the training stage, thereby allowing the exploration of a greater variety of tasks. By not wasting time on inef?cient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more ef?cient use of the BCI training session. Comparing the proposed method to the standard practice in task selection, for a ?xed time budget, UCB-classif leads to an improved classi?cation rate, and for a ?xed classi?cation rate, to a reduction of the time spent in training by 50%. 1 Introduction Scalp recorded electroencephalography (EEG) can be used for non-muscular control and communication systems, commonly called brain-computer interfaces (BCI). BCI allow users to ?communicate? with a computer without using their muscles. The communication is made directly through the electrical activity from the brain, collected by EEG in real time. This is a particularly interesting prospect for severely handicapped people, but it can also be of use in other circumstances, for instance for enhanced video games. A possible way of communicating through the BCI is by using sensori-motor rhythms (SMR), which are modulated in the course of movement execution or movement imagination. The SMR corresponding to movement imagination can be detected after pre-processing the EEG, which is corrupted by important noise, and after training (see [1, 2, 3]). A well-trained classi?er can then use features of the SMR in order to discriminate periods of imagined movement from resting periods, when the user is idle. The detected mental states can be used as buttons in a Brain Computer Interface, mimicking traditional interfaces such as keyboard or mouse button. This paper deals with training a BCI corresponding to a single brain-controlled button (see [2, 4]), in which a button is pressed (and instantaneously released) when a certain imagined movement is detected. The important steps are thus to ?nd a suitable imaginary motor task, and to train a 1 classi?er. This is far from trivial, because appropriate tasks which can be well classi?ed from the background resting state are highly variable among subjects; moreover, the classi?er requires to be trained on a large set of labeled data. 
The setting up of such a brain-controlled button can be very time consuming, given that many training examples need to be acquired for each of the imaginary motor task to be tested. The usual training protocol for a brain-controlled button is to display sequentially to the user a set of images, that serve as prompts to perform the corresponding imaginary movements. The collected data are used to train the classi?er, and to select the imaginary movement that seems to provide the highest classi?cation rate (compared to the background resting state). We refer to this imaginary movement as the ?best imaginary movement?. In this paper, we focus on the part of the training phase that consists in ef?ciently ?nding this best imaginary movement. This is an important problem, since the SMR collected by the EEG are heterogeneously noisy: some imaginary motor tasks will provide higher classi?cation rates than others. In the literature, ?nding such imaginary motor tasks is deemed an essential issue (see [5, 6, 7]), but, to the best of our knowledge, no automatized protocol has yet been proposed to deal with it. We believe that enhancing the ef?ciency of the training phase is made even more essential by the facts that (i) the best imaginary movement differs from one user to another, e.g. the best imaginary movement for one user could be to imagine moving the right hand, and for the next, to imagine moving both feet (see [8]) and (ii) using a BCI requires much concentration, and a long training phase exhausts the user. If an ?oracle? were able to state what the best imaginary movement is, then the training phase would consist only in requiring the user to perform this imaginary movement. The training set for the classi?er on this imaginary movement would be large, and no training time would be wasted in asking the user to perform sub-optimal and thus useless imaginary movements. The best imaginary movement is however not known in advance, and so the commonly used strategy (which we will refer to as uniform) consists in asking the user to perform all the movements a ?xed number of times. An alternative strategy is to learn while building the training set what imaginary movements seem the most promising, and ask the classi?er to perform these more often. This problem is quite archetypal to a ?eld of Machine Learning called Bandit Theory (initiated in [9]). Indeed, the main idea in Bandit Theory is to mix the Exploration of the possible actions1 , and their Exploitation to perform the empirical best action. Contributions This paper builds on ideas of Bandit Theory, in order to propose an ef?cient method to select the best imaginary movement for the activation of a brain-controlled button. To the best of our knowledge, this is the ?rst contribution to the automation and optimization of this task selection. ? We design a BCI experiment for imaginary motor task selection, and collect data on several subjects, for different imaginary motor tasks, in the aim of testing our methods. ? We provide a bandit algorithm (which is strongly inspired by the Upper Con?dence Bound Algorithm of [10]) adapted to this classi?cation problem. In addition, we propose several variants of this algorithm that are intended to deal with other slightly different scenarios that the practitioner might face. We believe that this bandit-based classi?cation technique is of independent interest and could be applied to other task selection procedures under constraints on the samples. ? 
We provide empirical evidence that using such an algorithm considerably speeds up the training phase for the BCI. We gain up to 18% in terms of classi?cation rate, and up to 50% in training time, when compared to the uniform strategy traditionally used in the literature. The rest of the paper is organized as follows: in Section 2, we describe the EEG experiment we built in order to acquire data and simulate the training of a brain-controlled button. In Section 3, we model the task selection as a bandit problem, which is solved using an Upper Con?dence Bound algorithm. We motivate the choice of this algorithm by providing a performance analysis. Section 4, which is the main focus of this paper, presents results on simulated experiments, and proves empirically the gain brought forth by adaptive algorithms in this setting. We then conclude this paper with further perspectives. 1 Here, the actions are images displayed to the BCI user as prompts to perform the corresponding imaginary tasks. 2 2 Material and protocol BCI systems based on SMR rely on the users? ability to control their SMR in the mu (8-13Hz) and/or beta (16-24Hz) frequency bands [1, 2, 3]. Indeed, these rhythms are naturally modulated during real and imagined motor action. More precisely, real and imagined movements similarly activate neural structures located in the sensori-motor cortex, which can be detected in EEG recordings through increases in power (event related synchronization or ERS) and/or decreases in power (event related de-synchronization or ERD) in the mu and beta frequency bands [11, 12]. Because of the homuncular organization of the sensori-motor cortex [13], different limb movements may be distinguished according to the spatial layout of the ERD/ERS. BCI based on the control of SMR generally use movements lasting several seconds, that enable continuous control of multidimensional interfaces [1]. On the contrary this work targets a braincontrolled button that can be rapidly triggered by a short motor task [2, 4]. A vast variety of motor tasks can be used in this context, like imagining rapidly moving the hand, grasping an object, or kicking an imaginary ball. We remind that the best imaginary movement differs from one user to another (see [8]). As explained in the Introduction, the use of a BCI must always be preceded by a training phase. In the case of a BCI managing a brain-controlled button through SMR, this training phase consists in displaying to the user a sequence of images corresponding to movements, that he/she must imagine performing. By processing the EEG, the SMR associated to the imaginary movements and to idle periods can be extracted. Collecting these labeled data results in a training set, which serves to train the classi?er between the movements, and the idle periods. The imaginary movement with highest classi?cation rate is then selected to activate the button in the actual use of the BCI. The rest of this Section explains in more detail the BCI material and protocol used to acquire the EEG, and to extract the features from the signal. 2.1 The EEG experiment The EEG experiment was similar to the training of a brain-controlled button: we presented, at random timing, cue images during which the subjects were asked to perform 2 second long motor tasks (intended to activate the button). Six right-handed subjects, aged 24 to 39, with no disabilities, were sitting at 1.5m of a 23? LCD screen. 
EEG was recorded at a sampling rate of 512 Hz via 11 scalp electrodes of a 64-channel cap and amplified with a TMSI amplifier (see Figure 1). The OpenViBE platform [14] was used to run the experiment. The signal was filtered in time through a band-pass filter, and in space through a surface Laplacian to increase the signal-to-noise ratio. The experiment was composed of 5 to 12 blocks of approximately 5 minutes. During each block, 4 cue images were presented for 2 seconds in a random order, 10 times each. The time between two image presentations varied between 1.5 s and 10 s. Each cue image was a prompt for the subject to perform or imagine the corresponding motor action during 2 seconds, namely moving the right or left hand, the feet or the tongue.

2.2 Feature extraction

In the case of short motor tasks, the movement (real or imagined) produces an ERD in the mu and beta bands during the task, and is followed by a strong ERS [4] (sometimes called beta rebound, as it is most easily seen in the beta frequency band). We extracted features of the mu and beta bands during the 2-second windows of the motor action and in the subsequent 1.5 seconds of signal, in order to use the bursts of mu and beta power (ERS or rebound) that follow the indicated movement. Figure 1 shows a time-frequency map on which the movement and rebound windows are indicated. One may observe that, during the movement, the power in the mu and beta bands decreases (ERD) and that, approximately 1 second after the movement, it increases to reach a higher level than in the resting state (ERS). More precisely, the features were chosen as the power around 12 Hz and 18 Hz extracted at 3 electrodes over the sensori-motor cortex (C3, C4 and Cz). Thus, 6 features are extracted during the movement and 6 during the rebound. The lengths and positions of the windows and the frequency bands were chosen according to a preliminary study with one of the subjects and were deliberately kept fixed for the other subjects.

One of the goals of our algorithm is to be able to select the best task among a large number of tasks. However, in our experiment, only a limited number of tasks were used (four), because we limited the length of the sessions in order not to tire the subjects. To demonstrate the usefulness of our method for a larger number of tasks, we decided to create artificial (degraded) tasks by mixing the features of one of the real tasks (the feet) with different proportions of the features extracted during the resting period.

Figure 1: A: Layout of the 64-channel EEG cap, with (in black) the 3 electrodes from which the features are extracted. The electrodes marked in blue/grey are used for the Laplacian. B: Time-frequency map of the signal recorded on electrode C3, for a right hand movement lasting 2 seconds (subject 1). Four features (red windows) are extracted for each of the 3 electrodes.
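To make this feature extraction step concrete, here is a hedged sketch of how the 12 features per cue could be computed. It is our own simplification, not the authors' code: the surface Laplacian is assumed to have been applied already, and the exact band edges around 12 Hz and 18 Hz (here 10-14 Hz and 16-20 Hz) are our guesses.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 512  # sampling rate of the recordings (Hz)

def band_power(x, low, high, fs=FS):
    """Mean power of signal x in the [low, high] Hz band (4th-order Butterworth)."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    return np.mean(filtfilt(b, a, x) ** 2)

def extract_features(trial, cue, fs=FS):
    """12 features for one cue: 3 channels x 2 bands x 2 time windows.

    `trial` is a (3, n_samples) array holding the Laplacian-filtered
    C3, C4 and Cz channels; `cue` is the sample index of the cue onset.
    Windows follow the text: 2 s of (imagined) movement, then 1.5 s
    capturing the beta rebound.
    """
    windows = [slice(cue, cue + 2 * fs),                         # movement (ERD)
               slice(cue + 2 * fs, cue + 2 * fs + 3 * fs // 2)]  # rebound (ERS)
    return np.array([band_power(trial[ch, w], lo, hi)
                     for ch in range(3)
                     for (lo, hi) in [(10, 14), (16, 20)]  # around 12 and 18 Hz
                     for w in windows])
```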
2.3 Evaluation of performances

For each task k, we can classify between when the subject is inactive and when he/she is performing task k. Consider a sample (X, Y) ∼ D_k, where D_k is the distribution of the data restricted to task k and the idle task (task 0), X is the feature set, and Y is the label (1 if the sample corresponds to task k and 0 otherwise). We consider a compact set of classifiers H. Define the best classifier in H for task k as

h*_k = arg min_{h∈H} E_{(X,Y)∼D_k}[ 1{h(X) ≠ Y} ].

Define the theoretical classification rate r*_k of task k as the probability of correctly labeling a new data sample drawn from D_k with the best classifier h*_k, that is to say

r*_k = 1 − P_{(X,Y)∼D_k}( h*_k(X) ≠ Y ).

At time t, there are T_{k,t} + T_{0,t} samples (X_i, Y_i), i = 1, . . . , T_{k,t} + T_{0,t}, available (where T_{k,t} is the number of samples for task k, and T_{0,t} is the number of samples for the idle task). With these data, we build the empirical minimizer of the loss,

ĥ_{k,t} = arg min_{h∈H} Σ_{i=1}^{T_{k,t}+T_{0,t}} 1{h(X_i) ≠ Y_i},

and we define the empirical classification rate of this classifier as

r̂_{k,t} = 1 − (1 / (T_{k,t}+T_{0,t})) · min_{h∈H} Σ_{i=1}^{T_{k,t}+T_{0,t}} 1{h(X_i) ≠ Y_i}.

Since during our experiments we collect, between each imaginary task, a sample of the idle condition, we have T_{0,t} ≥ T_{k,t}. From Vapnik-Chervonenkis theory (see [15] and also the Supplementary Material), we obtain, with probability 1 − δ, that the generalization performance of the classifier ĥ_{k,t} is within O(√(d log(1/δ)/T_{k,t})) of that of h*_k, where d is the VC dimension of the domain of X. This implies that the performance of the optimal empirical classifier for task k is close to the performance of the optimal classifier for task k. Also with probability 1 − δ,

| r̂_{k,t} − r*_k | = O( √( d log(1/δ) / T_{k,t} ) ).   (1)

We consider in this paper linear classifiers. In this case, the VC dimension d is the dimension of X, i.e. the number of features. The (0, 1) loss we considered is difficult to minimize in practice because it is not convex. This is why we consider in this work the classifier ĥ_{k,t} provided by a linear SVM. We also estimate the performance r̂_{k,t} of this classifier by cross-validation: we use the leave-one-out technique when fewer than 8 samples of the task are available, and an 8-fold validation when more repetitions of the task have been recorded. As explained in [15], results similar to Equation 1 hold for this classifier. We will use the results of Equation 1 in the next Section, in order to select as fast as possible the task with highest r*_k and collect as many samples from it as possible.

3 A bandit algorithm for optimal task selection

In order to improve the efficiency of the training phase, it is important to find out as fast as possible which are the most promising imaginary tasks (i.e. tasks with large r*_k). Indeed, it is important to collect as many samples as possible from the best imaginary movement, so that the classifier built for this task is as precise as possible. In this Section, we propose the UCB-classif algorithm, inspired by the Upper Confidence Bound algorithm in Bandit Theory (see [10]).

3.1 Modeling the problem by a multi-armed bandit

Let K denote the number of different tasks (in our experiment, the imaginary movements of the feet, tongue, right hand and left hand, plus 4 additional degraded tasks, for a total of K = 8 actions) and N the total number of rounds (the budget) of the training stage. Our goal is to find a presentation strategy for the images (i.e. a rule that chooses, at each time step t ∈ {1, . . . , N}, an image k_t ∈ {1, . . . , K} to show) which allows us to determine the "best", i.e. most discriminative, imaginary movement: the one with the highest classification rate in generalization. Note that, in order to learn an efficient classifier, we need as many training data as possible, so our presentation strategy should rapidly focus on the most promising tasks in order to obtain more samples from these rather than from the ones with small classification rate. This issue is relatively close to the stochastic bandit problem [9]. The classical stochastic bandit problem is defined by a set of K actions (pulling different arms of bandit machines), and to each action is assigned a reward distribution, initially unknown to the learner. At time t ∈ {1, . . . , N}, if we choose an action k_t ∈ {1, . . . , K}, we receive a reward sample drawn independently from the distribution of the corresponding action k_t.
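In our setting, the "reward" information available for task k is exactly the cross-validated rate r̂_{k,t} of Section 2.3. As a concrete illustration (our own sketch; the original implementation is not specified beyond "linear SVM", and scikit-learn is our choice of library), it could be computed along these lines:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def estimate_rate(task_feats, idle_feats):
    """Cross-validated classification rate r-hat for one task vs. idle.

    Leave-one-out when fewer than 8 repetitions of the task are
    available, 8-fold cross-validation otherwise (Section 2.3).
    """
    X = np.vstack([task_feats, idle_feats])
    y = np.r_[np.ones(len(task_feats)), np.zeros(len(idle_feats))]
    cv = LeaveOneOut() if len(task_feats) < 8 else 8
    return cross_val_score(LinearSVC(), X, y, cv=cv).mean()
```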
The goal is to find a sampling strategy which maximizes the sum of obtained rewards. We model the K different images to be displayed as the K possible actions, and we define the reward as the classification rate of the corresponding motor action. In the bandit problem, pulling a bandit arm directly gives a stochastic reward which is used to estimate the distribution of this arm. In our case, when we display a new image, we obtain a new data sample for the selected imaginary movement, which provides one more data sample to train or test the corresponding classifier and thus obtain a more accurate performance estimate. The main difference is that for the stochastic bandit problem, the goal is to maximize the sum of obtained rewards, whereas ours is to maximize the performance of the final classifier. However, the strategies are similar: since the distributions are initially unknown, one should first explore all the actions (exploration phase) but then rapidly select the best one (exploitation phase). This is called the exploration-exploitation trade-off.

3.2 The UCB-classif algorithm

The task presentation strategy is a close variant of the Upper Confidence Bound (UCB) algorithm of [10], which builds high probability Upper Confidence Bounds (UCB) on the mean reward value of each action, and selects at each time step the action with highest bound. We adapt the idea of this UCB algorithm to our adaptive classification problem and call this algorithm UCB-classif (see the pseudo-code in Table 1). The algorithm builds a sequence of values B_{k,t} defined as

B_{k,t} = r̂_{k,t} + √( a log N / T_{k,t−1} ),   (2)

where r̂_{k,t} represents an estimation of the classification rate built from a q-fold cross-validation technique, and the constant a corresponds to Equation 1 (see the Supplementary Material for the precise theoretical value). The cross-validation uses a linear SVM classifier based on the T_{k,t} data samples obtained (at time t) from task k. Writing r*_k for the classification rate of the optimal linear SVM classifier (which would be obtained by using an infinite number of samples), we have the property that B_{k,t} is a high probability upper bound on r*_k: P(B_{k,t} < r*_k) decreases to zero polynomially fast (with N). The intuition behind the algorithm is that it selects at time t an action k_t either because it has a good classification rate r̂_{k,t} (thus it is interesting to obtain more samples from it, to perform exploitation), or because its classification rate is highly uncertain since it has not been sampled many times, i.e., T_{k,t−1} is small and therefore √(a log N / T_{k,t−1}) is large (thus it is important to explore it more). With this strategy, the action that has the highest classification rate is presented more often. It is indeed important to gather as much data as possible from the best action in order to build the best possible classifier.

Table 1: Pseudo-code of the UCB-classif algorithm.
  Parameters: a, N, q.
  Present each image q = 3 times (thus set T_{k,qK} = q).
  for t = qK + 1, . . . , N do
    Evaluate the performance r̂_{k,t} of each action (by an 8-split cross-validation, or leave-one-out if T_{k,t} < 8).
    Compute the UCB: B_{k,t} = r̂_{k,t} + √(a log N / T_{k,t−1}) for each action 1 ≤ k ≤ K.
    Select the image to present: k_t = arg max_{k∈{1,...,K}} B_{k,t}.
    Update T: T_{k_t,t} = T_{k_t,t−1} + 1 and, for all k ≠ k_t, T_{k,t} = T_{k,t−1}.
  end for

The UCB-classif algorithm guarantees that the non-optimal tasks are chosen only a negligible fraction of times (O(log N) times out of a total budget N). The best action is thus sampled N − O(log N) times (this is formally proven in the Supplementary Material; the ideas of the proof are very similar to those in [10], with the difference that the upper bounds have to be computed using inequalities based on the VC dimension). It is a huge gain when compared to actual unadaptive procedures for building training sets. Indeed, the unadaptive optimal strategy is to sample each action N/K times, and thus the best task is only sampled N/K times (and not N − O(log N)). More precisely, we prove the following Theorem.

Theorem 1. For any N ≥ 2qK, with probability at least 1 − 1/N, if Equation 1 is satisfied (e.g. if the data are i.i.d.) and if a ≥ 5(d + 1), the number of times that the image of the best imaginary movement is displayed by algorithm UCB-classif is such that (where r* = max_k r*_k)

T*_N ≥ N − Σ_{k : r*_k < r*} 8 a log(8NK) / (r* − r*_k)².

The proof of this Theorem is in the provided Supplementary Material, Appendix A.

3.3 Discussion on variants of this algorithm

We stated that our objective, given a fixed budget N, is to find as fast as possible the image with highest classification rate, and to train the classifier with as many samples as possible. Depending on the objectives of the practitioner, other possible aims can however be pursued. We briefly describe two other settings, and explain how ideas from the bandit setting can be used to adapt to these different scenarios.

Best stopping time: A close, yet different, goal is to find the best time for stopping the training phase. In this setting, the practitioner's aim is to stop the training phase as soon as the algorithm has built an almost optimal classifier for the user. With ideas very similar to those developed in [16] (and extended for bandit problems in e.g. [17]), we can think of an adaptation of algorithm UCB-classif to this new formulation of the problem. Assume that the objective is to find an ε-optimal classifier with probability 1 − δ, and to stop the training phase as soon as this classifier is built. Then, using ideas similar to those presented in [17], an efficient algorithm will at time t select the action that maximizes B_{k,t} = r̂_{k,t} + √(a log(NK/δ) / T_{k,t−1}), and will stop at the first time T̂ when there is an action k̂* such that, for all k ≠ k̂*, B_{k̂*,T̂} − B_{k,T̂} > −ε + 2 √(a log(NK/δ) / T_{k,T̂−1}). We thus shorten the training phase almost optimally on the class of adaptive algorithms (see [17] for more details).

Choice of the best action with a limited budget: Another question that could be of interest for the practitioner is to find the best action with a fixed budget (and not train the classifier at the same time). We can use ideas from the paper [18] to modify UCB-classif. By selecting at each time t the action that maximizes B_{k,t} = r̂_{k,t} + √(a(N − K) / T_{k,t−1}), we attain this objective in the sense that we guarantee that the probability of choosing a non-optimal action decreases exponentially fast with N.

4 Results

We present some numerical experiments illustrating the efficiency of bandit algorithms for this problem. Although the objective is to implement UCB-classif on the BCI device, in this paper we test the algorithm on real databases that we bootstrap (this is explained in detail later). This kind of procedure is common for testing the performance of adaptive algorithms (see e.g. [19]). Acquiring data for BCI experiments is time-consuming because it requires a human subject to sit through the experiment. The advantage of bootstrapping is that several experiments can be performed with a single database, making it possible to provide confidence bands for the results. In this Section, we present the experiments we performed, i.e. describe the kind of data we collect, and illustrate the performance of our algorithm on these data.
Acquiring data for BCI experiments is time-consuming because it requires a human subject to sit through the experiment. The advantage of bootstrapping is that several experiments can be performed with a single database, making it possible to provide con?dence bands for the results. In this Section, we present the experiments we performed, i.e. describe the kind of data we collect, and illustrate the performance of our algorithm on these data. 4.1 Performances of the different tasks The images that were displayed to the subjects correspond to movements of both feet, of the tongue, of the right hand, and of the left hand (4 actions in total). Six right-handed subjects went through the experiment with real movements and three of them went through an additional shorter experiment with imaginary movements. For four of the six subjects, the best performance for the real movement was achieved with the right hand, whereas the two other subjects? best tasks corresponded to the left hand and the feet. We collected data for these four tasks. It is not a large number of tasks but we needed a large amount of data for each of them in order to do a signi?cant comparison. In order to have a larger number of tasks and place ourselves in a more realistic situation, we created some articicial tasks (see below). Results on only four tasks are presented in a companion article [20]. Surprisingly, two of the subjects who went through the imaginary experiment obtained better results while imagining moving their left hand than their right hand, which was the best task during the real movements experiment. For the third subject who did the imaginary experiment, the best task was the feet, as for the real movement experiment. As explained in section 2.2, for this study we chose to use a very small set of ?xed features (12 features, extracted from 3 electrodes, 2 frequency bands and 2 time-windows), calibrated on only one of the six subjects during a preliminary experiment. In this work, the features were not subject-speci?c. It would certainly improve the classi?cation results to tune the features. Using the bandit algorithm to tune the features and to select the tasks at the same time presents a risk over?tting, especially for an initially very small amount of data, and also a risk of biasing the task selection to those that have been the most sampled, and for which the features will thus be the best tuned. Although for all the subjects, the best task achieved a classi?cation accuracy above 85%, this accuracy could further be improved by using a larger set of subject-speci?c features [21] and more advanced techniques (like the CSP [22] or feature selection [23]). 4.2 Performances of the bandit algorithm We compare the performance of the UCB-classif sampling strategy to a uniform strategy, i.e. the standard way of selecting a task, consisting of N/K presentations of each image. Movement Number of presentations Off-line classi?cation rate Right hand 28.6 ? 12.8 88.1% Left hand 9.0 ? 7.5 80.5% Feet 11.6 ? 9.5 82.6% Tongue 4.5 ? 1.5 63.3% Feet 80% 5.1 ? 2.6 71.4% Feet 60% 4.0 ? 1.5 68.6% Feet 40% 3.5 ? 1.0 59.2% Feet 20% 3.5 ? 0.9 54.0% Total presentations 70 Table 2: Actions presented by the UCB-classif algorithm for subject 5 across 500 simulated online BCI experiments. Feet X% is a mixture of the features measured during feet movement and during the resting condition, with a X/100-X proportion. (The off-line classi?cation rate of each action gives an idea of the performance of each action). 
To obtain a realistic evaluation of the performance of our algorithm we use a bootstrap technique. More precisely, for each chosen budget N , for the UCB-classif strategy and the uniform strategy, we simulated 500 online BCI experiments by randomly sampling from the acquired data of each action. Table 2 shows, for one subject and for a ?xed budget of N = 70, the average number of presentations of each task Tk , and its standard deviation, across the 500 simulated experiments. It also contains the off-line classi?cation rate of each task to give an idea of the performances of the different tasks for this subject. We can see that very little budget is allocated to the tongue movement and to the most degraded feet 20% tasks, which are the less discriminative actions, and that most of the budget is devoted to the right hand, thus enabling a more ef?cient training. 7 Figure 2 and Table 3 show, for different budgets (N ), the performance of the UCB-classif algorithm versus the uniform technique. The training of the classi?er is done on the actions presented during the simulated BCI experiment, and the testing on the remaining data. For a budget N > 70 the UCB-classif could not be used for all the subjects because there was not enough data for the best action (One subject only underwent a session of 5 blocks and so only 50 samples of each motor task were recorded. If we try to simulate an on-line experiment using the UCB-classif with a budget higher than N = 70 it is likely to ask for a 51th presentation of the best task, which has not been recorded). The classi?cation results depend on which data is used to simulate the BCI experiment. To give an idea of this variability, the ?rst and last quartiles are plotted as error bars on the graphics. Budget (N ) Length of the experiment Uniform strategy UCB-classif Bene?t 30 3min45 47.7% 64.4% +16.7% 40 5min 58.5% 77.2% +18.7% 50 6min15 63.4% 82.0% +18.5% 60 7min30 67.0% 84.0% +17.1% 70 8min45 70.1% 85.7% +15.6% 100 12min30 77.6% * 150 18min45 83.2% * 180 22min30 85.2% * Table 3: Comparison of the performances of the UCB-classif vs. the uniform strategy for different budgets, averaged over all subjects, for real movements. (The increases are signi?cant with p > 95%.) For each budget, we give an indication of the length of the experiment (without counting pauses between blocks) required to obtain this amount of data. The UCB-classif strategy signi?cantly outperforms the uniform strategy, even for relatively small N . On average on all the users it even gives better classi?cation rates when using only half of the available samples, compared to the uniform strategy. Indeed, Table 3 shows that, to achieve a classi?cation rate of 85% the UCB-classif only requires a budget of N = 70 whereas the uniform strategy needs N = 180. We believe that such gain in performance motivates the implementation of such a training algorithm in BCI devices, specially since the algorithm itself is quite simple and fast. 80 70 60 50 40 5 Adaptative Algorithm Uniform Strategy 30 40 50 60 70 80 Budget N 90 100 110 120 Sujet 3 imaginary movement 90 80 70 60 50 40 Adaptative Algorithm Uniform Strategy 30 40 50 Budget N 60 Classification rate of the chosen movement 90 Sujet 2 imaginary movement Classification rate of the chosen movement Classification rate of the chosen movement Sujet 1 real movement 90 80 70 60 50 40 Adaptative Algorithm Uniform Strategy 30 40 50 Budget N 60 70 Figure 2: UCB-classif algorithm (full line, red) versus uniform strategy (dashed line, black). 
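The bootstrap comparison can be reproduced in miniature on synthetic features. The following is entirely our own toy setup, not the paper's data or code: Gaussian feature pools whose mean offsets stand in for the discriminability of each task, and far fewer repetitions than the 500 used in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
K, N, a, d, POOL = 4, 70, 0.1, 12, 80
offsets = [1.2, 0.8, 0.6, 0.3]      # toy task separabilities (stand-ins)
pools = [rng.normal(m, 1.0, (POOL, d)) for m in offsets]
idle = rng.normal(0.0, 1.0, (N, d))

def rate(samples, n_idle):
    """Cross-validated rate of one task vs. idle (min 3-fold, max 8-fold)."""
    X = np.vstack([samples, idle[:n_idle]])
    y = np.r_[np.ones(len(samples)), np.zeros(n_idle)]
    return cross_val_score(LinearSVC(dual=False), X, y,
                           cv=min(8, len(samples))).mean()

def one_run(strategy):
    """One bootstrapped online experiment: resample a fresh trial order."""
    order = [rng.permutation(POOL) for _ in range(K)]
    T = np.full(K, 3)                # q = 3 initial presentations per task
    for t in range(3 * K, N):
        if strategy == "uniform":
            k = t % K
        else:                        # UCB-classif rule, Eq. (2)
            r = np.array([rate(pools[j][order[j][:T[j]]], T[j]) for j in range(K)])
            k = int(np.argmax(r + np.sqrt(a * np.log(N) / T)))
        T[k] += 1
    best = int(np.argmax([rate(pools[j][order[j][:T[j]]], T[j]) for j in range(K)]))
    return rate(pools[best][order[best][:T[best]]], T[best])

for strategy in ("uniform", "ucb"):
    scores = [one_run(strategy) for _ in range(20)]
    print(strategy, round(float(np.mean(scores)), 3))
```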
5 Conclusion

The method presented in this paper falls in the category of adaptive BCI based on bandit theory. To the best of our knowledge, this is the first such method for dealing with automatic task selection. UCB-classif is a new adaptive algorithm that makes it possible to automatically select a motor task in view of a brain-controlled button. By rapidly eliminating inefficient motor tasks and focusing on the most promising ones, it enables a better task selection procedure than a uniform strategy. Moreover, by presenting the best task more frequently, it allows a good training of the classifier. This algorithm makes it possible to shorten the training period or, equivalently, to allow for a larger set of possible movements among which to select the best. In a paper due to appear [20], we implement this algorithm online. A future research direction is to learn several discriminant tasks in order to activate several buttons.

Acknowledgements

This work was partially supported by the French ANR grant Co-Adapt ANR-09-EMER-002, Nord-Pas-de-Calais Regional Council, French ANR grant EXPLO-RA (ANR-08-COSI-004), the EC Seventh Framework Programme (FP7/2007-2013) under grant agreement 270327 (CompLACS project), and by Pascal-2.

References

[1] D. J. McFarland, W. A. Sarnacki, and J. R. Wolpaw. Electroencephalographic (EEG) control of three-dimensional movement. Journal of Neural Engineering, 7(3):036007, 2010.
[2] T. Solis-Escalante, G. Müller-Putz, C. Brunner, V. Kaiser, and G. Pfurtscheller. Analysis of sensorimotor rhythms for the implementation of a brain switch for healthy subjects. Biomedical Signal Processing and Control, 5(1):15–20, 2010.
[3] B. Blankertz, G. Dornhege, M. Krauledat, K.-R. Müller, and G. Curio. The non-invasive Berlin brain-computer interface: Fast acquisition of effective performance in untrained subjects. NeuroImage, 37(2):539–550, 2007.
[4] J. Fruitet, M. Clerc, and T. Papadopoulo. Preliminary study for an hybrid BCI using sensorimotor rhythms and beta rebound. In International Journal of Bioelectromagnetism, 2011.
[5] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113(6):767–791, 2002.
[6] J. del R. Millán, F. Renkens, J. Mouriño, and W. Gerstner. Brain-actuated interaction. Artificial Intelligence, 159(1-2):241–259, 2004.
[7] C. Vidaurre and B. Blankertz. Towards a cure for BCI illiteracy. Brain Topography, 23:194–198, 2010. doi:10.1007/s10548-009-0121-6.
[8] M.-C. Dobrea and D. M. Dobrea. The selection of proper discriminative cognitive tasks - a necessary prerequisite in high-quality BCI applications. In Applied Sciences in Biomedical and Communication Technologies (ISABEL 2009), 2nd International Symposium on, pages 1–6, 2009.
[9] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
[10] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[11] G. Pfurtscheller and F. H. Lopes da Silva. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, 110(11):1842–1857, 1999.
[12] G. Pfurtscheller and C. Neuper. Motor imagery activates primary sensorimotor area in humans. Neuroscience Letters, 239(2-3):65–68, 1997.
[13] H. Jasper and W. Penfield. Electrocorticograms in man: Effect of voluntary movement upon the electrical activity of the precentral gyrus.
European Archives of Psychiatry and Clinical Neuroscience, 183:163–174, 1949. doi:10.1007/BF01062488.
[14] Y. Renard, F. Lotte, G. Gibert, M. Congedo, E. Maby, V. Delannoy, O. Bertrand, and A. Lécuyer. OpenViBE: An open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments. Presence: Teleoperators and Virtual Environments, 19(1):35–53, 2010.
[15] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag New York Inc, 2000.
[16] O. Maron and A. W. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. Robotics Institute, page 263, 1993.
[17] J.-Y. Audibert, S. Bubeck, and R. Munos. Bandit view on noisy optimization. Optimization for Machine Learning, pages 431–454, 2011.
[18] J.-Y. Audibert, S. Bubeck, and R. Munos. Best arm identification in multi-armed bandits. In Annual Conference on Learning Theory (COLT), 2010.
[19] J. Langford, A. Strehl, and J. Wortman. Exploration scavenging. In Proceedings of the 25th International Conference on Machine Learning, pages 528–535. ACM, 2008.
[20] J. Fruitet, A. Carpentier, R. Munos, M. Clerc, et al. Automatic motor task selection via a bandit algorithm for a brain-controlled button. Journal of Neural Engineering, 2012. (to appear).
[21] M. Dobrea, D. M. Dobrea, and D. Alexa. Spectral EEG features and tasks selection process: Some considerations toward BCI applications. In Multimedia Signal Processing (MMSP), 2010 IEEE International Workshop on, pages 150–155, 2010.
[22] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller. Optimal spatial filtering of single trial EEG during imagined hand movement. Rehabilitation Engineering, IEEE Transactions on, 8(4):441–446, 2000.
[23] J. Fruitet, D. J. McFarland, and J. R. Wolpaw. A comparison of regression techniques for a two-dimensional sensorimotor rhythm-based brain-computer interface. Journal of Neural Engineering, 7(1), 2010.
Graphical Models via Generalized Linear Models

Pradeep Ravikumar, Department of Computer Science, University of Texas at Austin, [email protected]
Eunho Yang, Department of Computer Science, University of Texas at Austin, [email protected]
Zhandong Liu, Department of Pediatrics-Neurology, Baylor College of Medicine, [email protected]
Genevera I. Allen, Department of Statistics, Rice University, [email protected]

Abstract

Undirected graphical models, also known as Markov networks, enjoy popularity in a variety of applications. The popular instances of these models, such as Gaussian Markov Random Fields (GMRFs), Ising models, and multinomial discrete models, however, do not capture the characteristics of data in many settings. We introduce a new class of graphical models based on generalized linear models (GLMs) by assuming that node-wise conditional distributions arise from exponential families. Our models allow one to estimate multivariate Markov networks given any univariate exponential distribution, such as Poisson, negative binomial, and exponential, by fitting penalized GLMs to select the neighborhood for each node. A major contribution of this paper is the rigorous statistical analysis showing that with high probability, the neighborhood of our graphical models can be recovered exactly. We also provide examples of non-Gaussian high-throughput genomic networks learned via our GLM graphical models.

1 Introduction

Undirected graphical models, also known as Markov random fields, are an important class of statistical models that have been extensively used in a wide variety of domains, including statistical physics, natural language processing, image analysis, and medicine. The key idea in this class of models is to represent the joint distribution as a product of clique-wise compatibility functions; given an underlying graph, each of these compatibility functions depends only on a subset of variables within any clique of the underlying graph. Such a factored graphical model distribution can also be related to an exponential family distribution [1], where the unnormalized probability is expressed as the exponential of a weighted linear combination of clique-wise sufficient statistics. Learning a graphical model distribution from data within this exponential family framework can be reduced to learning weights on these sufficient statistics.

An important modeling question is then: how do we choose suitable sufficient statistics? In the case of discrete random variables, sufficient statistics can be taken as indicator functions as in the Ising or Potts model. These, however, are not suited to all kinds of discrete variables, such as non-negative integer counts. Similarly, in the case of continuous variables, Gaussian Markov Random Fields (GMRFs) are popular. The multivariate normal distribution imposed by the GMRF, however, is a stringent assumption; the marginal distribution of any variable must also be Gaussian. In this paper, we propose a general class of graphical models beyond the Ising model and the GMRF to encompass variables arising from all exponential family distributions. Our approach is motivated by recent state-of-the-art methods for learning the standard Ising and Gaussian MRFs [2, 3, 4]. The key idea in these recent methods is to learn the MRF graph structure by estimating node-neighborhoods, obtained by maximizing the likelihood of each node conditioned on the rest of the nodes.
These node-wise fitting methods have been shown to be both computationally and statistically attractive. Here, we study the general class of models obtained by the following construction: suppose the node-conditional distributions of each node conditioned on the rest of the nodes are generalized linear models (GLMs) [5]. By the Hammersley-Clifford theorem [6] and some algebra as derived in [7], these node-conditional distributions entail a global distribution that factors according to cliques defined by the graph obtained from the node-neighborhoods. Moreover, these have a particular set of potential functions specified by the GLM. The resulting class of MRFs broadens the class of models available off-the-shelf beyond the standard Ising, indicator-discrete, and Gaussian MRFs.

Beyond our initial motivation of finding more general graphical model sufficient statistics, a broader class of parametric graphical models is important for a number of reasons. First, our models provide a principled approach to model multivariate distributions and network structures among a large number of variables. For many non-Gaussian exponential families, multivariate distributions typically do not exist in an analytical or computationally tractable form. Graphical model GLMs provide a way to "extend" univariate exponential families of distributions to the multivariate case, and to model and study relationships between variables for these families of distributions. Second, while some have proposed to extend the GMRF to a non-parametric class of graphical models by first Gaussianizing the data and then fitting a GMRF over the transformed variables [8], the sample complexity of such non-parametric methods is often inferior to that of parametric methods. Thus, for modeling data that closely follows a non-Gaussian distribution, statistical power for network recovery can be gained by directly fitting parametric GLM graphical models. Third, and specifically for multivariate count data, others have suggested combinatorial approaches to fitting graphical models, mostly in the context of contingency tables [6, 9, 1, 10]. These approaches, however, are computationally intractable for even moderate numbers of variables. Finally, potential applications for our GLM graphical models abound. Networks of call-times, time spent on websites, diffusion processes, and life-cycles can be modeled with exponential graphical models; other skewed multivariate data can be modeled with gamma or chi-squared graphical models. Perhaps the most interesting motivating applications are for multivariate count data such as from website visits, user ratings, crime and disease incident reports, bibliometrics, and next-generation genomic sequencing technologies. The latter is a relatively new high-throughput technology to measure gene expression that is rapidly replacing the microarray [11]. As Gaussian graphical models are widely used to infer genomic regulatory networks from microarray data, Poisson and negative binomial graphical models may be important for inferring genomic networks from the multivariate count data arising from this emerging technology. Beyond next generation sequencing, there has been a recent proliferation of new high-throughput genomic technologies that produce non-Gaussian data. Thus, our more general class of GLM graphical models can be used for inferring genomic networks from these new high-throughput technologies.
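To make the node-wise fitting idea concrete before the formal development, here is a minimal sketch of an l1-penalized Poisson regression for a single node's neighborhood, solved by proximal gradient descent. The step size, penalty weight, and iteration count are illustrative assumptions, not tuned choices; a practical implementation would use a dedicated GLM solver and, for the Poisson model discussed later, could also enforce the nonpositivity constraint on the edge weights.

```python
import numpy as np

def poisson_neighborhood(X, s, lam=0.1, step=1e-3, iters=2000):
    """L1-penalized Poisson regression of node s on the remaining nodes.

    X   : (n, p) matrix of count data, one row per sample
    s   : index of the target node
    lam : l1 penalty weight
    Returns the estimated neighborhood of s (node indices with nonzero weight).
    """
    n, p = X.shape
    y = X[:, s]
    Z = np.delete(X, s, axis=1)              # covariates: all other nodes
    theta = np.zeros(p - 1)
    for _ in range(iters):
        eta = Z @ theta
        grad = Z.T @ (np.exp(eta) - y) / n   # gradient of Poisson neg. log-likelihood
        theta -= step * grad
        # soft-thresholding: proximal operator of the l1 penalty
        theta = np.sign(theta) * np.maximum(np.abs(theta) - step * lam, 0.0)
    support = np.flatnonzero(theta)
    return np.array([j if j < s else j + 1 for j in support])  # map back to node ids
```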
The construction of our GLM graphical models also suggests a natural method for learning such models: node-wise neighborhood estimation by fitting sparsity-constrained GLMs. A main contribution of this paper is to provide a sparsistency analysis for the recovery of the underlying graph structure of this new class of MRFs. The presence of non-linearities arising from the GLM poses subtle technical issues not present in the linear case [2]. Indeed, for the specific cases of logistic and multinomial models respectively, [3, 4] derive such a sparsistency analysis via fairly extensive arguments which were tuned to those specific cases. Here, we generalize their analysis to general GLMs, which requires a slightly modified M-estimator and a more subtle theoretical analysis. We note that this analysis might be of independent interest even outside the context of modeling and recovering graphical models. In recent years, there has been a trend towards unified statistical analyses that provide statistical guarantees for broad classes of models via general theorems [12]. Our result is in this vein and provides structure recovery for the class of sparsity-constrained generalized linear models. We hope that the techniques we introduce might be of use to address the outstanding question of sparsity-constrained M-estimation in its full generality.

2 A New Class of Graphical Models

Problem Setup and Background. Suppose X = (X_1, \ldots, X_p) is a random vector, with each variable X_i taking values in a set \mathcal{X}. Suppose G = (V, E) is an undirected graph over p nodes corresponding to the p variables; the corresponding graphical model is a set of distributions that satisfy Markov independence assumptions with respect to the graph. By the Hammersley-Clifford theorem, any such distribution also factors according to the graph in the following way. Let C be a set of cliques (fully-connected subgraphs) of the graph G, and let \{\phi_c(X_c), c \in C\} be a set of clique-wise sufficient statistics. With this notation, any distribution of X within the graphical model family represented by the graph G takes the form:

P(X) \propto \exp\Big( \sum_{c \in C} \theta_c \phi_c(X_c) \Big),   (1)

where \{\theta_c\} are weights over the sufficient statistics. With a pairwise graphical model distribution, the set of cliques consists of the set of nodes V and the set of edges E, so that

P(X) \propto \exp\Big( \sum_{s \in V} \theta_s \phi_s(X_s) + \sum_{(s,t) \in E} \theta_{st} \phi_{st}(X_s, X_t) \Big).   (2)

As previously discussed, an important question is how to select the class of sufficient statistics \phi, in particular so as to obtain a multivariate extension of specified univariate parametric distributions. We next outline a subclass of graphical models where the node-conditional distributions are exponential family distributions, with an important special case where these node-conditional distributions are generalized linear models (GLMs). Then, in Section 3, we will study how to learn the underlying graph structure, or infer the edge set E, providing an M-estimator and sufficient conditions under which the estimator recovers the graph structure with high probability.

Graphical Models via GLMs. In this section, we investigate the class of models that arise from specifying the node-conditional distributions as exponential families. Specifically, suppose we are given a univariate exponential family distribution, P(Z) = \exp(\theta B(Z) + C(Z) - D(\theta)), with sufficient statistic B(Z), base measure C(Z), and D(\theta) as the log-normalization constant.
Let X = (X_1, X_2, \ldots, X_p) be a p-dimensional random vector, and let G = (V, E) be an undirected graph over p nodes corresponding to the p variables. Now suppose the distribution of X_s given the rest of the nodes X_{V \setminus s} is given by the above exponential family, but with the canonical exponential family parameter set to a linear combination of k-th order products of univariate functions \{B(X_t)\}_{t \in N(s)}. This gives the following conditional distribution:

P(X_s \mid X_{V \setminus s}) = \exp\Big\{ B(X_s) \Big( \theta_s + \sum_{t \in N(s)} \theta_{st} B(X_t) + \sum_{t_2, t_3 \in N(s)} \theta_{s t_2 t_3} B(X_{t_2}) B(X_{t_3}) + \cdots + \sum_{t_2, \ldots, t_k \in N(s)} \theta_{s t_2 \ldots t_k} \prod_{j=2}^{k} B(X_{t_j}) \Big) + C(X_s) - \bar{D}(X_{V \setminus s}) \Big\},   (3)

where C(X_s) is specified by the exponential family, and \bar{D}(X_{V \setminus s}) is the log-normalization constant. By the Hammersley-Clifford theorem, and some elementary calculation, this conditional distribution can be shown to specify the following unique joint distribution P(X_1, \ldots, X_p):

Proposition 1. Suppose X = (X_1, X_2, \ldots, X_p) is a p-dimensional random vector, and its node-conditional distributions are specified by (3). Then its joint distribution P(X_1, \ldots, X_p) is given by:

P(X) = \exp\Big\{ \sum_{s} \theta_s B(X_s) + \sum_{s \in V} \sum_{t \in N(s)} \theta_{st} B(X_s) B(X_t) + \cdots + \sum_{s \in V} \sum_{t_2, \ldots, t_k \in N(s)} \theta_{s \ldots t_k} B(X_s) \prod_{j=2}^{k} B(X_{t_j}) + \sum_{s} C(X_s) - A(\theta) \Big\},   (4)

where A(\theta) is the log-normalization constant.
D(X (8) ? ? t?N (s) while the joint distribution has the form, ? ? ?X ? X X P (X) = exp ?s Xs + ?st Xs Xt + C(Xs ) ? A(?) . ? s ? s (9) (s,t)?E In the subsequent sections, we will refer to the entire class of models in (7) as GLM graphical models, but focus on the case (9) with linear functions B(Xs ) = Xs . Examples. The GLM graphical models provide multivariate or Markov network extensions of univariate exponential family distributions. The popular Gaussian graphical model and Ising model can thus also be represented by (7). Consider the latter, for example, where for the Bernoulli distribution, we have that B(X) = X, C(X) = 0, and A(?) is the log-partition function; plugging these into (9), we have the form of the Ising model studied in [3]. The form of the multinomial graphical model, an extension of the Ising model, can also be represented by (7) and has been previously studied in [4] and others. It is instructive to consider the domain of the set of all possible valid parameters in the GLM graphical model (9); namely those that ensure that the density is normalizable, or equivalently, so that the log-partition function satisfies A(?) < +?. The Ising model imposes no constraint on its parameters, {?st }, for normalizability, since there are finitely many configurations of the binary random 4 vector X. For other exponential families, with countable discrete or continuous valued variables, the GLM graphical model does impose additional constraints on valid parameters. Consider the example of the Poisson and exponential distributions. The Poisson family has sufficient statistic B(X) = X and base measure C(X) = ?log(X!). With some algebra, we can show that A(?) < +? implies ?st ? 0 ? s, t. Thus, the Poisson graphical model can only capture negative conditional relationships between variables. Consider the exponential distribution with sufficient statistic B(X) = ?X, base measure C(X) = 0. To ensure that the density is finitely integrable, so that A(?) < +?, we then require that ?st ? 0 ? s, t. Similar constraints on the parameter space are necessary to ensure proper density functions for several other exponential family graphical models as well. 3 Statistical Guarantees In this section, we study the problem of learning the graph structure of an underlying GLM graphical model given iid samples. Specifically, we assume that we are given n samples X1n = {X (i) }ni=1 , from a GLM graphical model: ? ? ? X ? X ? ?st Xs Xt + C(Xs ) ? A(?) . P (X; ?? ) = exp (10) ? ? ? s (s,t)?E We have removed node-wise terms for simplicity, noting that our analysis extends to the general case. The goal in graphical model structure recovery is to recover the edges E ? of the underlying graph G = (V, E ? ). Following [3, 4], we will approach this problem via neighborhood estimation, where we estimate the neighborhood of each node individually, and then stitch these together to b (s) for the true neighborhood form the global graph estimate. Specifically, if we have an estimate N N ? (s), then we can estimate the overall graph structure as: b = ?s?V ? b {(s, t)}. E t?N (s) (11) In order to estimate the neighborhood of any node, we consider the sparsity constrained conditional MLE. Given the joint distribution in (10), the conditional distribution of Xs given the rest of the nodes is given by: ? ? ?  X   X ? ? ? P (Xs |XV \s ) = exp Xs ?st Xt + C(Xs ) ? D ?st Xt . (12) ? ? t?N (s) t?N (s) ? ? ? ? = 0, for t ? N (s) and ?st }t?V \s ? Rp?1 be a zero-padded vector, with entries ?st Let ?\s = {?st n (i) n for t 6? 
N (s). Given n samples X1 = {X }i=1 , we can write the conditional log-likelihood of the distribution (12) as: `(?\s ; X1n ) := ? n n Y  1 1X (i) (i) (i)  log P Xs(i) |X\s , ?\s = ?Xs(i) h?\s , X\s i + D h?\s , X\s i . n n i=1 i=1 We can then solve the `1 regularized conditional log-likelihood loss for each node Xs : min ?\s ?Rp?1 `(?\s ; X1n ) + ?n k?\s k1 . (13) Given the solution ?b\s of the M-estimation problem above, we then estimate the node-neighborhood b (s) = {t ? V \s : ?bst 6= 0}. In the following when we focus on a fixed node s ? V , of s as N we will overload notation, and use ? ? Rp?1 as the parameters of the conditional distribution, suppressing the dependence on s. In the rest of the section, we first discuss the assumptions we impose on the GLM graphical model parameters. The first set of assumptions are standard irrepresentable-type conditions imposed for structure recovery in high-dimensional statistical estimators, and in particular, our assumptions mirror those in [3]. The second set of assumptions are key to our generalized analysis of the class of GLM graphical models as a whole. We then follow with our main theorem, that guarantees structure recovery under these assumptions, with high probability even in high-dimensional regimes. 5 Our first set of assumptions use the Fisher Information matrix, Q?s = ?2 `(?s? ; X1n ), which is the Hessian of the node-conditional log-likelihood. In the following, we will simply use Q? instead of Q?s where the reference node s should be understood implicitly. We also use S = {(s, t) : t ? N (s)} to denote the true neighborhood of node s, and S c to denote its complement. We use Q?SS to denote the d ? d sub-matrix indexed by S. Our first two assumptions , and are as follows: Assumption 1 (Dependency condition). There exists a constant ?min > 0 such that ?min (Q?SS ) ? b \s X T ]) ? ?max . ?min . Moreover, there exists a constant ?max < ? such that ?max (E[X \s Assumption 2 (Incoherence condition). We also need an incoherence or irrepresentable condition on the fisher information matrix as in [3]. Specifically, there exists a constant ? > 0, such that maxt?S c kQ?tS (Q?SS )?1 k1 ? 1 ? ?. A key technical facet of the linear, logistic, and multinomial models in [2, 3, 4] and used heavily in their proofs, is that the random variables {Xs } there were bounded with high probability. Unfortunately, in the general GLM distribution in (12), we cannot assume this explicitly. Nonetheless, we show that we can analyze the corresponding regularized M-estimation problems, provided the first and second moments are bounded. Assumption 3. The first and second moments of the distribution in (10) are bounded as follows. The first moment ?? := E[X] , satisfies k?? k2 ? ?m ; the second moment satisfies maxt?V E[Xt2 ] ? ?v . We also need smoothness assumptions on the log-normalization constants : Assumption 4. The log-normalization constant A(?) of the joint distribution (10) satisfies: maxu:kuk?1 ?max (?2 A(?? + u)) ? ?h . Assumption 5. The log-partition function D(?) of the node-conditional distribution (12) satisfies: There exist constants ?1 and ?2 (that depend on the exponential family) s.t. max{|D00 (?1 log ?)|, |D000 (?1 log ?)|} ? n?2 where ? = max{n, p}, ?1 ? 29 k?? k2 and ?2 ? [0, 1/4]. Assumptions 3 and 4 are the key technical conditions under which we can generalize the analyses in [2, 3, 4] to the general GLM case. 
In particular, we can show that the statements of the following propositions hold, which show that the random vectors X following the GLM graphical model in (10) are suitably well-behaved: Proposition 3. Suppose X is a random vector with the distribution specified in (10). Then, for any vector u ? Rp such that kuk2 ? c0 , any positive constant ?, and some constants c > 0,  0 P |hu, Xi| ? ? log ? ? c? ??/c . Proposition 4. Suppose X is a random vector with the distribution specified in (10). Then, for ? ? min{2?v /3, ?h + ?v }, and some constant c > 0, ! n   1X (i) 2 P Xs ? ? ? 2 exp ?c n ? 2 . n i=1 Putting these key technical results and assumptions together, we arrive at our main result: Theorem 1. Consider a GLM graphical model distribution as specified in (10), with true parameter ? ?? and associated edge set E ? that satisfies Assumptions 1-5. Suppose that min(s,t)?E ? |?st | ? ? 10 d? where d is the maximum neighborhood size. Suppose also that the regularization pan ?min q (2??) log p rameter is chosen such that ?n ? M ? for some constant M > 0. Then, there exist n1??2  2 1 positive constants L, K1 and K2 such that if n ? L d log p(max{log n, log p})2 1?3?2 , then with probability at least 1 ? exp(?K1 ?2n n) ? K2 max{n, p}?5/4 , the following statements hold: (a) (Unique Solution) For each node s ? V , the solution of the M-estimation problem in (13) is unique, and (b) (Correct Neighborhood Recovery) The M-estimate also recovers the true neighborhood exactly, b (s) = N (s). so that N 6 1 0.8 0.8 Success probability Success probability 1 0.6 p = 64 p = 100 p = 169 p = 225 0.4 0.2 0 400 600 800 n 1000 p = 64 p = 100 p = 169 p = 225 0.6 0.4 0.2 0 1200 1.5 2 2.5 3 3.5 4 ? Figure 1: Probabilities of successful support recovery for a Poisson grid structure (? = ?0.1). The probability of successful edge recovery vs. n (Left), and the probability of successful edge recovery vs. control parameter ? = n/(c log p) (Right). Note that if the neighborhood of each node is recovered with high probability, then by a simple b = ?s?V ? b {(s, t)} is equal to the true edge set E ? with union bound, the estimate in (11), E t?N (s) high-probability. Also note that ?2 in the statement is a constant from Assumption 5. The Poisson family has one of the steepest log-partition function: D(?) = exp(?). Hence, in order to satisfy Assumption 5, 1 log n we need k?? k2 ? 18 log p with ?2 = 1/4. On the other hand, for the binomial, multinomial or Gaussian cases studied in [2, 3, 4], we can recover their results with ?2 = 0 since the log-partition function D(?) of these families are upper bounded by some constant for any input. Nevertheless, we need to restrict ?? to satisfy Assumption 4 so that the variables are bounded with high probability in Proposition 3 and 4 for any GLM case. 4 Experiments Experiments on Simulated Networks. We provide a small simulation study that demonstrates the consequences of Theorem 1 when the conditional distribution in (12) has the form of Poisson distribution. We performed experiments on lattice (4 nearest neighbor) graphs with identical edge weight ? for all edges. Simulating data viaqGibbs sampling, we solved the sparsity-constrained optimization problem with a constant factor of logn p for ?n . The left panel of Figure 1 shows the probability of successful edge recovery for different numbers of nodes, p = {64, 100, 169, 225}. In the right panel of Figure 1, we re-scale the sample size n using the ?control parameter? ? = n/(c log p) for some constant c. 
Each point in the plot indicates the probability that all edges are successfully recovered out of 50 trials. We can see that the curves for different problem sizes are well aligned with the results of Theorem 1. Learning Genomic Networks. Gaussian graphical models learned from microarray data have often been used to study high-throughput genomic regulatory networks. Our GLM graphical models will be important for understanding genomic networks learned from other high-throughput technologies that do not produce approximately Gaussian data. Here, we demonstrate the versatility of our model by learning two cancer genomic networks, a genomic copy number aberration network (from aCGH data) for Glioblastoma learned by multinomial graphical models and a meta-miRNA inhibitory network (from next generation sequencing data) for breast cancer learned by Poisson graphical models. Level III data, breast cancer miRNA expression (next generation sequencing) [13] and copy number variation (aCGH) Glioblastoma data [14], was obtained from the the Cancer Genome Atlas (TCGA) data portal (http://tcga-data.nci.nih.gov/tcga/), and processed according to standard techniques. Data descriptions and processing details are given in the supplemental materials. A Poisson graphical model and a multinomial graphical model were fit to the processed miRNA data and aberration data respectively by performing neighborhood selection with the sparsity of the graph determined by stability selection [15]. Our GLM graphical models, Figure 2, reveal results consistent with the cancer genomics literature. The meta-miRNA inhibitory network has three major hubs, two of which, mir-519 and mir-520, are known to be breast cancer tumor suppressors [16, 17]. Interestingly, let-7, a well-known miRNA involved in tumor metastasis [18], plays a central role 7 38 73 22 74 2 100 14 9 6 90 60 27 26 76 55 20 89 29 92 30 32 70 51 7 31 59 29 87 mir-518c mir-520a 48 64 14 91 19 24 41 57 4 35 13 75 36 mir-449 mir-519-a 95 let-7 39 17 28 15 99 4 18 49 31 23 1 97 16 71 50 32 94 mir-143 mir-150 61 5 43 mir-3156 mir-105 42 30 10 23 19 13 62 25 83 3 80 34 6 24 16 12 82 53 12 17 2 11 34 10 38 63 25 35 15 47 22 37 93 11 44 26 33 28 40 81 21 18 58 20 0 84 46 3 Figure 2: Genomic copy number aberration network for Glioblastoma learned via multinomial graphical models (left) and meta-miRNA inhibitory network for breast cancer learned via Poisson graphical models (right). 27 98 96 86 65 77 36 54 67 78 72 69 68 85 9 52 8 88 7 66 79 56 45 in our network, sharing edges with the five largest hubs; this suggests that our model has learned relevant negative associations between tumor suppressors and enhancers. The Glioblastoma copy number aberration network reveals five major modules, color coded on the left panel in Figure 2, and three of these modules have been previously implicated in Glioblastoma: EGFR in the yellow module, PTEN in the purple module, and CDK2A in the blue module [19]. 5 Discussion We have introduced a new class of graphical models that arise when we assume that node-wise conditional distributions follow an exponential family distribution. We have also provided simple M-estimators for learning the network by fitting node-wise penalized GLMs that enjoy strong statistical recovery properties. Our work has broadened the class of off-the-shelf graphical models to encompass a wide range of parametric distributions. 
These classes of graphical models may be of further interest to the statistical community as they provide closed form multivariate densities for several exponential family distributions (e.g. Poisson, exponential, negative binomial) where few currently exist. Furthermore, the statistical analysis of our M-estimator required subtle techniques that may be of general interest in the analysis of sparse M-estimation. Our work outlines the general class of graphical models for exponential family distributions, but there are many avenues for future work in studying this model for specific distributional families. In particular, our model sometimes places restrictions on the parameter space. A question remains, can these restrictions be relaxed for specific exponential family distributions? Additionally, we have focused on families with linear sufficient statistics (e.g. Gaussian, Bernoulli, Poisson, exponential, negative binomial); our models can be studied with non-linear sufficient statistics or multi-parameter distributions as well. Overall, our work has opened the door for learning Markov Networks from a broad class of distributions, the properties and applications of which leave much room for future research. Acknowledgments E.Y. and P.R. acknowledge support from NSF IIS-1149803. G.A. and Z.L. acknowledge support from the Collaborative Advances in Biomedical Computing seed funding program at the Ken Kennedy Institute for Information Technology at Rice University supported by the John and Ann Doerr Fund for Computational Biomedicine and by the Center for Computational and Integrative Biomedical Research seed funding program at Baylor College of Medicine. G.A. also acknowledges support from NSF DMS-1209017. 8 References [1] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. R in Machine Learning, 1(1-2):1?305, 2008. Foundations and Trends [2] N. Meinshausen and P. B?uhlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436?1462, 2006. [3] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional ising model selection using `1 regularized logistic regression. Annals of Statistics, 38(3):1287?1319, 2010. [4] A. Jalali, P. Ravikumar, V. Vasuki, and S. Sanghavi. On learning discrete graphical models using groupsparse regularization. In Inter. Conf. on AI and Statistics (AISTATS), 14, 2011. [5] P. McCullagh and J.A. Nelder. Generalized linear models. Monographs on statistics and applied probability 37. Chapman and Hall/CRC, New York, 1989. [6] S.L. Lauritzen. Graphical models, volume 17. Oxford University Press, USA, 1996. [7] J. Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society. Series B (Methodological), 36(2):192?236, 1974. [8] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. The Journal of Machine Learning Research, 10:2295?2328, 2009. [9] Y.M.M. Bishop, S.E. Fienberg, and P.W. Holland. Discrete multivariate analysis. Springer Verlag, 2007. [10] Trevor. Hastie, Robert. Tibshirani, and JH (Jerome H.) Friedman. The elements of statistical learning. Springer, 2 edition, 2009. [11] J.C. Marioni, C.E. Mason, S.M. Mane, M. Stephens, and Y. Gilad. Rna-seq: an assessment of technical reproducibility and comparison with gene expression arrays. Genome research, 18(9):1509?1517, 2008. [12] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. 
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers, 2010.
[13] Cancer Genome Atlas Research Network. Comprehensive molecular portraits of human breast tumours. Nature, 490(7418):61–70, 2012.
[14] Cancer Genome Atlas Research Network. Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature, 455(7216):1061–1068, October 2008.
[15] H. Liu, K. Roeder, and L. Wasserman. Stability approach to regularization selection (StARS) for high dimensional graphical models. Arxiv preprint arXiv:1006.3316, 2010.
[16] K. Abdelmohsen, M. M. Kim, S. Srikantan, E. M. Mercken, S. E. Brennan, G. M. Wilson, R. de Cabo, and M. Gorospe. miR-519 suppresses tumor growth by reducing HuR levels. Cell Cycle (Georgetown, Tex.), 9(7):1354, 2010.
[17] I. Keklikoglou, C. Koerner, C. Schmidt, J. D. Zhang, D. Heckmann, A. Shavinskaya, H. Allgayer, B. Gückel, T. Fehm, A. Schneeweiss, et al. MicroRNA-520/373 family functions as a tumor suppressor in estrogen receptor negative breast cancer by targeting NF-kB and TGF-b signaling pathways. Oncogene, 2011.
[18] F. Yu, H. Yao, P. Zhu, X. Zhang, Q. Pan, C. Gong, Y. Huang, X. Hu, F. Su, J. Lieberman, et al. let-7 regulates self renewal and tumorigenicity of breast cancer cells. Cell, 131(6):1109–1123, 2007.
[19] R. McLendon, A. Friedman, D. Bigner, E. G. Van Meir, D. J. Brat, G. M. Mastrogianakis, J. J. Olson, T. Mikkelsen, N. Lehman, K. Aldape, et al. Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature, 455(7216):1061–1068, 2008.
[20] Jianhua Zhang. Convert segment data into a region by sample matrix to allow for other high level computational analyses, version 1.2.0 edition. Bioconductor package.
[21] Gerald B. W. Wertheim, Thomas W. Yang, Tien-chi Pan, Anna Ramne, Zhandong Liu, Heather P. Gardner, Katherine D. Dugan, Petra Kristel, Bas Kreike, Marc J. van de Vijver, Robert D. Cardiff, Carol Reynolds, and Lewis A. Chodosh. The Snf1-related kinase, Hunk, is essential for mammary tumor metastasis. Proceedings of the National Academy of Sciences of the United States of America, 106(37):15855–15860, September 2009.
[22] J. T. Leek, R. B. Scharpf, H. C. Bravo, D. Simcha, B. Langmead, W. E. Johnson, D. Geman, K. Baggerly, and R. A. Irizarry. Tackling the widespread and critical impact of batch effects in high-throughput data. Nature Reviews Genetics, 11(10):733–739, 2010.
[23] J. Li, D. M. Witten, I. M. Johnstone, and R. Tibshirani. Normalization, testing, and false discovery rate estimation for RNA-sequencing data. Biostatistics, 2011.
[24] G. I. Allen and Z. Liu. A log-linear graphical model for inferring genetic networks from high-throughput sequencing data. IEEE International Conference on Bioinformatics and Biomedicine, 2012.
[25] J. Bullard, E. Purdom, K. Hansen, and S. Dudoit. Evaluation of statistical methods for normalization and differential expression in mRNA-seq experiments. BMC Bioinformatics, 11(1):94, 2010.
CPRL - An Extension of Compressive Sensing to the Phase Retrieval Problem

Henrik Ohlsson
Division of Automatic Control, Department of Electrical Engineering, Linköping University, Sweden.
Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, CA, USA
[email protected]

Allen Y. Yang
Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, CA, USA

Roy Dong
Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, CA, USA

S. Shankar Sastry
Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, CA, USA

Abstract

While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. This limits its application in many areas where CS could make a difference. This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. We propose a novel solution using a lifting technique, CPRL, which relaxes the NP-hard problem to a nonsmooth semidefinite program. Our analysis shows that CPRL inherits many desirable properties from CS, such as guarantees for exact recovery. We further provide scalable numerical solvers to accelerate its implementation.

1 Introduction

In the area of X-ray imaging, phase retrieval (PR) refers to the problem of recovering a complex multivariate signal from the squared magnitude of its Fourier transform. Existing sensor devices for collecting X-ray images are only sensitive to signal intensities but not the phases. However, it is very important to be able to recover the missing phase information, as it reveals finer structures of the subjects than using the intensities alone. The PR problem also has broader applications and has been studied extensively in biology, physics, chemistry, astronomy, and more recently nanosciences [29, 20, 18, 24, 23].

Mathematically, PR can be formulated using a linear system y = Ax \in \mathbb{C}^N, where the matrix A may represent the Fourier transform or other more general linear transforms. If the complex measurements y are available and the matrix A is assumed given, it is well known that the least-squares (LS) solution recovers the model parameter x that minimizes the squared estimation error \|y - Ax\|_2^2. In PR, we assume that the phase of the coefficients of y is omitted and only the squared magnitude of the output is observed:

b_i = |y_i|^2 = |\langle x, a_i \rangle|^2,   i = 1, \ldots, N,   (1)

where A^H = [a_1, \ldots, a_N] \in \mathbb{C}^{n \times N}, y^T = [y_1, \ldots, y_N] \in \mathbb{C}^N, and A^H denotes the Hermitian transpose of A. Inspired by the emerging theory of compressive sensing [17, 8] and a lifting technique recently proposed for PR [13, 10], we study the PR problem with the more restricted assumption that the model parameter x is sparse and the number of observations N is too small for (1) to have a unique solution, in some cases with even fewer measurements than the number of unknowns n. The problem is known as compressive phase retrieval (CPR) [25, 27, 28]. In many X-ray imaging applications, for instance, if the complex source signal is indeed sparse under a proper basis, CPR provides a viable solution to exactly recover the signal while collecting much fewer measurements than the traditional non-compressive solutions.
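As a quick numerical illustration of the measurement model (1) and of the global phase ambiguity discussed next, the following sketch (with randomly drawn A and x; all values are illustrative) checks that x and e^{j phi} x produce identical intensity measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 20
A = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

b = np.abs(A @ x) ** 2            # intensity-only measurements, eq. (1)
x_rot = np.exp(1j * 0.7) * x      # the same signal up to a global phase
b_rot = np.abs(A @ x_rot) ** 2

print(np.allclose(b, b_rot))      # True: the global phase of x is unobservable
```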
Clearly, the PR problem and its CPR extension are much more challenging than the LS problem, as the phase of $y$ is lost and only its squared magnitude is available. For starters, it is important to note that the setup naturally leads to ambiguous solutions, regardless of whether the original linear model is overdetermined or not. For example, if $x_0 \in \mathbb{C}^n$ is a solution to $y = Ax$, then any multiplication of $x_0$ by a scalar $c \in \mathbb{C}$, $|c| = 1$, leads to the same squared output $b$. As mentioned in [10], when the dictionary $A$ represents the unitary discrete Fourier transform (DFT), the ambiguities may represent time-reversed or time-shifted solutions of the ground-truth signal. Hence, these global ambiguities are considered acceptable in PR applications. In this paper, when we talk about a unique solution to PR, it is indeed a representative of a family of solutions up to a global phase ambiguity.

1.1 Contributions

The main contribution of the paper is a convex formulation of the CPR problem. Using the lifting technique, the NP-hard problem is relaxed as a semidefinite program (SDP). We briefly summarize several theoretical bounds for guaranteed recovery of the complex input signal, which are presented in full detail in our technical report [26]. Built on the assurance of guaranteed recovery, we focus on the development of a novel scalable implementation of CPRL based on the alternating direction method of multipliers (ADMM). The ADMM implementation provides a means to apply CS ideas to PR applications, e.g., high-impact nanoscale X-ray imaging. In the experiments, we present a comprehensive comparison of the new algorithm with the traditional interior-point method, other state-of-the-art sparse optimization techniques, and a greedy algorithm proposed in [26]. In the high-dimensional complex domain, the ADMM algorithm demonstrates superior performance on our simulated examples and real images. Finally, the paper also provides practical guidelines to practitioners at large working on other similar nonsmooth SDP applications. To aid peer evaluation, the source code of all the algorithms has been made available at http://www.rt.isy.liu.se/~ohlsson/.

2 Compressive Phase Retrieval via Lifting (CPRL)

Since (1) is nonlinear in the unknown $x$, in general $N \gg n$ measurements are needed for a unique solution. When the number of measurements $N$ is smaller than necessary for such a unique solution, additional assumptions are needed as regularization to select one of the solutions. In classical CS, the ability to find the sparsest solution to a linear equation system enables reconstruction of signals from far fewer measurements than previously thought possible. Classical CS is, however, only applicable to systems with linear relations between measurements and unknowns. To extend classical CS to the nonlinear PR problem, we seek the sparsest solution satisfying (1):

$$\min_x \|x\|_0 \quad \text{subj. to} \quad b = |Ax|^2 = \{a_i^H x x^H a_i\}_{1 \le i \le N}, \qquad (2)$$

with the square acting element-wise and $b = [b_1, \ldots, b_N]^T \in \mathbb{R}^N$. As the counting norm $\|\cdot\|_0$ is not a convex function, following the $\ell_1$-norm relaxation in CS, (2) can be relaxed as

$$\min_x \|x\|_1 \quad \text{subj. to} \quad b = |Ax|^2 = \{a_i^H x x^H a_i\}_{1 \le i \le N}. \qquad (3)$$

Note that (3) is still not a convex program, as its equality constraint is not a linear equation. In the literature, a lifting technique has been extensively used to reframe problems such as (3) into a standard SDP form, as in Sparse PCA [15].
More specifically, given the ground-truth signal $x_0 \in \mathbb{C}^n$, let $X_0 \triangleq x_0 x_0^H \in \mathbb{C}^{n \times n}$ be the induced rank-1 semidefinite matrix. Then (3) can be reformulated as

$$\min_{X \succeq 0} \|X\|_1 \quad \text{subj. to} \quad \mathrm{rank}(X) = 1, \; b_i = a_i^H X a_i, \; i = 1, \ldots, N. \qquad (4)$$

This is of course still a nonconvex problem due to the rank constraint. The lifting approach addresses this issue by replacing $\mathrm{rank}(X)$ with $\mathrm{Tr}(X)$. For a positive-semidefinite matrix, $\mathrm{Tr}(X)$ is equal to the sum of the eigenvalues of $X$ (or the $\ell_1$-norm of a vector containing all eigenvalues of $X$). This leads to the nonsmooth SDP

$$\min_{X \succeq 0} \mathrm{Tr}(X) + \lambda \|X\|_1 \quad \text{subj. to} \quad b_i = \mathrm{Tr}(\Phi_i X), \; i = 1, \ldots, N, \qquad (5)$$

where we further denote $\Phi_i \triangleq a_i a_i^H \in \mathbb{C}^{n \times n}$ and $\lambda \ge 0$ is a design parameter. Finally, the estimate of $x$ can be found by computing the rank-1 decomposition of $X$ via singular value decomposition. We refer to the approach as compressive phase retrieval via lifting (CPRL).

Consider now the case that the measurements are contaminated by noise. In a linear model, bounded random noise typically affects the output of the system as $y = Ax + e$, where $e \in \mathbb{C}^N$ is a noise term with bounded $\ell_2$-norm, $\|e\|_2 \le \epsilon$. However, in phase retrieval, we follow closely a more special noise model used in [13]:

$$b_i = |\langle x, a_i \rangle|^2 + e_i. \qquad (6)$$

This nonstandard model avoids the need to calculate the squared magnitude output $|y|^2$ with the added noise term. More importantly, in most practical phase retrieval applications, measurement noise is introduced when the squared magnitudes or intensities of the linear system are measured on the sensing device, not on $y$ itself. Accordingly, we denote a linear operator $B$ of $X$ as

$$B : X \in \mathbb{C}^{n \times n} \mapsto \{\mathrm{Tr}(\Phi_i X)\}_{1 \le i \le N} \in \mathbb{R}^N, \qquad (7)$$

which measures the noise-free squared output. Then the approximate CPR problem with the bounded $\ell_2$-norm error model can be solved by the following nonsmooth SDP:

$$\min_{X \succeq 0} \mathrm{Tr}(X) + \lambda \|X\|_1 \quad \text{subj. to} \quad \|B(X) - b\|_2 \le \epsilon. \qquad (8)$$

Due to machine rounding error, a nonzero $\epsilon$ should in general always be assumed, also in the termination condition during the optimization. The estimate of $x$, just as in the noise-free case, is finally found by computing the rank-1 decomposition of $X$ via singular value decomposition. We refer to the method as approximate CPRL.

3 Theoretical Analysis

This section highlights some of the analysis results derived for CPRL. The proofs of these results are available in the technical report [26]. The analysis follows that of CS and is inspired by derivations given in [13, 12, 16, 9, 3, 7]. In order to state some theoretical properties of CPRL, we need a generalization of the restricted isometry property (RIP).

Definition 1 (RIP) A linear operator $B(\cdot)$ as defined in (7) is $(\varepsilon, k)$-RIP if $\left| \frac{\|B(X)\|_2^2}{\|X\|_2^2} - 1 \right| < \varepsilon$ for all $\|X\|_0 \le k$ and $X \ne 0$.

We can now state the following theorem:

Theorem 2 (Recoverability/Uniqueness) Let $\bar{x}$ be the sparsest solution to (1), and let $B(\cdot)$ be an $(\varepsilon, 2\|X^*\|_0)$-RIP linear operator with $\varepsilon < 1$. If $X^*$ satisfies $b = B(X^*)$, $X^* \succeq 0$, $\mathrm{rank}(X^*) = 1$, then $X^*$ is unique and $X^* = \bar{x}\bar{x}^H$.

We can also give a bound on the sparsity of $\bar{x}$:

Theorem 3 (Bound on $\|\bar{x}\bar{x}^H\|_0$ from above) Let $\bar{x}$ be the sparsest solution to (1) and let $\tilde{X}$ be the solution of CPRL (5). If $\tilde{X}$ has rank 1, then $\|\tilde{X}\|_0 \le \|\bar{x}\bar{x}^H\|_0$.

The following result now holds trivially (here, $\|X\|_1$ for a matrix $X$ denotes the entry-wise $\ell_1$-norm, and $\|X\|_2$ denotes the Frobenius norm):

Corollary 4 (Guaranteed recovery using RIP) Let $\bar{x}$ be the sparsest solution to (1). The solution of CPRL (5), $\tilde{X}$, is equal to $\bar{x}\bar{x}^H$ if it has rank 1 and $B(\cdot)$ is $(\varepsilon, 2\|\tilde{X}\|_0)$-RIP with $\varepsilon < 1$.
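Both (5) and (8) return a lifted matrix rather than the signal itself; as stated above, the estimate of $x$ is its best rank-1 factor. A minimal sketch of this extraction step (NumPy; the function name is ours, and the returned vector is only defined up to the global phase ambiguity discussed in Section 1):

```python
import numpy as np

def rank1_signal(X):
    """Extract x from a (near) rank-1 Hermitian PSD matrix X ~ x x^H
    via its leading eigenpair (equivalent to an SVD for PSD matrices)."""
    w, V = np.linalg.eigh(X)                    # eigenvalues in ascending order
    return np.sqrt(max(w[-1], 0.0)) * V[:, -1]  # scale top eigenvector
```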
If $\tilde{X} = \bar{x}\bar{x}^H$ cannot be guaranteed, the following bound becomes useful:

Theorem 5 (Bound on $\|X^* - \tilde{X}\|_1$) Let $\varepsilon < \frac{1}{1+\sqrt{2}}$ and assume $B(\cdot)$ to be an $(\varepsilon, 2k)$-RIP linear operator. Let $X^*$ be any matrix (sparse or dense) satisfying $b = B(X^*)$, $X^* \succeq 0$, $\mathrm{rank}(X^*) = 1$; let $\tilde{X}$ be the CPRL solution (5), and form $X_s$ from $X^*$ by setting all but the $k$ largest elements to zero. Then

$$\|X^* - \tilde{X}\|_1 \le \frac{2(1 + \rho)}{1 - \rho}\, \|X^* - X_s\|_1, \qquad (9)$$

with $\rho = \sqrt{2}\,\varepsilon / (1 - \varepsilon)$.

Given the RIP analysis, it may be the case that the linear operator $B(\cdot)$ does not satisfy the RIP property of Definition 1 well, as pointed out in [13]. In these cases, RIP-1 may be considered:

Definition 6 (RIP-1) A linear operator $B(\cdot)$ is $(\varepsilon, k)$-RIP-1 if $\left| \frac{\|B(X)\|_1}{\|X\|_1} - 1 \right| < \varepsilon$ for all matrices $X \ne 0$ with $\|X\|_0 \le k$.

Theorems 2 and 3 and Corollary 4 all hold with RIP replaced by RIP-1 and are not restated in detail here. Instead, we summarize the most important property in the following theorem:

Theorem 7 (Upper bound and recoverability through $\ell_1$) Let $\bar{x}$ be the sparsest solution to (1). The solution of CPRL (5), $\tilde{X}$, is equal to $\bar{x}\bar{x}^H$ if it has rank 1 and $B(\cdot)$ is $(\varepsilon, 2\|\tilde{X}\|_0)$-RIP-1 with $\varepsilon < 1$.

RIP-type arguments may be difficult to check for a given matrix and are more useful for claiming results for classes of matrices or linear operators. For instance, it has been shown that random Gaussian matrices satisfy the RIP with high probability; yet, given a realization of a random Gaussian matrix, it is difficult to check whether it actually satisfies the RIP. Two alternative arguments are spark [14] and mutual coherence [17, 11]. The spark condition usually gives tighter bounds but is known to be difficult to compute as well. Mutual coherence, on the other hand, may give less tight bounds but is more tractable. We will focus on mutual coherence, which is defined as follows:

Definition 8 (Mutual coherence) For a matrix $A$, define the mutual coherence as $\mu(A) = \max_{1 \le i, j \le n,\, i \ne j} \frac{|a_i^H a_j|}{\|a_i\|_2 \|a_j\|_2}$.

By an abuse of notation, let $B$ be the matrix satisfying $b = B X^s$, with $X^s$ the vectorized version of $X$. We are now ready to state the following theorem:

Theorem 9 (Recovery using mutual coherence) Let $\bar{x}$ be the sparsest solution to (1). The solution of CPRL (5), $\tilde{X}$, is equal to $\bar{x}\bar{x}^H$ if it has rank 1 and $\|\tilde{X}\|_0 < 0.5(1 + 1/\mu(B))$.

4 Numerical Implementation via ADMM

In addition to the above analysis of guaranteed recovery properties, a critical issue for practitioners is the availability of efficient numerical solvers. Several numerical solvers used in CS may be applied to nonsmooth SDPs, including interior-point methods (e.g., as used in CVX [19]), gradient projection methods [4], and augmented Lagrangian methods (ALM) [4]. However, interior-point methods are known to scale badly beyond moderate-sized convex problems in general. Gradient projection methods also fail to meaningfully accelerate the CPRL implementation due to the complexity of the projection operator. Alternatively, nonsmooth SDPs can be solved by ALM; however, the augmented primal and dual objective functions are still complex SDPs, which are equally expensive to solve in each iteration. In summary, as we demonstrate in Section 5, CPRL as a nonsmooth complex SDP is categorically more expensive to solve than the linear programs underlying CS, and the task exceeds the capability of many popular sparse optimization techniques.
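For moderate problem sizes, the nonsmooth SDP (8) can nonetheless be prototyped directly with a generic modeling tool before resorting to a custom solver. The sketch below uses CVXPY (an assumption of ours; the experiments in this paper use CVX in MATLAB), with $\lambda$ and $\epsilon$ as illustrative default values:

```python
import numpy as np
import cvxpy as cp

def cprl_sdp(A, b, lam=0.05, eps=1e-6):
    """Prototype of (8): min Tr(X) + lam*||X||_1 s.t. ||B(X) - b||_2 <= eps,
    X Hermitian PSD.  Interior-point solvers only scale to small n."""
    N, n = A.shape
    X = cp.Variable((n, n), hermitian=True)
    # B(X)_i = Tr(Phi_i X) with Phi_i = a_i a_i^H, where a_i^H is the i-th row of A
    BX = cp.hstack([cp.real(cp.trace(np.outer(A[i].conj(), A[i]) @ X))
                    for i in range(N)])
    objective = cp.Minimize(cp.real(cp.trace(X)) + lam * cp.sum(cp.abs(X)))
    cp.Problem(objective, [X >> 0, cp.norm(BX - b, 2) <= eps]).solve()
    return X.value
```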
In this paper, we propose a novel solver for the nonsmooth SDP underlying CPRL via the alternating direction method of multipliers (ADMM; see for instance [6] and [5, Sec. 3.4]). The motivation to use ADMM is two-fold: (1) it scales well to large data sets, and (2) it is known for its fast convergence. There are also a number of strong convergence results [6] which further motivate the choice. To set the stage for ADMM, rewrite (5) as the equivalent SDP

$$\min_{X_1, X_2, Z} f_1(X_1) + f_2(X_2) + g(Z) \quad \text{subj. to} \quad X_1 - Z = 0, \quad X_2 - Z = 0, \qquad (10)$$

where

$$f_1(X) \triangleq \begin{cases} \mathrm{Tr}(X) & \text{if } b_i = \mathrm{Tr}(\Phi_i X), \; i = 1, \ldots, N, \\ \infty & \text{otherwise}, \end{cases} \qquad f_2(X) \triangleq \begin{cases} 0 & \text{if } X \succeq 0, \\ \infty & \text{otherwise}, \end{cases} \qquad g(Z) \triangleq \lambda \|Z\|_1.$$

The update rules of ADMM now lead to the following:

$$X_i^{l+1} = \arg\min_X\; f_i(X) + \mathrm{Tr}\big(Y_i^l (X - Z^l)\big) + \tfrac{\rho}{2} \|X - Z^l\|_2^2,$$
$$Z^{l+1} = \arg\min_Z\; g(Z) + \sum_{i=1}^2 \Big( -\mathrm{Tr}(Y_i^l Z) + \tfrac{\rho}{2} \|X_i^{l+1} - Z\|_2^2 \Big), \qquad (11)$$
$$Y_i^{l+1} = Y_i^l + \rho\, (X_i^{l+1} - Z^{l+1}),$$

where $X_i$, $Y_i$, and $Z$ are constrained to stay in the domain of Hermitian matrices. Each of these steps has a tractable calculation. However, the $X_i$, $Y_i$, and $Z$ variables are complex-valued and, as most of the optimization literature deals with real-valued vectors and symmetric matrices, we will emphasize differences between the real and complex cases. After some simple manipulations, we have

$$X_1^{l+1} = \arg\min_X \big\|X - \big(Z^l - \tfrac{I + Y_1^l}{\rho}\big)\big\|_2 \quad \text{subj. to} \quad b_i = \mathrm{Tr}(\Phi_i X), \; i = 1, \ldots, N. \qquad (12)$$

Assuming that a feasible solution exists, and defining $\Pi_{\mathcal{A}}$ as the projection onto the convex set given by the linear constraints, the solution is $X_1^{l+1} = \Pi_{\mathcal{A}}\big(Z^l - \tfrac{I + Y_1^l}{\rho}\big)$. This optimization problem has a closed-form solution: converting the matrix problem in (12) into an equivalent vector problem yields one of the form $\min_x \|x - z\|_2$ subj. to $b = Ax$, whose answer is given by the pseudo-inverse of $A$, which can be precomputed. This complex-valued problem can be solved by converting the linear constraint on Hermitian matrices into an equivalent constraint on real-valued vectors. The conversion follows by noting that for $n \times n$ Hermitian matrices $A$ and $B$,

$$\langle A, B \rangle = \mathrm{Tr}(AB) = \sum_{i=1}^n \sum_{j=1}^n A_{ij} B_{ij} = \sum_{i=1}^n A_{ii} B_{ii} + \sum_{i=1}^n \sum_{j=i+1}^n \big( 2\,\mathrm{Re}(A_{ij})\,\mathrm{Re}(B_{ij}) + 2\,\mathrm{Im}(A_{ij})\,\mathrm{Im}(B_{ij}) \big).$$

So if we define the vector $A^v$ as an $n^2$-vector whose elements are $A_{ii}$ for $i = 1, \ldots, n$, $\sqrt{2}\,\mathrm{Re}(A_{ij})$ for $i = 1, \ldots, n$, $j = i+1, \ldots, n$, and $\sqrt{2}\,\mathrm{Im}(A_{ij})$ for $i = 1, \ldots, n$, $j = i+1, \ldots, n$, and similarly define $B^v$, then $\langle A, B \rangle = \langle A^v, B^v \rangle$. This turns the constraints $b_i = \mathrm{Tr}(\Phi_i X)$, $i = 1, \ldots, N$, into one of the form $b = [\Phi_1^v \cdots \Phi_N^v]^T X^v$, where each $\Phi_i^v$ is in $\mathbb{R}^{n^2}$. Thus, for this subproblem, the memory usage scales linearly with $N$, the number of measurements, and quadratically with $n$, the dimension of the data.

Next, $X_2^{l+1} = \arg\min_{X \succeq 0} \big\|X - \big(Z^l - \tfrac{Y_2^l}{\rho}\big)\big\|_2 = \Pi_{PSD}\big(Z^l - \tfrac{Y_2^l}{\rho}\big)$, where $\Pi_{PSD}$ denotes the projection onto the positive-semidefinite cone, which can easily be obtained via eigenvalue decomposition. This holds for both real-valued and complex-valued Hermitian matrices. Finally, let $\bar{X}^{l+1} = \tfrac{1}{2} \sum_{i=1}^2 X_i^{l+1}$ and define $\bar{Y}^l$ similarly. Then the $Z$ update rule can be written

$$Z^{l+1} = \arg\min_Z\; \lambda \|Z\|_1 + \rho\, \big\|Z - \big(\bar{X}^{l+1} + \tfrac{\bar{Y}^l}{\rho}\big)\big\|_2^2 = \mathrm{soft}\big(\bar{X}^{l+1} + \tfrac{\bar{Y}^l}{\rho},\, \tfrac{\lambda}{2\rho}\big). \qquad (13)$$

We note that the soft operator in the complex domain must be coded with care. One does not simply check the sign of the difference, as in the real case, but rather the magnitude of the complex number:
$$\mathrm{soft}(x, q) = \begin{cases} 0 & \text{if } |x| \le q, \\ \dfrac{|x| - q}{|x|}\, x & \text{otherwise}, \end{cases} \qquad (14)$$

where $q$ is a positive real number. Setting $l = 0$, the Hermitian matrices $X_i^l$, $Z^l$, $Y_i^l$ can now be iteratively computed using the ADMM iterations (11). The stopping criterion of the algorithm is given by

$$\|r^l\|_2 \le \sqrt{n}\,\epsilon^{abs} + \epsilon^{rel} \max\big(\|\bar{X}^l\|_2, \|Z^l\|_2\big), \qquad \|s^l\|_2 \le \sqrt{n}\,\epsilon^{abs} + \epsilon^{rel} \|\bar{Y}^l\|_2, \qquad (15)$$

where $\epsilon^{abs}$ and $\epsilon^{rel}$ are algorithm parameters set to $10^{-3}$, and $r^l$ and $s^l$ are the primal and dual residuals given by $r^l = (X_1^l - Z^l,\, X_2^l - Z^l)$ and $s^l = -\rho\, (Z^l - Z^{l-1},\, Z^l - Z^{l-1})$. We also update $\rho$ according to the rule discussed in [6]:

$$\rho^{l+1} = \begin{cases} \rho^l\, \tau_{incr} & \text{if } \|r^l\|_2 > \eta \|s^l\|_2, \\ \rho^l / \tau_{decr} & \text{if } \|s^l\|_2 > \eta \|r^l\|_2, \\ \rho^l & \text{otherwise}, \end{cases} \qquad (16)$$

where $\tau_{incr}$, $\tau_{decr}$, and $\eta$ are algorithm parameters; commonly used values are $\eta = 10$ and $\tau_{incr} = \tau_{decr} = 2$.

5 Experiment

The experiments in this section are chosen to illustrate the computational performance and scalability of CPRL. As this is one of the first papers addressing the CPR problem, existing methods available for comparison are limited. To the authors' best knowledge, the only methods developed for the CPR problem are the greedy algorithms presented in [25, 27, 28] and GCPRL [26]. The method proposed in [25] handles CPR but is tailored only to random 2D Fourier samples from a 2D array, and it is extremely sensitive to initialization; in fact, it fails to converge in our scenarios of interest. [27] formulates the CPR problem as a nonconvex optimization problem that can be solved through a series of convex problems. [28] proposes to alternate between fitting the estimate to the measurements and thresholding. GCPRL, which stands for greedy CPRL, is a new greedy approximate algorithm tailored to the lifting technique in (5). The algorithm draws inspiration from the matching-pursuit algorithm [22, 1]: in each iteration, it adds the new nonzero component of $x$ that decreases the CPRL objective function the most. We have observed that, if the number of nonzero elements in $x$ is expected to be low, the algorithm can successfully recover the ground-truth sparse signal while consuming less time than interior-point methods for the original SDP. (We have also tested an off-the-shelf toolbox that solves convex cone problems, called TFOCS [2]; unfortunately, TFOCS cannot be applied directly to the nonsmooth SDP in CPRL.) In general, greedy algorithms for solving CPR problems work well when a good guess for the true solution is available and are often computationally efficient, but they lack theoretical recovery guarantees. We also want to point out that CPRL becomes a special case of a more general framework that extends CS to nonlinear systems (see [1]). In general, nonlinear CS can be solved locally by greedy simplex pursuit algorithms; its instantiation in PR is the GCPRL algorithm. However, the key benefit of developing the SDP solution for PR in this paper is that global convergence can be guaranteed.

In this section, we compare implementations of CPRL using the interior-point method of CVX [19] and ADMM with the design parameters recommended in [6] ($\tau_{incr} = \tau_{decr} = 2$); $\eta = 10$ is used in all experiments. We also compare the results to GCPRL and to the PR algorithm PhaseLift [13]. The former is a greedy approximate solution, while the latter does not enforce sparsity and is obtained by setting $\lambda = 0$ in CPRL. In terms of the scale of the problem, the largest problem we have tested is a $30 \times 30$ image that is 100-sparse in the Fourier domain, with 2400 measurements. Our experiment is conducted on an IBM x3558 M3 server with two Xeon X5690 processors (6 cores each at 3.46 GHz, 12 MB L3 cache) and 96 GB of RAM.
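One implementation pitfall worth flagging before the results: the complex soft operator (14) shrinks the modulus while preserving the phase, so porting real-valued shrinkage code verbatim silently fails. A minimal NumPy sketch (function name ours):

```python
import numpy as np

def soft_complex(x, q):
    """Entrywise complex soft-thresholding, eq. (14): zero out entries with
    |x| <= q, otherwise scale by (|x| - q)/|x| so the phase is preserved."""
    mag = np.abs(x)
    # guard the division so exactly-zero entries do not produce NaN
    return np.where(mag <= q, 0.0, (mag - q) / np.maximum(mag, 1e-300) * x)
```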
The execution for recovering one instance takes approximately 36 hours to finish in the MATLAB environment, comprising several tens of thousands of iterations. The average memory usage is 3.5 GB.

5.1 A simple simulation

In this example we consider a simple CPR problem to illustrate the differences between CPRL, GCPRL, and PhaseLift. We also compare computational speed for solving the CPR problem and illustrate the theoretical bounds derived in Section 3. Let $x \in \mathbb{C}^{64}$ be a 2-sparse complex signal, $A \triangleq RF$ where $F \in \mathbb{C}^{64 \times 64}$ is the Fourier transform matrix and $R \in \mathbb{C}^{32 \times 64}$ is a random projection matrix (generated by sampling a unit complex Gaussian), and let the measurements $b$ satisfy the PR relation (1). The left plot of Figure 1 gives the recovered signal $x$ using CPRL, GCPRL, and PhaseLift. As seen, CPRL and GCPRL correctly identify the two nonzero elements in $x$, while PhaseLift fails to identify the true signal and gives a dense estimate. These results are rather typical (see the MCMC simulation in [26]). For very sparse examples like this one, CPRL and GCPRL often both succeed in finding the ground truth (even though we have twice as many unknowns as measurements). PhaseLift, on the other hand, does not favor sparse solutions and would need considerably more measurements to recover the 2-sparse signal. The middle plot of Figure 1 shows the computational time needed to solve the nonsmooth SDP of CPRL using CVX, ADMM, and GCPRL; ADMM is the fastest, and GCPRL outperforms CVX. The right plot of Figure 1 shows the mutual coherence bound $0.5(1 + 1/\mu(B))$ for a number of different $N$ and $n$, with $A \triangleq RF$, $F \in \mathbb{C}^{n \times n}$ the Fourier transform matrix, and $R \in \mathbb{C}^{N \times n}$ a random projection matrix. This is of interest since Theorem 9 states that when the CPRL solution $\tilde{X}$ satisfies $\|\tilde{X}\|_0 < 0.5(1 + 1/\mu(B))$ and has rank 1, then $\tilde{X} = \bar{x}\bar{x}^H$, where $\bar{x}$ is the sparsest solution to (1). From the plot it can be concluded that if the CPRL solution $\tilde{X}$ has rank 1 and only a single nonzero component, then for a choice of $n \le 125$ and $N \ge 5$, Theorem 9 guarantees that $\tilde{X} = \bar{x}\bar{x}^H$. We also observe that Theorem 9 is conservative, since we previously saw that 2 nonzero components could be recovered correctly for $n = 64$ and $N = 32$. In fact, numerical simulation can be used to show that $N = 30$ suffices to recover the ground truth in 95 out of 100 runs [26].

[Figure 1 omitted.] Figure 1: Left: The magnitude of the estimated signal provided by CPRL, GCPRL, and PhaseLift. Middle: The residual $\|\bar{x}\bar{x}^H - \tilde{X}\|_2$ plotted against time for ADMM (gray line), GCPRL (solid black line), and CVX (dashed black line). Right: A contour plot of the quantity $0.5(1 + 1/\mu(B))$; $\mu$ is taken as the average over 10 realizations of the data.

5.2 Compressive sampling and PR

One of the motivations of the presented work and CPRL is that it enables compressive sensing for PR problems. To illustrate this, consider the $20 \times 20$ complex image in Figure 2, Left. To measure the image, we could measure each pixel one by one; this would require us to sample 400 times.
What CS proposes is to measure linear combinations of samples rather than individual pixels. It has been shown that the original image can be recovered from far fewer samples than the total number of pixels in the image; the gain of CS is hence that fewer samples are needed. However, traditional CS only handles linear relations between measurements and unknowns. To extend CS to PR applications, consider again the complex image in Figure 2, Left, and assume that we can only measure intensities, or intensities of linear combinations of pixels. Let $R \in \mathbb{C}^{N \times 400}$ capture how intensity measurements $b$ are formed from linear combinations of pixels in the image, $b = |Rz|^2$ ($z$ is a vectorized version of the image). An essential part of CS is also to find a dictionary (possibly overcomplete) in which the image can be represented using only a few basis images. For classical CS applications, such dictionaries have been derived; for applying CS to PR applications, dictionaries are still needed and are a topic for future research. We use a 2D inverse Fourier transform dictionary in our example and arrange the basis vectors as columns in $F \in \mathbb{C}^{400 \times 400}$. If we choose $N = 400$, generate $R$ by sampling from a unit Gaussian distribution, and set $A = RF$, CPRL recovers the true image exactly. This is rather remarkable, since the PR relation (1) is nonlinear in the unknown $x$ and in general $N \gg n$ measurements are needed for a unique solution. If we instead sample the intensity of each pixel one by one, neither CPRL nor PhaseLift recovers the true image. If we set $A = R$ and do not care about finding a dictionary, we can use a classical PR algorithm to recover the true image; with PhaseLift, $N = 1600$ measurements are sufficient. The main reasons for the low number of samples needed by CPRL are that we managed to find a good dictionary (20 basis images were needed to recover the true image) and CPRL's ability to recover the sparsest solution. In fact, setting $A = RF$, PhaseLift still needs 1600 measurements to recover the true solution.

5.3 The Shepp-Logan phantom

In this last example, we again consider the recovery of complex-valued images from random samples. The motivation is twofold. First, it illustrates the scalability of the ADMM implementation; in fact, ADMM has to be used in this experiment, as CVX cannot handle the CPRL problem at this scale. Second, it illustrates that CPRL can provide approximate solutions that are visually close to the ground-truth images. Consider now the image in Figure 2, Middle Left. This $30 \times 30$ Shepp-Logan phantom has a 2D Fourier transform with 100 nonzero coefficients. We generate $N$ linear combinations of pixels as in the previous example, square the measurements, and then apply CPRL and PhaseLift with a 2D Fourier dictionary. The middle image in Figure 2 shows the recovered result using PhaseLift with $N = 2400$; the second image from the right shows the recovered result using CPRL with the same $N = 2400$; and the right image is the recovered result using CPRL with $N = 1500$. The number of measurements relative to the sparsity of $x$ is too low for both CPRL and PhaseLift to perfectly recover $z$. However, CPRL provides a much better approximation and outperforms PhaseLift visually, even though it uses considerably fewer measurements.
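The measurement pipeline of Sections 5.2 and 5.3 (sparse Fourier coefficients $x$, image $z = Fx$, intensities $b = |Rz|^2$) can be reproduced schematically as follows; the sparsity pattern, seed, and dictionary construction are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
w = 20                              # 20 x 20 image, as in Section 5.2
n, N = w * w, 400

F1 = np.fft.ifft(np.eye(w))         # 1D inverse DFT matrix
F = np.kron(F1, F1)                 # 2D inverse Fourier dictionary, C^{400 x 400}

# Unit complex Gaussian mixing matrix R in C^{N x 400}
R = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)

x = np.zeros(n, dtype=complex)      # sparse Fourier coefficients of the image
idx = rng.choice(n, size=20, replace=False)
x[idx] = rng.standard_normal(20) + 1j * rng.standard_normal(20)

z = F @ x                           # vectorized complex image
b = np.abs(R @ z) ** 2              # intensity measurements b = |Rz|^2, A = RF
```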
[Figure 2 omitted.] Figure 2: Left: Absolute value of the 2D inverse Fourier transform of $x$, $|Fx|$, used in the experiment in Section 5.2. Middle Left: Ground truth for the experiment in Section 5.3. Middle: Recovered result using PhaseLift with $N = 2400$. Middle Right: CPRL with $N = 2400$. Right: CPRL with $N = 1500$.

6 Future Directions

The SDP underlying CPRL scales badly with the number of unknowns or basis vectors in the dictionary. Therefore, learning a suitable dictionary for a specific application becomes even more critical than in the traditional linear CS setting. We also want to point out that when classical CS was first studied, many of today's accelerated numerical algorithms were not available. We are very excited about the new problem of improving the speed of SDP algorithms in sparse optimization, and hope our paper will foster the community's interest to address this challenge collaboratively. One interesting direction might be to use ADMM to solve the dual of (5); see for instance [30, 31]. Another possible direction is the outer approximation methods [21].

7 Acknowledgement

Ohlsson is partially supported by the Swedish foundation for strategic research in the center MOVIII, the Swedish Research Council in the Linnaeus center CADICS, the European Research Council under the advanced grant LEARN, contract 267381, a postdoctoral grant from the Sweden-America Foundation (donated by ASEA's Fellowship Fund), and a postdoctoral grant from the Swedish Research Council. Yang is supported by ARO 63092-MA-II. Dong is supported by the NSF Graduate Research Fellowship under grant DGE 1106400 and by the Team for Research in Ubiquitous Secure Technology (TRUST), which receives support from NSF (award number CCF-0424422). The authors also want to acknowledge useful input from Stephen Boyd and Yonina Eldar.

References

[1] A. Beck and Y. C. Eldar. Sparsity constrained nonlinear optimization: Optimality conditions and algorithms. Technical Report arXiv:1203.4580, 2012.
[2] S. Becker, E. Candès, and M. Grant. Templates for convex cone problems with applications to sparse signal recovery. Mathematical Programming Computation, 3(3), 2011.
[3] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss. Combining geometry and combinatorics: A unified approach to sparse signal recovery. In Communication, Control, and Computing, 46th Annual Allerton Conference on, pages 798-805, September 2008.
[4] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[5] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, 1997.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 2011.
[7] A. Bruckstein, D. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51(1):34-81, 2009.
[8] E. Candès. Compressive sampling. In Proceedings of the International Congress of Mathematicians, volume 3, pages 1433-1452, Madrid, Spain, 2006.
[9] E. Candès. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9-10):589-592, 2008.
[10] E. Candès, Y. Eldar, T. Strohmer, and V. Voroninski.
Phase retrieval via matrix completion. Technical Report arXiv:1109.0573, Stanford University, September 2011.
[11] E. Candès, X. Li, Y. Ma, and J. Wright. Robust Principal Component Analysis? Journal of the ACM, 58(3), 2011.
[12] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52:489-509, February 2006.
[13] E. Candès, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Technical Report arXiv:1109.4499, Stanford University, September 2011.
[14] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33-61, 1998.
[15] A. d'Aspremont, L. El Ghaoui, M. Jordan, and G. Lanckriet. A direct formulation for Sparse PCA using semidefinite programming. SIAM Review, 49(3):434-448, 2007.
[16] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, April 2006.
[17] D. Donoho and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$-minimization. PNAS, 100(5):2197-2202, March 2003.
[18] J. Fienup. Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint. Journal of the Optical Society of America A, 4(1):118-123, 1987.
[19] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, August 2010.
[20] D. Kohler and L. Mandel. Source reconstruction from the modulus of the correlation function: a practical approach to the phase problem of optical coherence theory. Journal of the Optical Society of America, 63(2):126-134, 1973.
[21] H. Konno, J. Gotoh, T. Uno, and A. Yuki. A cutting plane algorithm for semi-definite programming problems with applications to failure discriminant analysis. Journal of Computational and Applied Mathematics, 146(1):141-154, 2002.
[22] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397-3415, December 1993.
[23] S. Marchesini. Phase retrieval and saddle-point optimization. Journal of the Optical Society of America A, 24(10):3289-3296, 2007.
[24] R. Millane. Phase retrieval in crystallography and optics. Journal of the Optical Society of America A, 7:394-411, 1990.
[25] M. Moravec, J. Romberg, and R. Baraniuk. Compressive phase retrieval. In SPIE International Symposium on Optical Science and Technology, 2007.
[26] H. Ohlsson, A. Y. Yang, R. Dong, and S. Sastry. Compressive Phase Retrieval From Squared Output Measurements Via Semidefinite Programming. Technical Report arXiv:1111.6323, University of California, Berkeley, November 2011.
[27] Y. Shechtman, Y. C. Eldar, A. Szameit, and M. Segev. Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing. Opt. Express, 19(16):14807-14822, August 2011.
[28] A. Szameit, Y. Shechtman, E. Osherovich, E. Bullkich, P. Sidorenko, H. Dana, S. Steiner, E. B. Kley, S. Gazit, T. Cohen-Hyams, S. Shoham, M. Zibulevsky, I. Yavneh, Y. C. Eldar, O. Cohen, and M. Segev. Sparsity-based single-shot subwavelength coherent diffractive imaging. Nature Materials, 11(5):455-459, May 2012.
[29] A. Walther. The question of phase retrieval in optics. Optica Acta, 10:41-49, 1963.
[30] Z. Wen, D. Goldfarb, and W. Yin. Alternating direction augmented Lagrangian methods for semidefinite programming.
Mathematical Programming Computation, 2:203-230, 2010.
[31] Z. Wen, C. Yang, X. Liu, and S. Marchesini. Alternating direction methods for classical and ptychographic phase retrieval. Inverse Problems, 28(11):115010, 2012.
3D Social Saliency from Head-mounted Cameras

Hyun Soo Park (Carnegie Mellon University), [email protected]
Eakta Jain (Texas Instruments), [email protected]
Yaser Sheikh (Carnegie Mellon University), [email protected]

Abstract

A gaze concurrence is a point in 3D where the gaze directions of two or more people intersect. It is a strong indicator of social saliency because the attention of the participating group is focused on that point. In scenes occupied by large groups of people, multiple concurrences may occur and transition over time. In this paper, we present a method to construct a 3D social saliency field and locate multiple gaze concurrences that occur in a social scene from videos taken by head-mounted cameras. We model the gaze as a cone-shaped distribution emanating from the center of the eyes, capturing the variation of eye-in-head motion. We calibrate the parameters of this distribution by exploiting the fixed relationship between the primary gaze ray and the head-mounted camera pose. The resulting gaze model enables us to build a social saliency field in 3D. We estimate the number and 3D locations of the gaze concurrences via provably convergent mode-seeking in the social saliency field. Our algorithm is applied to reconstruct multiple gaze concurrences in several real world scenes and evaluated quantitatively against motion-captured ground truth.

1 Introduction

Scene understanding approaches have largely focused on understanding the physical structure of a scene: "what is where?" [1]. In social scenes, i.e., scenes occupied by people, this definition of understanding needs to be expanded to include interpreting what is socially salient in that scene, such as who people interact with, where they look, and what they attend to. While classic structural scene understanding is an objective interpretation of the scene (e.g., 3D reconstruction [2], object recognition [3], or human affordance identification [4]), social scene understanding is subjective, as it depends on the beholder and the particular group of people occupying the scene. For example, when we first enter a foyer during a party, we quickly look at different people and the groups they have formed, search for personal friends or acquaintances, and choose a group to join. Consider instead an artificial agent, such as a social robot, that enters the same room: how should it interpret the social dynamics of the environment? The subjectivity of social environments makes the identification of quantifiable and measurable representations of social scenes difficult. In this paper, we aim to recover a representation of saliency in social scenes that approaches objectivity through the consensus of multiple subjective judgements.

Humans transmit visible social signals about what they find important, and these signals are powerful cues for social scene understanding [5]. For instance, humans spontaneously orient their gaze to the target of their attention. When multiple people simultaneously pay attention to the same point in three-dimensional space, e.g., an obnoxious customer at a restaurant, their gaze rays (a gaze ray is a three-dimensional ray emitted from the center of the eyes and oriented to the point of regard, as shown in Figure 1(b)) converge to a point that we refer to as a gaze concurrence. Gaze concurrences are foci of the 3D social saliency field of a scene. It is an effective approximation because, although an individual's gaze indicates what he or she is subjectively interested in, a gaze concurrence encodes the consensus of multiple individuals.
In a scene occupied by a larger number of people, multiple such concurrences may emerge as social cliques form and dissolve. In this paper, we present a method to reconstruct a 3D social saliency field and localize 3D gaze concurrences from videos taken by head-mounted cameras worn by multiple people (Figure 1(a)). Our method automatically finds the number and location of gaze concurrences that may occur as people form social cliques in an environment.

[Figure 1 omitted.] Figure 1: (a) In this paper, we present a method to reconstruct 3D gaze concurrences from videos taken by head-mounted cameras. (b) The primary gaze ray is a fixed 3D ray with respect to the head coordinate system, and a gaze ray can be described by an offset with respect to the primary gaze ray. (c) The variation of the eye orientation is parameterized by a Gaussian distribution of points on the plane $\Delta$, which is normal to the primary gaze ray $l$ at unit distance from $p$. (d) The gaze ray model results in a cone-shaped distribution of the point of regard.

Why head-mounted cameras? Estimating 3D gaze concurrences requires accurate estimates of the gaze of people who are widely distributed over the social space. For a third person camera, i.e., an outside camera looking into a scene, state-of-the-art face pose estimation algorithms cannot produce reliable face orientation and location estimates beyond approximately 45 degrees from a head facing the camera directly [6]. Furthermore, as they are usually fixed, third person views introduce spatial biases (i.e., head pose estimates are better for people closer to and facing the camera) and limit the operating space. In contrast, head-mounted cameras instrument people rather than the scene; one camera is used to estimate each head pose. As a result, 3D pose estimation of head-mounted cameras provides accurate and spatially unbiased estimates of the primary gaze ray (the primary gaze ray is a fixed eye orientation with respect to the head; it has been shown that this orientation is a unique pose, independent of gravity, head posture, horizon, and the fusion reflex [7]). Head-mounted cameras are poised to broadly enter our social spaces, and many collaborative teams (such as search and rescue teams [8], police squads, military patrols, and surgery teams [9]) are already required to wear them. Head-mounted camera systems are becoming increasingly smaller and will soon be seamlessly integrated into daily life [10].

Contributions The core contribution of this paper is an algorithm to estimate the 3D social saliency field of a scene and its modes from head-mounted cameras, as shown in Figure 1(a). This is enabled by a new model of gaze rays that represents the variation due to eye-in-head motion via a cone-shaped distribution. We present a novel method to calibrate the parameters of this model by leveraging the fact that the primary gaze ray is fixed with respect to the head-mounted camera in 3D. Given the collection of gaze ray distributions in 3D space, we automatically estimate the number and 3D locations of multiple gaze concurrences via mode-seeking in the social saliency field. We prove that the sequence of mode-seeking iterations converges.
We evaluate our algorithm quantitatively using motion capture data and apply it to real world scenes where social interactions frequently occur, such as meetings, parties, and theatrical performances.

2 Related Work

Humans transmit and respond to many different social signals when they interact with others. Among these signals, gaze direction is one of the most prominent visible signals because it usually indicates what the individual is interested in. In this context, gaze direction estimation has been widely studied in robotics, human-computer interaction, and computer vision [6, 11-22]. Gaze direction can be precisely estimated from the eye orientation. Wang and Sung [11] presented a system that estimates the direction of the iris circle from a single image using the geometry of the iris. Guestrin and Eizenman [12] and Hennessey and Lawrence [13] utilized corneal reflections and the vergence of the eyes to infer the eye geometry and its motion, respectively. A head-mounted eye tracker is often used to determine the eye orientation [14, 15]. Although all these methods can estimate highly accurate gaze direction, they either must be used in a laboratory setting or the device occludes the viewer's field of view.

While the eyes are the primary source of gaze direction, Emery [16] notes that the head orientation is a strong indication of the direction of attention. For head orientation estimation, there are two approaches: outside-in and inside-out [23]. An outside-in system takes, as input, a third person image from a particular vantage point and estimates face orientation based on a face model; Murphy-Chutorian and Trivedi [6] have summarized this approach. Geometric modeling of the face has been used to orient the head by Gee and Cipolla [17] and Ballard and Stockman [18]. Rae and Ritter [19] estimated the head orientation via neural networks, and Robertson and Reid [20] presented a method to estimate face orientation by learning 2D face features from different views in a low resolution video. With these approaches, a large number of cameras would need to be placed to cover a space large enough to contain all people. Also, the size of faces in these videos is often small, leading to biased head pose estimation depending on the distance from the camera. Instead of the outside-in approach, an inside-out approach estimates head orientation directly from a head-mounted camera looking out at the environment. Munn and Pelz [22] and Takemura et al. [15] estimated the head-mounted camera motion in 3D by feature tracking and visual SLAM, respectively. Pirri et al. [24] presented a gaze calibration procedure based on the eye geometry using four head-mounted cameras. We adopt an inside-out approach, as it does not suffer from space limitations or biased estimation.

Gaze in a group setting has been used to identify social interaction or to measure social behavior. Stiefelhagen [25] and Smith et al. [26] estimated the point of interest in a meeting scene and a crowd scene, respectively. Bazzani et al. [27] introduced a 3D representation of the visual field of view, which enabled them to locate the convergence of views. Cristani et al. [28] adopted the F-formation concept, which enumerates all possible spatial and orientation configurations of people, to define the region of interest.
However, these methods rely on data captured from the third person view point, i.e., outside-in systems; therefore, their capture space is limited and the accuracy of head pose estimation degrades with distance from the camera. Our method is not subject to the same limitations. As an inside-out approach, Fathi et al. [29] present a method that uses a single first person camera to recognize discrete interactions within the wearer's immediate social clique. Their method is complementary to ours, as it analyzes the faces within a single person's field of view; in contrast, our approach analyzes an entire environment where several social cliques may form or dissolve over time.

3 Method

The videos from the head-mounted cameras are collected and reconstructed in 3D via structure from motion. Each person wears a camera on the head and performs a predefined motion for gaze ray calibration based on our gaze ray model (Section 3.1). After the calibration (Section 3.2), they may move freely and interact with other people. From the reconstructed camera poses, in conjunction with the gaze ray model, we estimate multiple gaze concurrences in 3D via mode-seeking (Section 3.3). Our camera pose registration in 3D is based on structure from motion as described in [2, 30, 31]. We first scan the area of interest (for example, the room or the auditorium) with a camera to reconstruct the reference structure. The 3D poses of the head-mounted cameras are recovered relative to the reference structure using a RANSAC [32] embedded Perspective-n-Point algorithm [33]. When some camera poses cannot be reconstructed because of lack of features or motion blur, we interpolate the missing camera poses based on the epipolar constraint between consecutive frames.

3.1 Gaze Ray Model

We represent the direction of the viewer's gaze as a 3D ray that is emitted from the center of the eyes and is directed towards the point of regard, as shown in Figure 1(b). The center of the eyes is fixed with respect to the head position; therefore, the orientation of the gaze ray in the world coordinate system is a composite of the head orientation and the eye orientation (eye-in-head motion). A head-mounted camera does not contain sufficient information to estimate the gaze ray, because it can capture only the head position and orientation but not the eye orientation. However, when the motion of the point of regard is stabilized, i.e., when the point of regard is stationary or slowly moving with respect to the head pose, the eye orientation varies by a small degree [34-36] from the primary gaze ray. We represent the variation of the gaze ray with respect to the primary gaze ray by a Gaussian distribution on a plane normal to the primary gaze ray; the point of regard (and consequently, the gaze ray) is more likely to be near the primary gaze ray.

[Figure 2 omitted.] Figure 2: (a) We parameterize our cone, $\Omega$, with an apex, $p$, and the ratio, $\eta$, of the radius, $r$, to the height, $h$. (b) An apex can lie on the orange colored half line, i.e., behind $p_0$; otherwise some of the points are invisible. (c) An apex can be parameterized as $p = p_0 - \theta v$ where $\theta > 0$. Equation (2) allows us to locate the apex accurately.

Let us define the primary gaze ray $l$ by the center of the eyes $p \in \mathbb{R}^3$ and the unit direction vector $v \in \mathbb{R}^3$ in the world coordinate system, $W$, as shown in Figure 1(b).
Any point on the primary gaze ray can be written as $p + \mu v$ with $\mu > 0$. Let $\Delta$ be a plane normal to the primary gaze ray $l$ at unit distance from $p$, as shown in Figure 1(c). A point $d$ in $\Delta$ can be written as $d = \delta_1 v_1^\perp + \delta_2 v_2^\perp$, where $v_1^\perp$ and $v_2^\perp$ are two vectors orthogonal to $v$, and $\delta_1$ and $\delta_2$ are scalars drawn from a Gaussian distribution, i.e., $\delta_1, \delta_2 \sim \mathcal{N}(0, \sigma^2)$. This point $d$ corresponds to a ray $l_d$ in 3D. Thus, the distribution of points on the plane maps to a distribution of gaze rays by parameterizing the 3D ray as $l_d(p, v_d) = p + \mu v_d$, where $v_d = v + d$ and $\mu > 0$. The resulting distribution of 3D points of regard is a cone-shaped distribution whose central axis is the primary gaze ray, i.e., the point distribution on any plane normal to the primary gaze ray is a scaled Gaussian centered at the intersection between $l$ and the plane, as shown in Figure 1(d).

3.2 Gaze Ray Calibration Algorithm

When a person wears a head-mounted camera, it may not be aligned with the direction of the primary gaze ray; in general, its center does not coincide with the center of the eyes either, as shown in Figure 1(d). The orientation and position offsets between the head-mounted camera and the primary gaze ray must be calibrated to estimate where the person is looking. The relative transform between the primary gaze ray and the camera pose is constant across time, because the camera is, for the most part, stationary with respect to the head. Once the relative transform and camera pose have been estimated, the primary gaze ray can be recovered.

We learn the primary gaze ray parameters, $p$ and $v$, with respect to the camera pose, and the standard deviation $\sigma$ of eye-in-head motion. We ask people to form pairs and instruct each pair to look at each other's camera; while doing so, they are asked to move back and forth and side to side. Suppose two people A and B form a pair. If the cameras of A and B are temporally synchronized and reconstructed in 3D simultaneously, the camera center of B is the point of regard of A. Let $y^W$ (the camera center of B) be the point of regard of A, and let $R$ and $C$ be the camera orientation and camera center of A, respectively; $y^W$ is represented in the world coordinate system, $W$. We can transform $y^W$ to A's camera-centered coordinate system, $H$, by $y = R y^W - RC$. From $\{y_i\}_{i=1,\ldots,k}$, where $k$ is the number of points of regard, we can infer the primary gaze ray parameters with respect to the camera pose. If there were no eye-in-head motion, all $\{y_i\}_{i=1,\ldots,k}$ would form a line, which is the primary gaze ray. Due to the eye-in-head motion, $\{y_i\}_{i=1,\ldots,k}$ are instead contained in a cone whose central axis is the direction of the primary gaze ray, $v$, and whose apex is the center of the eyes, $p$.

We first estimate the primary gaze line and then find the center of the eyes on the line to completely describe the primary gaze ray. To estimate the primary gaze line robustly, we embed two-point line estimation in the RANSAC framework [32] (a 3D line is estimated by randomly selecting two points at each iteration and keeping the line that produces the maximum number of inlier points). This yields a 3D line, $l(p_a, v)$, where $p_a$ is the projection of the camera center onto the line and $v$ is the direction vector of the line. The projections of $\{y_i\}_{i=1,\ldots,k}$ onto the line are distributed on a half line with respect to $p_a$, which allows us to determine the sign of $v$. Given this line, we find a 3D cone, $\Omega(p, \eta)$, that encapsulates
Figure 3: (a) $\bar{x}$ is the projection of $x$ onto the primary gaze ray $l_i$, and $d$ is the perspective distance vector defined in Equation (4). (b) Our gaze ray representation results in a cone-shaped distribution in 3D. (c) Two gaze concurrences are formed by seven gaze rays. High density is observed around the intersections of rays. Note that the maximum intensity projection is used to visualize the 3D density field. Our mean-shift algorithm allows any random points to converge to the highest density point accurately.

Given this line, we find a 3D cone, $C(p, \eta)$, that encapsulates all $\{y_i\}_{i=1,\dots,N}$, where $p$ is the apex and $\eta$ is the ratio of the radius to the height, as shown in Figure 2(a). The apex can lie on a half line which originates from the closest point, $p_0$, to the center of the eyes and is oriented in the $-v$ direction; otherwise some $y_i$ are invisible. In Figure 2(b), the apex must lie on the orange half line. $p_0$ can be obtained as follows:

$$p_0 = p_c + \min\{v^T(y_1 - p_c), \dots, v^T(y_N - p_c)\}\, v. \qquad (1)$$

Then the apex can be written as $p = p_0 - \lambda v$ where $\lambda > 0$, as shown in Figure 2(c). There are an infinite number of cones which contain all points; e.g., any apex far enough behind all points can be a solution. Among these solutions, we want to find the tightest cone, where the minimum of $\eta$ is achieved. This alone leads to a degenerate solution where $\eta = 0$ and $\lambda = \infty$. We add a regularization term to avoid the $\lambda = \infty$ solution. The minimization can be written as

$$\begin{aligned} \underset{\eta,\,\lambda}{\text{minimize}} \quad & \eta + \beta\lambda \\ \text{subject to} \quad & \frac{a_i}{b_i + \lambda} < \eta, \quad i = 1, \dots, N, \\ & \lambda > 0, \end{aligned} \qquad (2)$$

where $a_i = \|(I - vv^T)(y_i - p_0)\|$ and $b_i = v^T(y_i - p_0)$ (Figure 2(c)), which are all known once $v$ and $p_0$ are known. $a_i/(b_i + \lambda) < \eta$ is the constraint that the cone encapsulates all points of regard $\{y_i\}_{i=1,\dots,N}$, and $\lambda > 0$ is the condition that the apex must be behind $p_0$. $\beta$ is a parameter that controls how far the apex is from $p_0$. Equation (2) is a convex optimization problem (see Appendix in the supplementary material). Once the cone $C(p, \eta)$ is estimated from $\{y_i\}_{i=1,\dots,N}$, the standard deviation of the distance, $\sigma = \mathrm{std}\{\|d(l, y_i)\|\}_{i=1,\dots,N}$, will be used in Equation (3) as the bandwidth for the kernel density function.
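Since, for any fixed $\lambda$, the smallest feasible $\eta$ is $\max_i a_i/(b_i + \lambda)$, Equation (2) can be reduced to a one-dimensional search over $\lambda$. The sketch below exploits this reduction in place of a general convex solver; it is our illustration, and the upper bound on $\lambda$ is an assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_cone(y, p_c, v, beta=1e-2, lam_max=100.0):
    """Solve Equation (2): the tightest cone containing points of regard y (N, 3).

    Returns (apex p, opening ratio eta)."""
    t = (y - p_c) @ v
    p0 = p_c + t.min() * v                            # Equation (1)
    r = y - p0
    b = r @ v                                         # heights along the axis (b_i >= 0)
    a = np.linalg.norm(r - b[:, None] * v, axis=1)    # radii off the axis
    # For fixed lambda, the smallest feasible eta is max_i a_i / (b_i + lambda).
    obj = lambda lam: np.max(a / (b + lam)) + beta * lam
    res = minimize_scalar(obj, bounds=(1e-6, lam_max), method="bounded")
    lam = res.x
    return p0 - lam * v, np.max(a / (b + lam))
```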
3.3 Gaze Concurrence Estimation via Mode-seeking

3D gaze concurrences are formed at the intersections of multiple gaze rays, not at the intersection of multiple primary gazes (see Figure 1(b)). If we knew the 3D gaze rays, and which rays shared a gaze concurrence, the point of intersection could be estimated directly via least squares, for example. In our setup, neither of these is known, nor do we know the number of gaze concurrences. With a head-mounted camera, only the primary gaze ray is computable; the eye-in-head motion is an unknown quantity. This precludes estimating a 3D gaze concurrence by directly finding a point of intersection. In this section, we present a method to estimate the number and the 3D locations of gaze concurrences given primary gaze rays.

Our observations from head-mounted cameras are primary gaze rays. The gaze ray model discussed in Section 3.1 produces a distribution of points of regard for each primary gaze ray. The superposition of these distributions yields a 3D social saliency field. We seek modes in this saliency field via a mean-shift algorithm. The modes correspond to the gaze concurrences.

The mean-shift algorithm [37] finds the modes by evaluating the weights between the current mean and observed points. We derive the closed form of the mean-shift vector directly from the observed primary gaze rays. While the observations are rays, the estimated modes are points in 3D. This formulation differs from the classic mean-shift algorithm, where the observations and the modes lie in the same space.

For any point in 3D, $x \in \mathbb{R}^3$, a density function (the social saliency field), $F$, is generated by our gaze ray model. $F$ is the average of Gaussian kernel density functions $\phi$ which evaluate the distance vector between the point, $x$, and the primary gaze rays $l_i$:

$$F(x) = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{\sigma_i}\,\phi\!\left(\frac{d(l_i, x)}{\sigma_i}\right) = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{\sigma_i\sqrt{2\pi}}\exp\!\left(-\frac{\|d(l_i, x)\|^2}{2\sigma_i^2}\right) = \frac{c}{N}\sum_{i=1}^{N}\frac{1}{\sigma_i}\,k\!\left(\frac{\|d(l_i, x)\|^2}{\sigma_i^2}\right), \qquad (3)$$

where $N$ is the number of gaze rays and $\sigma_i$ is a bandwidth set to the standard deviation of eye-in-head motion obtained from the gaze ray calibration (Section 3.2) for the $i$th gaze ray. $k$ is the profile of the kernel density function, i.e., $\phi(x) = c\,k(\|x\|^2)$, and $c$ is a scaling constant. $d \in \mathbb{R}^3$ is a perspective distance vector defined as

$$d(l_i(p_i, v_i), x) = \begin{cases} \dfrac{x - \bar{x}}{v_i^T(x - p_i)} & \text{for } v_i^T(x - p_i) \ge 0, \\[4pt] \infty & \text{otherwise,} \end{cases} \qquad (4)$$

where $\bar{x} = p_i + v_i^T(x - p_i)\,v_i$, which is the projection of $x$ onto the primary gaze ray as shown in Figure 3(a). $p_i$ is the center of the eyes and $v_i$ is the direction vector for the $i$th primary gaze ray. Note that when $v_i^T(x - p_i) < 0$, the point is behind the eyes, and therefore is not visible. This distance vector directly captures the distance between $l$ and $l_d$ in the gaze ray model (Section 3.1), and therefore this kernel density function yields a cone-shaped density field (Figure 1(d) and Figure 3(b)). Figure 3(c) shows a social saliency field (density field) generated by seven gaze rays. The regions of high density are the gaze concurrences. Note that the maximum intensity projection of the density field is used to illustrate the 3D density field.

The updated mean is the location where the maximum density increase can be achieved from the current mean. Thus, it moves along the gradient direction of the density function evaluated at the current mean. The gradient of the density function, $F(x)$, is

$$\nabla_x F(x) = -\frac{2c}{N}\sum_{i=1}^{N}\frac{1}{\sigma_i^3}\, g\!\left(\frac{\|d(l_i,x)\|^2}{\sigma_i^2}\right) \left(\nabla_x\, d(l_i,x)\right)^T d(l_i,x) = \frac{2c}{N}\left[\sum_{i=1}^{N} w_i\right]\left[\frac{\sum_{i=1}^{N} w_i\,\tilde{x}_i}{\sum_{i=1}^{N} w_i} - x\right], \qquad (5)$$

where

$$w_i = \frac{1}{\sigma_i^3\left(v_i^T(x - p_i)\right)^2}\, g\!\left(\frac{\|d(l_i,x)\|^2}{\sigma_i^2}\right), \qquad \tilde{x}_i = \bar{x} + \frac{\|x - \bar{x}\|^2}{v_i^T(x - p_i)}\, v_i,$$

and $g(\cdot) = -k'(\cdot)$. $\tilde{x}_i$ is the location that the gradient at $x$ points to with respect to $l_i$, as shown in Figure 3(a). Note that the gradient direction at $x$ is perpendicular to the ray connecting $x$ and $p_i$. The last term of Equation (5) is the difference between the current mean estimate and the weighted mean. The new mean location, $x^{t+1}$, is obtained by adding this difference to the current mean estimate, $x^t$:

$$x^{t+1} = \frac{\sum_{i=1}^{N} w_i\,\tilde{x}_i}{\sum_{i=1}^{N} w_i}. \qquad (6)$$

Figure 3(c) shows how our mean-shift vector moves random initial points according to the gradient information. The mean-shift algorithm always converges, as shown in the following theorem.

Theorem 1 The sequence $\{F(x^t)\}_{t=1,2,\dots}$ provided by Equation (6) converges to a local maximum of the density field.

See Appendix in the supplementary material for the proof.
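A minimal sketch of this update for the Gaussian profile (so $g(t) \propto e^{-t/2}$) follows; it is our illustrative implementation of Equations (4)-(6), with the convergence tolerance and numerical guards chosen arbitrarily.

```python
import numpy as np

def seek_mode(x, P, V, sigmas, n_iters=200, tol=1e-6):
    """Run the ray-based mean-shift update of Equation (6) from a start point x.

    P, V   : (N, 3) eye centers p_i and unit directions v_i of the primary rays
    sigmas : (N,) per-ray bandwidths from calibration
    """
    for _ in range(n_iters):
        u = x - P                                  # (N, 3)
        h = np.einsum('ij,ij->i', u, V)            # heights v_i^T (x - p_i)
        vis = h > 1e-9                             # points behind the eyes have d = infinity
        h_safe = np.maximum(h, 1e-9)
        xbar = P + h[:, None] * V                  # projections of x onto each ray
        perp = x - xbar                            # x - xbar, perpendicular to v_i
        d2 = np.sum(perp**2, axis=1) / h_safe**2   # ||d(l_i, x)||^2
        g = 0.5 * np.exp(-0.5 * d2 / sigmas**2)    # Gaussian profile: g(t) = -k'(t)
        w = np.where(vis, g / (sigmas**3 * h_safe**2), 0.0)
        if w.sum() < 1e-12:
            break
        x_tilde = xbar + (np.sum(perp**2, axis=1) / h_safe)[:, None] * V
        x_new = (w[:, None] * x_tilde).sum(axis=0) / w.sum()   # Equation (6)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

In practice one would run this from several samples along each ray and cluster the converged points, rejecting modes supported by a single ray, as described in Section 4.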
4 Result

We evaluate our algorithm quantitatively using a motion capture system to provide ground truth, and we apply it to real-world examples where social interactions frequently occur. We use GoPro HD Hero2 cameras (www.gopro.com) with the head mounting unit provided by GoPro. We synchronize the cameras using audio signals, e.g., a clap. In the calibration step, we ask people to form pairs, and to move back and forth and side to side at least three times, to allow the gaze ray model to be accurately estimated. For the initial points of the mean-shift algorithm, we sample several points on the primary gaze rays. This sampling results in convergence of the mean-shift because the local maxima form around the rays. If the weights of an estimated mode are dominated by only one gaze, we reject the mode, i.e., more than one gaze ray must contribute to a gaze concurrence.

4.1 Validation with Motion Capture Data

We compare the 3D gaze concurrences estimated by our method with ground truth obtained from a motion capture system (capture volume: 8.3m x 17.7m x 4.3m). We attached several markers to a camera and reconstructed the camera motion using structure from motion and the motion capture system simultaneously. From the reconstructed camera trajectory, we recovered the similarity transform (scale, orientation, and translation) between the two reconstructions. We placed two static markers and asked six people to move freely while looking at the markers. Therefore, the 3D gaze concurrences estimated by our algorithm should coincide with the 3D positions of the static markers. The top row in Figure 4(a) shows the trajectories of the gaze concurrences (solid lines) overlaid by the static marker positions (dotted lines). The mean error is 10.1cm with 5.73cm standard deviation. The bottom row in Figure 4(a) shows the gaze concurrences (orange and red points) with the ground truth positions (green and blue points) and the confidence regions (pink regions) where a high value of the saliency field is achieved (regions with more than 80% of the local maximum value). The ground truth locations are always inside these regions.
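The similarity alignment between the two reconstructions can be computed in closed form from corresponding camera positions; the sketch below uses the standard Umeyama/Procrustes solution, which we substitute here for whatever exact procedure the authors used.

```python
import numpy as np

def similarity_transform(X, Y):
    """Find scale s, rotation R, translation t minimizing ||Y - (s R X + t)||.

    X, Y : (N, 3) corresponding camera positions in the two reconstructions.
    """
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    U, S, Vt = np.linalg.svd(Yc.T @ Xc / len(X))   # cross-covariance of the two point sets
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                  # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / Xc.var(axis=0).sum()
    t = my - s * R @ mx
    return s, R, t
```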
4.2 Real World Scenes

We apply our method to reconstruct 3D gaze concurrences in three real-world scenes: a meeting, a musical, and a party. Figures 4(b), 5(a), and 5(b) show the reconstructed gaze concurrences and the projections of the 3D gaze concurrences onto the head-mounted camera plane (top row). 3D renderings of the gaze concurrences (red dots) with the associated confidence region (salient region) are drawn in the middle row, and the cone-shaped gaze ray models are also shown. The trajectories of the gaze concurrences are shown in the bottom row. The transparency of the trajectories encodes the timing.

Meeting scene: There were 11 people forming two groups: 6 in one group and 5 in the other, as shown in Figure 4(b). The people in each group started to discuss among themselves at the beginning (2 gaze concurrences). After a few minutes, all the people faced the presenter in the middle (50th frame: 1 gaze concurrence), and then they went back to their groups to discuss again (445th frame: 2 gaze concurrences), as shown in Figure 4(b).

Musical scene: 7 audience members wore head-mounted cameras and watched the song "Summer Nights" from the musical Grease. There were two groups of actors, "the pink ladies" (women's group) and "the T-birds" (men's group), and they sang the song alternatingly, as shown in Figure 5(a). In the figure, we show the reconstruction of two frames: when the pink ladies sang (41st frame) and when the T-birds sang (390th frame).

Party scene: There were 11 people forming 4 groups: 3 sat on couches, 3 talked to each other at the table, 3 played table tennis, and 2 played pool (178th frame: 4 gaze concurrences), as shown in Figure 5(b). Then, all moved to watch the table tennis game (710th frame: one gaze concurrence). Our method correctly evaluates the gaze concurrences at the locations where people look. All results are best seen in the videos from the project website (http://www.cs.cmu.edu/~hyunsoop/gaze_concurrence.html).

5 Discussion

In this paper, we present a novel representation for social scene understanding in terms of 3D gaze concurrences. We model individual gazes as a cone-shaped distribution that captures the variation of the eye-in-head motion. We reconstruct the head-mounted camera poses in 3D using structure from motion and estimate the relationship between the camera pose and the gaze ray. Our mode-seeking algorithm finds the multiple time-varying gaze concurrences in 3D. We show that our algorithm can accurately estimate the gaze concurrences.

When people's gaze rays are almost parallel, as in the musical scene (Figure 5(a)), the estimated gaze concurrences become poorly conditioned. The confidence region is stretched along the direction of the primary gaze rays. This is the case where the point of regard is very far away while people look at it from almost the same vantage point. For such a scene, head-mounted cameras from different points of view can help to localize the gaze concurrences precisely.

Recognizing gaze concurrences is critical to collaborative activity. A future application of this work is to use gaze concurrence to allow artificial agents, such as robots, to become collaborative team members that recognize and respond to social cues, rather than passive tools that require prompting. The ability to objectively measure gaze concurrences in 3D will also enable new investigations into social behavior, such as group dynamics, group hierarchies, and gender interactions, and research into behavioral disorders, such as autism.

Figure 4: (a) Top: the solid lines (orange and red) are the trajectories of the gaze concurrences and the dotted lines (green and blue) are the ground truth marker positions. The colored bands are one standard deviation wide and are centered at the trajectory means. Bottom: there are two gaze concurrences with six people. (b) We reconstruct the gaze concurrences for the meeting scene. 11 head-mounted cameras were used to capture the scene. Top row: images with the reprojection of the gaze concurrences; middle row: rendering of the 3D gaze concurrences with cone-shaped gaze models; bottom row: the trajectories of the gaze concurrences.

Figure 5: (a) We reconstruct the gaze concurrences from musical audiences. 7 head-mounted cameras were used to capture the scene. (b) We reconstruct the gaze concurrences for the party scene. 11 head-mounted cameras were used to capture the scene. Top row: images with the reprojection of the gaze concurrences; bottom row: rendering of the 3D gaze concurrences with cone-shaped gaze models.
We are interested in studying the spatiotemporal characteristics of the birth and death of gaze concurrences and how they relate to the groups in the scene.

Acknowledgement

This work was supported by a Samsung Global Research Outreach Program, Intel ISTC-EC, NSF IIS 1029679, and NSF RI 0916272. We thank Jessica Hodgins, Irfan Essa, and Takeo Kanade for comments and suggestions on this work.

References

[1] D. Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H. Freeman, 1982.
[2] N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: Exploring photo collections in 3D. TOG, 2006.
[3] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In CVPR, 2003.
[4] A. Gupta, S. Satkin, A. A. Efros, and M. Hebert. From scene geometry to human workspace. In CVPR, 2011.
[5] A. Vinciarelli, M. Pantic, and H. Bourlard. Social signal processing: Survey of an emerging domain. Image and Vision Computing, 2009.
[6] E. Murphy-Chutorian and M. M. Trivedi. Head pose estimation in computer vision: A survey. TPAMI, 2009.
[7] R. S. Jampel and D. X. Shi. The primary position of the eyes, the resetting saccade, and the transverse visual head plane: head movements around the cervical joints. Investigative Ophthalmology and Visual Science, 1992.
[8] R. R. Murphy. Human-robot interaction in rescue robotics. IEEE Trans. on Systems, Man and Cybernetics, 2004.
[9] S. Marks, B. Wünsche, and J. Windsor. Enhancing virtual environment-based surgical teamwork training with non-verbal communication. In GRAPP, 2009.
[10] N. Bilton. A rose-colored view may come standard: Google Glass. The New York Times, April 2012.
[11] J.-G. Wang and E. Sung. Study on eye gaze estimation. IEEE Trans. on Systems, Man and Cybernetics, 2002.
[12] E. D. Guestrin and M. Eizenman. General theory of remote gaze estimation using the pupil center and corneal reflection. IEEE Trans. on Biomedical Engineering, 2006.
[13] C. Hennessey and P. Lawrence. 3D point-of-gaze estimation on a volumetric display. In ETRA, 2008.
[14] D. Li, J. Babcock, and D. J. Parkhurst. openEyes: a low-cost head-mounted eye-tracking solution. In ETRA, 2006.
[15] K. Takemura, Y. Kohashi, T. Suenaga, J. Takamatsu, and T. Ogasawara. Estimating 3D point-of-regard and visualizing gaze trajectories under natural head movements. In ETRA, 2010.
[16] N. J. Emery. The eyes have it: the neuroethology, function and evolution of social gaze. Neuroscience and Biobehavioral Reviews, 2000.
[17] A. H. Gee and R. Cipolla. Determining the gaze of faces in images. Image and Vision Computing, 1994.
[18] P. Ballard and G. C. Stockman. Controlling a computer via facial aspect. IEEE Trans. on Systems, Man and Cybernetics, 1995.
[19] R. Rae and H. J. Ritter. Recognition of human head orientation based on artificial neural networks. IEEE Trans. on Neural Networks, 1998.
[20] N. M. Robertson and I. D. Reid. Estimating gaze direction from low-resolution faces in video. In ECCV, 2006.
[21] B. Noris, K. Benmachiche, and A. G. Billard. Calibration-free eye gaze direction detection with Gaussian processes. In GRAPP, 2006.
[22] S. M. Munn and J. B. Pelz. 3D point-of-regard, position and head orientation from a portable monocular video-based eye tracker. In ETRA, 2008.
[23] G. Welch and E. Foxlin. Motion tracking: no silver bullet, but a respectable arsenal. IEEE Computer Graphics and Applications, 2002.
[24] F. Pirri, M. Pizzoli, and A. Rudi. A general method for the point of regard estimation in 3D space. In CVPR, 2011.
[25] R. Stiefelhagen, M. Finke, J. Yang, and A. Waibel. From gaze to focus of attention. In VISUAL, 1999.
[26] K. Smith, S. O. Ba, J.-M. Odobez, and D. Gatica-Perez. Tracking the visual focus of attention for a varying number of wandering people. TPAMI, 2008.
[27] L. Bazzani, D. Tosato, M. Cristani, M. Farenzena, G. Paggetti, G. Menegaz, and V. Murino. Social interactions by visual focus of attention in a three-dimensional environment. Expert Systems, 2011.
[28] M. Cristani, L. Bazzani, G. Paggetti, A. Fossati, D. Tosato, A. Del Bue, G. Menegaz, and V. Murino. Social interaction discovery by statistical analysis of F-formations. In BMVC, 2011.
[29] A. Fathi, J. K. Hodgins, and J. M. Rehg. Social interaction: A first-person perspective. In CVPR, 2012.
[30] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2004.
[31] T. Shiratori, H. S. Park, L. Sigal, Y. Sheikh, and J. K. Hodgins. Motion capture from body-mounted cameras. TOG, 2011.
[32] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 1981.
[33] V. Lepetit, F. Moreno-Noguer, and P. Fua. EPnP: An accurate O(n) solution to the PnP problem. IJCV, 2009.
[34] H. Misslisch, D. Tweed, and T. Vilis. Neural constraints on eye motion in human eye-head saccades. Journal of Neurophysiology, 1998.
[35] E. M. Klier, H. Wang, A. G. Constantin, and J. D. Crawford. Midbrain control of three-dimensional head orientation. Science, 2002.
[36] D. E. Angelaki and B. J. M. Hess. Control of eye orientation: where does the brain's role end and the muscle's begin? European Journal of Neuroscience, 2004.
[37] K. Fukunaga and L. D. Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Trans. on Information Theory, 1975.
Against Edges: Function Approximation with Multiple Support Maps

Trevor Darrell and Alex Pentland
Vision and Modeling Group, The Media Lab
Massachusetts Institute of Technology
E15-388, 20 Ames Street, Cambridge MA, 02139

Abstract

Networks for reconstructing a sparse or noisy function often use an edge field to segment the function into homogeneous regions. This approach assumes that these regions do not overlap or have disjoint parts, which is often false. For example, images which contain regions split by an occluding object can't be properly reconstructed using this type of network. We have developed a network that overcomes these limitations, using support maps to represent the segmentation of a signal. In our approach, the support of each region in the signal is explicitly represented. Results from an initial implementation demonstrate that this method can reconstruct images and motion sequences which contain complicated occlusion.

1 Introduction

The task of efficiently approximating a function is central to the solution of many important problems in perception and cognition. Many vision algorithms, for instance, integrate depth or other scene attributes into a dense map useful for robotic tasks such as grasping and collision avoidance. Similarly, learning and memory are often posed as a problem of generalizing from stored observations to predict future behavior, and are solved by interpolating a surface through the observations in an appropriate abstract space. Many control and planning problems can also be solved by finding an optimal trajectory given certain control points and optimization constraints.

In general, of course, finding solutions to these approximation problems is an ill-posed problem, and no exact answer can be found without the application of some prior knowledge or assumptions. Typically, one assumes the surface to be fit is either locally smooth or has some particular parametric form or basis function description. Many successful systems have been built to solve such problems in the cases where these assumptions are valid. However, in a wide range of interesting cases where there is no single global model or universal smoothness constraint, such systems have difficulty. These cases typically involve the approximation or estimation of a heterogeneous function whose typical local structure is known, but which also includes an unknown number of abrupt changes or discontinuities in shape.

2 Approximation of Heterogeneous Functions

In order to accurately approximate a heterogeneous function with a minimum number of parameters or interpolation units, it is necessary to divide the function into homogeneous chunks which can be approximated parsimoniously. When there is more than one homogeneous chunk in the signal/function, the data must be segmented so that observations of one object do not intermingle with and corrupt the approximation of another region.

One simple approach is to estimate an edge map to denote the boundaries of homogeneous regions in the function, and then to regularize the function within such boundaries. This method was formalized by Geman and Geman (1984), who developed the "line-process" to insert discontinuities in a regularization network. A regularized solution can be efficiently computed by a neural network, either using discrete computational elements or analog circuitry (Poggio et al. 1985; Terzopoulos 1988).
In this context, the line-process can be thought of as an array of switches placed between interpolation nodes (Figure 1a). As the regularization proceeds in this type of network, the switches of the line process open and prevent smoothing across suspected discontinuities. Essentially, these switches are opened when the squared difference between neighboring interpolated values exceeds some threshold (Blake and Zisserman 1987; Geiger and Girosi 1991). In practice a continuation method is used to avoid problems with local minima, and a continuous non-linearity is used in place of a boolean discontinuity. The term "resistive fuse" is often used to describe these connections between interpolation sites (Harris et al. 1990).

Figure 1: (a) Regularization network with line-process. Shaded circles represent data nodes, while open circles represent interpolation nodes. Solid rectangles indicate resistors; slashed rectangles indicate "resistive fuses". (b) Regularization network with explicit support maps; the support process can be implemented by placing resistive fuses between data and interpolation nodes (other constraints on support are described in text).
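To make the switch behavior concrete, here is a minimal 1-D sketch, our illustration rather than code from the paper, of regularization with a hard line process in the spirit of the weak membrane: data fidelity plus a smoothness penalty that is cut wherever the squared neighbor difference exceeds a threshold. (In practice one would anneal the threshold as part of a continuation method.)

```python
import numpy as np

def weak_membrane_1d(d, lam=1.0, tau=0.5, n_iters=500, step=0.1):
    """1-D regularization with a line process.

    Minimizes sum_i (u_i - d_i)^2 + lam * sum_i (1 - l_i) (u_{i+1} - u_i)^2,
    where the switch l_i opens (l_i = 1) when (u_{i+1} - u_i)^2 > tau.
    """
    u = d.copy().astype(float)
    for _ in range(n_iters):
        du = np.diff(u)
        line = (du**2 > tau).astype(float)       # open switches at suspected discontinuities
        grad = 2.0 * (u - d)                     # gradient with the line process held fixed
        smooth = 2.0 * lam * (1.0 - line) * du
        grad[:-1] -= smooth                      # smoothing force from the right neighbor
        grad[1:] += smooth                       # smoothing force from the left neighbor
        u -= step * grad
    return u
```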
3 Limitations of Edge-based Segmentation

An edge-based representation assumes that homogeneous chunks of a function are completely connected, and have no disjoint subregions. For the visual reconstruction task, this implies that the projection of an object onto the image plane will always yield a single connected region. While this may be a reasonable assumption for certain classes of synthetic images, it is not valid for realistic natural images which contain occlusion and/or transparent phenomena. While a human observer can integrate over gaps in a region split by occlusion, the line process will prevent any such smoothing, no matter how close the subregions are in the image plane. When these disjoint regions are small (as when viewing an object through branches or leaves), the interpolated values provided by such a network will not be reliable, since observation noise cannot be averaged over a large number of samples.

Similarly, an edge-based approach cannot account for the perception of motion transparency, since these stimuli have no coherent local neighborhoods. Human observers can easily interpolate 3-D surfaces in transparent random-dot motion displays (Husain et al. 1989). In this type of display, points only last a few frames, and points from different surfaces are transparently intermingled. With a line-process, no smoothing or integration would be possible, since neighboring points in the image belong to different 3-D surfaces. To represent and process images containing this kind of transparent phenomena, we need a framework that does not rely on a global 2D edge map to make segmentation decisions. By generalizing the regularization/surface interpolation paradigm to use support maps rather than a line-process, we can overcome the limitations the discontinuity approach has with respect to transparency.

4 Using Support Maps for Segmentation

Our approach decomposes a heterogeneous function into a set of individual approximations corresponding to the homogeneous regions of the function. Each approximation covers a specific region, and uses a support map to indicate which points belong to that region. Unlike an edge-based representation, the support of an approximation need not be a connected region - in fact, the support can consist of a scattered collection of independent points!

For a single approximation, it is relatively straightforward to compute a support map. Given an approximation, we can find the support it has in the function by thresholding the residual error of that approximation. In terms of analog regularization, the support map (or support "process") can be implemented by placing a resistive fuse between the data and the interpolating units (Figure 1b). A single support map is limited in usefulness, since only one region can be approximated. In fact, it reduces to the "outlier" rejection paradigm of certain robust estimation methods, which are known to have severe theoretical limits on the amount of outlier contamination they can handle (Meer et al. 1991; Li 1985). To represent true heterogeneous stimuli, multiple support maps are needed, with one support map corresponding to each homogeneous (but not necessarily connected) region.

We have developed a method to estimate a set of these support maps, based on finding a minimal length description of the function. We adopt a three-step approach: first, we generate a set of candidate support maps using simple thresholding techniques. Second, we find the subset of these maps which minimally describes the function, using a network optimization to find the smallest set of maps that covers all the observations. Finally, we re-allocate the support in this subset, such that only the approximation with the lowest residual error supports a particular point.

4.1 Estimating Initial Support Fields

Ideally, we would like to consider all possible support patterns of a given dimension as candidate support maps. Unfortunately, the combinatorics of the problem makes this impossible; instead, we attempt to find a manageable number of initial maps which will serve as a useful starting point. A set of candidate approximations can be obtained in many ways. In our work we have initialized their surfaces either using a table of typical values or by fitting small fixed regions of the function. We denote each approximation of a homogeneous region as a tuple $(a_i, s_i, u_i, r_i)$, where $s_i = \{s_{ij}\}$ is a support map, $u_i = \{u_{ij}\}$ is the approximated surface, and $r_i = \{r_{ij}\}$ is the residual error computed by taking the difference of $u_i$ with the observed data. (The scalar $a_i$ is used in deciding which subset of approximations are used in the final representation.) The support fields are set by thresholding the residual field based on our expected (or assumed) observation variance $\theta$:

$$s_{ij} = \begin{cases} 1 & \text{if } (r_{ij})^2 < \theta, \\ 0 & \text{otherwise.} \end{cases}$$
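A minimal sketch of this initialization for piecewise-constant candidate surfaces follows; the candidate value table and the variance $\theta$ are illustrative assumptions of ours.

```python
import numpy as np

def initial_supports(data, candidate_values, theta):
    """Build candidate approximations and their initial support maps.

    data             : (H, W) observed function
    candidate_values : constant surface values u_i (e.g., a table of typical
                       values, or fits to small fixed patches of the function)
    theta            : expected observation variance
    Returns supports s (K, H, W) in {0, 1} and residuals r (K, H, W).
    """
    u = np.asarray(candidate_values, dtype=float)[:, None, None]
    r = data[None] - u                     # residual field of each approximation
    s = (r**2 < theta).astype(float)       # threshold: s_ij = 1 iff r_ij^2 < theta
    return s, r
```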
Initially ai is set to zero for each approximation; to find a minimal covering, this quantity is dynamically updated as a function of the number of points uniquely supported by a particular support map. A point is uniquely supported in a support map if it is supported by that map and no other. Essentially, we find these points by modulating the support values of a particular approximation with shunting inhibition from all other active approximations. To compute Cij, a flag that indicates whether or not point j of map i is uniquely supported, we multiply each support map with the product of the inverse of all other maps whose aj value indicates it is active: = Sij II (1 - Cij SkjO"(ak? k~i where 0"0 is a sigmoid function which converts the real-valued ai into a multiplicative factor in the range (0, 1). The quantity Cij is close to one at uniquely supported points, and close to zero for all other points. If there are a sufficient number of uniquely supported points in an approximation, we increase ai, otherwise it is decreased: d dt ai = Cij - a. (1) L j where a specifies the penalty for adding another approximation region to the representation. This constant determines the smallest number of points we are willing to have constitute a distinct region in the function. The network defined by these equations has a corresponding Lyoponov function: N E = L M ai( - I)O"(Sij) j i II (1 - O"(Skj )O"(ak?) + a) k~i so it will be guaranteed to converge to a local minima if we bound the values of ai (for fixed Sij and a). After convergence, those approximations with positive ai are kept, and the rest are discarded. Empirically we have found the local minima found by our network correspond to perceptually salient segmentations. 4.3 Refining Support Fields Once we have a set of approximations whose support maps minimally cover the function (and presumably correspond to the actual regions of the function), we can refine the support using a more powerful criteria than a local threshold. First, we interpolate the residual error values through unsampled points, so that support can be computed even where there are no observations. Then we update the support maps based on which approximation has the lowest residual error for a given point: Sij if (rij)2 < 1 and (rij)2 -- { 0 otherwise (J = min{klak>o}(rkj)2 Against Edges: Function Approximation with Multiple Support Maps ( Figure 2: (a) Function consisting of constant regions with added noise. (b) Same function sparsely sampled. (c) Support maps found to approximate uniformly sampled function. (d) Support maps found for sparsely sampled function. 5 Results We tested how well our network could reconstruct functions consisting of piecewise constant patches corrupted with random noise of known variance. Figure 2( a) shows the image containing the function the used in this experiment. We initialized 256 candidate approximations, each with a different constant surface. Since the image consisted of piecewise constant regions, the interpolation performed by each approximation was to compute a weighted average of the data over the supported points. Other experiments have used more powerful shape models, such as thin-plate or membrane Markov random fields, as well as piecewise-quadratic polynomials (Darrell et al. 1990). Using a penalty term which prevented approximations with 10 or fewer support points to be considered (0' 10.0), the network found 5 approximations which covered the entire image; their support maps are shown in Figure 2( c). 
The estimated surfaces corresponded closely to the values in the constant patches before noise was added. We ran a the same experiment on a sparsely sampled version of this function, as shown in Figure 2(b) and (d), with similar results and only slightly reduced accuracy in the recovered shape of the support maps. = 393 394 Darrell and Pentland (b) -0 - 0 -'L- 0_0 - (d) Figure 3: (a) First frame from image sequence and (b) recovered regions. (c) First frame from random dot sequence described in text. (d) Recovered parameter values across frames for dots undergoing looming motion; solid line plots T z , dotted line plots T x , and circles plot Ty for each frame. We have also applied our framework to the problem of motion segmentation. For homogeneous data, a simple "direct" method can be used to model image motion (Horn and Weldon 1988). Under this assumption, the image intensities for a region centered at the origin undergoing a translation (Tx, T y , T z ) satisfy at each point dI dI dI dI dI o = dt + Tx dx + Ty dy + Tz (x dx + y dy) where I is the image function. Each approximation computes a motion estimate by selecting a T vector which minimizes the square of the right hand side of this equation over its support map, using a weighted least-squares algorithm. The residual error at each point is then simply this constraint equation evaluated with the particular translation estimate. Figure 3( a) shows the first frame of one sequence, containing a person moving behind a stationary plant. Our network began with 64 candidate approximations, with the initial motion parameters in each distributed uniformly along the parameter axes. Figure 3(b) shows the segmentation provided by our method. Two regions were found to be needed, one for the person and one for the plant. Most of the person has been correctly grouped together despite the occlusion caused by the plant's leaves. Points that have no spatial or temporal variation in the image sequence are not attributed to any approximation, since they are invisible to our motion model. Note that there is a cast shadow moving in synchrony with the person in the scene, .a nd is thus grouped with that approximation. Against Edges: Function Approximation with Multiple Suppon Maps Finally, we ran our system on the finite-lifetime, transparent random dot stimulus described in Section 2. Since our approach recovers a global motion estimate for each region in each frame, we do not need to build explicit pixel-to-pixel correspondences over long sequences. We used two populations of random dots, one undergoing a looming motion and one a rightward shift. After each frame 10% of the dots died off and randomly moved to a new point on the 3-D surface. Ten 128x128 frames were rendered using perspective projection; the first is shown in Figure 3(c) We applied our method independently to each trio of successive frames, and in each case two approximations were found to account for the motion information in the scene. Figure 3(d) shows the parameters recovered for the looming motion. Similar results were found for the translating motion, except that the Tx parameter was nonzero rather than T z ? Since the recovered estimates were consistent, we would be able to decrease the overall uncertainty by averaging the parameter values over successive frames. References Geman, S., and Geman, D. (1984) Stochastic relaxation, Gibbs distribution, and Bayesian restoration of images. Trans. Pattern Anal. Machine Intell. 6:721-741. Poggio, T., Torre, V., and Koch, C. 
Figure 3(a) shows the first frame of one sequence, containing a person moving behind a stationary plant. Our network began with 64 candidate approximations, with the initial motion parameters in each distributed uniformly along the parameter axes. Figure 3(b) shows the segmentation provided by our method. Two regions were found to be needed, one for the person and one for the plant. Most of the person has been correctly grouped together despite the occlusion caused by the plant's leaves. Points that have no spatial or temporal variation in the image sequence are not attributed to any approximation, since they are invisible to our motion model. Note that there is a cast shadow moving in synchrony with the person in the scene, and it is thus grouped with that approximation.

Finally, we ran our system on the finite-lifetime, transparent random dot stimulus described in Section 2. Since our approach recovers a global motion estimate for each region in each frame, we do not need to build explicit pixel-to-pixel correspondences over long sequences. We used two populations of random dots, one undergoing a looming motion and one a rightward shift. After each frame 10% of the dots died off and randomly moved to a new point on the 3-D surface. Ten 128x128 frames were rendered using perspective projection; the first is shown in Figure 3(c). We applied our method independently to each trio of successive frames, and in each case two approximations were found to account for the motion information in the scene. Figure 3(d) shows the parameters recovered for the looming motion. Similar results were found for the translating motion, except that the $T_x$ parameter was nonzero rather than $T_z$. Since the recovered estimates were consistent, we would be able to decrease the overall uncertainty by averaging the parameter values over successive frames.

References

Geman, S., and Geman, D. (1984) Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Machine Intell. 6:721-741.

Poggio, T., Torre, V., and Koch, C. (1985) Computational vision and regularization theory. Nature 317(26).

Terzopoulos, D. (1988) The computation of visible surface representations. IEEE Trans. Pattern Anal. Machine Intell. 10:4.

Geiger, D., and Girosi, F. (1991) Parallel and deterministic algorithms from MRFs: surface reconstruction. IEEE Trans. Pattern Anal. Machine Intell. 13:401-412.

Blake, A., and Zisserman, A. (1987) Visual Reconstruction. MIT Press, Cambridge, MA.

Harris, J., Koch, C., Staats, E., and Luo, J. (1990) Analog hardware for detecting discontinuities in early vision. Intl. J. Computer Vision 4:211-233.

Husain, M., Treue, S., and Andersen, R. A. (1989) Surface interpolation in three-dimensional structure-from-motion perception. Neural Computation 1:324-333.

Meer, P., Mintz, D., and Rosenfeld, A. (1991) Robust regression methods for computer vision: A review. Intl. J. Computer Vision 6:60-70.

Li, G. (1985) Robust regression. In D. C. Hoaglin, F. Mosteller, and J. W. Tukey (Eds.), Exploring Data, Tables, Trends and Shapes. John Wiley & Sons, N.Y.

Darrell, T., Sclaroff, S., and Pentland, A. P. (1990) Segmentation by minimal description. Proc. IEEE 3rd Intl. Conf. Computer Vision, Osaka, Japan.

Horn, B. K. P., and Weldon, E. J. (1988) Direct methods for recovering motion. Intl. J. Computer Vision 2:51-76.