Shallow vs. Deep Sum-Product Networks

Olivier Delalleau, Department of Computer Science and Operations Research, Université de Montréal, [email protected]
Yoshua Bengio, Department of Computer Science and Operations Research, Université de Montréal, [email protected]

Abstract

We investigate the representational power of sum-product networks (computation networks analogous to neural networks, but whose individual units compute either products or weighted sums), through a theoretical analysis that compares deep (multiple hidden layers) vs. shallow (one hidden layer) architectures. We prove there exist families of functions that can be represented much more efficiently with a deep network than with a shallow one, i.e. with substantially fewer hidden units. Such results were not available until now, and they help motivate recent research on learning deep sum-product networks, and more generally motivate research in Deep Learning.

1 Introduction and prior work

Many learning algorithms are based on searching a family of functions so as to identify one member of said family which minimizes a training criterion. The choice of this family of functions and how members of that family are parameterized can be a crucial one. Although there is no universally optimal choice of parameterization or family of functions (or "architecture"), as demonstrated by the no-free-lunch results [37], it may be the case that some architectures are appropriate (or inappropriate) for a large class of learning tasks and data distributions, such as those related to Artificial Intelligence (AI) tasks [4]. Different families of functions have different characteristics that can be appropriate or not depending on the learning task of interest. One of the characteristics that has spurred much interest and research in recent years is the depth of the architecture. In the case of a multi-layer neural network, depth corresponds to the number of (hidden and output) layers. A fixed-kernel Support Vector Machine is considered to have depth 2 [4] and boosted decision trees to have depth 3 [7].

Here we use the word circuit or network to talk about a directed acyclic graph, where each node is associated with some output value which can be computed based on the values associated with its predecessor nodes. The arguments of the learned function are set at the input nodes of the circuit (which have no predecessor) and the outputs of the function are read off the output nodes of the circuit. Different families of functions correspond to different circuits and allowed choices of computations in each node. Learning can be performed by changing the computation associated with a node, or by rewiring the circuit (possibly changing the number of nodes). The depth of the circuit is the length of the longest path in the graph from an input node to an output node.

Deep Learning algorithms [3] are tailored to learning circuits with variable depth, typically greater than depth 2. They are based on the idea of multiple levels of representation, with the intuition that the raw input can be represented at different levels of abstraction, with more abstract features of the input or more abstract explanatory factors represented by deeper circuits. These algorithms are often based on unsupervised learning, opening the door to semi-supervised learning and efficient use of large quantities of unlabeled data [3].
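To make the notion concrete, depth as defined here can be computed directly from the graph. The following minimal Python sketch (an illustration with hypothetical node names, not from the paper) measures the depth of a circuit encoded as a DAG, i.e. the number of edges on the longest input-to-output path:

```python
from functools import lru_cache

def circuit_depth(edges, inputs):
    # edges[u] lists the successors of node u; output nodes have none.
    @lru_cache(maxsize=None)
    def longest_from(u):
        succ = edges.get(u, [])
        if not succ:
            return 0
        return 1 + max(longest_from(v) for v in succ)
    return max(longest_from(u) for u in inputs)

# A depth-2 circuit: two inputs feeding one hidden unit feeding one output.
edges = {"x1": ["h"], "x2": ["h"], "h": ["y"]}
print(circuit_depth(edges, ["x1", "x2"]))  # -> 2
```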
Analogies with the structure of the cerebral cortex (in particular the visual cortex) [31] and similarities between features learned with some Deep Learning algorithms and those hypothesized in the visual cortex [17] further motivate investigations into deep architectures. It has been suggested that deep architectures are more powerful in the sense of being able to more efficiently represent highly-varying functions [4, 3]. In this paper, we measure "efficiency" in terms of the number of computational units in the network. An efficient representation is important mainly because: (i) it uses less memory and is faster to compute, and (ii) given a fixed amount of training samples and computational power, better generalization is expected.

The first successful algorithms for training deep architectures appeared in 2006, with efficient training procedures for Deep Belief Networks [14] and deep auto-encoders [13, 27, 6], both exploiting the general idea of greedy layer-wise pre-training [6]. Since then, these ideas have been investigated further and applied in many settings, demonstrating state-of-the-art learning performance in object recognition [16, 28, 18, 15] and segmentation [20], audio classification [19, 10], natural language processing [9, 36, 21, 32], collaborative filtering [30], modeling textures [24], modeling motion [34, 33], information retrieval [29, 26], and semi-supervised learning [36, 22].

Poon and Domingos [25] introduced deep sum-product networks as a method to compute partition functions of tractable graphical models. These networks are analogous to traditional artificial neural networks but with nodes that compute either products or weighted sums of their inputs. Analogously to neural networks, we define "hidden" nodes as those nodes that are neither input nodes nor output nodes. If the nodes are organized in layers, we define the "hidden" layers to be those that are neither the input layer nor the output layer. Poon and Domingos [25] report experiments with networks much deeper (30+ hidden layers) than those typically used until now, e.g. in Deep Belief Networks [14, 3], where the number of hidden layers is usually on the order of three to five.

Whether such deep architectures have theoretical advantages compared to so-called "shallow" architectures (i.e. those with a single hidden layer) remains an open question. After all, in the case of a sum-product network, the output value can always be written as a sum of products of input variables (possibly raised to some power by allowing multiple connections from the same input), and consequently it is easily rewritten as a shallow network with a sum output unit and product hidden units. The argument supported by our theoretical analysis is that a deep architecture is able to compute some functions much more efficiently than a shallow one.

Until recently, very few theoretical results supported the idea that deep architectures could present an advantage in terms of representing some functions more efficiently. Most related results originate from the analysis of boolean circuits (see e.g. [2] for a review). Well-known results include the proof that solving the n-bit parity task with a depth-2 circuit requires an exponential number of gates [1, 38], and more generally that there exist functions computable with a polynomial-size depth-k circuit that would require exponential size when restricted to depth $k - 1$ [11].
Another recent result on boolean circuits, by Braverman [8], offers a proof of a longstanding conjecture, showing that bounded-depth boolean circuits are unable to distinguish some (non-uniform) input distributions from the uniform distribution (i.e. they are "fooled" by such input distributions). In particular, Braverman's result suggests that shallow circuits can in general be fooled more easily than deep ones, i.e., that they would have more difficulty efficiently representing high-order dependencies (those involving many input variables).

It is not obvious that circuit complexity results (which typically consider only boolean or at least discrete nodes) are directly applicable in the context of typical machine learning algorithms such as neural networks (which compute continuous representations of their input). Orponen [23] surveys theoretical results in computational complexity that are relevant to learning algorithms. For instance, Håstad and Goldmann [12] extended some results to the case of networks of linear threshold units with positivity constraints on the weights. Bengio et al. [5, 7] investigate, respectively, complexity issues in networks of Gaussian radial basis functions and decision trees, showing intrinsic limitations of these architectures, e.g. on tasks similar to the parity problem. Utgoff and Stracuzzi [35] informally discuss the advantages of depth in boolean circuits in the context of learning architectures. Bengio [3] suggests that some polynomials could be represented more efficiently by deep sum-product networks, but without providing any formal statement or proofs. This work partly addresses this void by demonstrating families of circuits for which a deep architecture can be exponentially more efficient than a shallow one, in the context of real-valued polynomials.

Note that we do not address in this paper the problem of learning these parameters: even if an efficient deep representation exists for the function we seek to approximate, in general there is no guarantee that standard optimization algorithms will easily converge to this representation. This paper focuses on the representational power of deep sum-product circuits compared to shallow ones, and studies it by considering particular families of target functions (to be represented by the learner). We first formally define sum-product networks. We then consider two families of functions represented by deep sum-product networks (families F and G). For each family, we establish a lower bound on the minimal number of hidden units a depth-2 sum-product network would require to represent a function of this family, showing it is much less efficient than the deep representation.

2 Sum-product networks

Definition 1. A sum-product network is a network composed of units that either compute the product of their inputs or a weighted sum of their inputs (where weights are strictly positive).

Here, we restrict our definition of the generic term "sum-product network" to networks whose summation units have positive incoming weights¹, while others are called "negative-weight" networks.

¹This condition is required by some of the proofs presented here.

Definition 2. A "negative-weight" sum-product network may contain summation units whose weights are non-positive (i.e. less than or equal to zero).

Finally, we formally define what we mean by deep vs. shallow networks in the rest of the paper.

Definition 3. A "shallow" sum-product network contains a single hidden layer (i.e. a total of three layers when counting the input and output layers, and a depth equal to two).

Definition 4. A "deep"
sum-product network contains more than one hidden layer (i.e. a total of at least four layers, and a depth of at least three).

3 The family F

3.1 Definition

The first family of functions we study, denoted by $F$, is made of functions built from deep sum-product networks that alternate layers of product and sum units with two inputs each (details are provided below). The basic idea we use here is that composing layers (i.e. using a deep architecture) is equivalent to using a factorized representation of the polynomial function computed by the network. Such a factorized representation can be exponentially more compact than its expansion as a sum of products (which can be associated to a shallow network with product units in its hidden layer and a sum unit as output). This is what we formally show in what follows.

Figure 1: Sum-product network computing the function $f \in F$ such that $i = \lambda_{11} = \mu_{11} = 1$: a product layer computes $\ell_1^1 = x_1 x_2$ and $\ell_2^1 = x_3 x_4$, and a sum output unit computes $\ell_1^2 = \mu_{11} \ell_1^1 + \lambda_{11} \ell_2^1 = x_1 x_2 + x_3 x_4 = f(x_1, x_2, x_3, x_4)$.

Let $n = 4^i$, with $i$ a positive integer value. Denote by $\ell^0$ the input layer containing scalar variables $\{x_1, \ldots, x_n\}$, such that $\ell_j^0 = x_j$ for $1 \le j \le n$. Now define $f \in F$ as any function computed by a sum-product network (deep for $i \ge 2$) composed of alternating product and sum layers:

$$\ell_j^{2k+1} = \ell_{2j-1}^{2k} \times \ell_{2j}^{2k} \quad \text{for } 0 \le k \le i-1 \text{ and } 1 \le j \le 2^{2(i-k)-1}$$
$$\ell_j^{2k} = \mu_{jk}\,\ell_{2j-1}^{2k-1} + \lambda_{jk}\,\ell_{2j}^{2k-1} \quad \text{for } 1 \le k \le i \text{ and } 1 \le j \le 2^{2(i-k)}$$

where the weights $\mu_{jk}$ and $\lambda_{jk}$ of the summation units are strictly positive. The output of the network is given by $f(x_1, \ldots, x_n) = \ell_1^{2i} \in \mathbb{R}$, the unique unit in the last layer. The corresponding (shallow) network for $i = 1$, with additive weights set to one, is shown in Figure 1 (this architecture is also the basic building block of bigger networks for $i > 1$). Note that both the input size $n = 4^i$ and the network's depth $2i$ increase with parameter $i$.
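As a concrete illustration of this construction, here is a minimal Python sketch (ours, not the authors') that evaluates a network in $F$ with all summation weights set to one, alternating product and weighted-sum layers over consecutive pairs:

```python
def family_f(x):
    # Evaluate a network in F with all summation weights mu = lam = 1.
    # x has length n = 4**i; product layers and weighted-sum layers of
    # consecutive pairs alternate until a single output unit remains.
    layer, k = list(x), 0
    while len(layer) > 1:
        pairs = list(zip(layer[::2], layer[1::2]))
        if k % 2 == 0:                       # odd-indexed layer: products
            layer = [a * b for a, b in pairs]
        else:                                # even-indexed layer: weighted sums
            layer = [1.0 * a + 1.0 * b for a, b in pairs]
        k += 1
    return layer[0]

print(family_f([1.0, 2.0, 3.0, 4.0]))  # x1*x2 + x3*x4 = 14.0 (Figure 1)
```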
3.2 Theoretical results

The main result of this section is presented below in Corollary 1, providing a lower bound on the minimum number of hidden units required by a shallow sum-product network to represent a function $f \in F$. The high-level proof sketch consists of the following steps:

(1) Count the number of unique products found in the polynomial representation of $f$ (Lemma 1 and Proposition 1).
(2) Show that the only possible architecture for a shallow sum-product network to compute $f$ is to have a hidden layer made of product units, with a sum unit as output (Lemmas 2 to 5).
(3) Conclude that the number of hidden units must be at least the number of unique products counted in step (1) (Lemma 6 and Corollary 1).

Lemma 1. Any element $\ell_j^k$ can be written as a (positively) weighted sum of products of input variables, such that each input variable $x_t$ is used in exactly one unit of $\ell^k$. Moreover, the number $m_k$ of products found in the sum computed by $\ell_j^k$ does not depend on $j$ and obeys the following recurrence rule for $k \ge 0$: if $k+1$ is odd, then $m_{k+1} = m_k^2$, otherwise $m_{k+1} = 2 m_k$.

Proof. We prove the lemma by induction on $k$. It is obviously true for $k = 0$ since $\ell_j^0 = x_j$. Assuming this is true for some $k \ge 0$, we consider two cases:

- If $k+1$ is odd, then $\ell_j^{k+1} = \ell_{2j-1}^{k} \times \ell_{2j}^{k}$. By the inductive hypothesis, it is the product of two (positively) weighted sums of products of input variables, and no input variable can appear in both $\ell_{2j-1}^{k}$ and $\ell_{2j}^{k}$, so the result is also a (positively) weighted sum of products of input variables. Additionally, if the number of products in $\ell_{2j-1}^{k}$ and $\ell_{2j}^{k}$ is $m_k$, then $m_{k+1} = m_k^2$, since all products involved in the multiplication of the two units are different (they use disjoint subsets of input variables), and the sums have positive weights. Finally, by the induction assumption, an input variable appears in exactly one unit of $\ell^k$. This unit is an input to a single unit of $\ell^{k+1}$, which will thus be the only unit of $\ell^{k+1}$ where this input variable appears.

- If $k+1$ is even, then $\ell_j^{k+1} = \mu_{jk}\,\ell_{2j-1}^{k} + \lambda_{jk}\,\ell_{2j}^{k}$. Again, from the induction assumption, it must be a (positively) weighted sum of products of input variables, but with $m_{k+1} = 2 m_k$ such products. As in the previous case, an input variable will appear in the single unit of $\ell^{k+1}$ that has as input the single unit of $\ell^k$ in which this variable must appear.

Proposition 1. The number of products in the sum computed in the output unit $\ell_1^{2i}$ of a network computing a function in $F$ is $m_{2i} = 2^{\sqrt{n}-1}$.

Proof. We first prove by induction on $k \ge 1$ that for odd $k$, $m_k = 2^{2^{(k+1)/2} - 2}$, and for even $k$, $m_k = 2^{2^{k/2} - 1}$. This is obviously true for $k = 1$ since $2^{2^1 - 2} = 2^0 = 1$, and all units in $\ell^1$ are single products of the form $x_r x_s$. Assuming this is true for some $k \ge 1$, then:

- if $k+1$ is odd (so that $k$ is even), then from Lemma 1 and the induction assumption, we have $m_{k+1} = m_k^2 = \left(2^{2^{k/2}-1}\right)^2 = 2^{2^{k/2+1}-2} = 2^{2^{((k+1)+1)/2}-2}$;
- if $k+1$ is even (so that $k$ is odd), then instead we have $m_{k+1} = 2 m_k = 2 \cdot 2^{2^{(k+1)/2}-2} = 2^{2^{(k+1)/2}-1}$;

which shows the desired result for $k+1$, and thus concludes the induction proof. Applying this result with $k = 2i$ (which is even) yields

$$m_{2i} = 2^{2^{(2i)/2}-1} = 2^{\sqrt{2^{2i}}-1} = 2^{\sqrt{n}-1}.$$
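The recurrence of Lemma 1 and the closed form of Proposition 1 are easy to check numerically; the short sketch below (an illustration under the same conventions, not part of the paper) alternates $m_{k+1} = m_k^2$ and $m_{k+1} = 2 m_k$ and verifies $m_{2i} = 2^{\sqrt{n}-1}$:

```python
def m(k):
    # m_0 = 1; squaring on odd layers (products), doubling on even (sums).
    val = 1
    for j in range(1, k + 1):
        val = val * val if j % 2 == 1 else 2 * val
    return val

i = 3                    # network depth 2i, with n = 4**i inputs
n = 4 ** i
assert m(2 * i) == 2 ** (int(n ** 0.5) - 1)  # m_{2i} = 2**(sqrt(n) - 1)
print(m(2 * i))          # 128 for i = 3 (n = 64, sqrt(n) = 8)
```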
Lemma 2. The products computed in the output unit $\ell_1^{2i}$ can be split into two groups, one with products containing only variables $x_1, \ldots, x_{n/2}$ and one containing only variables $x_{n/2+1}, \ldots, x_n$.

Proof. This is obvious since the last unit is a "sum" unit that adds two terms whose inputs are these two groups of variables (see e.g. Fig. 1).

Lemma 3. The products computed in the output unit $\ell_1^{2i}$ involve more than one input variable.

Proof. It is straightforward to show by induction on $k \ge 1$ that the products computed by $\ell_j^k$ all involve more than one input variable, thus it is true in particular for the output layer ($k = 2i$).

Lemma 4. Any shallow sum-product network computing $f \in F$ must have a "sum" unit as output.

Proof. By contradiction, suppose the output unit of such a shallow sum-product network is multiplicative. This unit must have more than one input, because if it had only one input, the output would be either a (weighted) sum of input variables (which would violate Lemma 3), or a single product of input variables (which would violate Proposition 1), depending on the type (sum or product) of the single input hidden unit. Thus the last unit must compute a product of two or more hidden units. It can be re-written as a product of two factors, where each factor corresponds to either one hidden unit, or a product of multiple hidden units (it does not matter here which specific factorization is chosen among all possible ones). Regardless of the type (sum or product) of the hidden units involved, those two factors can thus be written as weighted sums of products of variables $x_t$ (with positive weights, and input variables potentially raised to powers above one). From Lemma 1, both $x_1$ and $x_n$ must be present in the final output, and thus they must appear in at least one of these two factors. Without loss of generality, assume $x_1$ appears in the first factor. Variables $x_{n/2+1}, \ldots, x_n$ then cannot be present in the second factor, since otherwise one product in the output would contain both $x_1$ and one of these variables (this product cannot cancel out since weights must be positive), violating Lemma 2. But by a similar reasoning, since as a result $x_n$ must appear in the first factor, variables $x_1, \ldots, x_{n/2}$ cannot be present in the second factor either. Consequently, no input variable can be present in the second factor, leading to the desired contradiction.

Lemma 5. Any shallow sum-product network computing $f \in F$ must have only multiplicative units in its hidden layer.

Proof. By contradiction, suppose there exists a "sum" unit in the hidden layer, written $s = \sum_{t \in S} \lambda_t x_t$ with $S$ the set of input indices appearing in this sum, and $\lambda_t > 0$ for all $t \in S$. Since according to Lemma 4 the output unit must also be a sum (and have positive weights according to Definition 1), the final output will also contain terms of the form $\gamma_t x_t$ for $t \in S$, with $\gamma_t > 0$. This violates Lemma 3, establishing the contradiction.

Lemma 6. Any shallow negative-weight sum-product network (see Definition 2) computing $f \in F$ must have at least $2^{\sqrt{n}-1}$ hidden units, if its output unit is a sum and its hidden units are products.

Proof. Such a network computes a weighted sum of its hidden units, where each hidden unit is a product of input variables, i.e. its output can be written as $\sum_j w_j \prod_t x_t^{\gamma_{jt}}$ with $w_j \in \mathbb{R}$ and $\gamma_{jt} \in \{0, 1\}$. In order to compute a function in $F$, this shallow network thus needs a number of hidden units at least equal to the number of unique products in that function. From Proposition 1, this number is equal to $2^{\sqrt{n}-1}$.

Corollary 1. Any shallow sum-product network computing $f \in F$ must have at least $2^{\sqrt{n}-1}$ hidden units.

Proof. This is a direct corollary of Lemmas 4 (showing the output unit is a sum), 5 (showing that hidden units are products), and 6 (showing the desired result for any shallow network with this specific structure, regardless of the sign of the weights).

3.3 Discussion

Corollary 1 above shows that in order to compute some function in $F$ with $n$ inputs, the number of units in a shallow network has to be at least $2^{\sqrt{n}-1}$ (i.e. it grows exponentially in $\sqrt{n}$). On the other hand, the total number of units in the deep network (for $i > 1$) computing the same function, as described in Section 3.1, is equal to $1 + 2 + 4 + \cdots + 2^{2i-1}$ (since all units have two inputs), which is also equal to $2^{2i} - 1 = n - 1$ (i.e. it grows only linearly in $n$). This shows that some deep sum-product network with $n$ inputs and depth $O(\log n)$ can represent with $O(n)$ units what would require $O(2^{\sqrt{n}})$ units for a depth-2 network. Lemma 6 also shows a similar result regardless of the sign of the weights in the summation units of the depth-2 network, but assumes a specific architecture for this network (products in the hidden layer with a sum as output).

4 The family G

In this section we present similar results with a different family of functions, denoted by $G$. Compared to $F$, one important difference of deep sum-product networks built to define functions in $G$ is that they can vary their input size independently of their depth. Their analysis thus provides additional insight when comparing the representational efficiency of deep vs. shallow sum-product networks in the case of a fixed dataset.
4.1 Definition

Networks in family $G$ also alternate sum and product layers, but their units have as inputs all units from the previous layer except one. More formally, define the family $G = \bigcup_{n \ge 2,\, i \ge 0} G_i^n$ of functions represented by sum-product networks, where the sub-family $G_i^n$ is made of all sum-product networks with $n$ input variables and $2i + 2$ layers (including the input layer $\ell^0$), such that:

1. $\ell^1$ contains summation units; further layers alternate multiplicative and summation units.
2. Summation units have positive weights.
3. All layers are of size $n$, except the last layer $\ell^{2i+1}$, which contains a single sum unit summing all units in the previous layer $\ell^{2i}$.
4. In each layer $\ell^k$ for $1 \le k \le 2i$, each unit $\ell_j^k$ takes as inputs $\{\ell_m^{k-1} \mid m \ne j\}$.

An example of a network belonging to $G_1^3$ (i.e. with $i = 1$ and three input variables) is shown in Figure 2.

Figure 2: Sum-product network computing a function of $G_1^3$ (summation units' weights are all 1's). The first layer computes $\ell_1^1 = x_2 + x_3$, $\ell_2^1 = x_1 + x_3$ and $\ell_3^1 = x_1 + x_2$; the product layer computes e.g. $\ell_1^2 = \ell_2^1 \ell_3^1 = x_1^2 + x_1 x_2 + x_1 x_3 + x_2 x_3$; and the output unit computes $\ell_1^3 = x_1^2 + x_2^2 + x_3^2 + 3(x_1 x_2 + x_1 x_3 + x_2 x_3) = g(x_1, x_2, x_3)$.
4.2 Theoretical results

The main result is stated in Proposition 3 below, establishing a lower bound on the number of hidden units of a shallow sum-product network computing $g \in G$. The proof sketch is as follows:

1. We show that the polynomial expansion of $g$ must contain a large set of products (Proposition 2 and Corollary 2).
2. We use both the number of products in that set as well as their degree to establish the desired lower bound (Proposition 3).

We will also need the following lemma, which states that when $n - 1$ items each belong to at least $n - 1$ sets among a total of $n$ sets, then we can associate to each item one of the sets it belongs to without using the same set for different items.

Lemma 7. Let $S_1, \ldots, S_n$ be $n$ sets ($n \ge 2$) containing elements of $\{P_1, \ldots, P_{n-1}\}$, such that for any $q$, $|\{r \mid P_q \in S_r\}| \ge n - 1$ (i.e. each element $P_q$ belongs to at least $n - 1$ sets). Then there exist $n - 1$ distinct indices $r_1, \ldots, r_{n-1}$ such that $P_q \in S_{r_q}$ for $1 \le q \le n - 1$.

Proof. Omitted due to lack of space (very easy to prove by construction).

Proposition 2. For any $0 \le j \le i$, and any product of variables $P = \prod_{t=1}^n x_t^{\alpha_t}$ such that $\alpha_t \in \mathbb{N}$ and $\sum_t \alpha_t = (n-1)^j$, there exists a unit in $\ell^{2j}$ whose computed value, when expanded as a weighted sum of products, contains $P$ among these products.

Proof. We prove this proposition by induction on $j$. First, for $j = 0$, this is obvious since any $P$ of this form must be made of a single input variable $x_t$, which appears in $\ell_t^0 = x_t$.

Suppose now the proposition is true for some $j < i$. Consider a product $P = \prod_{t=1}^n x_t^{\alpha_t}$ such that $\alpha_t \in \mathbb{N}$ and $\sum_t \alpha_t = (n-1)^{j+1}$. $P$ can be factored into $n - 1$ sub-products of degree $(n-1)^j$, i.e. written $P = P_1 \cdots P_{n-1}$ with $P_q = \prod_{t=1}^n x_t^{\alpha_{qt}}$, $\alpha_{qt} \in \mathbb{N}$ and $\sum_t \alpha_{qt} = (n-1)^j$ for all $q$. By the induction hypothesis, each $P_q$ can be found in at least one unit $\ell_{k_q}^{2j}$. As a result, by property 4 (in the definition of family $G$), each $P_q$ will also appear in the additive layer $\ell^{2j+1}$, in at least $n - 1$ different units (the only sum unit that may not contain $P_q$ is the one that does not have $\ell_{k_q}^{2j}$ as input). By Lemma 7, we can thus find a set of units $\ell_{r_q}^{2j+1}$ such that for any $1 \le q \le n - 1$, the product $P_q$ appears in $\ell_{r_q}^{2j+1}$, with the indices $r_q$ being different from each other. Let $1 \le s \le n$ be such that $s \ne r_q$ for all $q$. Then, from property 4 of family $G$, the multiplicative unit $\ell_s^{2(j+1)}$ computes the product $\prod_{q=1}^{n-1} \ell_{r_q}^{2j+1}$, and as a result, when expanded as a sum of products, it contains in particular $P_1 \cdots P_{n-1} = P$. The proposition is thus true for $j + 1$, and by induction, is true for all $j \le i$.

Corollary 2. The output $g_i^n$ of a sum-product network in $G_i^n$, when expanded as a sum of products, contains all products of variables of the form $\prod_{t=1}^n x_t^{\alpha_t}$ such that $\alpha_t \in \mathbb{N}$ and $\sum_t \alpha_t = (n-1)^i$.

Proof. Applying Proposition 2 with $j = i$, we obtain that all products of this form can be found in the multiplicative units of $\ell^{2i}$. Since the output unit $\ell_1^{2i+1}$ computes a sum of these multiplicative units (weighted with positive weights), those products are also present in the output.

Proposition 3. A shallow negative-weight sum-product network computing $g_i^n \in G_i^n$ must have at least $(n-1)^i$ hidden units.

Proof. First suppose the output unit of the shallow network is a sum. Then it may be able to compute $g_i^n$, assuming we allow multiplicative units in the hidden layer to use powers of their inputs in the product they compute (which we allow here for the proof to be more generic). However, it will require at least as many of these units as the number of unique products that can be found in the expansion of $g_i^n$. In particular, from Corollary 2, it will require at least the number of unique tuples of the form $(\alpha_1, \ldots, \alpha_n)$ such that $\alpha_t \in \mathbb{N}$ and $\sum_{t=1}^n \alpha_t = (n-1)^i$. Denoting $d_{ni} = (n-1)^i$, this number is known to be equal to $\binom{n + d_{ni} - 1}{d_{ni}}$, and it is easy to verify that it is higher than (or equal to) $d_{ni}$ for any $n \ge 2$ and $i \ge 0$.

Now suppose the output unit is multiplicative. Then there can be no multiplicative hidden unit, otherwise it would mean one could factor some input variable $x_t$ in the computed function output: this is not possible since by Corollary 2, for any variable $x_t$ there exist products in the output function that do not involve $x_t$. So all hidden units must be additive, and since the computed function contains products of degree $d_{ni}$, there must be at least $d_{ni}$ such hidden units.

4.3 Discussion

Proposition 3 shows that in order to compute the same function as $g_i^n \in G_i^n$, the number of units in the shallow network has to grow exponentially in $i$, i.e. in the network's depth (while the deep network's size grows linearly in $i$). The shallow network also needs to grow polynomially in the number of input variables $n$ (with a degree equal to $i$), while the deep network grows only linearly in $n$. This means that some deep sum-product network with $n$ inputs and depth $O(i)$ can represent with $O(ni)$ units what would require $O((n-1)^i)$ units for a depth-2 network. Note that in the similar results found for family $F$, the depth-2 network computing the same function as a function in $F$ had to be constrained to either have a specific combination of a sum output and product hidden units (in Lemma 6) or to have non-negative weights (in Corollary 1). On the contrary, the result presented here for family $G$ holds without requiring either of these assumptions.
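The combinatorial count at the heart of Proposition 3 is the classical "stars and bars" number; the sketch below (illustrative, with hypothetical helper names) computes it and contrasts it with the rough size of the corresponding deep network:

```python
from math import comb

def shallow_lower_bound(n, i):
    # Number of tuples (a_1, ..., a_n) of naturals summing to d = (n-1)**i,
    # i.e. comb(n + d - 1, d) by stars and bars: a lower bound on the
    # hidden units of a shallow network computing g in G_i^n.
    d = (n - 1) ** i
    return comb(n + d - 1, d)

n, i = 5, 3
print(shallow_lower_bound(n, i))  # 814385
print((n - 1) ** i)               # 64: the weaker bound d_ni
print(n * (2 * i + 1) + 1)        # 36: units in the deep network, inputs included
```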
5 Conclusion

We compared deep sum-product networks and shallow sum-product networks representing the same functions, taken from two families of functions $F$ and $G$. For both families, we have shown that the number of units in the shallow network has to grow exponentially, compared to a linear growth in the deep network, in order to represent the same functions. The deep version thus offers a much more compact representation of the same functions.

This work focuses on two specific families of functions: finding more general parameterizations of functions leading to similar results would be an interesting topic for future research. Another open question is whether it is possible to represent such functions only approximately (e.g. up to an error bound $\epsilon$) with a much smaller shallow network. Results by Braverman [8] on boolean circuits suggest that results similar to those presented in this paper may still hold, but this topic has yet to be formally investigated in the context of sum-product networks. A related problem is also to look into functions defined only on discrete input variables: our proofs do not trivially extend to this situation because we cannot assume anymore that two polynomials yielding the same output values must have the same expansion coefficients (since the number of input combinations becomes finite).

Acknowledgments

The authors would like to thank Razvan Pascanu and David Warde-Farley for their help in improving this manuscript, as well as the anonymous reviewers for their careful reviews. This work was partially funded by NSERC, CIFAR, and the Canada Research Chairs.

References

[1] Ajtai, M. (1983). $\Sigma_1^1$-formulae on finite structures. Annals of Pure and Applied Logic, 24(1), 1–48.
[2] Allender, E. (1996). Circuit complexity before the dawn of the new millennium. In 16th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pages 1–18. Lecture Notes in Computer Science 1180, Springer Verlag.
[3] Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127. Also published as a book, Now Publishers, 2009.
[4] Bengio, Y. and LeCun, Y. (2007). Scaling learning algorithms towards AI. In L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors, Large Scale Kernel Machines. MIT Press.
[5] Bengio, Y., Delalleau, O., and Le Roux, N. (2006). The curse of highly variable functions for local kernel machines. In NIPS'05, pages 107–114. MIT Press, Cambridge, MA.
[6] Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. In NIPS 19, pages 153–160. MIT Press.
[7] Bengio, Y., Delalleau, O., and Simard, C. (2010). Decision trees do not generalize to new variations. Computational Intelligence, 26(4), 449–467.
[8] Braverman, M. (2011). Poly-logarithmic independence fools bounded-depth boolean circuits. Communications of the ACM, 54(4), 108–115.
[9] Collobert, R. and Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML 2008, pages 160–167.
[10] Dahl, G. E., Ranzato, M., Mohamed, A., and Hinton, G. E. (2010). Phone recognition with the mean-covariance restricted Boltzmann machine. In Advances in Neural Information Processing Systems (NIPS).
[11] Håstad, J. (1986). Almost optimal lower bounds for small depth circuits. In Proceedings of the 18th Annual ACM Symposium on Theory of Computing, pages 6–20, Berkeley, California. ACM Press.
[12] Håstad, J. and Goldmann, M. (1991). On the power of small-depth threshold circuits. Computational Complexity, 1, 113–129.
[13] Hinton, G. E. and Salakhutdinov, R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507.
[14] Hinton, G. E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.
[15] Kavukcuoglu, K., Sermanet, P., Boureau, Y.-L., Gregor, K., Mathieu, M., and LeCun, Y.
(2010). Learning convolutional feature hierarchies for visual recognition. In NIPS'10.
[16] Larochelle, H., Erhan, D., Courville, A., Bergstra, J., and Bengio, Y. (2007). An empirical evaluation of deep architectures on problems with many factors of variation. In ICML'07, pages 473–480. ACM.
[17] Lee, H., Ekanadham, C., and Ng, A. (2008). Sparse deep belief net model for visual area V2. In NIPS'07, pages 873–880. MIT Press, Cambridge, MA.
[18] Lee, H., Grosse, R., Ranganath, R., and Ng, A. Y. (2009a). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML 2009. Montreal (Qc), Canada.
[19] Lee, H., Pham, P., Largman, Y., and Ng, A. (2009b). Unsupervised feature learning for audio classification using convolutional deep belief networks. In NIPS'09, pages 1096–1104.
[20] Levner, I. (2008). Data Driven Object Segmentation. Ph.D. thesis, Department of Computer Science, University of Alberta.
[21] Mnih, A. and Hinton, G. E. (2009). A scalable hierarchical distributed language model. In NIPS'08, pages 1081–1088.
[22] Mobahi, H., Collobert, R., and Weston, J. (2009). Deep learning from temporal coherence in video. In ICML 2009, pages 737–744.
[23] Orponen, P. (1994). Computational complexity of neural networks: a survey. Nordic Journal of Computing, 1(1), 94–110.
[24] Osindero, S. and Hinton, G. E. (2008). Modeling image patches with a directed hierarchy of Markov random fields. In NIPS'07, pages 1121–1128, Cambridge, MA. MIT Press.
[25] Poon, H. and Domingos, P. (2011). Sum-product networks: A new deep architecture. In UAI 2011, Barcelona, Spain.
[26] Ranzato, M. and Szummer, M. (2008). Semi-supervised learning of compact document representations with deep networks. In ICML.
[27] Ranzato, M., Poultney, C., Chopra, S., and LeCun, Y. (2007). Efficient learning of sparse representations with an energy-based model. In NIPS'06, pages 1137–1144. MIT Press.
[28] Ranzato, M., Boureau, Y.-L., and LeCun, Y. (2008). Sparse feature learning for deep belief networks. In NIPS'07, pages 1185–1192, Cambridge, MA. MIT Press.
[29] Salakhutdinov, R. and Hinton, G. E. (2007). Semantic hashing. In Proceedings of the 2007 Workshop on Information Retrieval and Applications of Graphical Models (SIGIR 2007), Amsterdam. Elsevier.
[30] Salakhutdinov, R., Mnih, A., and Hinton, G. E. (2007). Restricted Boltzmann machines for collaborative filtering. In ICML 2007, pages 791–798, New York, NY, USA.
[31] Serre, T., Kreiman, G., Kouh, M., Cadieu, C., Knoblich, U., and Poggio, T. (2007). A quantitative theory of immediate visual recognition. Progress in Brain Research, Computational Neuroscience: Theoretical Insights into Brain Function, 165, 33–56.
[32] Socher, R., Lin, C., Ng, A. Y., and Manning, C. (2011). Learning continuous phrase representations and syntactic parsing with recursive neural networks. In ICML 2011.
[33] Taylor, G. and Hinton, G. (2009). Factored conditional restricted Boltzmann machines for modeling motion style. In ICML 2009, pages 1025–1032.
[34] Taylor, G., Hinton, G. E., and Roweis, S. (2007). Modeling human motion using binary latent variables. In NIPS'06, pages 1345–1352. MIT Press, Cambridge, MA.
[35] Utgoff, P. E. and Stracuzzi, D. J. (2002). Many-layered learning. Neural Computation, 14, 2497–2539.
[36] Weston, J., Ratle, F., and Collobert, R. (2008). Deep learning via semi-supervised embedding. In ICML 2008, pages 1168–1175, New York, NY, USA.
[37] Wolpert, D. H. (1996). The lack of a priori distinction between learning algorithms.
Neural Computation, 8(7), 1341–1390.
[38] Yao, A. (1985). Separating the polynomial-time hierarchy by oracles. In Proceedings of the 26th Annual IEEE Symposium on Foundations of Computer Science, pages 1–10.
On Tracking The Partition Function

Guillaume Desjardins, Aaron Courville, Yoshua Bengio
{desjagui,courvila,bengioy}@iro.umontreal.ca
Département d'informatique et de recherche opérationnelle
Université de Montréal

Abstract

Markov Random Fields (MRFs) have proven very powerful both as density estimators and feature extractors for classification. However, their use is often limited by an inability to estimate the partition function $Z$. In this paper, we exploit the gradient descent training procedure of restricted Boltzmann machines (a type of MRF) to track the log partition function during learning. Our method relies on two distinct sources of information: (1) estimating the change $\Delta Z$ incurred by each gradient update, (2) estimating the difference in $Z$ over a small set of tempered distributions using bridge sampling. The two sources of information are then combined using an inference procedure similar to Kalman filtering. Learning MRFs through Tempered Stochastic Maximum Likelihood, we can estimate $Z$ using no more temperatures than are required for learning. Comparing to both exact values and estimates using annealed importance sampling (AIS), we show on several datasets that our method is able to accurately track the log partition function. In contrast to AIS, our method provides this estimate at each time-step, at a computational cost similar to that required for training alone.

1 Introduction

In many areas of application, problems are naturally expressed as a Gibbs measure, where the distribution over the domain $\mathcal{X}$ is given, for $x \in \mathcal{X}$, by:

$$q(x) = \frac{\tilde{q}(x)}{Z(\beta)} = \frac{\exp\{-\beta E(x)\}}{Z(\beta)}, \quad \text{with } Z(\beta) = \sum_{x} \tilde{q}(x). \qquad (1)$$

$E(x)$ is referred to as the "energy" of configuration $x$, $\beta$ is a free parameter known as the inverse temperature, and $Z(\beta)$ is the normalization factor commonly referred to as the partition function. Under certain general conditions on the form of $E$, these models are known as Markov Random Fields (MRFs), and have been very popular within the vision and natural language processing communities. MRFs with latent variables, in particular restricted Boltzmann machines (RBMs) [9], are among the most popular building blocks for deep architectures [1], being used in the unsupervised initialization of both Deep Belief Networks [9] and Deep Boltzmann Machines [22].

As illustrated in Eq. 1, the partition function is computed by summing over all variable configurations. Since the number of configurations scales exponentially with the number of variables, exact calculation of the partition function is generally computationally intractable. Without the partition function, probabilities under the model can only be determined up to a multiplicative constant, which seriously limits the model's utility.

One method recently proposed for estimating $Z(\beta)$ is annealed importance sampling (AIS) [18, 23]. In AIS, $Z(\beta)$ is approximated by the sum of a set of importance-weighted samples drawn from the model distribution. With a large number of variables, drawing a set of importance-weighted samples is generally subject to extreme variance in the importance weights. AIS alleviates this issue by annealing the model distribution through a series of slowly changing distributions that link the target model distribution to one where the log partition function is tractable. While AIS is quite successful, it generally requires the use of tens of thousands of annealing distributions in order to achieve accurate results.
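To give a feel for the procedure, here is a toy AIS run on 1D Gaussians, where the true answer is known in closed form (our illustration; the unnormalized densities, the linear schedule, and the chain count are arbitrary choices, not the configuration used in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Anneal from q_0 = N(0, s0^2) (tractable) to q_K = N(0, s1^2),
# estimating log Z_K - log Z_0 = log(s1/s0).
s0, s1, K, N = 1.0, 3.0, 100, 500
betas = np.linspace(0.0, 1.0, K + 1)
log_f = lambda x, b: -(1 - b) * x**2 / (2 * s0**2) - b * x**2 / (2 * s1**2)

log_w = np.zeros(N)
x = rng.normal(0.0, s0, size=N)          # exact samples from q_0
for k in range(1, K + 1):
    log_w += log_f(x, betas[k]) - log_f(x, betas[k - 1])
    # A transition that leaves the k-th intermediate distribution
    # invariant; here it happens to be Gaussian, so we sample exactly.
    var_k = 1.0 / ((1 - betas[k]) / s0**2 + betas[k] / s1**2)
    x = rng.normal(0.0, np.sqrt(var_k), size=N)

m = log_w.max()
print(m + np.log(np.mean(np.exp(log_w - m))))  # close to log(3) = 1.0986
```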
This computationally intensive requirement renders AIS inappropriate as a means of maintaining a running estimate of the log partition function throughout training. Yet, having ready access to this quantity throughout learning opens the door to a range of possibilities. Likelihood could be used as a basis for model comparison throughout training; early stopping could be accomplished by monitoring an estimate of the likelihood of a validation set. Another important application is in Bayesian inference in MRFs [17], where we require the partition function for each value of the parameters in the region of support. Tracking the log partition function would also enable simultaneous estimation of all the parameters of a heterogeneous model, for example an extended directed graphical model with Gibbs distributions forming some of the model components.

In this work, we consider a method of tracking the log partition function during training, which builds upon the parallel tempering (PT) framework [7, 10, 15]. Our method relies on two basic observations. First, when using stochastic gradient descent¹, parameters tend to change slowly during training; consequently, the partition function $Z(\beta)$ also tends to evolve slowly. We exploit this property of the learning process by using importance sampling to estimate changes in the log partition function from one learning iteration to the next. If the changes in the distribution from time-step $t$ to $t+1$ are small, the importance sampling estimate can be very accurate, even with relatively few samples. This is the same basic strategy employed in AIS, but while with AIS one constructs a path of close distributions through an annealing schedule, in our procedure we simply rely on the path of distributions that emerges from the learning process. Second, parallel tempering relies on simulating an extended system, consisting of multiple models each running at its own temperature. These temperatures are chosen such that neighboring models overlap sufficiently as to allow for frequent cross-temperature state swaps. This is an ideal operating regime for bridge sampling [2, 19], which can thus serve to estimate the difference in log partition functions between neighboring models. While with relatively few samples each method on its own tends not to provide reliable estimates, we propose to combine these measurements using a variation of the well-known Kalman filter (KF), allowing us to accurately track the evolution of the log partition function throughout learning. The efficiency of our method stems from the fact that our estimator makes use of the samples generated in the course of training, thus incurring relatively little additional computational cost.

This paper is structured as follows. In Section 2, we provide a brief overview of RBMs and the SML-PT training algorithm, which serves as the basis of our tracking algorithm. Sections 3.1-3.3 cover the details of the importance and bridge sampling estimates, while Section 3.4 provides a comprehensive look at our filtering procedure and the tracking algorithm as a whole. Experimental results are presented in Section 4.

¹Stochastic gradient descent is one of the most popular methods for training MRFs, precisely because second-order optimization methods typically require a deterministic gradient, whereas sampling-based estimators are the only practical option for models with an intractable partition function.
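At its core, the filtering step developed in Section 3.4 is precision-weighted averaging of noisy estimates of the same quantity; in the scalar case (a toy reduction for intuition, not the full filter) it amounts to:

```python
def fuse(m1, var1, m2, var2):
    # Combine two noisy estimates of one quantity, weighting each by the
    # inverse of its variance; returns the fused mean and variance.
    p1, p2 = 1.0 / var1, 1.0 / var2
    return (p1 * m1 + p2 * m2) / (p1 + p2), 1.0 / (p1 + p2)

print(fuse(10.2, 4.0, 9.6, 1.0))  # -> (9.72, 0.8): pulled toward the
                                  #    lower-variance measurement
```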
2 Stochastic Maximum Likelihood with Parallel Tempering

Our proposed log partition function tracking strategy is applicable to any Gibbs distribution model that is undergoing relatively smooth changes in the partition function. However, we concentrate on its application to the RBM since it has become a model of choice for learning unsupervised features for use in deep feed-forward architectures [9, 1] as well as for modeling complex, high-dimensional distributions [27, 24, 12]. RBMs are bipartite graphical models where visible units $v \in \{0,1\}^{n_v}$ interact with hidden units $h \in \{0,1\}^{n_h}$ through the energy function $E(v, h) = -h^\top W v - c^\top h - b^\top v$. The model parameters $\theta = [W, c, b]$ consist of the weight matrix $W \in \mathbb{R}^{n_h \times n_v}$, whose entries $W_{ij}$ connect units $(v_i, h_j)$, and the offset vectors $b$ and $c$. RBMs can be trained through a stochastic approximation to the negative log-likelihood gradient $\frac{\partial F(v)}{\partial \theta} - \mathbb{E}_p\left[\frac{\partial F(v)}{\partial \theta}\right]$, where $F(v)$ is the free-energy function defined as $F(v) = -\log \sum_h \exp(-E(v, h))$. In Stochastic Maximum Likelihood (SML) [25], we replace the expectation by a sample average, where approximate samples are drawn from a persistent Markov chain, updated through $k$ steps of Gibbs sampling between parameter updates. Other algorithms improve upon this default formulation by replacing Gibbs sampling with more powerful sampling algorithms [26, 7, 21, 20]. By increasing the mixing rate of the underlying Markov chain, these methods can lead to lower variance estimates of the maximum likelihood gradient and faster convergence.

However, from the perspective of tracking the log partition function, we will see in Section 3 that the SML-PT scheme [7] presents a rather unique advantage. Throughout training, parallel tempering draws samples from an extended system $\mathcal{M}_t = \{q_{i,t};\ i \in [1, M]\}$, where $q_{i,t}$ denotes the model with inverse temperature $\beta_i \in [0, 1]$ obtained after $t$ steps of gradient descent. Each model $q_{i,t}$ (associated with a unique partition function $Z_{i,t}$) represents a smoothed version of the target distribution $q_{1,t}$ (with $\beta_1 = 1$). The inverse temperature $\beta_i = 1/T_i \in [0, 1]$ controls the degree of smoothing, with smaller values of $\beta_i$ leading to distributions which are easier to sample from. To leverage these fast-mixing chains, PT alternates $k$ steps of Gibbs sampling (performed independently at each temperature) with cross-temperature state swaps. These are proposed between neighboring chains using a Metropolis-Hastings-based acceptance criterion. If we denote the particle obtained by each model $q_{i,t}$ after $k$ steps of Gibbs sampling as $x_{i,t}$, then the swap acceptance ratio $r_{i,t}$ for chains $(i, i+1)$ is given by:

$$r_{i,t} = \min\left(1,\ \frac{\tilde{q}_{i,t}(x_{i+1,t})\,\tilde{q}_{i+1,t}(x_{i,t})}{\tilde{q}_{i,t}(x_{i,t})\,\tilde{q}_{i+1,t}(x_{i+1,t})}\right) \qquad (2)$$

These swaps ensure that samples from highly ergodic chains are gradually swapped into lower temperature chains. Our swapping schedule is the deterministic even-odd algorithm [14], which proposes swaps between all pairs $(q_{i,t}, q_{i+1,t})$ with even $i$'s, followed by those with odd $i$'s. The gradient is then estimated by using the sample which was last swapped into temperature $\beta_1$. To reduce the variance of our estimate, we run multiple Markov chains per temperature, yielding a mini-batch of model samples $X_{i,t} = \{x_{i,t}^{(n)} \sim q_{i,t}(x);\ 1 \le n \le N\}$ at each time-step and temperature.
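A minimal sketch of the swap criterion of Eq. 2 for binary RBMs, marginalizing $h$ analytically (we assume here that $\beta$ tempers the whole energy; the experiments in Section 4 instead leave the $b^\top v$ term untempered):

```python
import numpy as np

def free_energy(v, W, b, c, beta=1.0):
    # F_beta(v) = -log sum_h exp(-beta * E(v, h)), closed form for binary h.
    return -beta * (b @ v) - np.logaddexp(0.0, beta * (W @ v + c)).sum()

def swap_prob(v_i, v_j, beta_i, beta_j, W, b, c):
    # Eq. (2) with q~_beta(v) = exp(-F_beta(v)):
    # r = q~_i(v_j) q~_j(v_i) / (q~_i(v_i) q~_j(v_j)).
    log_r = (free_energy(v_i, W, b, c, beta_i) + free_energy(v_j, W, b, c, beta_j)
             - free_energy(v_j, W, b, c, beta_i) - free_energy(v_i, W, b, c, beta_j))
    return min(1.0, float(np.exp(log_r)))

rng = np.random.default_rng(0)
n_v, n_h = 6, 4
W = rng.normal(size=(n_h, n_v))
b, c = rng.normal(size=n_v), rng.normal(size=n_h)
v1 = rng.integers(0, 2, n_v).astype(float)
v2 = rng.integers(0, 2, n_v).astype(float)
print(swap_prob(v1, v2, 1.0, 0.5, W, b, c))
```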
SML with Adaptive Parallel Tempering (SML-APT) [6] further improves upon SML-PT by automating the choice of temperatures. It does so by maximizing the flow of particles between extremal temperatures, yielding better ergodicity and more robust sampling in the negative phase of training.

3 Tracking the Partition Function

Unrolling in time (learning iterations) the $M$ models being simulated by PT, we can envision a two-dimensional lattice of RBMs indexed by $(i, t)$. As previously mentioned, gradient descent learning causes $q_{i,t}$, the model with inverse temperature $\beta_i$ obtained at time-step $t$, to be close to $q_{i,t-1}$. We can thus apply importance sampling between adjacent temporal models² to obtain an estimate of $\zeta_{i,t} - \zeta_{i,t-1}$, where $\zeta_{i,t} = \log Z_{i,t}$, denoted $O_{i,t}^{\Delta t}$. Inspired by the annealing distributions used in AIS, one could think to iterate this process from a known quantity $\zeta_{i,1}$, in order to estimate $\zeta_{i,t}$. Unfortunately, the variance of such an estimate would grow quickly with $t$. PT provides an interesting solution to this problem, by simulating an extended system $\mathcal{M}_t$ where the $\beta_i$'s are selected such that $q_{i,t}$ and $q_{i+1,t}$ have enough overlap to allow for frequent cross-temperature state swaps. This motivates using bridge sampling [2] to provide an estimate of $\zeta_{i+1,t} - \zeta_{i,t}$, the difference in log partitions between temperatures $\beta_{i+1}$ and $\beta_i$. We denote this estimate $O_{i,t}^{\Delta\beta}$. Additionally, we can treat $\zeta_{M,t}$ as a known quantity during training, by setting $\beta_M = 0$.³

Beginning with $\zeta_{M,t}$ (see definition in Fig. 1), repeated application of bridge sampling alone could in principle arrive at an accurate estimate of $\{\zeta_{i,t};\ i \in [1, M],\ t \in [1, T]\}$. However, reducing the variance sufficiently to provide useful estimates of the log partition function would require using a relatively large number of samples at each temperature. Within the context of RBM training, the required number of samples at each of the parallel chains would carry an excessive computational cost. Nonetheless, even with relatively few samples, the bridge sampling estimate provides an additional source of information regarding the log partition function.

Our strategy is to combine these two high variance estimates $O_{i,t}^{\Delta t}$ and $O_{i,t}^{\Delta\beta}$ by treating the unknown log partition functions as a latent state to be tracked by a Kalman filter. In this framework, we consider $O_{i,t}^{\Delta t}$ and $O_{i,t}^{\Delta\beta}$ as observed quantities, used to iteratively refine the joint distribution over the latent state at each learning iteration. Formally, we define this latent state to be $\zeta_t = [\zeta_{1,t}, \ldots, \zeta_{M,t}, b_t]$, where $b_t$ is an extra term to account for a systematic bias in $O_{1,t}^{\Delta t}$ (see Sec. 3.2 for details). The corresponding graphical model is shown in Figure 1.

²This same technique was recently used in [5], in the context of learning rate adaptation.
³The visible units of an RBM with zero weights are marginally independent. Its log partition function is thus given by $\sum_i \log(1 + \exp(b_i)) + n_h \cdot \log(2)$.

System equations:

$$p(\zeta_0) = \mathcal{N}(\mu_0, \Sigma_0)$$
$$p(\zeta_t \mid \zeta_{t-1}) = \mathcal{N}(\zeta_{t-1}, \Sigma_\zeta)$$
$$p(O_t^{\Delta t} \mid \zeta_t, \zeta_{t-1}) = \mathcal{N}(C\,[\zeta_t, \zeta_{t-1}]^\top, \Sigma_{\Delta t})$$
$$p(O_t^{\Delta\beta} \mid \zeta_t) = \mathcal{N}(H \zeta_t, \Sigma_{\Delta\beta})$$

where $C$ is the $M \times 2(M+1)$ matrix encoding $O_{i,t}^{\Delta t} \approx \zeta_{i,t} - \zeta_{i,t-1} + b_t \cdot \mathbb{1}_{i=1}$, and $H$ is the $(M-1) \times (M+1)$ first-difference matrix encoding $O_{i,t}^{\Delta\beta} \approx \zeta_{i+1,t} - \zeta_{i,t}$.

Figure 1: A directed graphical model for log partition function tracking. The shaded nodes represent observed variables, and the double-walled nodes represent the tractable $\zeta_{M,:}$ with $\beta_M = 0$. For clarity of presentation, we show the bias term as distinct from the other $\zeta_{i,t}$ (recall $b_t = \zeta_{M+1,t}$).
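The observation matrices can be built mechanically; the sketch below is one plausible reading of Figure 1, not the authors' exact definition, with the stacked state ordered $[\zeta_{t-1}; \zeta_t]$ to match the convention of Figure 2:

```python
import numpy as np

def make_C_H(M):
    # Latent state: [zeta_1, ..., zeta_M, b], dimension D = M + 1.
    # C maps the stacked [zeta_{t-1}; zeta_t] to the M importance-sampling
    # observations zeta_{i,t} - zeta_{i,t-1} + b_t * 1_{i=1}; H maps zeta_t
    # to the M-1 bridge observations zeta_{i+1,t} - zeta_{i,t}.
    D = M + 1
    C = np.zeros((M, 2 * D))
    C[:, :M] -= np.eye(M)              # -zeta_{i,t-1}
    C[:, D:D + M] += np.eye(M)         # +zeta_{i,t}
    C[0, 2 * D - 1] = 1.0              # +b_t, first chain only
    H = np.zeros((M - 1, D))
    for i in range(M - 1):
        H[i, i], H[i, i + 1] = -1.0, 1.0
    return C, H

C, H = make_C_H(M=4)
```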
3.1 Model Dynamics

The first step is to specify how we expect the log partition function to change over training iterations, i.e. our prior over the model dynamics. SML training of the RBM model parameters is a stochastic gradient descent algorithm (typically over a mini-batch of $N$ examples) where the parameters change by small increments specified by an approximation to the likelihood gradient. This implies that both the model distribution and the partition function change relatively slowly over learning increments, with the rate of change being a function of the SML learning rate; i.e. we expect $q_{i,t}$ and $\zeta_{i,t}$ to be close to $q_{i,t-1}$ and $\zeta_{i,t-1}$ respectively. Our model dynamics are thus simple and capture the fact that the log partition function is slowly changing. Characterizing the evolution of the log partition functions as independent Gaussian processes, we model the probability of $\zeta_t$ conditioned on $\zeta_{t-1}$ as $p(\zeta_t \mid \zeta_{t-1}) = \mathcal{N}(\zeta_{t-1}, \Sigma_\zeta)$, a normal distribution with mean $\zeta_{t-1}$ and fixed diagonal covariance $\Sigma_\zeta = \mathrm{Diag}[\sigma_Z^2, \ldots, \sigma_Z^2, \sigma_b^2]$. $\sigma_Z^2$ and $\sigma_b^2$ are hyper-parameters controlling how quickly the latent states $\zeta_{i,t}$ and $b_t$ are expected to change between learning iterations.

3.2 Importance Sampling Between Learning Iterations

The observation distribution $p(O_t^{\Delta t} \mid \zeta_t, \zeta_{t-1}) = \mathcal{N}(C\,[\zeta_t, \zeta_{t-1}]^\top, \Sigma_{\Delta t})$ models the relationship between the evolution of the latent log partitions and the statistical measurements $O_t^{\Delta t} = [O_{1,t}^{\Delta t}, \ldots, O_{M,t}^{\Delta t}]$ given by importance sampling, with $O_{i,t}^{\Delta t}$ defined as:

$$O_{i,t}^{\Delta t} = \log \frac{1}{N} \sum_{n=1}^{N} w_{i,t}^{(n)}, \quad \text{with } w_{i,t}^{(n)} = \frac{\tilde{q}_{i,t}(x_{i,t-1}^{(n)})}{\tilde{q}_{i,t-1}(x_{i,t-1}^{(n)})}. \qquad (3)$$

In the above distribution, the matrix $C$ encodes the fact that the average importance weights estimate $\zeta_{i,t} - \zeta_{i,t-1} + b_t \cdot \mathbb{1}_{i=1}$, where $\mathbb{1}$ is the indicator function. It is formally defined in Fig. 1. $\Sigma_{\Delta t}$ is a diagonal covariance matrix, whose elements are updated online from the estimated variances of the log-importance weights. At time-step $t$, the $i$-th entry of its diagonal is thus given by $\mathrm{Var}[w_{i,t}] / \big(\sum_n w_{i,t}^{(n)}\big)^2$.

The term $b_t$ accounts for a systematic bias in $O_{1,t}^{\Delta t}$. It stems from the reuse of the samples $X_{1,t-1}$: first, for estimating the negative phase gradient at time-step $t-1$ (i.e. the gradient applied between $q_{1,t-1}$ and $q_{1,t}$), and second, to compute the importance weights of Eq. 3. Since the SML gradient acts to lower the probability of negative particles, $w_{1,t}^{(n)}$ is biased.
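A numerically stable sketch of the estimator in Eq. 3 (illustrative; `log_q_new` and `log_q_old` stand for the unnormalized log-densities $\log \tilde{q}_{i,t}$ and $\log \tilde{q}_{i,t-1}$, and the Gaussian check is our own sanity test):

```python
import numpy as np

def delta_log_z(log_q_new, log_q_old, samples):
    # Estimate log Z_t - log Z_{t-1} as the log of the average importance
    # weight q~_t(x)/q~_{t-1}(x) over samples x ~ q_{t-1} (log domain).
    log_w = np.array([log_q_new(x) - log_q_old(x) for x in samples])
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

# Sanity check with 1D Gaussians, where log Z = log(sigma) + const:
rng = np.random.default_rng(0)
s_old, s_new = 1.0, 1.05
xs = rng.normal(0.0, s_old, size=10000)
est = delta_log_z(lambda x: -x**2 / (2 * s_new**2),
                  lambda x: -x**2 / (2 * s_old**2), xs)
print(est, np.log(s_new / s_old))  # both close to 0.0488
```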
3.3 Bridging the Parallel Tempering Temperature Gaps

Consider now the other dimension of our parallel tempered lattice of RBMs: temperature. As previously mentioned, neighboring distributions in PT are designed to have significant overlap in their densities in order to permit particle swaps. However, the intermediate distributions q_{i,t}(v, h) are not so close to one another that we can use them as the intermediate distributions of AIS: AIS typically requires thousands of intermediate chains, and maintaining that number of parallel chains would carry a prohibitive computational burden. On the other hand, the parallel tempering strategy of spacing the temperatures to ensure moderately frequent swapping nicely matches the ideal operating regime of bridge sampling [2].

We thus consider a second observation model, p(O_t^{(Δβ)} | ζ_t) = N(H ζ_t, Σ_{Δβ}), with H defined in Fig. 1. The quantities O_t^{(Δβ)} = [O^{Δβ}_{1,t}, ..., O^{Δβ}_{M−1,t}] are obtained via bridge sampling as estimates of ζ_{i+1,t} − ζ_{i,t}. Entries O^{Δβ}_{i,t} are given by:

    O^{\Delta\beta}_{i,t} = \log \sum_{n=1}^{N} u^{(n)}_{i,t} - \log \sum_{n=1}^{N} v^{(n)}_{i,t}, \qquad u^{(n)}_{i,t} = \frac{q^{*}_{i,t}(x^{(n)}_{i,t})}{\tilde{q}_{i,t}(x^{(n)}_{i,t})}, \quad v^{(n)}_{i,t} = \frac{q^{*}_{i,t}(x^{(n)}_{i+1,t})}{\tilde{q}_{i+1,t}(x^{(n)}_{i+1,t})}.    (4)

The bridging distribution q*_{i,t} [2, 19] is chosen such that it has large support with both q_{i,t} and q_{i+1,t}. For all i ∈ [1, M−1], we choose the approximately optimal distribution

    q^{*}_{i,t}(x) = \frac{\tilde{q}_{i,t}(x)\,\tilde{q}_{i+1,t}(x)}{s_{i,t}\,\tilde{q}_{i,t}(x) + \tilde{q}_{i+1,t}(x)},

where s_{i,t} ≈ Z_{i+1,t} / Z_{i,t}. Since the Z_{i,t}'s are the very quantities we are trying to estimate, this definition may seem problematic. However, it is possible to start with a coarse estimate of s_{i,1} and refine it in subsequent iterations by using the output of our tracking algorithm. Σ_{Δβ} is once again a diagonal covariance matrix, updated online from the variances of the log-importance weights u and v [19]; its i-th entry is given by Var[u_{i,t}] / (Σ_n u^{(n)}_{i,t})² + Var[v_{i,t}] / (Σ_n v^{(n)}_{i,t})².
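A sketch of the bridge-sampling observation of Eq. 4 using the approximately optimal bridge above. Inputs are unnormalized log-probabilities evaluated on the two chains' samples, and s is the running estimate of Z_{i+1,t}/Z_{i,t}; function and argument names are ours:

```python
import numpy as np

def bridge_obs(log_qi_on_xi, log_qip1_on_xi,
               log_qi_on_xip1, log_qip1_on_xip1, s):
    """Bridge-sampling estimate of zeta_{i+1,t} - zeta_{i,t} (Eq. 4),
    using q*(x) = q_i(x) q_{i+1}(x) / (s q_i(x) + q_{i+1}(x)).
    *_on_xi are log-probs of samples drawn from q_i; *_on_xip1 of samples
    drawn from q_{i+1}. A sketch, not the authors' exact implementation."""
    def log_bridge(log_qi, log_qip1):
        # log q*(x) = log q_i + log q_{i+1} - log(s*q_i + q_{i+1})
        return log_qi + log_qip1 - np.logaddexp(np.log(s) + log_qi, log_qip1)
    log_u = log_bridge(log_qi_on_xi, log_qip1_on_xi) - log_qi_on_xi
    log_v = log_bridge(log_qi_on_xip1, log_qip1_on_xip1) - log_qip1_on_xip1
    lse = lambda a: a.max() + np.log(np.exp(a - a.max()).sum())
    return lse(log_u) - lse(log_v)     # log sum_n u_n - log sum_n v_n
```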
3.4 Kalman Filtering of the Log-Partition Function

In the above we have described two sources of information regarding the log partition function for each of the RBMs in the lattice. In this section we describe a method to fuse all available information to improve the overall accuracy of the estimate of every log partition function. We now consider the steps involved in the inference process in moving from an estimate of the posterior over the latent state at time t−1 to an estimate of the posterior at time t. We begin by assuming we know the posterior p(ζ_{t−1} | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}), where O^{(·)}_{t−1:0} = [O^{(·)}_1, ..., O^{(·)}_{t−1}].

We follow the treatment of Neal [18] in characterizing our uncertainty regarding ζ_{i,t} as a Gaussian distribution and define p(ζ_{t−1} | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}) ≈ N(μ_{t−1,t−1}, P_{t−1,t−1}), a multivariate Gaussian with mean μ_{t−1,t−1} and covariance P_{t−1,t−1}. The double-index notation indicates which is the latest observation being conditioned on for each of the two types of observations: e.g. μ_{t,t−1} represents the posterior mean given O^{(Δt)}_{t:0} and O^{(Δβ)}_{t−1:0}.

Departing from the typical Kalman filter setting, O_t^{(Δt)} depends on both ζ_t and ζ_{t−1}. In order to incorporate this observation into our estimate of the latent state, we first need to specify the prior joint distribution p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}) = p(ζ_t | ζ_{t−1}) p(ζ_{t−1} | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}), with p(ζ_t | ζ_{t−1}) as defined in Sec. 3.1. Observation O_t^{(Δt)} is then incorporated through Bayes' rule, yielding p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t−1:0}). Having incorporated the importance sampling estimate into the model, we can then marginalize over ζ_{t−1} (which is no longer required) to yield p(ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t−1:0}). Finally, it remains only to incorporate the bridge sampler estimate O_t^{(Δβ)} by a second application of Bayes' rule, which gives us p(ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t:0}), the updated posterior over the latent state at time-step t. The detailed inference equations are provided in Fig. 2 and can be derived easily from standard textbook equations on products and marginals of normal distributions [4].

Inference Equations:

(i) p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}) = N(μ̄_{t−1,t−1}, V_{t−1,t−1}), with

    μ̄_{t−1,t−1} = [μ_{t−1,t−1}; μ_{t−1,t−1}],   V_{t−1,t−1} = [ P_{t−1,t−1},  P_{t−1,t−1};  P_{t−1,t−1},  Σ_ζ + P_{t−1,t−1} ]

(ii) p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t−1:0}) = N(μ_{t,t−1}, V_{t,t−1}), with

    V_{t,t−1} = (V^{−1}_{t−1,t−1} + C^T Σ^{−1}_{Δt} C)^{−1},   μ_{t,t−1} = V_{t,t−1} (C^T Σ^{−1}_{Δt} O_t^{(Δt)} + V^{−1}_{t−1,t−1} μ̄_{t−1,t−1})

(iii) p(ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t−1:0}) = N(μ_{t,t−1}, P_{t,t−1}), with μ_{t,t−1} = [μ_{t,t−1}]_2 and P_{t,t−1} = [V_{t,t−1}]_{2,2}

(iv) p(ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t:0}) = N(μ_{t,t}, P_{t,t}), with

    P_{t,t} = (P^{−1}_{t,t−1} + H^T Σ^{−1}_{Δβ} H)^{−1},   μ_{t,t} = P_{t,t} (H^T Σ^{−1}_{Δβ} O_t^{(Δβ)} + P^{−1}_{t,t−1} μ_{t,t−1})

Figure 2: Inference equations for our log partition tracking algorithm, a variant on the Kalman filter. For any vector v and matrix V, we use the notation [v]_2 to denote the vector obtained by preserving the bottom-half elements of v, and [V]_{2,2} to indicate the lower right-hand quadrant of V.
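The four inference steps of Fig. 2 translate directly into a Kalman-style update. The sketch below is our illustration rather than the authors' implementation, written for clarity, not efficiency, and assuming C has been reordered to act on the stacked vector [ζ_{t−1}; ζ_t]:

```python
import numpy as np

def kalman_step(mu, P, O_dt, O_db, Sigma_z, Sigma_dt, Sigma_db, C, H):
    """One tracking update (steps i-iv of Fig. 2). mu, P: posterior mean
    and covariance of zeta_{t-1} (dimension D = M+1). C is assumed to act
    on [zeta_{t-1}; zeta_t]; if built for [zeta_t; zeta_{t-1}] as in
    Fig. 1, swap its two block-halves first. [.]_2 selects the zeta_t
    block. A reference sketch; invert-everything style is O(D^3)."""
    D = mu.shape[0]
    # (i) joint prior over (zeta_{t-1}, zeta_t)
    mu_j = np.concatenate([mu, mu])
    V = np.block([[P, P], [P, P + Sigma_z]])
    # (ii) condition on the importance-sampling observation O_dt
    Vinv = np.linalg.inv(V)
    Sdt_inv = np.linalg.inv(Sigma_dt)
    V_post = np.linalg.inv(Vinv + C.T @ Sdt_inv @ C)
    mu_post = V_post @ (C.T @ Sdt_inv @ O_dt + Vinv @ mu_j)
    # (iii) marginalize out zeta_{t-1}
    mu_t, P_t = mu_post[D:], V_post[D:, D:]
    # (iv) condition on the bridge-sampling observation O_db
    Sdb_inv = np.linalg.inv(Sigma_db)
    Pt_inv = np.linalg.inv(P_t)
    P_new = np.linalg.inv(Pt_inv + H.T @ Sdb_inv @ H)
    mu_new = P_new @ (H.T @ Sdb_inv @ O_db + Pt_inv @ mu_t)
    return mu_new, P_new
```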
4 Experimental Results

For the following experiments, SML was performed using either constant or decreasing learning rates. We used the decreasing schedule ε_t = min(τ ε_init / (t+1), ε_init), where ε_t is the learning rate at time-step t, ε_init is the initial or base learning rate, and τ is the decrease constant. Entries of Σ_ζ (see Section 3.1) were set as follows. We set σ_Z² = +∞, which is to say that we did not exploit the smoothness prior when estimating the prior distribution over the joint p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}). σ_b² was set to 10⁻³ · ε_t, allowing the estimated bias on O^{Δt}_{1,t} to change faster for large learning rates.

When initializing the RBM visible offsets⁴ as proposed in [8], the intermediate distributions of Eq. 1 lead to sub-optimal swap rates between adjacent chains early in training, with a direct impact on the quality of tracking. In our experiments, we avoid this issue by using the intermediate distributions q_{i,t}(v, h) ∝ exp[−β_i (−h^T W v − c^T h) + b^T v], in which the visible bias term is not tempered. We tested mini-batch sizes N ∈ {10, 20}.

Comparing to Exact Likelihood. We start by comparing the performance of our tracking algorithm to the exact likelihood, obtained by marginalizing over both visible and hidden units. We chose 25 hidden units and trained on the ubiquitous MNIST [13] dataset for 300k updates, using both fixed and adaptive learning rates. The main results are shown in Figure 3. In Figure 3(a), we can see that our tracker provides a very good fit to the likelihood with ε_init = 0.001 and decrease constants τ ∈ {10³, 10⁴, 10⁵}. Increasing the base learning rate to ε_init = 0.01 in Figure 3(b), we maintain a good fit up to τ = 10⁴, with a small dip in performance at 50k updates. Our tracker fails, however, to capture the oscillatory behavior engendered by too high a learning rate (ε_init = 0.01, τ = 10⁵). It is interesting to note that the failure mode of our algorithm seems to coincide with an unstable optimization process.

Figure 3: Comparison of exact test-set likelihood (nats) and estimated likelihood as given by AIS and our tracking algorithm. We trained a 25-hidden-unit RBM for 300k updates using SML, with the learning rate schedule ε_t = min(τ ε_init / (t+1), ε_init), with (left) ε_init = 0.001 and (right) ε_init = 0.01, varying τ ∈ {10³, 10⁴, 10⁵}.

Comparing to AIS for Large-Scale Models. In evaluating the performance of our tracking algorithm on larger models, exact computation of the likelihood is no longer possible, so we use AIS as our baseline.⁵ Our models consisted of RBMs with 500 hidden units, trained using SML-APT [6] on the MNIST and Caltech Silhouettes [16] datasets. We performed 200k updates, with learning rate parameters ε_init ∈ {.01, .001} and τ ∈ {10³, 10⁴, 10⁵}. On MNIST, AIS estimated the test-likelihood of our best model at −94.34 ± 3.08 (where ± indicates the 3σ confidence interval), while our tracking algorithm reported a value of −89.96. On Caltech Silhouettes, our model reached −134.23 ± 21.14 according to AIS, while our tracker reported −114.31. To put these numbers in perspective, Salakhutdinov and Murray [23] report values of −125.53, −105.50 and −86.34 for 500-hidden-unit RBMs trained with CD-{1,3,25} respectively, and Marlin et al. [16] report around −120 for Caltech Silhouettes, again using 500 hidden units.

⁴ Each b_k is initialized to log(x̄_k / (1 − x̄_k)), where x̄_k is the mean of the k-th dimension on the training set.
⁵ Our base AIS configuration was 10³ intermediate distributions spaced linearly between β = [0, 0.5], 10⁴ distributions for the interval [0.5, 0.9] and 10⁴ for [0.9, 1.0]. Estimates of log Z are averaged over 100 annealed importance weights.

Figure 4: (left) Plotted on the left y-axis are the Kalman filter measurements O_t^{(Δβ)}, our log partition estimate ζ_{1,t}, and point estimates of ζ_{1,t} obtained by AIS; on the right y-axis, the measurement O_t^{(Δt)} is plotted along with the estimated bias b_t. Note how b_t becomes progressively less pronounced as the learning rate decreases and the model converges. Also of interest, the variance on O_t^{(Δβ)} increases with t but is compensated by a decreasing variance on O_t^{(Δt)}, yielding a relatively smooth estimate ζ_{1,t}. (Not shown) The ±3σ confidence interval of the AIS estimate at 200k updates was measured to be 3.08. (right) Example of early-stopping on the dna dataset.

Figure 4 (left) shows a detailed view of the Kalman filter measurements and its output, for the best-performing MNIST model. We can see that the variance on O_t^{(Δβ)} (plotted on the left y-axis) grows slowly over time, which is mitigated by a decreasing variance on O_t^{(Δt)} (plotted on the right y-axis). As the model converges and the learning rate decreases, q_{i,t−1} and q_{i,t} become progressively closer and the importance sampling estimates become more robust. The estimated bias term b_t also converges to zero. An important point to note is that a naive linear spacing of temperatures yielded low exchange rates between neighboring temperatures, with adverse effects on the quality of our bridge sampling estimates; as a result, we observed a drop in performance, both in likelihood and in tracking quality. Adaptive tempering [6] (with a fixed number of chains M) proved crucial in getting good tracking for these experiments.

Early-Stopping Experiments. Our final set of experiments highlights the performance of our method on a wide variety of datasets [11]. In these experiments, we use our estimate of the log partition function to monitor model performance on a held-out validation set. When the onset of over-fitting is detected, we store the model parameters and report the associated test-set likelihood, as estimated by both AIS and our tracking algorithm. The advantages of such an early-stopping procedure are shown in Figure 4(b), where training log-likelihood increases throughout training while validation performance starts to decrease around 250 epochs. Detecting over-fitting without tracking the log partition would require a dense grid of AIS runs, which would prove computationally prohibitive.

    Dataset        RBM (Kalman)   RBM (AIS)           RBM-25     NADE
    adult          -15.24         -15.70 (± 0.50)     -16.29     -13.19
    connect4       -15.77         -16.81 (± 0.67)     -22.66     -11.99
    dna            -87.97         -88.51 (± 0.97)     -96.90     -84.81
    mushrooms      -10.49         -14.68 (± 30.75)    -15.15     -9.81
    nips           -270.10        -271.23 (± 0.58)    -277.37    -273.08
    ocr_letters    -33.87         -31.45 (± 2.70)     -43.05     -27.22
    rcv1           -46.89         -48.61 (± 0.69)     -48.88     -46.66
    web            -28.95         -29.91 (± 0.74)     -29.38     -28.39

Table 1: Test set likelihood on various datasets. Models were trained using SML-PT. Early-stopping was performed by monitoring likelihood on a hold-out validation set, using our KF estimate of the log partition function. Best models (i.e. the choice of hyper-parameters) were then chosen according to the AIS likelihood estimate. Results for 25-hidden-unit RBMs and NADE are taken from [11]. ± indicates a confidence interval of three standard deviations.
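For illustration, the early-stopping monitor can be sketched as follows. This is our own sketch: the use of precomputed validation free energies, and the patience-based stopping rule, are assumptions, since the paper does not spell out its exact over-fitting detector:

```python
import numpy as np

def validation_ll(free_energies_valid, zeta_hat):
    """Estimated validation log-likelihood of an RBM using the tracked
    log partition: log p(v) = -F(v) - log Z, with zeta_hat the Kalman
    estimate of log Z_{1,t} at the current iteration (the free energies
    F(v) of the validation set are assumed precomputed)."""
    return -np.mean(free_energies_valid) - zeta_hat

def should_stop(ll_history, patience=10):
    """Illustrative over-fitting detector (our choice, not the paper's):
    stop once the best validation likelihood has not improved for
    `patience` consecutive checks."""
    if len(ll_history) <= patience:
        return False
    return max(ll_history[-patience:]) < max(ll_history)
```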
We tested parameters in the following ranges: number of hidden units in {100, 200, 500, 1000} (depending on dataset size), and learning rates in {10⁻², 10⁻³, 10⁻⁴}, either held constant during training or annealed with constants τ ∈ {10³, 10⁴, 10⁵}. For tempering, we used 10 fixed temperatures, spaced linearly between β = [0, 1]. SGD was performed using mini-batches of size {10, 100} when estimating the gradient, and mini-batches of size {10, 20} for our set of tempered chains (we thus simulate 10 × {10, 20} tempered chains in total). As can be seen in Table 1, our tracker performs very well compared to the AIS estimates, across all datasets. Efforts to lower the variance of the AIS estimate proved unsuccessful, even going as far as 10⁵ intermediate distributions.

5 Discussion

In this paper, we have shown that while exact calculation of the partition function of RBMs may be intractable, one can exploit the smoothness of gradient descent learning in order to approximately track the evolution of the log partition function during learning. Treating the ζ_{i,t}'s as latent variables, the graphical model of Figure 1 allowed us to combine multiple sources of information to achieve good tracking of the log partition function throughout training, on a variety of datasets. We note, however, that good tracking performance is contingent on the ergodicity of the negative phase sampler. Unsurprisingly, this is the same condition required by SML for accurate estimation of the negative phase gradient.

The method presented in this paper is also computationally attractive, with only a small computational overhead relative to SML-PT training. The added cost lies in the computation of the importance weights for importance sampling and bridge sampling. However, this boils down to computing free energies, which are mostly pre-computed in the course of gradient updates, with the sole exception being the computation of q̃_{i,t}(x_{i,t−1}) in the importance sampling step. In comparison to AIS, our method allows us to fairly accurately track the log partition function, at a per-point-estimate cost well below that of AIS. Having a reliable and accurate online estimate of the log partition function opens the door to a wide range of new research directions.

Acknowledgments

The authors acknowledge the financial support of NSERC and CIFAR, and Calcul Québec for computational resources. We also thank Hugo Larochelle for access to the datasets of Sec. 4; Hannes Schulz, Andreas Mueller, Olivier Delalleau and David Warde-Farley for feedback on the paper and algorithm; along with the developers of Theano [3].
References

[1] Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127. Also published as a book, Now Publishers, 2009.
[2] Bennett, C. (1976). Efficient estimation of free energy differences from Monte Carlo data. Journal of Computational Physics, 22(2), 245–268.
[3] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy). Oral presentation.
[4] Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
[5] Cho, K., Raiko, T., and Ilin, A. (2011). Enhanced gradient and adaptive learning rate for training restricted Boltzmann machines. In L. Getoor and T. Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 105–112, New York, NY, USA. ACM.
[6] Desjardins, G., Courville, A., and Bengio, Y. (2010a). Adaptive parallel tempering for stochastic maximum likelihood learning of RBMs. NIPS*2010 Deep Learning and Unsupervised Feature Learning Workshop.
[7] Desjardins, G., Courville, A., Bengio, Y., Vincent, P., and Delalleau, O. (2010b). Tempered Markov chain Monte Carlo for training of restricted Boltzmann machines. In JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), volume 9, pages 145–152.
[8] Hinton, G. (2010). A practical guide to training restricted Boltzmann machines. Technical Report 2010-003, University of Toronto. Version 1.
[9] Hinton, G. E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.
[10] Iba, Y. (2001). Extended ensemble Monte Carlo. International Journal of Modern Physics, C12, 623–656.
[11] Larochelle, H. and Murray, I. (2011). The neural autoregressive distribution estimator. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011), volume 15 of JMLR: W&CP.
[12] Larochelle, H., Bengio, Y., and Turian, J. (2010). Tractable multivariate binary density estimation and the restricted Boltzmann forest. Neural Computation, 22(9), 2285–2307.
[13] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
[14] Lingenheil, M., Denschlag, R., Mathias, G., and Tavan, P. (2009). Efficiency of exchange schemes in replica exchange. Chemical Physics Letters, 478(1–3), 80–84.
[15] Marinari, E. and Parisi, G. (1992). Simulated tempering: a new Monte Carlo scheme. EPL (Europhysics Letters), 19(6), 451.
[16] Marlin, B., Swersky, K., Chen, B., and de Freitas, N. (2009). Inductive principles for restricted Boltzmann machine learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), volume 9, pages 509–516.
[17] Murray, I. and Ghahramani, Z. (2004). Bayesian learning in undirected graphical models: approximate MCMC algorithms.
[18] Neal, R. M. (2001). Annealed importance sampling. Statistics and Computing, 11(2), 125–139.
[19] Neal, R. M. (2005). Estimating ratios of normalizing constants using linked importance sampling.
[20] Salakhutdinov, R. (2010a). Learning deep Boltzmann machines using adaptive MCMC. In L. Bottou and M. Littman, editors, Proceedings of the Twenty-seventh International Conference on Machine Learning (ICML-10), volume 1, pages 943–950. ACM.
[21] Salakhutdinov, R. (2010b). Learning in Markov random fields using tempered transitions. In NIPS'09.
[22] Salakhutdinov, R. and Hinton, G. E. (2009). Deep Boltzmann machines. In AISTATS 2009, volume 5, pages 448–455.
[23] Salakhutdinov, R. and Murray, I. (2008). On the quantitative analysis of deep belief networks. In W. W. Cohen, A. McCallum, and S. T. Roweis, editors, ICML 2008, volume 25, pages 872–879. ACM.
[24] Taylor, G. and Hinton, G. (2009). Factored conditional restricted Boltzmann machines for modeling motion style. In L. Bottou and M. Littman, editors, ICML 2009, pages 1025–1032. ACM.
[25] Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In W. W. Cohen, A. McCallum, and S. T. Roweis, editors, ICML 2008, pages 1064–1071. ACM.
[26] Tieleman, T. and Hinton, G. (2009). Using fast weights to improve persistent contrastive divergence. In L. Bottou and M. Littman, editors, ICML 2009, pages 1033–1040. ACM.
[27] Welling, M., Rosen-Zvi, M., and Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. In NIPS'04, volume 17, Cambridge, MA. MIT Press.
Learning Probabilistic Non-Linear Latent Variable Models for Tracking Complex Activities

Angela Yao*        Juergen Gall        Luc Van Gool        Raquel Urtasun
ETH Zurich         ETH Zurich          ETH Zurich          TTI Chicago
{yaoa, gall, vangool}@vision.ee.ethz.ch, [email protected]

Abstract

A common approach for handling the complexity and inherent ambiguities of 3D human pose estimation is to use pose priors learned from training data. Existing approaches, however, are either too simplistic (linear), too complex to learn, or can only learn latent spaces from "simple data", i.e., single activities such as walking or running. In this paper, we present an efficient stochastic gradient descent algorithm that is able to learn probabilistic non-linear latent spaces composed of multiple activities. Furthermore, we derive an incremental algorithm for the online setting which can update the latent space without extensive relearning. We demonstrate the effectiveness of our approach on the task of monocular and multi-view tracking and show that our approach outperforms the state-of-the-art.

1 Introduction

Tracking human 3D articulated motions from video sequences is well known to be a challenging machine vision problem. Estimating the human body's 3D location and the orientation of the joints is notoriously difficult because it is a high-dimensional problem riddled with ambiguities coming from noise, monocular imagery and occlusions. To reduce the complexity of the task, it has become very popular to use prior models of human pose and dynamics [20, 25, 27, 28, 8, 13, 22]. Linear models (e.g. PCA) are among the simplest priors [20, 15, 26], though linearity also restricts a model's expressiveness and results in inaccuracies when learning complex motions. Priors generated from non-linear dimensionality reduction techniques such as Isomap [23] and LLE [18] have also been used for tracking [5, 8]. These techniques try to preserve the local structure of the manifold but tend to fail when manifold assumptions are violated, e.g., in the presence of noise, or multiple activities. Moreover, LLE and Isomap provide neither a probability distribution over the space of possible poses nor a mapping from the latent space to the high-dimensional space. While such a distribution and/or mapping can be learned post hoc, learning them separately from the latent space typically results in suboptimal solutions.

Probabilistic latent variable models (e.g. probabilistic PCA) have the advantage of taking uncertainties into account when learning latent representations. Taylor et al. [22] introduced the use of Conditional Restricted Boltzmann Machines (CRBM) and implicit mixtures of CRBM (imCRBM), which are composed of large collections of discrete latent variables. Unfortunately, learning this type of model is a highly complex task. A more commonly used latent variable model is the Gaussian Process Latent Variable Model (GPLVM) [9], which has been applied to animation [27] and tracking [26, 25, 6, 7]. While the GPLVM is very successful at modeling small training sets with single activities, it often struggles to learn latent spaces from larger datasets, especially those with multiple activities. The main reason is that the GPLVM is a non-parametric model; learning requires the optimization of a non-convex function, for which complexity grows with the number of training samples.

* This research was supported by the Swiss National Foundation NCCR project IM2, NSERC Canada and NSF #1017626.
¹ Source code is available at www.vision.ee.ethz.ch/yaoa

Figure 1: Representative poses, data (Euclidean) distance matrices, and latent spaces learned with PCA, the GPLVM and the stochastic GPLVM, for walking, jumping, exercise stretching and basketball signal sequences. The GPLVM was initialized using probabilistic PCA, while the stochastic GPLVM was initialized randomly.

As such, having a good initialization is key for success [9], though good initializations are not always available [6], especially with complex data. Additionally, GPLVM learning scales cubically with the number of training examples, and application to large datasets is computationally intractable, making it necessary to use sparsification techniques to approximate learning [17, 10]. As a consequence, the GPLVM has been mainly applied to single activities, e.g., walking or running.

More recent works have focused on handling multiple activities, most often with mixture models [14, 12, 13] or switching models [16, 8, 2]. However, coordinating the different components of the mixture models requires special care to ensure that they are aligned in the latent space [19], thereby complicating the learning process. In addition, both mixture and switching models require a discrete notion of activity, which is not always available, e.g., dancing motions are not a discrete set. Others have tried to couple discriminative action classifiers with action-specific models [1, 5], though the accuracy of such systems does not scale well with the number of actions.

A good prior model for tracking should be accurate, expressive enough to capture a wide range of human poses, and easy and tractable for both learning and inference. Unfortunately, none of the aforementioned approaches exhibits all of these properties. In this paper, we are interested in learning a probabilistic model that fulfills all of these criteria. Towards this end, we propose a stochastic gradient descent algorithm for the GPLVM which can learn latent spaces from random initializations. We draw inspiration for our work from two main sources. The first, [24], approximates Gaussian process regression for large training sets by doing online predictions based on local neighborhoods. The second, [11], maximizes the likelihood function for the GPLVM by considering one dimension of the gradient at a time, in the context of collaborative filtering. Based on these two works, we propose a similar strategy to approximate the gradient computation within each step of the stochastic gradient descent algorithm. Local estimation of the gradients allows our approach to efficiently learn models from large and complex training sets while mitigating the problem of local minima. Furthermore, we propose an online algorithm that can effectively learn latent spaces incrementally without extensive relearning. We demonstrate the effectiveness of our approach on the task of monocular and multi-view tracking and show that our approach outperforms the state-of-the-art on the standard benchmark HumanEva [21].

2 Stochastic learning

We first review the GPLVM, the basis of our work, and then introduce our optimization method for learning with stochastic local updates. Finally, we derive an extension of the algorithm which can be applied to the online setting.
2.1 GPLVM Review

The GPLVM assumes that the observed data has been generated by some unobserved latent random variables. More formally, let Y = [y_1, ..., y_N]^T be the set of observations y_i ∈ R^D, and X = [x_1, ..., x_N]^T be the set of latent variables x_i ∈ R^Q, with Q ≪ D. The GPLVM relates the latent variables and the observations via the probabilistic mapping y^{(d)} = f(x) + ε, with ε i.i.d. Gaussian noise, and y^{(d)} the d-th coordinate of the observations. In particular, the GPLVM places a Gaussian process prior over the mapping f such that marginalization of the mapping can be done in closed form. The resulting conditional distribution becomes

    p(Y \mid X, \theta) = \frac{1}{\sqrt{(2\pi)^{ND} |K|^{D}}} \exp\left(-\tfrac{1}{2}\,\mathrm{tr}(K^{-1} Y Y^T)\right),    (1)

where K is the kernel matrix with elements K_{ij} = k(x_i, x_j) and the kernel k has parameters θ. Here, we follow existing approaches [26, 25] and use a kernel compounded from an RBF, a bias, and Gaussian noise, i.e.,

    k(x, x') = \theta_1 \exp\left(-\tfrac{\theta_2}{2}\|x - x'\|^2\right) + \theta_3 + \frac{\delta_{x,x'}}{\theta_4}.

The GPLVM is usually learned by maximum likelihood estimation of the latent coordinates X and the kernel hyperparameters θ = {θ_1, ..., θ_4}. This is equivalent to minimizing the negative log likelihood

    L = -\ln p(Y \mid X, \theta) = \frac{DN}{2}\ln 2\pi + \frac{D}{2}\ln|K| + \frac{1}{2}\,\mathrm{tr}(K^{-1} Y Y^T).    (2)

Typically a gradient descent algorithm is used for the minimization. The gradient of L with respect to X can be obtained via the chain rule:

    \frac{\partial L}{\partial X} = \frac{\partial L}{\partial K}\,\frac{\partial K}{\partial X}, \qquad \frac{\partial L}{\partial K} = -\frac{1}{2}\left(K^{-1} Y Y^T K^{-1} - D\,K^{-1}\right).    (3)

Similarly, the gradient of L with respect to θ can be found by substituting ∂K/∂θ for ∂K/∂X in Eq. (3) (see [9] for the exact derivation). As N gets large, however, computing the gradients becomes computationally expensive, because inverting K is O(N³), with N the number of training examples. More importantly, as the negative log likelihood L is highly non-convex, especially with respect to X, standard gradient descent approaches tend to get stuck in local minima and rely on good initializations for success. We now demonstrate how a stochastic gradient descent approach can be used to reduce the computational complexity as well as decrease the chances of getting trapped in local minima. In particular, as shown in our experiments (Section 3), we are able to obtain smooth and accurate manifolds (see Fig. 1) from random initialization.

2.2 Stochastic Gradient Descent

In standard gradient descent, all points are taken into account at the same time when computing the gradient; stochastic gradient descent approaches, on the other hand, approximate the gradient at each point individually. Typically, a loop goes over the points in series or by randomly sampling from the training set. Note that after iterating over all the points, the gradient is exact. As the GPLVM is a non-parametric approach, the gradient computation at each point does not decompose, making it necessary to invert K, an O(N³) operation, at every iteration. We propose, however, to approximate the gradient computation within each step of the stochastic gradient descent algorithm: the gradient of L can be estimated locally for some neighborhood of points X_R, centered at a reference point x_r, rather than over all of X. Eq. (3) can then be evaluated only for the points within the neighborhood, i.e.,

    \frac{\partial L}{\partial X_R} \approx -\frac{1}{2}\left(K_R^{-1} Y_R Y_R^T K_R^{-1} - D\,K_R^{-1}\right)\frac{\partial K_R}{\partial X_R},    (4)

where K_R is the kernel matrix for X_R and Y_R are the corresponding neighborhood data points.
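For reference, Eqs. (2)–(3) can be implemented directly. The sketch below is ours; it uses the compound kernel above, with the factor 1/2 of our reconstruction of ∂L/∂K made explicit, and is the full O(N³) version (applying it to a neighborhood X_R, Y_R gives Eq. (4)):

```python
import numpy as np

def gplvm_nll_and_grad(X, Y, theta):
    """Negative log-likelihood of the GPLVM (Eq. 2) and its gradient
    w.r.t. the latent points X (Eq. 3), for the RBF + bias + noise
    kernel. A direct reference implementation, not optimized code."""
    t1, t2, t3, t4 = theta
    N, D = Y.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    K_rbf = t1 * np.exp(-0.5 * t2 * sq)
    K = K_rbf + t3 + np.eye(N) / t4
    Kinv = np.linalg.inv(K)
    S = Y @ Y.T
    _, logdet = np.linalg.slogdet(K)
    nll = 0.5 * (D * N * np.log(2 * np.pi) + D * logdet + np.trace(Kinv @ S))
    dL_dK = 0.5 * (D * Kinv - Kinv @ S @ Kinv)
    # chain rule through the RBF term: dK_ij/dx_i = -t2 * K_rbf_ij (x_i - x_j)
    A = dL_dK * K_rbf                                     # elementwise product
    grad_X = -2.0 * t2 * (A.sum(1)[:, None] * X - A @ X)
    return nll, grad_X
```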
orig Group data: Y = [Yorig , Yincr ] X = [Xorig , Xincr ] for t = T1 + 1 : T2 randomly select xr ? Xincr find R neighbors around xr : XR = X ? R incr Compute ?L and ?Lincr (see Eq. (6)) ?XR ? ?R Update X and ?: incr ?Xt = ?X ? ?Xt?1 + ?X ? ?L ?XR Xt ? Xt?1 + ?Xt ?? t = ?? ? ?? t?1 + ?? ? ?Lincr ? ?R ? t ? ? t?1 + ?? t end Algorithm 1: Stochastic GPLVM Randomly initialize X Set ? with an initial guess for t = 1:T randomly select xr find R neighbors around xr : XR = X ? R ?L Compute ?X and ?L (see Eq. (3)) R ? ?R Update X and ?: ?L ?Xt = ?X ? ?Xt?1 + ?X ? ?X R Xt ? Xt?1 + ?Xt ?L ?? t = ?? ? ?? t?1 + ?? ? ? ?R ? t ? ? t?1 + ?? t end Figure 2: Stochastic gradient descent and incremental learning for the GPLVM; ?(?) is a momentum parameter and ?(?) is the learning rate. Note that R, ?, and ? can also vary with t.  ?KR ?L ?1 ?1 T ? ? K?1 ? , R YR YR KR ? DKR ?XR ?XR (4) where KR is the kernel matrix for XR and YR is the corresponding neighborhood data points. We employ a random strategy for choosing the reference point xr . The neighborhood R can be determined by any type of distance measure, such as Euclidean distance in the latent space and/or data space, or temporal neighbors when working with time series. More critical than the specific type of distance measure, however, is allowing sufficient coverage of the latent space so that each neighborhood is not restricted too locally. To keep the complexity low, it is beneficial to sample randomly from a larger set of neighbors (see supplementary material). The use of stochastic gradient descent has several desirable traits that correct for the aforementioned drawbacks of GPLVMs. First, computational complexity is greatly reduced, making it feasible to learn latent spaces with much larger amounts of data. Secondly, estimating the gradients stochastically and locally improves robustness of the learning process against local minima, making it possible to have a random initialization. An algorithmic summary of stochastic gradient descent learning for GPLVMs is given in Fig. 2. 2.3 Incremental Learning In this section, we derive an incremental learning algorithm based on the stochastic gradient descent approach of the previous section. In this setting, we have an initial model which we would like to update as new data comes in on the fly. More formally, let Yorig be the initial training data, and Xorig and ? orig be a model learned from Yorig using stochastic GPLVM. For every step in the online learning, let Yincr be new data, which can be as little as a single point or an entire set of training points. Let Y = [Yorig , Yincr ] ? R(N +M )?D be the set of training points containing both the already trained data Yorig , and the new incoming data Yincr , and let X=[Xorig , Xincr ] ? R(N +M )?Q be the corresponding latent coordinates, where M is the number of newly added training ? orig be the estimate of the latent coordinates that has already been learned. examples. Let X A possible strategy is to update only the incoming points; however, we would like to exploit the new data for improving the estimate of the entire manifold, therefore we propose to learn the full X. To prevent the already-learned manifold from diverging and also to speed up learning, we add a regularizer to the log-likelihood to encourage original points to not deviate too far from their initial ? orig . Learning estimate. 
To this end, we use the Frobenius norm of the deviation from the estimate X 4 Within-Subject Cross-Subject 150 particles Walking Error (mm) 25 particles 90 45 30 15 75 60 45 Basketball Signal Exercise Stretching Jumping Error (mm) Error (mm) Error (mm) 25 particles 60 120 160 100 145 80 130 60 115 240 280 200 260 160 240 120 220 150 205 125 190 100 175 75 0% 0.05% 0.1% 0% PCA 0.05% 0.1% GPLVM 160 0% 150 particles 0.05% 0.1% 0% 0.05% 0.1% stochastic GPLVM Figure 3: Within- and cross-subject 3D tracking errors for each type of activity sequence with respect to amount of additive noise for different number of particles, where error bars represent the standard deviation from repetitions runs. is then done by minimizing the regularized negative log-likelihood 1 ? orig ||2 . ||X1:N,: ? X (5) F N Here, X1:N,: indicates the first N rows of X, while ? is a weighting on the regularization term. The gradient of L with respect to XR 1 can then be computed as  ?Lincr ?L 2 ? orig ?X1:N,: . ? X1:N,: ? X = +?? (6) ?XR ?XR N ?XR Lincr = L + ? ? We employ a stochastic gradient descent approach for our incremental learning, where the points are sampled randomly from Xincr . Note that while xr is only sampled from Xincr in the subsequent learning step, this does not exclude points in Xorig from being a part of the neighbourhood R, and thus from being updated. We have chosen a nearest neighbor approach by comparing Yincr to Yorig for estimating an initial Xincr , though other possibilities include performing a grid search in the latent space and selecting locations with the highest global log-likelihood (Eq. (2)) or training a regressor from Yorig to Xorig to be applied to Yincr . An algorithmic summary of the incremental method is provided in Fig. 2. 2.4 Tracking Framework During training, a latent variable model M is learned from YM , where YM are relative joint locations with respect to a root node. We designate the learned latent points as XM . During inference, tracking is performed in the latent space using a particle filter. The corresponding pose is computed by projecting back to the data space via the Gaussian process mapping learned in the GPLVM. 1 ?Lincr = ?L ? ?R ? ?R since the regularization term does not depend on ? R . 5 GPLVM stochastic GPLVM incremental stochastic GPLVM GPLVM stochastic GPLVM incremental stochastic GPLVM 250 200 150 100 50 0% (a) manifolds 0.05% 0.1% (b) 3D tracking error Figure 4: (a) Learned manifolds from regular GPLVM, stochastic GPLVM and incremental stochastic GPLVM from an exercise stretching sequence, where blue, red, green indicate jumping jacks, jogging and squats respectively and (b) the associated 3D tracking errors (mm), where error bars indicate standard deviation over repeated runs. We model the state s at time t as st = (xt , gt , rt ) where xt denotes position in the latent space, while gt and rt are the global position and rotation of the root node. Particles are initialized in the latent space by a nearest neighbor search between the observed 2D image pose in the first frame of the sequence and the projected 2D poses of YM . Particles are then propagated from frame to frame using a first-order Markov model xit = xit?1 + x? it , i gti = gt?1 + g? ti , rit = rit?1 + r? it . (7) We approximate the derivative x? i with the difference between temporally sequential points of the nearest neighbors in XM , while g? i and r? i are drawn from individual Gaussians with means and ? t at time t is then standard deviations estimated from the training data. 
2.4 Tracking Framework

During training, a latent variable model M is learned from Y_M, where Y_M are relative joint locations with respect to a root node. We designate the learned latent points as X_M. During inference, tracking is performed in the latent space using a particle filter; the corresponding pose is computed by projecting back to the data space via the Gaussian process mapping learned in the GPLVM.

We model the state s at time t as s_t = (x_t, g_t, r_t), where x_t denotes the position in the latent space, while g_t and r_t are the global position and rotation of the root node. Particles are initialized in the latent space by a nearest neighbor search between the observed 2D image pose in the first frame of the sequence and the projected 2D poses of Y_M. Particles are then propagated from frame to frame using a first-order Markov model

    x^i_t = x^i_{t-1} + \dot{x}^i_t, \qquad g^i_t = g^i_{t-1} + \dot{g}^i_t, \qquad r^i_t = r^i_{t-1} + \dot{r}^i_t.    (7)

We approximate the derivative ẋ^i with the difference between temporally sequential points of the nearest neighbors in X_M, while ġ^i and ṙ^i are drawn from individual Gaussians with means and standard deviations estimated from the training data. The tracked latent position x̂_t at time t is then approximated as the mode over all particles in the latent space, while ŷ_t is estimated via the mean Gaussian process prediction

    \hat{y}_t = \mu_M + Y_M^T K^{-1} k(\hat{x}_t, X_M),    (8)

with μ_M the mean of Y_M and k(x̂_t, X_M) the vector with elements k(x̂_t, x_m) for all x_m in X_M. Note that the computation of K^{−1} needs to be performed only once and can be stored.
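Since K⁻¹ is fixed after training, the pose read-out of Eq. (8) is a single matrix-vector product per query. A minimal sketch (ours), assuming the training outputs are centered by μ_M when the mapping is learned:

```python
import numpy as np

def decode_pose(x_hat, X_M, Y_M_centered, mu_M, Kinv, k_fn):
    """Mean GP prediction of Eq. (8): maps a tracked latent position
    x_hat back to pose space. Kinv = K^{-1} is precomputed once and
    stored, as noted in the text; k_fn(a, b) is the learned kernel;
    Y_M_centered = Y_M - mu_M (centering is our assumption)."""
    k_star = np.array([k_fn(x_hat, x_m) for x_m in X_M])  # k(x_hat, X_M)
    return mu_M + Y_M_centered.T @ Kinv @ k_star
```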
We use importance sampling and weight particle at time  each  t proportionally to a likelihood P defined by the reprojection error: wti ? exp ?? j kpij,t ? qj,t k2 , where pij,t is the projected 2D position of joint j in yti from xit (see Eq. (8)) and qj,t is the observed 2D position of joint j, assuming that the camera projection and correspondences between joints are already known. ? is a parameter determining selectivity of the weight function (we use ? = 5 ? 10?5 ). Fig. 3 depicts 3D tracking error as a function of the amount of Gaussian noise for different number of particles employed in the particle filter for the within- and cross-subject experiments. As expected, tracking error is lower within-subject than cross-subject for all types of latent models. For the simple activities such as walking and jumping, GPLVM generally outperforms PCA, but for the complex activities, it performs only comparably or worse than PCA (with the exception of cross-subject basketball signals). Our stochastic GPLVM, on the other hand, consistently outperforms PCA and matches or outperforms the regular GPLVM in all experimental conditions, with significantly better performance in the complex, multi-activity sequences. Additional experiments are provided in the supplementary material. 3.2 Online Tracking We took two stretching exercise sequences with three different activities from the same subject and apply the online learning algorithm (see Sec. 2.3), setting ? = 2. We consider each activity as a new batch of data, and learn the latent space on the first sequence and then track on the second and vice versa. We find the online algorithm less accurate for tracking than the stochastic GPLVM learned with all data. This is expected since the latent space is biased towards the initial set of activities. We note, however, that the incremental stochastic GPLVM still outperforms the regular GPLVM, as illustrated in Fig. 4(b). Examples of the learned manifolds are shown in Fig. 4(a). 3.3 Multi-view Tracking on HumanEva We also evaluate our learning algorithm on the HumanEva benchmark [21] on the activities walking and boxing. For all experiments, we use a particle filter as described in Sec. 2.4 with 25 particles as well as an additional annealing component [4] of 15 layers. To maintain consistency with previous 2 Note that none of the models have completed training. For timing purposes, we take here a fixed number of iterations for the stochastic method and the FITC approximation and the ?equivalent? for the regular GPLVM, i.e., 2500 iterations /8, where 8 comes from the fact that 8X more points are used in computing K. 7 Train S1 S1,2,3 S2 S1,2,3 S3 S1,2,3 Test S1 S1 S2 S2 S3 S3 [28] 140.3 149.4 156.3 [13] 68.7 ? 24.7 69.6 ? 22.2 - GPLVM 57.6 ? 11.6 64.3 ? 19.2 98.2 ? 15.8 155.9 ? 48.8 71.6 ? 10.0 123.8. ? 16.7 CRBM [22] 48.8 ? 3.7 55.4 ? 0.8 47.4 ? 2.9 99.1 ? 23.0 49.8 ? 2.2 70.9 ? 2.1 imCRBM [22] 58.6 ? 3.9 54.3 ? 0.5 67.0 ? 0.7 69.3 ? 3.3 51.4 ? 0.9 43.4 ? 4.1 Ours 44.0 ? 1.8 41.6 ? 0.8 54.4 ? 1.8 64.0 ? 2.9 45.4 ? 1.1 46.5 ? 1.4 Table 1: Comparison of 3D tracking errors (mm) on the entire walking validation sequence with subject-specific models, where ? indicates standard deviation over runs, except for [13], who reports tracking results for 200 frames of the sequences, with standard deviation over frames. Model [16] as reported in [12] [14] as reported in [12] GPLVM [12] Best CRBM [22] Ours Tracking Error 569.90 ? 209.18 380.02 ? 74.97 121.44 ? 30.7 117.0 ? 5.5 75.4 ? 9.7 74.1 ? 
Figure 5: Example poses from tracked results on HumanEva (S1 boxing and S3 walking, cameras C1 and C3 at several frames).

HumanEva-I walking: As per [22, 28, 13], we track the walking validation sequences of subjects S1, S2, and S3. The latent variable models are learned on the training sequences, being either subject-specific or with all three subjects combined. Subject-specific models have ~1200–2000 training examples each, for which we used a neighborhood of 60 points, while the combined model has ~4000 training examples with a neighborhood of 150 points. 3D tracking errors, averaged over the 15 joints as specified in [21] and over all frames in the full sequence, are depicted in Table 1. Sample frames of the estimated poses are shown in Fig. 5. In four of the six training/test combinations, the stochastic GPLVM model outperforms the state-of-the-art CRBM and imCRBM models from [22], while in the other two cases, our model is comparable. These results are remarkable, given that we use only a simple first-order Markov model for estimating dynamics, and our success can only be attributed to the latent model's accuracy in encoding the body poses from the training data.

    Train    Test   [28]    [13]          GPLVM          CRBM [22]     imCRBM [22]   Ours
    S1       S1     140.3   68.7 ± 24.7   57.6 ± 11.6    48.8 ± 3.7    58.6 ± 3.9    44.0 ± 1.8
    S1,2,3   S1     -       -             64.3 ± 19.2    55.4 ± 0.8    54.3 ± 0.5    41.6 ± 0.8
    S2       S2     149.4   69.6 ± 22.2   98.2 ± 15.8    47.4 ± 2.9    67.0 ± 0.7    54.4 ± 1.8
    S1,2,3   S2     -       -             155.9 ± 48.8   99.1 ± 23.0   69.3 ± 3.3    64.0 ± 2.9
    S3       S3     156.3   -             71.6 ± 10.0    49.8 ± 2.2    51.4 ± 0.9    45.4 ± 1.1
    S1,2,3   S3     -       -             123.8 ± 16.7   70.9 ± 2.1    43.4 ± 4.1    46.5 ± 1.4

Table 1: Comparison of 3D tracking errors (mm) on the entire walking validation sequence with subject-specific models, where ± indicates the standard deviation over runs, except for [13], who report tracking results for 200 frames of the sequences, with standard deviation over frames.

HumanEva-I boxing: We also track the validation sequence of S1 for boxing, to assess the ability of the stochastic GPLVM to learn acyclic motions. 3D tracking errors are shown in Table 2 and are compared with [14, 13, 22]. Our results are slightly better than the state-of-the-art.

    Model                       Tracking Error
    [16] as reported in [12]    569.90 ± 209.18
    [14] as reported in [12]    380.02 ± 74.97
    GPLVM                       121.44 ± 30.7
    [12]                        117.0 ± 5.5
    Best CRBM [22]              75.4 ± 9.7
    Ours                        74.1 ± 3.3

Table 2: Comparison of 3D tracking errors (mm) on the boxing validation sequence for S1, where ± indicates the standard deviation over runs. Our results are comparable to the state-of-the-art [22].

4 Conclusion and Future Work

In this paper, we aimed to learn a probabilistic prior model which is accurate yet expressive, and tractable for both learning and inference. Our proposed stochastic GPLVM fulfills all these criteria: it effectively learns latent spaces of complex multi-activity datasets in a computationally efficient manner. When applied to tracking, our model outperforms the state-of-the-art on the HumanEva benchmark, despite the use of very few particles and only a simple first-order Markov model for handling dynamics. In addition, we have also derived a novel approach for learning latent spaces incrementally. One of the great criticisms of current latent variable models is that they cannot handle new training examples without relearning; given the sometimes cumbersome learning process, this is not always feasible. Our incremental method can be easily applied to an online setting without extensive relearning, which may have impact in applications such as robotics, where domain adaptation might be key for accurate prediction. In the future, we plan to further investigate the incorporation of dynamics into the stochastic model, particularly for multiple activities.

References

[1] A. Baak, M. Mueller, B. Rosenhahn, and H.-P. Seidel. Stabilizing motion tracking using retrieved motion priors. In ICCV, 2009.
[2] J. Chen, M. Kim, Y. Wang, and Q. Ji. Switching Gaussian process dynamic models for simultaneous composite motion tracking and recognition. In CVPR, 2009.
[3] CMU Mocap Database. http://mocap.cs.cmu.edu/.
[4] J. Deutscher and I. Reid. Articulated body motion capture by stochastic search. IJCV, 61(2), 2005.
[5] J. Gall, A. Yao, and L. Van Gool. 2D action recognition serves 3D human pose estimation. In ECCV, 2010.
[6] A. Geiger, R. Urtasun, and T. Darrell. Rank priors for continuous non-linear dimensionality reduction. In CVPR, 2009.
[7] S. Hou, A. Galata, F. Caillette, N. Thacker, and P. Bromiley. Real-time body tracking using a Gaussian process latent variable model. In ICCV, 2007.
[8] T. Jaeggli, E. Koller-Meier, and L. Van Gool. Learning generative models for multi-activity body pose estimation. IJCV, 83(2):121–134, 2009.
[9] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. JMLR, 6:1783–1816, 2005.
[10] N. Lawrence. Learning for larger datasets with the Gaussian process latent variable model. In AISTATS, 2007.
[11] N. Lawrence and R. Urtasun. Non-linear matrix factorization with Gaussian processes. In ICML, 2009.
[12] R. Li, T. Tian, and S. Sclaroff. Simultaneous learning of non-linear manifold and dynamical models for high-dimensional time series. In ICCV, 2007.
[13] R. Li, T.-P. Tian, S. Sclaroff, and M.-H. Yang. 3D human motion tracking with a coordinated mixture of factor analyzers. IJCV, 87:170–190, 2010.
[14] R. S. Lin, C. B. Liu, M. H. Yang, N. Ahuja, and S. Levinson. Learning nonlinear manifolds from time series. In ECCV, 2006.
[15] D. Ormoneit, C. Lemieux, and D. Fleet. Lattice particle filters. In UAI, 2001.
[16] V. Pavlovic, J. Rehg, and J. MacCormick. Learning switching linear models of human motion. In NIPS, pages 981–987, 2000.
[17] J. Quinonero-Candela and C. Rasmussen. A unifying view of sparse approximate Gaussian process regression. JMLR, 2005.
[18] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[19] S. Roweis, L. Saul, and G. Hinton. Global coordination of local linear models. In NIPS, 2002.
[20] H. Sidenbladh, M. Black, and D. Fleet. Stochastic tracking of 3D human figures using 2D image motion. In ECCV, 2000.
[21] L. Sigal, A. Balan, and M. Black. HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. IJCV, 87(1-2):4–27, 2010.
[22] G. Taylor, L. Sigal, D. Fleet, and G. Hinton. Dynamical binary latent variable models for 3D human pose tracking. In CVPR, 2010.
[23] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 2000.
[24] R. Urtasun and T. Darrell. Sparse probabilistic regression for activity-independent human pose inference. In CVPR, 2008.
[25] R. Urtasun, D. Fleet, and P. Fua. 3D people tracking with Gaussian process dynamical models. In CVPR, 2006.
[26] R. Urtasun, D. Fleet, A. Hertzmann, and P. Fua. Priors for people tracking from small training sets. In ICCV, 2005.
[27] J. Wang, D. Fleet, and A. Hertzmann. Gaussian process dynamical models for human motion. PAMI, 30(2):283–298, 2008.
[28] X. Xu and B. Li. Learning motion correlation for tracking articulated human body with a Rao-Blackwellised particle filter. In ICCV, 2007.
Convergent Bounds on the Euclidean Distance

Yoonho Hwang and Hee-Kap Ahn
Department of Computer Science and Engineering
Pohang University of Science and Technology (POSTECH), Pohang, Gyungbuk, Korea (ROK)
{cypher,heekap}@postech.ac.kr

Abstract

Given a set V of n vectors in d-dimensional space, we provide an efficient method for computing quality upper and lower bounds of the Euclidean distances between a pair of vectors in V. For this purpose, we define a distance measure, called the MS-distance, by using the mean and the standard deviation values of vectors in V. Once we compute the mean and the standard deviation values of vectors in V in O(dn) time, the MS-distance provides upper and lower bounds of the Euclidean distance between any pair of vectors in V in constant time. Furthermore, these bounds can be refined further in such a way as to converge monotonically to the exact Euclidean distance within d refinement steps. An analysis of a random sequence of refinement steps shows that the MS-distance provides very tight bounds in only a few refinement steps. The MS-distance can be used in various applications where the Euclidean distance is used to measure the proximity or similarity between objects. We provide experimental results on the nearest and the farthest neighbor searches.

1 Introduction

The Euclidean distance between two vectors x and y in d-dimensional space is a typical distance measure that reflects their proximity in the space. Measuring the Euclidean distance is a fundamental operation in computer science, including the areas of databases, computational geometry, computer vision and computer graphics. In machine learning, the Euclidean distance, denoted by $\mathrm{dist}(x, y)$, or its variations (for example, $e^{-\|x-y\|}$), is widely used to measure data similarity for clustering [1], classification [2] and so on.

A typical problem is as follows. Given two sets X and Y of vectors in d-dimensional space, our goal is to find a pair (x, y), for $x \in X$ and $y \in Y$, such that dist(x, y) is the optimum (minimum or maximum) over all such pairs. For the nearest or farthest neighbor searches, X is the set consisting of a single query point while Y consists of all candidate data points. If the dimension is low, a brute-force computation would be fast enough. However, data sets in areas such as optimization, computer vision, machine learning or statistics often live in spaces of dimensionality on the order of thousands or millions. In d-dimensional space, a single distance computation already takes O(d) time, so the cost of finding the nearest or farthest neighbor becomes O(dnm) time, where n and m are the cardinalities of X and Y, respectively.

Several techniques have been proposed to reduce the cost of computing distances. Probably PCA (principal component analysis) is the most frequently used technique for this purpose [3], in which an orthogonal transformation based on PCA converts the given data so that the dimensionality of the transformed data is reduced; distances between pairs of transformed data can then be computed efficiently. However, this transformation does not preserve the pairwise distances of the data in general, so there is no guarantee on the computation results.

If we restrict ourselves to the nearest neighbor search, methods using space partitioning trees such as the KD-tree [4], the R-tree [5], or their variations have been widely used. However, they become impractical in high dimensions because of their poor performance in constructing data structures for queries.
Recently, the cover tree [6] has been used for high-dimensional nearest neighbor search, but its construction time increases drastically as the dimension increases [7].

Another approach that has attracted some attention is to efficiently compute a good bound on the exact Euclidean distance, so that it can be used to filter out unnecessary computation, for example, the distance computation between two vectors that are far apart in a nearest neighbor search. One such method computes a distance bound using an inner product approximation [8]. This method, however, requires the distribution of the input data to be known in advance, and works only on data following some predetermined distribution. Another method computes a distance bound using bitwise operations [9], but it works well only on uniformly distributed vectors, and requires $O(2^d)$ bitwise operations in d dimensions. A method using an index structure [10] provides an effective filtering method based on the triangle inequality, but it works well only when the data are well clustered.

In this paper, we define a distance measure, called the MS-distance, by using the mean and the standard deviation values of vectors in V. Once we compute the mean and the standard deviation values of vectors in V in O(dn) time, the MS-distance provides tight upper and lower bounds of the Euclidean distance between any pair of vectors in V in constant time. Furthermore, these bounds can be refined further in such a way as to converge monotonically to the exact Euclidean distance within d refinement steps. Each refinement step takes constant time. We provide an analysis of a random sequence of k refinement steps for $0 \leq k \leq d$, which shows a good expectation on the lower and upper bounds. This justifies the claim that the MS-distance provides very tight bounds in a few refinement steps of a typical sequence. We also show that the MS-distance can be used in fast filtering. Note that we do not make any assumption on the data distribution.

The MS-distance can be used in various applications where the Euclidean distance is a measure of proximity or similarity between objects. Among them, we provide experimental results on the nearest and the farthest neighbor searches.

2 An Upper and a Lower Bound of the Euclidean Distance

For a d-dimensional vector $x = [x_1, x_2, \ldots, x_d]$, we denote its mean by $\mu_x = \frac{1}{d}\sum_{i=1}^{d} x_i$ and its variance by $\sigma_x^2 = \frac{1}{d}\sum_{i=1}^{d} (x_i - \mu_x)^2$. For a pair of vectors x and y, we can reformulate the squared Euclidean distance between x and y as follows. Let $a = [a_1, a_2, \ldots, a_d]$ and $b = [b_1, b_2, \ldots, b_d]$ such that $a_i = x_i - \mu_x$ and $b_i = y_i - \mu_y$.

$$\begin{aligned}
\mathrm{dist}(x, y)^2 &= \sum_{i=1}^{d} (x_i - y_i)^2 \\
&= \sum_{i=1}^{d} \big((\mu_x + a_i) - (\mu_y + b_i)\big)^2 \\
&= \sum_{i=1}^{d} \big(\mu_x^2 + 2a_i\mu_x + a_i^2 + \mu_y^2 + 2b_i\mu_y + b_i^2 - 2(\mu_x\mu_y + a_i\mu_y + b_i\mu_x + a_ib_i)\big) \quad (1) \\
&= \sum_{i=1}^{d} \big(\mu_x^2 - 2\mu_x\mu_y + \mu_y^2 + a_i^2 + b_i^2 - 2a_ib_i\big) \quad (2) \\
&= d\big((\mu_x - \mu_y)^2 + (\sigma_x + \sigma_y)^2\big) - 2d\sigma_x\sigma_y - 2\sum_{i=1}^{d} a_ib_i \quad (3) \\
&= d\big((\mu_x - \mu_y)^2 + (\sigma_x - \sigma_y)^2\big) + 2d\sigma_x\sigma_y - 2\sum_{i=1}^{d} a_ib_i. \quad (4)
\end{aligned}$$

By the definitions of $a_i$ and $b_i$, we have $\sum_{i=1}^{d} a_i = \sum_{i=1}^{d} b_i = 0$, and $\frac{1}{d}\sum_{i=1}^{d} a_i^2 = \sigma_x^2$ (likewise $\frac{1}{d}\sum_{i=1}^{d} b_i^2 = \sigma_y^2$). By the first property, equation (1) simplifies to (2), and by the second property, equation (2) becomes (3) and (4). Note that equations (3) and (4) are composed of the mean and variance values (their products and squared values, multiplied by d) of x and y, except for the last summations.
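As a quick numerical check of this decomposition, the following sketch (our own illustration, not the authors' code; NumPy assumed, and all names are ours) verifies equations (3) and (4) on random vectors, using the population mean and standard deviation defined above.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100
x, y = rng.normal(size=d), rng.normal(size=d)

mu_x, mu_y = x.mean(), y.mean()
s_x, s_y = x.std(), y.std()       # np.std divides by d, as in the paper
a, b = x - mu_x, y - mu_y         # centered vectors, so a.sum() == 0
inner = float(a @ b)              # the only O(d) term, sum_i a_i * b_i

exact = float(np.sum((x - y) ** 2))
eq3 = d * ((mu_x - mu_y) ** 2 + (s_x + s_y) ** 2) - 2 * d * s_x * s_y - 2 * inner
eq4 = d * ((mu_x - mu_y) ** 2 + (s_x - s_y) ** 2) + 2 * d * s_x * s_y - 2 * inner
assert np.isclose(exact, eq3) and np.isclose(exact, eq4)
```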
Thus, once we preprocess V so that both $\mu_x$ and $\sigma_x$ for all $x \in V$ are computed in O(dn) time and stored in a table of size O(n), this sum can be computed in constant time for any pair of vectors, regardless of the dimension.

The last summation, $\sum_{i=1}^{d} a_ib_i$, is the inner product $\langle a, b\rangle$, so by applying the Cauchy-Schwarz inequality we get

$$|\langle a, b\rangle| = \Big|\sum_{i=1}^{d} a_ib_i\Big| \leq \sqrt{\Big(\sum_{i=1}^{d} a_i^2\Big)\Big(\sum_{i=1}^{d} b_i^2\Big)} = d\sigma_x\sigma_y. \quad (5)$$

This gives us the following upper and lower bounds on the squared Euclidean distance from equations (3) and (4).

Lemma 1. For two d-dimensional vectors x and y, the following hold:
$$\mathrm{dist}(x, y)^2 \geq d\big((\mu_x - \mu_y)^2 + (\sigma_x - \sigma_y)^2\big), \quad (6)$$
$$\mathrm{dist}(x, y)^2 \leq d\big((\mu_x - \mu_y)^2 + (\sigma_x + \sigma_y)^2\big). \quad (7)$$

3 The MS-distance

The lower and upper bounds in inequalities (6) and (7) can be computed in constant time once we compute the mean and standard deviation values of each vector in V in the preprocessing. However, in some applications these bounds may not be tight enough. In this section, we introduce the MS-distance, which not only provides lower and upper bounds of the Euclidean distance in constant time, but can also be refined further in such a way as to converge to the exact Euclidean distance within d steps. To do this, we reformulate the last term of equations (3) and (4), that is, the inner product $\langle a, b\rangle$. If either of the norms $\|a\| = \sqrt{\sum_{i=1}^{d} a_i^2}$ or $\|b\| = \sqrt{\sum_{i=1}^{d} b_i^2}$ is zero, then $\sum_{i=1}^{d} a_ib_i = 0$, so the upper and lower bounds become the same. This implies that we can compute the exact Euclidean distance in constant time. So from now on, we assume that both $\|a\|$ and $\|b\|$ are non-zero.

We reformulate the inner product $\langle a, b\rangle$ as

$$\begin{aligned}
\sum_{i=1}^{d} a_ib_i &= d\sigma_x\sigma_y - d\sigma_x\sigma_y + \sum_{i=1}^{d} a_ib_i \\
&= d\sigma_x\sigma_y - \frac{\sigma_x\sigma_y}{2}\Big(2d - \sum_{i=1}^{d} \frac{2a_ib_i}{\sigma_x\sigma_y}\Big) \\
&= d\sigma_x\sigma_y - \frac{\sigma_x\sigma_y}{2}\Big(\sum_{i=1}^{d}\Big(\frac{a_i}{\sigma_x}\Big)^2 + \sum_{i=1}^{d}\Big(\frac{b_i}{\sigma_y}\Big)^2 - \sum_{i=1}^{d} \frac{2a_ib_i}{\sigma_x\sigma_y}\Big) \quad (8) \\
&= d\sigma_x\sigma_y - \frac{\sigma_x\sigma_y}{2}\sum_{i=1}^{d}\Big(\frac{b_i}{\sigma_y} - \frac{a_i}{\sigma_x}\Big)^2 \quad (9) \\
&= -d\sigma_x\sigma_y + \frac{\sigma_x\sigma_y}{2}\sum_{i=1}^{d}\Big(\frac{b_i}{\sigma_y} + \frac{a_i}{\sigma_x}\Big)^2. \quad (10)
\end{aligned}$$

Equation (8) holds because $\sum_{i=1}^{d} a_i^2 = d\sigma_x^2$ and $\sum_{i=1}^{d} b_i^2 = d\sigma_y^2$. We can also get equation (10) by switching the roles of the terms $-d\sigma_x\sigma_y$ and $d\sigma_x\sigma_y$ in the above equations.

Definition. We now define the MS-distance between x and y in its lower bound form, denoted by $\mathrm{MSL}(x, y, k)$, by replacing the last term of equation (3) with equation (9), and in its upper bound form, denoted by $\mathrm{MSU}(x, y, k)$, by replacing the last term of equation (4) with equation (10). The MS-distance makes use of nondecreasing intermediate values for its lower bound and nonincreasing intermediate values for its upper bound. We let $a_0 = b_0 = 0$.

$$\mathrm{MSL}(x, y, k) = d\big((\mu_x - \mu_y)^2 + (\sigma_x - \sigma_y)^2\big) + \sigma_x\sigma_y\sum_{i=0}^{k}\Big(\frac{b_i}{\sigma_y} - \frac{a_i}{\sigma_x}\Big)^2 \quad (11)$$

$$\mathrm{MSU}(x, y, k) = d\big((\mu_x - \mu_y)^2 + (\sigma_x + \sigma_y)^2\big) - \sigma_x\sigma_y\sum_{i=0}^{k}\Big(\frac{b_i}{\sigma_y} + \frac{a_i}{\sigma_x}\Big)^2 \quad (12)$$

Properties. Note that equation (11) is nondecreasing and equation (12) is nonincreasing as i increases from 0 to d, because d, $\sigma_x$, and $\sigma_y$ are all nonnegative, and $(\frac{b_i}{\sigma_y} - \frac{a_i}{\sigma_x})^2$ and $(\frac{b_i}{\sigma_y} + \frac{a_i}{\sigma_x})^2$ are also nonnegative for all i. This is very useful because, in equation (11), the first term, $\mathrm{MSL}(x, y, 0)$, is already a lower bound of $\mathrm{dist}(x, y)^2$ by inequality (6), and the lower bound can be refined further, nondecreasingly, over the summation in the second term. If we stop the summation at $i = k$, for $k < d$, the intermediate result is also a refined lower bound of $\mathrm{dist}(x, y)^2$. Similarly, in equation (12), the first term, $\mathrm{MSU}(x, y, 0)$, is already an upper bound of $\mathrm{dist}(x, y)^2$ by inequality (7), and the upper bound can be refined further, nonincreasingly, over the summation in the second term.
This means we can stop the summation as soon as we find a bound good enough for the application under consideration. If we need the exact Euclidean distance, we can get it by continuing to the full summation. We summarize these properties in the following.

Lemma 2 (Monotone Convergence). Let $\mathrm{MSL}(x, y, k)$ and $\mathrm{MSU}(x, y, k)$ be the lower and upper bounds of the MS-distance as defined above, respectively. Then the following properties hold:
- $\mathrm{MSL}(x, y, 0) \leq \mathrm{MSL}(x, y, 1) \leq \cdots \leq \mathrm{MSL}(x, y, d-1) \leq \mathrm{MSL}(x, y, d) = \mathrm{dist}(x, y)^2$.
- $\mathrm{MSU}(x, y, 0) \geq \mathrm{MSU}(x, y, 1) \geq \cdots \geq \mathrm{MSU}(x, y, d-1) \geq \mathrm{MSU}(x, y, d) = \mathrm{dist}(x, y)^2$.
- $\mathrm{MSL}(x, y, k) = \mathrm{MSL}(x, y, k+1)$ if and only if $b_{k+1}/\sigma_y = a_{k+1}/\sigma_x$.
- $\mathrm{MSU}(x, y, k) = \mathrm{MSU}(x, y, k+1)$ if and only if $b_{k+1}/\sigma_y = -a_{k+1}/\sigma_x$.

Lemma 3. For $0 \leq k < d$, we can update $\mathrm{MSL}(x, y, k)$ to $\mathrm{MSL}(x, y, k+1)$, and $\mathrm{MSU}(x, y, k)$ to $\mathrm{MSU}(x, y, k+1)$, in constant time.

Fast Filtering. We must emphasize that $\mathrm{MSL}(x, y, 0)$ and $\mathrm{MSU}(x, y, 0)$ can be used for fast filtering. Let $\tau$ denote a threshold for filtering defined in some proximity search problem under consideration. If $\tau < \mathrm{MSL}(x, y, 0)$ in the case of nearest search, or $\tau > \mathrm{MSU}(x, y, 0)$ in the case of farthest search, we do not need to consider the pair (x, y) as a candidate, so we save the time of computing its exact Euclidean distance.

Precisely speaking, we map each d-dimensional vector $x = [x_1, x_2, \ldots, x_d]$ to a pair of points, $\hat{x}^+$ and $\hat{x}^-$, in the 2-dimensional plane such that $\hat{x}^+ = [\mu_x, \sigma_x]$ and $\hat{x}^- = [\mu_x, -\sigma_x]$. Then

$$\mathrm{dist}(\hat{x}^+, \hat{y}^+)^2 = \mathrm{MSL}(x, y, 0)/d \quad (13)$$
$$\mathrm{dist}(\hat{x}^+, \hat{y}^-)^2 = \mathrm{MSU}(x, y, 0)/d. \quad (14)$$

To see why this is useful in fast filtering, consider the case of finding the nearest vector. For the d-dimensional vectors in V of size n, we have n pairs of points in the plane, as in Figure 1. Since $\sigma_x$ is nonnegative, exactly n points lie on or below the $\mu$-axis. Let q be a query vector, and let $\hat{q}^+$ denote the point it maps to in the plane as defined above. Among the n points lying on or below the $\mu$-axis, let $\hat{x}_i^-$ be the point that is nearest to $\hat{q}^+$. Note that the closest point to the query can be computed efficiently in 2-dimensional space; for example, after constructing a space partitioning structure such as a kd-tree or an R-tree, each query can be answered in poly-logarithmic search time. Then we can ignore all d-dimensional vectors x whose mapped point $\hat{x}^+$ lies outside the circle centered at $\hat{q}^+$ of radius $\mathrm{dist}(\hat{q}^+, \hat{x}_i^-)$ in the plane, because they are strictly farther than $x_i$ from q.

[Figure 1: Fast filtering using MSL(x, y, 0) and MSU(x, y, 0). All d-dimensional vectors x whose mapped point $\hat{x}^+$ lies outside the circle are strictly farther than $x_i$ from q.]
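The incremental refinement of equations (11) and (12) is easy to implement. The sketch below is our own illustration (not the authors' code), under the assumption that both standard deviations are non-zero; it refines both bounds one coordinate at a time and checks the monotone convergence of Lemma 2.

```python
import numpy as np

def ms_refinement(x, y):
    """Yield (MSL(x, y, k), MSU(x, y, k)) for k = 0, ..., d (Eqs. (11)-(12)).

    Assumes sigma_x, sigma_y > 0; otherwise the k = 0 bounds are already exact.
    """
    d = len(x)
    mu_x, mu_y = x.mean(), y.mean()
    s_x, s_y = x.std(), y.std()
    a, b = x - mu_x, y - mu_y
    lower = d * ((mu_x - mu_y) ** 2 + (s_x - s_y) ** 2)
    upper = d * ((mu_x - mu_y) ** 2 + (s_x + s_y) ** 2)
    yield lower, upper                      # k = 0, since a_0 = b_0 = 0
    for i in range(d):                      # each refinement step is O(1)
        lower += s_x * s_y * (b[i] / s_y - a[i] / s_x) ** 2
        upper -= s_x * s_y * (b[i] / s_y + a[i] / s_x) ** 2
        yield lower, upper

rng = np.random.default_rng(1)
x, y = rng.normal(size=50), rng.normal(size=50)
bounds = list(ms_refinement(x, y))
exact = float(np.sum((x - y) ** 2))
# Lemma 2: lower bounds never decrease, upper bounds never increase, and
# both sequences terminate at the exact squared Euclidean distance.
for (l0, u0), (l1, u1) in zip(bounds, bounds[1:]):
    assert l1 >= l0 - 1e-9 and u1 <= u0 + 1e-9
assert abs(bounds[-1][0] - exact) < 1e-6
assert abs(bounds[-1][1] - exact) < 1e-6
```

On random data, the gap between the two bounds also shrinks roughly linearly in k, anticipating the analysis of the next section.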
4 Estimating the Expected Difference Between the Two Bounds

We now turn to estimating the expected difference between $\mathrm{MSL}(x, y, k)$ and $\mathrm{MSU}(x, y, k)$. Observe that $\mathrm{MSL}(x, y, k)$ is almost the same as $\mathrm{MSL}(x, y, k-1)$ if $b_k/\sigma_y \approx a_k/\sigma_x$. Hence, in the worst case, $\mathrm{MSL}(x, y, 0) = \mathrm{MSL}(x, y, d-1) < \mathrm{MSL}(x, y, d) = \mathrm{dist}(x, y)^2$ when $b_k/\sigma_y = a_k/\sigma_x$ for all $k = 0, 1, \ldots, d-1$, except $k = d$. Therefore, if we need a lower bound strictly better than $\mathrm{MSL}(x, y, 0)$, we have to go through all d refinement steps, which takes O(d) time. It is not difficult to see that this also applies to $\mathrm{MSU}(x, y, k)$. However, this is unlikely to happen. Consider a random order for the last term of $\mathrm{MSL}(x, y, k)$ and for the last term of $\mathrm{MSU}(x, y, k)$. We show below that their expected values increase and decrease linearly, respectively, as k increases from 0 to d. Formally, let $(a_{\pi(i)}, b_{\pi(i)})$ denote the ith pair in the random order. We measure the expected quality of the bounds by the difference between them, that is, $\mathrm{MSU}(x, y, k) - \mathrm{MSL}(x, y, k)$:

$$\begin{aligned}
\mathrm{MSU}(x, y, k) - \mathrm{MSL}(x, y, k) &= 4d\sigma_x\sigma_y - 2\sigma_x\sigma_y\sum_{i=0}^{k}\Big(\Big(\frac{a_{\pi(i)}}{\sigma_x}\Big)^2 + \Big(\frac{b_{\pi(i)}}{\sigma_y}\Big)^2\Big) \quad (15) \\
&= 4d\sigma_x\sigma_y - 2\sigma_x\sigma_y\,\frac{k}{d}\sum_{i=1}^{d}\Big(\Big(\frac{a_i}{\sigma_x}\Big)^2 + \Big(\frac{b_i}{\sigma_y}\Big)^2\Big) \quad (16) \\
&= 4d\sigma_x\sigma_y - 4k\sigma_x\sigma_y \quad (17) \\
&= 4\sigma_x\sigma_y(d - k) \quad (18)
\end{aligned}$$

where (16) holds in expectation over the random order. Let us explain how we get equation (16) from (15). Let N denote the set of all pairs, and let $N_k$ denote the set of the first k pairs in the random order. Since each pair in N is treated equally, $N_k$ is a random subset of N of size k. Therefore, in expectation, $\sum_{i=1}^{k} (a_{\pi(i)}/\sigma_x)^2$ equals the total sum of $(a_i/\sigma_x)^2$ over i from 1 to d, divided by d/k. A similar argument applies to $\sum_{i=1}^{k} (b_{\pi(i)}/\sigma_y)^2$. Equations (17) and (18) follow because $\sum_{i=1}^{d} a_i^2 = d\sigma_x^2$ and $\sum_{i=1}^{d} b_i^2 = d\sigma_y^2$ by the definitions of $a_i$ and $b_i$; that is, $\sum_{i=1}^{d} (a_i/\sigma_x)^2 = \sum_{i=1}^{d} (b_i/\sigma_y)^2 = d$.

Lemma 4. The expected value of $\mathrm{MSU}(x, y, k) - \mathrm{MSL}(x, y, k)$ is $4\sigma_x\sigma_y(d - k)$.

Because $\mathrm{dist}(x, y)^2$ always lies between the two bounds, the following also holds.

Corollary 1. The expected values of $\mathrm{MSU}(x, y, k) - \mathrm{dist}(x, y)^2$ and $\mathrm{dist}(x, y)^2 - \mathrm{MSL}(x, y, k)$ are both at most $4\sigma_x\sigma_y(d - k)$.

This gives a good theoretical expectation on the lower and upper bounds, and justifies the claim that the MS-distance provides very tight bounds within a few refinement steps of a typical sequence.

5 Applications: Proximity Searches

The MS-distance can be applied to problems where the Euclidean distance is a measure of proximity or similarity of objects. As a case study, we implemented the nearest neighbor search (NNS) and the farthest neighbor search (FNS) using the MS-distance. Given a set X of d-dimensional vectors $x_i$, for $i = 1, \ldots, n$, and a d-dimensional query vector q, we use the following simple randomized algorithm for NNS. Initially, we set $\tau$ to the threshold given by the application under consideration, or computed from the fast filtering in 2 dimensions in Section 3.

1. Consider the vectors in X one at a time, in random order. At the ith stage, do the following:

       if MSL(q, x_i, 0) < tau:
           for j = 1, 2, ..., d:
               if MSL(q, x_i, j) > tau: break
           if j = d: tau = MSL(q, x_i, d); NN = i

2. Return NN as the nearest neighbor of q, with squared Euclidean distance $\tau$.

Note that the first line of the pseudocode filters out the vectors whose distance to q is larger than $\tau$, as in the fast filtering of Section 3. In the inner loop, we compute $\mathrm{MSL}(q, x_i, j)$ from $\mathrm{MSL}(q, x_i, j-1)$ in constant time. In the last two lines of the pseudocode, we update $\tau$ to the exact squared Euclidean distance between q and $x_i$ and store the index of the current nearest neighbor (NN). The algorithm for the farthest neighbor search is similar, except that it uses $\mathrm{MSU}(q, x_i, j)$ and maintains the maximum distance.
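The pseudocode above translates directly into a runnable routine. The following is our own illustrative implementation (function and variable names are ours, not the paper's); it applies the k = 0 filter, refines the lower bound one step at a time with early termination, and obtains the exact distance only when a candidate survives all d steps. In practice the means and standard deviations would come from the O(dn) preprocessing table rather than being recomputed per candidate.

```python
import numpy as np

def nearest_neighbor(q, X, tau=np.inf):
    """Filtered NN search with the MS-distance (illustrative sketch).

    tau is the initial squared-distance threshold, e.g. obtained from
    the 2-D fast-filtering step; np.inf means no prior filtering.
    """
    d = len(q)
    mu_q, s_q = q.mean(), q.std()
    a = q - mu_q
    nn = -1
    for i, x in enumerate(X):
        mu_x, s_x = x.mean(), x.std()
        msl = d * ((mu_q - mu_x) ** 2 + (s_q - s_x) ** 2)  # MSL(q, x, 0)
        if msl >= tau:
            continue                 # filtered out in constant time
        if s_q == 0.0 or s_x == 0.0:
            tau, nn = msl, i         # bounds coincide: msl is exact
            continue
        b = x - mu_x
        for j in range(d):           # O(1) refinement steps (Lemma 3)
            msl += s_q * s_x * (b[j] / s_x - a[j] / s_q) ** 2
            if msl > tau:
                break                # lower bound already exceeds threshold
        else:
            tau, nn = msl, i         # fully refined: msl == dist(q, x)^2
    return nn, tau

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 64))
q = rng.normal(size=64)
i, d2 = nearest_neighbor(q, X)
assert i == int(np.argmin(((X - q) ** 2).sum(axis=1)))
```

Because every surviving candidate is refined to its exact distance, the search is exact; the bounds only prune work, never answers.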
For empirical comparison, we implemented a linear search algorithm that simply computes the distance from q to every $x_i$ and chooses the one with the minimum distance. We also used the implementation of the cover tree [6]. A cover tree is a data structure that supports fast nearest neighbor queries given a fixed intrinsic dimensionality [7]. We tested these implementations on data sets from the UCI machine learning archive [11]. We selected data sets D of various dimensions (from 10 to 100,000), randomly selected 30 query points $Q \subset D$, and queried them on $D \setminus Q$. We label a data set of dimension d as "Dd". The data sets D500, D5000, D10000, D20000, and D100000 were used in the NIPS 2003 challenge on feature selection [12]. The test machine has one CPU (Intel Q6600 at 2.4 GHz), 3 GB of memory, and runs the 32-bit Ubuntu 10 operating system.

Figure 2 shows the percentage of data filtered out. For the data sets of moderate dimensions, the MS-distance filtered out over 95% of the data without loss of accuracy. For high-dimensional data, the MS-distance failed to filter out much data. This is probably because the distances from queries to their nearest vectors tend to converge to the distances to their farthest vectors, as described in [13]. This makes it hard to decrease (or, in FNS, increase) the threshold $\tau$ enough for the MS-distance to filter out much data. However, on such high dimensions, both the linear search and the cover tree algorithm also show poor performance.

Figure 3 shows the preprocessing time of the MS-distance and the cover tree for NNS. The time axis is in log-scaled seconds. It shows that the preprocessing time of the MS-distance is up to 1000 times faster than that of the cover tree. This is because the MS-distance requires only O(dn) time to compute the mean and the standard deviation values.

[Figure 2: Data filtered out, in percentage, for NNS and FNS across the data sets.]
[Figure 3: Preprocessing time for nearest neighbor search (MS-distance vs. cover tree), in log-scaled seconds.]
[Figure 4: Relative running time for the nearest neighbor search queries, normalized by linear search time.]
[Figure 5: Relative running time for the farthest neighbor search queries, normalized by linear search time.]

Figure 4 shows the time spent on NNS queries, normalized by the linear search time. It is clear that the filtering algorithm based on the MS-distance beats the linear search algorithm, even on the high-dimensional data in the results. The cover tree, which is designed exclusively for NNS, shows slightly better query performance than ours. However, the MS-distance is more general and flexible: it supports adding a new vector to the data set (our data structure) in O(d) time, the cost of computing the mean and the standard deviation values of the vector. Deleting a vector from the data set can be done in constant time. Furthermore, the data structure for NNS can also be used for FNS. Figure 5 shows the time spent on FNS queries. This is outstanding compared to the linear search algorithm; we hardly know of any previous work achieving better performance.

6 Conclusion

We introduced a fast distance bounding technique, called the MS-distance, based on the mean and the standard deviation values.
The MS-distance between two vectors provides upper and lower bounds on the Euclidean distance between them in constant time, and these bounds converge monotonically to the exact Euclidean distance over the refinement iterations. The MS-distance can be used in application problems where the Euclidean distance is a measure of proximity or similarity of objects. The experimental results show that our method is efficient enough even to replace the best known algorithms for proximity searches.

Table 1: Data sets
Label    Name                  # of vectors
D10      Page Blocks           5473
D11      Wine Quality          6497
D16      Letter Recognition    20000
D19      Image Segmentation    2310
D22      Parkinsons Tel        5875
D27      Steel Plates Faults   1941
D37      Statlog Satellite     6435
D50      MiniBooNE             130064
D55      Covertype             581012
D57      Spambase              4601
D61      IPUMS Census          233584
D64      Optical Recognition   5620
D86      Insurance Company     5822
D90      YearPredictionMSD     515345
D167     Musk2                 6597
D255     Semeion               1593
D500     Madelon               4400
D617     ISOLET                7795
D5000    Gisette               13500
D10000   Arcene                900
D20000   Dexter                2600
D100000  Dorothea              1950

Acknowledgments

This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (MEST) (NRF-2010-0009857).

References
[1] J. B. MacQueen. Some methods for classification and analysis of multivariate observations. In L. M. Le Cam and J. Neyman, editors, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281-297. University of California Press, 1967.
[2] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, NY, USA, 1995.
[3] K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2:559-572, 1901.
[4] J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18:509-517, September 1975.
[5] A. Guttman. R-trees: A dynamic index structure for spatial searching. In Beatrice Yormark, editor, Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD '84, pages 47-57. ACM, 1984.
[6] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 97-104, New York, NY, USA, 2006. ACM.
[7] D. R. Karger and M. Ruhl. Finding nearest neighbors in growth-restricted metrics. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, STOC '02, pages 741-750, New York, NY, USA, 2002. ACM.
[8] Ö. Eğecioğlu and H. Ferhatosmanoğlu. Dimensionality reduction and similarity computation by inner product approximations. In Proceedings of the Ninth International Conference on Information and Knowledge Management, CIKM '00, pages 219-226, New York, NY, USA, 2000. ACM.
[9] R. Weber, H. J. Schek, and S. Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In Proceedings of the 24th International Conference on Very Large Data Bases, VLDB '98, pages 194-205, San Francisco, CA, USA, 1998. Morgan Kaufmann Publishers Inc.
[10] H. V. Jagadish, B. C. Ooi, K. L. Tan, C. Yu, and R. Zhang. iDistance: An adaptive B+-tree based indexing method for nearest neighbor search. ACM Transactions on Database Systems, 30:364-397, June 2005.
[11] UCI machine learning archive. http://archive.ics.uci.edu/ml/.
[12] NIPS 2003 challenge on feature selection. http://clopinet.com/isabelle/projects/nips2003/.
[13] K. S. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. When is "nearest neighbor" meaningful? In Proceedings of the 7th International Conference on Database Theory, ICDT '99, pages 217-235, London, UK, 1999. Springer.
An ideal observer model for identifying the reference frame of objects

Joseph L. Austerweil
Department of Psychology, University of California, Berkeley
Berkeley, CA 94720
Joseph.Austerweil@gmail.com

Abram L. Friesen
Department of Computer Science and Engineering, University of Washington
Seattle, WA 98195
afriesen@cs.washington.edu

Thomas L. Griffiths
Department of Psychology, University of California, Berkeley
Berkeley, CA 94720
Tom_Griffiths@berkeley.edu

Abstract

The object people perceive in an image can depend on its orientation relative to the scene it is in (its reference frame). For example, the images of the symbols × and + differ by a 45 degree rotation. Although real scenes have multiple images and reference frames, psychologists have focused on scenes with only one reference frame. We propose an ideal observer model based on nonparametric Bayesian statistics for inferring the number of reference frames in a scene and their parameters. When an ambiguous image could be assigned to two conflicting reference frames, the model predicts two factors should influence the reference frame inferred for the image: the image should be more likely to share the reference frame of the closer object (proximity) and it should be more likely to share the reference frame containing the most objects (alignment). We confirm people use both cues using a novel methodology that allows for easy testing of human reference frame inference.

1 Introduction

When are the objects in two images the same?¹ Although people recognize and categorize objects successfully and effortlessly, object recognition in machine learning is an incredibly difficult problem, and people's success is a puzzle to cognitive scientists. To solve this problem, object recognition techniques typically generate a set of features using a predefined procedure (e.g., SIFT descriptors [1] or textons [2]) or learn features from the images (e.g., deep belief networks [3]). The general goal of these methods is to extract features from images that are useful for identifying the objects that generated the images after whatever transformations occurred while producing them (e.g., viewpoint changes). This is a sensible strategy given that people typically perceive the same object even when it is transformed in its image (e.g., translations). However, not all transformations should be ignored: the perceived identity of some objects depends on the orientation of its features with respect to the scene it is in (e.g., × vs. + differ only in orientation), but for other objects it does not. Developing proper object recognition and fully understanding how people do it depends on explaining how people determine the orientation of objects with respect to the scene they are in.

¹ In this paper, we use the following terminology for scene, image, and object. The entire visual input of an observer is a scene. A scene contains a set of images. An image is a part of the visual input that is generated by a single object; it is ambiguous when two or more objects could generate the same image. An object is the item in the world that generates an image in the visual input.

The importance of orientation for object recognition leads us to the following question: if two objects project to the same image under different viewing conditions (e.g., + and × after 45 degree rotations), how do people infer which object is in the image?
In psychology, there are two main theories for how people solve this problem: the invariant feature hypothesis [4], which is essentially the strategy taken by current object recognition techniques (use features that preserve object identity over the possible transformations that generate images of the object), and the reference frame hypothesis, which posits that objects are embedded in coordinate axes [5]. The coordinate axes set the orientation and scale of the objects, and thus + and × can be identified as different objects: though they may produce the same image, they will have different coordinate axes. In some situations the orientation of an image's reference frame is simply the orientation of the retina; however, this is not the case when we rotate our heads (as our retinal image rotates) or look at a rotated object (e.g., a person lying on a bench or a document rotated on a desk). Thus, the reference frame of an image is ambiguous without additional information. However, if there is another object in the scene whose orientation is unambiguous (like a 5), then the orientation of the ambiguous image can be inferred.²

We demonstrate that people use the orientation of other images in the scene to determine the orientation of an ambiguous image by asking participants to solve arithmetic problems, where the operator image is ambiguous and the two numbers flanking the operator are either oriented upright or rotated 45 degrees. The solution people adopt is indicative of the reference frame they inferred for the operator (multiplication implies an upright reference frame and addition implies a diagonal reference frame). This is a novel experimental method that allows us to explore reference frame inference in a wide range of contexts.

In real life, we typically view scenes with multiple reference frames. For example, some books on a bookshelf might be upright, other books could be tilted diagonally (for support), while other books might lie flat. Yet there has been little work investigating how people infer the number of reference frames, their orientations, and which images belong to each reference frame. To solve this problem, we note that each image in a scene belongs to a single reference frame, and thus reference frames form a partition of the images in a scene (where each block in the partition corresponds to a reference frame). Using a standard nonparametric Bayesian model for partitions, we formulate an ideal observer model to infer multiple reference frames and their parameters. The model predicts that people should be sensitive to two cues when inferring the reference frames of a scene: the proximity of the ambiguous image to two unambiguous flanking images in conflicting orientations, and the difference in the number of objects aligned with each of the competing reference frames. We confirm people are sensitive to both cues using the novel method described above.

The summary of the article is as follows. First, Section 2 summarizes relevant psychological research on how orientation affects the objects perceived in ambiguous images. Next, Section 3 develops a novel method for online testing of the reference frame people infer for an image and establishes its efficacy. Section 4 presents an ideal observer model for reference frame inference in scenes with multiple reference frames. The model predicts that the ambiguous image's proximity to other reference frames should affect the inferred reference frame, and Section 5 confirms that people act in accordance with this prediction in a behavioral experiment. The model also predicts that the number of aligned objects in a reference frame should affect the reference frame inferred for an ambiguous image. Section 6 confirms this prediction in a behavioral experiment. Section 7 concludes the paper and highlights some directions for future research.

2 Orientation in psychological theories of object representation

Though the perceived object of some images does not depend on its orientation (like a 5), there are many examples where the perceived object does depend on its orientation [7, 8], including + vs. × or a square vs. a diamond, and other effects of orientation on object recognition [9, 10]. This has led psychologists to believe that people represent objects within a reference frame (a set of coordinate axes).³

² We view the ambiguity of a reference frame as essentially the same as the strength of the intrinsic axes [6].
³ Though coordinate axes have other properties (e.g., scale), we focus on orientation in this article.
The model predicts that the ambiguous image?s proximity to other reference frames should affect the inferred reference frame and Section 5 confirms that people act in accordance with this prediction in a behavioral experiment. The model also predicts that the number of aligned objects in a reference frame should affect the reference frame inferred for an ambiguous image. Section 6 confirms this prediction in a behavioral experiment. Section 7 concludes the paper and highlights some directions for future research. 2 Orientation in psychological theories of object representation Though the perceived object of some images does not depend on its orientation (like a 5), there are many examples where the perceived object does depend on its orientation [7, 8], including + vs. ? or a square vs. a diamond, and other effects of orientation on object recognition [9, 10]. This has led psychologists to believe that people represent objects within a reference frame (a set of coordinate axes).3 Figure 1 (a) shows that reference frames predict the image + is interpreted as a + when 2 3 We view the ambiguity of a reference frame as essentially the same as the strength of the intrinsic axes [6]. Though coordinate axes have other properties (e.g., scale), we focus on orientation in this article. 2 (d) 5 5 (b) (c ) (a) + + + + ++ + + 5 5 + + Figure 1: Reference frames. (a) The ambiguity of the + image can be resolved using reference frames: a + with horizontal orientation (solid axes) or a ? rotated 45 degrees (dashed axes). (b) Other images are unambiguous, like a 5. (c) The reference frame of ambiguous objects is influenced by objects with unambiguous reference frames. (d) The group of objects is seen as either all + or all ?, but not some + and some ?. This establishes one reference frame per group. the coordinate axes are aligned with the document?s axes and as ? when the coordinate axes are diagonal to the document?s axes. For objects that are rotationally invariant, there is only one object that generates the observed image and so it is identifiable in any orientation (see Figure 1 (b)). The dependence of object perception on orientation is a well established norm and has been demonstrated with novel and familiar 2-D objects, faces, handwriting [8, 9], and 3-D objects [10, 11]. Central to the reference frame hypothesis is the ability of our perceptual system to infer a reference frame for a given image. As more than one reference frame may be consistent with an observed image, psychologists have explored how people infer the appropriate reference frame for an image. Though reference frame inference is strongly influenced by the top-down axis of the retinal image and by the axis of gravity (given by our proprioceptive and vestibular senses) [8], the scene itself can influence the inferred reference frame. Objects grouped together in the world tend to be affected by the same transformation when they generate images (e.g., the text on a poster as the poster is rotated), and so it is sensible that the inferred reference frame for an ambiguous image is influenced by the orientations of the images surrounding it. Figures 1 (c) and (d) are phenomenological demonstrations of how the alignment of the orientations of other objects in a scene can bias the inferred reference frame for an image whose reference frame is ambiguous (and there is strong corroborating empirical evidence for this principle [12, 13]).4 Figure 1 (c) is biased towards being interpreted as ? 
based on the surrounding context, and the images in Figure 1 (d) are interpreted as either all + or all tilted ×, but it is difficult to interpret some as + and others as tilted × simultaneously [14]. Thus, there is one reference frame shared by all the objects in a group.

Although there is a wealth of research into reference frame inference for scenes containing a single reference frame, to the best of our knowledge there has not been any research into how people determine the reference frame of ambiguously oriented images when there is more than one reference frame in the scene (and both are consistent with the images). Before exploring what cues influence human reference frame inference in scenes with multiple reference frames, we develop a novel method for testing human reference frame inference.

3 Testing reference frame inference using arithmetic

To test how different factors influence the reference frame people infer for an image, we ask people to solve an arithmetic problem without specifying the appropriate operation. If people view × and their response is the multiplication answer, then their reference frame for × is aligned with the horizontal and vertical axes of the page. Alternatively, if people view the same × but their response is the addition answer, then their reference frame for × is aligned with the axes diagonal to the page (and thus, relative to its own reference frame, it is treated as +).⁵ We use this new method instead of previous techniques (e.g., explicitly asking for the image's orientation and recording how frequently each orientation compatible or conflicting with the tested hypothesis is chosen [15]) due to its ability to be used in a wide range of contexts and to demonstrate the robust importance of reference frame inference for a seemingly unrelated cognitive behavior (solving an arithmetic problem). We confirm its validity by reproducing a previously found effect: the influence of the orientation of other images in the scene [12].

⁴ We use slightly different terminology than previous work and refer to this principle as alignment rather than symmetry, to avoid the ambiguity in the word symmetry (which symmetry we are referring to).
⁵ Although we use + and × as the ambiguous images, this method works with any ambiguous images, by teaching the participant to use addition in one orientation of the image and multiplication in the other.

[Figure 2: Effect of the orientations of other objects in the same reference frame. (a) 5s aligned with the axes imply that the operator is ×. (b) 5s aligned with the diagonal imply the operator is + at a diagonal orientation. (c) Frequency of answers to (a): most participants respond with 25, the product of 5 and 5, meaning their reference frame is aligned with the axes of the page. (d) Frequency of answers to (b): most participants respond with 10, meaning their reference frame is aligned with the diagonals of the page.]

When the reference frame for an image is ambiguous, one factor that influences the inferred reference frame is the orientation of other images it is grouped with, especially when those images are identifiable in any orientation. Thus, if we ask people to solve an arithmetic problem where the operator ×
is paired with the numbers 5 aligned with the top-down axes of the page (Figure 2 (a)), they should respond 25, the result of multiplication. Alternatively, if people solve the same problem except that the numbers 5 are aligned diagonally, they should infer the diagonal axes to be the reference frame and respond 10, the result of addition (Figure 2 (b)).

To test this method, we recruited 20 participants online, who answered one arithmetic problem in exchange for a small monetary reward. Participants were counterbalanced over the axis-oriented and diagonally oriented conditions (Figures 2 (a) and (b), respectively), and all participants gave either the addition (10) or multiplication (25) solution. By changing the orientation of the numbers, the solutions to the arithmetic problems given by participants in Figures 2 (a) and (b) are different despite identical numbers and an identical operator image. Figures 2 (c) and (d) show that the responses of the two groups of participants who answered the arithmetic problems in (a) and (b) differed as predicted (χ²(1) = 5.208, p < 0.05, using Yates' chi-square correction). Thus, asking participants to solve arithmetic problems is an effective method for testing reference frame inference, and perceived orientations can influence higher-level cognition.

4 Modeling reference frame inference

Before describing our model of reference frame inference with multiple reference frames, we first present a probabilistic model for scenes of multiple images with only a single reference frame.

4.1 Reference frame inference for scenes with one reference frame

We assume that a vocabulary of possible objects of size V is known ahead of time and that there are R possible rotations. Each scene (e.g., Figure 2 (a) is one scene) consists of a set of images (e.g., 5, ×, and 5 are the images of Figure 2 (a)). For each image i in a scene, the model is given its visual properties $y_i$ and its spatial location $x_i = (x_{i1}, x_{i2})$. The visual properties of the image, $y_i$, are generated by an unknown object $v_i$ rotated by r, the orientation of the scene's reference frame. A $V \times R$ binary image-object alignment matrix $A^{(i)}$ encodes the object-rotation pairs consistent with the observed image $y_i$, such that $A^{(i)}(v, r) = 1$ if the image of object v rotated r degrees is consistent with $y_i$. The model assumes that the spatial locations of the images are independent identically distributed draws from a Gaussian distribution with shared parameters $\mu$, the center point of the reference frame, and $\Sigma$, the spread of objects around the center point. The unobserved objects and the orientation of the reference frame r are drawn from independent discrete distributions with parameters $\phi$ and $\pi$, the priors over objects and reference frame orientations, respectively. The following generative model defines our statistical model:

$$r \mid \pi \sim \mathrm{Discrete}(\pi) \qquad v_i \mid \phi \stackrel{iid}{\sim} \mathrm{Discrete}(\phi)$$
$$x_i \mid \mu, \Sigma \stackrel{iid}{\sim} \mathrm{Gaussian}(\mu, \Sigma) \qquad P(y_i \mid v_i, r) = A^{(i)}(v_i, r)$$
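To illustrate how a single unambiguous image pins down r, here is a small sketch of this model with a toy vocabulary (our own code, not the paper's stimuli; the string encoding of images and the uniform priors are our assumptions). With uniform priors, the posterior over r is proportional to the product, over images, of the number of objects consistent with each image at that rotation.

```python
import numpy as np

# Toy instantiation of the single-reference-frame model (our own sketch).
# Vocabulary: "5", "+", "x"; rotations: 0 and 45 degrees. The `rendered`
# table plays the role of the alignment matrix A: a "+" rotated 45 degrees
# renders as "x" (and vice versa), while a rotated "5" is still
# identifiable as a 5 (we render it as "5r" to mark the visible tilt).
objects = ["5", "+", "x"]
rotations = [0, 45]
rendered = {("5", 0): "5", ("5", 45): "5r",
            ("+", 0): "+", ("+", 45): "x",
            ("x", 0): "x", ("x", 45): "+"}

def alignment(image, v, r):
    """A^(i)(v, r) = 1 iff object v rotated by r is consistent with image."""
    return rendered.get((v, r)) == image

def posterior_over_r(images):
    """P(r | scene), assuming uniform priors pi and phi.

    Marginalizing the unknown object of each image, the score of r is the
    product over images of the number of objects consistent with it at r.
    """
    scores = np.array([np.prod([sum(alignment(img, v, r) for v in objects)
                                for img in images])
                       for r in rotations], dtype=float)
    return scores / scores.sum()

# Upright 5s force r = 0, so the ambiguous "x" image is the object "x"
# (multiplication); tilted 5s force r = 45, so the same image is a "+".
print(posterior_over_r(["5", "x", "5"]))    # -> [1. 0.]
print(posterior_over_r(["5r", "x", "5r"]))  # -> [0. 1.]
print(posterior_over_r(["x"]))              # -> [0.5 0.5] (ambiguous alone)
```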
If the model assumes there are three types of objects (5, +, and ×) and two possible rotations (0 and 45 degrees), it captures the sensitivity of participants in the demonstration (Figure 2). In Figure 2 (a), the 5s are oriented at 0 degrees. A(5, r) is only non-zero when r = 0, because no other object can produce an image consistent with the observed image of the 5. r = 0 implies that the operator is ×, which is consistent with participant responses (Figure 2 (c)). When the 5s are oriented at 45 degrees (Figure 2 (b)), A(5, r) is only non-zero when r = 45, for the same reason as before. r = 45 implies that the operator is +, which is consistent with participant responses (Figure 2 (d)).

4.2 Extending the model for scenes with multiple reference frames

Although the model defined in the previous section succeeds in inferring the reference frame of an ambiguous image using the other images it is grouped with, it cannot handle scenes containing multiple reference frames, such as the scenes in Figure 3. We extend the model by partitioning the images of a scene into reference frames, where each image of the scene belongs to exactly one reference frame and a reference frame is a block of the partition. From this perspective, inferring multiple reference frames for a scene of images is equivalent to partitioning the scene, or clustering the images.

With the insight that grouping images into reference frames is like finding a partition of a scene, we can extend our model to select the reference frames of a scene (with an unknown number of reference frames). First, we generate a partition of the images in the scene from the Chinese restaurant process (CRP) [16] with parameter $\alpha$, an exchangeable distribution over partitions. The CRP is defined through the following sequential construction:

$$P(c_i = k \mid c_1, \ldots, c_{i-1}) = \begin{cases} \dfrac{n_k}{\alpha + i - 1} & k \leq K \\[4pt] \dfrac{\alpha}{\alpha + i - 1} & k = K + 1 \end{cases}$$

where K is the current number of reference frames and $n_k$ is the number of objects assigned to reference frame k. $c_i$ denotes the reference frame that object i is assigned to, and if $c_i = K + 1$, it is assigned a new reference frame containing none of the previous objects and K increases by one (to initialize, the first object starts its own reference frame and K = 1). This gives us an assignment vector c, where $c_i = j$ denotes that reference frame j contains image i. Each block in the partition (reference frame) j is associated with a rotation $r_j$ and is embedded in the spatial layout of the scene with a center position $\mu_j$ and spread $\Sigma_j$ (each of which is generated from a Gaussian-Inverse-Wishart distribution with shared parameters). Thus, we have defined the following generative model for a set of images in a scene:

$$c \mid \alpha \sim \mathrm{CRP}(\alpha) \qquad \mu_j, \Sigma_j \mid \mu_0, \Sigma_0, k_0, \nu_0 \stackrel{iid}{\sim} \mathrm{GIW}(\mu_0, \Sigma_0, k_0, \nu_0)$$
$$r_j \mid \pi \stackrel{iid}{\sim} \mathrm{Discrete}(\pi) \qquad v_i \mid \phi \stackrel{iid}{\sim} \mathrm{Discrete}(\phi)$$
$$P(y_i \mid v_i, r_{c_i}, c_i) = A^{(i)}(v_i, r_{c_i}) \qquad x_i \mid c_i, \mu_{c_i}, \Sigma_{c_i} \sim \mathrm{Gaussian}(\mu_{c_i}, \Sigma_{c_i})$$

where GIW signifies the Gaussian-Inverse-Wishart distribution, and $\alpha$, $\mu_0$, $\Sigma_0$, $k_0$, $\nu_0$, $\phi$, and $\pi$ are the hyperparameters of our model. We use Gibbs sampling for inference [17], which gives us the cluster assignments for each image and the updated parameters $\theta_j = (\mu_j, \Sigma_j, r_j)$ for each cluster j. We begin by assigning each image to its own reference frame and then iterate. For each observed image, we resample $c_i$ from the set of existing clusters and m = 2 newly drawn clusters. After all $c_i$ values have been resampled, we discard any empty clusters and update the parameters of the remaining clusters by drawing them from their posterior distribution given the objects assigned to that reference frame, $p(\theta_j \mid \{x_i, y_i : c_i = j\})$, where $\{x_i, y_i : c_i = j\}$ is the set of images and their locations in reference frame j.
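For intuition, here is a small sketch (our own simplification, not the authors' implementation) of the CRP prior's sequential rule and of the assignment-resampling probabilities used in a Gibbs sweep. We collapse the reference-frame parameters to a fixed-variance spherical Gaussian around each cluster's empirical center; the paper instead draws $(\mu_j, \Sigma_j, r_j)$ from their Gaussian-Inverse-Wishart and discrete posteriors and also scores rotation consistency via $A^{(i)}(v_i, r_{c_i})$.

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Sample a partition of n images from the CRP sequential construction."""
    assignments, counts = [0], [1]
    for i in range(1, n):
        probs = np.array(counts + [alpha]) / (alpha + i)  # P(c_i = k | ...)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)          # open a new reference frame
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

def assignment_probs(x_i, clusters, alpha, mu0, var0):
    """Unnormalized Gibbs probabilities for c_i: CRP prior x likelihood.

    `clusters` is a list of arrays of the locations currently assigned to
    each reference frame (excluding x_i); var0 is the assumed spherical
    variance of a frame; mu0 centers the likelihood of a brand-new frame.
    """
    scores = []
    for xs in clusters:
        mu = xs.mean(axis=0)
        scores.append(len(xs) * np.exp(-np.sum((x_i - mu) ** 2) / (2 * var0)))
    scores.append(alpha * np.exp(-np.sum((x_i - mu0) ** 2) / (2 * var0)))
    return np.array(scores)

rng = np.random.default_rng(4)
print(crp_partition(10, alpha=1.0, rng=rng))
```

With a small $\alpha$ (the experiments below use 0.001), the prior strongly favors few reference frames, so an ambiguous image is pulled toward an existing frame that is nearby (proximity) or populous (alignment).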
4.3 Predictions for human reference frame inference

What factors influence the reference frame assigned to an ambiguous image according to our ideal observer model? Two factors it predicts should influence the image's inferred reference frame are proximity, or how close the image is to unambiguous images (as images in the same reference frame are coupled in spatial location), and alignment, or the difference in the number of images assigned to each reference frame. The general paradigm we use to test the predictions is to have the + or × operator flanked by a number with a different orientation on each side (see examples in Figure 3). It is clear that the two numbers should have their own reference frames, but it is ambiguous which reference frame the operator should be assigned to. We compare how each of these factors influences the reference frames inferred in the scene by people and our model in two behavioral experiments.

Figure 3: Trials from Experiment 1 showing the possible positions of the operator (-2, -1, 1, and 2) for the main factor of the experiment. Other factors randomized over trials are the numbers in the problem (always single digits), which of the two numbers was rotated, the diagonal that the numbers and operator are aligned on (positive diagonal shown in the figure, but numbers and operator aligned on the negative diagonal as well), and the rotation of the operator.

Figure 4: Proximity effects: (a) Human results and (b) Model results, each plotting the percent grouped left against operator position. The closer the operator is to the left number, the more likely it is to take the left number's orientation.

5 Experiment 1: Proximity effects on reference frame inference

When the reference frame for an image is ambiguous and there are two conflicting neighboring reference frames, our model predicts that proximity, or the distance of the ambiguous image to the two conflicting reference frames, should affect the reference frame adopted by the ambiguous image. We explore this question using the method presented above, where participants are asked to solve an arithmetic problem in which the operator is ambiguous between + and × and the two numbers have conflicting reference frames (orientations). This allows us to deduce the reference frame inferred for the operator image from the answer given by participants. We manipulate proximity by changing the location of the operator such that it is closer to one of the two numbers, as shown in Figure 3.

5.1 Methods

A total of 134 participants completed the experiment online through Amazon Mechanical Turk in exchange for $0.20 USD. Four participants did not give a correct solution to the arithmetic problem (neither the addition nor the multiplication solution), leaving 130 participants for analysis. Participants were asked to maximize their window before answering the arithmetic problem. All factors were manipulated between subjects, as preliminary testing demonstrated a strong effect of trial order on the selected reference frame (probably because reference frames rarely change in the world). The primary factor of interest was the position of the operator, scored from -2 (far to the left) to 2 (far to the right), which was counterbalanced over participants (without the 0 position). The problem was viewed through a simulated aperture (to minimize the effect of the monitor's reference frame).
See Figure 3 for example trials with the operator in each position. Several other factors were randomized over participants: the numbers in the problem (randomly chosen single-digit numbers), which number was rotated (left or right), the diagonal that the numbers and operator were aligned on (positive diagonal, as shown in Figure 3, or negative diagonal), and the rotation of the operator (+ or ×).

5.2 Results and Discussion

Figure 4 (a) shows that participants are more likely to infer the orientation of the left number for the operator the closer it is to the left number. The results confirm our hypothesis: the closer the operator is to an image with an unambiguous reference frame, the more likely participants are to infer that reference frame for the operator (χ²(1) = 3.99, p < 0.05 for -2 vs. 2). A probit regression analysis corroborates this result, as the regression coefficient is significantly different from zero (p < 0.05).

The model results were generated using Gibbs sampling (as previously described) and are shown in Figure 4 (b). For each trial, we ran the sampler for 50 burn-in iterations, recorded 750 samples, and then thinned the samples by selecting every 5th sample. This left 150 samples that formed our estimate of the proportion of times the operator grouped with the left reference frame. The parameters were initialized to: α = 0.001, μ_0 = [264.7, 261.94], Σ_0 = 1000·I (scenes are 550×550 pixels with the bottom-left corner as origin), where I is the identity matrix, k_0 = 0.2, and ν_0 = 110. The discrete distributions encoding the priors on objects and orientations, φ and π, were uniform over all V and R possibilities. The model and human results clearly exhibit the same qualitative behavior: as the distance between the operator and the left number decreased, the probability that the operator took the orientation of the left number increased.

6 Experiment 2: Alignment effects on reference frame inference

Our model also predicts that the difference in the number of unambiguous images assigned to the conflicting reference frames should affect the reference frame adopted by the operator image. In this experiment, we test the prediction using the same method as above, but manipulate the number of extra oriented unambiguous objects in each of the competing reference frames (see Figure 5 (a)).

6.1 Methods

A total of 80 people participated online through Amazon Mechanical Turk in exchange for $0.20 USD. Twelve participants gave an incorrect answer, leaving 68 participants for analysis. The instructions and design were identical to the previous experiment, except that there were two extra factors manipulating the context of the left and right numbers (5 objects on the left and 1 on the right, or vice versa) and there were only two operator positions (-2 and 2). Figure 5 (a) illustrates example trials of the context manipulations for the operator in position -2.

6.2 Results and Discussion

Figure 5 (b) shows that participants were more likely to infer the operator's orientation to be that of whichever side had more objects and was nearer to the operator, replicating the effect of Experiment 1 (χ²(1) = 12.8728, p < 0.0005). Model results were generated using the same procedure and parameter values as Experiment 1 (except ν_0 = 10, to account for the increased number of objects), and Figure 5 (c) shows their similarity to the participant results.
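The sampling protocol just described (50 burn-in sweeps, 750 recorded sweeps, keeping every 5th) can be summarized in a short driver routine. This is a hedged sketch around a hypothetical gibbs_step function performing one full sweep of resampling the assignments c and the per-frame parameters; gibbs_step, the state accessor state.c, and the image indices OPERATOR and LEFT_NUMBER are illustrative stand-ins, not the paper's code.

    OPERATOR, LEFT_NUMBER = 2, 0      # hypothetical image indices within the scene

    def estimate_left_grouping(gibbs_step, state, burn_in=50, n_record=750, thin=5):
        """Return the estimated probability that the operator image shares the
        left number's reference frame, averaged over thinned Gibbs samples."""
        for _ in range(burn_in):
            state = gibbs_step(state)
        kept = []
        for s in range(n_record):
            state = gibbs_step(state)
            if s % thin == 0:         # 750 samples thinned by 5 -> 150 retained
                kept.append(state.c[OPERATOR] == state.c[LEFT_NUMBER])
        return sum(kept) / len(kept)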
7 Conclusions and future directions

In this paper, we introduced the first study of how people infer the reference frames of images in scenes with multiple reference frames. We presented an implicit method for testing reference frame inference, an ideal observer model that predicts people should be sensitive to two scene cues, and behavioral evidence supporting its predictions. Because the objects people perceive depend on the orientation of their images in the scene, these results improve our understanding of how the configuration of objects in scenes affects object perception.

Figure 5: Alignment effects. The operator is more likely to take the orientation of the side with more objects. 5L1R denotes five objects in the left reference frame and one object in the right, and 1L5R indicates the opposite arrangement. (a) Example stimuli, (b) Human results, and (c) Model results, each plotting the percent grouped left against operator position.

We plan to extend our model to capture other cues identified by perceptual psychologists. A first step is to include the bias towards using the up-down axis of the input image [8] by using a nonuniform distribution over rotations (estimating π). We can capture the elongation cue (that the orientation of the spread of images in a scene biases the orientation of the reference frame of the images in the scene [5]) by coupling the covariance matrix (Σ) and rotation (r) of a reference frame. Currently, our model assumes the positions of images in a reference frame are Gaussian distributed; however, people have strong expectations about the arrangement of images in a scene [18]. We plan to compare people's biases to a sophisticated scene segmentation model [19]. We are also interested in cues that depend on the structure of the images or the orientation of the agent in the world, like axes of symmetry [5] or gravitational axes [8].

Another direction for future work is to address an assumption of the model: how do people learn the set of objects and whether or not those objects are orientation-invariant? A potential solution is to combine our model with previous work that presented a nonparametric Bayesian model for learning features and the transformations they are allowed to undergo [20]. Hopefully, incorporating our model into this feature-learning method will yield better inferred features and, in turn, will help create better feature generation and object recognition techniques by providing a better understanding of how people perceive objects from ambiguous image data.

Finally, we plan to explore how the presented principles scale to more realistic scenes with objects more complex than + and × and with more orientations. Our paradigm provides a principled starting point for investigating how reference frames are identified in scenes with multiple reference frames. It is easily extended to more complex scenes by associating different orientations (or rotations in depth) of an ambiguous image with different arithmetic operators. Our hope is that this leads to a better understanding of object identification and reference frame identification.

Acknowledgements

We thank Karen Schloss, Stephen Palmer, Anna Rafferty, David Whitney and the Computational Cognitive Science Lab at Berkeley for discussions, and AFOSR grant FA-9550-10-1-0232 for support.

References

[1] D. G. Lowe.
Object recognition from local scale-invariant features. In Proceedings of the International Conference on Computer Vision, volume 2, pages 1150–1157, 1999.
[2] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(5):530–549, 2005.
[3] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313:504–507, 2006.
[4] O. G. Selfridge and U. Neisser. Pattern recognition by machine. In Computers and Thought, pages 235–267. McGraw-Hill, New York, 1963.
[5] S. E. Palmer. Reference frames in the perception of shape and orientation. In Object Perception: Structure and Process, pages 121–163. Lawrence Erlbaum Associates, Hillsdale, NJ, 1989.
[6] M. Wiser. The role of intrinsic axes in shape recognition. In Proceedings of the Third Annual Meeting of the Cognitive Science Society, pages 184–186, San Mateo, CA, 1981. Morgan Kaufman.
[7] E. Mach. The Analysis of Sensations. Open Court, Chicago, 1914/1959.
[8] I. Rock. Orientation and Form. Academic Press, New York, 1973.
[9] P. Jolicoeur. The time to name disoriented natural objects. Memory & Cognition, 13:289–303, 1985.
[10] M. J. Tarr, P. Williams, W. G. Hayward, and I. Gauthier. Three-dimensional object recognition is viewpoint dependent. Nature Neuroscience, 1(4):275–277, 1998.
[11] I. Rock, J. DiVita, and R. Barbeito. The effect on form perception of change of orientation in the third dimension. Journal of Experimental Psychology: Human Perception and Performance, 7:719–732, 1981.
[12] S. E. Palmer. What makes triangles point: Local and global effects in configurations of ambiguous triangles. Cognitive Psychology, 12:285–305, 1980.
[13] S. E. Palmer. The role of symmetry in shape perception. Acta Psychologica, 59:67–90, 1985.
[14] F. Attneave. Triangles as ambiguous figures. American Journal of Psychology, 81:447–453, 1968.
[15] S. E. Palmer and N. M. Bucher. Configural effects in perceived pointing of ambiguous triangles. Journal of Experimental Psychology: Human Perception and Performance, 7(1):88–114, 1981.
[16] J. Pitman. Combinatorial Stochastic Processes. Notes for Saint Flour Summer School, 2002.
[17] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[18] S. E. Palmer. Vision Science. MIT Press, Cambridge, MA, 1999.
[19] E. Sudderth and M. I. Jordan. Shared segmentation of natural scenes using dependent Pitman-Yor processes. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1585–1592. 2009.
[20] J. L. Austerweil and T. L. Griffiths. Learning invariant features using the transformed Indian buffet process. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 82–90. 2010.
From Stochastic Nonlinear Integrate-and-Fire to Generalized Linear Models

Skander Mensi, School of Computer and Communication Sciences and Brain-Mind Institute, Ecole Polytechnique Federale de Lausanne, 1015 Lausanne EPFL, SWITZERLAND, skander.mensi@epfl.ch
Richard Naud, School of Computer and Communication Sciences and Brain-Mind Institute, Ecole Polytechnique Federale de Lausanne, 1015 Lausanne EPFL, SWITZERLAND, richard.naud@epfl.ch
Wulfram Gerstner, School of Computer and Communication Sciences and Brain-Mind Institute, Ecole Polytechnique Federale de Lausanne, 1015 Lausanne EPFL, SWITZERLAND, wulfram.gerstner@epfl.ch

Abstract

Variability in single neuron models is typically implemented either by a stochastic Leaky-Integrate-and-Fire model or by a model of the Generalized Linear Model (GLM) family. We use analytical and numerical methods to relate state-of-the-art models from both schools of thought. First we find the analytical expressions relating the subthreshold voltage from the Adaptive Exponential Integrate-and-Fire model (AdEx) to the Spike Response Model with escape noise (SRM, as an example of a GLM). Then we calculate numerically the link-function that provides the firing probability given a deterministic membrane potential. We find a mathematical expression for this link-function and test the ability of the GLM to predict the firing probability of a neuron receiving complex stimulation. Comparing the prediction performance of various link-functions, we find that a GLM with an exponential link-function provides an excellent approximation to the Adaptive Exponential Integrate-and-Fire with colored-noise input. These results help to understand the relationship between the different approaches to stochastic neuron models.

1 Motivation

When it comes to modeling the intrinsic variability in simple neuron models, we can distinguish two traditional approaches. One approach is inspired by the stochastic Leaky Integrate-and-Fire (LIF) hypothesis of Stein (1967) [1], where a noise term is added to the system of differential equations implementing the leaky integration to a threshold. There are multiple versions of such a stochastic LIF [2]. How the noise affects the firing probability is also a function of the parameters of the neuron model. Therefore, it is important to take into account the refinements of simple neuron models in terms of subthreshold resonance [3, 4], spike-triggered adaptation [5, 6] and non-linear spike initiation [7, 5]. All these improvements are encompassed by the Adaptive Exponential Integrate-and-Fire model (AdEx [8, 9]).

The other approach is to start with some deterministic dynamics for the state of the neuron (for instance the instantaneous distance from the membrane potential to the threshold) and link the probability intensity of emitting a spike with a non-linear function of the state variable. Under some conditions, this type of model is part of a greater class of statistical models called Generalized Linear Models (GLM [10]). As a single neuron model, the Spike Response Model (SRM) with escape noise is a GLM in which the state variable is explicitly the distance between a deterministic voltage and the threshold. The original SRM could account for subthreshold resonance, refractory effects and spike-frequency adaptation [11]. Mathematically similar models were developed independently in the study of the visual system [12], where spike-frequency adaptation has also been modeled [13].
Recently, this approach has received increased attention, since the probabilistic framework can be linked with the Bayesian theory of neural systems [14] and because Bayesian inference can be applied to populations of neurons [15].

In this paper, we investigate the similarities and differences between the state-of-the-art GLM and the stochastic AdEx. The motivation behind this work is to relate the traditional threshold neuron models to Bayesian theory. Our results extend the work of Plesser and Gerstner (2000) [16], since we include the non-linearity for spike initiation and spike-frequency adaptation. We also provide relationships between the parameters of the AdEx and the equivalent GLM. These precise relationships can be used to relate analog implementations of threshold models [17] to the probabilistic models used in the Bayesian approach.

The paper is organized as follows: We first describe the expressions relating the SRM state-variable to the parameters of the AdEx (Sect. 3.1) in the subthreshold regime. Then, we use numerical methods to find the non-linear link-function that models the firing probability (Sect. 3.2). We find a functional form for the SRM link-function that best describes the firing probability of a stochastic AdEx. We then compare the performance of this link-function with the often used exponential or linear-rectifier link-functions (also called half-wave linear rectifier) in terms of predicting the firing probability of an AdEx under complex stimulus (Sect. 3.3). We find that the exponential link-function yields almost perfect prediction. Finally, we explore the relations between the statistics of the noise and the sharpness of the non-linearity for spike initiation with the parameters of the SRM.

2 Presentation of the Models

In this section we present the general formula for the stochastic AdEx model (Sect. 2.1) and the SRM (Sect. 2.2).

2.1 The Stochastic Adaptive Exponential Integrate-and-Fire Model

The voltage dynamics of the stochastic AdEx is given by:

τ_m dV/dt = E_l − V + Δ_T exp((V − θ)/Δ_T) − Rw + RI + Rξ   (1)
τ_w dw/dt = a(V − E_l) − w   (2)

where τ_m is the membrane time constant, E_l the reversal potential, R the membrane resistance, θ the threshold, Δ_T the shape factor, and I(t) the input current, which is chosen to be an Ornstein-Uhlenbeck process with a correlation time-constant of 5 ms. The exponential term Δ_T exp((V − θ)/Δ_T) is a non-linear function responsible for the emission of spikes, and ξ is a diffusive white noise with standard deviation σ (i.e. ξ ∼ N(0, σ)). Note that the diffusive white noise does not imply white-noise fluctuations of the voltage V(t); the probability distribution of V(t) will depend on Δ_T and σ. The second variable, w, describes both the subthreshold and the spike-triggered adaptation, parametrized by the coupling strength a and the time constant τ_w. Each time t̂_j the voltage goes to infinity, we assume that a spike is emitted. Then the voltage is reset to a fixed value V_r and w is increased by a constant value b.

2.2 The Generalized Linear Model

In the SRM, the voltage V(t) is given by the convolution of the injected current I(t) with the membrane filter κ(t), plus the additional kernel η(t) that acts after each spike (here we split the spike-triggered kernel in two, η(t) = η_v(t) + η_w(t), for reasons that will become clear later):

V(t) = E_l + [κ ∗ I](t) + Σ_{t̂_j} [η_v(t − t̂_j) + η_w(t − t̂_j)]   (3)

Then at each time t̂_j a spike is emitted, which results in a change of voltage described by η(t) = η_v(t) + η_w(t).
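For concreteness, here is a minimal Euler-Maruyama integration of Eq. 1-2. It is an illustrative sketch rather than the authors' code: all parameter values are placeholders, the divergence to infinity is approximated by a finite cutoff V_spike, and the Ornstein-Uhlenbeck input current I is assumed to be precomputed.

    import numpy as np

    def simulate_adex(I, dt=1e-4, tau_m=0.02, tau_w=0.1, E_l=-0.07, R=1e8,
                      theta=-0.05, Delta_T=0.002, a=1e-9, b=5e-11,
                      V_r=-0.06, sigma=1e-12, V_spike=0.0, seed=0):
        """Euler-Maruyama integration of the stochastic AdEx (Eq. 1-2).
        I: input current array; returns the voltage trace and spike times.
        Parameter defaults are illustrative assumptions, not fitted values."""
        rng = np.random.default_rng(seed)
        V, w = E_l, 0.0
        Vs, spikes = np.empty(len(I)), []
        noise_scale = R * sigma * np.sqrt(dt) / tau_m   # diffusive white-noise term
        for t, I_t in enumerate(I):
            dV = (E_l - V + Delta_T * np.exp((V - theta) / Delta_T)
                  - R * w + R * I_t) / tau_m
            dw = (a * (V - E_l) - w) / tau_w
            V += dt * dV + noise_scale * rng.normal()
            w += dt * dw
            if V >= V_spike:          # stand-in for the voltage going to infinity
                spikes.append(t * dt)
                V = V_r               # voltage reset ...
                w += b                # ... and spike-triggered adaptation jump
            Vs[t] = V
        return Vs, spikes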
Given the deterministic voltage (Eq. 3), a spike is emitted according to the firing intensity λ(V):

λ(t) = f(V(t))   (4)

where f(·) is an arbitrary function called the link-function. The firing behavior of the SRM thus depends on the choice of the link-function and its parameters. The link-functions most commonly used to model single-neuron activity are the linear-rectifier and the exponential function.

3 Mapping

In order to map the stochastic AdEx to the SRM we follow a two-step procedure. First we derive the filter κ(t) and the kernels η_v(t) and η_w(t) analytically as functions of the AdEx parameters. Second, we derive the link-function of the SRM from the stochastic spike emission of the AdEx.

Figure 1: Mapping of the subthreshold dynamics of an AdEx to an equivalent SRM. A. Membrane filter κ(t) for three different sets of parameters of the AdEx leading to over-damped, critically damped and under-damped cases (upper, middle and lower panel, respectively). B. Spike-triggered η(t) (black), η_v(t) (light gray) and η_w(t) (gray) for the three cases. C. Example of the voltage trace produced when an AdEx is stimulated with a step of colored noise (black), and the corresponding voltage from an SRM stimulated with the same current and where we forced the spikes to match those of the AdEx (red). D. Error in the subthreshold voltage (V_AdEx − V_GLM) as a function of the mean voltage of the AdEx, for the three different cases: over-, critically and under-damped (light gray, gray and black, respectively) with Δ_T = 1 mV. The red line represents the voltage threshold θ. E. Root Mean Square Error (RMSE) ratio for the three cases with Δ_T = 1 mV. The RMSE ratio is the RMSE between the deterministic V_SRM and the stochastic V_AdEx divided by the RMSE between repetitions of the stochastic AdEx voltage. The error bar shows a single standard deviation, as the RMSE ratio is averaged across multiple values of σ.

3.1 Subthreshold voltage dynamics

We start by assuming that the non-linearity for spike initiation does not affect the mean subthreshold voltage of the stochastic AdEx (see Figure 1 D). This assumption is motivated by the small Δ_T observed in in-vitro recordings (from 0.5 to 2 mV [8, 9]), which suggests that the subthreshold dynamics are mainly linear except very close to θ. Also, we expect that the non-linear link-function will capture some of the dynamics due to the non-linearity for spike initiation. Thus it is possible to rewrite the deterministic subthreshold part of the AdEx (Eq. 1-2 without ξ and without Δ_T exp((V − θ)/Δ_T)) using matrices:

ẋ = Ax   (5)

with

x = (V, w)^T   and   A = [ −1/τ_m   −1/(g_l τ_m) ; a/τ_w   −1/τ_w ]   (6)

In this form, the dynamics of the deterministic AdEx voltage is a damped oscillator with a driving force. Depending on the eigenvalues of A, the system can be over-damped, critically damped or under-damped. The filter κ(t) of the GLM is given by the impulse response of the system of coupled differential equations of the AdEx described by Eq. 5 and 6. In other words, one has to derive the response of the system when stimulating with a Dirac-delta function. The type of damping gives three different qualitative shapes of the kernel κ(t), which are summarized in Table 1 and Figure 1 A. Since the three different filters also affect the nature of the stochastic voltage fluctuations, we will keep the distinction between over-damped, critically damped and under-damped scenarios throughout the paper. This means that our approach is valid for at least 3 types of diffusive voltage-noise (i.e. the white noise ξ in Eq.
1 filtered by 3 different membrane filters κ(t)).

To complete the description of the deterministic voltage, we need an expression for the spike-triggered kernels. The voltage reset at each spike brings a spike-triggered jump in voltage of magnitude δ_r = V_r − V(t̂). This perturbation is superposed on the current fluctuations due to I(t) and can be mediated by a Dirac-delta pulse of current. Thus we can write the voltage-reset kernel as:

η_v(t) = (δ_r / κ(0)) [δ ∗ κ](t) = (δ_r / κ(0)) κ(t)   (7)

where δ(t) is the Dirac-delta function. The shape of this kernel depends on κ(t) and can be computed from Table 1 (see Figure 1 B). Finally, the AdEx mediates spike-frequency adaptation by the jump of the second variable w. From Eq. 2 we can see that this produces a current w_spike(t) = b exp(−t/τ_w) that can cumulate over subsequent spikes. The effect of this current on the voltage is then given by the convolution of w_spike(t) with the membrane filter κ(t). Thus, in the SRM framework the spike-frequency adaptation is taken into account by:

η_w(t) = [w_spike ∗ κ](t)   (8)

Again the precise form of η_w(t) depends on κ(t) and can be computed from Table 1 (see Figure 1 B).

At this point, we would like to verify our assumption that the non-linearity for spike emission can be neglected. Fig. 1 C and D show that the error between the voltage from Eq. 3 and the voltage from the stochastic AdEx is generally small. Moreover, we see that the main contribution to the voltage prediction error is due to the mismatch close to the spikes. However, the non-linearity for spike initiation may change the probability distribution of the voltage fluctuations, which in turn influences the probability of spiking. This will influence the choice of the link-function, as we will see in the next section.

3.2 Spike Generation

Using κ(t), η_v(t) and η_w(t), we must relate the spiking probability of the stochastic AdEx to its deterministic voltage. According to [2], the probability of spiking in a time bin dt given the deterministic voltage V(t) is given by:

p(V) = prob{spike in [t, t + dt]} = 1 − exp(−f(V(t)) dt)   (9)

where f(·) gives the firing intensity as a function of the deterministic V(t) (Eq. 3). Thus, to extract the link-function f we have to compute the probability of spiking given V(t) for our SRM. To do so we apply the method proposed by Jolivet et al. (2004) [18], where the probability of spiking is simply given by the distribution of the deterministic voltage estimated at the spike times divided by the distribution of the SRM voltage when there is no spike (see Figure 2 A). One can numerically compute these two quantities for our models using N repetitions of the same stimulus.
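A sketch of this estimation procedure follows. It is an assumption-laden illustration: the deterministic SRM voltage trace and the AdEx spike bins are taken as given, and the per-bin empirical spike probability is used as a simple stand-in for the ratio-of-histograms description above.

    import numpy as np

    def estimate_p_of_V(V_det, spike_bins, v_edges):
        """Empirical spiking probability per voltage bin, following the
        histogram method of Jolivet et al. (2004). V_det: deterministic SRM
        voltage; spike_bins: boolean array marking bins with an AdEx spike."""
        h_spike, _ = np.histogram(V_det[spike_bins], bins=v_edges)
        h_no, _ = np.histogram(V_det[~spike_bins], bins=v_edges)
        with np.errstate(invalid="ignore", divide="ignore"):
            p = h_spike / (h_spike + h_no)   # fraction of visits to the bin that spiked
        return p

    def link_from_p(p, dt):
        """Invert Eq. 9 to recover the firing intensity: f(V) = -log(1 - p(V)) / dt."""
        return -np.log(1.0 - p) / dt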
of the noise and the parameter ?T of the AdEx non-linearity may affect the shape of the link-function. We thus extract p(V ) for different ? and ?T (Fig. 2 B). Then using visual heuristics and previous knowledge about the potential analytical expression of the link-funtion, we try to find a simple analytical function that captures p(V ) for a large range of combinations of ? and ?T . We observed that the log(? log(p)) is close to linear in most studied conditions Fig. 2 B suggesting the following two distributions of p(V ):    V ? VT (10) p(V ) = 1 ? exp ? exp ?V    V ? VT p(V ) = exp ? exp ? (11) ?V Once we have p(V ), we can use Eq. 4 to obtain the equivalent SRM link-function, which leads to: ?1 f (V ) = log (1 ? p(V )) (12) dt Then the two potential link-functions of the SRM can be derived from Eq. 10 and Eq. 11 (respectively):   V ? VT f (V ) = ?0 exp (13) ?V     V ? VT (14) f (V ) = ??0 log 1 ? exp ? exp ? ?V 1 with ?0 = dt , VT the threshold of the SRM and ?V the sharpness of the link-function (i.e. the parameters that governs the degree of the stochasticity). Note that the exact value of ?0 has no importance since it is redundant with VT . Eq. 13 is the standard exponential link-function, but we call Eq. 14 the log-exp-exp link-function. 3.3 Prediction The next point is to evaluate the fit quality of each link-function. To do this, we first estimate the parameters VT and ?V of the GLM link-function that maximize the likelihood of observing a spike 5 Figure 2: SRM link-function. A. Histogram of the SRM voltage at the AdEx firing times (red) and at non-firing times (gray). The ratio of the two distributions gives p(V ) (Eq. 9, dashed lines). Inset, zoom to see the voltage histogram evaluated at the firing time (red). B. log(? log(p)) as a function of the SRM voltage for three different noise levels ? = 0.07, 0.14, 0.18 nA (pale gray, gray, black dots, respectively) and ?T = 1 mV. The line is a linear fit corresponding to the log-exp-exp linkfunction and the dashed line corresponds to a fit with the exponential link-function. C. Same data and labeling scheme as B, but plotting f (V ) according to Eq. 12. The lines are produced with Eq. 14 with parameters fitted as described in B. and the dashed lines are produced with Eq. 13. Inset, same plot but on a semi-log(y) axis. train generated with an AdEx. Second we look at the predictive power of the resulting SRM in terms of Peri-Stimulus Time Histogram (PSTH). In other words we ask how close the spike trains generated with a GLM are from the spike train generated with a stochastic AdEx when both models are stimulated with the same input current. For any GLM with link-function f (V ) ? f (t|I, ?) and parameters ? regulating the shape of ?(t), ?v (t) and ?w (t), the Negative Log-Likelihood (NLL) of observing a spike-train {t?} is given by: ? ? X X NLL = ? ? log(f (t|I, ?)) ? f (t|I, ?)? (15) t t? It has been shown that the negative log-likelihood is convex in the parameters if f is convex and logconcave [19]. It is easy to show that a linear-rectifier link-function, the exponential link-function and the log-exp-exp link-function all satisfy these conditions. This allows efficient estimation of ? using a simple gradient descent. One can thus estimate from a the optimal parameters V?T and ?V single AdEx spike train the optimal parameters of a given link-function, which is more efficient than the method used in Sect. 3.2. The minimal NLL resulting from the gradient descent gives an estimation of the fit quality. 
A better estimate of the fit quality is given by the distance between the PSTHs in response to stimuli not used for parameter fitting . Let ?1 (t) be the PSTH of the AdEx, and ?2 (t) be the PSTH of the fitted SRM, 6 Figure 3: PSTH prediction. A. Injected current. B. Voltage traces produced by an AdEx (black) and the equivalent SRM (red), when stimulated with the current in A. C. Raster plot for 20 realizations of AdEx (black tick marks) and equivalent SRM (red tick marks). D. PSTH of the AdEx (black) and the SRM (red) obtained by averaging 10,000 repetitions. E. Optimal log-likelihood for the three cases of the AdEx, using three different link-functions, a linear-rectifier (light gray), an exponential link-function (gray) and the link-function defined by Eq. 14 (dark gray), these values are obtained by averaging over 40 different combinations ? and ?T (see Fig. 4). Error bars are one standard deviation, the stars denote a significant difference, two-sample t-test with ? = 0.01. F. same as E. but for Md (Eq. 16). then we use Md ? [0, 1] as a measure of match: R 2 2 (?1 (t) ? ?2 (t)) dt R Md = R ?1 (t)2 dt + ?2 (t)2 dt (16) Md = 1 means that it is impossible to differentiate the SRM from the AdEx in terms of their PSTHs, whereas a Md of 0 means that the two PSTHs are completely different. Thus Md is a normalized similarity measure between two PSTHs. In practice, Md is estimated from the smoothed (boxcar average of 1 ms half-width) averaged spike train of 1 000 repetitions for each models. We use both the NLL and Md to quantify the fit quality for each of the three damping cases and each of the three link-functions. Figure 3 shows the match between the stochastic AdEx used as a reference and the derived GLM when both are stimulated with the same input current (Fig. 3 A). The resulting voltage traces are almost identical (Fig. 3 B) and both models predict almost the same spike trains and so the same PSTHs (Fig. 3 C and D). More quantitalively, we see on Fig. 3 E and F, that the linear-rectifier fits significantly worse than both the exponential and log-exp-exp link-functions, both in terms of NLL and of Md . The exponential link-function performs as well as the log-exp-exp link-function, with a spike train similarity measure Md being almost 1 for both. Finally the likelihood-based method described above gives us the opportunity to look at the relationship between the AdEx parameters ? and ?T that governs its spike emission and the parameters VT and ?V of the link-function (Fig. 4). We observe that an increase of the noise level produces a flatter link-function (greater ?V ) while an increase in ?T also produces an increase in ?V and VT (note that Fig. 4 shows ?V and VT for the exponential link-function only, but equivalent results are obtained with the log-exp-exp link-function). 4 Discussion In Sect. 3.3 we have shown that it is possible to predict with almost perfect accuracy the PSTH of a stochastic AdEx model using an appropriate set of parameters in the SRM. Moreover, since 7 Figure 4: Influence of the AdEx parameters on the parameters of the exponential link-function. A. VT as a function of ?T and ?. B. ?V as a function of ?T and ?. the subthreshold voltage of the AdEx also gives a good match with the deterministic voltage of the SRM, we expect that the AdEx and the SRM will not differ in higher moments of the spike train probability distributions beyond the PSTH. We therefore conclude that diffusive noise models of the type of Eq. 1-2 are equivalent to GLM of the type of Eq. 3-4. 
Once combined with similar results on other types of stochastic LIF (e.g. with correlated noise), this could bridge the gap between the literature on GLMs and the literature on diffusive noise models.

Another noteworthy observation pertains to the nature of the link-function. The link-function has been hypothesized to be a linear-rectifier, an exponential, a sigmoidal or a Gaussian [16]. We have observed that for the AdEx the link-function follows Eq. 14, which we called the log-exp-exp link-function. Although the link-function is log-exp-exp for most of the AdEx parameters, the exponential link-function gives an equally good prediction of the PSTH. This can be explained by the fact that the difference between the log-exp-exp and exponential link-functions appears mainly at low voltage (i.e. far from the threshold), where the probability of emitting a spike is very low (Figure 2 C, until -50 mV). Therefore, even if the exponential link-function overestimates the firing probability at these low voltages, it rarely produces extra spikes. At voltages closer to the threshold, where most of the spikes are emitted, the two link-functions behave almost identically and hence produce the same PSTH. The Gaussian link-function can be seen as lying in between the exponential link-function and the log-exp-exp link-function in Fig. 2. This means that the work of Plesser and Gerstner (2000) [16] is in agreement with the results presented here. The importance of the time-derivative of the voltage stressed by Plesser and Gerstner (leading to a two-dimensional link-function f(V, V̇)) was not studied here, to remain consistent with the typical usage of GLMs in neural systems [14].

Finally, we restricted our study to an exponential non-linearity for spike initiation and did not consider other cases such as the Quadratic Integrate-and-Fire (QIF, [5]) or other polynomial functional shapes. We overlooked these cases for two reasons. First, there is much evidence that the non-linearity in neurons (estimated from in-vitro recordings of pyramidal neurons) is well approximated by a single exponential [9]. Second, the exponential non-linearity of the AdEx only affects the subthreshold voltage at high voltage (close to threshold) and thus can be neglected when deriving the filters κ(t) and η(t). Polynomial non-linearities, on the other hand, affect a larger range of the subthreshold voltage, so that it would be difficult to justify the linearization of subthreshold dynamics essential to the method presented here.

References

[1] R. B. Stein, "Some models of neuronal variability," Biophys J, vol. 7, no. 1, pp. 37–68, 1967.
[2] W. Gerstner and W. Kistler, Spiking Neuron Models. Cambridge University Press, New York, 2002.
[3] E. Izhikevich, "Resonate-and-fire neurons," Neural Networks, vol. 14, pp. 883–894, 2001.
[4] M. J. E. Richardson, N. Brunel, and V. Hakim, "From subthreshold to firing-rate resonance," Journal of Neurophysiology, vol. 89, pp. 2538–2554, 2003.
[5] E. Izhikevich, "Simple model of spiking neurons," IEEE Transactions on Neural Networks, vol. 14, pp. 1569–1572, 2003.
[6] S. Mensi, R. Naud, M. Avermann, C. C. H. Petersen, and W. Gerstner, "Parameter extraction and classification of three neuron types reveals two different adaptation mechanisms," under review.
[7] N. Fourcaud-Trocme, D. Hansel, C. V. Vreeswijk, and N. Brunel, "How spike generation mechanisms determine the neuronal response to fluctuating inputs," Journal of Neuroscience, vol. 23, no. 37, pp. 11628–11640, 2003.
[8] R. Brette and W.
Gerstner, "Adaptive exponential integrate-and-fire model as an effective description of neuronal activity," Journal of Neurophysiology, vol. 94, pp. 3637–3642, 2005.
[9] L. Badel, W. Gerstner, and M. Richardson, "Dependence of the spike-triggered average voltage on membrane response properties," Neurocomputing, vol. 69, pp. 1062–1065, 2007.
[10] P. McCullagh and J. A. Nelder, Generalized Linear Models, 2nd ed. Chapman & Hall/CRC, 1998, vol. 37.
[11] W. Gerstner, J. van Hemmen, and J. Cowan, "What matters in neuronal locking?" Neural Computation, vol. 8, pp. 1653–1676, 1996.
[12] D. Hubel and T. Wiesel, "Receptive fields and functional architecture of monkey striate cortex," Journal of Physiology, vol. 195, pp. 215–243, 1968.
[13] J. Pillow, L. Paninski, V. Uzzell, E. Simoncelli, and E. Chichilnisky, "Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model," Journal of Neuroscience, vol. 25, no. 47, pp. 11003–11013, 2005.
[14] K. Doya, S. Ishii, A. Pouget, and R. P. N. Rao, Bayesian Brain: Probabilistic Approaches to Neural Coding. The MIT Press, 2007.
[15] S. Gerwinn, J. H. Macke, M. Seeger, and M. Bethge, "Bayesian inference for spiking neuron models with a sparsity prior," in Advances in Neural Information Processing Systems, 2007.
[16] H. Plesser and W. Gerstner, "Noise in integrate-and-fire neurons: From stochastic input to escape rates," Neural Computation, vol. 12, pp. 367–384, 2000.
[17] J. Schemmel, J. Fieres, and K. Meier, "Wafer-scale integration of analog neural networks," in Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on, June 2008, pp. 431–438.
[18] R. Jolivet, T. Lewis, and W. Gerstner, "Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy," Journal of Neurophysiology, vol. 92, pp. 959–976, 2004.
[19] L. Paninski, "Maximum likelihood estimation of cascade point-process neural encoding models," Network: Computation in Neural Systems, vol. 15, pp. 243–262, 2004.
A Machine Learning Approach to Predict Chemical Reactions

Matthew A. Kayala and Pierre Baldi*
Institute of Genomics and Bioinformatics, School of Information and Computer Sciences, University of California, Irvine, Irvine, CA 92697
{mkayala,pfbaldi}@ics.uci.edu
*To whom correspondence should be addressed

Abstract

Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Previous approaches are not high-throughput, are not generalizable or scalable, or lack sufficient data to be effective. We describe single mechanistic reactions as concerted electron movements from an electron orbital source to an electron orbital sink. We use an existing rule-based expert system to derive a dataset consisting of 2,989 productive mechanistic steps and 6.14 million non-productive mechanistic steps. We then pose identifying productive mechanistic steps as a ranking problem: rank potential orbital interactions such that the top-ranked interactions yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom-level reactivity filters to prune 94.0% of non-productive reactions with less than a 0.1% false negative rate. Then, we train an ensemble of ranking models on pairs of interacting orbitals to learn a relative productivity function over single mechanistic reactions in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanisms at the top 89.1% of the time, rising to 99.9% of the time when top-ranked lists with at most four non-productive reactions are considered. The final system allows multi-step reaction prediction. Furthermore, it is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert system does not handle.

1 Introduction

Determining the major products of chemical reactions given the input reactants and conditions is a fundamental problem in organic chemistry. Reaction prediction is a necessary component of retro-synthetic analysis or virtual library generation for drug design[1, 2] and has the potential to increase our understanding of biochemical catalysis and metabolism[3]. There is a broad range of approaches to reaction prediction, falling around at least three main poles: physical simulations of transition states using various quantum mechanical and other approximations[4, 5, 6], rule-based expert systems[2, 7, 8, 9, 10, 11], and inductive machine learning methods[12]. However, none of these approaches can successfully emulate the remarkable abilities of a human chemist.

1.1 Previous approaches and representations

The very concept of a "reaction" can be ambiguous, as it corresponds to a macroscopic abstraction, hence simplification, of a very complex underlying microscopic reality, ultimately driven by the laws of quantum and statistical mechanics. However, even for relatively small systems, it is impossible to find exact solutions to the Schrödinger equation. Thus in practice, energies are calculated with approximations of varying accuracy, ranging from ab-initio Hartree-Fock approaches or density functional theory to semi-empirical methods or mechanical force fields[6]. This leads to modeling reactions as minimum-energy paths between stable atom configurations on a high-dimensional potential energy surface, where the path through the lowest-energy transition state, i.e., saddle point, is the most favorable[4, 5]. By explicitly modeling energies, these approaches can be highly accurate and generalize to a diverse range of chemistries, but they require careful initialization and are computationally expensive (see [13] for a representative example). This branch of computational chemistry provides invaluable tools for in-depth analysis, but it is currently not suitable for high-throughput reactivity tasks and is far from being able to recapitulate the knowledge and ability of human experts.

In contrast, most rule-based expert systems for high-throughput reactivity tasks use a much more abstract representation, in the form of general transformations over molecular graphs[2, 7, 8, 9, 10]. Reactions are predicted when a match is found in a library of allowable graph transformations. These general transformations model only net molecular changes for processes that in reality involve a sequence of transition states, as shown in Figure 1. These rule-based approaches suffer from at least four drawbacks: (1) they use a representation that is too high-level, in that an overall transformation obfuscates the underlying physical reality; (2) they require the manual curation of large amounts of expert knowledge; (3) they become unmanageable at larger scales, in that adding a new graph pattern often involves having to update a large proportion of existing transformations with exceptions; and (4) they lack generality, in that particular chemistries must be explicitly encoded to be predicted.

[C;X3H0:1]=[C;X3:2].[H:3][Br:4]>>[Br:4][C:1][C:2][H:3]

Figure 1: Overall transformation of an alkene (hydrocarbon with a double bond) with hydrobromic acid (HBr) and the corresponding mechanistic reactions. (a) shows the overall transform as a SMIRKS[14] string pattern (above) and as a graph representation. In a molecular graph, vertices represent atoms, with carbons at unlabeled vertices. The number of edges between two vertices represents bond order. +/− symbols represent formal charge. Standard valences are filled using implicit hydrogens. (b) shows the two mechanistic reactions composing the overall transformation as arrow-pushing diagrams[15, 16]. Dots represent non-bonded (lone pair) electrons, while arrows represent concerted electron movement. In the first step, electrons in the electron-rich carbon-carbon double bond attack the hydrogen and break the electron-poor hydrogen-bromine single bond, producing an anionic bromide (Br−) and a carbocation (C+). In the second step, electrons from the charged, electron-rich bromide attack the electron-poor carbocation, yielding the final alkyl halide.

Somewhere between the low-level QM treatment and abstract graph-based overall transformations, one can consider reactions at the mechanistic level. A mechanistic, or elementary, reaction is a concerted electron movement through a single transition state[15, 16]. These mechanistic reactions can be composed to yield overall transformations. For example, Figure 1 shows the overall transformation of an alkene interacting with hydrobromic acid to yield an alkyl halide, along with the two elementary reactions which compose the transformation. A mechanistic reaction is described as an idealized molecular orbital (MO) interaction between an electron source (donor) MO and an electron sink (acceptor) MO. MOs represent regions of the molecule with high (source) or low (sink) electron
By explicitly modeling energies, these approaches can be highly accurate and generalize to a diverse range of chemistries but require careful initialization and are computationally expensive (see [13] for a representative example). This branch of computational chemistry provides invaluable tools for in-depth analysis but is currently not suitable for high-throughput reactivity tasks and is far from being able to recapitulate the knowledge and ability of human experts. In contrast, most rule-based expert systems for high-throughput reactivity tasks use a much more abstract representation, in the form of general transformations over molecular graphs[2, 7, 8, 9, 10]. Reactions are predicted when a match is found in a library of allowable graph transformations. These general transformations model only net molecular changes for processes that in reality involve a sequence of transition states, as shown in Figure 1. These rule-based approaches suffer from at least four drawbacks: (1) they use a representation that is too high-level, in that an overall transformation obfuscates the underlying physical reality; (2) they require the manual curation of large amounts of expert knowledge; (3) they become unmanageable at larger scales, in that adding a new graph pattern often involves having to update a large proportion of existing transformations with exceptions; and (4) they lack generality, in that particular chemistries must explicitly be encoded to be predicted. [C;X3H0:1]=[C;X3:2].[H:3][Br:4]>>[Br:4][C:1][C:2][H:3] Figure 1: Overall transformation of an alkene (hydrocarbon with double bond) with hydrobromic acid (HBr) and corresponding mechanistic reactions. (a) shows the overall transform as a SMIRKS[14] string pattern and as a graph representation. In a molecular graph, vertices represent atoms, with carbons at unlabeled vertices. The number of edges between two vertices represents bond order. +/? symbols represent formal charge. Standard valences are filled using implicit hydrogens. (b) shows the two mechanistic reactions composing the overall transformation as arrowpushing diagrams[15, 16]. Dots represent non-bonded (lone pair) electrons, while arrows represent concerted electron movement. In the first step, electrons in the electron-rich carbon-carbon double bond attack the hydrogen and break the electron-poor hydrogen-bromine single bond, producing an anionic bromide (Br?) and a carbocation (C+). In the second step, electrons from the charged, electron-rich bromide attack the electron-poor carbocation, yielding the final alkyl halide. Somewhere between low-level QM treatment and abstract graph-based overall transformations, one can consider reactions at the mechanistic level. A mechanistic, or elementary, reaction is a concerted electron movement through a single transition state[15, 16]. These mechanistic reactions can be composed to yield overall transformations. For example, Figure 1 shows the overall transformation of an alkene interacting with hydrobromic acid to yield an alkyl halide, along with the two elementary reactions which compose the transformation. A mechanistic reaction is described as an idealized molecular orbital (MO) interaction between an electron source (donor) MO and an electron sink (acceptor) MO. MOs represent regions of the molecule with high (source) or low (sink) electron 2 density. In general, potential electron sources are composed of lone pairs of electrons and bonds, and potential electron sinks are composed of empty atomic orbitals and bonds. 
Bonds can act as either a source or a sink depending on the context. Because of space constraints, we cannot fully describe subtle chemical details that must be handled, such as chaining for resonance rearrangement. For details, see texts[15, 16] on mechanisms. Note that by considering all possible pairings of source and sink MOs, this representation allows the exhaustive enumeration of all potential mechanistic reactions over an arbitrary set of molecules. Recent work by Chen and Baldi[11] introduces a rule-based expert system (Reaction Explorer) in which each rearrangement pattern encompasses an elementary reaction. Here, the elementary reactions represent "productive" mechanistic steps, i.e. those reactions which lead to the overall major products. Thus, elementary reactions which are not the most kinetically favorable, but which eventually lead to the overall thermodynamic transformation product, may be considered "productive". This approach is a marked change from previous approaches using overall transformations, but as a rule-based system it still suffers from the problems of curation, scale, and generality.

While mechanistic reaction representations are approximations quite far from the Schrödinger equation, we expect them to be closer to the underlying reality and therefore more useful than overall transformations. Furthermore, we expect them also to be easier to predict than overall transformations due to their more elementary nature and mechanistic interpretation. In combination, these arguments suggest that working with mechanistic steps may facilitate the application of statistical machine learning approaches, and take advantage of their capability to generalize. Thus, in this work, reactions are modeled as mechanisms, and for the remainder of the paper, we consider the term "reaction" to denote a single elementary reaction. Furthermore, we consider the problem of reaction prediction to be precisely that of identifying the "productive" reactions over a given set of reactants under particular conditions.

There has been very little work on machine learning approaches to reaction prediction. The sole example is a paper from 1990 on inductively extracting overall transformation patterns from reaction databases[12], a method which was never actually incorporated into a full reaction prediction system. This situation is surprising. Given improvements in both computing power and machine learning methods over the past 20 years, one could imagine a machine learning system that mines reaction information to learn the grammar of chemistry, e.g., in terms of graph grammars[17]. One potential reason behind the lack of progress in this area is the paucity of available data. Chemical publishing is dominated by closed models, making literature information difficult to access. Furthermore, parsing scientific text and extracting relevant chemical information from text and image data is an open problem of research[18, 19]. While commercial reaction databases exist, e.g., Reaxys[20] or SPRESI[21], the reactions in these databases are mostly unbalanced, not atom-mapped, and lack mechanistic detail[22]. This is in addition to suffering from a severe lack of openness; the databases are exorbitantly priced or provided with a restrictive query interface which precludes serious statistical data mining. As a result, and to the best of our knowledge, effective machine learning approaches to reaction prediction still need to be developed.
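Since the mechanistic representation supports exhaustive enumeration of candidate reactions, the combinatorial core of the approach is easy to state in code. The following is a minimal sketch (our illustration, with hypothetical field names; the paper does not publish an implementation):

```python
# Exhaustive enumeration of candidate mechanistic reactions: every pairing
# of a source MO with a sink MO is one candidate elementary step.
from itertools import product
from dataclasses import dataclass

@dataclass(frozen=True)
class MO:              # hypothetical container for one molecular orbital
    kind: str          # 'lone_pair', 'bond', or 'empty_orbital'
    atom_idx: int      # index of the MO's main atom in the molecular graph

def enumerate_reactions(sources, sinks):
    """Every (source, sink) pairing is one candidate elementary reaction."""
    return list(product(sources, sinks))

sources = [MO('lone_pair', 3), MO('bond', 1)]
sinks = [MO('empty_orbital', 7), MO('bond', 5)]
print(len(enumerate_reactions(sources, sinks)))   # 2 x 2 = 4 candidates
```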
1.2 A new approach

The limitations of previous work motivate a new, fresh approach to reaction prediction combining machine learning with mechanistic representations. The key idea is to first enumerate all potential source and sink MOs, and thus all possible reactions by their pairing, and then use classification and ranking techniques to identify productive reactions. There are multiple benefits resulting from such an approach. By using very general rules to enumerate possible reactions, the approach is not restricted to manually curated reaction patterns. By detailing individual reactions at the mechanistic level, the system may be able to statistically learn efficient predictive models based on physicochemical attributes rather than abstract overall transformations. And by ranking possible reactions instead of making binary decisions, the system may provide results amenable to flexible interpretation. However, the new approach also faces three key challenges: (1) the development of appropriate training datasets of productive reactions; (2) the development of a machine learning approach to control the combinatorial complexity resulting from considering all possible pairs of electron donors and acceptors among the reacting molecules; and (3) the development of machine learning solutions to the problem of predictively ranking the possible mechanisms. These challenges are addressed one by one in the following sections.

2 The data challenge

A mechanistically defined dataset of reactions to use with the proposed approach does not currently exist. To derive a dataset, we use a mechanistically defined rule-based expert system (Reaction Explorer) together with its validation suite[11]. The validation suite is a manually composed set of reactants, reagents, and products covering a complete undergraduate organic chemistry curriculum. Entering a set of reactants and a reagent model into Reaction Explorer yields the complete sequence of mechanistic steps leading to the final products, where all reactions in this sequence share the conditions encoded by the corresponding reagent model. Each one of these mechanistic steps is considered to be a distinct productive elementary reaction. For a given set of reactants and conditions, which we call an (r, c) query tuple, the Reaction Explorer system labels a small set of reactions productive, while all other reactions enumerated by pairing source and sink MOs over the reactants are considered non-productive. We then define two {0, 1} labels for each atom (up to symmetries) and conditions (a, c) tuple over all (r, c) queries. An (a, c) tuple has label srcreact = 1 if it is the main atom of a source MO in a productive reaction over any corresponding (r, c) query, and has label srcreact = 0 otherwise. The label sinkreact is defined similarly using sink MOs. Reaction conditions are described with three parameters: temperature, anion solvation potential, and cation solvation potential. Temperature is listed in Kelvin. The solvation potentials are unitless numbers between 0 and 1 representing ease of cation or anion solvation, thus providing a quantitative scale to describe polar protic, polar aprotic, and nonpolar solvents. Note that any mechanistic interaction with the solvent or reagent is explicitly modeled, e.g. as in Figure 1. As an initial validation of the method, we consider general ionic reactions from the Reaction Explorer validation suite involving C, H, N, O, Li, Mg, and the halides.
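The label derivation just described can be summarized in a short sketch. The field names below are hypothetical placeholders for whatever structures actually hold the Reaction Explorer output; the logic is only what the text above specifies: an (a, c) tuple gets srcreact = 1 (resp. sinkreact = 1) exactly when its atom is the main atom of a source (resp. sink) MO in some productive reaction of a matching (r, c) query.

```python
# Deriving the {0, 1} atom-level reactivity labels (illustrative sketch).
from collections import defaultdict

def derive_atom_labels(queries):
    """queries: one entry per (r, c) query, holding its conditions tuple and
    its productive reactions with their source/sink main-atom ids."""
    srcreact = defaultdict(int)    # (atom_id, conditions) -> {0, 1}
    sinkreact = defaultdict(int)
    for q in queries:
        c = q['conditions']        # (temperature, anion_pot, cation_pot)
        for rxn in q['productive']:
            srcreact[(rxn['source_atom'], c)] = 1
            sinkreact[(rxn['sink_atom'], c)] = 1
    # Any (a, c) tuple never touched by a productive reaction keeps label 0.
    return srcreact, sinkreact
```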
Extensions to include stereoselective, pericyclic, and radical reactions are discussed in Section 5. The dataset consists of 6.14 million reactions composed of 84,825 source and 74,725 sink MOs from 2,752 distinct reactants and reaction conditions, i.e., (r, c) queries. Of these 6.14 million reactions, the Reaction Explorer system labels 2,989 of them as productive. There are 22,894 atom symmetry classes, which, when paired with reaction conditions, yield 29,104 (a, c) tuples. Of these 29,104 (a, c) tuples, 1,262 have label srcreact = 1, and 1,786 have label sinkreact = 1. Atom and MO interaction data is available at our chemoinformatics portal (http://cdb.ics.uci.edu) under Supplements.

3 The combinatorial complexity challenge

In the dataset, the average molecule has 44 source MOs and 50 sink MOs. For this average molecule, considering only intermolecular reactions with a second copy of the same molecule gives 44 × 50 = 2,200 potential elementary reactions. Thus, the number of possible reactions is very large, motivating identifying productive reactions given an (r, c) query in two stages. In the first stage, we train filters using classification techniques on the source and sink reactivity labels. The idea is to train highly sensitive classifiers which reduce the breadth of possible reactions without erroneously filtering productive reactions. Then only those source and sink MOs where the main atom passes the respective atom level filter are considered when enumerating reactions to consider in the second ranking stage for predicting reaction productivity. Here, we train two separate classifiers to predict the source and sink atom level reactivity labels, each using the same feature descriptions and machine learning implementations. To assess the performance of the reactive site filter training, we perform full 10-fold cross-validation (CV) over all distinct tuples of molecules and conditions (m, c).

3.1 Feature representation

Each (a, c) tuple is represented as a vector of physicochemical and topological features. There are 14 real-valued physicochemical features such as the reaction conditions, the molecular weight of the molecule, and the charge at and around the atom. Topological features are meant to capture the neighboring context of a in the molecular graph, for example counts over vertex- and edge-labeled paths and trees rooted at a. We compute paths to length 4 and trees to depth 2, producing 743 molecular graph features. In addition to standard molecular graph features, we also include similar topological features over a restricted alphabet pharmacophore point graph, where pharmacophore point graph definitions are adapted from Hähnke et al.[23]. Using paths of length 4 and trees of depth 2 in the pharmacophore point graph yields another 759 features. This results in a total of 1,516 features.

3.2 Training

Before training, all features are normalized to [0, 1] using the minimum and maximum values of the training set. We oversample (a, c) tuples with label 1 to ensure approximately balanced classes. We experimented with a variety of architectures. Here we report the results obtained using artificial neural networks with sigmoidal activation functions, a single hidden layer, and a single output node with a cross-entropy error function. Grid search using internal three-fold CV on a single training set is used to fit the architecture size (converging to 10 hidden nodes) and the L2-regularization (weight decay) parameter shared by all folds of the overall 10-fold CV. Weights are optimized by stochastic gradient descent with per-weight adaptive learning rates[24]. Optimization is stopped after 100 epochs as this is observed to be sufficient for convergence.
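A minimal sketch of the atom-level filter training is given below, using scikit-learn as an illustrative stand-in (the paper uses its own neural network implementation; the random data, oversampling heuristic, and regularization value here are placeholders):

```python
# Sketch of the reactive-site filter: a single-hidden-layer sigmoidal network
# trained on [0, 1]-normalized features with an oversampled positive class.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((1000, 1516))               # 1,516 features per (a, c) tuple
y = (rng.random(1000) < 0.05).astype(int)  # sparse reactivity labels

X = MinMaxScaler().fit_transform(X)        # normalize features to [0, 1]

# Oversample label-1 tuples to roughly balance the classes.
pos = np.flatnonzero(y == 1)
reps = np.repeat(pos, max(1, (len(y) - 2 * len(pos)) // max(1, len(pos))))
Xb, yb = np.vstack([X, X[reps]]), np.concatenate([y, y[reps]])

clf = MLPClassifier(hidden_layer_sizes=(10,),  # 10 sigmoidal hidden nodes
                    activation='logistic',
                    alpha=1e-4,                # L2 (weight decay) placeholder
                    max_iter=100).fit(Xb, yb)
```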
As highly sensitive classifiers are desired, the choice of a decision threshold is important. We perform internal three-fold CV on the training set to find decision thresholds yielding a false negative rate of 0 on each respective internal test set. The decision threshold for the overall CV fold is taken as the average of these internal CV fold thresholds.

3.3 Results

We report the true negative rate (TNR) and the false negative rate (FNR) for both the source and sink classification problems as well as for the actual reaction filtering problem, as shown in Table 1. In a CV regime, we are able to filter 94.0% of the 6.14 million non-productive reactions with less than 0.1% false negatives, effectively reducing the ranking problem imbalance by an order of magnitude with minimal error. Having established excellent filtering results with rigorous CV, we then train classifiers with all available data in order to independently assess the ranking method. The results of these classifiers are shown in the last column of Table 1.

Table 1: Reactive site classification results. Source reactive and sink reactive rows show results on the respective classification problems. The reaction row shows results of using the two atom classifiers for an initial reaction filtering. CV columns indicate results of full 10-fold cross-validation over (m, c) tuples. CV results show the mean and standard deviation over folds. The best TNR column shows results when trained with all available data.

Problem           CV TNR % (SD)   CV FNR % (SD)    Best TNR %
Source Reactive   87.7 (2.0)      0.1 (0.2)        92.1
Sink Reactive     75.6 (5.8)      0.2 (0.4)        85.6
Reaction          94.0 (1.5)      < 0.1 (< 0.1)    97.2

4 The ranking challenge

We pose the task of identifying the productive reactions as a ranking problem. To assess performance, we perform full 10-fold CV over the 2,752 distinct (r, c) queries. With the overall filtered set of reactions, there are, on average, 1.1 productive and 62.5 non-productive reactions per (r, c) query.

4.1 Feature representation

Each reaction is composed of a source and sink MO. The reaction feature vector is the concatenation of the corresponding source and sink atom level feature vectors with some modifications. To keep the size reasonable, only real-valued and pharmacophore (path length 3 and tree depth 2) atom level features are included. 124 features are calculated to describe the net difference between reactants and products, such as counts over bond types, rings, and formal charges. And finally, 450 features describing the forward and inverse reactions are calculated, including atoms and bonds involved and implied transition state geometry. This leads to a total of 1,677 reaction features.

4.2 Training

We use a pairwise approach to ranking similar to [25], using two identical shared-weight artificial neural networks linked to a single comparator output node with fixed ±1 weights. The general architecture is shown in Figure 2. Each shared network receives as an input a potential reaction, i.e. a source-sink pair. Training is performed via back-propagation with weight-sharing.

[Figure 2: diagram of two identical shared-weight networks, one receiving the (Source, Sink) A pair and the other the (Source, Sink) B pair, feeding a single comparator output node.]

Figure 2: Shared weight artificial neural network architecture for pairwise ranking. The goal is to determine a productivity order between the (source, sink) A and (source, sink) B pairs.
This is done with a pair of shared-weight artificial neural networks with sigmoidal hidden nodes and a linear output node. The outputs of these internal networks are tied to a single sigmoidal output node with fixed weights. The final output will approach 1 if the (source, sink) A pair is predicted to be relatively more productive than the (source, sink) B pair, and 0 otherwise.

Training details are similar to the reactive site classification. All features are normalized to [0, 1], and grid search with internal three-fold CV on a single training set is used to fit the architecture size (converging to 20 hidden nodes) and the L2-regularization (weight decay) parameter shared by all folds of the overall 10-fold CV. Weights are optimized using stochastic gradient descent with the same per-weight adaptive learning rate scheme[24]. Optimization is stopped after 25 epochs as this is observed to be sufficient for convergence. An ensemble consisting of five separate pairwise ranking machines (as described in Figure 2) is used for each training set. Each machine in the ensemble is trained with all the productive reactions (from the training set) and a random partition of the non-productive reactions (from the training set). Final ranking on the test set is determined by either simple majority vote or by ranking the average scores from the linear output node of the inner shared-weight network for each machine in the ensemble. The latter yields a minute performance increase and is reported.

4.3 Results

We consider two measures for evaluating rankings, Normalized Discounted Cumulative Gain at list size i (NDCG@i) and Percent Within-n. NDCG@i is a common information retrieval metric[26] that sums the overall usefulness (or gain) of productive reactions in a given list of the top-i results, where individual gain decays exponentially with lower position. The measure is normalized such that the best possible ranking of a size i list has NDCG@i = 1. For example, NDCG@1 is the fraction of (r, c) queries in which the top ranked reaction is a productive reaction. Percent Within-n is simply how many (r, c) queries have at most n non-productive reactions in the smallest ranked list containing all productive reactions. For example, Percent Within-0 measures the percent of (r, c) queries with perfect rank, and Percent Within-4 measures how often all productive reactions are recovered with at most 4 mis-ranked non-productive reactions. Note that NDCG@1 and Percent Within-0 will differ because roughly 10% of (r, c) queries have more than one productive reaction.

The non-productive MO interactions vastly outnumber the productive interactions. In spite of this imbalance, our approach gives excellent ranking results, shown in Table 2. The NDCG results show, for example, that in 89.5% of the queries, the top ranked reaction is productive. The Percent Within-n results show that 89.1% of queries have perfect ranking, while 99.9% of queries recover all productive reactions by considering lists with at most four non-productive reactions.

Table 2: Reaction ranking results. We show Normalized Discounted Cumulative Gain at different list sizes i (NDCG@i) and Percent Within-n. See text for description of the measures. We report mean (standard deviation) results over CV folds.
i                      1              2              3              4              5
Mean NDCG@i (SD)       0.895 (0.016)  0.939 (0.011)  0.952 (0.008)  0.954 (0.007)  0.956 (0.007)

n                      0              1              2              3              4
Percent Within-n (SD)  89.1 (1.7)     96.8 (1.0)     98.9 (0.6)     99.5 (0.4)     99.9 (0.3)

4.4 Chemical applications

The strong performance of the ranking system is exhibited by its ability to make accurate multi-step reaction predictions. An example, shown in the first row of Table 3, is an intramolecular Claisen condensation reaction with conditions (room temperature, polar aprotic solvent) requiring three elementary steps. The ranking method correctly predicts the given reaction as the highest ranked reaction at each step.

Table 3: Chemical reactions of interest. The first row shows an example of full multi-step reaction prediction by the ranking system, a three step intramolecular Claisen condensation (room temp., polar aprotic). At each stage, the reaction shown is the top ranked when all possible reactions are considered by the two stage machine learning system. The second row shows two macrocyclizations which the rule-based system (Reaction Explorer) is unable to predict, but the machine learning approach effectively generalizes and ranks correctly. These reactions lead to the formation of a seven homo-cycle (7 carbons) on the left and seven hetero-cycle (6 carbons, 1 oxygen) on the right. The third row shows an intelligible error of the machine learning approach (see text).

[Table 3 consists of chemical structure drawings; its rows are labeled Multi-Step Reaction Prediction, Generality, and Reasonable Errors.]

A generalizable system should be able to make reasonable predictions about reactants and reaction types with which it has only had implicit, rather than explicit, experience. Reaction Explorer, as a rule-based expert system without explicit rules about larger ring forming reactions, does not make any predictions about seven and eight atom cyclizations. In reality though, larger ring forming reactions are possible. The second row of Table 3 shows the top two ranked reactions over a set of bromo-hept-1-en-2-olate reactants, leading to seven-member ring formation. The ranking model, without ever being trained with seven or eight-member ring forming reactions, returns the enolate attack as the most favorable, but also returns the lone pair nucleophilic substitution as the second most favorable. Similar results are obtained for similar eight-membered ring systems (not shown). Thus the ranking model is able to generalize and make reasonable suggestions, while the rule-based system is limited by hard-coded transformation patterns.

Finally, the vast majority of errors are close errors, as exhibited by the 99.9% Within-4 measure. Furthermore, upon examination of these errors, they are largely intelligible and not unreasonable predictions. For example, the third row of Table 3 shows two reactions involving an oxonium compound and a bromide anion. Our ranking models return these two reactions as the highest, ranking the deprotonation slightly ahead of the substitution. This is considered a Within-1 ranking because the Reaction Explorer system labels only the substitution reaction as productive. However, the immediate precursor reaction in the sequence of Reaction Explorer mechanisms leading to these reactants is the inverse of the deprotonation reaction, i.e., the protonation of the alcohol. Hydrogen transfer reactions like this are reversible, and thus the deprotonation is likely the kinetically favored mechanism, i.e., it is reasonable to rank the deprotonation highly. It is just not productive, in that it does not lead to the final overall product.
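For concreteness, the shared-weight pairwise ranking architecture of Figure 2 can be sketched in a few lines of PyTorch. This is our reconstruction from the description in Section 4.2, not the authors' code; the layer sizes follow the text (1,677 input features, 20 sigmoidal hidden units), while the optimizer settings and random batches are placeholders:

```python
# RankNet-style pairwise ranker with a shared inner network (Figure 2).
import torch
import torch.nn as nn

class PairwiseRanker(nn.Module):
    def __init__(self, n_features=1677, n_hidden=20):
        super().__init__()
        # Inner shared network: sigmoidal hidden layer, linear output score f(x).
        self.f = nn.Sequential(nn.Linear(n_features, n_hidden),
                               nn.Sigmoid(),
                               nn.Linear(n_hidden, 1))

    def forward(self, a, b):
        # Comparator with fixed +1/-1 weights: output approaches 1 when
        # reaction a is predicted to be more productive than reaction b.
        return torch.sigmoid(self.f(a) - self.f(b))

model = PairwiseRanker()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

a = torch.randn(32, 1677)      # productive reaction features (placeholder data)
b = torch.randn(32, 1677)      # non-productive reaction features
opt.zero_grad()
loss = loss_fn(model(a, b), torch.ones(32, 1))   # target: a outranks b
loss.backward()
opt.step()
# At test time, reactions are sorted by the inner score f(x), averaged over
# the ensemble members.
```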
In a prediction system working with multi-step syntheses, reversals of previous steps like this deprotonation are easily discarded.

5 Conclusion

Being able to predict the outcome of chemical reactions is a fundamental scientific problem. The ultimate goal of a reaction prediction system is to recapitulate and eventually surpass the ability of human chemists. In this work, we take a significant step in this direction, showing how to formulate reaction prediction as a machine learning problem and building an accurate implementation for a large and key subset of organic chemistry. There are a number of immediate applications of our system, including validating retro-synthetic suggestions, generating virtual libraries of molecules, and mechanistically annotating existing reaction databases. Reaction prediction is a largely untapped area for machine learning approaches. As such, there is of course room for improvements. The first is increasing the breadth of chemistry captured, e.g. radical, pericyclic, and stereoselective chemistry. Augmenting the MO description with number of electrons, allowing cyclic chained MO interactions, and including face orientations are plausible extensions to attack each of these additional areas of chemical reactivity. A second area of improvement is the curation of larger mechanistically defined datasets. We can approach this manually, by further use of expert systems to construct data with the required level of detail, or by carefully crafted crowdsourcing approaches. Other ongoing areas of research include improving the features, performing systematic feature selection, and experimenting with different statistical ranking techniques. As an untapped research problem for the machine learning community, we hope that the current work and our publicly available data will spark continued and open research in this important area.

Acknowledgments

Work supported by NIH grants LM010235-01A1 and 5T15LM007743 and NSF grant MRI EIA0321390 to PB. We acknowledge OpenEye Scientific Software and ChemAxon for academic software licenses. We wish to thank Profs. James Nowick, David Van Vranken, and Gregory Weiss for useful discussions.

References

[1] E.J. Corey and W.T. Wipke. Computer-assisted design of complex organic syntheses. Science, 166(3902):178–92, 1969.
[2] M.H. Todd. Computer-aided organic synthesis. Chem. Soc. Rev., 34(3):247–266, 2005.
[3] P. Rydberg, D.E. Gloriam, J. Zaretzki, C. Breneman, and L. Olsen. SMARTCyp: A 2D method for prediction of cytochrome P450-mediated drug metabolism. ACS Med. Chem. Lett., 1(3):96–100, 2010.
[4] G. Henkelman, B.P. Uberuaga, and H. Jónsson. A climbing image nudged elastic band method for finding saddle points and minimum energy paths. J. Chem. Phys., 113(22):9901–9904, 2000.
[5] B. Peters, A. Heyden, A.T. Bell, and A. Chakraborty. A growing string method for determining transition states: comparison to the nudged elastic band and string methods. J. Chem. Phys., 120(17):7877–7886, 2004.
[6] C.J. Cramer. Essentials of Computational Chemistry: Theories and Models. Wiley, West Sussex, England, 2 edition, 2004.
[7] W.L. Jorgensen, E.R. Laird, A.J. Gushurst, J.M. Fleischer, S.A. Gothe, H.E. Helson, G.D. Paderes, and S. Sinclair. CAMEO: a program for the logical prediction of the products of organic reactions. Pure Appl. Chem., 62:1921–1932, 1990.
[8] R. Hollering, J. Gasteiger, L. Steinhauer, K.-P. Schulz, and A. Herwig. Simulation of organic reactions: from the degradation of chemicals to combinatorial synthesis. J. Chem. Inf. Model., 40(2):482–494, 2000.
[9] G. Benkö, C. Flamm, and P.F. Stadler. A graph-based toy model of chemistry. J. Chem. Inf. Model., 43(4):1085–1093, 2003.
[10] I.M. Socorro, K. Taylor, and J.M. Goodman. ROBIA: a reaction prediction program. Org. Lett., 7(16):3541–3544, 2005.
[11] J. Chen and P. Baldi. No electron left behind: a rule-based expert system to predict chemical reactions and reaction mechanisms. J. Chem. Inf. Model., 49(9):2034–2043, 2009.
[12] P. Röse and J. Gasteiger. Automated derivation of reaction rules for the EROS 6.0 system for reaction prediction. Anal. Chim. Acta, 235:163–168, 1990.
[13] B. Wang and Z. Cao. Mechanism of acid-catalyzed hydrolysis of formamide from cluster-continuum model calculations: concerted versus stepwise pathway. J. Phys. Chem. A, 114(49):12918–12927, 2010.
[14] C.A. James, D. Weininger, and J. Delany. Daylight theory manual. http://www.daylight.com/dayhtml/doc/theory/index.html, 2008. Last accessed January 2011.
[15] C.K. Ingold. Structure and Mechanism in Organic Chemistry. Cornell University Press, Ithaca, NY, 1953.
[16] R. Grossman. The Art of Writing Reasonable Organic Reaction Mechanisms. Springer-Verlag, New York, NY, 2 edition, 2003.
[17] G. Rozenberg, editor. Handbook of Graph Grammars and Computing by Graph Transformation: Volume I. Foundations. World Scientific Publishing, River Edge, NJ, 1997.
[18] D.L. Banville. Mining chemical structural information from the drug literature. Drug Discovery Today, 11:35–42, 2006.
[19] J. Park, G.R. Rosania, and K. Saitou. Tunable machine vision-based strategy for automated annotation of chemical databases. J. Chem. Inf. Model., 49(8):1993–2001, 2009.
[20] D.D. Ridley. Searching for chemical reaction information. In S.R. Heller, editor, The Beilstein Online Database, volume 436 of ACS Symposium Series, pages 88–112. American Chemical Society, Washington, DC, 1990.
[21] D.L. Roth. SPRESIweb 2.1, a selective chemical synthesis and reaction database. J. Chem. Inf. Model., 45(5):1470–1473, 2005.
[22] J. Gasteiger and T. Engel, editors. Chemoinformatics: A Textbook. Wiley-VCH, Weinheim, Germany, 2003.
[23] V. Hähnke, B. Hofmann, T. Grgat, E. Proschak, D. Steinhilber, and G. Schneider. PhAST: pharmacophore alignment search tool. J. Comput. Chem., 30(5):761–71, 2009.
[24] R. Neuneier and H.-G. Zimmermann. How to train neural networks. In G.B. Orr and K.-R. Müller, editors, Neural Networks: Tricks of the Trade, pages 373–423. Springer-Verlag, Heidelberg, Germany, 1998.
[25] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning (ICML05), pages 89–96. ACM Press, Bonn, Germany, 2005.
[26] K. Järvelin and J. Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst., 20(4):422–446, 2002.
How biased are maximum entropy models?

Jakob H. Macke
Gatsby Computational Neuroscience Unit
University College London, UK
[email protected]

Iain Murray
School of Informatics
University of Edinburgh, UK
[email protected]

Peter E. Latham
Gatsby Computational Neuroscience Unit
University College London, UK
[email protected]

Abstract

Maximum entropy models have become popular statistical models in neuroscience and other areas in biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e. the true entropy of the data can be severely underestimated. Here we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We show that if the data is generated by a distribution that lies in the model class, the bias is equal to the number of parameters divided by twice the number of observations. However, in practice, the true distribution is usually outside the model class, and we show here that this misspecification can lead to much larger bias. We provide a perturbative approximation of the maximally expected bias when the true model is out of model class, and we illustrate our results using numerical simulations of an Ising model; i.e. the second-order maximum entropy distribution on binary data.

1 Introduction

Over the last several decades, information theory [1, 2] has played a major role in our effort to understand the neural code in the brain [3, 4]. Its usefulness, however, is limited by the fact that the quantity of interest, mutual information (typically between stimuli and neuronal responses), is hard to compute from data [5]. Consequently, although this approach has led to a relatively deep understanding of neural coding in single neurons [4], it has told us far less about populations [6, 7]. In essence, the brute-force approaches to measuring mutual information that have worked so well on single spike trains simply do not work on populations. This is because the key ingredient of mutual information is the entropy, and in general, estimation of the entropy from finite data sets suffers from a severe downward bias [8, 9]: on average, the entropy estimated on the data set will be lower than the actual entropy of the underlying model. While a number of improved estimators have been developed (see [5, 10] for an overview), the amount of data one needs is, ultimately, exponential in the number of neurons, so even modest populations (tens of neurons) are out of reach.

To apply information-theoretic techniques to populations, then, our only hope is to develop models in which the number of unconstrained parameters grows (relatively) slowly with the number of neurons [11]. For such models, estimating information requires much less data than brute-force methods. Still, the amount of data is nontrivial, and naive estimators of information can be badly biased. Here we consider one class of models (maximum entropy models subject to linear constraints) and compute the bias in the entropy. We show that if the true distribution lies in the parametric model class, then the bias is equal to the number of parameters divided by twice the number of observations. When the true distribution is outside the model class, however, the bias can be much larger.

We illustrate our results using a very popular model in neuroscience, the Ising model [12], which is the second-order maximum entropy distribution on binary data.
Recently, this model has become a popular means of characterizing the distribution of firing patterns in multi-electrode recordings, and has been used extensively in a wide range of applications, including recordings in the retina [13, 14] and visual cortex [15]. In addition, several recent studies [16, 17, 18] have used numerical simulations of large Ising models to understand the scaling of the entropy of the model with population size. And, finally, Ising models have been used in other fields in biology, for example to model gene-regulation networks [19].

2 Theory

2.1 Maximum entropy models

Our starting point is an underlying true distribution, denoted p(x), where x is a (typically real-valued) vector; the goal is to model it with a maximum entropy distribution. For simplicity, when developing the formalism we take x to be discrete; however, all our results apply to continuous variables. The maximum entropy distribution is the distribution with the highest entropy subject to a set of constraints, where the entropy is given by

    S = − Σ_x p(x) log p(x).    (1)

Specifically, suppose that under the true distribution a set of m functions, denoted g_i(x), i = 1, ..., m, average to μ_i,

    μ_i = Σ_x p(x) g_i(x).    (2)

If we use q(x|μ) to denote the maximum entropy distribution (with μ ≡ (μ_1, μ_2, ..., μ_m)), the constraints (here taken to be linear in the probability) are of the form

    Σ_x q(x|μ) g_i(x) = μ_i.    (3)

Finding an explicit expression for q(x|μ) is a straightforward optimization problem (see, e.g., [2]). It can be shown that the maximum entropy distribution is in the exponential family,

    q(x|μ) = exp[Σ_{i=1..m} λ_i(μ) g_i(x)] / Z(μ)    (4)

where the parameters, λ_i (the Lagrange multipliers of the optimization problem), are chosen such that the constraints in Eq. (2) are satisfied. The partition function, Z(μ), ensures that the probabilities normalize to one,

    Z(μ) = Σ_x exp[Σ_{i=1..m} λ_i(μ) g_i(x)].    (5)

Once we have identified the parameters of this model, we can insert Eq. (4) into Eq. (1), which allows us to write the entropy in the form

    S_q(μ) = log Z(μ) − Σ_{i=1..m} λ_i(μ) μ_i.    (6)

2.2 Estimation bias in maximum entropy models

So far we have assumed that the true μ_i are known. In general, though, we have to estimate the μ_i from data. Specifically, if we have K observations of x, denoted x^(k), k = 1, ..., K, then the estimate of μ_i, denoted μ̂_i, is given by

    μ̂_i = (1/K) Σ_{k=1..K} g_i(x^(k)).    (7)

We can still use the maximum entropy formulation described above; the only difference is that we replace μ by μ̂. Thus, the maximum entropy distribution is given by q(x|μ̂) (Eq. (4)) and the entropy by S_q(μ̂) (Eq. (6)).

Because of sampling error, the μ̂_i are not equal to their true values, μ_i; consequently, neither is S_q(μ̂). This leads to variability, in the sense that different sets of x^(k) lead to different entropies S_q(μ̂), and, because the entropy is concave, to bias. Thus, the entropy estimated from a finite data set will be lower, on average, than the entropy obtained from the true underlying model. In the large K limit, so that μ̂_i is close to μ_i, the bias can be computed by Taylor expanding around S_q(μ) and averaging over the true distribution, p(x). Anticipating somewhat our result, we use −b/2K to denote the bias, and we have

    ⟨S_q(μ̂)⟩_p(x) − S_q(μ) ≡ −b/(2K) = Σ_{i=1..m} [∂S_q(μ)/∂μ_i] ⟨δμ_i⟩_p(x) + (1/2) Σ_{i,j=1..m} [∂²S_q(μ)/∂μ_i ∂μ_j] ⟨δμ_i δμ_j⟩_p(x) + ...    (8)

where

    δμ_i ≡ μ̂_i − μ_i = (1/K) Σ_{k=1..K} g_i(x^(k)) − μ_i.    (9)

The angle brackets with subscript p(x) indicate an average with respect to the true distribution, p(x). The quantity we focus on is b, the normalized bias (as it is independent of K in the large K limit). Computing the averages and derivatives in Eq. (8) is straightforward (see Appendix A in the supplementary material for details), and we find that, through second order in δμ,

    b = Σ_{ij} (C^q)^{-1}_{ij} C^p_{ji},    (10)

where

    C^q_{ij} ≡ ⟨Δg_i(x) Δg_j(x)⟩_q(x|μ)    (11a)
    C^p_{ij} ≡ ⟨Δg_i(x) Δg_j(x)⟩_p(x).    (11b)

Here (C^q)^{-1}_{ij} denotes the ij-th entry of (C^q)^{-1}, and

    Δg_i(x) ≡ g_i(x) − μ_i.    (12)

2.3 Bias when the true model is in the model class

Equation (10) tells us the normalized bias (to first order in 1/K). Evaluating it is, typically, hard, but there is one case in which we can write down an explicit expression for it: when the true distribution lies in the model class, so that p(x) = q(x|μ). In that case, C^q = C^p, the normalized bias is the trace of the identity matrix, and we have b = m (recall that m is the number of constraints); alternatively, Bias[S] = −m/(2K).

An important within-model-class case arises when x is discrete and the "parametrized" model is a direct histogram of the data. If x can take on D values, then there are D − 1 parameters (the "−1" comes from the fact that p(x) must sum to 1) and the normalized bias is (D − 1)/(2K). We thus recover a general version of the Miller-Madow [8] or Panzeri & Treves bias correction [9], which was derived for a multinomial distribution. (Note that our expression differs from theirs by a factor of log 2; that's because they use base-2 logarithms whereas we use natural logarithms.) Alternatively, one can exploit the relationship between entropy maximization and maximum-likelihood estimation in the exponential family to deduce this result from the asymptotic distribution of maximum likelihood estimators [20]. For details see Appendix B in the supplementary material.

2.4 Bias when the true model is not in the model class

In practice, it is rare for the true distribution to lie in the model class, so it is important to know how the normalized bias behaves in general. In this section, we investigate how quickly it changes when we leave the model class. We concentrate on the worst case scenario and determine the largest normalized bias that is consistent with a given "distance" from the true model class. For cases in which we are close to the true model class, we provide a perturbative expression for this quantity.
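Before leaving the model class, Eqs. (10)-(12) are easy to check numerically on a small discrete example. The sketch below is our illustration (with an arbitrary distribution standing in for the fitted model): it confirms that b equals m when p = q, and that b departs from m once p is perturbed orthogonally to the constraints, in the sense of Eqs. (13)-(14) below.

```python
# Numerical check of the normalized bias b of Eq. (10) on a toy model.
import numpy as np

rng = np.random.default_rng(1)
num_states, m = 8, 3
G = rng.random((num_states, m))            # g_i(x) tabulated over all states
q = rng.random(num_states); q /= q.sum()   # stands in for the model q(x|mu)
mu = q @ G                                 # the constrained moments mu_i

def cov(probs):
    dG = G - mu                            # Delta g_i(x) = g_i(x) - mu_i  (Eq. 12)
    return (dG * probs[:, None]).T @ dG    # C_ij = <Delta g_i Delta g_j>  (Eq. 11)

Cq = cov(q)
print(np.trace(np.linalg.solve(Cq, cov(q))))    # p = q: b = m = 3 exactly

# Perturb q orthogonally to normalization and to all g_i (cf. Eqs. 13-14).
A = np.column_stack([np.ones(num_states), G])
Q, _ = np.linalg.qr(A, mode='complete')
delta = Q[:, A.shape[1]:] @ rng.standard_normal(num_states - A.shape[1])
delta *= 0.5 * q.min() / np.abs(delta).max()    # keep p(x) >= 0
p = q + delta                                   # same moments, out of model class
print(np.trace(np.linalg.solve(Cq, cov(p))))    # b now generally differs from m
```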
The second equality follows from the definition of q(x|?), Eq. (4), and the fact that ?gi (x)?p(x) = ?gi (x)?q(x|?) , which comes from Eq. (14). We are interested in finding the maximal normalized bias that is consistent with a given ?S. Rather than maximizing the normalized bias at fixed ?S, we take the complementary approach: For each possible bias, we find the minimal possible ?S. This gives us a relationship between bias and minimal ?S, which we can invert to obtain the maximal bias for a given ?S. Since Sq (?) is independent of p(x), minimizing ?S is equivalent to maximizing Sp (see Eq. (17)). Thus, again we have a maximum entropy problem. Now, though, we have an additional constraint on the normalized bias, which gives us an additional Lagrange multiplier in addition to the ?i we had for the original optimization problem. This leads to (in analogy to Eq. (4)) ? exp [?B(x) + i ?i (?, ?)gi (x)] p(x|?, ?) = (18) Z(?, ?) where Z(?, ?) is the partition function and the ?i (?, ?) are chosen to satisfy Eq. (2), but with p(x) replaced by p(x|?, ?). Amongst all models that satisfy the moments constraints and have the same normalized bias, this is the one that is closest (in KL?divergence) to the maximum entropy model. Note that we have slightly abused notation: whereas in the previous sections the ?i and Z depended only on ?, they now depend on both ? and ?. However, the previous variables are closely related to the new ones: when ? = 0 the constraint associated with b disappears, and we recover q(x|?); that is, p(x|?, 0) = q(x|?). Consequently, ?i (?, 0) = ?i (?), and Z(?, 0) = Z(?). Relating ?S to b is now a purely numerical task: choose a set of ?i and a normalized bias, b, determine the Lagrange multipliers, ?i (?, ?) and ?, that appear in Eq. (18), then compute Sp the entropy of p(x|?, ?), and subtract that from Sq (?) to find ?S (see Eq. (17)). In section 3.2 we do exactly that. First, however, to gain some intuition into how the normalized bias depends on ?S, we compute the relationship between the two perturbatively. This can be done by considering the small ? limit. In this limit we can expand both ?S and b as a Taylor series in ?. Defining ?S(?) ? Sq (?) ? Sp (?) (19) where Sp (?) is the entropy of p(x|?, ?), and using primes to denote derivatives with respect to ?, we have, through second order in ?, ?S(?) = Sq (?) ? Sp (0) ? ?Sp? (0) ? b(?) = b(0) + ?b? (0) . 4 ? 2 ?? S (0) 2 p (20a) (20b) We expand ?S(?) to second order in ? because Sp? (0) = 0, which follows from the fact that when ? ?= 0 there is an additional constraint on the normalized bias, and so any ? ?= 0 can only lower the entropy; therefore, ? = 0 must be a local maximum. Alternatively, a straightforward calculation in which we write down the entropy of p(x|?, ?) using Eq. (18) (which results in an expression analogous to Eq. (6)) and differentiate with respect to ?, yields Sp? (?) = ??b? (?) . (21) From this it follows that Sp? (0) = 0; in addition, we see that Sp?? (0) = ?b? (0). Thus, using the fact that when ? = 0, p(x|?, 0) is within the model class, so Sp (0) = Sq (?), Eq. (20) tells us that when ? is sufficiently small, (b ? m)2 . (22) 2b? (0) The term in the denominator, b? (0), is relatively easy to compute, and we show in Appendix C (in the supplementary material) that it is given by m ? q ?1 b? (0) = Var[B]q(x|?) ? ?B(x)?gi (x)?q(x|?) Cij ??gj (x)B(x)?q(x|?) . 
(23) ?S = i,j=1 The key result of the perturbative analysis is that when the true distribution is out of the model class, the normalized bias can be increased by a term proportional to b? (0)1/2 . Thus, the size of b? (0) is crucial for telling us how big the bias really is. In the next section we investigate this numerically for a particular model, the Ising model. 3 Numerical Results: Estimation bias in Ising models For our numerical simulations, we consider the second order maximum entropy model on n binary variables, also known as the Ising model [12] (see [13, 14] for an application of Ising models to neuroscience). In this section, we use numerical studies to verify that the asymptotic bias gives an accurate characterization of the expected bias for relevant sample-sizes K, investigate the size of the normalized bias when the true model is not in the model class, and study the scaling of the normalized bias with the number of parameters. We show numerically that, for the Ising model, the model-misspecification can result in the normalized bias increasing rapidly with population size. 3.1 Estimation in a binary maximum entropy model We consider n interacting spins si , i = 1, ..., n with si ? {0, 1}. We put constraints on the first and second moments only, so m, the number of constraints, is n(n + 1)/2: gi (s) = si and gij (s) = si sj , i < j. The maximum entropy model (with the ?i ?s replaced by hi and Jij and the gi written explicitly) has the form ? ? ? ? 1 q(s|h, J) = exp ? hi s i + si Jij sj ? . (24) Z(h, J) i i<j To illustrate our results for the asymptotic bias, and to investigate how large K has to be for the asymptotic calculation to be relevant, we performed the following simulations: For different values of K (ranging from 10 to 104 ) and different values of the model-size n ? {2, 3, 5, 10, 15}, we generated 104 data sets of size K each from an independent binary model with n variables and mean ? = 0.1 or ? = 0.5, i.e. sampling from the distribution given in Eq. (24) with Jij = 0 and hi = log(?/(1 ? ?)). For each such data set, we fit a pairwise binary maximum entropy model to the data by gradient-ascent on the (log-concave) likelihood. By calculating the entropy of the resulting model (via Eq. (6)) and averaging over the 104 data sets, we obtained a numerical estimate of the difference between the true entropy and the expected estimated entropy; i.e. the bias. Figure 1 shows (aside from the reassuring fact that our asymptotic calculations are consistent with the numerical simulations) that the asymptotic solution gives surprisingly accurate results even for relatively low values of K. From figures 1B and D, we can see that, for values of K of around 100, the numerical biases already lie very close to the asymptotic prediction. Since the asymptotics are accurate for large K, we expect this fit to remain close. While we did observe some deviations 5 for very large data sets for which the bias is very small (K > 103 ), such deviations could be a consequence of numerical errors in the fitting-procedure. We note that our choice of Jij = 0 is merely for concreteness, and that the validity of our formulation is not dependent on the values of Jij . We also performed simulations with models in which Jij is non-zero and drawn from a Gaussian distribution, which yielded qualitatively similar results. 
[Figure 1: four panels (A-D) plotting (negative) bias and rescaled bias against sample size K for n = 2, 3, 5, 10, 15, together with the asymptotic prediction.]

Figure 1: Asymptotic bias in Ising models. A) Comparison of asymptotic bias with expected bias calculated via simulations of an independent model with a mean of 0.5 (see text). The thin black lines correspond to the bias as predicted by our asymptotic calculation. We have here inverted the sign of the bias; the actual biases are negative numbers. B) Same data as in A, but on a semilog plot to illustrate how many samples are necessary for the asymptotic bias to be an accurate representation of the actual bias: For the parameters used here, the bias seems to be accurate even for small (< 100) values of K. We rescaled the estimated biases of each population size n such that the predicted asymptotic biases (thin black lines) are on top of each other, and such that the biases are positive. C and D) Same as in A and B, but for an independent model with mean 0.1. Error bars show standard errors on the mean estimates from 10^4 simulated data sets.

3.2 Estimation bias when the data has higher-order correlations

What happens when the true model is not in the model class? To investigate this question, we first consider homogeneous pairwise maximum entropy models (h_i = h and J_ij = J) of sizes n ∈ {5, 10, 15}, common means ⟨s_i⟩ = 0.5 or 0.1, and pairwise correlation coefficient ρ_ij = 0.1 for each pair i, j. For a range of normalized biases, we calculated ΔS, the maximum entropy difference between q(x|μ) and an out-of-model-class distribution, as a function of the normalized bias, b. For very small or large normalized biases, the optimization did not converge to values satisfying the moment constraints, indicating that such an extreme normalized bias would be inconsistent with the specified second-order moments. The results are shown in Fig. 2, along with the perturbative predictions. For these choices of parameters, the maximum and minimum normalized bias did not deviate much from the within-model-class case. In the next example, we illustrate that the deviation can be very large. To get a better understanding of the additional bias (or, potentially, reduction in bias) due to model misspecification, we studied the bias of the Dichotomized Gaussian distribution, which can be interpreted as a very simple model of neural population activity in which correlations among neurons are induced by common, Gaussian inputs into threshold neurons [21, 22]. In this case we simply set p(x) to a Dichotomized Gaussian, and numerically computed the bias and the KL divergence between p(x) and the maximum entropy model with the same first and second moments. We did
We did this for means set to $\langle s_i \rangle = 0.02$, a realistic value for applications of maximum entropy models in neuroscience, and different values of the pairwise correlation coefficient $\rho \in \{0.02, 0.1, 0.5\}$. We also included, for comparison, the normalized bias for a within-model-class distribution (i.e. a maximum entropy model with matched first and second moments), which is just $n(n+1)/2$. For the Dichotomized Gaussian, the normalized bias was substantially larger than the within-model-class bias. For example, for population size n = 15, its bias is 2.3 times larger for $\rho = 0.1$, and 6.8 times larger for $\rho = 0.5$. Figure 3B shows $\Delta S$ versus population size for the models in Fig. 3A, and the corresponding "maximally biased" model, i.e. the model which has the same normalized bias as the Dichotomized Gaussian, but minimal $\Delta S$. Interestingly, $\Delta S$ for the maximally biased models (equation (18)) is very similar to $\Delta S$ for the Dichotomized Gaussian. This suggests that our extremal calculation of the bias is relevant for a reasonably mechanistic model of neural population activity.

Figure 3: Bias in the case of model misspecification, using the Dichotomized Gaussian. A) Scaling of the normalized bias with population size. The normalized bias of the Dichotomized Gaussian (DG) is much larger than that of the maximum entropy model. B) Distance from model class, $\Delta S$, versus population size for the Dichotomized Gaussian and maximum entropy models. They are about the same, indicating that the Dichotomized Gaussian model has close to maximum bias.
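Sampling from a Dichotomized Gaussian is straightforward; a minimal sketch follows. For simplicity we fix the latent correlation directly, rather than solving for the latent value that induces a target binary correlation (the construction in [21] does the latter); the parameter names are ours, and scipy's norm.ppf supplies the Gaussian quantile function.

    import numpy as np
    from scipy.stats import norm

    def sample_dg(K, n, mu=0.02, latent_rho=0.3, rng=None):
        # Dichotomized Gaussian: threshold a correlated Gaussian at zero.
        # gamma makes each unit fire with marginal probability mu; latent_rho
        # is the (assumed) latent correlation, not the induced binary one.
        rng = np.random.default_rng() if rng is None else rng
        gamma = norm.ppf(mu)
        cov = latent_rho * np.ones((n, n)) + (1 - latent_rho) * np.eye(n)
        z = rng.multivariate_normal(gamma * np.ones(n), cov, size=K)
        return (z > 0).astype(float)

    x = sample_dg(100000, 15)
    print(x.mean(), np.corrcoef(x.T)[0, 1])  # induced binary mean and correlation

Fitting the pairwise maximum entropy model to such samples and comparing the plug-in entropy against the true (enumerable, for small n) entropy of the Dichotomized Gaussian gives a numerical handle on the misspecification bias discussed above.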
4 Conclusions

In recent years, there has been a resurgence of interest in maximum entropy models in neuroscience and related fields [13, 14, 15]. In particular, maximum entropy models can be useful for model-based estimation of the information content of neural populations [11], as direct information estimates do not scale well for large population sizes. In this paper, we studied estimation biases in the entropy of maximum entropy models. We focused on "naive" estimators, i.e. estimators of the entropy which simply calculate it from the empirical estimates of the probabilities of the model, and do not attempt any bias reduction. We found that if the true model is in the model class, the (downward) bias in a maximum entropy estimate from finite observations is proportional to the ratio of the number of parameters to the number of observations, a relationship identical to that of the (naive) histogram estimators [8, 9]. However, we also show that if the model is misspecified (i.e. if the true data do not come from the specified exponential family model), then the bias can be much larger. We numerically investigated the bias in second-order binary maximum entropy models (also known as Ising models), and showed that in this case, model misspecification can lead to substantially bigger biases.

Non-parametric estimation of entropy is a well-researched subject, and various estimators with optimized properties have been proposed (see e.g. [5, 23]). A number of studies have looked at entropy estimation for the multivariate normal distribution [24, 25, 26, 27] and other continuous distributions, and improved estimators for the Gaussian distribution have been described [28]. As the (differential) entropy of a Gaussian distribution is essentially its log-determinant, the bias of this model can be related to results about the eigenvalues of random matrices [29]. An overview of estimators of the entropy of continuous-valued distributions is given in [30]. However, to our knowledge, the entropy bias of maximum entropy models in the presence of model misspecification has not been characterized or studied numerically. We provided here an asymptotic derivation of this bias, and studied it numerically for the pairwise binary maximum entropy model, the Ising model. Our characterization relates the (worst case) bias under model misspecification to the distance (as measured by KL-divergence) between the model and the actual data. This characterization does not yield a precise estimate of the bias on a given data set which could simply be "subtracted off"; thus, our derivation does not directly yield an improved estimator of the bias for such data sets. However, importantly, our results show that model misspecification can indeed lead to additional bias which can be much larger than generally appreciated. Using numerical simulations, we showed that this also happens for a realistic model which shares many properties with neural recordings. In addition, our results could be useful for deriving general guidelines for how many samples a neurophysiological data set needs to contain to achieve a bias that is less than some desired accuracy.

Acknowledgements

We acknowledge support from the Gatsby Charitable Foundation. JHM is supported by an EC Marie Curie Fellowship, and IM in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. This publication only reflects the authors' views.

References

[1] C.E. Shannon and W. Weaver. The mathematical theory of communication. University of Illinois Press, 1949.
[2] T.M. Cover, J.A. Thomas, J. Wiley, et al. Elements of information theory, volume 6. Wiley Online Library, 1991.
[3] F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek. Spikes: exploring the neural code (computational neuroscience). The MIT Press, 1999.
[4] A. Borst and F. E. Theunissen. Information theory and neural coding. Nat Neurosci, 2(11):947-957, Nov 1999.
[5] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15(6):1191-1253, 2003.
[6] B. B. Averbeck, P. E. Latham, and A. Pouget. Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7(5):358-66, 2006.
[7] R. Quian Quiroga and S. Panzeri. Extracting information from neuronal populations: information theory and decoding approaches. Nat Rev Neurosci, 10(3):173-185, 2009.
[8] G. Miller. Note on the bias of information estimates. In Information Theory in Psychology II-B, pages 95-100. Free Press, Glencoe, IL, 1955.
[9] A. Treves and S. Panzeri. The upward bias in measures of information derived from limited data samples. Neural Computation, 7(2):399-407, 1995.
[10] S. Panzeri, R. Senatore, M. A. Montemurro, and R. S. Petersen. Correcting for the sampling bias problem in spike train information measures. J Neurophysiol, 98(3):1064-1072, 2007.
[11] Robin A. A. Ince, Alberto Mazzoni, Rasmus S. Petersen, and Stefano Panzeri. Open source tools for the information theoretic analysis of neural data. Front Neurosci, 4, 2010.
[12] E. Ising. Beitrag zur Theorie des Ferromagnetismus. Z. Phys, 31:253, 1925.
[13] E. Schneidman, M. J. Berry 2nd, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007-12, 2006.
[14] J. Shlens, G. D. Field, J. L. Gauthier, M. I. Grivich, D. Petrusca, A. Sher, A. M. Litke, and E. J. Chichilnisky. The structure of multi-neuron firing patterns in primate retina. J Neurosci, 26(32):8254-66, 2006.
[15] I. E. Ohiorhenuan, F. Mechler, K. P. Purpura, A. M. Schmid, Q. Hu, and J. D. Victor. Sparse coding and high-order correlations in fine-scale cortical networks. Nature, 466(7306):617-621, 2010.
[16] G. Tkacik, E. Schneidman, M. J. Berry II, and W. Bialek. Spin glass models for a network of real neurons. arXiv:q-bio/0611072v2, 2009.
[17] Y. Roudi, J. Tyrcha, and J. Hertz. Ising model for neural data: model quality and approximate methods for extracting functional connectivity. Phys Rev E Stat Nonlin Soft Matter Phys, 79(5 Pt 1):051915, May 2009.
[18] Y. Roudi, E. Aurell, and J. A. Hertz. Statistical physics of pairwise probability models. Front Comput Neurosci, 3:22, 2009.
[19] T. Mora, A. M. Walczak, W. Bialek, and C. G. Callan Jr. Maximum entropy models for antibody diversity. Proc Natl Acad Sci U S A, 107(12):5405-5410, 2010.
[20] A.W. Van der Vaart. Asymptotic statistics. Cambridge University Press, 2000.
[21] J.H. Macke, P. Berens, A.S. Ecker, A.S. Tolias, and M. Bethge. Generating spike trains with specified correlation coefficients. Neural Computation, 21(2):397-423, 2009.
[22] J.H. Macke, M. Opper, and M. Bethge. Common input explains higher-order correlations and entropy in a simple model of neural population activity. Physical Review Letters, 106(20):208102, 2011.
[23] I. Nemenman, W. Bialek, and R.D.R. Van Steveninck. Entropy and information in neural spike trains: Progress on the sampling problem. Physical Review E, 69(5):056111, 2004.
[24] N.A. Ahmed and D. V. Gokhale. Entropy expressions and their estimators for multivariate distributions. Information Theory, IEEE Transactions on, 35(3):688-692, 1989.
[25] O. Oyman, R. U. Nabar, H. Bolcskei, and A. J. Paulraj. Characterizing the statistical properties of mutual information in MIMO channels: insights into diversity-multiplexing tradeoff. In Signals, Systems and Computers, Conference Record of the Thirty-Sixth Asilomar Conference on, volume 1, pages 521-525. IEEE, 2002.
[26] N. Misra, H. Singh, and E. Demchuk. Estimation of the entropy of a multivariate normal distribution. Journal of Multivariate Analysis, 92(2):324-342, 2005.
[27] G. Marrelec and H. Benali. Large-sample asymptotic approximations for the sampling and posterior distributions of differential entropy for multivariate normal distributions. Entropy, 13(4):805-819, 2011.
[28] S. Srivastava and M.R. Gupta. Bayesian estimation of the entropy of the multivariate Gaussian. In Information Theory, 2008. ISIT 2008. IEEE International Symposium on, pages 1103-1107. IEEE, 2008.
[29] N.R. Goodman. The distribution of the determinant of a complex Wishart distributed matrix. The Annals of Mathematical Statistics, 34(1):178-180, 1963.
[30] M. Gupta and S. Srivastava.
Parametric Bayesian estimation of differential entropy and relative entropy. Entropy, 12(4):818-843, 2010.
Gaussian process modulated renewal processes

Yee Whye Teh
Gatsby Computational Neuroscience Unit
University College London
[email protected]

Vinayak Rao
Gatsby Computational Neuroscience Unit
University College London
[email protected]

Abstract

Renewal processes are generalizations of the Poisson process on the real line whose intervals are drawn i.i.d. from some distribution. Modulated renewal processes allow these interevent distributions to vary with time, allowing the introduction of nonstationarity. In this work, we take a nonparametric Bayesian approach, modelling this nonstationarity with a Gaussian process. Our approach is based on the idea of uniformization, which allows us to draw exact samples from an otherwise intractable distribution. We develop a novel and efficient MCMC sampler for posterior inference. In our experiments, we test these on a number of synthetic and real datasets.

1 Introduction

Renewal processes are stochastic point processes on the real line where intervals between successive points (times) are drawn i.i.d. from some distribution. The simplest example of a renewal process is the homogeneous Poisson process, whose interevent times are exponentially distributed. A limitation of this is the memoryless property of the exponential distribution, resulting in an "as bad as old after a repair" property [1] that is not true of many real-world phenomena. For example, immediately after firing, a neuron is depleted of its resources and incapable of firing again, and the gamma distribution is used to model interspike intervals [2]. Similarly, because of the phenomenon of elastic rebound, some time is required to recharge stresses released after an earthquake, and an inverse Gaussian distribution is used to model intervals between major earthquakes [3]. Other examples include using the Pareto distribution to better capture the burstiness and self-similarity of network traffic arrival times [4], and the Erlang distribution to model the fact that buying incidence of frequently purchased goods is less variable than Poisson [5].

Modelling interevent times as i.i.d. draws from a general renewal density can allow larger or smaller variances than an exponential with the same mean (overdispersion or underdispersion), but effectively encodes an "as good as new after a repair" property. Again, this is often only an approximation: because of age or other time-varying factors, the interevent distribution of the point process may vary with time. For instance, internet traffic can vary with the time of day and day of the week, and in response to advertising and seasonal trends. Similarly, an external stimulus can modulate the firing rate of a neuron, economic trends can modulate financial transactions, etc. The most popular way of modelling this nonstationarity is via an inhomogeneous Poisson process whose intensity function determines the instantaneous event rate, and there has also been substantial work extending this to renewal processes in various ways (see section 2.2). In this paper, we describe a nonparametric Bayesian approach where a renewal process is modulated by a random intensity function which is given a Gaussian process prior. Our approach extends work by [6] on the Poisson process, using a generalization of the idea of Poisson thinning called uniformization [7] to draw exact samples from the model. We extend recent ideas from [8] to develop a more natural and efficient block Gibbs sampler than the incremental Metropolis-Hastings algorithm used in [6]. In our experiments we demonstrate the usefulness of our model and sampler on a number of synthetic and real-world datasets.
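To make the opening definition concrete, the following sketch draws the event times of a homogeneous renewal process on [0, T]; the gamma parametrization (one event per unit time on average) is our illustrative choice.

    import numpy as np

    def sample_renewal(T, draw_interval, rng=np.random.default_rng()):
        # Draw event times on [0, T] with i.i.d. interevent intervals.
        t, events = 0.0, []
        while True:
            t += draw_interval(rng)
            if t > T:
                return np.array(events)
            events.append(t)

    # Gamma interevent times with shape 3, scaled to unit mean interval.
    times = sample_renewal(50.0, lambda rng: rng.gamma(shape=3.0, scale=1.0 / 3.0))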
2 Modulated renewal processes

Consider a renewal process R over an interval [0, T] whose interevent time is distributed according to a renewal density g. Let $G = \{G_1, G_2, \ldots\}$ be the ordered set of event times sampled from this renewal process, i.e.
$$ G \sim R(g). \qquad (1) $$
For simplicity, we place a starting event $G_0$ at time 0, so for each $i \geq 1$ we have $(G_i - G_{i-1}) \sim g$. (With renewal processes there is an ambiguity about the time of the first event, which is typically taken to be exponentially distributed; it is straightforward to handle this case.) Associated with the renewal density g is a hazard function h, where $h(\tau)\delta$, for infinitesimal $\delta > 0$, is the probability of the interevent interval being in $[\tau, \tau + \delta]$ conditioned on it being at least $\tau$, i.e.
$$ h(\tau) = \frac{g(\tau)}{1 - \int_0^\tau g(u)\,du}. \qquad (2) $$
Let $\lambda(t)$ be some time-varying intensity function. A simple way to introduce nonstationarity into a renewal process is to modulate the hazard function by $\lambda(t)$ so that it depends both on the time $\tau$ since the last event, and on the absolute time t [9, 10]:
$$ h(\tau, t) \equiv m(h(\tau), \lambda(t)) \qquad (3) $$
where $m(\cdot, \cdot)$ is some interaction function. Examples include additive ($h(\tau) + \lambda(t)$) and multiplicative ($h(\tau)\lambda(t)$) interactions. For concreteness, we assume multiplicative interactions in what follows; however, our results extend easily to general interaction functions. With a modulated hazard rate, the distribution of interevent times is no longer stationary. Instead, plugging a multiplicative interaction into (2) and solving for g (see the supplementary material for details), we get
$$ g(\tau \mid t_{\mathrm{prev}}) = \lambda(t_{\mathrm{prev}} + \tau)\, h(\tau) \exp\Big( -\int_0^\tau \lambda(t_{\mathrm{prev}} + u)\, h(u)\,du \Big) \qquad (4) $$
where $t_{\mathrm{prev}}$ is the previous event time. Observe that equation (4) encompasses the inhomogeneous Poisson process as a special case (a constant hazard function with multiplicative modulation).

2.1 Gaussian process intensity functions

In this paper we are interested in estimating both the parameters of the hazard function $h(\tau)$ and the intensity function $\lambda(t)$ itself. Taking a Bayesian nonparametric approach, we model $\lambda(t)$ using a Gaussian process (GP) [11] prior, which has support over a rich class of functions and offers a flexibility not afforded by parametric approaches. We call the resulting model a Gaussian process modulated renewal process. A minor issue is that samples from a GP can take negative values; we address this using a sigmoidal link function. Finally, we use a gamma family for the hazard function: $h(\tau) = \tau^{\gamma-1} e^{-\tau} / \int_\tau^\infty u^{\gamma-1} e^{-u}\,du$, where $\gamma$ is the shape parameter (we parametrize the hazard function to produce 1 event per unit time; other parametrizations may be used). Our complete model is thus
$$ l(\cdot) \sim GP(\mu, K), \qquad \lambda(\cdot) = \lambda^* \sigma(l(\cdot)), \qquad G \sim R(\lambda(\cdot), h(\cdot)) \qquad (5) $$
where $\mu$ and K are the GP mean and covariance kernel, $\lambda^*$ is a positive scale parameter, and $\sigma(x) = (1 + \exp(-x))^{-1}$. We place a gamma hyperprior on $\lambda^*$, as well as hyperpriors on the GP hyperparameters.
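Eq. (4) can be evaluated numerically for any bounded intensity and hazard; the following sketch (the trapezoidal grid is our illustrative discretization, used here only for checking the density, not for inference) computes the modulated interevent density under a multiplicative interaction.

    import numpy as np

    def modulated_density(tau_grid, t_prev, lam, h):
        # g(tau | t_prev) = lam(t_prev + tau) h(tau) exp(-int_0^tau lam(t_prev+u) h(u) du), Eq. (4).
        rate = lam(t_prev + tau_grid) * h(tau_grid)
        cum = np.concatenate([[0.0],
                              np.cumsum(0.5 * (rate[1:] + rate[:-1]) * np.diff(tau_grid))])
        return rate * np.exp(-cum)

    tau = np.linspace(1e-6, 10.0, 1000)
    g = modulated_density(tau, t_prev=0.0,
                          lam=lambda t: 1.0 + 0.5 * np.sin(t),
                          h=lambda u: np.ones_like(u))  # constant hazard: inhomogeneous Poisson
    print(np.trapz(g, tau))  # close to 1 when the grid covers most of the mass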
2.2 Related work

The idea of defining a nonstationary renewal process by modulating the hazard function dates back to Cox [9]. Early work [12] focussed on hypothesis testing for the stationarity assumption. [13, 14, 1] proposed parametric (generalized linear) models where the intensity function was a linear combination of some known functions; these regression coefficients were estimated via maximum likelihood. [15] considers general modulated hazard functions as well; however, they assume the hazard has known form and are concerned with calculating statistical properties of the resulting process. Finally, [10] describe a model that is a generalization of ours, but again have to resort to maximum likelihood estimation (our ideas can easily be extended to their more general model too).

A different approach to producing inhomogeneity is to first sample from a homogeneous renewal process and then rescale time [16, 17]. The trend renewal process [18] uses such an approach, and the authors propose an iterative kernel smoothing scheme to approximate a maximum likelihood estimate of the intensity function. [2] uses time-rescaling to introduce inhomogeneity and, similar to us, a Gaussian process prior for the intensity function. Unlike us, they had to discretize time and used a variational approach to inference. Finally, we note that our approach generalizes [6], who describe a doubly stochastic Poisson process and an MCMC sampler which does not require time discretization. In the next sections we describe a generalization of their model to the inhomogeneous renewal process using a twist on a classical idea called uniformization.

3 Sampling via Uniformization

Before we consider Markov chain Monte Carlo (MCMC) inference for our model, observe that even naively generating samples from the prior is difficult; this requires evaluating integrals of a continuous-time function drawn from a GP (see equation (4)). One approach is to evaluate these integrals numerically by discretizing time [2], which can be time consuming and introduces approximation errors. In section 3.2 we show how a classical idea called uniformization allows us to efficiently draw exact samples from the model, without approximations due to discretization. Then in section 4 we develop a novel MCMC algorithm based on uniformization.

3.1 Modulated Poisson processes

We start with thinning, a well-known result used to sample from an inhomogeneous Poisson process with intensity $\lambda(t)$. Suppose that $\lambda(t)$ is upper bounded by some constant $\Omega$. Let E be a set of locations sampled from a homogeneous Poisson process with rate $\Omega$. We thin this set by deleting each point $e \in E$ independently with probability $1 - \lambda(e)/\Omega$. Let F be the remaining set of points. Then:

Proposition 1 ([19]). The set F is a draw from a Poisson process with intensity function $\lambda(t)$.

3.2 Modulated renewal processes

Less well known is a generalization of this result to renewal processes [13]. Note that the thinning result of the previous section builds on the memoryless property of the exponential distribution (or the complete randomness [20] of the Poisson process): events in disjoint sets occur independently of each other. For a renewal process, events are no longer independent of their neighbours. This suggests a generalization of thinning involving a Markov chain over the set of events. This idea of thinning a Poisson process by a subordinated Markov chain is called uniformization [7]. [21] describes a uniformization scheme to sample from a homogeneous renewal process. We extend it to the modulated case here. We will assume that both the intensity function $\lambda(t)$ and the hazard function $h(\tau)$ are bounded, so that there exists a constant $\Omega$ such that
$$ \Omega \geq \max_{t,\tau} h(\tau)\lambda(t). \qquad (6) $$
Note that because of the sigmoidal link function, our model has $\lambda(t) \leq \lambda^*$, while the gamma hazard $h(\tau)$ is bounded by the shape parameter $\gamma$ if $\gamma \geq 1$.
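Proposition 1 translates directly into code; as a warm-up for the renewal-process generalization that follows, here is plain Poisson thinning (names are ours).

    import numpy as np

    def thin_poisson(T, lam, lam_max, rng=np.random.default_rng()):
        # Proposition 1: sample E ~ PP(lam_max) on [0, T], keep e with prob lam(e)/lam_max.
        n = rng.poisson(lam_max * T)
        E = np.sort(rng.uniform(0.0, T, size=n))
        keep = rng.uniform(size=n) < lam(E) / lam_max
        return E[keep]

    F = thin_poisson(50.0, lambda t: 1.0 + 0.5 * np.sin(t), lam_max=1.5)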
We now sample a set of times $E = \{E_0 = 0, E_1, E_2, \ldots\}$ from a homogeneous Poisson process with rate $\Omega$, and thin this set by running a discrete-time Markov chain on the times in E. Let $Y_0 = 0, Y_1, Y_2, \ldots$ be an integer-valued Markov chain, where each $Y_i$ either equals $Y_{i-1}$ or i. We interpret $Y_i$ as indicating the index of the last unthinned event prior or equal to $E_i$. That is, $Y_i = Y_{i-1}$ means that $E_i$ is thinned, and $Y_i = i$ means $E_i$ is not thinned. Note that $E_i - E_{Y_i}$ gives the time since the last unthinned event. For $i > j \geq 0$, define the transition probabilities of the Markov chain (conditioned on E) as follows:
$$ p(Y_i = i \mid Y_{i-1} = j) = \frac{h(E_i - E_j)\lambda(E_i)}{\Omega}, \qquad p(Y_i = j \mid Y_{i-1} = j) = 1 - \frac{h(E_i - E_j)\lambda(E_i)}{\Omega}. \qquad (7) $$
After drawing a sample from Y, we define $F = \{E_i \in E \text{ s.t. } Y_i = i\}$.

Proposition 2. For any $\Omega \geq \max_{t,\tau} h(\tau)\lambda(t)$, F is a sample from a modulated renewal process with hazard $h(\cdot)$ and modulating intensity $\lambda(\cdot)$.

The proof of this is included in the supplementary material. The basic idea is to write down the probability p(E, Y) of the whole generative process and marginalize out the thinned times, showing that the resulting interevent time density is simply (4). For a different proof of a similar result, see [13].

Now recall that we have a GP prior for $l(\cdot)$. The uniformization procedure above only requires the intensity function evaluated at the times in E (which is finite on a finite interval), and this is easily obtained by sampling from a finite dimensional Gaussian $N(\mu_E, K_E)$, with mean and covariance being the corresponding GP parameters $\mu$ and K evaluated at E. Our procedure to sample from a GP-modulated renewal process now follows: sample from a homogeneous Poisson process $P(\Omega)$ on [0, T], instantiate the GP on this finite set of points, and then thin the set by running the Markov chain described previously. Defining $l_E$ as l(t) evaluated on the set E, $\tilde{E}_i$ as the restriction of E to the interval $(F_{i-1}, F_i)$, and $F_{|F|+1} = T$, we can write the joint distribution:
$$ P(F, l, E) = \Omega^{|E|} e^{-\Omega T} N(l_E \mid \mu_E, K_E) \prod_{i=1}^{|F|} \frac{\lambda(F_i)\, h(F_i - F_{i-1})}{\Omega} \prod_{i=1}^{|F|+1} \prod_{e \in \tilde{E}_i} \Big( 1 - \frac{\lambda(e)\, h(e - F_{i-1})}{\Omega} \Big). \qquad (8) $$
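The generative procedure just described can be sketched compactly; the gamma(shape 2) hazard $h(\tau) = \tau/(1+\tau)$ in the usage line is our illustrative choice (it is bounded by 1), and any bounded hazard and intensity would do.

    import numpy as np

    def sample_modulated_renewal(T, lam, h, Omega, rng=np.random.default_rng()):
        # Uniformization: subordinate to a homogeneous PP(Omega) on [0, T], then
        # thin with the Markov chain of Eq. (7).
        # Requires Omega >= max_{t,tau} h(tau) * lam(t) so the probability is valid.
        E = np.concatenate([[0.0],
                            np.sort(rng.uniform(0.0, T, size=rng.poisson(Omega * T)))])
        last, F = 0, []   # `last` is Y_{i-1}, the index of the last unthinned event
        for i in range(1, len(E)):
            if rng.uniform() < h(E[i] - E[last]) * lam(E[i]) / Omega:
                F.append(E[i])
                last = i
        return np.array(F)

    # Gamma(shape=2, rate=1) hazard tau/(1+tau) <= 1, so Omega = max(lam) suffices.
    G = sample_modulated_renewal(50.0,
                                 lam=lambda t: 1.0 + 0.5 * np.sin(t),
                                 h=lambda tau: tau / (1.0 + tau),
                                 Omega=1.5)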
4 Inference

We now consider posterior inference on the modulating function $\lambda(t)$ (and any unknown hyperparameters) given an observed set of event times G. Our sampling algorithm is based on ideas developed in [8]. We imagine G was generated via uniformization, so that there exists an unobserved set of thinned events $\tilde{G}$. We then proceed by Markov chain Monte Carlo, setting up a Markov chain whose state consists of the number and locations of $\tilde{G}$, the values of the GP on the set $G \cup \tilde{G}$, as well as the current sampled hyperparameters. Note from equation (8) that given these values, the value of the modulating function at any other location is independent of the observations and can be sampled from the conditional distribution of a multivariate Gaussian. The challenge now is to construct a transition operator that results in this Markov chain having the desired posterior distribution as its equilibrium distribution. In their work, [6] defined a transition operator by proposing insertions and deletions of thinned events as well as by perturbing their locations. The proposals were accepted or rejected using a Metropolis-Hastings correction. The remaining variables were updated using standard Gaussian process techniques. We show below that instead of incrementally updating $\tilde{G}$, it is actually possible to produce a new independent sample of the entire set $\tilde{G}$ (conditioned on all other variables). This leads to a more natural sampler that does not require any external tuning and that mixes more rapidly.

To understand our algorithm, suppose first that the modulating function $\lambda(t)$ is known for all t. Then, from (4), the probability of the set of events G on the interval [0, T] is (recall that $G_0 = 0$; we also take $G_{|G|+1} = T$):
$$ P(G \mid \lambda(t)) = \prod_{i=1}^{|G|} \lambda(G_i)\, h(G_i - G_{i-1}) \prod_{i=1}^{|G|+1} \exp\Big( -\int_{G_{i-1}}^{G_i} \lambda(t)\, h(t - G_{i-1})\,dt \Big). \qquad (9) $$
Now, suppose that in each consecutive interval $(G_{i-1}, G_i)$ we independently sample a set of events $\tilde{G}_i$ from an inhomogeneous Poisson process with rate $(\Omega - \lambda(t)h(t - G_{i-1}))$, and let $\tilde{G} = \cup_i \tilde{G}_i$. A little algebra shows that:
$$ P(G, \tilde{G} \mid \lambda(t)) = \Bigg[ \prod_{i=1}^{|G|+1} \exp\Big( -\int_{G_{i-1}}^{G_i} \big(\Omega - \lambda(t)h(t - G_{i-1})\big)\,dt \Big) \prod_{\tilde{g} \in \tilde{G}_i} \big(\Omega - \lambda(\tilde{g})h(\tilde{g} - G_{i-1})\big) \Bigg] \prod_{i=1}^{|G|} \lambda(G_i)\, h(G_i - G_{i-1}) \prod_{i=1}^{|G|+1} \exp\Big( -\int_{G_{i-1}}^{G_i} \lambda(t)\, h(t - G_{i-1})\,dt \Big) \qquad (10) $$
$$ = \Omega^{|G|+|\tilde{G}|} \exp(-\Omega T) \prod_{i=1}^{|G|} \frac{\lambda(G_i)\, h(G_i - G_{i-1})}{\Omega} \prod_{i=1}^{|G|+1} \prod_{\tilde{g} \in \tilde{G}_i} \Big( 1 - \frac{\lambda(\tilde{g})\, h(\tilde{g} - G_{i-1})}{\Omega} \Big). \qquad (11) $$
Note that |E| varies from iteration to iteration (being proportional to the scaling factor ?? ). Since we perform posterior inference on this quantity, the complexity of our model can be thought to adapt to that of the problem. This is in contrast with time-discretization approaches, where a resolution is picked beforehand, fixing the complexity of the inference problem accordingly. For instance, [2] use a resolution of 1ms to model neural spiking, making it impossible to na??vely deal with spike trains extending over more than a second. However as they demonstrate in their work, instantiating a GP on a regular lattice allows the development of fast approximate inference algorithms that scale linearly with the number of grid-points. In our case, the Gaussian processes is sampled at random locations. Moreover, these locations change each iteration, requiring the inversion of a new covariance matrix; this is the price we have to pay for an exact sampler. One approach is to try reduce the number of thinned events |E|. Recall that our generative approach is to thin a sample from a subordinating, homogeneous Poisson process whose rate upper bounds the modulated hazard rate. We can reduce the number of thinned events by subordinating to an inhomogeneous Poisson process, one whose rate more closely resembles the instantaneous hazard rate. Thus, instead of using a single constant ?? , one could use (say) a piecewise linear function 4 5 In particular, it does not require any sophisticated GP sampling algorithm Code available on Iain Murray?s website: http://homepages.inf.ed.ac.uk/imurray2/ 5 ?? (t) The more segments we use, the more flexibility we have; the price being the complexity of resampling this function, and slower mixing because of correlations it introduces. This however does not help if G, the number of observations itself is large. In such a situation one has to call upon the vast literature concerning approximate inference for Gaussian processes [11]. The question then is how these approximation compare with those like [2]. We believe this is an interesting question in its own right, and raises the possibility of approximate inference algorithms that combine ideas from [2] with the adaptive nature of our approach. 5 Experiments In this section we evaluate our model and sampler on a number of datasets. We used gamma distributed interevent times with shape parameter ? ? 1. When ? = 1, we recover the Poisson process, and our model reduces to that of [6], while ? > 1 models ?refractoriness?, where two events in quick succession are less likely than under a Poisson process. When appropriate, we place a noninformative prior on the shape parameter: an exponential with rate 0.1 shifted to have a minimum value of 1. Note that for shape parameters less than 1, the renewal process becomes ?bursty? and the hazard function becomes unbounded. This is an interesting scenario but beyond the scope of this paper. An interesting issue concerns the identifiability of the shape parameter under our model. We find from our experiments that this is only a problem when the length scale of the intensity function is comparable to the refractory period of the renewal process. The base rate of the modulated renewal process (i.e. the rate when the intensity function is fixed at 1) is set to the empirical rate of the observed point process. As a result the identifiability of the shape parameter is a consequence of the dispersion of the point process rather than of some sort of rate matching. Synthetic data. 
Our first set of experiments uses three synthetic datasets generated by modulating a gamma renewal process (shape parameter ? = 3) with three different functions (see figure 1): ? ?1 (t) = 2 exp(t/5) + exp(?((t ? 25)/10)2 , t ? [0, 50]: 44 events ? ?2 (t) = 5 sin(t2 ) + 6, t ? [0, 5]: 12 events ? ?3 (t): a piecewise linear function , t ? [0, 100]: 153 events Additionally, for each function, we also generated 10 test sets. We ran three settings of our model: with the shape parameter fixed to 1 (MRP Exp), with the shape parameter fixed to the truth (MRP Gam3), and with a hyperprior on the shape parameter (MRP Full). For comparison, we also ran an approximate discrete-time sampler where the Gaussian process was instantiated on a regular grid covering the interval of interest. In this case, all intractable integrals were approximated numerically and we use elliptical slice sampling to run MCMC on this Gaussian vector. Figure 1 shows the results from 5000 MCMC samples after a burn-in of 1000 samples. We quantify these in Table 1 by calculating the l2 distance of the posterior means from the truth. We also calculated the mean predictive probabilities of the 10 test sequences. Not surprisingly, the inhomogeneous Poisson process forms a poor approximation to the gamma renewal process; it underestimates the intensity function required to produce a sequence of events with refractory intervals. Fixing the shape parameter to the truth significantly reduces the l2 error and increases the predictive probabilities, but interestingly, for these datasets, the model with a prior on the shape parameter performs comparably with the ?oracle? model. We have also included plots of the posterior distribution over the gamma parameter; these are peaked around 3. Discretizing time into a 100 bins (Disc100) results in comparable performance for the first two datasets on the l2 error; for the third, (which spans a longer interval and has a larger event count), we had to increase the resolution to 500 bins to improve accuracy. Discretizing to 25 bins was never sufficient. A conclusion is that with time discretization, for a small bias, one must be conservative in choosing the time-resolution; however, evaluating a GP on a fine grid can result in slow mixing. Our sampler has the advantage of automatically picking the ?right? resolution. However as we discussed in the section on computation, time discretization has its own advantages that make it a viable model [2]. Coal mine disaster data. For our next experiment, we ran our model on the coal mine disaster dataset commonly used in the point process literature. This dataset records the dates of a series of 191 coal mining disasters, each of which killed ten or more men [24]. Figure 2(left) shows the posterior mean of the intensity function (surrounded by 1 standard deviation) returned by our model. Not included is the posterior distribution over the shape parameter; this concentrated in the interval 1 to around 1.1, suggesting that the data is well modelled as an inhomogeneous Poisson process, and 6 12 3 Truth MRP Exp MRP Gam3 MRP Full Disc100 2.5 2 Intensity 1.5 8 1 4 2 0 0 ?0.5 ?2 0 10 20 30 40 2 6 0.5 50 0.2 3 10 1 0 0 0.2 1 2 3 4 5 0 0.4 0.15 0.15 0.3 0.1 0.1 0.2 0.05 0.05 0.1 0 1 2 3 4 0 5 1 2 3 4 5 20 0 1 40 2 60 3 80 4 100 5 Figure 1: Synthetic Datasets 1-3: Posterior mean intensities plotted against time (top) and gamma shape posteriors (bottom) l2 error log pred. prob. l2 error log pred. prob. l2 error log pred. prob. 
Table 1: l2 distance from the truth and mean log-predictive probabilities of the held-out datasets, for synthetic datasets 1 (top) to 3 (bottom).

                                MRP Exp     MRP Gam3    MRP Full    Disc25       Disc100
    Dataset 1: l2 error         7.8458      3.19        2.548       4.089003     2.426973
    Dataset 1: log pred. prob.  -47.5469    -38.0703    -37.3712    -41.646350   -41.016425
    Dataset 2: l2 error         141.0067    56.2183     58.4361     91.321069    57.896300
    Dataset 2: log pred. prob.  -3.704396   -2.945298   -3.280871   -5.245478    -3.848443
    Dataset 3: l2 error         82.0289     11.4167     13.4441     122.335151   38.047332
    Dataset 3: log pred. prob.  -89.8787    -48.2777    -48.57      87.170034    -55.802997

Coal mine disaster data. For our next experiment, we ran our model on the coal mine disaster dataset commonly used in the point process literature. This dataset records the dates of a series of 191 coal mining disasters, each of which killed ten or more men [24]. Figure 2 (left) shows the posterior mean of the intensity function (surrounded by 1 standard deviation) returned by our model. Not included is the posterior distribution over the shape parameter; this concentrated in the interval 1 to around 1.1, suggesting that the data is well modelled as an inhomogeneous Poisson process, in agreement with [24]. As a sanity check, and to shed further light on the issue of identifiability, we processed the dataset by deleting every alternate event. Under such a transformation, a homogeneous Poisson process would reduce to a gamma renewal process with shape 2. Our model returns a posterior peaked around 1.5 (in agreement with the form of the inhomogeneity). Note that the posteriors over intensity functions are similar (except for the obvious scaling factor of about 2).

Figure 2: Left: posterior mean intensity for the coal mine data with 1 standard deviation error bars (plotted against time in years). Centre: posterior mean intensity for the "thinned" coal mine data with 1 standard deviation error bars. Right: gamma shape posterior for the "thinned" coal mine data.

Spike timing data. We next ran our model on neural spike train data recorded from grasshopper auditory receptor cells [25]. This dataset is characterized by a relatively high firing rate (about 150 Hz), making refractory effects more prominent. We plot the posterior distribution over the intensity function given a sequence of 200 spikes in a 1.6 second interval. We also include the posterior distribution over gamma shape parameters in Figure 3; this concentrates around 1.5, agreeing with the refractory nature of neuronal firing. The results above follow from using noninformative hyperpriors; we have also plotted the log-transformed stimulus, an amplitude-modulated signal. In practice, other available knowledge (viz. the shape parameter, the stimulus length-scale, the transformation from the stimulus to the input of the neuron, etc.) can be used to make more accurate inferences.

Figure 3: Left: posterior mean intensity for the neural data with 1 standard deviation error bars. Superimposed is the log stimulus (scaled and shifted). Right: posterior over the gamma shape parameter.

Computational efficiency and mixing. For our final experiment, we compare our proposed blocked Gibbs sampler with the Metropolis-Hastings sampler of [6]. We ran both algorithms on two datasets: synthetic dataset 1 from Section 5 and the coal mine disaster dataset. All involved 20 MCMC runs with 5000 iterations each (following a burn-in period of 1000 iterations). For both datasets, we evaluated the latent GP on a uniform grid of 200 points, calculating the effective sample size (ESS) of each component of the Gaussian vectors (using R-CODA [26]). For each run, we return the mean and the minimum ESS across all 200 components. In Table 2, we report these numbers: not only does our sampler mix faster (resulting in larger ESSs), it also takes less computation time. Additionally, our sampler is simpler and more natural to the problem, and does not require any external tuning.

Table 2: Sampler comparisons. Numbers are per 1000 samples.

              Synthetic dataset 1                          Coal mine dataset
              Mean ESS        Minimum ESS     Time (sec)   Mean ESS        Minimum ESS     Time (sec)
    Gibbs     93.45 ± 6.91    50.94 ± 5.21    77.85        53.54 ± 8.15    24.87 ± 7.38    282.72
    MH        56.37 ± 10.30   19.34 ± 11.55   345.44       47.83 ± 9.18    18.91 ± 6.45    1703
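For reference, a crude autocorrelation-based ESS estimate for a single chain component can be computed as below; the paper uses R-CODA, which applies a more careful spectral estimator, so treat this only as an approximation.

    import numpy as np

    def effective_sample_size(x):
        # ESS = n / (1 + 2 * sum of autocorrelations), truncating the sum at the
        # first non-positive autocorrelation (a simple initial-sequence rule).
        x = np.asarray(x, dtype=float) - np.mean(x)
        acf = np.correlate(x, x, mode='full')[len(x) - 1:]
        acf /= acf[0]
        k = np.argmax(acf <= 0) if np.any(acf <= 0) else len(acf)
        return len(x) / (1.0 + 2.0 * acf[1:k].sum())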
6 Discussion

We have described how to produce exact samples from a nonstationary renewal process whose hazard function is modulated by a Gaussian process. Our scheme is based on the idea of uniformization, and using this idea, we also develop a novel MCMC sampler. There are a number of interesting avenues worth following. First is the restriction that the hazard function be bounded: while this covers a large and useful class of renewal processes, it is worth considering how our approach can be extended to produce exact or approximate samples for renewal processes with unbounded hazard functions. In any case, following [13], it is easy to extend our ideas to Bayesian inference for more general point processes. Because of the latent Gaussian process, our approach will not scale well to large problems; however, there is a vast literature concerning approximate sampling for Gaussian processes. An important question is how these approximations compare to approximations introduced via time discretization. Finally, even though we considered GP modulating functions, our uniformization-based sampler will also be useful for Bayesian inference involving simpler priors on modulating functions, e.g. splines or Markov jump processes.

Acknowledgements

We thank the Gatsby Charitable Foundation for generous funding. We thank Ryan Adams and Iain Murray for code and comments, and Jakob Macke and Lars Buesing for useful discussions. The grasshopper data was collected by Ariel Rokem at Andreas Herz's lab and provided through the CRCNS program (http://crcns.org).

References

[1] J. F. Lawless and K. Thiagarajah. A point-process model incorporating renewals and time trends, with application to repairable systems. Technometrics, 38(2):131-138, 1996.
[2] John P. Cunningham, Byron M. Yu, Krishna V. Shenoy, and Maneesh Sahani. Inferring neural firing rates from spike trains using Gaussian processes. In Advances in Neural Information Processing Systems 20, 2008.
[3] T. Parsons. Earthquake recurrence on the south Hayward fault is most consistent with a time dependent, renewal process. Geophysical Research Letters, 35, 2008.
[4] V. Paxson and S. Floyd. Wide area traffic: the failure of Poisson modeling. IEEE/ACM Transactions on Networking, 3(3):226-244, June 1995.
[5] C. Wu. Counting your customers: Compounding customer's in-store decisions, interpurchase time and repurchasing behavior. European Journal of Operational Research, 127(1):109-119, November 2000.
[6] Ryan P. Adams, Iain Murray, and David J. C. MacKay. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th International Conference on Machine Learning (ICML), 2009.
[7] A. Jensen. Markoff chains as an aid in the study of Markoff processes. Skand. Aktuarietidskr., 36:87-91, 1953.
[8] V. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2011.
[9] D.R. Cox. The statistical analysis of dependencies in point processes. In P.A. Lewis, editor, Stochastic point processes, pages 55-56. New York: Wiley, 1972.
[10] Robert E. Kass and Valérie Ventura. A spike-train probability model. Neural Computation, 13(8):1713-1720, 2001.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] M. Berman.
Inhomogeneous and modulated gamma processes. Biometrika, 68(1):143, 1981.
[13] Yosihiko Ogata. On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1):23-31, 1981.
[14] Mark Berman and T. Rolf Turner. Approximating point process likelihoods with GLIM. Journal of the Royal Statistical Society, Series C (Applied Statistics), 41(1):31-38, 1992.
[15] I. Sahin. A generalization of renewal processes. Operations Research Letters, 13(4):259-263, May 1993.
[16] Emery N. Brown, Riccardo Barbieri, Valérie Ventura, Robert E. Kass, and Loren M. Frank. The time-rescaling theorem and its application to neural spike train data analysis. Neural Computation, 14(2):325-46, February 2002.
[17] I. Gerhardt and B. L. Nelson. Transforming renewal processes for simulation of nonstationary arrival processes. INFORMS Journal on Computing, 21(4):630-640, April 2009.
[18] Bo Henry Lindqvist. Nonparametric estimation of time trend for repairable systems data. In V.V. Rykov, N. Balakrishnan, and M.S. Nikulin, editors, Mathematical and Statistical Models and Methods in Reliability, Statistics for Industry and Technology, pages 277-288. Birkhäuser Boston, 2011.
[19] P. A. W. Lewis and G. S. Shedler. Simulation of nonhomogeneous Poisson processes with degree-two exponential polynomial rate function. Operations Research, 27(5):1026-1040, September 1979.
[20] J. F. C. Kingman. Poisson processes, volume 3 of Oxford Studies in Probability. The Clarendon Press, Oxford University Press, New York, 1993. Oxford Science Publications.
[21] J. George Shanthikumar. Uniformization and hybrid simulation/analytic models of renewal processes. Operations Research, 34:573-580, July 1986.
[22] Iain Murray, Ryan Prescott Adams, and David J.C. MacKay. Elliptical slice sampling. JMLR: W&CP, 9, 2010.
[23] Iain Murray and Ryan Prescott Adams. Slice sampling covariance hyperparameters of latent Gaussian models. In Advances in Neural Information Processing Systems 23, 2010.
[24] R. G. Jarrett. A note on the intervals between coal-mining disasters. Biometrika, 66(1):191-193, 1979.
[25] Ariel Rokem, Sebastian Watzl, Tim Gollisch, Martin Stemmler, and Andreas V.M. Herz. Spike-timing precision underlies the coding efficiency of auditory receptor neurons. Journal of Neurophysiology, pages 2541-2552, 2006.
[26] Martyn Plummer, Nicky Best, Kate Cowles, and Karen Vines. CODA: Convergence diagnosis and output analysis for MCMC. R News, 6(1):7-11, March 2006.
Beating SGD: Learning SVMs in Sublinear Time

Elad Hazan, Tomer Koren
Technion, Israel Institute of Technology, Haifa, Israel 32000
{ehazan@ie,tomerk@cs}.technion.ac.il

Nathan Srebro
Toyota Technological Institute, Chicago, Illinois 60637
[email protected]

Abstract

We present an optimization approach for linear SVMs based on a stochastic primal-dual approach, where the primal step is akin to an importance-weighted SGD, and the dual step is a stochastic update on the importance weights. This yields an optimization method with a sublinear dependence on the training set size, and the first method for learning linear SVMs with runtime less than the size of the training set required for learning!

1 Introduction

Stochastic approximation (online) approaches, such as stochastic gradient descent and stochastic dual averaging, have become the optimization method of choice for many learning problems, including linear SVMs. This is not surprising, since such methods yield optimal generalization guarantees with only a single pass over the data. They therefore in a sense have optimal, unbeatable runtime: from a learning (generalization) point of view, in a "data laden" setting [2, 13], the runtime to get to a desired generalization goal is the same as the size of the data set required to do so. Their runtime is therefore equal (up to a small constant factor) to the runtime required to just read the data.

In this paper we show, for the first time, how to beat this unbeatable runtime, and present a method that, in a certain relevant regime of high dimensionality, relatively low noise, and accuracy proportional to the noise level, learns in runtime less than the size of the minimal training set required for generalization. The key here is that unlike online methods, which consider an entire training vector at each iteration, our method accesses single features (coordinates) of training vectors. Our computational model is thus that of random access to a desired coordinate of a desired training vector (as is standard for sublinear time algorithms), and our main computational costs are these feature accesses. Our method can also be understood in the framework of "budgeted learning" [5], where the cost is explicitly the cost of observing features (but unlike, e.g., [8], we do not have differential costs for different features), and it gives the first non-trivial guarantee in this setting, i.e., the first theoretical guarantee on a number of feature accesses that is smaller than simply observing entire feature vectors.

We emphasize that our method is not online in nature, and we do require repeated access to training examples, but the resulting runtime (as well as the overall number of features accessed) is, in some regimes, less than for any online algorithm that considers entire training vectors. Also, unlike recent work by Cesa-Bianchi et al. [3], we are not constrained to only a few features from every vector, and can ask for however many we need (with the aim of minimizing the overall runtime, and thus the overall number of feature accesses); we thereby obtain an overall number of feature accesses that is better than with SGD, whereas Cesa-Bianchi et al. aim only at not being too much worse than full-information SGD.

As discussed in Section 3, our method is a primal-dual method, where both the primal and dual steps are stochastic.
The primal steps can be viewed as importance-weighted stochastic gradient descent, and the dual step as a stochastic update on the importance weighting, informed by the current primal solution. This approach builds on the work of [4], which presented a sublinear time algorithm for approximating the margin of a linearly separable data set. Here, we extend that work to the more relevant noisy (non-separable) setting, and show how it can be applied to a learning problem, yielding generalization runtime better than SGD. The extension to the non-separable setting is not straightforward and requires re-writing the SVM objective and applying additional relaxation techniques borrowed from [10].

2 The SVM Optimization Problem

We consider training a linear binary SVM based on a training set of n labeled points {x_i, y_i}_{i=1...n}, x_i ∈ R^d, y_i ∈ {±1}, with the data normalized such that ‖x_i‖ ≤ 1. A predictor is specified by w ∈ R^d and a bias b ∈ R. In training, we wish to minimize the empirical error, measured in terms of the average hinge loss $\hat{R}_{\mathrm{hinge}}(w,b) = \frac{1}{n}\sum_{i=1}^{n}\big[1 - y_i(\langle w, x_i\rangle + b)\big]_+$, and the norm of w. Since we do not typically know a priori how to balance the norm with the error, this is best described as an unconstrained bi-criteria optimization problem:

$$\min_{w\in\mathbb{R}^d,\ b\in\mathbb{R}} \ \big(\hat{R}_{\mathrm{hinge}}(w,b),\ \|w\|\big). \quad (1)$$

A common approach to finding Pareto optimal points of (1) is to scalarize the objective as:

$$\min_{w\in\mathbb{R}^d,\ b\in\mathbb{R}} \ \hat{R}_{\mathrm{hinge}}(w,b) + \frac{\lambda}{2}\|w\|^2 \quad (2)$$

where the multiplier λ ≥ 0 controls the trade-off between the two objectives. However, in order to apply our framework, we need to consider a different parametrization of the Pareto optimal set (the "regularization path"): instead of minimizing a trade-off between the norm and the error, we maximize the margin (equivalent to minimizing the norm) subject to a constraint on the error. This allows us to write the objective (the margin) as a minimum over all training points, a form we will later exploit. Specifically, we introduce slack variables and consider the optimization problem:

$$\max_{w\in\mathbb{R}^d,\ b\in\mathbb{R},\ 0\le\xi_i}\ \min_{i\in[n]}\ y_i(\langle w, x_i\rangle + b) + \xi_i \qquad \text{s.t.}\ \ \|w\| \le 1\ \ \text{and}\ \ \sum_{i=1}^{n}\xi_i \le n\nu \quad (3)$$

where the parameter ν controls the trade-off between desiring a large margin (low norm) and small error (low slack), and parameterizes solutions along the regularization path. This is formalized by the following lemma, which also gives guarantees for ε-sub-optimal solutions of (3):

Lemma 2.1. For any w ≠ 0, b ∈ R, consider problem (3) with $\nu = \hat{R}_{\mathrm{hinge}}(w,b)/\|w\|$. Let (w*, b*, ξ*) be an ε-suboptimal solution to this problem with value γ*, and consider the rescaled solution $\tilde{w} = w^*/\gamma^*$, $\tilde{b} = b^*/\gamma^*$. Then:

$$\|\tilde{w}\| \ \le\ \frac{\|w\|}{1 - \epsilon\|w\|} \qquad\text{and}\qquad \hat{R}_{\mathrm{hinge}}(\tilde{w}) \ \le\ \frac{1}{1 - \epsilon\|w\|}\,\hat{R}_{\mathrm{hinge}}(w).$$

That is, solving (3) exactly (to within ε = 0) yields Pareto optimal solutions of (1), and all such solutions (i.e. the entire regularization path) can be obtained by varying ν. When (3) is only solved approximately, we obtain a Pareto sub-optimal point, as quantified by Lemma 2.1. Before proceeding, we also note that any solution of (1) that classifies at least some positive and negative points within the desired margin must have ‖w‖ ≥ 1, and so in Lemma 2.1 we will only need to consider 0 ≤ ν ≤ 1. In terms of (3), this means that we could restrict 0 ≤ ξ_i ≤ 2 without affecting the optimal solution.
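For concreteness, here is a minimal NumPy sketch (ours, not the authors' code) of the three objectives just introduced; the function names and the feasibility tolerance are our own:

```python
import numpy as np

def hinge_risk(w, b, X, y):
    # Empirical hinge loss: (1/n) * sum_i [1 - y_i(<w, x_i> + b)]_+
    return np.mean(np.maximum(0.0, 1.0 - y * (X @ w + b)))

def scalarized_objective(w, b, X, y, lam):
    # Objective (2): hinge risk plus (lam/2) * ||w||^2
    return hinge_risk(w, b, X, y) + 0.5 * lam * np.dot(w, w)

def margin_value(w, b, xi, X, y, nu):
    # Inner minimum of (3): min_i y_i(<w, x_i> + b) + xi_i,
    # meaningful when ||w|| <= 1, 0 <= xi_i <= 2 and sum(xi) <= nu * n.
    assert np.linalg.norm(w) <= 1 + 1e-9 and xi.sum() <= nu * len(y) + 1e-9
    return np.min(y * (X @ w + b) + xi)
```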
3 Overview: Primal-Dual Algorithms and Our Approach

The CHW framework. The method of [4] applies to saddle-point problems of the form

$$\max_{z\in K}\ \min_{i\in[n]}\ c_i(z) \quad (4)$$

where the c_i(z) are concave functions of z over some set K ⊆ R^d. The method is a stochastic primal-dual method, where the dual solution can be viewed as an importance weighting over the n terms c_i(z). To better understand this view, consider the equivalent problem:

$$\max_{z\in K}\ \min_{p\in\Delta_n}\ \sum_{i=1}^{n} p_i c_i(z) \quad (5)$$

where $\Delta_n = \{p \in \mathbb{R}^n \mid p_i \ge 0,\ \|p\|_1 = 1\}$ is the probability simplex. The method maintains and (stochastically) improves both a primal solution (in our case, a predictor w ∈ R^d) and a dual solution, which is a distribution p over [n]. Roughly speaking, the distribution p is used to focus in on the terms actually affecting the minimum. Each iteration of the method proceeds as follows:

1. Stochastic primal update:
(a) A term i ∈ [n] is chosen according to the distribution p, in time O(n).
(b) The primal variable z is updated according to the gradient of c_i(z), via an online low-regret update. This update is in fact a Stochastic Gradient Descent (SGD) step on the objective of (5), as explained in Section 4. Since we use only a single term c_i(z), this can usually be done in time O(d).

2. Stochastic dual update:
(a) We obtain a stochastic estimate of c_i(z), for each i ∈ [n]. We would like an estimator that has bounded variance and can be computed in O(1) time per term, i.e. in O(n) time overall. When the c_i's are linear functions, this can be achieved using a form of ℓ2-sampling for estimating an inner product in R^d (a minimal sketch is given at the end of this section).
(b) The distribution p is updated toward those terms with low estimated values of c_i(z). This is accomplished using a variant of the Multiplicative Updates (MW) framework for online optimization over the simplex (see for example [1]), adapted to our case in which the updates are based on random variables with bounded variance. This can be done in time O(n).

Evidently, the overall runtime per iteration is O(n + d). In addition, the regret bounds on the updates of z and p can be used to bound the number of iterations required to reach an ε-suboptimal solution. Hence, the CHW approach is particularly effective when this regret bound has a favorable dependence on d and n. As we note below, this is not the case in our application, and we shall need some additional machinery to proceed.

The PST framework. The Plotkin-Shmoys-Tardos framework [10] is a deterministic primal-dual method, originally proposed for approximately solving certain types of linear programs known as "fractional packing and covering" problems. The same idea, however, applies also to saddle-point problems of the form (5). In each iteration of this method, the primal variable z is updated by solving the "simple" optimization problem $\max_{z\in K}\sum_{i=1}^{n} p_i c_i(z)$ (where p is now fixed), while the dual variable p is again updated using a MW step (note that we do not use an estimate of c_i(z) here, but rather the exact value). These iterations yield convergence to the optimum of (5), and the regret bound of the MW updates is used to derive a convergence rate guarantee. Since each iteration of the framework relies on the entire set of functions c_i, it is reasonable to apply it only to relatively small-sized problems. Indeed, in our application we shall use this method for the update of the slack variables ξ and the bias term b, for which the implied cost is only O(n) time.
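The ℓ2-sampling estimator referenced in step 2(a) above is simple enough to state in a few lines. A minimal NumPy sketch (ours; the interface is an assumption of this sketch, not part of the paper):

```python
import numpy as np

def l2_estimate_inner_products(X, w, rng):
    """Estimate all n inner products <x_i, w> using a single feature access
    per point: draw one coordinate j with probability w(j)^2 / ||w||^2 and
    return x_i(j) * ||w||^2 / w(j) for every i. The estimate is unbiased:
    E[x(j) ||w||^2 / w(j)] = sum_j (w(j)^2/||w||^2) * x(j) ||w||^2 / w(j)
                           = sum_j x(j) w(j) = <x, w>."""
    sq = w ** 2
    j = rng.choice(len(w), p=sq / sq.sum())
    return X[:, j] * sq.sum() / w[j]
```

Note that individual estimates can be large when |w(j)| is small, which is exactly why the algorithm of Section 4 clips them before feeding them to the MW update.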
Our hybrid approach. The saddle-point formulation (3) of SVM from Section 2 suggests that the SVM optimization problem can be efficiently approximated using primal-dual methods, and specifically using the CHW framework. Indeed, taking z = (w, b, ξ) and $K = B_d \times [-1,1] \times \Xi_\nu$, where $B_d \subseteq \mathbb{R}^d$ is the Euclidean unit ball and

$$\Xi_\nu = \{\xi \in \mathbb{R}^n \mid \forall i:\ 0 \le \xi_i \le 2,\ \|\xi\|_1 \le \nu n\},$$

we cast the problem into the form (4). However, as already pointed out, a naive application of the CHW framework yields in this case a rather slow convergence rate. Informally speaking, this is because our set K is "too large" and thus the involved regret grows too quickly.

In this work, we propose a novel hybrid approach for tackling problems such as (3), which combines the ideas of the CHW and PST frameworks. Specifically, we suggest using an SGD-like low-regret update for the variable w, while updating the variables ξ and b via a PST-like step; the dual update of our method is similar to that of CHW. Consequently, our algorithm enjoys the benefits of both methods, each in its respective domain, and avoids the problem originating from the "size" of K. We defer the detailed description of the method to the following section.

4 Algorithm and Analysis

In this section we present and analyze our algorithm, which we call SIMBA (for "Sublinear IMportance-sampling Bi-stochastic Algorithm"). The algorithm is a sublinear-time approximation algorithm for problem (3), which, as shown in Section 2, is a reformulation of the standard soft-margin SVM problem. For simplicity of presentation, we omit the bias term for now (i.e., fix b = 0 in (3)) and later explain how adding such a bias to our framework is almost immediate and does not affect the analysis. This allows us to ignore the labels y_i, by setting x_i ← −x_i for any i with y_i = −1.

Let us begin the presentation with some additional notation. To avoid confusion, we use the notation v(i) to refer to the i'th coordinate of a vector v. We also use the shorthand v² to denote the vector for which v²(i) = (v(i))² for all i. The n-vector whose entries are all 1 is denoted 1_n. Finally, we stack the training instances x_i as the rows of a matrix X ∈ R^{n×d}, although we treat each x_i as a column vector.

Algorithm 1 SVM-SIMBA
1: Input: ε > 0, 0 ≤ ν ≤ 1, and X ∈ R^{n×d} with x_i ∈ B_d for i ∈ [n].
2: Let T ← 100² ε⁻² log n, η ← √(log(n)/T), u_1 ← 0, q_1 ← 1_n, p_1 ← (1/n) 1_n
3: for t = 1 to T do
4:   Choose i_t ← i with probability p_t(i)
5:   Let u_t ← u_{t−1} + x_{i_t}/√(2T), ξ_t ← argmax_{ξ ∈ Ξ_ν} p_t^⊤ ξ
6:   w_t ← u_t / max{1, ‖u_t‖}
7:   Choose j_t ← j with probability w_t(j)²/‖w_t‖²
8:   for i = 1 to n do
9:     ṽ_t(i) ← x_i(j_t) ‖w_t‖² / w_t(j_t) + ξ_t(i)
10:    v_t(i) ← clip(ṽ_t(i), 1/η)
11:    q_{t+1}(i) ← q_t(i) (1 − η v_t(i) + η² v_t(i)²)
12:  end for
13:  p_{t+1} ← q_{t+1} / ‖q_{t+1}‖_1
14: end for
15: return w̄ = (1/T) Σ_t w_t and ξ̄ = (1/T) Σ_t ξ_t

The pseudo-code of the SIMBA algorithm is given as Algorithm 1. In the primal part (lines 4 through 6), the vector u_t is updated by adding an instance x_i, randomly chosen according to the distribution p_t. This is a version of SGD applied to the function p_t^⊤(Xw + ξ_t), whose gradient with respect to w is p_t^⊤ X; by the sampling procedure of i_t, the vector x_{i_t} is an unbiased estimator of this gradient. The vector u_t is then projected onto the unit ball, to obtain w_t. The primal variable ξ_t, on the other hand, is updated by a complete optimization of p_t^⊤ ξ with respect to ξ ∈ Ξ_ν. This is an instance of the PST framework, described in Section 3.
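A direct Python transcription of Algorithm 1 (ours, purely illustrative: it favors readability over the O(n + d) per-iteration cost, e.g. it sorts where linear-time selection would do, and it assumes the rows of X are nonzero):

```python
import numpy as np

def simba(X, nu, eps, rng=None):
    """Sketch of Algorithm 1 (SVM-SIMBA) without the bias term. X is an
    (n, d) array whose rows lie in the unit ball, with labels folded in
    (x_i <- y_i * x_i). Returns the averaged iterates (w_bar, xi_bar)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = X.shape
    T = int(np.ceil(100 ** 2 * eps ** -2 * np.log(n)))
    eta = np.sqrt(np.log(n) / T)
    u, q = np.zeros(d), np.ones(n)
    w_bar, xi_bar = np.zeros(d), np.zeros(n)
    for _ in range(T):
        p = q / q.sum()
        # Primal: SGD step on w (lines 4-6) ...
        i = rng.choice(n, p=p)
        u = u + X[i] / np.sqrt(2 * T)
        w = u / max(1.0, np.linalg.norm(u))
        # ... and an exact PST step on xi: greedily put mass 2 on the
        # largest p(i) until a budget of nu * n is spent.
        xi = np.zeros(n)
        budget = nu * n
        for idx in np.argsort(-p):
            xi[idx] = min(2.0, budget)
            budget -= xi[idx]
            if budget <= 0:
                break
        # Dual: l2-sample one coordinate, estimate X w + xi, clip, and
        # apply the multiplicative-weights update to q (lines 7-13).
        sq = w ** 2
        j = rng.choice(d, p=sq / sq.sum())
        v = np.clip(X[:, j] * sq.sum() / w[j] + xi, -1.0 / eta, 1.0 / eta)
        q = q * (1.0 - eta * v + eta ** 2 * v ** 2)
        w_bar += w / T
        xi_bar += xi / T
    return w_bar, xi_bar
```

This sketch is meant to pin down the data flow of the algorithm, not to reproduce the authors' implementation.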
Note that, by the structure of Ξ_ν, this update can be accomplished by a simple greedy algorithm that sets ξ_t(i) = 2 for the largest entries p_t(i) of p_t, until a total mass of νn is reached, and puts ξ_t(i) = 0 elsewhere; this can be implemented in O(n) time using standard selection algorithms.

In the dual part (lines 7 through 13), the algorithm first updates the vector q_t using the j_t'th column of X and the value of w_t(j_t), where j_t is randomly selected according to the distribution w_t²/‖w_t‖². This is a variant of the MW framework (see Definition 4.1 below) applied to the function p^⊤(Xw_t + ξ_t); the vector ṽ_t serves as an estimator of Xw_t + ξ_t, the gradient with respect to p. Note, however, that the algorithm uses a clipped version v_t of the estimator ṽ_t; see line 10, where we use the notation clip(z, C) = max(min(z, C), −C) for z, C ∈ R. This, in fact, makes v_t a biased estimator of the gradient. As we show in the analysis, while the clipping operation is crucial to the stability of the algorithm, the resulting slight bias is not harmful.

Before stating the main theorem, we describe in detail the MW algorithm we use for the dual update.

Definition 4.1 (Variance MW algorithm). Consider a sequence of vectors v_1, ..., v_T ∈ R^n and a parameter η > 0. The Variance Multiplicative Weights (Variance MW) algorithm is as follows. Let w_1 ← 1_n, and for t ≥ 1, p_t ← w_t/‖w_t‖_1 and

$$w_{t+1}(i) \leftarrow w_t(i)\,\big(1 - \eta v_t(i) + \eta^2 v_t(i)^2\big). \quad (6)$$

The following lemma establishes a regret bound for the Variance MW algorithm.

Lemma 4.2 (Variance MW Lemma). The Variance MW algorithm satisfies

$$\sum_{t\in[T]} p_t^\top v_t \ \le\ \min_{i\in[n]} \sum_{t\in[T]} \max\{v_t(i),\, -1/\eta\} \ +\ \frac{\log n}{\eta} \ +\ \eta \sum_{t\in[T]} p_t^\top v_t^2.$$

We now state the main theorem. Due to space limitations, we only give a sketch of the proof.

Theorem 4.3 (Main). The SIMBA algorithm above returns an ε-approximate solution to formulation (3) with probability at least 1/2. It can be implemented to run in time Õ(ε⁻²(n + d)).

Proof (sketch). The main idea of the proof is to establish lower and upper bounds on the average objective value $\frac{1}{T}\sum_{t\in[T]} p_t^\top (X w_t + \xi_t)$. Combining these bounds, we are able to relate the value of the output solution (w̄, ξ̄) to the value of the optimum of (3). In the following, we let (w*, ξ*) be the optimal solution of (3) and denote the value of this optimum by γ*.

For the lower bound, we consider the primal part of the algorithm. Noting that $\sum_{t\in[T]} p_t^\top \xi_t \ge \sum_{t\in[T]} p_t^\top \xi^*$ (which follows from the PST step) and employing a standard regret guarantee for bounding the regret of the SGD update, we obtain the lower bound (with probability ≥ 1 − O(1/n)):

$$\frac{1}{T}\sum_{t\in[T]} p_t^\top (X w_t + \xi_t) \ \ge\ \gamma^* - O\Big(\sqrt{\tfrac{\log n}{T}}\Big).$$

For the upper bound, we examine the dual part of the algorithm. Applying Lemma 4.2 to bound the regret of the MW update, we get the following upper bound (with probability ≥ 3/4 − O(1/n)):

$$\frac{1}{T}\sum_{t\in[T]} p_t^\top (X w_t + \xi_t) \ \le\ \min_{i\in[n]} \frac{1}{T}\sum_{t\in[T]} \big[x_i^\top w_t + \xi_t(i)\big] \ +\ O\Big(\sqrt{\tfrac{\log n}{T}}\Big).$$

Relating the two bounds, we conclude that $\min_{i\in[n]} [x_i^\top \bar{w} + \bar{\xi}(i)] \ge \gamma^* - O(\sqrt{\log(n)/T})$ with probability ≥ 1/2, and using our choice of T the claim follows.

Finally, we note the runtime. The algorithm makes T = O(ε⁻² log n) iterations. In each iteration, the update of the vectors w_t and p_t takes O(d) and O(n) time respectively, while ξ_t can be computed in O(n) time as explained above. The overall runtime is therefore Õ(ε⁻²(n + d)).
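For reference, the update (6) is a three-line loop. A minimal sketch (ours) of Definition 4.1; one detail worth a comment is that the multiplier 1 − ηv + η²v² is positive for every real v, so the weights can never become negative:

```python
import numpy as np

def variance_mw(payoffs, eta):
    """Sketch of the Variance MW algorithm (Definition 4.1). payoffs is a
    T x n array holding v_1, ..., v_T; returns the plays p_1, ..., p_T.
    Since 1 - x + x^2 > 0 for all real x, the weights stay positive."""
    T, n = payoffs.shape
    w = np.ones(n)
    plays = np.empty((T, n))
    for t in range(T):
        plays[t] = w / w.sum()
        v = payoffs[t]
        w = w * (1.0 - eta * v + (eta * v) ** 2)
    return plays
```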
Incorporating a bias term. We return to the optimization problem (3) presented in Section 2, and show how the bias term b can be integrated into our algorithm. Unlike with SGD-based approaches, including the bias term in our framework is straightforward. The only modification required to our algorithm as presented in Algorithm 1 occurs in lines 5 and 9, where the vector ξ_t is referenced. To additionally maintain a bias b_t, we change the optimization over ξ in line 5 to a joint optimization over both ξ and b:

$$(\xi_t, b_t) \leftarrow \operatorname{argmax}_{\xi\in\Xi_\nu,\ b\in[-1,1]}\ p_t^\top(\xi + b\cdot y),$$

and use the computed b_t in the dual update, in line 9:

$$\tilde{v}_t(i) \leftarrow x_i(j_t)\,\|w_t\|^2 / w_t(j_t) + \xi_t(i) + y_i b_t,$$

while returning the average bias $\bar{b} = \frac{1}{T}\sum_{t\in[T]} b_t$ in the output of the algorithm. Notice that we still assume that the labels y_i were subsumed into the instances x_i, as in Section 4. The update of ξ_t is thus unchanged and can be carried out as described in Section 4. The update of b_t, on the other hand, admits a simple, closed-form formula: b_t = sign(p_t^⊤ y). Evidently, the running time of each iteration remains O(n + d), as before. The adaptation of the analysis to this case, which involves only a change of constants, is technical and straightforward.

The sparse case. We conclude the section with a short discussion of the common situation in which the instances are sparse, that is, each instance contains very few non-zero entries. In this case, we can implement Algorithm 1 so that each iteration takes Õ(ρ(n + d)), where ρ is the overall data sparsity ratio. Implementing the vector updates is straightforward, using a data representation similar to [12]. In order to implement the sampling operations in time O(log n) and O(log d), we maintain a tree over the points and coordinates, with internal nodes caching the combined (unnormalized) probability mass of their descendants (a minimal sketch of such a tree is given below).
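As promised above, one way to realize such a tree is a binary "sum tree". The class below is our own illustrative sketch (the name and API are invented here), not the data structure of [12]:

```python
import numpy as np

class SumTree:
    """m nonnegative weights with O(log m) updates and O(log m) sampling
    proportional to weight. Internal node i caches the total weight of
    its children 2i and 2i+1; leaves live at positions m .. 2m-1."""
    def __init__(self, m):
        self.m = m
        self.tree = np.zeros(2 * m)

    def update(self, i, weight):
        pos = self.m + i
        self.tree[pos] = weight
        pos //= 2
        while pos >= 1:                   # propagate sums up to the root
            self.tree[pos] = self.tree[2 * pos] + self.tree[2 * pos + 1]
            pos //= 2

    def sample(self, rng):
        r = rng.random() * self.tree[1]   # uniform in [0, total weight)
        pos = 1
        while pos < self.m:               # descend toward the chosen leaf
            if r < self.tree[2 * pos]:
                pos = 2 * pos
            else:
                r -= self.tree[2 * pos]
                pos = 2 * pos + 1
        return pos - self.m
```

In SIMBA, one such tree over the n points (weights q_t(i)) and one over the d coordinates (weights w_t(j)²) would support the two sampling steps; we make no attempt here to reproduce the full sparse bookkeeping needed for the Õ(ρ(n + d)) bound.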
5 Runtime Analysis for Learning

In Section 4 we saw how to obtain an ε-approximate solution to the optimization problem (3) in time Õ(ε⁻²(n + d)). Combining this with Lemma 2.1, we see that for any Pareto optimal point w* of (1) with ‖w*‖ = B and $\hat{R}_{\mathrm{hinge}}(w^*) = \hat{R}^*$, the runtime required for our method to find a predictor with ‖w‖ ≤ 2B and $\hat{R}_{\mathrm{hinge}}(w) \le \hat{R}^* + \hat{\epsilon}$ is

$$\tilde{O}\left(B^2\left(\frac{\hat{R}^* + \hat{\epsilon}}{\hat{\epsilon}}\right)^2 (n + d)\right). \quad (7)$$

This guarantee is rather different from the guarantees for other SVM optimization approaches. E.g., using a stochastic gradient descent (SGD) approach, we could find a predictor with ‖w‖ ≤ B and $\hat{R}_{\mathrm{hinge}}(w) \le \hat{R}^* + \hat{\epsilon}$ in time O(B²d/ε̂²). Compared with SGD, we only ensure a constant factor approximation to the norm, and our runtime does depend on the training set size n, but the dependence on ε̂ is more favorable. This makes it difficult to compare the guarantees and suggests that a different form of comparison is needed.

Following [13], instead of comparing the runtime to achieve a certain optimization accuracy on the empirical optimization problem, we analyze the runtime to achieve a desired generalization performance. Recall that our true learning objective is to find a predictor with low generalization error $R_{\mathrm{err}}(w) = \Pr_{(x,y)}(y\langle w, x\rangle \le 0)$, where x, y are distributed according to some unknown source distribution, and the training set is drawn i.i.d. from this distribution. We assume that there exists some (unknown) predictor w* that has norm ‖w*‖ ≤ B and low expected hinge loss $R^* = R_{\mathrm{hinge}}(w^*) = \mathbb{E}\big[[1 - y\langle w^*, x\rangle]_+\big]$, and analyze the runtime to find a predictor w with generalization error R_err(w) ≤ R* + ε.

In order to understand the runtime from this perspective, we must consider the sample size required for generalization to within ε, as well as the required suboptimality for ‖w‖ and $\hat{R}_{\mathrm{hinge}}(w)$. The standard SVM analysis calls for a sample size of n = O(B²/ε²). But since, as we will see, our analysis is sensitive to the value of R*, we consider a more refined generalization guarantee that gives a better rate when R* is small relative to ε. Following Theorem 5 of [14] (and recalling that the hinge loss is an upper bound on margin violations), we have that with high probability over a sample of size n, for all predictors w:

$$R_{\mathrm{err}}(w) \ \le\ \hat{R}_{\mathrm{hinge}}(w) + O\left(\sqrt{\frac{\|w\|^2\,\hat{R}_{\mathrm{hinge}}(w)}{n}} + \frac{\|w\|^2}{n}\right). \quad (8)$$

This implies that a training set of size

$$n = \tilde{O}\left(\frac{B^2}{\epsilon}\cdot\frac{R^* + \epsilon}{\epsilon}\right) \quad (9)$$

is enough for generalization to within ε. We will be mostly concerned here with the regime where R* is small and we seek generalization to within ε = Ω(R*), a typical regime in learning. This is always the case in the realizable setting, where R* = 0, but it includes also the non-realizable setting, as long as the desired estimation error ε is not much smaller than the unavoidable error R*. In such a regime, the second factor in (9) is of order one. In fact, an online approach¹ can find a predictor with R_err(w) ≤ R* + ε with a single pass over n = Õ(B²/ε · (ε + R*)/ε) training points. Since each step takes O(d) time (essentially the time required to read the training point), the overall runtime is:

$$O\left(d\cdot\frac{B^2}{\epsilon}\cdot\frac{R^* + \epsilon}{\epsilon}\right). \quad (10)$$

¹ The Perceptron rule, which amounts to SGD on R_hinge(w), ignoring correctly classified points [7, 3].

Returning to our approach, approximating the norm to within a factor of two is fine, as it only affects the required sample size, and hence the runtime, by a constant factor. In particular, in order to ensure R_err(w) ≤ R* + ε it is enough to have ‖w‖ ≤ 2B, optimize the empirical hinge loss to within ε̂ = ε/2, and use a sample size as specified in (9) (where we actually consider a radius of 2B and require generalization to within ε/4, but this is subsumed in the constant factors). Plugging this into the runtime analysis (7) yields:

Corollary 5.1. For any B ≥ 1 and ε > 0, with high probability over a training set of size n = Õ(B²/ε · (ε + R*)/ε), Algorithm 1 outputs a predictor w with R_err(w) ≤ R* + ε in time

$$\tilde{O}\left(\left(\frac{\epsilon + R^*}{\epsilon}\right)^2\left(B^2 d + \frac{B^4}{\epsilon}\cdot\frac{\epsilon + R^*}{\epsilon}\right)\right)$$

where $R^* = \inf_{\|w^*\|\le B} R_{\mathrm{hinge}}(w^*)$.

Let us compare the above runtime to the online runtime (10), focusing on the regime where R* is small and ε = Ω(R*), so that (R* + ε)/ε = O(1), and ignoring the logarithmic factors hidden in the Õ(·) notation in Corollary 5.1. To do so, we first rewrite the runtime in Corollary 5.1 as:

$$\tilde{O}\left(d\cdot\frac{B^2}{\epsilon}\cdot\frac{R^* + \epsilon}{\epsilon}\cdot(R^* + \epsilon) \ +\ \frac{B^4}{\epsilon}\left(\frac{R^* + \epsilon}{\epsilon}\right)^3\right). \quad (11)$$

In order to compare the runtimes, we must consider the relative magnitudes of the dimensionality d and the norm B. Recall that using a norm-regularized approach, such as SVM, makes sense only when d ≫ B². Otherwise, the low dimensionality alone would guarantee good generalization, and we wouldn't gain anything from regularizing the norm. And so, at least when (R* + ε)/ε = O(1), the first term in (11) is the dominant term and we should compare it with (10). More generally, we will see an improvement as long as d ≫ B²((R* + ε)/ε)². Now, the first term in (11) is more directly comparable to the online runtime (10): it is always smaller, by a factor of (R* + ε) ≤ 1. This factor, then, is the improvement over the online approach, or more generally, over any approach that considers entire sample vectors (as opposed to individual features). We see, then, that our proposed approach can yield a significant reduction in runtime when the resulting error rate is small. Taking into account the hidden logarithmic factors, we get an improvement as long as (R* + ε) = O(1/log(B²/ε)).
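To get a feel for the gap, here is a tiny back-of-the-envelope computation, using the forms of (10) and (11) as reconstructed above; the numbers are invented purely for illustration:

```python
# Invented numbers: low noise, eps = Theta(R_star), and d >> B^2.
B, d = 10.0, 10 ** 6
R_star, eps = 0.01, 0.01

online = d * B ** 2 / eps * (R_star + eps) / eps                  # cf. (10)
sublinear = (d * B ** 2 / eps * (R_star + eps) / eps * (R_star + eps)
             + B ** 4 / eps * ((R_star + eps) / eps) ** 3)        # cf. (11)
print(online / sublinear)  # approx. 49, i.e. roughly 1/(R_star + eps)
```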
Returning to the form of the runtime in Corollary 5.1, we can also understand the runtime as follows: initially, a runtime of O(B²d) is required in order for the estimates of w and p to start being reasonable. However, this runtime does not depend on the desired error (as long as ε = Ω(R*), including when R* = 0), and after this initial runtime investment, once w and p are "reasonable", we can continue decreasing the error toward R* with a runtime that depends only on the norm, but is independent of the dimensionality.

6 Experiments

In this section we present preliminary experimental results that demonstrate situations in which our approach has an advantage over SGD-based methods. To this end, we compare the performance of our algorithm to that of the state-of-the-art Pegasos algorithm [12], a popular SGD variant for solving SVM. The experiments were performed with two standard, large-scale data sets:

- The news20 data set of [9], which has 1,355,191 features and 19,996 examples. We split the data set into a training set of 8,000 examples and a test set of 11,996 examples.
- The real vs. simulated data set of McCallum, with 20,958 features and 72,309 examples. We split the data set into a training set of 20,000 examples and a test set of 52,309 examples.

We implemented the SIMBA algorithm exactly as in Section 4, with a single modification: we used a time-adaptive learning rate η_t = √(log(n)/t), and similarly an adaptive SGD step-size (in line 5), instead of leaving them constant. While this version of the algorithm is more convenient to work with, we found that in practice its performance is almost equivalent to that of the original algorithm. In both experiments, we tuned the tradeoff parameter of each algorithm (i.e., ν and λ) so as to obtain the lowest possible error over the test set. Note that our algorithm assumes random access to features (as opposed to instances), so it is not meaningful to compare the test error as a function of the number of iterations of each algorithm. Instead, and in line with our computational model, we compare the test error as a function of the number of feature accesses of each algorithm.

[Figure 1: The test error, averaged over 10 repetitions, vs. the number of feature accesses, on the real vs. simulated (left) and news20 (right) data sets. The error bars depict one standard deviation of the measurements. Legend: left, SIMBA with ν = 5 × 10⁻⁵ and Pegasos with λ = 5 × 10⁻⁵; right, SIMBA with ν = 1 × 10⁻³ and Pegasos with λ = 1.25 × 10⁻⁴.]

The results, averaged over 10 repetitions, are presented in Figure 1, along with the parameters we used. As can be seen from the graphs, on both data sets our algorithm attains the same test error that Pegasos achieves at the optimum, using about 100 times fewer feature accesses.
7 Summary

Building on ideas first introduced by [4], we presented a stochastic-primal-stochastic-dual approach that solves a non-separable linear SVM optimization problem in sublinear time, yielding a learning method that, in a certain regime, beats SGD and runs in less time than the size of the training set required for learning. We also showed some encouraging preliminary experiments, and we expect further work can yield significant gains, either by improving our method or by borrowing from the ideas and innovations introduced, including:

- Using importance weighting, and stochastically updating the importance weights in a dual stochastic step.
- Explicitly introducing the slack variables (which are typically not represented in primal SGD approaches). This allows us to differentiate between an accounted-for margin mistake and a constraint violation for which we have not yet assigned enough "slack" and on which we want to focus our attention. This differs from heuristic importance-weighting approaches for stochastic learning, which tend to focus on all samples with a non-zero loss gradient.
- Employing the PST methodology when the standard low-regret tools fail to apply.

We believe that our ideas and framework can also be applied to more complex situations where much computational effort is currently being spent, including highly multiclass and structured SVMs, latent SVMs [6], and situations where features are very expensive to calculate but can be calculated on demand. The ideas can also be extended to kernels, either through linearization [11], using an implicit linearization as in [4], or through a representation approach. Beyond SVMs, the framework can apply more broadly, whenever we have a low-regret method for the primal problem and a sampling procedure for the dual updates. E.g., we expect the approach to be successful for ℓ1-regularized problems, and are working in this direction.

Acknowledgments

This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. This publication only reflects the authors' views.

References

[1] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta algorithm and applications. Manuscript, 2005.
[2] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. Advances in Neural Information Processing Systems, 20:161-168, 2008.
[3] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. Information Theory, IEEE Transactions on, 50(9):2050-2057, 2004.
[4] K.L. Clarkson, E. Hazan, and D.P. Woodruff. Sublinear optimization for machine learning. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pages 449-457. IEEE, 2010.
[5] K. Deng, C. Bourke, S. Scott, J. Sunderman, and Y. Zheng. Bandit-based algorithms for budgeted learning. In Data Mining, 2007. ICDM 2007. Seventh IEEE International Conference on, pages 463-468. IEEE, 2007.
[6] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
[7] C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3):265-299, 2003.
[8] A. Kapoor and R. Greiner. Learning and classifying under hard budgets. Machine Learning: ECML 2005, pages 170-181, 2005.
[9] S.S. Keerthi and D. DeCoste. A modified finite Newton method for fast solution of large scale linear SVMs. Journal of Machine Learning Research, 6(1):341, 2006.
[10] S.A. Plotkin, D.B. Shmoys, and É. Tardos. Fast approximation algorithms for fractional packing and covering problems. In Proceedings of the 32nd Annual Symposium on Foundations of Computer Science, pages 495-504. IEEE Computer Society, 1991.
[11] A. Rahimi and B. Recht. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems, 20:1177-1184, 2008.
[12] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the 24th International Conference on Machine Learning, pages 807-814. ACM, 2007.
[13] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, pages 928-935, 2008.
[14] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In Advances in Neural Information Processing Systems 23, pages 2199-2207, 2010.
REMARKS ON INTERPOLATION AND RECOGNITION USING NEURAL NETS

Eduardo D. Sontag*
SYCON - Center for Systems and Control
Rutgers University
New Brunswick, NJ 08903

*E-mail: [email protected]

Abstract

We consider different types of single-hidden-layer feedforward nets: with or without direct input to output connections, and using either threshold or sigmoidal activation functions. The main results show that direct connections in threshold nets double the recognition but not the interpolation power, while using sigmoids rather than thresholds allows (at least) doubling both. Various results are also given on VC dimension and other measures of recognition capabilities.

1 INTRODUCTION

In this work we continue to develop the theme of comparing threshold and sigmoidal feedforward nets. In (Sontag and Sussmann, 1989) we showed that the "generalized delta rule" (backpropagation) can give rise to pathological behavior, namely the existence of spurious local minima even when no hidden neurons are used, in contrast to the situation that holds for threshold nets. On the other hand, in (Sontag and Sussmann, 1989) we remarked that, provided that the right variant be used, separable sets do give rise to globally convergent back propagation, in complete analogy to the classical perceptron learning theorem. These results and those obtained by other authors probably settle most general questions about the case of no hidden units, so the next step is to look at the case of single hidden layers.

In (Sontag, 1989) we announced the fact that sigmoidal activations (at least) double recognition power. Here we provide details, and we make several further remarks on this as well as on the topic of interpolation. Nets with one hidden layer are known to be in principle sufficient for arbitrary recognition tasks. This follows from the approximation theorems proved by various authors: (Funahashi, 1988), (Cybenko, 1989), and (Hornik et al., 1989). However, what is far less clear is how many neurons are needed for achieving a given recognition, interpolation, or approximation objective. This is of importance both in its practical aspects (having rough estimates of how many neurons will be needed is essential when applying back propagation) and in evaluating generalization properties (larger nets tend to lead to poorer generalization). It is known and easy to prove (see for instance (Arai, 1989), (Chester, 1990)) that one can basically interpolate values at any n + 1 points using an n-neuron net, and in particular that any (n + 1)-point set can be dichotomized by such nets. Among other facts, we point out here that allowing direct input to output connections permits doubling the recognition power to 2n, and the same result is achieved if sigmoidal neurons are used but such direct connections are not allowed. Further, we remark that approximate interpolation of 2n − 1 points is also possible, provided that sigmoidal units be employed (but direct connections in threshold nets do not suffice).

The dimension of the input space (that is, the number of "input units") can influence the number of neurons needed, at least for dichotomy problems on suitably chosen sets. In particular, Baum had shown some time back (Baum, 1988) that the VC dimension of threshold nets with a fixed number of hidden units is at least proportional to this dimension. We give lower bounds, in dimension two, at least doubling the VC dimension if sigmoids or direct connections are allowed.
Lack of space precludes the inclusion of proofs; references to technical reports are given as appropriate. A full-length version of this paper is also available from the author.

2 DICHOTOMIES

The first few definitions are standard. Let N be a positive integer. A dichotomy or two-coloring (S₋, S₊) on a set S ⊆ R^N is a partition S = S₋ ∪ S₊ of S into two disjoint subsets. A function f : R^N → R will be said to implement this dichotomy if it holds that f(u) > 0 for u ∈ S₊ and f(u) < 0 for u ∈ S₋.

Let F be a class of functions from R^N to R, assumed to be nontrivial, in the sense that for each point u ∈ R^N there is some f₁ ∈ F so that f₁(u) > 0 and some f₂ ∈ F so that f₂(u) < 0. This class shatters the set S ⊆ R^N if each dichotomy on S can be implemented by some f ∈ F.

Here we consider, for any class of functions F as above, the following measures of classification power. First we introduce $\bar{\mu}$ and $\underline{\mu}$, dealing with "best" and "worst" cases respectively: $\bar{\mu}(F)$ denotes the largest integer l ≥ 1 (possibly ∞) so that there is at least some set S of cardinality l in R^N which can be shattered by F, while $\underline{\mu}(F)$ is the largest integer l ≥ 1 (possibly ∞) so that every set of cardinality l can be shattered by F. Note that by definition, $\underline{\mu}(F) \le \bar{\mu}(F)$ for every class F. In particular, the definitions imply that no set of cardinality $\bar{\mu}(F) + 1$ can be shattered, and that there is at least some set of cardinality $\underline{\mu}(F) + 1$ which cannot be shattered. The integer $\bar{\mu}$ is usually called the Vapnik-Chervonenkis (VC) dimension of the class F (see for instance (Baum, 1988)), and appears in formalizations of learning in the distribution-free sense.

A set may fail to be shattered by F because it is very special (see the example below with colinear points). In that sense, a more robust measure is useful: μ(F) is the largest integer l ≥ 1 (possibly ∞) for which the class of sets S that can be shattered by F is dense, in the sense that given every l-element set S = {s₁, ..., s_l} there are points s̃ᵢ arbitrarily close to the respective sᵢ's such that S̃ = {s̃₁, ..., s̃_l} can be shattered by F. Note that

$$\underline{\mu}(F) \ \le\ \mu(F) \ \le\ \bar{\mu}(F) \quad (1)$$

for all F. To obtain an upper bound m for μ(F), one needs to exhibit an open class of sets of cardinality m + 1 none of which can be shattered.

Take as an example the class F consisting of all affine functions f(x) = ax + by + c on R². Since any three points can be shattered by an affine map provided that they are not colinear (just choose a line ax + by + c = 0 that separates any point which is colored differently from the rest), it follows that 3 ≤ μ. On the other hand, no set of four points can ever be dichotomized in all possible ways, which implies that $\bar{\mu} \le 3$ and therefore $\mu = \bar{\mu} = 3$ for this class. (The negative statement can be verified by a case-by-case analysis: if the four points form the vertices of a 4-gon, color them in "XOR" fashion, alternate vertices of the same color; if 3 form a triangle and the remaining one is inside, color the extreme points differently from the remaining one; if all are colinear, use an alternating coloring.) Finally, since there is some set of 3 points which cannot be dichotomized (any set of three colinear points is like this), but every set of two can, $\underline{\mu} = 2$.
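The affine example can be checked mechanically. The following brute-force sketch (ours; it decides strict linear separability with one linear program per coloring, via SciPy) tests whether a finite planar point set is shattered by the affine functions f(x, y) = ax + by + c:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def shatters_affine(points):
    """Can every two-coloring of the given 2-D points be implemented by
    some affine f(x, y) = a*x + b*y + c (sign-separated)?"""
    P = np.asarray(points, dtype=float)
    m = len(P)
    A = np.hstack([P, np.ones((m, 1))])   # rows [x_i, y_i, 1]
    for signs in itertools.product([-1.0, 1.0], repeat=m):
        s = np.array(signs)
        # Feasible iff some v = (a, b, c) satisfies s_i * (A_i . v) >= 1;
        # strict separation is scale-invariant, so margin 1 loses nothing.
        res = linprog(c=np.zeros(3), A_ub=-(s[:, None] * A),
                      b_ub=-np.ones(m), bounds=[(None, None)] * 3)
        if not res.success:
            return False
    return True

print(shatters_affine([(0, 0), (1, 0), (0, 1)]))          # True: not colinear
print(shatters_affine([(0, 0), (1, 0), (2, 0)]))          # False: colinear
print(shatters_affine([(0, 0), (1, 0), (0, 1), (1, 1)]))  # False: 4 points
```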
We shall say that F is robust if, whenever S can be shattered by F, every small enough perturbation of S can also be shattered. For a robust class and l = μ(F), every set in an open dense subset in the above topology, i.e. almost every set of l elements, can be shattered.

3 NETS

We define a "neural net" as a function of a certain type, corresponding to the idea of feedforward interconnections, via additive links, of neurons each of which has a scalar response or activation function θ.

Definition 3.1. Let θ : R → R be any function. A function f : R^N → R is a single-hidden-layer neural net with k hidden neurons of type θ and N inputs, or just a (k, θ)-net, if there are real numbers w₀, w₁, ..., w_k, τ₁, ..., τ_k and vectors v₀, v₁, ..., v_k ∈ R^N such that, for all u ∈ R^N,

$$f(u) = w_0 + v_0\cdot u + \sum_{i=1}^{k} w_i\,\theta(v_i\cdot u - \tau_i) \quad (2)$$

where the dot indicates inner product. A net with no direct i/o connections is one for which v₀ = 0.

For fixed θ, and under mild assumptions on θ, such neural nets can be used to approximate uniformly arbitrary continuous functions on compacts. In particular, they can be used to implement arbitrary dichotomies.

In neural net practice, one often takes θ to be the standard sigmoid σ(x) = 1/(1 + e^{−x}) or, equivalently up to translations and changes of coordinates, the hyperbolic tangent tanh(x). Another usual choice is the hardlimiter, threshold, or Heaviside function

$$\mathcal{H}(x) = \begin{cases} 0 & \text{if } x \le 0 \\ 1 & \text{if } x > 0 \end{cases}$$

which can be approximated well by σ(γx) when the "gain" γ is large. Yet another possibility is the use of the piecewise linear function

$$\pi(x) = \begin{cases} -1 & \text{if } x \le -1 \\ 1 & \text{if } x > 1 \\ x & \text{otherwise.} \end{cases}$$

Most analysis has been done for H and no direct connections, but numerical techniques typically use the standard sigmoid (or equivalently tanh). The activation π will be useful as an example for which sharper bounds can be obtained. The examples σ and π, but not H, are particular cases of the following more general type of activation function:

Definition 3.2. A function θ : R → R will be called a sigmoid if these two properties hold:
(S1) t₊ := lim_{x→+∞} θ(x) and t₋ := lim_{x→−∞} θ(x) exist, and t₊ ≠ t₋.
(S2) There is some point c such that θ is differentiable at c and θ'(c) = μ ≠ 0.

All the examples above lead to robust classes, in the sense defined earlier. More precisely, assume that θ is continuous except for at most finitely many points x, and is left continuous at such x, and let F be the class of (k, θ)-nets, for any fixed k. Then F is robust, and the same statement holds for nets with no direct connections.
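A minimal NumPy sketch (ours) of Definition 3.1 with the three activations just discussed; the function and parameter names are our own:

```python
import numpy as np

def sigma(x):      # standard sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def heaviside(x):  # threshold activation H: 0 for x <= 0, 1 for x > 0
    return (x > 0).astype(float)

def pi_lin(x):     # piecewise linear activation "pi"
    return np.clip(x, -1.0, 1.0)

def k_theta_net(u, w0, v0, W, V, tau, theta):
    """Evaluate (2): f(u) = w0 + v0.u + sum_i W[i] * theta(V[i].u - tau[i]).
    Setting v0 = 0 gives a net with no direct input/output connections."""
    u = np.asarray(u, dtype=float)
    return w0 + v0 @ u + W @ theta(V @ u - tau)

# Example: a (2, H)-net on R^2 with no direct connections.
V = np.array([[1.0, 0.0], [0.0, 1.0]])
print(k_theta_net([0.3, -0.2], w0=0.0, v0=np.zeros(2),
                  W=np.array([1.0, -1.0]), V=V,
                  tau=np.zeros(2), theta=heaviside))   # prints 1.0
```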
These results are proved in (Sontag, 1990a). The first inequality in Theorem 2 follows from the results in (Baum, 1988), who in fact established a lower bound of 2N l ~ J for J-L(k, 1t, N) (and hence for J-L too), for every N, not just N = 2 as in the ~eorem above. We conjecture, but have as yet been unable to prove, that direct connections or sigmoids should also improve these bounds by at least a factor of 2, just as in the two-dimensional case and in the worst-case analysis. Because of Lemma 4.2, the last statements in Theorems 1 and 3 are consequences of the previous two. 5 SOME PARTICULAR ACTIVATION FUNCTIONS Consider the last inequality in Theorem 1. For arbitrary sigmoids, this is far too conservative, as the number J-L can be improved considerably from 2k, even made infinite (see below). We conjecture that for the important practical case 9(x) = O'(x) it is close to optimal, but the only upper bounds that we have are still too high. For the piecewise linear function 11", at least, one has equality: Lemma 5.1 ~(k, 11") = 2k. It is worth remarking that there are sigmoids 9, as differentiable as wanted, even real-analytic, where all classification measures are infinite. Of course, the function 9 is so complicated that there is no reasonably "finite" implementation for it. This remark is only of theoretical interest, to indicate that, unless further restrictions are made on (S1)-(S2), much better bounds can be obtained. (If only J-L and J-L are desired to be infinite, one may also take the simpler example 9( x) sin( x). Note that for any I rationally independent real numbers Xi, the vectors of the form (sin(-YIXI), ... , sin(-y,xr), with the 'Yi'S real, form a dense subset of [-1,1]', so all dichotomies on {Xl,"" xd can be implemented with (1, sin)-nets.) = Lemma 5.2 There is some sigmoid 9, which can be taken to be an analytic function , so that J-L(1, 9) = 00. 943 944 Sontag 6 INTERPOLATION We now consider the following approximate interpolation problem. Assume given a sequence of k (distinct) points Xl, ??? , Xk in RN, any ? > 0, and any sequence of real numbers YI,"" Yk, as well as some class :F of functions from JRN to JR. We ask if there exists some (3) I E :F so that I/(xd - yd < ? for each i. Let ~(:F) be the largest integer k ~ 1, possibly infinite, so that for every set of data as above (3) can be solved. Note that, obviously, ~(:F) ~ p,(:F). Just as in Lemma 4.1, .1. is independent of the dimension N when applied to nets. Thus we let .1.d(k, B) and .1.(k, B) be respectively the values of .1.(:F) when applied to (k, B)-nets with or without direct connections. We now summarize properties of.1.. The next result -see (Sontag,1991), as well as the full version of this paper, for a proof- should be compared with Theorem 1. The main difference is in the second equality. Note that one can prove .1.( k, B) ~ ~d(k - 1,1l), in complete analogy with the case of p" but this is not sufficient anymore to be able to derive the last inequality in the Theorem from the second equality. Theorem 4 For any continuous sigmoid B, and lor each k, .1.(k,1l) ,Ad(k,1l) .1.(k, B) k+1 k+2 > 2k - 1 . Remark 6.1 Thus we can approximately interpolate any 2k - 1 points using k sigmoidal neurons. 
It is not hard to prove as a corollary that, for the standard sigmoid, this approximate interpolation property holds in the following stronger sense: for an open dense set of 2k - 1 points, one can achieve an open dense set of values; the proof involves looking first at points with rational coordinates, and using that on such points one is dealing basically with rational functions (after a diffeomorphism), plus some theory of semialgebraic sets. We conjecture that one 2 this is easy to should be able to interpolate at 2k points. Note that for k achieve: just choose the slope d so that some Zi - Zi+l becomes zero and the Zi are allowed to be nonincreasing or nondecreasing. The same proof, changing the signs if necessary, gives the wanted net. For some examples, it is quite easy to get 2k points. For instance, .1.(k,1r) = 2k for the piecewise linear sigmoid 1r. 0 = 7 FURTHER REMARKS The main conclusion from Theorem 1 is that sigmoids at least double recognition power for arbitrary sets. It may be the case that p,(k, (7, N)j p,(k, 1l, N) : : : : 2 for all N; this is true for N = 1 and is strongly suggested by Theorem 3 (the first bound appears to be quite tight). Unfortunately the proof of this theorem is based on a result from (Asano et. al., 1990) regarding arrangements of points in the plane, a fact which does not generalize to dimension three or higher. One may also compare the power of nets with and without connections, or threshold vs sigmoidal processors, on Boolean problems. For instance, it is a trivial consequence from the given results that parity on n bits can be computed with rni1l Remarks on Interpolation and Recognition Using Neural Nets hidden sigmoidal units and no direct connections, though requiring (apparently, though this is an open problem) n thresholds. In addition, for some families of Boolean functions, the gap between sigmoidal nets and threshols nets may be infinitely large (Sontag, 1990a). See (Sontag, 1990b) for representation properties of two-hidden-Iayer nets Acknow ledgements This work was supported in part by Siemens Corporate Research, and in part by the CAIP Center, Rutgers University. References Arai, M., "Mapping abilities of three-layer neural networks," Proc. IJCNN Int. Joint Conf.on Neural Networks, Washington, June 18-22, 1989, IEEE Publications, 1989, pp. 1-419/424. Asano,T., J. Hershberger, J. Pach, E.D. Sontag, D. Souivaine, and S. Suri, "Separating Bi-Chromatic Points by Parallel Lines," Proceedings of the Second Canadian Conference on Computational Geometry, Ottawa, Canada, 1990, p. 46-49. Baum, E.B., "On the capabilities of multilayer perceptrons," J.Complexity 4(1988): 193-215. Chester, D., "Why two hidden layers and better than one," Proc. Int. Joint Conf. on Neural Networks, Washington, DC, Jan. 1990, IEEE Publications, 1990, p. 1.265268. Cybenko, G., "Approximation by superpositions of a sigmoidal function," Math. Control, Signals, and Systems 2(1989): 303-314. Funahashi, K., "On the approximate realization of continuous mappings by neural networks," Proc. Int. Joint Conf. on Neural Networks, IEEE Publications, 1988, p. 1.641-648. Hornik, K.M., M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks 2(1989): 359-366. Sontag, E.D., "Sigmoids distinguish better than Heavisides," Neural Computation 1(1989): 470-472. Sontag, E.D., "On the recognition capabilities of feedforward nets," Report SYCON-90-03, Rutgers Center for Systems and Control, April 1990. 
Sontag, E.D., "Feedback Stabilization Using Two-Hidden-Layer Nets," Report SYCON-90-11, Rutgers Center for Systems and Control, October 1990. Sontag, E.D., "Capabilities and training of feedforward nets," in Theory and Applications of Neural Networks (R. Mammone and J. Zeevi, eds.), Academic Press, NY, 1991, to appear. Sontag, E.D., and H.J. Sussmann, "Back propagation can give rise to spurious local minima even for networks without hidden layers," Complex Systems 3(1989): 91106. Sontag, E.D., and H.J. Sussmann, "Backpropagation separates where perceptrons do," Neural Networks(1991), to appear. 945
Co-regularized Multi-view Spectral Clustering

Abhishek Kumar*, Dept. of Computer Science, University of Maryland, College Park, MD, [email protected]
Piyush Rai*, Dept. of Computer Science, University of Utah, Salt Lake City, UT, [email protected]
Hal Daumé III, Dept. of Computer Science, University of Maryland, College Park, MD, [email protected]

Abstract

In many clustering problems, we have access to multiple views of the data, each of which could be individually used for clustering. Exploiting information from multiple views, one can hope to find a clustering that is more accurate than the ones obtained using the individual views. Often these different views admit the same underlying clustering of the data, so we can approach this problem by looking for clusterings that are consistent across the views, i.e., corresponding data points in each view should have the same cluster membership. We propose a spectral clustering framework that achieves this goal by co-regularizing the clustering hypotheses, and propose two co-regularization schemes to accomplish this. Experimental comparisons with a number of baselines on two synthetic and three real-world datasets establish the efficacy of our proposed approaches.

1 Introduction

Many real-world datasets have representations in the form of multiple views [1, 2]. For example, webpages usually consist of both the page-text and hyperlink information; images on the web have captions associated with them; in multi-lingual information retrieval, the same document has multiple representations in different languages, and so on. Although these individual views might be sufficient on their own for a given learning task, they can often provide complementary information to each other, which can lead to improved performance on the learning task at hand.

In the context of data clustering, we seek a partition of the data based on some similarity measure between the examples. Out of the numerous clustering algorithms, spectral clustering has gained considerable attention in the recent past due to its strong performance on arbitrarily shaped clusters, and due to its well-defined mathematical framework [3]. Spectral clustering is accomplished by constructing a graph from the data points with edges between them representing the similarities, and solving a relaxation of the normalized min-cut problem on this graph [4].

For the multi-view clustering problem, we work with the assumption that the true underlying clustering would assign corresponding points in each view to the same cluster. Given this assumption, we can approach the multi-view clustering problem by limiting our search to clusterings that are compatible across the graphs defined over each of the views: corresponding nodes in each graph should have the same cluster membership.

In this paper, we propose two spectral clustering algorithms that achieve this goal by co-regularizing the clustering hypotheses across views. Co-regularization is a well-known technique in the semi-supervised learning literature; however, not much is known about using it for unsupervised learning problems. We propose novel spectral clustering objective functions that implicitly combine graphs from multiple views of the data to achieve a better clustering. Our proposed methods give us a way to combine multiple kernels (or similarity matrices) for the clustering problem.
Moreover, we would like to note here that although multiple kernel learning has met with considerable success on supervised learning problems, similar investigations for unsupervised learning have been found lacking so far, which is one of the motivations behind this work.

* Authors contributed equally.

2 Co-regularized Spectral Clustering

We assume that we are given data having multiple representations (i.e., views). Let X^{(v)} = {x_1^{(v)}, x_2^{(v)}, ..., x_n^{(v)}} denote the examples in view v and K^{(v)} denote the similarity or kernel matrix of X in this view. We write the normalized graph Laplacian for this view as L^{(v)} = D^{(v)-1/2} K^{(v)} D^{(v)-1/2}. The single-view spectral clustering algorithm of [5] solves the following optimization problem for the normalized graph Laplacian L^{(v)}:

max_{U^{(v)} ∈ R^{n×k}} tr( U^{(v)T} L^{(v)} U^{(v)} ),   s.t.   U^{(v)T} U^{(v)} = I,   (1)

where tr denotes the matrix trace. The rows of the matrix U^{(v)} are the embeddings of the data points that can be given to the k-means algorithm to obtain cluster memberships. For a detailed introduction to both theoretical and practical aspects of spectral clustering, the reader is referred to [3].

Our multi-view spectral clustering framework builds on the standard spectral clustering with a single view, by appealing to the co-regularization framework typically used in the semi-supervised learning literature [1]. Co-regularization in semi-supervised learning essentially works by making the hypotheses learned from different views of the data agree with each other on unlabeled data [6]. The framework employs two main assumptions for its success: (a) the true target functions in each view should agree on the labels for the unlabeled data (compatibility), and (b) the views are independent given the class label (conditional independence). The compatibility assumption allows us to shrink the space of possible target hypotheses by searching only over the compatible functions. Standard PAC-style analysis [1] shows that this also leads to reductions in the number of examples needed to learn the target function, since this number depends on the size of the hypothesis class. The independence assumption makes it unlikely for compatible classifiers to agree on wrong labels. In the case of clustering, this would mean that a data point in both views would be assigned to the correct cluster with high probability.

Here, we propose two co-regularization based approaches to make the clustering hypotheses on different graphs (i.e., views) agree with each other. The effectiveness of spectral clustering hinges crucially on the construction of the graph Laplacian and the resulting eigenvectors that reflect the cluster structure in the data. Therefore, we construct an objective function that consists of the graph Laplacians from all the views of the data and regularize on the eigenvectors of the Laplacians such that the cluster structures resulting from each Laplacian look consistent across all the views. Our first co-regularization scheme (Section 2.1) enforces that the eigenvectors U^{(v)} and U^{(w)} of a view pair (v, w) should have high pairwise similarity (using a pairwise co-regularization criterion we will define in Section 2.1). Our second co-regularization scheme (Section 2.3) enforces the view-specific eigenvectors to look similar by regularizing them towards a common consensus (centroid-based co-regularization).
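As a concrete reference point, the single-view objective in Eq. (1) is solved exactly by an eigendecomposition. The following sketch is ours, not from the paper; it assumes NumPy and SciPy are available, and row-normalizes the embedding before k-means, as in [5].

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def single_view_spectral(K, k, seed=0):
    # Normalized graph Laplacian L = D^{-1/2} K D^{-1/2}, as defined above.
    d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
    L = K * np.outer(d_inv_sqrt, d_inv_sqrt)
    # The maximizer of tr(U^T L U) subject to U^T U = I is the top-k eigenvector matrix.
    _, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
    U = vecs[:, -k:]
    rows = U / np.linalg.norm(U, axis=1, keepdims=True)
    _, labels = kmeans2(rows, k, minit='++', seed=seed)
    return U, labels
```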
The idea is different from previously proposed consensus clustering approaches [7] that commit to individual clusterings in the first step and then combine them to a consensus in the second step. We optimize for the individual clusterings as well as the consensus using a joint cost function.

2.1 Pairwise Co-regularization

In standard spectral clustering, the eigenvector matrix U^{(v)} is the data representation for the subsequent k-means clustering step (with the i-th row mapping to the original i-th sample). In our proposed objective function, we encourage the pairwise similarities of examples under the new representation (in terms of the rows of the U^{(v)}'s) to be similar across all the views. This amounts to enforcing the spectral clustering hypotheses (which are based on the U^{(v)}'s) to be the same across all the views. We will work with the two-view case for ease of exposition; this will later be extended to more than two views. We propose the following cost function as a measure of disagreement between the clusterings of two views:

D(U^{(v)}, U^{(w)}) = || K_{U^{(v)}} / ||K_{U^{(v)}}||_F  -  K_{U^{(w)}} / ||K_{U^{(w)}}||_F ||_F^2,   (2)

where K_{U^{(v)}} is the similarity matrix for U^{(v)}, and ||·||_F denotes the Frobenius norm of the matrix. The similarity matrices are normalized by their Frobenius norms to make them comparable across views. We choose the linear kernel, i.e., k(x_i, x_j) = x_i^T x_j, as our similarity measure in Equation 2. This implies that we have K_{U^{(v)}} = U^{(v)} U^{(v)T}. The reason for choosing the linear kernel to measure the similarity of the U^{(v)}'s is twofold. First, the similarity measure (or kernel) used in the Laplacian for spectral clustering has already taken care of the non-linearities present in the data (if any), and the embedding U^{(v)}, being real-valued cluster indicators, can be considered to obey linear similarities. Secondly, we get a nice optimization problem by using the linear kernel for U^{(v)}. We also note that ||K_{U^{(v)}}||_F^2 = k, where k is the number of clusters. Substituting this in Equation 2 and ignoring the constant additive and scaling terms that depend on the number of clusters, we get

D(U^{(v)}, U^{(w)}) = -tr( U^{(v)} U^{(v)T} U^{(w)} U^{(w)T} ).

We want to minimize the above disagreement between the clusterings of views v and w. Combining this with the spectral clustering objectives of the individual views, we get the following joint maximization problem for two graphs:

max_{U^{(v)}, U^{(w)} ∈ R^{n×k}} tr( U^{(v)T} L^{(v)} U^{(v)} ) + tr( U^{(w)T} L^{(w)} U^{(w)} ) + λ tr( U^{(v)} U^{(v)T} U^{(w)} U^{(w)T} ),
s.t.   U^{(v)T} U^{(v)} = I,   U^{(w)T} U^{(w)} = I.   (3)

The hyperparameter λ trades off the spectral clustering objectives and the spectral embedding (dis)agreement term. The joint optimization problem given by Equation 3 can be solved using alternating maximization w.r.t. U^{(v)} and U^{(w)}. For a given U^{(w)}, we get the following optimization problem in U^{(v)}:

max_{U^{(v)} ∈ R^{n×k}} tr( U^{(v)T} ( L^{(v)} + λ U^{(w)} U^{(w)T} ) U^{(v)} ),   s.t.   U^{(v)T} U^{(v)} = I.   (4)

This is a standard spectral clustering objective on view v with graph Laplacian L^{(v)} + λ U^{(w)} U^{(w)T}. This can be seen as a way of combining kernels or Laplacians. The difference from standard kernel combination (kernel addition, for example) is that the combination is adaptive, since U^{(w)} keeps getting updated at each step, as guided by the clustering algorithm. The solution U^{(v)} is given by the top-k eigenvectors of this modified Laplacian. Since the alternating maximization can make the algorithm stuck in a local maximum [8], it is important to have a sensible initialization.
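The alternating scheme of Eqs. (3)-(4) thus reduces to repeated eigendecompositions. A minimal sketch follows, assuming the two normalized Laplacians have already been built; the names and the default λ = 0.025 (a value inside the 0.01-0.05 range used in the experiments below) are our own choices, not the authors'.

```python
import numpy as np

def top_k_eigvecs(M, k):
    _, vecs = np.linalg.eigh(M)   # ascending eigenvalues
    return vecs[:, -k:]

def pairwise_coreg_two_views(L1, L2, k, lam=0.025, n_iter=10):
    U2 = top_k_eigvecs(L2, k)     # initialize from single-view spectral clustering
    for _ in range(n_iter):
        # Eq. (4): a spectral step on the modified Laplacian of each view.
        U1 = top_k_eigvecs(L1 + lam * (U2 @ U2.T), k)
        U2 = top_k_eigvecs(L2 + lam * (U1 @ U1.T), k)
    return U1, U2                 # either can feed the final k-means step
```

In practice one would monitor the Eq. (3) objective between iterations and stop once the change falls below the 10^-4 threshold mentioned in the text, rather than using a fixed iteration count.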
If there is no prior information on which view is more informative about the clustering, we can start with any of the views. However, if we have some a priori knowledge on this, we can start with the graph Laplacian L^{(w)} of the more informative view and initialize U^{(w)}. The alternating maximization is carried out after this until convergence. Note that one possibility could be to regularize directly on the eigenvectors U^{(v)} and make them close to each other (e.g., in the sense of the Frobenius norm of the difference between U^{(v)} and U^{(w)}). However, this type of regularization could be too restrictive and could end up shrinking the hypothesis space of feasible clusterings too much, thus ruling out many valid clusterings.

For fixed λ and n, the joint objective of Eq. 3 can be shown to be bounded from above by a constant. Since the objective is non-decreasing with the iterations, the algorithm is guaranteed to converge. In practice, we monitor convergence by the difference in the value of the objective between consecutive iterations, and stop when the difference falls below a minimum threshold of ε = 10^-4. In all our experiments, we converge within less than 10 iterations. Note that we can use either U^{(v)} or U^{(w)} in the final k-means step of the spectral clustering algorithm. In our experiments, we note a marginal difference in the clustering performance depending on which one is used in the final step of k-means clustering.

2.2 Extension to Multiple Views

We can extend the co-regularized spectral clustering proposed in the previous section to more than two views. This can be done by employing pairwise co-regularizers in the objective function of Eq. 3. For m views, we have

max_{U^{(1)},...,U^{(m)} ∈ R^{n×k}} Σ_{v=1}^m tr( U^{(v)T} L^{(v)} U^{(v)} ) + λ Σ_{1≤v,w≤m, v≠w} tr( U^{(v)} U^{(v)T} U^{(w)} U^{(w)T} ),
s.t.   U^{(v)T} U^{(v)} = I,   ∀ 1 ≤ v ≤ m.   (5)

We use a common λ for all pairwise co-regularizers for simplicity of exposition; however, different λ's can be used for different pairs of views. Similar to the two-view case, we can optimize it by alternating maximization cycling over the views. With all but one U^{(v)} fixed, we have the following optimization problem:

max_{U^{(v)}} tr( U^{(v)T} ( L^{(v)} + λ Σ_{1≤w≤m, w≠v} U^{(w)} U^{(w)T} ) U^{(v)} ),   s.t.   U^{(v)T} U^{(v)} = I.   (6)

We initialize all U^{(v)}, 2 ≤ v ≤ m, by solving the spectral clustering problem for the single views. We solve the objective of Eq. 6 for U^{(1)} given all other U^{(v)}, 2 ≤ v ≤ m. The optimization is then cycled over all views while keeping the previously obtained U's fixed.

2.3 Centroid-Based Co-regularization

In this section, we present an alternative regularization scheme that regularizes each view-specific set of eigenvectors U^{(v)} towards a common centroid U* (akin to a consensus set of eigenvectors). In contrast with the pairwise regularization approach, which has m(m-1)/2 pairwise regularization terms, where m is the number of views, the centroid-based regularization scheme has m regularization terms. The objective function can be written as:

max_{U^{(1)},...,U^{(m)},U* ∈ R^{n×k}} Σ_{v=1}^m tr( U^{(v)T} L^{(v)} U^{(v)} ) + Σ_v λ_v tr( U^{(v)} U^{(v)T} U* U*^T ),
s.t.   U^{(v)T} U^{(v)} = I, ∀ 1 ≤ v ≤ m,   U*^T U* = I.   (7)

This objective tries to balance a trade-off between the individual spectral clustering objectives and the agreement of each of the view-specific eigenvectors U^{(v)} with the consensus eigenvectors U*. Each regularization term is weighted by a parameter λ_v specific to that view, where λ_v can be set to reflect the importance of view v.
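The cycling update for the multi-view pairwise objective (Eq. (6)) follows the same pattern as the two-view case. A sketch under the same assumptions as before (our names; a shared λ, as in the text):

```python
import numpy as np

def pairwise_coreg_multiview(Ls, k, lam=0.025, n_iter=10):
    def top_k(M):
        _, vecs = np.linalg.eigh(M)
        return vecs[:, -k:]

    Us = [top_k(L) for L in Ls]                 # single-view initializations
    for _ in range(n_iter):
        for v in range(len(Ls)):
            # Eq. (6): the modified Laplacian sums the other views' current U's.
            S = sum(Us[w] @ Us[w].T for w in range(len(Ls)) if w != v)
            Us[v] = top_k(Ls[v] + lam * S)
    return Us
```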
Just like for Equation 6, the objective in Equation 7 can be solved in an alternating fashion, optimizing each of the U^{(v)}'s one at a time, keeping all other variables fixed, followed by optimizing the consensus U*, keeping all the U^{(v)}'s fixed. It is easy to see that with all other view-specific eigenvectors and the consensus U* fixed, optimizing U^{(v)} for view v amounts to solving the following:

max_{U^{(v)} ∈ R^{n×k}} tr( U^{(v)T} L^{(v)} U^{(v)} ) + λ_v tr( U^{(v)} U^{(v)T} U* U*^T ),   s.t.   U^{(v)T} U^{(v)} = I,   (8)

which is nothing but equivalent to solving the standard spectral clustering objective for U^{(v)} with a modified Laplacian L^{(v)} + λ_v U* U*^T. Solving for the consensus U* requires solving the following objective:

max_{U* ∈ R^{n×k}} Σ_v λ_v tr( U^{(v)} U^{(v)T} U* U*^T ),   s.t.   U*^T U* = I.   (9)

Using the circular property of matrix traces, Equation 9 can be rewritten as:

max_{U* ∈ R^{n×k}} tr( U*^T ( Σ_v λ_v U^{(v)} U^{(v)T} ) U* ),   s.t.   U*^T U* = I,   (10)

which is equivalent to solving the standard spectral clustering objective for U* with a modified Laplacian Σ_v λ_v U^{(v)} U^{(v)T}. In contrast with the pairwise co-regularization approach of Section 2.1, which computes optimal view-specific eigenvectors U^{(v)} that finally need to be combined (e.g., via column-wise concatenation) before running the k-means step, the centroid-based co-regularization approach directly finds an optimal U* to be used in the k-means step. One possible downside of the centroid-based co-regularization approach is that noisy views could potentially affect the optimal U*, as it depends on all the views. To deal with this, careful selection of the weighing parameters λ_v is required. If it is a priori known that some views are noisy, then it is advisable to use a small value of λ_v for such views, so as to prevent them from adversely affecting U*.

3 Experiments

We compare both of our co-regularization based multi-view spectral clustering approaches with a number of baselines. In particular, we compare with:

• Single View: Using the most informative view, i.e., the one that achieves the best spectral clustering performance using a single view of the data.

• Feature Concatenation: Concatenating the features of each view, and then running standard spectral clustering using the graph Laplacian derived from the joint view representation of the data.

• Kernel Addition: Combining different kernels by adding them, and then running standard spectral clustering on the corresponding Laplacian. As suggested in earlier findings [9], even this seemingly simple approach often leads to near optimal results as compared to more sophisticated approaches for classification. It can be noted that kernel addition reduces to feature concatenation for the special case of the linear kernel. In general, kernel addition is the same as concatenation of features in the Reproducing Kernel Hilbert Space.

• Kernel Product (element-wise): Multiplying the corresponding entries of the kernels and applying standard spectral clustering on the resultant Laplacian. For the special case of the Gaussian kernel, the element-wise kernel product would be the same as simple feature concatenation if both kernels use the same width parameter σ. However, in our experiments, we use different width parameters for different views, so the performance of the kernel product may not be directly comparable to feature concatenation.

• CCA based Feature Extraction: Applying CCA for feature fusion from multiple views of the data [10], and then running spectral clustering using these extracted features.
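The centroid scheme of Eqs. (8)-(10) alternates between per-view spectral steps and a consensus spectral step. A sketch under the same assumptions as the earlier snippets (our names; uniform λ_v by default):

```python
import numpy as np

def centroid_coreg(Ls, k, lams=None, n_iter=10):
    def top_k(M):
        _, vecs = np.linalg.eigh(M)
        return vecs[:, -k:]

    m = len(Ls)
    lams = [0.025] * m if lams is None else lams
    Us = [top_k(L) for L in Ls]                       # single-view initializations
    Ustar = top_k(sum(lams[v] * (Us[v] @ Us[v].T) for v in range(m)))
    for _ in range(n_iter):
        # Eq. (8): per-view spectral step on L^(v) + lam_v * U* U*^T.
        Us = [top_k(Ls[v] + lams[v] * (Ustar @ Ustar.T)) for v in range(m)]
        # Eq. (10): consensus spectral step on sum_v lam_v * U^(v) U^(v)^T.
        Ustar = top_k(sum(lams[v] * (Us[v] @ Us[v].T) for v in range(m)))
    return Us, Ustar    # the final k-means step runs on the rows of Ustar
```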
We apply both standard CCA and kernel CCA for feature extraction and report the clustering results for whichever gives the best performance.

• Minimizing-Disagreement Spectral Clustering: Our last baseline is the minimizing-disagreement approach to spectral clustering [11], and is perhaps the most closely related to our co-regularization based approach to spectral clustering. This algorithm is discussed further in Sec. 4.

To distinguish between the results of our two co-regularization based approaches, in the tables containing the results we use the symbol "P" to denote the pairwise co-regularization method and the symbol "C" to denote the centroid-based co-regularization method. For datasets with more than 2 views, we have also explicitly mentioned the number of views in parentheses. We report experimental results on two synthetic and three real-world datasets. We give a brief description of each dataset here.

• Synthetic data 1: Our first synthetic dataset consists of two views and is generated in a manner akin to [12], which first chooses the cluster c_i each sample belongs to, and then generates each of the views x_i^{(1)} and x_i^{(2)} from a two-component Gaussian mixture model. These views are combined to form the sample (x_i^{(1)}, x_i^{(2)}, c_i). We sample 1000 points from each view. The cluster means in view 1 are μ_1^{(1)} = (1 1), μ_2^{(1)} = (2 2), and in view 2 they are μ_1^{(2)} = (2 2), μ_2^{(2)} = (1 1). The covariances for the two views are given below:

Σ_1^{(1)} = [[1, 0.5], [0.5, 1.5]],   Σ_1^{(2)} = [[0.3, 0], [0, 0.6]],   Σ_2^{(1)} = [[0.3, 0], [0, 0.6]],   Σ_2^{(2)} = [[1, 0.5], [0.5, 1.5]].

• Synthetic data 2: Our second synthetic dataset consists of three views. Moreover, the features are correlated. Each view still has two clusters. Each view is generated by a two-component Gaussian mixture model. The cluster means in view 1 are μ_1^{(1)} = (1 1), μ_2^{(1)} = (3 4); in view 2 they are μ_1^{(2)} = (1 2), μ_2^{(2)} = (2 2); and in view 3 they are μ_1^{(3)} = (1 1), μ_2^{(3)} = (3 3). The covariances for the three views are given below. The notation Σ_c^{(v)} denotes the parameter for the c-th cluster in the v-th view.

Σ_1^{(1)} = [[1, 0.5], [0.5, 1.5]],   Σ_1^{(2)} = [[1, -0.2], [-0.2, 1]],   Σ_1^{(3)} = [[1.2, 0.2], [0.2, 1]]
Σ_2^{(1)} = [[0.3, 0.2], [0.2, 0.6]],   Σ_2^{(2)} = [[0.6, 0.1], [0.1, 0.5]],   Σ_2^{(3)} = [[1, 0.4], [0.4, 0.7]]

• Reuters Multilingual data: The test collection contains feature characteristics of documents originally written in five different languages (English, French, German, Spanish and Italian), and their translations, over a common set of 6 categories [13]. This corpus is built by sampling parts of the Reuters RCV1 and RCV2 collections [14, 15]. We use documents originally in English as the first view and their French translations as the second view. We randomly sample 1200 documents from this collection in a balanced manner, with each of the 6 clusters having 200 documents. The documents are in bag-of-words representation, which implies that the features are extremely sparse and high-dimensional. The standard similarity measures (like the Gaussian kernel) in very high dimensions are often unreliable. Since spectral clustering essentially works with similarities of the data, we first project the data using Latent Semantic Analysis (LSA) [16] to a 100-dimensional space and compute similarities in this lower dimensional space. This is akin to computing a topic based similarity of documents [17].

• UCI Handwritten digits data: Our second real-world dataset is taken from the handwritten digits (0-9) data from the UCI repository. The dataset consists of 2000 examples, with view-1 being the 76 Fourier coefficients, and view-2 being the 216 profile correlations of each example image.

• Caltech-101 data: Our third real-world dataset is a subset of the Caltech-101 data from the Multiple Kernel Learning repository [18], from which we chose 450 examples having 30 underlying clusters. We experiment with 4 kernels from this dataset. In particular, we chose the "pixel features", the "Pyramid Histogram Of Gradients", bio-inspired "Sparse Localized Features", and SIFT descriptors as our four views. We report results on our co-regularized spectral clustering for the two, three and four views cases.

We use normalized mutual information (NMI) as the clustering quality evaluation measure, which gives the mutual information between the obtained clustering and the true clustering, normalized by the cluster entropies. NMI ranges between 0 and 1, with a higher value indicating a closer match to the true clustering. We use the Gaussian kernel for computing the graph similarities in all the experiments, unless mentioned otherwise. The standard deviation of the kernel is taken equal to the median of the pairwise Euclidean distances between the data points. In our experiments, the co-regularization parameter λ is varied from 0.01 to 0.05 and the best result is reported (we keep λ the same for all views; one can however also choose different λ's based on the importance of individual views). We experiment with λ values more exhaustively later in this section, where we show that our approach outperforms the other baselines for a wide range of λ. In the results table, the numbers in parentheses are the standard deviations of the performance measures obtained with 20 different runs of k-means with random initializations.

3.1 Results

The results for all datasets are shown in Table 1. For the two-view synthetic data (Synthetic Data 1), both co-regularized spectral clustering approaches outperform all the baselines by a significant margin, with the pairwise approach doing marginally better than the centroid-based approach. The closest performing approaches are kernel addition and CCA. For the synthetic data, order-2 polynomial kernel based kernel-CCA gives the best performance among all CCA variants, while Gaussian kernel based kernel-CCA performs poorly. We do not report results for Gaussian kernel CCA here. All the multi-view baselines outperform the single view case for the synthetic data.

For the three-view synthetic data (Synthetic Data 2), we can see that simple feature concatenation does not help much. In fact, it reduces the performance when the third view is added, so we report the performance with only two views for feature concatenation. Kernel addition with three views gives a good improvement over the single view case. As compared to the other baselines (with two views), both our co-regularized spectral clustering approaches with two views perform better. For both approaches, addition of the third view also results in improving the performance beyond the two view case.

For the document clustering results on the Reuters multilingual data, the English and French languages are used as the two views. On this dataset too, both our approaches outperform all the baselines by a significant margin. The next best performance is attained by the minimum-disagreement spectral clustering [11] approach. It should be noted that the CCA and element-wise kernel product performances are worse than that of the single view.
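For reference, the NMI evaluation used throughout can be computed with scikit-learn (an assumption on tooling, not something the paper specifies); note that the library offers several normalization conventions, so the exact variant should be matched to the cluster-entropy normalization described above.

```python
from sklearn.metrics import normalized_mutual_info_score

labels_true = [0, 0, 1, 1, 2, 2]
labels_pred = [1, 1, 0, 0, 2, 2]    # NMI is invariant to label permutations
print(normalized_mutual_info_score(labels_true, labels_pred))  # prints 1.0
```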
The dataset consists of 2000 examples, with view-1 being the 76 Fourier coefficients, and view-2 being the 216 profile correlations of each example image. ? Caltech-101 data: Our third real-world dataset is a subset of the Caltech-101 data from the Multiple Kernel Learning repository from which we chose 450 examples having 30 underlying clusters. We experiment with 4 kernels from this dataset. In particular, we chose the ?pixel features?, the ?Pyramid Histogram Of Gradients?, bio-inspired ?Sparse Localized Features?, and SIFT descriptors as our four views. We report results on our co-regularized spectral clustering for two, three and four views cases. We use normalized mutual information (NMI) as the clustering quality evaluation measure, which gives the mutual information between obtained clustering and the true clustering normalized by the cluster entropies. NMI ranges between 0 and 1 with higher value indicating closer match to the true clustering. We use Gaussian kernel for computing the graph similarities in all the experiments, unless mentioned otherwise. The standard deviation of the kernel is taken equal to the median of the pair-wise Euclidean distances between the data points. In our experiments, the co-regularization parameter ? is varied from 0.01 to 0.05 and the best result is reported (we keep ? the same for all views; one can however also choose different ??s based on the importance of individual views). We experiment with ? values more exhaustively later in this Section where we show that our approach outperforms other baselines for a wide range of ?. In the results table, the numbers in the parentheses are the standard deviations of the performance measures obtained with 20 different runs of k-means with random initializations. 3.1 Results The results for all datasets are shown in Table 1. For two-view synthetic data (Synthetic Data 1), both the co-regularized spectral clustering approaches outperform all the baselines by a significant margin, with the pairwise approach doing marginally better than the centroid-based approach. The closest performing approaches are kernel addition and CCA. For synthetic data, order-2 polynomial kernel based kernel-CCA gives best performance among all CCA variants, while Gaussian kernel based kernel-CCA performs poorly. We do not report results for Gaussian kernel CCA here. All the multi-view baselines outperform the single view case for the synthetic data. For three-view synthetic data (Synthetic Data 2), we can see that simple feature concatenation does not help much. In fact, it reduces the performance when the third view is added, so we report the performance with only two views for feature concatenation. Kernel addition with three views gives a good improvement over single view case. As compared to other baselines (with two views), both our co-regularized spectral clustering approaches with two views perform better. For both approaches, addition of third view also results in improving the performance beyond the two view case. For the document clustering results on Reuters multilingual data, English and French languages are used as the two views. On this dataset too, both our approaches outperform all the baselines by a significant margin. The next best performance is attained by minimum-disagreement spectral clustering [11] approach. It should be noted that CCA and element-wise kernel product performances are worse than that of single view. 
For UCI Handwritten digits dataset, quite a few approaches including kernel addition, element-wise kernel multiplication, and minimum-disagreement are close to both of our co-regularized spectral 6 Method Best Single View Feature Concat Kernel Addition Kernel Product CCA Min-Disagreement Co-regularized (P) (2) Co-regularized (P) (3) Co-regularized (P) (4) Co-regularized (C) (2) Co-regularized (C) (3) Co-regularized (C) (4) Synth data 1 0.267 (0.0) 0.294 (0.0) 0.339 (0.0) 0.277 (0.0) 0.330 (0.0) 0.313 (0.0) 0.378 (0.0) ? ? 0.367 (0.0) ? ? Synth data 2 0.898 (0.0) 0.923 (0.0) 0.973 (0.0) 0.959 (0.0) 0.932 (0.0) 0.936 (0.0) 0.981 (0.0) 0.989 (0.0) ? 0.955 (0.0) 0.989 (0.0) ? Reuters 0.287 (0.019) 0.298 (0.020) 0.323 (0.021) 0.123 (0.010) 0.147 (0.003) 0.342 (0.024) 0.375 (0.002) ? ? 0.360 (0.025) ? ? Handwritten 0.641 (0.008) 0.619 (0.015) 0.744 (0.030) 0.754 (0.026) 0.682 (0.019) 0.745 (0.024) 0.759 (0.031) ? ? 0.768 (0.025) ? ? Caltech 0.510 (0.008) ? 0.383 (0.008) 0.429 (0.007) 0.466 (0.007) 0.389 (0.008) 0.527 (0.007) 0.533 (0.008) 0.564 (0.007) 0.522 (0.004) 0.512 (0.007) 0.561 (0.005) Table 1: NMI results on various datasets for different baselines and the proposed approaches. Numbers in parentheses are the std. deviations. The numbers (2), (3) and (4) indicate the number of views used in our co-regularized spectral clustering approach. Other multi-view baselines were run with maximum number of views available (or maximum number of views they can handle). Letters (P) and (C) indicate pairwise and centroid based regularizations respectively. clustering approaches. It can be also be noted that feature concatenation actually performs worse than single view on this dataset. For Caltech-101 data, we cannot do feature concatenation since only kernels are available. Surprisingly, on this dataset, all the baselines perform worse than the single view case. On the other hand, both of our co-regularized spectral clustering approaches with two views outperform the single view case. As we added more views that were available for the Caltech-101 datasets, we found that the performance of the pairwise approach consistently went up as we added the third and the fourth view. On the other hand, the performance of the centroid-based approach slightly got worse upon adding the third view (possibly due to the view being noisy which affected the learned U? ); however addition of the fourth view brought the performance almost close to that of the pairwise case. 0.58 0.38 Co?regularization approach Closest performing baseline Co?regularization approach Closed performing baseline 0.57 0.36 0.56 0.34 NMI Score NMI Score 0.55 0.32 0.3 0.54 0.53 0.52 0.28 0.51 0.26 0.24 0.5 0 0.02 0.04 0.06 0.08 Co?regularization parameter ? 0.49 0.1 (a) 0 0.02 0.04 0.06 0.08 Co?regularization Parameter ? 0.1 (b) Figure 1: NMI scores of Co-regularized Spectral Clustering as a function of ? for (a) Reuters multilingual data and (b) Caltech-101 data We also experiment with various values of co-regularization parameter ? and observe its effect on the clustering performance. Our reported results are for the pairwise co-regularization approach. Similar trends were observed for the centroid-based co-regularization approach and therefore we do not report them here. Fig. 1(a) shows the plot for Reuters multilingual data. The NMI score shoots up right after ? starts increasing from 0 and reaches a peak at ? = 0.01. 
After reaching a second peak at about 0.025, it starts decreasing and hovers around the second best baseline (minimizing-disagreement in this case) for a while. The NMI becomes worse than the second best baseline after λ = 0.075. The plot for the Caltech-101 data is shown in Fig. 1(b). The normalized mutual information (NMI) starts increasing as the value of λ is increased away from 0, and reaches a peak at λ = 0.01. It starts to decrease after that, with local ups and downs. For the range of λ shown in the plot, the NMI for co-regularized spectral clustering is greater than the closest baseline for most of the λ values. These results indicate that although the performance of our algorithms depends on the weighing parameter λ, it is reasonably stable across a wide range of λ.

4 Related Work

A number of clustering algorithms have been proposed in the past to learn with multiple views of the data. Some of them first extract a set of shared features from the multiple views and then apply any off-the-shelf clustering algorithm, such as k-means, on these features. The Canonical Correlation Analysis (CCA) [2, 10] based approach is an example of this. Alternatively, some other approaches exploit the multiple views of the data as part of the clustering algorithm itself. For example, [19] proposed a Co-EM based framework for multi-view clustering in mixture models. The Co-EM approach computes expected values of hidden variables in one view and uses these in the M-step for the other view, and vice versa. This process is repeated until a suitable stopping criterion is met. The algorithm often does not converge.

Multi-view clustering algorithms have also been proposed in the framework of spectral clustering [11, 20, 21]. In [20], the authors obtain a graph cut which is good on average over the multiple graphs but may not be the best for any single graph. They give a random walk based formulation for the problem. [11] approaches the problem of two-view clustering by constructing a bipartite graph from the nodes of both views. Edges of the bipartite graph connect nodes from one view to those in the other view. Subsequently, they solve the standard spectral clustering problem on this bipartite graph. In [21], a co-training based framework is proposed where the similarity matrix of one view is constrained by the eigenvectors of the Laplacian in the other view. In [22], the information from multiple graphs is fused using Linked Matrix Factorization. Consensus clustering approaches can also be applied to the problem of multi-view clustering [7]. These approaches do not generally work with the original features. Instead, they take different clusterings of a dataset coming from different sources as input and reconcile them to find a final clustering.

5 Discussion

We proposed a multi-view clustering approach in the framework of spectral clustering. The approach uses the philosophy of co-regularization to make the clusterings in different views agree with each other. The co-regularization idea has been used in the past for semi-supervised learning problems. To the best of our knowledge, this is the first work to apply the idea to a problem of unsupervised learning, in particular to spectral clustering. The co-regularized spectral clustering has a joint optimization function for the spectral embeddings of all the views. An alternating maximization framework reduces the problem to the standard spectral clustering objective, which is efficiently solvable using state-of-the-art eigensolvers.
It is possible to extend the proposed framework to the case where some of the views have missing data. For missing data points, the corresponding entries in the similarity matrices would be unavailable. We can estimate these missing similarities from the corresponding similarities in other views. One possible approach to estimate a missing entry could be to simply average the similarities from the views in which the data point is available. Proper normalization of the similarities (possibly by the Frobenius norm of the whole matrix) might be needed before averaging to make them comparable. Other methods for estimating missing kernel entries can also be used. It is also possible to assign weights to the different views in the proposed objective function, as done in [20], if we have some a priori knowledge about the informativeness of the views.

Our co-regularization based framework can also be applied to other unsupervised problems, such as spectral methods for dimensionality reduction. For example, the Kernel PCA algorithm [23] can be extended to work with multiple views by defining each view as having its own Kernel PCA objective function and adding a regularizer which enforces the embeddings to look similar across all views (e.g., by enforcing the similarity matrices defined on the embeddings of each view to be close to each other).

Theoretical analysis of the proposed approach can also be pursued as a separate line of work. There has been very little prior work analyzing spectral clustering methods. For instance, there has been some work on consistency analysis of single view spectral clustering [24], which provides results about the rate of convergence as the sample size increases, using tools from the theory of linear operators and empirical processes. Similar convergence properties could be studied for multi-view spectral clustering. We can expect the convergence to be faster for the multi-view case: co-regularization reduces the size of the hypothesis space, and hence fewer examples should be needed to converge to a solution.

References

[1] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Conference on Learning Theory, 1998.
[2] Kamalika Chaudhuri, Sham M. Kakade, Karen Livescu, and Karthik Sridharan. Multi-view Clustering via Canonical Correlation Analysis. In International Conference on Machine Learning, 2009.
[3] Ulrike von Luxburg. A Tutorial on Spectral Clustering. Statistics and Computing, 2007.
[4] J. Shi and J. Malik. Normalized cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22:888-905, 1997.
[5] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In Advances in Neural Information Processing Systems, 2002.
[6] Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. A Co-regularization approach to semi-supervised learning with multiple views. In Proceedings of the Workshop on Learning with Multiple Views, International Conference on Machine Learning, 2005.
[7] Alexander Strehl and Joydeep Ghosh. Cluster Ensembles - A Knowledge Reuse Framework for Combining Multiple Partitions. Journal of Machine Learning Research, pages 583-617, 2002.
[8] Donglin Niu, Jennifer G. Dy, and Michael I. Jordan. Multiple non-redundant spectral clustering views. In International Conference on Machine Learning, 2010.
[9] Corinna Cortes, Mehryar Mohri, and Afshin Rostamizadeh. Learning non-linear combination of kernels. In Advances in Neural Information Processing Systems, 2009.
[10] Matthew B. Blaschko and Christoph H.
Lampert. Correlational Spectral Clustering. In Computer Vision and Pattern Recognition, 2008.
[11] Virginia R. de Sa. Spectral Clustering with two views. In Proceedings of the Workshop on Learning with Multiple Views, International Conference on Machine Learning, 2005.
[12] Xing Yi, Yunpeng Xu, and Changshui Zhang. Multi-view EM algorithm for finite mixture models. In ICAPR, Lecture Notes in Computer Science, Springer-Verlag, 2005.
[13] Massih-Reza Amini, Nicolas Usunier, and Cyril Goutte. Learning from multiple partially observed views - an application to multilingual text categorization. In Advances in Neural Information Processing Systems, 2009.
[14] D. D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397, 2004.
[15] Reuters. Corpus, volume 2, multilingual corpus, 1996-08-20 to 1997-08-19, 2005.
[16] Thomas Hofmann. Probabilistic latent semantic analysis. In Uncertainty in Artificial Intelligence, pages 289-296, 1999.
[17] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, pages 993-1022, 2003.
[18] The UCSD Multiple Kernel Learning Repository. http://mkl.ucsd.edu.
[19] Steffen Bickel and Tobias Scheffer. Multi-View Clustering. In IEEE International Conference on Data Mining, 2004.
[20] Dengyong Zhou and Christopher J. C. Burges. Spectral Clustering and Transductive Learning with Multiple Views. In International Conference on Machine Learning, 2007.
[21] Abhishek Kumar and Hal Daumé III. A Co-training Approach for Multiview Spectral Clustering. In International Conference on Machine Learning, 2011.
[22] Wei Tang, Zhengdong Lu, and Inderjit S. Dhillon. Clustering with Multiple Graphs. In IEEE International Conference on Data Mining, 2009.
[23] Y. Bengio, P. Vincent, and J.F. Paiement. Spectral clustering and kernel PCA are learning eigenfunctions. Technical Report 2003s-19, CIRANO, 2003.
[24] Ulrike von Luxburg, Mikhail Belkin, and Olivier Bousquet. Consistency of Spectral Clustering. Annals of Statistics, 36(2):555-586, 2008.
Spatial distance dependent Chinese restaurant processes for image segmentation

Soumya Ghosh^1, Andrei B. Ungureanu^2, Erik B. Sudderth^1, and David M. Blei^3
^1 Department of Computer Science, Brown University, {sghosh,sudderth}@cs.brown.edu
^2 Morgan Stanley, [email protected]
^3 Department of Computer Science, Princeton University, [email protected]

Abstract

The distance dependent Chinese restaurant process (ddCRP) was recently introduced to accommodate random partitions of non-exchangeable data [1]. The ddCRP clusters data in a biased way: each data point is more likely to be clustered with other data that are near it in an external sense. This paper examines the ddCRP in a spatial setting with the goal of natural image segmentation. We explore the biases of the spatial ddCRP model and propose a novel hierarchical extension better suited for producing "human-like" segmentations. We then study the sensitivity of the models to various distance and appearance hyperparameters, and provide the first rigorous comparison of nonparametric Bayesian models in the image segmentation domain. On unsupervised image segmentation, we demonstrate that similar performance to existing nonparametric Bayesian models is possible with substantially simpler models and algorithms.

1 Introduction

The Chinese restaurant process (CRP) is a distribution on partitions of integers [2]. When used in a mixture model, it provides an alternative representation of a Bayesian nonparametric Dirichlet process mixture: the data are clustered and the number of clusters is determined via the posterior distribution. CRP mixtures assume that the data are exchangeable, i.e., their order does not affect the distribution of the cluster structure. This can provide computational advantages and simplify approximate inference, but is often an unrealistic assumption.

The distance dependent Chinese restaurant process (ddCRP) was recently introduced to model random partitions of non-exchangeable data [1]. The ddCRP clusters data in a biased way: each data point is more likely to be clustered with other data that are near it in an external sense. For example, when clustering time series data, points that are closer in time are more likely to be grouped together. Previous work [1] developed the ddCRP mixture in general, and derived posterior inference algorithms based on Gibbs sampling [3]. While they studied the ddCRP in time-series and sequential settings, ddCRP models can be used with any type of distance and external covariates. Recently, other researchers [4] have also used the ddCRP in non-temporal settings.

In this paper, we study the ddCRP in a spatial setting. We use a spatial distance function between pixels in natural images and cluster them to provide an unsupervised segmentation. The spatial distance encourages the discovery of connected segments. We also develop a region-based hierarchical generalization, the rddCRP. Analogous to the hierarchical Dirichlet process (HDP) [5], the rddCRP clusters groups of data, where cluster components are shared across groups. Unlike the HDP, however, the rddCRP allows within-group clusterings to depend on external distance measurements. To demonstrate the power of this approach, we develop posterior inference algorithms for segmenting images with ddCRP and rddCRP mixtures.
Image segmentation is an extensively studied area, which we will not attempt to survey here.

[Figure 1: Left: An illustration of the relationship between the customer assignment representation and the table assignment representation. Each square is a data point (a pixel or superpixel) and each arrow is a customer assignment. Here, the distance window is of length 1. The corresponding table assignments, i.e., the clustering of these data, is shown by the color of the data points. Right: Intuitions behind the two cases considered by the Gibbs sampler. Consider the link from node C. When removed, it may leave the clustering unchanged or split a cluster. When added, it may leave the clustering unchanged or merge two clusters.]

Influential existing methods include approaches based on kernel density estimation [6], Markov random fields [3, 7], and the normalized cut spectral clustering algorithm [8, 9]. A recurring difficulty encountered by traditional methods is the need to determine an appropriate segment resolution for each image; even among images of similar scene types, the number of observed objects can vary widely. This has usually been dealt with via heuristics with poorly understood biases, or by simplifying the problem (e.g., partially specifying each image's segmentation via manual user input [7]). Recently, several promising segmentation algorithms have been proposed based on nonparametric Bayesian methods [10, 11, 12]. In particular, an approach which couples Pitman-Yor mixture models [13] via thresholded Gaussian processes [14] has led to very promising initial results [10], and provides a baseline for our later experiments. Expanding on the experiments in [10], we analyze 800 images of different natural scene types, and show that the comparatively simpler ddCRP-based algorithms perform similarly to this work. Moreover, unlike previous nonparametric Bayesian approaches, the structure of the ddCRP allows spatial connectivity of the inferred segments to (optionally) be enforced. In some applications, this is a known property of all reasonable segmentations. Our results demonstrate the practical utility of spatial ddCRP and hierarchical rddCRP models. We also provide the first rigorous comparison of nonparametric Bayesian image segmentation models.

2 Image Segmentation with Distance Dependent CRPs

Our goal is to develop a probabilistic method to segment images of complex scenes. Image segmentation is the problem of partitioning an image into self-similar groups of adjacent pixels. Segmentation is an important step towards other tasks in image understanding, such as object recognition, detection, or tracking. We model images as observed collections of "superpixels" [15], which are small blocks of spatially adjacent pixels. Our goal is to associate the features x_i in the i-th superpixel with some cluster z_i; these clusters form the segments of that image.

Image segmentation is thus a special kind of clustering problem where the desired solution has two properties. First, we hope to find contiguous regions of the image assigned to the same cluster. Due to physical processes such as occlusion, it may be appropriate to find segments that contain two or three contiguous image regions, but we do not want a cluster that is scattered across individual image pixels.
Traditional clustering algorithms, such as k-means or probabilistic mixture models, do not account for external information such as pixel location and are not biased towards contiguous regions. Image locations have been heuristically incorporated into Gaussian mixture models by concatenating positions with appearance features in a vector [16], but the resulting bias towards elliptical regions often produces segmentation artifacts. Second, we would like a solution that determines the number of clusters from the image. Image segmentation algorithms are typically applied to collections of images of widely varying scenes, which are likely to require different numbers of segments. Except in certain restricted domains such as medical image analysis, it is not practical to use an algorithm that requires knowing the number of segments in advance.

In the following sections, we develop a Bayesian algorithm for image segmentation based on the distance dependent Chinese restaurant process (ddCRP) mixture model [1]. Our algorithm finds spatially contiguous segments in the image and determines an image-specific number of segments from the observed data.

2.1 Chinese restaurant process mixtures

The ddCRP mixture is an extension of the traditional Chinese restaurant process (CRP) mixture. CRP mixtures provide a clustering method that determines the number of clusters from the data; they are an alternative formulation of the Dirichlet process mixture model. The assumed generative process is described by imagining a restaurant with an infinite number of tables, each of which is endowed with a parameter for some family of data generating distributions (in our experiments, Dirichlet). Customers enter the restaurant in sequence and sit at a randomly chosen table. They sit at the previously occupied tables with probability proportional to how many customers are already sitting at each; they sit at an unoccupied table with probability proportional to a scaling parameter. After the customers have entered the restaurant, the "seating plan" provides a clustering. Finally, each customer draws an observation from a distribution determined by the parameter at the assigned table.

Conditioned on observed data, the CRP mixture provides a posterior distribution over table assignments and the parameters attached to those tables. It is a distribution over clusterings, where the number of clusters is determined by the data. Though described sequentially, the CRP mixture is an exchangeable model: the posterior distribution over partitions does not depend on the ordering of the observed data. Theoretically, exchangeability is necessary to make the connection between CRP mixtures and Dirichlet process mixtures. Practically, exchangeability provides efficient Gibbs sampling algorithms for posterior inference. However, exchangeability is not an appropriate assumption in image segmentation problems: the locations of the image pixels are critical to providing contiguous segmentations.

2.2 Distance dependent CRPs

The distance dependent Chinese restaurant process (ddCRP) is a generalization of the Chinese restaurant process that allows for a non-exchangeable distribution on partitions [1]. Rather than represent a partition by customers assigned to tables, the ddCRP models customers linking to other customers. The seating plan is a byproduct of these links: two customers are sitting at the same table if one can reach the other by traversing the customer assignments. As in the CRP, tables are endowed with data generating parameters.
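For concreteness, the sequential seating scheme of Section 2.1 is easy to simulate. The sketch below is ours (the scaling parameter is named alpha here); it draws a CRP seating plan for n customers.

```python
import numpy as np

def sample_crp(n, alpha, seed=0):
    """Sample a CRP seating plan for n customers (a sketch)."""
    rng = np.random.default_rng(seed)
    counts = []                        # customers at each occupied table
    z = []                             # table assignment of each customer
    for _ in range(n):
        w = np.array(counts + [alpha], dtype=float)
        t = rng.choice(len(w), p=w / w.sum())
        if t == len(counts):
            counts.append(1)           # sit at a new, unoccupied table
        else:
            counts[t] += 1
        z.append(t)
    return z
```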
2.2 Distance dependent CRPs

The distance dependent Chinese restaurant process (ddCRP) is a generalization of the Chinese restaurant process that allows for a non-exchangeable distribution on partitions [1]. Rather than represent a partition by customers assigned to tables, the ddCRP models customers linking to other customers. The seating plan is a byproduct of these links: two customers are sitting at the same table if one can reach the other by traversing the customer assignments. As in the CRP, tables are endowed with data generating parameters. Once the partition is determined, the observed data for each customer are generated by the per-table parameters. As illustrated in Figure 1, the generative process is described in terms of customer assignments c_i (as opposed to partition assignments or tables, z_i). The distribution of customer assignments is

    p(c_i = j | D, f, α) ∝ { f(d_ij)   if j ≠ i,
                             α         if j = i.     (1)

Here d_ij is a distance between data points i and j, and f(d) is called the decay function. The decay function mediates how the distance between two data points affects their probability of connecting to each other, i.e., their probability of belonging to the same cluster. Details of the ddCRP are found in [1]. We note that the traditional CRP is an instance of a ddCRP. However, in general, the ddCRP does not correspond to a model based on a random measure, like the Dirichlet process. The ddCRP is appropriate for image segmentation because it can naturally account for the spatial structure of the superpixels through its distance function. We use a spatial distance between pixels to enforce a bias towards contiguous clusters. Though the ddCRP has been previously described in general, only time-based distances are studied in [1].

Figure 2: Comparison of distance-dependent segmentation priors. From left to right, we show segmentations produced by the ddCRP with a = 1, the ddCRP with a = 2, the ddCRP with a = 5, and the rddCRP with a = 1.

Restaurants represent images, tables represent segment assignments, and customers represent superpixels. The distance between superpixels is modeled as the number of hops required to reach one superpixel from another, with hops being allowed only amongst spatially neighboring superpixels. A "window" decay function of width a, f(d) = 1[d ≤ a], determines link probabilities. If a = 1, superpixels can only directly connect to adjacent superpixels. Note this does not explicitly restrict the size of segments, because any pair of pixels for which one is reachable from the other (i.e., in the same connected component of the customer assignment graph) are in the same image segment. For this special case, segments are guaranteed with probability one to form spatially connected subsets of the image, a property not enforced by other Bayesian nonparametric models [10, 11, 12]. The full generative process for the observed features x_{1:N} within an N-superpixel image is as follows:

1. For each table, sample parameters θ_k ~ G0.
2. For each customer, sample a customer assignment c_i ~ ddCRP(α, f, D). This indirectly determines the cluster assignments z_{1:N}, and thus the segmentation.
3. For each superpixel, independently sample observed data x_i ~ P(· | θ_{z_i}).

The customer assignments are sampled using the spatial distance between pixels. The partition structure, derived from the customer assignments, is used to sample the observed image features. Given an image, the posterior distribution of the customer assignments induces a posterior over the cluster structure; this provides the segmentation. See Figure 1 for an illustration of the customer assignments and their derived table assignments in a segmentation setting. As in [10], the data generating distribution for the observed features studied in Section 4 is multinomial, with separate distributions for color and texture. We place conjugate Dirichlet priors on these cluster parameters.
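To make the link-based representation concrete, here is a minimal sketch (our own, not the authors' implementation) of sampling customer assignments from the ddCRP prior of Equation (1) with the window decay f(d) = 1[d ≤ a], and of recovering table assignments as connected components of the link graph.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def sample_ddcrp_links(D, alpha, a):
    """Sample customer links c_i from the ddCRP prior of Equation (1),
    using the window decay f(d) = 1[d <= a]."""
    n = D.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        weights = (D[i] <= a).astype(float)  # f(d_ij) for j != i
        weights[i] = alpha                   # self-link opens a table
        links[i] = np.random.choice(n, p=weights / weights.sum())
    return links

def links_to_clusters(links):
    """Two customers share a table iff one can reach the other in the
    (undirected) customer assignment graph."""
    n = len(links)
    graph = csr_matrix((np.ones(n), (np.arange(n), links)), shape=(n, n))
    _, z = connected_components(graph, directed=False)
    return z  # cluster label per customer

# Example: 5 points on a line with unit spacing, window width a = 1.
D = np.abs(np.subtract.outer(np.arange(5), np.arange(5))).astype(float)
z = links_to_clusters(sample_ddcrp_links(D, alpha=1.0, a=1))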
2.3 Region-based hierarchical distance dependent CRPs

The ddCRP model, when applied to an image with window size a = 1, produces a collection of contiguous patches (tables) homogeneous in color and texture features (Figure 2). While such segmentations are useful for various applications [16], they do not reflect the statistics of manual human segmentations, which contain larger regions [17]. We could bias our model to produce such regions by either increasing the window size a, or by introducing a hierarchy wherein the produced patches are grouped into a small number of regions. This region level model has each patch (table) associated with a region k from a set of potentially infinite regions. Each region in turn is associated with an appearance model θ_k. The corresponding generative process is described as follows:

1. For each customer, sample customer assignments c_i ~ ddCRP(α, f, D). This determines the table assignments t_{1:N}.
2. For each table t, sample region assignments k_t ~ CRP(η).
3. For each region, sample parameters θ_k ~ G0.
4. For each superpixel, independently sample observed data x_i ~ P(· | θ_{z_i}), where z_i = k_{t_i}.

Note that this region level rddCRP model is a direct extension of the Chinese restaurant franchise (CRF) representation of the HDP [5], with the image partition being drawn from a ddCRP instead of a CRP. In contrast to prior applications of the HDP, our region parameters are not shared amongst images, although it would be simple to generalize to this case. Figure 3 plots samples from the rddCRP and ddCRP priors with increasing a. The rddCRP produces larger partitions than the ddCRP with a = 1, while avoiding the noisy boundaries produced by a ddCRP with large a (see Figure 2).

3 Inference with Gibbs Sampling

A segmentation of an observed image is found by posterior inference. The problem is to compute the conditional distribution of the latent variables, the customer assignments c_{1:N}, conditioned on the observed image features x_{1:N}, the scaling parameter α, the distances between pixels D, the window size a, and the base distribution hyperparameter λ:

    p(c_{1:N} | x_{1:N}, α, D, a, λ) = [ ∏_{i=1}^N p(c_i | D, a, α) ] p(x_{1:N} | z(c_{1:N}), λ) / Σ_{c_{1:N}} [ ∏_{i=1}^N p(c_i | D, a, α) ] p(x_{1:N} | z(c_{1:N}), λ)    (2)

where z(c_{1:N}) is the cluster representation that is derived from the customer representation c_{1:N}. Notice again that the prior term uses the customer representation to take into account distances between data points; the likelihood term uses the cluster representation. The posterior in Equation (2) is not tractable to directly evaluate, due to the combinatorial sum in the denominator. We instead use Gibbs sampling [3], a simple form of Markov chain Monte Carlo (MCMC) inference [18]. We define the Markov chain by iteratively sampling each latent variable c_i conditioned on the others and the observations,

    p(c_i | c_{-i}, x_{1:N}, D, α, λ) ∝ p(c_i | D, α) p(x_{1:N} | z(c_{1:N}), λ).    (3)

The prior term is given in Equation (1). We can decompose the likelihood term as follows:

    p(x_{1:N} | z(c_{1:N}), λ) = ∏_{k=1}^{K(c_{1:N})} p(x_{z(c_{1:N})=k} | z(c_{1:N}), λ).    (4)

We have introduced notation to more easily move between the customer representation (the primary latent variables of our model) and the cluster representation. Let K(c_{1:N}) denote the number of unique clusters in the customer assignments, z(c_{1:N}) the cluster assignments derived from the customer assignments, and x_{z(c_{1:N})=k} the collection of observations assigned to cluster k. We assume that the cluster parameters θ_k have been analytically marginalized.
This is possible when the base distribution G0 is conjugate to the data generating distribution, e.g. Dirichlet to multinomial. Sampling from Equation (3) happens in two stages. First, we remove the customer link c_i from the current configuration. Then, we consider the prior probability of each possible value of c_i and how it changes the likelihood term, by moving from p(x_{1:N} | z(c_{-i}), λ) to p(x_{1:N} | z(c_{1:N}), λ). In the first stage, removing c_i either leaves the cluster structure intact, i.e., z(c^{old}_{1:N}) = z(c_{-i}), or splits the cluster assigned to data point i into two clusters. In the second stage, randomly reassigning c_i either leaves the cluster structure intact, i.e., z(c_{-i}) = z(c_{1:N}), or joins the cluster assigned to data point i to another. See Figure 1 for an illustration of these cases. Via these moves, the sampler explores the space of possible segmentations.

Let ℓ and m be the indices of the tables that are joined to form table k. We first remove c_i, possibly splitting a cluster. Then we sample from

    p(c_i | c_{-i}, x_{1:N}, D, α, λ) ∝ { p(c_i | D, α) Λ(x, z, λ)   if c_i joins ℓ and m;
                                          p(c_i | D, α)              otherwise,    (5)

where

    Λ(x, z, λ) = p(x_{z(c_{1:N})=k} | λ) / [ p(x_{z(c_{1:N})=ℓ} | λ) p(x_{z(c_{1:N})=m} | λ) ].    (6)

This defines a Markov chain whose stationary distribution is the posterior of the spatial ddCRP defined in Section 2. Though our presentation is slightly different, this algorithm is equivalent to the one developed for ddCRP mixtures in [1].

In the rddCRP, the algorithm for sampling the customer indicators is nearly the same, but with two differences. First, when c_i is removed, it may spawn a new cluster. In that case, the region identity of the new table must be sampled from the region level CRP. Second, the likelihood term in Equation (4) depends only on the superpixels in the image assigned to the segment in question. In the rddCRP, it also depends on other superpixels assigned to segments that are assigned to the same region. Finally, the rddCRP also requires resampling of region assignments as follows:

    p(k_t = ℓ | k_{-t}, x_{1:N}, t(c_{1:N}), η, λ) ∝ { m^{-t}_ℓ p(x_t | x^ℓ_{-t}, λ)   if ℓ is used;
                                                       η p(x_t | λ)                    if ℓ is new.    (7)

Here, x_t is the set of customers sitting at table t, x^ℓ_{-t} is the set of all customers associated with region ℓ excluding x_t, and m^{-t}_ℓ is the number of tables associated with region ℓ excluding x_t.
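For concreteness, the following sketch implements one Gibbs update of a single customer link under Equations (5)-(6), assuming a Dirichlet-multinomial likelihood so that the per-cluster marginals are available in closed form. The helper interface (prior_i, cluster_counts_fn) is our own illustrative choice, not the authors' code.

import numpy as np
from scipy.special import gammaln

def log_marginal(counts, beta0):
    """Dirichlet-multinomial marginal log p(x_k | beta) for a cluster's
    summed feature counts, with a symmetric Dirichlet(beta0) prior."""
    V = len(counts)
    return (gammaln(V * beta0) - gammaln(V * beta0 + counts.sum())
            + gammaln(beta0 + counts).sum() - V * gammaln(beta0))

def gibbs_step_ci(i, links, prior_i, cluster_counts_fn, beta0):
    """Resample customer link c_i as in Equations (5)-(6).

    prior_i[j] is the (unnormalized) ddCRP prior mass for c_i = j from
    Equation (1); entries outside the window are zero and drop out.
    cluster_counts_fn(links) returns per-cluster count vectors and each
    customer's cluster label for the current links.
    """
    links[i] = i                        # stage 1: remove the link (self-link)
    counts, z = cluster_counts_fn(links)
    with np.errstate(divide="ignore"):
        log_p = np.log(np.asarray(prior_i, dtype=float))
    for j in range(len(links)):
        if np.isfinite(log_p[j]) and z[j] != z[i]:
            # linking to j would merge clusters z[i] and z[j]: apply Lambda
            merged = counts[z[i]] + counts[z[j]]
            log_p[j] += (log_marginal(merged, beta0)
                         - log_marginal(counts[z[i]], beta0)
                         - log_marginal(counts[z[j]], beta0))
    log_p -= log_p.max()
    p = np.exp(log_p)
    links[i] = np.random.choice(len(links), p=p / p.sum())
    return links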
4 Empirical Results

We compare the performance of the ddCRP to manual segmentations of images drawn from eight natural scene categories [19]. Non-expert users segmented each image into polygonal shapes, and labeled them as distinct objects. The collection, which is available from LabelMe [17] (http://labelme.csail.mit.edu/browseLabelMe/), contains a total of 2,688 images. We randomly select 100 images from each category. This image collection has been previously used to analyze an image segmentation method based on spatially dependent Pitman-Yor (PY) processes [10], and we compare both methods using an identical feature set. Each image is first divided into approximately 1000 superpixels [15, 20] (http://www.cs.sfu.ca/~mori/) using the normalized cut algorithm [9] (http://www.eecs.berkeley.edu/Research/Projects/CS/vision/). We describe the texture of each superpixel via a local texton histogram [21], using band-pass filter responses quantized to 128 bins. A 120-bin HSV color histogram is also computed. Each superpixel i is summarized via these histograms x_i. Our goal is to make a controlled comparison to alternative nonparametric Bayesian methods on a challenging task. Performance is assessed via agreement with held out human segmentations, via the Rand index [22]. We also present segmentation results for qualitative evaluation in Figures 3 and 4.
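For reference, the Rand index used above can be computed directly from pair counts; the sketch below is our own minimal version, not the evaluation code used in the experiments.

import numpy as np

def rand_index(labels_a, labels_b):
    """Rand index [22]: the fraction of point pairs on which two
    clusterings agree (paired together in both, or apart in both)."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    iu = np.triu_indices(len(a), k=1)   # each unordered pair once
    agree = (same_a[iu] == same_b[iu]).sum()
    return agree / len(iu[0])

# Example: relabeled but identical clusterings give a Rand index of 1.0.
print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0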
4.1 Sensitivity to Hyperparameters

Our models are governed by the CRP concentration parameters α and η, the appearance base measure hyperparameter β = (β_0, ..., β_0), and the window size a. Empirically, η has little impact on the segmentation results, due to the high-dimensional and informative image features; all our experiments set η = 1. α and β_0 induce opposing biases: a small α encourages larger segments, while a large β_0 encourages larger segments. We found α = 10^{-8} and β_0 = 20 to work well. The most influential prior parameter is the window size a, the effect of which is visualized in Figure 3. For the ddCRP model, setting a = 1 (ddCRP1) produces a set of small but contiguous segments. Increasing to a = 2 (ddCRP2) results in fewer segments, but the produced segments are typically spatially fragmented. This phenomenon is further exacerbated with larger values of a. The rddCRP model groups segments produced by a ddCRP. Because it is hard to recover meaningful partitions if these initial segments are poor, the rddCRP performs best when a = 1.

4.2 Image Segmentation Performance

We now quantitatively measure the performance of our models. The ddCRP and the rddCRP samplers were run for 100 and 500 iterations, respectively. Both samplers displayed rapid mixing and often stabilized within the first 50 iterations. Note that similar rapid mixing has been observed in other applications of the ddCRP [1]. We also compare to two previous models [10]: a PY mixture model with no spatial dependence (pybof20), and a PY mixture with spatial coupling induced via thresholded Gaussian processes (pydist20). To control the comparison as much as possible, the PY models are tested with identical features and base measure β, and other hyperparameters as in [10]. We also compare to the nonspatial PY with β_0 = 1, the best bag-of-feature model in our experiments (pybof). We employ non-hierarchical versions of the PY models, so that each image is analyzed independently, and perform inference via the previously developed mean field variational method. Finally, from the vision literature we also compare to the normalized cuts (Ncuts) [8] and mean shift (MS) [6] segmentation algorithms. We used the EDISON implementation of mean shift; the parameters of mean shift and normalized cuts were tuned by performing a grid search over a training set containing 25 images from each of the 8 categories. For normalized cuts the optimal number of segments was determined to be 5. For mean shift we held the spatial bandwidth constant at 7, and found optimal values of feature bandwidth and minimum region size to be 25 and 4000 pixels, respectively.

Figure 3: Segmentations produced by various Bayesian nonparametric methods. From left to right, the columns display natural images, segmentations for the ddCRP with a = 1, the ddCRP with a = 2, the rddCRP with a = 1, and thresholded Gaussian processes (pydist20) [10]. The top row displays partitions sampled from the corresponding priors, which have 130, 54, 5, and 6 clusters, respectively.

Figure 4: Top left: Average segmentation performance on the database of natural scenes, as measured by the Rand index (larger is better), and those pairs of methods for which a Wilcoxon's signed rank test indicates comparable performance with 95% confidence. In the binary image, dark pixels indicate pairs that are statistically indistinguishable. Note that the rddCRP, spatial PY, and mean shift methods are statistically indistinguishable, and significantly better than all others. Bottom left: Scatter plots comparing the pydist20 and rddCRP methods in the Mountain and Street scene categories. Right: Example segmentations produced by the rddCRP.

Quantitative performance is summarized in Figure 4. The rddCRP outscores both versions of the ddCRP model, in terms of Rand index. Nevertheless, the patchy ddCRP1 segmentations are interesting for applications where segmentation is an intermediate step rather than the final goal. The bag of features model with β_0 = 20 performs poorly; with optimized β_0 = 1 it is better, but still inferior to the best spatial models. In general, the spatial PY and rddCRP perform similarly. The scatter plots in Fig. 4, which show Rand indexes for each image from the mountain and street categories, provide insights into when one model outperforms the other. For the street images rddCRP is better, while for images containing mountains spatial PY is superior. In general, street scenes contain more objects, many of which are small, and thus disfavored by the smooth Gaussian processes underlying the PY model. To most fairly compare priors, we have tested a version of the spatial PY model employing a covariance function that depends only on spatial distance. Further performance improvements were demonstrated in [10] via a conditionally specified covariance, which depends on detected image boundaries. Similar conditional specification of the ddCRP distance function is a promising direction for future research.

Finally, we note that the ddCRP (and rddCRP) models proposed here are far simpler than the spatial PY model, both in terms of model specification and inference. The ddCRP models only require pairwise superpixel distances to be specified, as opposed to the positive definite covariance function required by the spatial PY model. Furthermore, the PY model's usage of thresholded Gaussian processes leads to a complex likelihood function, for which inference is a significant challenge. In contrast, ddCRP inference is carried out through a straightforward sampling algorithm, and thus may provide a simpler foundation for building rich models of visual scenes. (In our Matlab implementations, the core ddCRP code was less than half as long as the corresponding PY code. For the ddCRP, the computation time was 1 minute per iteration, and convergence typically happened after only a few iterations. The PY code, which is based on variational approximations, took 12 minutes per image.)

5 Discussion

We have studied the properties of spatial distance dependent Chinese restaurant processes, and applied them to the problem of image segmentation. We showed that the spatial ddCRP model is particularly well suited for segmenting an image into a collection of contiguous patches. Unlike previous Bayesian nonparametric models, it can produce segmentations with guaranteed spatial connectivity. To go from patches to coarser, human-like segmentations, we developed a hierarchical region-based ddCRP. This hierarchical model achieves performance similar to state-of-the-art nonparametric Bayesian segmentation algorithms, using a simpler model and a substantially simpler inference algorithm.
References

[1] D. M. Blei and P. I. Frazier. Distance dependent Chinese restaurant processes. Journal of Machine Learning Research, 12:2461-2488, August 2011.
[2] J. Pitman. Combinatorial Stochastic Processes. Lecture Notes for St. Flour Summer School. Springer-Verlag, New York, NY, 2002.
[3] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721-741, November 1984.
[4] Richard Socher, Andrew Maas, and Christopher D. Manning. Spectral Chinese restaurant processes: Nonparametric clustering based on similarities. In Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
[5] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[6] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 603-619, 2002.
[7] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: Interactive foreground extraction using iterated graph cuts. In ACM Transactions on Graphics (TOG), volume 23, pages 309-314, 2004.
[8] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. PAMI, 22(8):888-905, 2000.
[9] C. Fowlkes, D. Martin, and J. Malik. Learning affinity functions for image segmentation: Combining patch-based and gradient-based approaches. CVPR, 2:54-61, 2003.
[10] E. B. Sudderth and M. I. Jordan. Shared segmentation of natural scenes using dependent Pitman-Yor processes. NIPS 22, 2008.
[11] P. Orbanz and J. M. Buhmann. Smooth image segmentation by nonparametric Bayesian inference. In ECCV, volume 1, pages 444-457, 2006.
[12] Lan Du, Lu Ren, David Dunson, and Lawrence Carin. A Bayesian model for simultaneous image clustering, annotation and object segmentation. In NIPS 22, pages 486-494, 2009.
[13] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25(2):855-900, 1997.
[14] J. A. Duan, M. Guindani, and A. E. Gelfand. Generalized spatial Dirichlet process models. Biometrika, 94(4):809-825, 2007.
[15] X. Ren and J. Malik. Learning a classification model for segmentation. ICCV, 2003.
[16] C. Carson, S. Belongie, H. Greenspan, and J. Malik. Blobworld: Image segmentation using expectation-maximization and its application to image querying. PAMI, 24(8):1026-1038, August 2002.
[17] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: A database and web-based tool for image annotation. IJCV, 77:157-173, 2008.
[18] C. Robert and G. Casella. Monte Carlo Statistical Methods. Springer Texts in Statistics. Springer-Verlag, New York, NY, 2004.
[19] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 42(3):145-175, 2001.
[20] G. Mori. Guiding model search using segmentation. ICCV, 2005.
[21] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. PAMI, 26(5):530-549, 2004.
[22] W. M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, pages 846-850, 1971.
Fast and Accurate k-means For Large Datasets

Michael Shindler, School of EECS, Oregon State University, [email protected]
Alex Wong, Department of Computer Science, UC Los Angeles, [email protected]
Adam Meyerson, Google, Inc., Mountain View, CA, [email protected]

Abstract

Clustering is a popular problem with many applications. We consider the k-means problem in the situation where the data is too large to be stored in main memory and must be accessed sequentially, such as from a disk, and where we must use as little memory as possible. Our algorithm is based on recent theoretical results, with significant improvements to make it practical. Our approach greatly simplifies a recently developed algorithm, both in design and in analysis, and eliminates large constant factors in the approximation guarantee, the memory requirements, and the running time. We then incorporate approximate nearest neighbor search to compute k-means in o(nk) time (where n is the number of data points; note that computing the cost, given a solution, takes Θ(nk) time). We show that our algorithm compares favorably to existing algorithms, both theoretically and experimentally, thus providing state-of-the-art performance in both theory and practice.

1 Introduction

We design improved algorithms for Euclidean k-means in the streaming model. In the k-means problem, we are given a set of n points in space. Our goal is to select k points in this space to designate as facilities (sometimes called centers or means); the overall cost of the solution is the sum of the squared distances from each point to its nearest facility. The goal is to minimize this cost; unfortunately the problem is NP-hard to optimize, although both heuristic [21] and approximation algorithm techniques [20, 25, 7] exist. In the streaming model, we require that the point set be read sequentially, and that our algorithm stores very few points at any given time. Many problems which are easy to solve in the standard batch-processing model require more complex techniques in the streaming model (a survey of streaming results is available [3]); nonetheless there are a number of existing streaming approximations for Euclidean k-means. We present a new algorithm for the problem based on [9] with several significant improvements; we are able to prove a faster worst-case running time and a better approximation factor. In addition, we compare our algorithm empirically with the previous state-of-the-art results of [2] and [4] on publicly available large data sets. Our algorithm outperforms them both.

The notion of clustering has widespread applicability, such as in data mining, pattern recognition, compression, and machine learning. The k-means objective is one of the most popular formalisms, and in particular Lloyd's algorithm [21] has significant usage [5, 7, 19, 22, 23, 25, 27, 28]. Many of the applications for k-means have experienced a large growth in data that has overtaken the amount of memory typically available to a computer. This is expressed in the streaming model, where an algorithm must make one (or very few) passes through the data, reflecting cases where random access to the data is unavailable, such as a very large file on a hard disk. Note that the data size, despite being large, is still finite. Our algorithm is based on the recent work of [9].
They "guess" the cost of the optimum, then run the online facility location algorithm of [24] until either the total cost of the solution exceeds a constant times the guess or the total number of facilities exceeds some computed value 1\,. They then declare the end of a phase, increase the guess, consolidate the facilities via matching, and continue with the next point. When the stream has been exhausted, the algorithm has some I\, facilities, which are then consolidated down to k. They then run a ball k-means step (similar to [25]) by maintaining samples of the points assigned to each facility and moving the facilities to the centers of mass of these samples. The algorithm uses O(k logn) memory, runs in O(nk logon) time, and obtains an 0(1) worst-case approximation. Provided that the original data set was o--separable (see section 1.2 for the definition), they use ball k-means to improve the approximation factor to 1 + 0(0- 2 ). From a practical standpoint, the main issue with [9] is that the constants hidden in the asymptotic notation are quite large. The approximation factor is in the hundreds, and the 0 (k log n) memory requirement has sufficiently high constants that there are actually more than n facilities for many of the data sets analyzed in previous papers. Further, these constants are encoded into the algorithm itself, making it difficult to argue that the performance should improve for non-worst-case inputs. 1.1 Our Contributions We substantially simplify the algorithm of [9]. We improve the manner by which the algorithm determines better facility cost as the stream is processed, removing unnecessary checks and allowing the user to parametrize what remains. We show that our changes result in a better approximation guarantee than the previous work. We also develop a variant that computes a solution in o( nk) and show experimentally that both algorithms outperform previous techniques. We remove the end-of-phase condition based on the total cost, ending phases only when the number of facilities exceeds 1\,. While we require I\, E n(k log n), we do not require any particular constants in the expression (in fact we will use I\, == k log n in our experiments). We also simplify the transition between phases, observing that it's quite simple to bound the number of phases by log 0 PT (where OPT is the optimum k-means cost), and that in practice this number of phases is usually quite a bit less than n. We show that despite our modifications, the worst case approximation factor is still constant. Our proof is based on a much tighter bound on the cost incurred per phase, along with a more flexible definition of the "critical phase" by which the algorithm should terminate. Our proofs establish that the algorithm converges for any I\, > k; of course, there are inherent tradeoffs between I\, and the approximation bound. For appropriately chosen constants our approximation factor will be roughly 17, substantially less than the factor claimed in [9] prior to the ball k-means step. In addition, we apply approximate nearest-neighbor algorithms to compute the facility assignment of each point. The running time of our algorithm is dominated by repeated nearest-neighbor calculations, and an appropriate technique can change our running time from 8 (nk log n) to 8 (n(log k + loglogn)), an improvement for most values of k. Of course, this hurts our accuracy somewhat, but we are able to show that we take only a constant-factor loss in approximation. 
Note that our final running time is actually faster than the Θ(nk) time needed to compute the k-means cost of a given set of facilities! In addition to our theoretical improvements, we perform a number of empirical tests using realistic data. This allows us to compare our algorithm to previous streaming k-means results [4, 2].

1.2 Previous Work

A simple local search heuristic for the k-means problem was proposed in 1957 by Lloyd [21]. The algorithm begins with k arbitrarily chosen points as facilities. At each stage, it allocates the points into clusters (each point assigned to its closest facility) and then computes the center of mass for each cluster. These become the new facilities for the next phase, and the process repeats until it is stable. Unfortunately, Lloyd's algorithm has no provable approximation bound, and arbitrarily bad examples exist. Furthermore, the worst-case running time is exponential [29]. Despite these drawbacks, Lloyd's algorithm (frequently known simply as k-means) remains common in practice.

The best polynomial-time approximation for k-means is by Kanungo, Mount, Netanyahu, Piatko, Silverman, and Wu [20]. Their algorithm uses local search (similar to the k-median algorithm of [8]), and is a 9 + ε approximation. However, Lloyd's observed runtime is superior, and this is a high priority for real applications.

Ostrovsky, Rabani, Schulman and Swamy [25] observed that the value of k is typically selected such that the data is "well-clusterable" rather than being arbitrary. They defined the notion of σ-separability, where the input to k-means is said to be σ-separable if reducing the number of facilities from k to k − 1 would increase the cost of the optimum solution by a factor 1/σ². They designed an algorithm with approximation ratio 1 + O(σ²). Subsequently, Arthur and Vassilvitskii [7] showed that the same procedure produces an O(log k) approximation for arbitrary instances of k-means.

There are two basic approaches to the streaming version of the k-means problem. Our approach is based on solving k-means as we go (thus at each point in the algorithm, our memory contains a current set of facilities). This type of approach was pioneered in 2000 by Guha, Mishra, Motwani, and O'Callaghan [17]. Their algorithm reads the data in blocks, clustering each using some nonstreaming approximation, and then gradually merges these blocks when enough of them arrive. An improved result for k-median was given by Charikar, O'Callaghan, and Panigrahy in 2003 [11], producing an O(1) approximation using O(k log² n) space. Their work was based on guessing a lower bound on the optimum k-median cost and running O(log n) parallel versions of the online facility location algorithm of Meyerson [24] with facility cost based on the guessed lower bound. When these parallel calls exceeded the approximation bounds, they would be terminated and the guessed lower bound on the optimum k-median cost would increase. The recent paper of Braverman, Meyerson, Ostrovsky, Roytman, Shindler, and Tagiku [9] extended the result of [11] to k-means and improved the space bound to O(k log n) by proving high-probability bounds on the performance of online facility location. This result also added a ball k-means step (as in [25]) to substantially improve the approximation factor under the assumption that the original data was σ-separable.
Another recent result for streaming k-means, due to Ailon, Jaiswal, and Monteleoni [4], is based on a divide and conquer approach, similar to the k-median algorithm of Guha, Meyerson, Mishra, Motwani, and O'Callaghan [16]. It uses the result of Arthur and Vassilvitskii [7] as a subroutine, finding 3k log k centers for each block. Their experiments showed that this algorithm is an improvement over an online variant of Lloyd's algorithm and was comparable to the batch version of Lloyd's.

The other approach to streaming k-means is based on coresets: selecting a weighted subset of the original input points such that any k-means solution on the subset has roughly the same cost as on the original point set. At any point in the algorithm, the memory should contain a weighted representative sample of the points. This approach was first used in a non-streaming setting for a variety of clustering problems by Badoiu, Har-Peled, and Indyk [10], and in the streaming setting by Har-Peled and Mazumdar [18]; the time and memory bounds were subsequently improved through a series of papers [14, 13] with the current best theoretical bounds by Chen [12]. A practical implementation of the coreset paradigm is due to Ackermann, Lammersen, Martens, Raupach, Sohler, and Swierkot [2]. Their approach was shown empirically to be fast and accurate on a variety of benchmarks.

2 Algorithm and Theory

Both our algorithm and that of [9] are based on the online facility location algorithm of [24]. For the facility location problem, the number of clusters is not part of the input (as it is for k-means), but rather a facility cost is given; an algorithm to solve this problem may have as many clusters as it desires in its output, simply by denoting some point as a facility. The solution cost is then the sum of the resulting k-means cost ("service cost") and the total paid for facilities.

Our algorithm runs the online facility location algorithm of [24] with a small facility cost until we have more than κ ∈ Θ(k log n) facilities. It then increases the facility cost, re-evaluates the current facilities, and continues with the stream. This repeats until the entire stream is read. The details of the algorithm are given as Algorithm 1. The major differences between our algorithm and that of [9] are as follows. We ignore the overall service cost in determining when to end a phase and raise our facility cost f. Further, the number of facilities which must open to end a phase can be any κ ∈ Θ(k log n); the constants do not depend directly on the competitive ratio of online facility location (as they did in [9]). Finally, we omit the somewhat complicated end-of-phase analysis of [9], which used matching to guarantee that the number of facilities decreased substantially with each phase and allowed bounding the number of phases by n/(k log n). We observe that our number of phases will be bounded by log_β OPT; while this is not technically bounded in terms of n, in practice this term should be smaller than the linear number of phases implied in previous work.
Algorithm 1 Fast streaming k-means (data stream, k, κ, β)

 1: Initialize f = 1/(k(1 + log n)) and an empty set K
 2: while some portion of the stream remains unread do
 3:   while |K| ≤ κ = Θ(k log n) and some portion of the stream is unread do
 4:     Read the next point x from the stream
 5:     Measure δ = min_{y∈K} d(x, y)²
 6:     if an event of probability δ/f occurs then
 7:       set K ← K ∪ {x}
 8:     else
 9:       assign x to its closest facility in K
10:   if stream not exhausted then
11:     while |K| > κ do
12:       Set f ← βf
13:       Move each x ∈ K to the center-of-mass of its points
14:       Let w_x be the number of points assigned to x ∈ K
15:       Initialize K̂ containing the first facility from K
16:       for each x ∈ K do
17:         Measure δ = min_{y∈K̂} d(x, y)²
18:         if an event of probability w_x δ/f occurs then
19:           set K̂ ← K̂ ∪ {x}
20:         else
21:           assign x to its closest facility in K̂
22:       Set K ← K̂
23:   else
24:     Run batch k-means algorithm on weighted points K
25:     Perform ball k-means (as per [9]) on the resulting set of clusters

We will give a theoretical analysis of our modified algorithm to obtain a constant approximation bound. Our constant is substantially smaller than those implicit in [9], with most of the loss occurring in the final non-streaming k-means algorithm to consolidate κ means down to k. The analysis will follow from the theorems stated below; proofs of these theorems are deferred to the appendix.

Theorem 1. Suppose that our algorithm completes the data stream when the facility cost is f. Then the overall solution prior to the final re-clustering has expected service cost at most κfβ/(β − 1), and the probability of being within 1 + ε of the expected service cost is at least 1 − 1/poly(n).

Theorem 2. With probability at least 1 − 1/poly(n), the algorithm will either halt with f ≤ Θ(C*/κ)β, where C* is the optimum k-means cost, or it will halt within one phase of exceeding this value. Furthermore, for large values of κ and β, the hidden constant in the Θ(·) approaches 4.

Note that while the worst-case bound of roughly 4 proven here may not seem particularly strong, unlike the previous work of [9], the worst-case performance is not directly encoded into the algorithm. In practice, we would expect the performance of online facility location to be substantially better than worst-case (in fact, if the ordering of points in the stream is non-adversarial there is a proof to this effect in [24]); in addition the assumption was made that distances add (i.e. the triangle inequality is tight), which will not be true in practice (especially for points in low-dimensional space). We also assumed that using more than k facilities does not substantially help the optimum service cost (also unlikely to be true for real data). Combining these, it would be unsurprising if our service cost was actually better than optimum at the end of the data stream (of course, we used many more facilities than optimum, so it is not precisely a fair comparison). The following theorem summarizes the worst-case performance of the algorithm; its proof is direct from Theorems 1 and 2.

Theorem 3. The cost of our algorithm's final κ-means solution is at most O(C*), where C* is the cost of the optimum k-means solution, with probability 1 − 1/poly(n). If κ is a large constant times k log n and β > 2 is fairly large, then the cost of our algorithm's solution will approach C* · 4β²/(β − 1); the extra β factor is due to "overshooting" the best facility cost f.

We note that if we run the streaming part of the algorithm M times in parallel, we can take the solution with the smallest final facility cost.
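To make the streaming phase of Algorithm 1 concrete, the sketch below (our own, not the paper's C++ implementation) implements the facility-opening coin flip and the phase transition; it omits the facility weights w_x, performs only a single consolidation pass per phase, and leaves out the final batch k-means and ball k-means steps.

import math, random

def stream_kmeans_facilities(stream, k, n, beta=2.0):
    """One pass of the streaming phase: open a facility at point x with
    probability delta/f (lines 4-9 of Algorithm 1), and raise the
    facility cost by beta whenever more than kappa facilities are open."""
    kappa = int(k * (1 + math.log(n)))
    f = 1.0 / (k * (1 + math.log(n)))
    facilities = []
    for x in stream:
        if not facilities:
            facilities.append(x)
            continue
        delta = min(sum((xi - yi) ** 2 for xi, yi in zip(x, y))
                    for y in facilities)
        if random.random() < delta / f:
            facilities.append(x)         # open a facility at x
        if len(facilities) > kappa:
            f *= beta                    # end of phase: raise facility cost
            kept = []
            for y in facilities:         # re-open old facilities under new f
                d = (min(sum((yi - zi) ** 2 for yi, zi in zip(y, z))
                         for z in kept) if kept else float("inf"))
                if random.random() < d / f:
                    kept.append(y)
            # the full algorithm repeats this until |K| <= kappa
            facilities = kept
    return facilities, f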
This improves the approximation factor to roughly 4β^{1+1/M}/(β − 1), which approaches 4 in the limit. Of course, increasing κ can substantially increase the memory requirement, and increasing M can increase both memory and running time requirements.

When the algorithm terminates, we have a set of κ weighted means which we must reduce to k means. A theoretically sound approach involves mapping these means back to randomly selected points from the original set (these can be maintained in a streaming manner) and then approximating k-means on κ points using a non-streaming algorithm. The overall approximation ratio will be twice the ratio established by our algorithm (we lose a factor of two by mapping back to the original points) plus the approximation ratio for the non-streaming algorithm. If we use the algorithm of [20] along with a large κ, we will get an approximation factor of twice 4 plus 9 + ε, for roughly 17. Ball k-means can then reduce the approximation factor to 1 + O(σ²) if the inputs were σ-separable (as in [25] and [9]; the hidden constant will be reduced by our more accurate algorithm).

3 Approximate Nearest Neighbor

The most time-consuming step in our algorithm is measuring δ in lines 5 and 17. This requires as many as κ distance computations; there are a number of results enabling fast computation of approximate nearest neighbors, and applying these results will improve our running time. If we can assume that errors in nearest neighbor computation are independent from one point to the next (and that the expected result is good), our analysis from the previous section applies. Unfortunately, many of the algorithms construct a random data structure to store the facilities, then use this structure to resolve all queries; this type of approach implies that errors are not independent from one query to the next. Nonetheless we can obtain a constant approximation for sufficiently large choices of β.

For our empirical results, we will use a very simple approximate nearest-neighbor algorithm based on random projection, sketched below. This has reasonable performance in expectation, but is not independent from one step to the next. While the theoretical results from this particular approach are not very strong, it works very well in our experiments. For this implementation, a vector w is created, with each of the d dimensions chosen independently and uniformly at random from [0, 1). We store our facilities sorted by their inner product with w. When a new point x arrives, instead of taking O(κ) time to determine its (exact) nearest neighbor, we instead use O(log κ) time to find the two facilities whose projections x · w lies between. We determine the (exact) closer of these two facilities; this determines the value of δ in lines 5 and 17 and the "closest" facility in lines 9 and 21.

Theorem 4. If our approximate nearest neighbor computation finds a facility with distance at most ν times the distance to the closest facility in expectation, then the approximation ratio increases by a constant factor.

We defer the explanation of how we form the stronger theoretical result to the appendix.
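A minimal sketch of this random-projection search follows; the class name and the use of Python's bisect module are our own choices. In a production implementation the sorted list would be a balanced search tree so that insertions, like queries, also take logarithmic time.

import bisect, random

class ProjectedFacilities:
    """Facilities kept sorted by their inner product with a random
    direction w; a nearest-neighbor query inspects only the two
    facilities bracketing the query's projection."""

    def __init__(self, dim):
        self.w = [random.random() for _ in range(dim)]
        self.keys = []    # projections, kept sorted
        self.points = []  # facilities, in the same order

    def _proj(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def add(self, x):
        key = self._proj(x)
        i = bisect.bisect(self.keys, key)
        self.keys.insert(i, key)
        self.points.insert(i, x)

    def approx_nearest(self, x):
        """Return (squared distance, facility) for the closer of the two
        facilities whose projections bracket proj(x)."""
        i = bisect.bisect(self.keys, self._proj(x))
        candidates = self.points[max(i - 1, 0):i + 1]
        return min((sum((a - b) ** 2 for a, b in zip(x, y)), y)
                   for y in candidates)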
4 Empirical Evaluation

A comparison of algorithms on real data sets gives a great deal of insight as to their relative performance. Real data is not worst-case, implying that neither the asymptotic performance nor the running-time bounds claimed in theoretical results are necessarily tight. Of course, empirical evaluation depends heavily on the data sets selected for the experiments. We selected data sets which have been used previously to demonstrate streaming algorithms. A number of the data sets analyzed in previous work were not particularly large, probably so that batch-processing algorithms would terminate quickly on those inputs. The main motivation for streaming is very large data sets, so we are more interested in sets that might be difficult to fit in main memory, and focused on the largest examples. We looked to [2], and used the two biggest data sets they considered. These were the BigCross dataset (11,620,300 points in 57-dimensional space, available from [1]) and the Census1990 dataset (2,458,285 points in 68 dimensions, available from [15]). All the other data sets in [2, 4] were either subsets of these or were well under a half million points.

A necessary input for each of these algorithms is the desired number of clusters. Previous work chose k seemingly arbitrarily; typical values were of the form {5, 10, 15, 20, 25}. While this input provides a well-defined geometry problem, it fails to capture any information about how k-means is used in practice and need not lead to separable data. Instead, we want to select k such that the best k-means solution is much cheaper than the best (k − 1)-means solution. Since k-means is NP-hard, we cannot solve large instances to optimality. For the Census dataset we ran several iterations of the algorithm of [25] for each of many values of k. We took the best observed cost for each value of k, and found the four values of k minimizing the ratio of the k-means cost to the (k − 1)-means cost (a sketch of this selection rule appears below). This was not possible for the larger BigCross dataset. Instead, we ran a modified version of our algorithm; at the end of a phase, it adjusts the facility cost and restarts the stream. This avoids the problem of compounding the approximation factor at the end of a phase. As with Census, we ran this for consecutive values of k and chose the best ratios of observed values; we chose two, rather than four, so that we could finish our experiments in a reasonable amount of time. Our approach to selecting k is closer to what is done in practice, and is more likely to yield meaningful results.

We do not compare to the algorithm of [9]. First, its memory use is not configurable, making it not fit into the common baseline that we will define shortly. Second, the memory requirements and runtime, while asymptotically nice, have large leading constants that cause it to be impractical. In fact, it was an attempt to implement this algorithm that initially motivated the work on this paper.
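The selection rule can be summarized in a few lines. In this sketch (ours, not the paper's code), the costs dictionary holds hypothetical best observed k-means costs, standing in for repeated runs of the algorithm of [25].

def select_k(costs, how_many=4):
    """Given costs[k] = best observed k-means cost for consecutive k,
    return the values of k minimizing costs[k] / costs[k - 1], i.e.,
    the sharpest drops in cost when moving from k - 1 to k centers."""
    ratios = {k: costs[k] / costs[k - 1]
              for k in sorted(costs) if k - 1 in costs}
    return sorted(ratios, key=ratios.get)[:how_many]

# Hypothetical usage with made-up costs:
costs = {7: 9.1e13, 8: 4.0e13, 9: 3.8e13, 10: 3.7e13,
         11: 3.0e13, 12: 1.9e13, 13: 1.8e13}
print(select_k(costs, how_many=2))  # [8, 12]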
With smaller amounts, however, this isn't always possible, and when it is possible, it can produce worse actual running times than a disk-based approach. As our goal is to judge streaming algorithms under low memory conditions, we used the first approach, which is more fitting to such a constraint. Each algorithm3 was programmed in C/C++, compiled with g++, and run under Ubuntu Linux (10.04 LTS) on HP Pavilion p6520fDesktop PC, with an AMD Athlon II X4 635 Processor running at 2.9 GhZ and with 6 GB main memory (although nowhere near the entirety of this was used by any algorithm). For StreamKM++, the authors' implementation [2], also in C, was used instead. With all algorithms, the reported cost is determined by taking the resulting k facilities and computing the k-means cost across the entire dataset. The time to compute this cost is not included in the reported running times of the algorithms. Each test case was run 10 times and the average costs and running times were reported. 4.2 Experimental Design Our goal is to compare the algorithms at a common basepoint. Instead of just comparing for the same dataset and cluster count, we further constrained each to use the same amount of memory (in terms of number of points stored in random access). The memory constraints were chosen to reflect the usage of small amounts of memory that are close to the algorithms' designers' specifications, where possible. Ailon et al [4] suggest -v:n:k memory for the batch process; this memory availability is marked in the charts by an asterisk. The suggestion from [2] for a coreset of size 200k was not run for all algorithms, as the amount of memory necessary for computing a coreset of this size is much larger than the other cases, and our goal is to compare the algorithms at a small memory limit. This does produce a drop in solution quality compared to running the algorithm at their suggested parameters, although their approach remains competitive. Finally, our algorithm suggests memory of ~ = k log n or a small constant times the same. In each case, the memory constraint dictates the parameters; for the divide and conquer algorithm, this is simply the batch size. The coreset size is also dictated by the available memory. Our algorithm is a little more parametrizable; when M memory is available, we allowed ~ = M /5 and each facility to have four samples. 3Visit http://web . engr. oregonstate. edu/ - shindler / to access code for our algorithms 6 JIIIOurs+ANN IIIlD&C JIIID&C 2520 3350 MemoryAvailable MemoryAvailaible Figure 1: Census Data, k=8, cost ~ Figure 2: Census Data, k=8, time 1I0ur,,+ANN 4DDE+13 1I0&C .D&C 3780 5040 Memory Availahle MemOfyAvaUahle Figure 3: Census Data, k= 12, cost ~ Figure 4: Census Data, k=12, time 1.50E""-14????? .Our!l+ANN MemoryAvailahle MemoryAvaiEable Figure 5: BigCross Data, k=13, cost Figure 6: BigCross Data, k=13, time ; Ours 1lI0urs-:-ANN IlIStreamKMH MemoryAvaifable MemoryAVlliiable Figure 7: BigCross Data, k=24, cost Figure 8: BigCross Data, k=24, time 7 4.3 Discussion of Results We see that our algorithms are much faster than the D&C algorithm, while having a comparable (and often better) solution quality. We find that we compare well to StreamKM++ in average results, with a closer standard deviation and a better sketch of the data produced. Furthermore, our algorithm stands to gain the most by improved solutions to batch k-means, due to the better representative sample present after the stream is processed. 
The prohibitively high running time of the divide-and-conquer algorithm [4] is due to the many repeated instances of running their k-means# algorithm on each batch of the given size. For sufficiently large memory, this is not problematic, as very few batches will need this treatment. Unfortunately, with very small locally available memory, there will be an immense number of repeated calls, and the overall running time will suffer greatly. In particular, the observed running time was much worse than the other approaches. For the Census dataset, k = 12 case, for example, the slowest run of our algorithm (20 minutes) and the fastest run of the D&C algorithm (125 minutes) occurred at the same case. It is because of this discrepancy that we present the chart of algorithm running times as a log-plot. Furthermore, due to the prohibitively high running time on the smaller data set, we omitted the divide-and-conquer algorithm for the experiment with the larger set.

The decline in accuracy for StreamKM++ at very low memory can be partially explained by the Θ(k² log⁸ n) points' worth of memory needed for a strong guarantee in previous theory work [12]. However, the fact that the algorithm is able to achieve a good approximation in practice while using far less than that amount of memory suggests that improved provable bounds for coreset algorithms may be on the horizon. We should note that the performance of the algorithm declines sharply as the memory difference with the authors' specification grows, but gains accuracy as the memory grows.

All three algorithms can be described as computing a weighted sketch of the data, and then solving k-means on that sketch. The final approximation ratios can be described as a(1 + ε) where a is the loss from the final batch algorithm. The coreset ε is a direct function of the memory allowed to the algorithm, and can be made arbitrarily small. However, the memory needed to provably reduce ε to a small constant is quite substantial, and while StreamKM++ does produce a good resulting clustering, it is not immediately clear that the discovery of better batch k-means algorithms would improve their solution quality. Our algorithm's ε represents the ratio of the cost of our κ-means solution to the cost of the optimum k-means solution. The provable value is a large constant, but since κ is much larger than k, we would expect better performance in practice, and we observe this effect in our experiments. For our algorithm, the observed value of 1 + ε has been typically between 1 and 3, whereas the D&C approach did not yield one better than 24, and was high (low thousands) for the very low memory conditions. The coreset algorithm was the worst, with even the best values in the mid ten figures (tens to hundreds of billions). The low ratio for our algorithm also suggests that our κ facilities are a good sketch of the overall data, and thus our observed accuracy can be expected to improve as more accurate batch k-means algorithms are discovered.

Acknowledgments

We are grateful to Christian Sohler's research group for providing their code for the StreamKM++ algorithm. We also thank Jennifer Wortman Vaughan, Thomas G. Dietterich, Daniel Sheldon, Andrea Vattani, and Christian Sohler for helpful feedback on drafts of this paper. This work was done while all the authors were at UCLA; at that time, Adam Meyerson and Michael Shindler were partially supported by NSF CIF Grant CCF-1016540.
References

[1] http://www.cs.uni-paderborn.de/en/fachgebiete/ag-bloemer/research/clustering/streamkmpp.
[2] Marcel R. Ackermann, Christian Lammersen, Marcus Martens, Christoph Raupach, Christian Sohler, and Kamil Swierkot. StreamKM++: A clustering algorithm for data streams. In ALENEX, 2010.
[3] Charu C. Aggarwal, editor. Data Streams: Models and Algorithms. Springer, 2007.
[4] Nir Ailon, Ragesh Jaiswal, and Claire Monteleoni. Streaming k-means approximation. In NIPS, 2009.
[5] Khaled Alsabti, Sanjay Ranka, and Vineet Singh. An efficient k-means clustering algorithm. In HPDM, 1998.
[6] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Communications of the ACM, January 2008.
[7] David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In SODA, 2007.
[8] Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala, and Vinayaka Pandit. Local search heuristics for k-median and facility location problems. In STOC, 2001.
[9] Vladimir Braverman, Adam Meyerson, Rafail Ostrovsky, Alan Roytman, Michael Shindler, and Brian Tagiku. Streaming k-means on well-clusterable data. In SODA, 2011.
[10] Mihai Badoiu, Sariel Har-Peled, and Piotr Indyk. Approximate clustering via core-sets. In STOC, 2002.
[11] Moses Charikar, Liadan O'Callaghan, and Rina Panigrahy. Better streaming algorithms for clustering problems. In STOC, 2003.
[12] Ke Chen. On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications. SIAM J. Comput., 2009.
[13] Dan Feldman, Morteza Monemizadeh, and Christian Sohler. A PTAS for k-means clustering based on weak coresets. In SCG, 2007.
[14] Gereon Frahling and Christian Sohler. Coresets in dynamic geometric data streams. In STOC, 2005.
[15] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[16] Sudipto Guha, Adam Meyerson, Nina Mishra, Rajeev Motwani, and Liadan O'Callaghan. Clustering data streams: Theory and practice. In TKDE, 2003.
[17] Sudipto Guha, Nina Mishra, Rajeev Motwani, and Liadan O'Callaghan. Clustering data streams. In FOCS, 2000.
[18] Sariel Har-Peled and Soham Mazumdar. On coresets for k-means and k-median clustering. In STOC, 2004.
[19] Anil Kumar Jain, M. Narasimha Murty, and Patrick Joseph Flynn. Data clustering: a review. ACM Computing Surveys, 31(3), September 1999.
[20] Tapas Kanungo, David Mount, Nathan Netanyahu, Christine Piatko, Ruth Silverman, and Angela Wu. A local search approximation algorithm for k-means clustering. In SCG, 2002.
[21] Stuart Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory (special issue on quantization), 1982.
[22] James MacQueen. Some methods for classification and analysis of multivariate observations. In Berkeley Symposium on Mathematical Statistics and Probability, 1967.
[23] Joel Max. Quantizing for minimum distortion. IEEE Transactions on Information Theory, 1960.
[24] Adam Meyerson. Online facility location. In FOCS, 2001.
[25] Rafail Ostrovsky, Yuval Rabani, Leonard Schulman, and Chaitanya Swamy. The effectiveness of Lloyd-type methods for the k-means problem. In FOCS, 2006.
[26] Rina Panigrahy. Entropy based nearest neighbor search in high dimensions. In SODA, 2006.
[27] Dan Pelleg and Andrew Moore. Accelerating exact k-means algorithms with geometric reasoning. In KDD, 1999.
[28] Steven J. Phillips. Acceleration of k-means and related clustering problems. In ALENEX, 2002.
[29] Andrea Vattani. k-means requires exponentially many iterations even in the plane. Discrete & Computational Geometry, June 2011.
Scalable Training of Mixture Models via Coresets

Dan Feldman (MIT), Matthew Faulkner (Caltech), Andreas Krause (ETH Zurich)

Abstract

How can we train a statistical mixture model on a massive data set? In this paper, we show how to construct coresets for mixtures of Gaussians and natural generalizations. A coreset is a weighted subset of the data, which guarantees that models fitting the coreset will also provide a good fit for the original data set. We show that, perhaps surprisingly, Gaussian mixtures admit coresets of size independent of the size of the data set. More precisely, we prove that a weighted set of O(dk³/ε²) data points suffices for computing a (1 + ε)-approximation for the optimal model on the original n data points. Moreover, such coresets can be efficiently constructed in a map-reduce style computation, as well as in a streaming setting. Our results rely on a novel reduction of statistical estimation to problems in computational geometry, as well as new complexity results about mixtures of Gaussians. We empirically evaluate our algorithms on several real data sets, including a density estimation problem in the context of earthquake detection using accelerometers in mobile phones.

1 Introduction

We consider the problem of training statistical mixture models, in particular mixtures of Gaussians and some natural generalizations, on massive data sets. Such data sets may be distributed across a cluster, or arrive in a data stream, and have to be processed with limited memory. In contrast to parameter estimation for models with compact sufficient statistics, mixture models generally require inference over latent variables, which in turn depends on the full data set. In this paper, we show that Gaussian mixture models (GMMs), and some generalizations, admit small coresets: a coreset is a weighted subset of the data which guarantees that models fitting the coreset will also provide a good fit for the original data set. Perhaps surprisingly, we show that Gaussian mixtures admit coresets of size independent of the size of the data set. We focus on λ-semi-spherical Gaussians, where the covariance matrix Σᵢ of each component i has eigenvalues bounded in [λ, 1/λ], but some of our results generalize even to the semi-definite case. In particular, we show how, given a data set D of n points in ℝᵈ, ε > 0 and k ∈ ℕ, one can efficiently construct a weighted set C of O(dk³/ε²) points, such that for any mixture of k λ-semi-spherical Gaussians θ = [(w₁, μ₁, Σ₁), …, (w_k, μ_k, Σ_k)] it holds that the log-likelihood ln P(D | θ) of D under θ is approximated by the (properly weighted) log-likelihood ln P(C | θ) of C under θ to arbitrary accuracy as ε → 0. Thus solving the estimation problem on the coreset C (e.g., using weighted variants of the EM algorithm, see Section 3.3) is almost as good as solving the estimation problem on the large data set D. Our algorithm for constructing C is based on adaptively sampling points from D and is simple to implement. Moreover, coresets can be efficiently constructed in a map-reduce style computation, as well as in a streaming setting (using space and update time per point of poly(dk ε⁻¹ log n log(1/δ))). Existence and construction of coresets have been investigated for a number of problems in computational geometry (such as k-means and k-median) in many recent papers (cf. surveys in [1, 2]). In this paper, we demonstrate how these techniques from computational geometry can be lifted to the realm of statistical estimation.
As a by-product of our analysis, we also close an open question on the VC dimension of arbitrary mixtures of Gaussians. We evaluate our algorithms on several synthetic and real data sets. In particular, we use our approach for density estimation for acceleration data, motivated by an application in earthquake detection using mobile phones.

2 Background and Problem Statement

Fitting mixture models by MLE. Suppose we are given a data set D = {x₁, …, x_n} ⊆ ℝᵈ. We consider fitting a mixture of Gaussians θ = [(w₁, μ₁, Σ₁), …, (w_k, μ_k, Σ_k)], i.e., the distribution P(x | θ) = Σ_{i=1}^k wᵢ N(x; μᵢ, Σᵢ), where w₁, …, w_k ≥ 0 are the mixture weights, Σᵢ wᵢ = 1, and μᵢ and Σᵢ are mean and covariance of the i-th mixture component, which is modeled as a multivariate normal distribution N(x; μᵢ, Σᵢ) = (1/√|2πΣᵢ|) exp(−½ (x − μᵢ)ᵀ Σᵢ⁻¹ (x − μᵢ)). In Section 4, we will discuss extensions to more general mixture models. Assuming the data was generated i.i.d., the negative log-likelihood of the data is L(D | θ) = −Σⱼ ln P(xⱼ | θ), and we wish to obtain the maximum likelihood estimate (MLE) of the parameters θ* = argmin_{θ∈C} L(D | θ), where C is a set of constraints ensuring that degenerate solutions are avoided¹. Hereby, for a symmetric matrix A, spec A is the set of all eigenvalues of A. We define C = C_λ = {θ = [(w₁, μ₁, Σ₁), …, (w_k, μ_k, Σ_k)] | ∀i : spec(Σᵢ) ⊆ [λ, 1/λ]} to be the set of all mixtures of k Gaussians θ such that all the eigenvalues of the covariance matrices of θ are bounded between λ and 1/λ for some small λ > 0.

¹Equivalently, C can be interpreted as prior thresholding.

Approximating the log-likelihood. Our goal is to approximate the data set D by a weighted set C = {(γ₁, x′₁), …, (γ_m, x′_m)} ⊆ ℝ × ℝᵈ, such that L(D | θ) ≈ L(C | θ) for all θ, where we define L(C | θ) = −Σᵢ γᵢ ln P(x′ᵢ | θ). What kind of approximation accuracy may we hope to expect? Notice that there is a nontrivial issue of scale: suppose we have an MLE θ* for D, and let α > 0. Then straightforward linear algebra shows that we can obtain an MLE θ*_α for a scaled data set αD = {αx : x ∈ D} by simply scaling all means by α, and covariance matrices by α². For the log-likelihood, however, it holds that L(αD | θ*_α) = d ln α + L(D | θ*). Therefore, optimal solutions at one scale can be efficiently transformed to optimal solutions at a different scale, while maintaining the same additive error. This means that any algorithm which achieves absolute error ε at any scale could be used to achieve parameter estimates (for means, covariances) with arbitrarily small error, simply by applying the algorithm to a scaled data set and transforming back the obtained solution. An alternative, scale-invariant approach may be to strive towards approximating L(D | θ) up to multiplicative error (1 + ε). Unfortunately, this goal is also hard to achieve: choosing a scaling parameter α such that d ln α + L(D | θ*) = 0 would require any algorithm that achieves any bounded multiplicative error to essentially incur no error at all when evaluating L(αD | θ*). The above observations hold even for the case k = 1 and Σ = I, where the mixture θ consists of a single Gaussian, and the log-likelihood is the sum of squared distances to a point μ plus an additive term. Motivated by the scaling issues discussed above, we use the following error bound that was suggested in [3] (who studied the case where all Gaussians are identical spheres). We decompose the negative log-likelihood L(D | θ) of a data set D as

L(D | θ) = −Σ_{j=1}^n ln Σ_{i=1}^k (wᵢ/√|2πΣᵢ|) exp(−½ (xⱼ − μᵢ)ᵀ Σᵢ⁻¹ (xⱼ − μᵢ)) = −n ln Z(θ) + φ(D | θ),

where Z(θ) = Σᵢ wᵢ/√|2πΣᵢ| is a normalizer, and the function φ is defined as

φ(D | θ) = −Σ_{j=1}^n ln Σ_{i=1}^k (wᵢ/(Z(θ)√|2πΣᵢ|)) exp(−½ (xⱼ − μᵢ)ᵀ Σᵢ⁻¹ (xⱼ − μᵢ)).

Hereby, Z(θ) plays the role of a normalizer, which can be computed exactly, independently of the set D. φ(D | θ) captures all dependencies of L(D | θ) on D, and via Jensen's inequality, it can be seen that φ(D | θ) is always nonnegative. We can now use this term φ(D | θ) as a reference for our error bounds. In particular, we call θ̃ a (1 + ε)-approximation for θ if (1 − ε) φ(D | θ) ≤ φ(D | θ̃) ≤ φ(D | θ)(1 + ε).

Coresets. We call a weighted data set C a (k, ε)-coreset for another (possibly weighted) set D ⊆ ℝᵈ, if for all mixtures θ ∈ C of k Gaussians it holds that (1 − ε) φ(D | θ) ≤ φ(C | θ) ≤ φ(D | θ)(1 + ε). Hereby φ(C | θ) is generalized to weighted data sets C in the natural way (weighing the contribution of each summand x′ⱼ ∈ C by γⱼ). Thus, as ε → 0, for a sequence of (k, ε)-coresets C_ε we have that sup_{θ∈C} |L(C_ε | θ) − L(D | θ)| → 0, i.e., L(C_ε | θ) uniformly (over θ ∈ C) approximates L(D | θ). Further, under the additional condition that all variances are sufficiently large (formally, (2π)ᵈ ∏_{σ∈spec(Σᵢ)} σ ≥ 1 for all components i), the log-normalizer ln Z(θ) is negative, and consequently the coreset in fact provides a multiplicative (1 + ε) approximation to the log-likelihood, i.e., (1 − ε) L(D | θ) ≤ L(C | θ) ≤ L(D | θ)(1 + ε). More details can be found in the supplemental material.

[Figure 1: Illustration of the coreset construction for an example data set (a). (b,c) show two iterations of constructing the set B. Solid squares are points sampled uniformly from the remaining points; hollow squares are points selected in previous iterations. Red color indicates the half of the points furthest away from B, which are kept for the next iteration. (d) Final approximate clustering B on top of the original data set. (e) Induced non-uniform sampling distribution: the radius of a circle indicates probability; color indicates weight, ranging from red (high weight) to yellow (low weight). (f) Coreset sampled from the distribution in (e).]

Note that if we had access to a (k, ε)-coreset C, then we could reduce the problem of fitting a mixture model on D to one of fitting a model on C, since the optimal solution θ_C is a good approximation (in terms of log-likelihood) of θ*. While finding the optimal θ_C is a difficult problem, one can use a (weighted) variant of the EM algorithm to find a good solution. Moreover, if |C| ≪ |D|, running EM on C may be orders of magnitude faster than solving it on D. In Section 3.3, we give more details about solving the density estimation problem on the coreset. The key question is whether small (k, ε)-coresets exist, and whether they can be efficiently constructed. In the following, we answer this question affirmatively. We show that, perhaps surprisingly, one can efficiently find coresets C of size independent of the size n of D, and with polynomial dependence on 1/ε, d and k.
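As a concrete illustration of the decomposition above, the following NumPy sketch (ours, not the authors' code) evaluates L(D | θ), Z(θ) and φ(D | θ) and makes the identity L = −n ln Z + φ explicit:

```python
import numpy as np

def gmm_decomposition(X, w, mus, Sigmas):
    """Return (L, Z, phi) for a GMM theta = [(w_i, mu_i, Sigma_i)],
    where L is the negative log-likelihood and L = -n ln Z + phi."""
    n, d = X.shape
    comps = []
    for wi, mu, S in zip(w, mus, Sigmas):
        diff = X - mu
        maha = np.einsum('nd,de,ne->n', diff, np.linalg.inv(S), diff)
        comps.append(wi / np.sqrt(np.linalg.det(2 * np.pi * S))
                     * np.exp(-0.5 * maha))
    L = -np.log(np.sum(comps, axis=0)).sum()
    Z = sum(wi / np.sqrt(np.linalg.det(2 * np.pi * S))
            for wi, S in zip(w, Sigmas))
    phi = L + n * np.log(Z)   # rearranging L = -n ln Z + phi
    return L, Z, phi
```

Since the rescaled weights wᵢ/(Z(θ)√|2πΣᵢ|) sum to one and each exponential factor is at most one, the inner sum defining φ is at most one, which is exactly why φ(D | θ) ≥ 0.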
3 Efficient Coreset Construction via Adaptive Sampling

Naive approach: uniform sampling. A naive approach towards approximating D would be to just pick a subset C uniformly at random. In particular, suppose the data set is generated from a mixture of two spherical Gaussians (Σᵢ = I) with weights w₁ = 1 − 1/√n and w₂ = 1/√n. Unless m = Ω(√n) points are sampled, with constant probability no data point generated from Gaussian 2 is selected. By moving the means of the Gaussians arbitrarily far apart, L(D | θ_C) can be made arbitrarily worse than L(D | θ_D), where θ_C and θ_D are MLEs on C and D respectively. Thus, even for two well-separated Gaussians, uniform sampling can perform arbitrarily poorly. This example already suggests that, intuitively, in order to achieve small multiplicative error, we must devise a sampling scheme that adaptively selects representative points from all "clusters" present in the data set. However, this suggests that obtaining a coreset requires solving a chicken-and-egg problem, where we need to understand the density of the data to obtain the coreset, but simultaneously would like to use the coreset for density estimation.

Better approximation via adaptive sampling. The key idea behind the coreset construction is that we can break the chicken-and-egg problem by first obtaining a rough approximation B of the clustering solution (using more than k components, but far fewer than n), and then using this solution to bias the random sampling. Surprisingly, a simple procedure which iteratively samples a small number β of points, and removes the half of the data set closest to the sampled points, provides a sufficiently accurate first approximation B for this purpose. This initial clustering is then used to sample the data points comprising coreset C according to probabilities which are roughly proportional to the squared distance to the set B. This non-uniform random sampling can be understood as an importance-weighted estimate of the log-likelihood L(D | θ), where the weights are optimized in order to reduce the variance. The same general idea has been found successful in constructing coresets for geometric clustering problems such as k-means and k-median [4]. The pseudocode for obtaining the approximation B, and for using it to obtain coreset C, is given in Algorithm 1.

Algorithm 1: Coreset construction
Input: Data set D, ε, δ, k
Output: Coreset C = {(γ(x₁), x₁), …, (γ(x_{|C|}), x_{|C|})}
  D′ ← D; B ← ∅;
  while |D′| > 10dk ln(1/δ) do
    Sample a set S of β = 10dk ln(1/δ) points uniformly at random from D′;
    Remove the ⌈|D′|/2⌉ points x ∈ D′ closest to S (i.e., minimizing dist(x, S)) from D′;
    Set B ← B ∪ S;
  Set B ← B ∪ D′;
  for each b ∈ B do
    D_b ← the points in D whose closest point in B is b (ties broken arbitrarily);
  for each b ∈ B and x ∈ D_b do
    m(x) ← ⌈5/|D_b| + dist(x, B)² / Σ_{x′∈D} dist(x′, B)²⌉;
  Pick a non-uniform random sample C of ⌈10dk|B|² ln(1/δ)/ε²⌉ points from D, where for every x′ ∈ C and x ∈ D, we have x′ = x with probability m(x)/Σ_{x″∈D} m(x″);
  for each x′ ∈ C do
    γ(x′) ← Σ_{x∈D} m(x) / (|C| · m(x′));

We have the following result, proved in the supplemental material:

Theorem 3.1. Suppose C is sampled from D using Algorithm 1 for parameters ε, δ and k. Then, with probability at least 1 − δ, it holds that for all θ ∈ C_λ, φ(D | θ)(1 − ε) ≤ φ(C | θ) ≤ φ(D | θ)(1 + ε).
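The following NumPy sketch is our reading of Algorithm 1, with the constants as stated above; it is an illustration of the sampling scheme, not the authors' implementation (in particular, the dense pairwise-distance computations are chosen for clarity, not scalability).

```python
import numpy as np

def coreset(X, k, eps, delta, rng=np.random.default_rng(0)):
    """Adaptive-sampling coreset construction (sketch of Algorithm 1).
    Returns the sampled points and their weights gamma."""
    n, d = X.shape
    beta = int(np.ceil(10 * d * k * np.log(1 / delta)))
    D_prime, B = X, np.empty((0, d))
    while len(D_prime) > beta:
        S = D_prime[rng.choice(len(D_prime), size=beta, replace=False)]
        B = np.vstack([B, S])
        # remove the half of D' closest to S, keep the far half
        d2 = ((D_prime[:, None, :] - S[None, :, :]) ** 2).sum(-1).min(1)
        D_prime = D_prime[np.argsort(d2)[(len(D_prime) + 1) // 2:]]
    B = np.vstack([B, D_prime])

    d2 = ((X[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    owner, dist2 = d2.argmin(1), d2.min(1)
    cluster_size = np.bincount(owner, minlength=len(B))
    m = np.ceil(5.0 / cluster_size[owner] + dist2 / dist2.sum())
    p = m / m.sum()
    size = int(np.ceil(10 * d * k * len(B) ** 2 * np.log(1 / delta) / eps ** 2))
    idx = rng.choice(n, size=size, p=p)        # i.i.d., non-uniform
    gamma = m.sum() / (size * m[idx])          # importance weights
    return X[idx], gamma
```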
In our experiments, we compare the performance of clustering on coresets constructed via adaptive sampling vs. clustering on a uniform sample. The size of C in Algorithm 1 depends on |B|² = Θ(log² n). By replacing B in the algorithm with a constant-factor approximation B′, |B′| = k, for the k-means problem, we can get a coreset C of size independent of n. Such a set B′ can be computed in O(ndk) time, either by applying exhaustive search on the output C of the original Algorithm 1 or by using one of the existing constant-factor approximation algorithms for k-means (say, [5]).

3.1 Sketch of Analysis: Reduction to Euclidean Spaces

For space limitations, the proof of Theorem 3.1 is included in the supplemental material; here we only provide a sketch of the analysis, carrying the main intuition. The key insight in the proof is that the contribution ln P(x | θ) to the likelihood L(D | θ) can be expressed in the following way:

Lemma 3.2. There exist functions π, ψ, and f such that, for any point x ∈ ℝᵈ and mixture model θ, ln P(x | θ) = −f_{π(x)}(ψ(θ)) + ln Z(θ), where f_x̃(y) = −ln Σᵢ w̃ᵢ exp(−Wᵢ dist(x̃ − μ̃ᵢ, sᵢ)²). Hereby, π is a function that maps a point x ∈ ℝᵈ into x̃ = π(x) ∈ ℝ²ᵈ, and ψ is a function that maps a mixture model θ into a tuple y = (s, w̃, μ̃, W), where w̃ is a k-tuple of nonnegative weights w̃₁, …, w̃_k summing to 1, s = s₁, …, s_k ⊆ ℝ²ᵈ is a set of k d-dimensional subspaces that are weighted by weights W₁, …, W_k > 0, and μ̃ = μ̃₁, …, μ̃_k ∈ ℝ²ᵈ is a set of k means.

The main idea behind Lemma 3.2 is that level sets of distances between points and subspaces are quadratic forms, and can thus represent level sets of the Gaussian probability density function (see Figure 2(a) for an illustration).

[Figure 2: (a) Level sets of the distances between points on a plane (green) and (disjoint) k-dimensional subspaces are ellipses, and thus can represent contour lines of the multivariate Gaussian. (b) Tree construction for generating coresets in parallel or from data streams. Black arrows indicate "merge-and-compress" operations. The (intermediate) coresets C₁, …, C₇ are enumerated in the order in which they would be generated in the streaming case. In the parallel case, C₁, C₂, C₄ and C₅ would be constructed in parallel, followed by parallel construction of C₃ and C₆, finally resulting in C₇.]

We recognize the "soft-min" function softmin_{w̃}(λ) ≡ −ln Σᵢ w̃ᵢ exp(−λᵢ) as an approximation upper-bounding the minimum min(λ) = minᵢ λᵢ, for λᵢ = Wᵢ dist(x̃ − μ̃ᵢ, sᵢ)² and λ = [λ₁, …, λ_k]. The motivation behind this transformation is that it allows expressing the likelihood P(x | θ) of a data point x given a model θ in a purely geometric manner, as a soft-min over distances between points and subspaces in a transformed space. Notice that if we use the minimum min(·) instead of the soft-min softmin_{w̃}(·), we recover the problem of approximating the data set D (transformed via π) by k subspaces. For semi-spherical Gaussians, it can be shown that the subspaces can be chosen as points while incurring a multiplicative error of at most 1/λ, and thus we recover the well-known k-means problem in the transformed space. This insight suggests using a known coreset construction for k-means, adapted to the transformation employed. The remaining challenge in the proof is to bound the additional error incurred by using the soft-min function softmin_{w̃}(·) instead of the minimum min(·).
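A quick numerical check of the two bounds that make the soft-min a controlled surrogate for the minimum (our own illustration, not from the paper):

```python
import numpy as np

# softmin_w(lam) = -ln sum_i w_i exp(-lam_i) upper-bounds min_i lam_i
# (since sum_i w_i exp(-lam_i) <= exp(-min_i lam_i) when the w_i sum
# to 1), and it exceeds the minimum by at most -ln w_{i*}, where i*
# attains the minimum:
rng = np.random.default_rng(0)
for _ in range(1000):
    k = rng.integers(2, 8)
    lam = rng.uniform(0.0, 20.0, size=k)
    w = rng.dirichlet(np.ones(k))
    softmin = -np.log(np.dot(w, np.exp(-lam)))
    assert lam.min() - 1e-9 <= softmin
    assert softmin <= lam.min() - np.log(w[lam.argmin()]) + 1e-9
```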
We tackle this challenge by proving a generalized triangle inequality adapted to the exponential transformation, and employing the framework described in [4], which provides a general method for constructing coresets for clustering problems of the form min_s Σᵢ f_{x̃ᵢ}(s). As proved in [4], the key quantity that controls the size of a coreset is the pseudo-dimension of the functions F_d = {f_x̃ for x̃ ∈ ℝ²ᵈ}. This notion of dimension is closely related to the VC dimension of the (sub-level sets of the) functions F_d and therefore represents the complexity of this set of functions. The final ingredient in the proof of Theorem 3.1 is a new bound on the complexity of mixtures of k Gaussians in d dimensions, proved in the supplemental material.

3.2 Streaming and Parallel Computation

One major advantage of coresets is that they can be constructed in parallel, as well as in a streaming setting where data points arrive one by one, and it is impossible to remember the entire data set due to memory constraints. The key insight is that coresets satisfy certain composition properties, which have previously been used by [6] for streaming and parallel construction of coresets for geometric clustering problems such as k-median and k-means:

1. Suppose C₁ is a (k, ε)-coreset for D₁, and C₂ is a (k, ε)-coreset for D₂. Then C₁ ∪ C₂ is a (k, ε)-coreset for D₁ ∪ D₂.
2. Suppose C is a (k, ε)-coreset for D, and C′ is a (k, δ)-coreset for C. Then C′ is a (k, (1 + ε)(1 + δ) − 1)-coreset for D.

In the following, we review how to exploit these properties for parallel and streaming computation.

Streaming. In the streaming setting, we assume that points arrive one-by-one, but we do not have enough memory to remember the entire data set. Thus, we wish to maintain a coreset over time, while keeping only a small set of O(log n) coresets in memory. There is a general reduction that shows that a small coreset scheme for a given problem suffices to solve the corresponding problem on a streaming input [7, 6]. The idea is to construct and save in memory a coreset for every block of poly(dk/ε) consecutive points arriving in the stream. When we have two coresets in memory, we can merge them (resulting in a (k, ε)-coreset via property (1)), and compress by computing a single coreset from the merged coresets (via property (2)) to avoid an increase in the coreset size. An important subtlety arises: while merging two coresets (via property (1)) does not increase the approximation error, compressing a coreset (via property (2)) does increase the error. A naive approach that merges and compresses immediately, as soon as two coresets have been constructed, can incur an exponential increase in approximation error. Fortunately, it is possible to organize the merge-and-compress operations in a binary tree of height O(log n), where we need to store in memory a single coreset for each level of the tree (thus requiring only poly(dk ε⁻¹ log n) memory). Figure 2(b) illustrates this tree computation. In order to construct a coreset for the union of two (weighted) coresets, we use a weighted version of Algorithm 1, where we consider a weighted point as duplicate copies of a non-weighted point (possibly with fractional weight). A more formal description can be found in [8]. We summarize our streaming result in the following theorem.

Theorem 3.3. A (k, ε)-coreset for a stream of n points in ℝᵈ can be computed for the λ-semi-spherical GMM problem with probability at least 1 − δ using space and update time poly(dk ε⁻¹ log n log(1/δ)).
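One standard way to realize this binary tree online is the merge-reduce scheme of [7]; the following sketch (ours) keeps one pending coreset per level, with `merge` and `compress` standing in for property (1) and for any coreset construction such as the weighted variant of Algorithm 1:

```python
def stream_coresets(blocks, merge, compress):
    """Maintain a coreset over a stream of data blocks.

    `merge(a, b)` unions two weighted sets (property (1), no added
    error); `compress(s)` builds a coreset of a weighted set
    (property (2), multiplicative error).  Carrying coresets up the
    levels like binary addition means each point is compressed only
    O(log n) times, which keeps the accumulated error small."""
    levels = {}                      # level -> pending coreset
    for block in blocks:
        c, lvl = compress(block), 0
        while lvl in levels:
            c = compress(merge(levels.pop(lvl), c))
            lvl += 1
        levels[lvl] = c
    result = None
    for lvl in sorted(levels):       # final answer: merge all levels
        result = levels[lvl] if result is None else merge(result, levels[lvl])
    return result
```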
Parallel/Distributed computations. Using the same ideas from the streaming model, a (non-parallel) coreset construction can be transformed into a parallel one. We partition the data into sets, and compute coresets for each set, independently, on different computers in a cluster. We then (in parallel) merge (via property (1)) two coresets, and compute a single coreset for every pair of such coresets (via property (2)). Continuing in this manner yields a process that takes O(log n) iterations of parallel computation. This computation is also naturally suited for map-reduce [9] style computations, where the map tasks compute coresets for disjoint parts of D, and the reduce tasks perform the merge-and-compress operations. Figure 2(b) illustrates this parallel construction.

Theorem 3.4. A (k, ε)-coreset for a set of n points in ℝᵈ can be computed for the λ-semi-spherical GMM problem with probability at least 1 − δ using m machines in time (n/m) · poly(dk ε⁻¹ log(1/δ) log n).

3.3 Fitting a GMM on the Coreset using Weighted EM

One approach, which we employ in our experiments, is to use a natural generalization of the EM algorithm, which takes the coreset weights into account. We here describe the algorithm for the case of GMMs. For other mixture distributions, the E and M steps are modified appropriately.

Algorithm 2: Weighted EM for Gaussian mixtures
Input: Coreset C, k, TOL
Output: Mixture model θ_C
  L_old = ∞;
  Initialize means μ₁, …, μ_k by sampling k points from C with probability proportional to their weight. Initialize Σᵢ = I and wᵢ = 1/k for all i;
  repeat
    L_old = L(C | θ);
    for j = 1 to n do for i = 1 to k do compute η_{i,j} = γⱼ wᵢ N(x′ⱼ; μᵢ, Σᵢ) / Σ_ℓ w_ℓ N(x′ⱼ; μ_ℓ, Σ_ℓ);
    for i = 1 to k do
      wᵢ ← Σⱼ η_{i,j}, then wᵢ ← wᵢ / Σ_ℓ w_ℓ;
      μᵢ ← Σⱼ η_{i,j} x′ⱼ / Σⱼ η_{i,j};
      Σᵢ ← Σⱼ η_{i,j} (x′ⱼ − μᵢ)(x′ⱼ − μᵢ)ᵀ / Σⱼ η_{i,j};
  until L(C | θ) ≥ L_old − TOL;

Using a similar analysis as for the standard EM algorithm, Algorithm 2 is guaranteed to converge, but only to a local optimum. However, since it is applied on a much smaller set, it can be initialized using multiple random restarts.
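A compact NumPy/SciPy reading of Algorithm 2 follows (again a sketch of ours, not the authors' code; the small ridge added to each covariance is our own substitute for the eigenvalue constraint spec(Σᵢ) ⊆ [λ, 1/λ]):

```python
import numpy as np
from scipy.stats import multivariate_normal

def weighted_em(C, gamma, k, tol=1e-4, seed=0):
    """Weighted EM for GMMs on a coreset (C, gamma), cf. Algorithm 2."""
    rng = np.random.default_rng(seed)
    n, d = C.shape
    mu = C[rng.choice(n, size=k, p=gamma / gamma.sum())]
    Sigma = np.stack([np.eye(d) for _ in range(k)])
    w = np.full(k, 1.0 / k)
    L_old = np.inf
    while True:
        dens = np.stack([w[i] * multivariate_normal.pdf(C, mu[i], Sigma[i])
                         for i in range(k)], axis=1)        # n x k
        L = -np.dot(gamma, np.log(dens.sum(axis=1)))        # L(C | theta)
        eta = dens / dens.sum(axis=1, keepdims=True)        # E-step
        eta = eta * gamma[:, None]                          # fold in weights
        Nk = eta.sum(axis=0)
        w = Nk / Nk.sum()                                   # M-step
        mu = (eta.T @ C) / Nk[:, None]
        for i in range(k):
            diff = C - mu[i]
            Sigma[i] = (eta[:, i, None] * diff).T @ diff / Nk[i] \
                       + 1e-6 * np.eye(d)                   # keep well-conditioned
        if L_old - L < tol:
            return w, mu, Sigma
        L_old = L
```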
The main property 6 52 0 0 Coreset 48 Uniform Sample 47 46 45 44 1 10 ?200 Full Set ?400 Coreset ?600 Uniform Sample ?800 ?1000 ?1200 ?1400 ?50 0.75 Area Under ROC Curve 49 Full Set Log Likelihood on Test Data Set 50 Log Likelihood on Test Data Set Log Likelihood on Test Data Set 51 Full Set Coreset ?100 Uniform Sample ?150 ?200 2 3 10 Training Set Size 4 10 (a) MNIST 5 10 ?1800 2 10 3 4 10 10 Training Set Size ?250 1 10 5 10 (b) Tetrode recordings Full Set Uniform Sample 0.6 0.55 ?1600 10 0.7 Coreset 0.65 2 10 3 10 Training Set Size 4 10 (c) CSN data 5 10 1 10 2 10 3 10 Training Set Size 4 10 5 10 (d) CSN detection Figure 3: Experimental results for three real data sets. We compare likelihood of the best model obtained on subsets C constructed by uniform sampling, and by the adaptive coreset sampling procedure. that we need is a generalization of the triangle inequality, as proved in the supplemental material. For example, replacing the squared distances by non-squared distances yields a coreset for mixture 2 2 of Laplace distributions. The double triangle inequality ka ? ck ? 2(ka ? bk + kb ? ck ) that we 2 2 O(q) used in this paper is replaced by H?older?s inequality, ka ? ck ? 2 ka ? bk + 2 kb ? ck . Such a result is straight-forward from our analysis, and we summarize it in the following theorem. Theorem 4.1. Let q ? 1 be an integer. Consider Algorithm 1, where dist(?, ?)2 is replaced by dist(?, ?)q and ?2 is replaced by ?O(q) . Suppose C is sampled from D using this updated version of Algorithm 1 for parameters ?, ? and k. Then, with prob. at least 1 ? ? it holds that for all ? ? C? , ?(D | ?)(1 ? ?) ? ?(C | ?) ? ?(D | ?)(1 + ?), q   P wi P Pk wi 1 ?1/2 where Z(?) = i g(? (x ? ? ) and ?(D | ?) = ? exp ? ln ? i i x?D i=1 Z(?)g(?i ) 2 i)   q R ?1/2 using the normalizer g(?i ) = exp ? 21 ?i (x ? ?i ) dx. 5 Experiments We experimentally evaluate the effectiveness of using coresets of different sizes for training mixture models. We compare against running EM on the full set, as well as on an unweighted, uniform sample from D. Results are presented for three real datasets. MNIST handwritten digits. The MNIST dataset contains 60,000 training and 10,000 testing grayscale images of handwritten digits. As in [11], we normalize each component of the data to have zero mean and unit variance, and then reduce each 784-pixel (28x28) image using PCA, retaining only the top d = 100 principal components as a feature vector. From the training set, we produce coresets and uniformly sampled subsets of sizes between 30 and 5000, using the parameters k = 10 (a cluster for each digit), ? = 20 and ? = 0.1 (see Algorithm 1), and fit GMMs using EM with 3 random restarts. The log likelihood (LLH) of each model on the testing data is shown in Figure 3(a). Notice that coresets significantly outperform uniform samples of the same size, and even a coreset of 30 points performs very well. Further note how the test-log likelihood begins to flatten out for |C| = 1000. Constructing the coreset and running EM on this size takes 7.9 seconds (Intel Xeon 2.6 GHz), over 100 times faster than running EM on the full set (15 minutes). Neural tetrode recordings. We also compare coresets and uniform sampling on a large dataset containing 319,209 records of rat hippocampal action potentials, measured by four co-located electrodes. As done by [11], we concatenate the 38-sample waveforms produced by each electrode to obtain a 152-dimensional vector. The vectors are normalized so each component has zero mean and unit variance. 
The 319,209 records are divided in half to obtain training and testing sets. From the training set, we produce coresets and uniformly sampled subsets of sizes between 70 and 1000, using the parameters k = 33 (as in [11]), β = 66, and δ = 0.1, and fit GMMs. The log-likelihood of each model on the held-out testing data is shown in Figure 3(b). Coreset GMMs obtain consistently higher LLH than uniform-sample GMMs for sets of the same size, and even a coreset of 100 points performs very well. Overall, training on coresets achieves approximately the same likelihood as training on the full set, about 95 times faster (1.2 minutes vs. 1.9 hours).

CSN cell phone accelerometer data. Smart phones with accelerometers are being used by the Community Seismic Network (CSN) as inexpensive seismometers for earthquake detection. In [12], 7 GB of acceleration data were recorded from volunteers while carrying and operating their phone in normal conditions (walking, talking, on desk, etc.). From this data, 17-dimensional feature vectors were computed (containing frequency information, moments, etc.). The goal is to train, in an online fashion, GMMs based on normal data, which then can be used to perform anomaly detection to detect possible seismic activity. Motivated by the limited storage on smart phones, we evaluate coresets on a data set of 40,000 accelerometer feature vectors, using the parameters k = 6, β = 12, and δ = 0.1. Figure 3(c) presents the results of this experiment. Notice that on this data set, coresets show an even larger improvement over uniform sampling. We hypothesize that this is due to the fact that the recorded accelerometer data is imbalanced, and contains clusters of vastly varying size, so uniform sampling does not represent smaller clusters well. Overall, the coresets obtain a speedup of approximately 35 compared to training on the full set. We also evaluate how GMMs trained on the coreset compare with the baseline GMMs in terms of anomaly detection performance. For each GMM, we compute ROC curves measuring the performance of detecting earthquake recordings from the Southern California Seismic Network (cf. [12]). Note that even very small coresets lead to performance comparable to training on the full set, drastically outperforming uniform sampling (Fig. 3(d)).

6 Related Work

Theoretical results on mixtures of Gaussians. There has been a significant amount of work on learning and applying GMMs (and more general distributions). Perhaps the most commonly used technique in practice is the EM algorithm [13], which is however only guaranteed to converge to a local optimum of the likelihood. Dasgupta [14] is the first to show that parameters of an unknown GMM P can be estimated in polynomial time, with arbitrary accuracy ε, given i.i.d. samples from P. However, his algorithm assumes a common covariance, bounded eccentricity, a (known) bound on the smallest component weight, as well as a separation (distance of the means) that scales as Ω(√d). Subsequent works relax the assumption on separation to d^{1/4} [15] and k^{1/4} [16]. [3] is the first to learn general GMMs, with separation d^{1/4}. [17] provides the first result that does not require any separation, but assumes that the Gaussians are axis-aligned. Recently, [18] and [19] provide algorithms with polynomial running time (except for an exponential dependence on k) and sample complexity for arbitrary GMMs.
However, in contrast to our results, all the results described above crucially rely on the fact that the data set D is actually generated by a mixture of Gaussians. The problem of fitting a mixture model with near-optimal log-likelihood for arbitrary data is studied by [3], who provide a PTAS for this problem. However, their result requires that the Gaussians are identical spheres, in which case the maximum likelihood problem is identical to the k-means problem. In contrast, our results make only mild assumptions about the Gaussian components. Furthermore, none of the algorithms described above applies to the streaming or parallel setting.

Coresets. Approximation algorithms in computational geometry often make use of random sampling, feature extraction, and ε-samples [20]. Coresets can be viewed as a general concept that includes all of the above, and more. See a comprehensive survey on this topic in [4]. It is not clear that there is any commonly agreed-upon definition of a coreset, despite several inconsistent attempts to do so [6, 8]. Coresets have been the subject of many recent papers and several surveys [1, 2]. They have been used to great effect for a host of geometric and graph problems, including k-median [6], k-means [8], k-center [21], k-line median [10], subspace approximation [10, 22], etc. Coresets also imply streaming algorithms for many of these problems [6, 1, 23, 8]. A framework that generalizes and improves several of these results has recently appeared in [4].

7 Conclusion

We have shown how to construct coresets for estimating parameters of GMMs and natural generalizations. Our construction hinges on a natural connection between statistical estimation and clustering problems in computational geometry. To our knowledge, our results provide the first rigorous guarantees for obtaining compressed ε-approximations of the log-likelihood of mixture models for large data sets. The coreset construction relies on an intuitive adaptive sampling scheme, and can be easily implemented. By exploiting certain closure properties of coresets, it is possible to construct them in parallel, or in a single pass through a stream of data, using only poly(dk ε⁻¹ log n log(1/δ)) space and update time. Unlike most of the related work, our coresets provide guarantees for any given (possibly unstructured) data, without assumptions on the distribution or model that generated it. Lastly, we apply our construction on three real data sets, demonstrating significant gains over no or naive subsampling.

Acknowledgments

This research was partially supported by ONR grant N00014-09-1-1044, NSF grants CNS-0932392, IIS-0953413 and DARPA MSEE grant FA8650-11-1-7156.

References

[1] P. K. Agarwal, S. Har-Peled, and K. R. Varadarajan. Geometric approximations via coresets. Combinatorial and Computational Geometry, MSRI Publications, 52:1-30, 2005.
[2] A. Czumaj and C. Sohler. Sublinear-time approximation algorithms for clustering via random sampling. Random Struct. Algorithms (RSA), 30(1-2):226-256, 2007.
[3] Sanjeev Arora and Ravi Kannan. Learning mixtures of separated nonspherical Gaussians. Annals of Applied Probability, 15(1A):69-92, 2005.
[4] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In Proc. 43rd Annu. ACM Symp. on Theory of Computing (STOC), 2011.
[5] S. Har-Peled and A. Kushal. Smaller coresets for k-median and k-means clustering. Discrete & Computational Geometry, 37(1):3-19, 2007.
[6] S. Har-Peled and S. Mazumdar. On coresets for k-means and k-median clustering. In Proc. 36th Annu. ACM Symp. on Theory of Computing (STOC), pages 291-300, 2004.
[7] Jon Louis Bentley and James B. Saxe. Decomposable searching problems I: Static-to-dynamic transformation. J. Algorithms, 1(4):301-358, 1980.
[8] D. Feldman, M. Monemizadeh, and C. Sohler. A PTAS for k-means clustering based on weak coresets. In Proc. 23rd ACM Symp. on Computational Geometry (SoCG), pages 11-18, 2007.
[9] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified data processing on large clusters. In OSDI'04: Sixth Symposium on Operating System Design and Implementation, 2004.
[10] D. Feldman, A. Fiat, and M. Sharir. Coresets for weighted facilities and their applications. In Proc. 47th IEEE Annu. Symp. on Foundations of Computer Science (FOCS), pages 315-324, 2006.
[11] Ryan Gomes, Andreas Krause, and Pietro Perona. Discriminative clustering by regularized information maximization. In Proc. Neural Information Processing Systems (NIPS), 2010.
[12] Matthew Faulkner, Michael Olson, Rishi Chandy, Jonathan Krause, K. Mani Chandy, and Andreas Krause. The next big one: Detecting earthquakes and other rare events from community-based sensors. In Proc. ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), 2011.
[13] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Statist. Soc. Ser. B, 39:1-38, 1977.
[14] S. Dasgupta. Learning mixtures of Gaussians. In Fortieth Annual IEEE Symposium on Foundations of Computer Science (FOCS), 1999.
[15] S. Dasgupta and L. J. Schulman. A two-round variant of EM for Gaussian mixtures. In Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI), 2000.
[16] S. Vempala and G. Wang. A spectral algorithm for learning mixture models. In Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002.
[17] J. Feldman, R. A. Servedio, and R. O'Donnell. PAC learning axis-aligned mixtures of Gaussians with no separation assumption. In COLT, 2006.
[18] A. Moitra and G. Valiant. Settling the polynomial learnability of mixtures of Gaussians. In Proc. Foundations of Computer Science (FOCS), 2010.
[19] M. Belkin and K. Sinha. Polynomial learning of distribution families. In Proc. Foundations of Computer Science (FOCS), 2010.
[20] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Inf. Comput., 100(1):78-150, 1992.
[21] S. Har-Peled and K. R. Varadarajan. High-dimensional shape fitting in linear time. Discrete & Computational Geometry, 32(2):269-288, 2004.
[22] M. W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697, 2009.
[23] G. Frahling and C. Sohler. Coresets in dynamic geometric data streams. In Proc. 37th Annu. ACM Symp. on Theory of Computing (STOC), pages 209-217, 2005.
3,715
4,364
Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories

Cristina Savin1 [email protected]
Peter Dayan2 [email protected]
Máté Lengyel1 [email protected]
1 Computational & Biological Learning Lab, Dept. of Engineering, University of Cambridge, UK
2 Gatsby Computational Neuroscience Unit, University College London, UK

Abstract

Storing a new pattern in a palimpsest memory system comes at the cost of interfering with the memory traces of previously stored items. Knowing the age of a pattern thus becomes critical for recalling it faithfully. This implies that there should be a tight coupling between estimates of age, as a form of familiarity, and the neural dynamics of recollection, something which current theories omit. Using a normative model of autoassociative memory, we show that a dual memory system, consisting of two interacting modules for familiarity and recollection, has best performance for both recollection and recognition. This finding provides a new window onto actively contentious psychological and neural aspects of recognition memory.

1 Introduction

Episodic memory such as that in the hippocampus acts like a palimpsest: each new entity to be stored is overlaid on top of its predecessors, and, in turn, is submerged by its successors. This implies both anterograde interference (existing memories hinder the processing of new ones) and retrograde interference (new memories overwrite information about old ones). Both pose important challenges for the storage and retrieval of information in neural circuits. Some aspects of these challenges have been addressed in two theoretical frameworks: one focusing on anterograde interference through the interaction of novelty and storage [1]; the other on retrograde interference in individual synapses [2]. However, neither fully considered the critical issue of retrieval from palimpsests; this is our focus.

First, [1] made the critical observation that autoassociative memories only work if normal recall dynamics are suppressed on presentation of new patterns that need to be stored. Otherwise, rather than memorizing the new pattern, the memory associated with the existing pattern that most closely matches the new input will be strengthened. This suggests that it is critical to have a mechanism for assessing pattern novelty or, conversely, familiarity, a function that is often ascribed to neocortical areas surrounding the hippocampus. Second, [2] considered the palimpsest problem of overwriting information in synapses whose efficacies have limited dynamic ranges. They pointed out that this can be at least partially addressed through allowing multiple internal states (for instance forming a cascade) for each observable synaptic efficacy level. However, although [2] provide an attractive formalism for analyzing and optimizing synaptic storage, a retrieval mechanism associated with this storage is missing.

Figure 1: a. The cascade model. Internal states of a synapse (circles) can express one of two different efficacies ($W$, columns). Transitions between states are stochastic and can either be potentiating or depressing, depending on pre- and postsynaptic activities. Probabilities of transitions between states expressing the same efficacy, $p$, and between states expressing different efficacies, $q$, decrease geometrically with cascade depth. b. Generative model for the autoassociative memory task. The recall cue $\tilde{x}$ is a noisy version of one of the stored patterns $x$.
Upon storing pattern $x$, synaptic states changed from $V_0$ (sampled from the stationary distribution of synaptic dynamics) to $V_1$. Recall occurs after the presentation of $t-1$ intervening patterns, when synapses are in states $V_t$, with corresponding synaptic efficacies $W_t$. Only $W_t$ and the recall cue $\tilde{x}$ are observed at recall.

Although these pieces of work might seem completely unrelated, we show here that they are closely linked via retrieval. The critical fact about recall from memory, in general, is to know how the information should appear at the time of retrieval. In the case of a palimpsest, the trace of a memory in the synaptic efficacies depends critically on the age of the memory, i.e., its relative familiarity. This suggests a central role for novelty (or familiarity) signals during recollection. Indeed, we show retrieval is substantially worse when familiarity is not explicitly represented than when it is.

Dual system models for recognition memory are the topic of a heated debate [3, 4]. Our results could provide a computational rationale for them, showing that separating a perirhinal-like network (involved in familiarity) from a hippocampal-like network can be beneficial even when the only task is recollection. We also show that the task of recognition can also be best accomplished by combining the outputs of both networks, as suggested experimentally [4].

2 Storage in a palimpsest memory

We consider the task of autoassociative recall of binary patterns from a palimpsest memory. Specifically, the neural circuit consists of $N$ binary neurons that enjoy all-to-all connectivity. During storage, network activity is clamped to the presented pattern $x$, inducing changes in the synapses' 'internal' states $V$ and corresponding observed binary efficacies $W$ (Fig. 1a). At recall, we seek to retrieve a pattern $x$ that was originally stored, given a noisy cue $\tilde{x}$ and the current weight matrix $W$. This weight matrix is assumed to result from storing $x$ on top of the stationary distribution of the synaptic efficacies coming from the large number of patterns that had been previously stored, and then subsequently storing a sequence of $t-1$ other intervening patterns with the same statistics on top of $x$ (Fig. 1b).

In more detail, a pattern to be stored has density $f$, and is drawn from the distribution:
$$P_\text{store}(x) = \prod_i P_\text{store}(x_i) = \prod_i f^{x_i} \cdot (1-f)^{1-x_i} \quad (1)$$
The recall cue is a noisy version of the original pattern, modeled using a binary symmetric channel:
$$P_\text{noise}(\tilde{x}|x) = \prod_i P_\text{noise}(\tilde{x}_i|x_i) \quad (2)$$
$$P_\text{noise}(\tilde{x}_i|x_i) = \left[ (1-r)^{\tilde{x}_i} \cdot r^{1-\tilde{x}_i} \right]^{x_i} \cdot \left[ r^{\tilde{x}_i} \cdot (1-r)^{1-\tilde{x}_i} \right]^{1-x_i} \quad (3)$$
where $r$ defines the level of input noise.

The recall time $t$ is assumed to come from a geometric distribution with mean $\bar{t}$:
$$P_\text{recall}(t) = \frac{1}{\bar{t}} \cdot \left(1 - \frac{1}{\bar{t}}\right)^{t-1} \quad (4)$$

The synaptic learning rule is local and stochastic, with the probability of an event actually leading to state changes determined by the current state of the synapse $V_{ij}$ and the activity at the pre- and post-synaptic neurons, $x_i$ and $x_j$. Hence, learning is specified through a set of transition matrices $M(x_i, x_j)$, with $M(x_i, x_j)_{l'l} = P(V'_{ij} = l' \mid V_{ij} = l, x_i, x_j)$. For convenience, we adopted the cascade model [2] (Fig. 1a), which assumes that the probability of potentiation and depression decays with cascade depth $i$ as a geometric progression, $q_i^\pm = \chi^{i-1}$, with $q_n^\pm = \frac{\chi^{n-1}}{1-\chi}$ to compensate for boundary effects. The transitions between metastates are given by $p_i^\pm = \alpha_\pm \frac{\chi^i}{1-\chi}$, with the correction factors $\alpha_+ = \frac{1-f}{f}$ and $\alpha_- = \frac{f}{1-f}$ ensuring that different metastates are equally occupied for different pattern sparseness values $f$ [2]. Furthermore, we assume synaptic changes occur only when the postsynaptic neuron is active, leading to potentiation if the presynaptic neuron is also active and to depression otherwise. The specific form of the learning rule could influence the memory span of the network, but we expect it not to change the results below qualitatively.

The evolution of the distribution over synaptic states after encoding can be described by a Markov process, with a transition matrix $\bar{M}$ given as the average change in synaptic states expected after storing an arbitrary pattern from the prior $P_\text{store}(x)$: $\bar{M} = \sum_{x_i, x_j} P_\text{store}(x_i) \cdot P_\text{store}(x_j) \cdot M(x_i, x_j)$. Additionally, we define the column vectors $\pi^V(x_i, x_j)$ and $\pi^W(x_i, x_j)$ for the distribution of the synaptic states and observable efficacies, respectively, when one of the patterns stored was $(x_i, x_j)$, such that $\pi^W_l(x_i, x_j) = P(W_{ij} = l \mid x_i, x_j)$ and $\pi^V_l(x_i, x_j) = P(V_{ij} = l \mid x_i, x_j)$. Given these definitions, we can express the final distribution over synaptic states as:
$$\pi^V(x_i, x_j) = \sum_t P_\text{recall}(t) \cdot \bar{M}^{t-1} \cdot M(x_i, x_j) \cdot \pi^\infty \quad (5)$$
where we start from the stationary distribution $\pi^\infty$ (the eigenvector of $\bar{M}$ for eigenvalue 1), encode pattern $(x_i, x_j)$ and then $t-1$ additional patterns from the same distribution. The corresponding weight distribution is $\pi^W(x_i, x_j) = T \cdot \pi^V(x_i, x_j)$, where $T$ is a $2 \times 2n$ matrix defining the deterministic mapping from synaptic states to observable efficacies.

The fact that the recency of the pattern to be recalled, $t$, appears in equation 5 implies that pattern age will strongly influence information retrieval. In the following, we consider two possible solutions to this problem. We first show the limitations of recall dynamics that involve a single, monolithic module which averages over $t$. We then prove the benefits of a dual system with two qualitatively different modules, one of which explicitly represents an estimate of pattern age.

3 A single module recollection system

3.1 Optimal retrieval dynamics

Since information storage by synaptic plasticity is lossy, the recollection task described above is a probabilistic inference problem [5, 6]. Essentially, neural dynamics should represent (aspects of) the posterior over stored patterns, $P(x|\tilde{x}, W)$, that expresses the probability of any pattern $x$ being the correct response for the recall query given a noisy recall cue, $\tilde{x}$, and the synaptic efficacies $W$. In more detail, the posterior over possible stored patterns can be computed as:
$$P(x|W, \tilde{x}) \propto P_\text{store}(x) \cdot P_\text{noise}(\tilde{x}|x) \cdot P(W|x) \quad (6)$$
where we assume that evidence from the weights factorizes over synapses,1 $P(W|x) = \prod_{ij} P(W_{ij}|x_i, x_j)$.

1 This assumption is never exactly true in practice, as synapses that share a pre- or post-synaptic partner are bound to be correlated. Here, we assume the intervening patterns cause independent weight changes and ignore the effects of such correlations.

Previous Bayesian recall dynamics derivations assumed learning rules for which the contribution of each pattern to the final weight were the same, irrespective of the order of pattern presentation [5, 6]. By contrast, the Markov chain behaviour of our synaptic learning rule forces us to explicitly consider pattern age. Furthermore, as pattern age is unknown at recall, we need to integrate over all possible $t$ values (Eq. 5).
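As a concrete illustration of Eqs. (1)-(5), the sketch below propagates a small cascade's state distribution through intervening storage events and averages over the geometric prior on $t$. This is a minimal, assumption-laden stand-in: the transition matrices only approximate the cascade rule of [2], and all parameter values and names are our own.

```python
import numpy as np

n, f, t_bar, chi = 3, 0.5, 20, 0.5   # cascade depth, density, mean age, decay

def M_trans(xi, xj):
    """Column-stochastic transition over 2n internal states for one storage
    event. States 0..n-1 express efficacy 0, states n..2n-1 efficacy 1."""
    M = np.eye(2 * n)
    if xj == 0:
        return M                      # postsynaptic neuron silent: no plasticity
    q = chi ** np.arange(n)           # efficacy-switching probabilities
    p = chi ** (1 + np.arange(n))     # within-efficacy deepening probabilities
    if xi == 1:                       # presynaptic active -> potentiation
        for i in range(n):            # weak state i -> shallowest strong state
            M[:, i] = 0.0
            M[n, i] = q[i]; M[i, i] = 1 - q[i]
        for i in range(n - 1):        # strong states deepen in the cascade
            M[n + i, n + i] = 1 - p[i]; M[n + i + 1, n + i] = p[i]
    else:                             # presynaptic silent -> depression (mirror)
        for i in range(n):
            M[:, n + i] = 0.0
            M[0, n + i] = q[i]; M[n + i, n + i] = 1 - q[i]
        for i in range(n - 1):
            M[i, i] = 1 - p[i]; M[i + 1, i] = p[i]
    return M

# average transition for a random stored pattern (matrix M-bar)
px = np.array([1 - f, f])
M_bar = sum(px[a] * px[b] * M_trans(a, b) for a in (0, 1) for b in (0, 1))

# stationary distribution: eigenvector of M-bar for eigenvalue 1
w, V = np.linalg.eig(M_bar)
pi_inf = np.real(V[:, np.argmax(np.real(w))])
pi_inf /= pi_inf.sum()

def pi_V(xi, xj, t_max=500):
    """Eq. (5): state distribution after storing (xi, xj) plus t-1 later
    patterns, averaged over the geometric prior on t (truncated at t_max)."""
    p = M_trans(xi, xj) @ pi_inf
    out = np.zeros_like(p)
    for t in range(1, t_max + 1):
        out += (1 / t_bar) * (1 - 1 / t_bar) ** (t - 1) * p
        p = M_bar @ p                 # one more intervening pattern
    return out

T = np.hstack([np.zeros((1, n)), np.ones((1, n))])   # state -> P(W = 1)
print("P(W_ij = 1 | x_i = 1, x_j = 1) =", (T @ pi_V(1, 1)).item())
```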
This integral (which is technically a sum, for discrete $t$) can be computed analytically using the eigenvalue decomposition of the transition matrix $\bar{M}$. Alternatively, if the value of $t$ is known during recall, the prior is replaced by a delta function, $P_\text{recall}(t) = \delta(t - t^*)$.

There are several possible ways of representing the posterior in Eq. 6 through neural dynamics without reifying $t$. For consistency, we assume neural states to be binary, with network activity at each step representing a sample from the posterior [7, 8]. An advantage of this approach is that the full posterior is represented in the network dynamics, such that higher decision modules can not only extract the 'best' pattern (for the mean squared error cost function considered here, this would be the mean of the posterior) but also estimate the uncertainty of this solution. Nevertheless, other representations, for example representing the parameters of a mean-field approximation to the true posterior [5, 9, 10], would also be possible and similarly informative about uncertainty.

In particular, we use Gibbs sampling, as it allows for neurally plausible recall dynamics [7]. This results in asynchronous updates, in which the activity of a neuron $x_i$ changes stochastically as a function of its input cue $\tilde{x}_i$, the activity of all other neurons, $x_{\setminus i}$, and neighbouring synapses, $W_{i,\cdot}$ and $W_{\cdot,i}$. Specifically, the Gibbs sampler results in a sigmoid transfer function, with the total current to the neuron given by the log-odds ratio:
$$I_i^\text{rec} = \log \frac{P(x_i = 1 \mid x_{\setminus i}, W, \tilde{x}_i)}{P(x_i = 0 \mid x_{\setminus i}, W, \tilde{x}_i)} = I_i^\text{rec,in} + I_i^\text{rec,out} + a \tilde{x}_i + b \quad (7)$$
with the terms $I^\text{rec,in/out}$ defining the evidence from the incoming and outgoing synapses of neuron $i$, and the constants $a$ and $b$ determined by the prior over patterns and the noise model.2 The terms describing the contribution from recurrent interactions have a similar shape:
$$I_i^\text{rec,in} = \sum_j \left( c_1^\text{in} \cdot W_{ij} x_j + c_2^\text{in} \cdot W_{ij} + c_3^\text{in} \cdot x_j + c_4^\text{in} \right) \quad (8)$$
$$I_i^\text{rec,out} = \sum_j \left( c_1^\text{out} \cdot W_{ji} x_j + c_2^\text{out} \cdot W_{ji} + c_3^\text{out} \cdot x_j + c_4^\text{out} \right) \quad (9)$$
The parameters $c_k^\text{in/out}$, uniquely determined by the learning rule and the priors for $x$ and $t$, rescale the contribution of the evidence from the weights as a function of pattern age (see supplementary text). Furthermore, these constants translate into a unique signal, giving a sort of 'sufficient statistic' for the expected memory strength. Note that the optimal dynamics include two homeostatic processes, corresponding to global inhibition, $\sum_j x_j$, and neuronal excitability regulation, $\sum_j W_{ij}$, that stabilize network activity during recall.

2 Real neurons can only receive information from their presynaptic partners, so cannot estimate $I^\text{rec,out}$. We therefore ran simulations without this term in the dynamics and found that although it did decrease recall performance, this decrease was similar to that obtained by randomly pruning half of the connections in the network and keeping this term in the dynamics (not shown). This indicated that performance is mostly determined by the number of available synapses used for inference, and not so much by the direction of those synapses. Hence, in the following we use both terms and leave the systematic study of connectivity for future work.

3.2 Limitations

Beside the effects of assuming a factorized weight distribution, the neural dynamics derived above should be the best we can do given the available data (i.e. recall cue and synaptic weights). How well does the network fare in practice? Performance is as expected when pattern age is assumed known: as the available information from the weights decreases, so does performance, finally converging to control levels, defined by the retrieval performance of a network without plastic recurrent connections, i.e. when inference uses only the recall cue and the prior over stored patterns (Fig. 2a, green). When $t$ is unknown, performance also deteriorates with increasing pattern age, however this time beneath control levels (Fig. 2a, blue).

Figure 2: a. Recall performance for a single module memory system. b. Average recollection error comparison for the single and dual memory system. Black lines mark control performance, when ignoring the information from the synaptic weights.

Intuitively, one can see that relying on the prior over $t$ is similar to assuming $t$ fixed to a value close to the mean of this prior. When the pattern that was actually presented is older than this estimate, the resulting memory signal is weaker than expected, suggesting that the initial pattern was very sparse (since a pair of inactive elements does not induce any synaptic changes according to our learning rule). However, less reasonable is the fact that averaging over the prior distribution of recall times $t$ (Eq. 4), performance is worse than this control (Fig. 2b).

One possible reason for this failure is that the sampling procedure used for inference might not work in certain cases. Since Gibbs samplers are known to mix poorly when the shape of the posterior is complex (with strong correlations, as in frustrated Ising models), perhaps our neural dynamics are unable to sample the desired distribution effectively. We confirmed this hypothesis by implementing a more sophisticated sampling procedure using tempered transitions [11] (details in supplementary text). Indeed, with tempered transitions performance becomes significantly better than control, even for the cases where Gibbs sampling fails (Fig. 2b). Unfortunately, there has yet to be a convincing suggestion as to how tempering dynamics (or in fact any other sampling algorithm that works well with correlated posteriors) can be represented neurally since, for example, they require a global acceptance decision to be taken at the end of each temperature cycle. It is worth noting that with more complex synaptic dynamics (e.g. deeper cascades) simple Gibbs sampling works reasonably well (data not shown), probably because the posterior is smoother and hence easier to sample.

4 A dual memory system

An alternative to implicitly marginalizing over the age of the pattern throughout the inference process is to estimate it at the same time as performing recollection. This suggests the use of dual modules that together estimate the joint posterior $P(x, t \mid \tilde{x}, W)$, with sampling proceeding in a loop: the familiarity module generates a sample from the posterior over the age of the currently estimated pattern, $P(t \mid x, \tilde{x}, W)$; and the recollection module uses this estimated age to compute a new sample from the distribution over possible stored patterns given the age, $P(x \mid \tilde{x}, W, t)$ (Fig. 3a).

The module that computes familiarity can also be seen as a palimpsest, with each pattern overlaying, and being overlaid by, its predecessors and successors. Formally, it needs to compute the probability $P(t \mid x, \tilde{x}, W)$, as the system continues to implement a Gibbs sampler with $t$ as an additional dimension.
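The recollection half of this loop is just the Gibbs update of Eqs. (7)-(9) with age-dependent constants. Below is a minimal sketch: the constants c_in, c_out, a, b are left as free parameters (in the model they follow from the learning rule and priors, per the supplementary text), the weight matrix is assumed to have a zero diagonal, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_recall_sweep(x, x_cue, W, c_in, c_out, a, b):
    """One asynchronous sweep of the recall dynamics of Eqs. (7)-(9).
    c_in and c_out are 4-vectors (c1..c4); in the full model they are
    functions of the (estimated) pattern age t. Assumes diag(W) == 0."""
    N = len(x)
    for i in rng.permutation(N):
        xo = x.copy()
        xo[i] = 0                              # activities of the other neurons
        I_in = np.sum(c_in[0] * W[i, :] * xo + c_in[1] * W[i, :]
                      + c_in[2] * xo) + c_in[3] * (N - 1)
        I_out = np.sum(c_out[0] * W[:, i] * xo + c_out[1] * W[:, i]
                       + c_out[2] * xo) + c_out[3] * (N - 1)
        I = I_in + I_out + a * x_cue[i] + b    # total current, Eq. (7)
        x[i] = int(rng.random() < 1.0 / (1.0 + np.exp(-I)))  # sigmoid draw
    return x
```

Alternating such a sweep with a draw of $t$ (the familiarity step described next) gives the dual-system sampler over $(x, t)$.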
As a separate module, the neural network estimating familiarity cannot however access the weights $W$ of the recollection module. A biologically plausible approximation is to assume that the familiarity module uses a separate set of weights, which we call $W^\text{fam}$. Also, it is clear from Fig. 1b that $t$ is independent of $\tilde{x}$ conditioned on $x$, thus the conditioning on $\tilde{x}$ can be dropped when computing the posterior over $t$; that is, external input need only feed directly into the recollection but not the familiarity module (Fig. 3a).

In particular, we assume a feedforward network structure in the familiarity module, with each neuron receiving the output of the recollection module as inputs through synapses $W^\text{fam}$. These synaptic weights change according to the same cascade rule used for recollection.3 For simplicity, we assume that the familiarity neurons are always activated during encoding, so that synapses can change state (either by potentiation or depression) with every storage event.

Figure 3: a. An overview of the dual memory system. The familiarity network has a feedforward structure, with the activity of individual neurons estimating the probability of the true pattern age being a certain value $t$ (see example in inset). The estimated pattern age translates into a familiarity signal, which scales the contribution of the recurrent inputs in the network dynamics. b. Dependence of the familiarity signal on the estimated pattern age.

Concretely, the familiarity module consists of $N_\text{fam}$ neurons, each corresponding to a certain pattern age in the range $1, \ldots, N_\text{fam}$ (the last unit codes for $t \geq N_\text{fam}$). This forms a localist code for familiarity. The total input to a neuron is given by the log-posterior $I_i^\text{fam} = \log P(t = i \mid x, W^\text{fam})$, which translates into a simple linear activation function:
$$I_i^\text{fam} = \sum_j \left( c_{1,i}^\text{fam} \cdot W_{ij}^\text{fam} x_j + c_{2,i}^\text{fam} \cdot W_{ij}^\text{fam} + c_{3,i}^\text{fam} \cdot x_j + c_{4,i}^\text{fam} \right) + \log P(t) - \log(Z) \quad (10)$$
where the constants $c_{k,i}^\text{fam}$ are similar to the parameters $c^\text{in/out}$ before (albeit different for each neuron because of their tuning to different values of $t$), and $Z$ is the unknown partition function.

As mentioned above, we treat the activity of the familiarity module as a sample from the posterior over age $t$. This representation requires lateral competition between different units such that only one can become active at each step. Dynamics of this sort can be implemented using a softmax operator, $P(x_i^\text{fam} = 1) = \frac{e^{I_i}}{\sum_j e^{I_j}}$ (thus rendering the evaluation of the partition function $Z$ unnecessary), and are a common feature of a range of neural models [12, 13].

Critically, this familiarity module is not just a convenient theoretical construct associated with retrieval. First, as we mentioned before, the assessment of novelty actually plays a key part in memory storage: in making the decision as to whether a pattern that is presented is novel, and so should be stored, or familiar, and so should have its details be recalled. This venerable suggestion [1] has played a central part in the understanding of structure-function relationships in the hippocampus. The graded familiarity module that we have suggested is an obvious extension of this idea; the use for retrieval is new. Second, it is in general accord with substantial data on the role of perirhinal cortex and the activity of neurons in this structure [3]. Recency neurons would be associated with small values of $t$; novelty neurons with large or effectively infinite values of $t$ [14], although perirhinal cortex appears to adopt a population coding strategy for age, rather than just one-of-$n$.
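Given the localist code, the familiarity step reduces to a softmax over the currents of Eq. (10); here is a matching sketch, where the per-neuron constants and the age prior are again placeholders for the derived values:

```python
import numpy as np

def sample_age(x, W_fam, c_fam, log_prior_t, rng=None):
    """Sample t ~ P(t | x, W_fam) via a softmax over Eq. (10) currents.
    c_fam has shape (N_fam, 4): one (c1..c4) tuple per familiarity neuron,
    tuned to its preferred age; the last unit stands for t >= N_fam."""
    rng = rng or np.random.default_rng()
    I = np.array([np.sum(c[0] * w * x + c[1] * w + c[2] * x + c[3])
                  for c, w in zip(c_fam, W_fam)]) + log_prior_t
    p = np.exp(I - I.max())
    p /= p.sum()                        # softmax: the partition function Z cancels
    return rng.choice(len(p), p=p) + 1  # ages coded 1..N_fam
```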
The recollection module has the same dynamics as before, with constants $c_i$ computed assuming $t$ fixed to the output of the familiarity module. Thus we predict that familiarity multiplicatively modulates recurrent interactions in the recollection module during recall. Since there is a deterministic mapping between $t$ and this modulatory factor (Fig. 3b), it can be computed using a linear unit pooling the outputs of all the neurons in the familiarity module, with weights given by the corresponding values of $c_i^\text{fam}(t)$.

3 There is nothing to say that the learning rule that optimizes the recollection network's ability to recall patterns should be equally appropriate for assessing familiarity. Hence, the familiarity module could have its own learning rule, optimized for its specific task.

In order to compare single and dual module systems fairly, the computational resources employed by each should be the same. We therefore reduced the overall connectivity in the dual system such that the two have the same total number of synapses. Moreover, since elements of $W^\text{fam}$ are correlated, the effective number of connections is in fact somewhat lower in the dual system. Regardless, the dual memory system performs significantly better than the single module system (Fig. 2b).

5 Recognition memory

We have so far considered familiarity merely as an instrument for effective recollection. However, there are many practical and experimental tasks in which it is sufficient to make a binary decision about whether a pattern is novel or familiar rather than recalling it in all its gory detail. It is these tasks that have been used to elucidate the role of perirhinal cortex in recognition memory.

In the dual module system, information about recognition is available from both the familiarity module (patterns judged to have young ages are recognized) and the recollection module (patterns recalled with higher certainty are recognized). We therefore construct an additional decision module which takes the outputs of the familiarity and recollection modules and maps them into a binary behavioral response (familiar vs. novel). Specifically, we use the average of the entropies associated with the activities of neurons in the recollection module and the mean estimate of $t$ from the familiarity module. Since the palimpsest property implicitly assumes that all patterns have been presented at some point, we define a pattern to be familiar if its age is less than a fixed threshold $t_\text{th}$. We train the decision module using a Gaussian process classifier4 [15], which yields as outcome the probability of a hit, $P(\text{familiar} \mid t^*, x^*)$, shown in Fig. 4a.

Figure 4: a. Decision boundaries for the recognition module. b. Corresponding ROC curve. c. Performance comparison when the decision layer uses signals from the familiarity module, the recollection module, or both. d. Same comparison, when data is restricted to recent stimuli. Note that the difference between fam and rec became significant compared to c.
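For the decision module itself, a two-feature classifier suffices. The sketch below uses scikit-learn's GaussianProcessClassifier as a stand-in (the paper does not specify the kernel or training details here, and the toy data are fabricated purely to show the pipeline):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

# feature 0: familiarity module's age estimate; feature 1: average entropy
# of recollection-module activities. Both data and labels are synthetic.
X = np.array([[3, 0.05], [80, 0.60], [10, 0.20], [120, 0.80],
              [5, 0.10], [90, 0.70]])
t_th = 50
y = (X[:, 0] < t_th).astype(int)       # 'familiar' = age below threshold

clf = GaussianProcessClassifier().fit(X, y)
print(clf.predict_proba([[20.0, 0.30]])[:, 1])   # P(familiar | t*, x*)
```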
The shape of the resulting discriminator, namely that it is not parallel to either axis, suggests that the output of both modules is needed for successful recognition, as suggested experimentally [4, 16]. The fact that a classifier trained using only one of the two dimensions cannot match the recognition performance of that using both confirms this observation (Fig. 4c). Moreover, the ROC curve produced by the classifier, plotting hit rates against false alarms as relative losses are varied, has a similar shape to those obtained for human behavioral data: it has a so-called 'curvi-linear' character because of the apparent intersect at a finite hit probability for a false alarm rate of 0 [17] (Fig. 4b).

Lastly, as recognition is known to rely more on familiarity for relatively recent patterns [18], we estimate recognition performance for recent patterns, which we define as having age $t \leq \frac{t_\text{th}}{2}$. To determine the contribution of each module in recognition outcomes in this case, we estimate performance of classifiers trained on single input dimensions for this test data. Consistent with experimental data, our analysis reveals that the familiarity signal gives a more reliable estimate of novelty, compared to the recollection output, for relatively recent items (Fig. 4d).

4 The specific classifier was chosen as it allows for an easy estimation of the ROC curves. Future work should explore analytical decision rules.

6 Conclusions and discussion

Knowing the age of a pattern is critical for retrieval from palimpsest memories, a consideration that has so far eluded theoretical inquiry. We showed that a memory system could either treat this information implicitly, by marginalizing over all possible ages, or it could estimate age explicitly as a form of familiarity. In principle, both solutions should have similar performance, given the same resources. In practice, however, a system involving dual modules is significantly better.

In our model, the posterior over possible stored patterns was represented in neural activities via samples. We showed that a complex, biologically-questionable sampling procedure would be necessary for the implicit, single module, system. Instead, a dual memory system with two functionally distinct but closely interacting modules yielded the best performance both for efficient recollection and for recognition. Importantly, though Gibbs sampling and tempered transitions provide a useful framework for understanding the performance differences between different memory systems, the presented results are not restricted to a sampling-based implementation. Since age and identity are tightly correlated, a mean field solution that uses factorized distributions [5] shows very similar behavior (see supplementary text). Similarly, the specific details of the familiarity module are not critical for these effects, which should be apparent for any alternative implementation correctly estimating pattern age.

Representing pattern age, $t$, explicitly essentially amounts to implementing an auxiliary variable for sampling the space of possible patterns, $x$, more efficiently. Such auxiliary variable methods are widely used to increase sampling efficiency when other, simpler methods fail [19]. Moreover, since $t$ in our case specifically modulates the correlated components of the posterior, it can be seen as a 'temperature' parameter, and so we can understand the advantages brought about by the dual system as due to implementing a form of 'simulated tempering',
a class of methods known to help mixing in strongly correlated posteriors.

Our proposal provides a powerful new window onto the contentious debate about the neural mechanisms of recognition and recall. The rationale for our familiarity network was improving recollection; however, the form of the network was motivated by the substantial experimental data [14] on recognition, and indeed standard models of perirhinal cortex activity [20]. These, for instance, also rely on some form of inhibition to mediate interactions between different familiarity neurons. Nevertheless, our model is the first to link the computational function of familiarity networks to recall; it is distinct also in that it considers palimpsest synapses, as previous models use purely additive learning rules [20]. Although we only considered pattern age as the basis of familiarity here, the principle of the interaction between familiarity and recollection remains the same in an extended setting, when familiarity characterizes the expected strength of the memory trace more completely, including the effects of retention interval, number of repetitions, and spacing between repetitions. Future work with the extended model should allow us to address familiarity, novelty, and recency neurons in the perirhinal cortex, and indeed provide a foundation for new thinking about this region.

In our model familiarity interacts with recollection by multiplicatively (or divisively) modulating the contribution of recurrent inputs in the recollection module. Neurally, this effect could be mediated by shunting inhibition via specific classes of hippocampal interneurons which target the dendritic segment corresponding to recurrent connections, thus rescaling the relative contribution of external versus recurrent inputs [21]. Whether pathways reaching CA3 from perirhinal cortex through entorhinal cortex preserve a sufficient amount of input specificity of feed-forward inhibition is unknown.

Our theory predicts important systems-level aspects of memory from synaptic-level constraints. In particular, by optimizing our dual system solely for memory recall we also predicted non-trivial ROC curves for recognition that are in at least broad qualitative agreement with experiments. Future work will be needed to explore whether the ROC curves in our model show similar dissociations in response to specific lesions of the two modules to those found in recent experiments [22, 23], and the relation to other recognition memory models [24].

Acknowledgements

This work was supported by the Wellcome Trust (CS, ML) and the Gatsby Charitable Foundation (PD).

References

[1] Hasselmo, M.E. The role of acetylcholine in learning and memory. Current Opinion in Neurobiology 16, 710-715 (2006).
[2] Fusi, S., Drew, P.J. & Abbott, L.F. Cascade models of synaptically stored memories. Neuron 45, 599-611 (2005).
[3] Brown, M.W. & Aggleton, J.P. Recognition memory: What are the roles of the perirhinal cortex and hippocampus? Nature Reviews Neuroscience 2, 51-61 (2001).
[4] Wixted, J.T. & Squire, L.R. The medial temporal lobe and the attributes of memory. Trends in Cognitive Sciences 15, 210-217 (2011).
[5] Sommer, F.T. & Dayan, P. Bayesian retrieval in associative memories with storage errors. IEEE Transactions on Neural Networks 9, 705-713 (1998).
[6] Lengyel, M., Kwag, J., Paulsen, O. & Dayan, P. Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. Nature Neuroscience 8, 1677-1683 (2005).
[7] Ackley, D., Hinton, G. & Sejnowski, T. A learning algorithm for Boltzmann machines. Cognitive Science 9, 147-169 (1985).
[8] Fiser, J., Berkes, P., Orbán, G. & Lengyel, M. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences 14, 119-130 (2010).
[9] Hinton, G. Deterministic Boltzmann learning performs steepest descent in weight-space. Neural Computation 1, 143-150 (1990).
[10] Lengyel, M. & Dayan, P. Uncertainty, phase and oscillatory hippocampal recall. Advances in Neural Information Processing Systems (2007).
[11] Neal, R.M. Sampling from multimodal distributions using tempered transitions. Statistics and Computing 6, 353-366 (1996).
[12] Fukai, T. & Tanaka, S. A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winners-share-all. Neural Computation 9, 77-97 (1997).
[13] Bogacz, R. & Gurney, K. The basal ganglia and cortex implement optimal decision making between alternative actions. Neural Computation 19, 442-477 (2007).
[14] Xiang, J.Z. & Brown, M.W. Differential neuronal encoding of novelty, familiarity and recency in regions of the anterior temporal lobe. Neuropharmacology 37, 657-676 (1998).
[15] Rasmussen, C.E. & Williams, C.K.I. Gaussian Processes for Machine Learning (MIT Press, 2006).
[16] Warburton, E.C. & Brown, M.W. Findings from animals concerning when interactions between perirhinal cortex, hippocampus and medial prefrontal cortex are necessary for recognition memory. Neuropsychologia 48, 2262-2272 (2010).
[17] Yonelinas, A.P. Components of episodic memory: the contribution of recollection and familiarity. Philosophical Transactions of the Royal Society B: Biological Sciences 356, 1363-1374 (2001).
[18] Yonelinas, A. The nature of recollection and familiarity: A review of 30 years of research. Journal of Memory and Language 46, 441-517 (2002).
[19] Iba, Y. Extended ensemble Monte Carlo. Int. J. Mod. Phys. 12, 653-656 (2001).
[20] Bogacz, R. Comparison of computational models of familiarity discrimination in the perirhinal cortex. Hippocampus (2003).
[21] Mitchell, S. Shunting inhibition modulates neuronal gain during synaptic excitation. Neuron (2003).
[22] Fortin, N.J., Wright, S.P. & Eichenbaum, H. Recollection-like memory retrieval in rats is dependent on the hippocampus. Nature 431, 188-191 (2004).
[23] Cowell, R., Winters, B., Bussey, T. & Saksida, L. Paradoxical false memory for objects after brain damage. Science (2010).
[24] Norman, K. & O'Reilly, R. Modeling hippocampal and neocortical contributions to recognition memory: A complementary-learning-systems approach. Psychological Review (2003).
Infinite Latent SVM for Classification and Multi-task Learning

Jun Zhu†, Ning Chen†, and Eric P. Xing‡
† Dept. of Computer Science & Tech., TNList Lab, Tsinghua University, Beijing 100084, China
‡ Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
[email protected]; [email protected]; [email protected]

Abstract

Unlike existing nonparametric Bayesian models, which rely solely on specially conceived priors to incorporate domain knowledge for discovering improved latent representations, we study nonparametric Bayesian inference with regularization on the desired posterior distributions. While priors can indirectly affect posterior distributions through Bayes' theorem, imposing posterior regularization is arguably more direct and in some cases can be much easier. We particularly focus on developing infinite latent support vector machines (iLSVM) and multi-task infinite latent support vector machines (MT-iLSVM), which explore the large-margin idea in combination with a nonparametric Bayesian model for discovering predictive latent features for classification and multi-task learning, respectively. We present efficient inference methods and report empirical studies on several benchmark datasets. Our results appear to demonstrate the merits inherited from both large-margin learning and Bayesian nonparametrics.

1 Introduction

Nonparametric Bayesian latent variable models have recently gained remarkable popularity in statistics and machine learning, partly owing to their desirable 'nonparametric' nature which allows practitioners to 'sidestep' the difficult model selection problem, e.g., figuring out the unknown number of components (or classes) in a mixture model [2] or determining the unknown dimensionality of latent features [12], by using an appropriate prior distribution with a large support. Among the most commonly used priors are the Gaussian process (GP) [24], Dirichlet process (DP) [2] and Indian buffet process (IBP) [12].

However, standard nonparametric Bayesian models are limited in that they usually make very strict and unrealistic assumptions on data, such as that observations are homogeneous or exchangeable. A number of recent developments in Bayesian nonparametrics have attempted to alleviate such limitations. For example, to handle heterogeneous observations, predictor-dependent processes [20] have been proposed; and to relax the exchangeability assumption, various correlation structures, such as hierarchical structures [26], temporal or spatial dependencies [5], and stochastic ordering dependencies [13, 10], have been introduced. However, all these methods rely solely on crafting a nonparametric Bayesian prior encoding some special structure, which can indirectly influence the posterior distribution of interest via trading off with likelihood models. Since it is the posterior distributions, which capture the latent structures to be learned, that are of our ultimate interest, an arguably more direct way to learn a desirable latent-variable model is to impose posterior regularization (i.e., regularization on posterior distributions), as we will explore in this paper. Another reason for using posterior regularization is that in some cases it is more natural and easier to incorporate domain knowledge, such as the large-margin [15, 31] or manifold constraints [14], directly on posterior distributions rather than through priors, as shown in this paper.
Posterior regularization, usually through imposing constraints on the posterior distributions of latent variables or via some information projection, has been widely studied in learning a finite log-linear model from partially observed data, including generalized expectation [21], posterior regularization [11], and alternating projection [6], all of which are doing maximum likelihood estimation (MLE) to learn a single set of model parameters by optimizing an objective. Recent attempts toward learning a posterior distribution of model parameters include the 'learning from measurements' [19], maximum entropy discrimination [15] and MedLDA [31]. But again, all these methods are limited to finite parametric models. To our knowledge, very few attempts have been made to impose posterior regularization on nonparametric Bayesian latent variable models. One exception is our recent work of infinite SVM (iSVM) [32], a DP mixture of large-margin classifiers. iSVM is a latent class model that assigns each data example to a single mixture component for classification, and the unknown number of mixture components is automatically resolved from data.

In this paper, we present a general formulation of performing nonparametric Bayesian inference subject to appropriate posterior constraints. In particular, we concentrate on developing the infinite latent support vector machines (iLSVM) and multi-task infinite latent support vector machines (MT-iLSVM), which explore the discriminative large-margin idea to learn infinite latent feature models for classification and multi-task learning [3, 4], respectively. As such, our methods as well as [32] represent an attempt to push forward the interface between Bayesian nonparametrics and large-margin learning, which have complementary advantages but have been largely treated as two separate subfields in the machine learning community.

Technically, although it is intuitively natural for MLE-based methods to include a regularization term on the posterior distributions of latent variables, this is not straightforward for Bayesian inference because we do not have an optimization objective to be regularized. We base our work on the interpretation of the Bayes' theorem by Zellner [29], namely, that Bayes' theorem can be reformulated as a minimization problem. Under this optimization framework, we incorporate posterior constraints to do regularized Bayesian inference, with a penalty term that measures the violation of the constraints. Both iLSVM and MT-iLSVM are special cases that explore the large-margin principle to consider supervising information for learning predictive latent features, which are good for classification or multi-task learning. We use the nonparametric IBP prior to allow the models to have an unbounded number of latent features. The regularized inference problem can be efficiently solved with an iterative procedure, which leverages existing high-performance convex optimization techniques.

Related Work: As stated above, both iLSVM and MT-iLSVM generalize the ideas of iSVM to infinite latent feature models. For multi-task learning, nonparametric Bayesian models have been developed in [28, 23] for learning features shared by multiple tasks. But these methods are based on standard Bayesian inference, without the ability to consider posterior regularization, such as the large-margin constraints or the manifold constraints [14]. Finally, MT-iLSVM is a nonparametric Bayesian generalization of the popular multi-task learning methods [1, 16], as explained shortly.

2 Regularized Bayesian Inference with Posterior Constraints

In this section, we present the general framework of regularized Bayesian inference with posterior constraints. We begin with a brief review of the basic results due to Zellner [29].

2.1 Bayesian Inference as a Learning Model

Let $\mathcal{M}$ be a model space, containing any variables whose posterior distributions we are trying to infer. Bayesian inference starts with a prior distribution $\pi(M)$ and a likelihood function $p(x|M)$ indexed by the model $M \in \mathcal{M}$. Then, by the Bayes' theorem, the posterior distribution is
$$p(M|x_1, \cdots, x_N) = \frac{\pi(M) \prod_{n=1}^{N} p(x_n|M)}{p(x_1, \cdots, x_N)} \quad (1)$$
where $p(x_1, \cdots, x_N)$ is the marginal likelihood or evidence of observed data. Zellner [29] first showed that the posterior distribution due to the Bayes' theorem is the solution of the problem
$$\min_{p(M)} \; \mathrm{KL}(p(M) \,\|\, \pi(M)) - \sum_{n=1}^{N} \int \log p(x_n|M) \, p(M) \, dM \qquad \text{s.t.:} \; p(M) \in \mathcal{P}_\text{prob} \quad (2)$$
where $\mathrm{KL}(p(M) \,\|\, \pi(M))$ is the Kullback-Leibler (KL) divergence, and $\mathcal{P}_\text{prob}$ is the space of valid probability distributions with an appropriate dimension.
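Zellner's result is easy to verify numerically for a discrete model space: minimizing the objective of Eq. (2) over the probability simplex recovers the Bayes posterior of Eq. (1). A small sketch follows; the two-model coin example is made up purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy check of Eq. (2): two candidate models M in {0, 1} for a biased coin,
# with head probabilities theta[M]. Data and prior are fabricated.
prior = np.array([0.5, 0.5])
theta = np.array([0.3, 0.8])                  # p(x = 1 | M)
x = np.array([1, 1, 0, 1])                    # observed coin flips

loglik = np.array([sum(np.log(t if xi else 1 - t) for xi in x) for t in theta])

def objective(q):
    # KL(q || prior) - E_q[ sum_n log p(x_n | M) ], the objective of Eq. (2)
    return np.sum(q * (np.log(q) - np.log(prior))) - np.dot(q, loglik)

res = minimize(objective, x0=[0.5, 0.5],
               constraints={'type': 'eq', 'fun': lambda q: q.sum() - 1},
               bounds=[(1e-9, 1)] * 2)

bayes = prior * np.exp(loglik)
bayes /= bayes.sum()                          # Eq. (1)
print(res.x, bayes)                           # the two should agree
```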
2 Regularized Bayesian Inference with Posterior Constraints In this section, we present the general framework of regularized Bayesian inference with posterior constraints. We begin with a brief review of the basic results due to Zellner [29]. 2.1 Bayesian Inference as a Learning Model Let M be a model space, containing any variables whose posterior distributions we are trying to infer. Bayesian inference starts with a prior distribution ?(M) and a likelihood function p(x|M) indexed by the model M ? M. Then, by the Bayes? theorem, the posterior distribution is p(M|x1 , ? ? ? , xN ) = ? ?(M) N n=1 p(xn |M) , p(x1 , ? ? ? , xN ) (1) where p(x1 , ? ? ? , xN ) is the marginal likelihood or evidence of observed data. Zellner [29] first showed that the posterior distribution due to the Bayes? theorem is the solution of the problem min p(M) s.t. : KL(p(M)??(M)) ? p(M) ? Pprob , N ? ? n=1 log p(xn |M)p(M)dM (2) where KL(p(M)??(M)) is the Kullback-Leibler (KL) divergence, and Pprob is the space of valid probability distributions with an appropriate dimension. 2.2 Regularized Bayesian Inference with Posterior Constraints As commented by E.T. Jaynes [29], ?this fresh interpretation of Bayes? theorem could make the use of Bayesian methods more attractive and widespread, and stimulate new developments in 2 the general theory of inference?. Below, we study how to extend the basic results to incorporate posterior constraints in Bayesian inference. In the standard Bayesian inference, the constraints (i.e., p(M) ? Pprob ) do not have auxiliary free parameters. In general, regularized Bayesian inference solves the constrained optimization problem min p(M),? s.t. : KL(p(M)??(M)) ? p(M) ? Ppost (?), N ? ? n=1 log p(xn |M)p(M)dM + U (?) (3) where Ppost (?) is a subspace of distributions that satisfy a set of constraints. The auxiliary parameters ? are usually nonnegative and interpreted as slack variables. U (?) is a convex function, which usually corresponds to a surrogate loss (e.g., hinge loss) of a prediction rule, as we shall see. We can use an iterative procedure to do the regularized Bayesian inference based on convex optimization techniques. The general recipe is that we use the Lagrangian method by introducing Lagrangian multipliers ?. Then, we iteratively solve for p(M) with ? and ? fixed; and solve for ? and ? with p(M) given. For the first step, we can use sampling or variational methods [9] to do approximate inference; and under certain conditions, such as using the constraints based on posterior expectation [21], the second step can be efficiently done using high-performance convex optimization techniques, as we shall see. 3 Infinite Latent Support Vector Machines In this section, we concretize the ideas of regularized Bayesian inference by particularly focusing on developing large-margin classifiers with an unbounded dimension of latent features, which can be used as a representation of examples for the single-task classification or as a common representation that captures relationships among multiple tasks for multi-task learning. We first present the single-task classification model. The basic setup is that we project each data example x ? X ? RD to a latent feature vector z. Here, we consider binary features1 . Given a set of N data examples, let Z be the matrix, of which each row is a binary vector zn associated with data sample n. Instead of pre-specifying a fixed dimension of z, we resort to the nonparametric Bayesian methods and let z have an infinite number of dimensions. 
To make the expected number of active latent features finite, we put the well-studied IBP prior on the binary feature matrix $Z$.

3.1 Indian Buffet Process

The Indian buffet process (IBP) was proposed in [12] and has been successfully applied in various fields, such as link prediction [22] and multi-task learning [23]. We focus on its stick-breaking construction [25], which is good for developing efficient inference methods. Let $\pi_k \in (0, 1)$ be a parameter associated with column $k$ of the binary matrix $Z$. Given $\pi_k$, each $z_{nk}$ in column $k$ is sampled independently from $\mathrm{Bernoulli}(\pi_k)$. The parameters $\pi$ are generated by a stick-breaking process
$$\pi_1 = \nu_1, \quad \text{and} \quad \pi_k = \nu_k \pi_{k-1} = \prod_{i=1}^{k} \nu_i, \quad (4)$$
where $\nu_i \sim \mathrm{Beta}(\alpha, 1)$. This process results in a decreasing sequence of probabilities $\pi_k$. Specifically, given a finite dataset, the probability of seeing feature $k$ decreases exponentially with $k$.

3.2 Infinite Latent Support Vector Machines

We consider multi-way classification, where each training example is provided with a categorical label $y$, where $y \in \mathcal{Y} \overset{\text{def}}{=} \{1, \cdots, L\}$. For binary classification and regression, a similar procedure can be applied to impose large-margin constraints on posterior distributions. Suppose that the latent features $z$ are given; then we can define the latent discriminant function as
$$f(y, x, z; \eta) \overset{\text{def}}{=} \eta^\top g(y, x, z), \quad (5)$$
where $g(y, x, z)$ is a vector stacking of $L$ subvectors,2 of which the $y$th is $z$ and all the others are zero. Since we are doing Bayesian inference, we need to maintain the entire distribution profile of the latent features $Z$. However, in order to make a prediction on the observed data $x$, we need to get rid of the uncertainty of $Z$. Here, we define the effective discriminant function as an expectation3 (i.e., a weighted average considering all possible values of $Z$) of the latent discriminant function. To make the model fully Bayesian, we also treat $\eta$ as random and aim to infer the posterior distribution $p(Z, \eta)$ from given data. More formally, the effective discriminant function $f : \mathcal{X} \times \mathcal{Y} \mapsto \mathbb{R}$ is
$$f(y, x; p(Z, \eta)) \overset{\text{def}}{=} \mathbb{E}_{p(Z, \eta)}[f(y, x, z; \eta)] = \mathbb{E}_{p(Z, \eta)}[\eta^\top g(y, x, z)]. \quad (6)$$
Note that although the number of latent features is allowed to be infinite, with probability one, the number of non-zero features is finite when only a finite number of data are observed, under the IBP prior. Moreover, to make it computationally feasible, we usually set a finite upper bound $K$ on the number of possible features, where $K$ is sufficiently large and known as the truncation level (see Sec. 3.4 and Appendix A.2 for details). As shown in [9], the $\ell_1$-distance truncation error of marginal distributions decreases exponentially as $K$ increases.

With the above definitions, we define the $\mathcal{P}_\text{post}(\xi)$ in problem (3) using large-margin constraints as
$$\mathcal{P}_\text{post}^c(\xi) \overset{\text{def}}{=} \left\{ p(Z, \eta) \;\middle|\; \begin{array}{l} \forall n \in I_\text{tr}: \; f(y_n, x_n; p(Z, \eta)) - f(y, x_n; p(Z, \eta)) \geq \Delta(y, y_n) - \xi_n, \; \forall y \\ \xi_n \geq 0 \end{array} \right\} \quad (7)$$
and define the penalty function as $U^c(\xi) \overset{\text{def}}{=} C \sum_{n \in I_\text{tr}} \xi_n^p$, where $p \geq 1$. If $p$ is 1, minimizing $U^c(\xi)$ is equivalent to minimizing the hinge loss (or $\ell_1$-loss) $\mathcal{R}_h^c$ of the prediction rule (9), where $\mathcal{R}_h^c = C \sum_{n \in I_\text{tr}} \max_y \big( f(y, x_n; p(Z, \eta)) + \Delta(y, y_n) - f(y_n, x_n; p(Z, \eta)) \big)$; if $p$ is 2, the surrogate loss is the $\ell_2$-loss. For clarity, we consider the hinge loss. The non-negative cost function $\Delta(y, y_n)$ (e.g., the 0/1-cost) measures the cost of predicting $x_n$ to be $y$ when its true label is $y_n$, and $I_\text{tr}$ is the index set of training data.

2 We can consider the input features $x$ or certain statistics of them in combination with the latent features $z$ to define a classifier boundary, by simply concatenating them in the subvectors.
3 Although other choices such as taking the mode are possible, our choice could lead to a computationally easy problem because expectation is a linear functional of the distribution under which the expectation is taken. Moreover, expectation can be more robust than taking the mode [18], and it has been used in [31, 32].
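Under a factorized, truncated posterior $q(Z)q(\eta)$ with mean parameters, the expectation in Eq. (6) is simple to evaluate because $g$ is linear in $z$: it reduces to $\mathbb{E}[\eta_y]^\top \mathbb{E}[z]$ per class. Here is a minimal sketch of the effective scores and the hinge term of Eq. (7); the mean parameterization and names are our own simplification.

```python
import numpy as np

def effective_scores(E_eta, E_z):
    """Eq. (6) for one example. E_eta has shape (L, K) (one weight subvector
    per class); E_z has shape (K,) (posterior feature probabilities). Since
    g(y, x, z) just places z in the y-th subvector, the expectation reduces
    to E[eta_y]^T E[z] for each class y."""
    return E_eta @ E_z                       # shape (L,)

def hinge_term(E_eta, E_z, y_true, cost=1.0):
    """max_y ( f(y, x) + Delta(y, y_n) ) - f(y_n, x), with a 0/1 cost."""
    s = effective_scores(E_eta, E_z)
    delta = np.full_like(s, cost)
    delta[y_true] = 0.0
    return np.max(s + delta) - s[y_true]

# toy usage: L = 3 classes, truncation level K = 5
rng = np.random.default_rng(0)
E_eta, E_z = rng.normal(size=(3, 5)), rng.uniform(size=5)
print(hinge_term(E_eta, E_z, y_true=1))
```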
3.2 Infinite Latent Support Vector Machines

We consider multi-way classification, where each training example is provided with a categorical label $y$, where $y \in \mathcal{Y} \overset{\mathrm{def}}{=} \{1, \cdots, L\}$. For binary classification and regression, a similar procedure can be applied to impose large-margin constraints on posterior distributions. Suppose that the latent features $z$ are given; then we can define the latent discriminant function as
\[
f(y, x, z; \eta) \overset{\mathrm{def}}{=} \eta^{\top} g(y, x, z), \quad (5)
\]
where $g(y, x, z)$ is a vector stacking of $L$ subvectors,² of which the $y$th is $z^{\top}$ and all the others are zero. Since we are doing Bayesian inference, we need to maintain the entire distribution profile of the latent features $Z$. However, in order to make a prediction on the observed data $x$, we need to get rid of the uncertainty of $Z$. Here, we define the effective discriminant function as an expectation³ (i.e., a weighted average considering all possible values of $Z$) of the latent discriminant function. To make the model fully Bayesian, we also treat $\eta$ as random and aim to infer the posterior distribution $p(Z, \eta)$ from given data. More formally, the effective discriminant function $f: \mathcal{X} \times \mathcal{Y} \mapsto \mathbb{R}$ is
\[
f(y, x; p(Z, \eta)) \overset{\mathrm{def}}{=} \mathbb{E}_{p(Z,\eta)}[f(y, x, z; \eta)] = \mathbb{E}_{p(Z,\eta)}[\eta^{\top} g(y, x, z)]. \quad (6)
\]

² We can consider the input features $x$, or certain statistics of them, in combination with the latent features $z$ to define a classifier boundary, by simply concatenating them in the subvectors.
³ Although other choices such as taking the mode are possible, our choice could lead to a computationally easy problem because expectation is a linear functional of the distribution under which the expectation is taken. Moreover, expectation can be more robust than taking the mode [18], and it has been used in [31, 32].

Note that although the number of latent features is allowed to be infinite, with probability one the number of non-zero features is finite when only a finite number of data are observed, under the IBP prior. Moreover, to make it computationally feasible, we usually set a finite upper bound $K$ on the number of possible features, where $K$ is sufficiently large and known as the truncation level (see Sec. 3.4 and Appendix A.2 for details). As shown in [9], the $\ell_1$-distance truncation error of marginal distributions decreases exponentially as $K$ increases.

With the above definitions, we define the $\mathcal{P}_{\mathrm{post}}(\xi)$ in problem (3) using large-margin constraints as
\[
\mathcal{P}_{\mathrm{post}}^{c}(\xi) \overset{\mathrm{def}}{=} \left\{ p(Z, \eta) \;\middle|\; \begin{array}{l} \forall n \in I_{\mathrm{tr}}: \; f(y_n, x_n; p(Z, \eta)) - f(y, x_n; p(Z, \eta)) \geq \ell(y, y_n) - \xi_n, \; \forall y \\ \xi_n \geq 0 \end{array} \right\} \quad (7)
\]
and define the penalty function as $U^{c}(\xi) \overset{\mathrm{def}}{=} C \sum_{n \in I_{\mathrm{tr}}} \xi_n^{p}$, where $p \geq 1$. If $p$ is 1, minimizing $U^{c}(\xi)$ is equivalent to minimizing the hinge-loss (or $\ell_1$-loss) $\mathcal{R}_h^{c}$ of the prediction rule (9), where $\mathcal{R}_h^{c} = C \sum_{n \in I_{\mathrm{tr}}} \max_y \big( f(y, x_n; p(Z, \eta)) + \ell(y, y_n) - f(y_n, x_n; p(Z, \eta)) \big)$; if $p$ is 2, the surrogate loss is the $\ell_2$-loss. For clarity, we consider the hinge loss. The non-negative cost function $\ell(y, y_n)$ (e.g., the 0/1-cost) measures the cost of predicting $x_n$ to be $y$ when its true label is $y_n$. $I_{\mathrm{tr}}$ is the index set of the training data.

In order to robustly estimate the latent matrix $Z$, we need a reasonable amount of data. Therefore, we also relate $Z$ to the observed data $x$ by defining a likelihood model to provide as much data as possible. Here, we define the linear-Gaussian likelihood model for real-valued data
\[
p(x_n | z_n, W, \sigma_{n0}^2) = \mathcal{N}(x_n | W z_n^{\top}, \sigma_{n0}^2 I), \quad (8)
\]
where $W$ is a random loading matrix and $I$ is an identity matrix with appropriate dimensions. We assume $W$ follows an independent Gaussian prior, i.e., $\pi(W) = \prod_d \mathcal{N}(w_d | 0, \sigma_0^2 I)$. Fig. 1 (a) shows the graphical structure of iLSVM. The hyperparameters $\sigma_0^2$ and $\sigma_{n0}^2$ can be set a priori or estimated from observed data (see Appendix A.2 for details).

Testing: to make predictions on test examples, we put both training and test data together to do the regularized Bayesian inference. For training data, we impose the above large-margin constraints because of the awareness of their true labels, while for test data, we do the inference without the large-margin constraints since we do not know their true labels. After inference, we make the prediction via the rule
\[
y^{*} \overset{\mathrm{def}}{=} \arg\max_{y} f(y, x; p(Z, \eta)). \quad (9)
\]
The ability to generalize to test data relies on the fact that all the data examples share $\eta$ and the IBP prior. We can also cast the problem as a transductive inference problem by imposing additional constraints on test data [17]. However, the resulting problem will generally be harder to solve.

3.3 Multi-Task Infinite Latent Support Vector Machines

Different from classification, which is typically formulated as a single learning task, multi-task learning aims to improve a set of related tasks through sharing statistical strength between these tasks, which are performed jointly. Many different approaches have been developed for multi-task learning (see [16] for a review). In particular, learning a common latent representation shared by all the related tasks has proven to be an effective way to capture task relationships [1, 3, 23]. Below, we present the multi-task infinite latent SVM (MT-iLSVM) for learning a common binary projection matrix $Z$ to capture the relationships among multiple tasks. As in iLSVM, we put the IBP prior on $Z$ to allow it to have an unbounded number of columns.

[Figure 1: Graphical structures of (a) infinite latent SVM (iLSVM); and (b) multi-task infinite latent SVM (MT-iLSVM). For MT-iLSVM, the dashed nodes (i.e., $\varsigma_m$) are included to illustrate the task relatedness. We have omitted the priors on $W$ and $\eta$ for notation brevity.]

Suppose we have $M$ related tasks. Let $\mathcal{D}_m = \{(x_{mn}, y_{mn})\}_{n \in I_{\mathrm{tr}}^m}$ be the training data for task $m$. We consider binary classification tasks, where $\mathcal{Y}_m = \{+1, -1\}$. Extension to multi-way classification or regression tasks can be easily done. If the latent matrix $Z$ is given, we define the latent discriminant function for task $m$ as
\[
f_m(x, Z; \eta_m) \overset{\mathrm{def}}{=} (Z \eta_m)^{\top} x = \eta_m^{\top} (Z^{\top} x). \quad (10)
\]
This definition provides two views of how the $M$ tasks get related.
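As an aside, because $g(y, x, z)$ places $z$ in the $y$-th subvector, the effective discriminant (6) reduces — under a factorized posterior over $(Z, \eta)$ of the kind used later in Sec. 3.4, where $\eta$ and $z$ are independent — to $\mathbb{E}[\eta_y]^{\top} \mathbb{E}[z]$, and the prediction rule (9) becomes a simple argmax. A hedged sketch with made-up posterior means:

```python
import numpy as np

def predict(mu_eta, psi, n_classes):
    """Prediction rule (9) under a factorized posterior.

    mu_eta: posterior mean of eta, shape (L * K,), stacked per class;
    psi:    posterior Bernoulli means E[z] for one example, shape (K,).
    Since g(y, x, z) places z in the y-th subvector, and eta, z are
    independent in the factorized family, f(y, x) = E[eta_y]^T E[z].
    """
    K = psi.shape[0]
    scores = [mu_eta[y * K:(y + 1) * K] @ psi for y in range(n_classes)]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
L, K = 3, 8
print("predicted class:", predict(rng.normal(size=L * K), rng.random(K), L))
```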
If we let $\varsigma_m = Z \eta_m$, then $\varsigma_m$ are the actual parameters of task $m$, and all $\varsigma_m$ in different tasks are coupled by sharing the same latent matrix $Z$. Another view is that each task $m$ has its own parameters $\eta_m$, but all the tasks share the same latent features $Z^{\top} x$, which is a projection of the input features $x$, where $Z$ is the latent projection matrix. As such, our method can be viewed as a nonparametric Bayesian treatment of alternating structure optimization (ASO) [1], which learns a single projection matrix with a pre-specified latent dimension. Moreover, different from [16], which learns a binary vector with known dimensionality to select features or kernels on $x$, we learn an unbounded projection matrix $Z$ using nonparametric Bayesian techniques. As in iLSVM, we take the fully Bayesian treatment (i.e., $\eta_m$ are also random variables) and define the effective discriminant function for task $m$ as the expectation
\[
f_m(x; p(Z, \eta)) \overset{\mathrm{def}}{=} \mathbb{E}_{p(Z,\eta)}[f_m(x, Z; \eta_m)] = \mathbb{E}_{p(Z,\eta)}[Z \eta_m]^{\top} x. \quad (11)
\]
Then, the prediction rule for task $m$ is naturally $\hat{y} \overset{\mathrm{def}}{=} \mathrm{sign}\, f_m(x)$. Similarly, we do regularized Bayesian inference by imposing the following constraints and defining $U^{MT}(\xi) \overset{\mathrm{def}}{=} C \sum_{m,\, n \in I_{\mathrm{tr}}^m} \xi_{mn}$:
\[
\mathcal{P}_{\mathrm{post}}^{MT}(\xi) \overset{\mathrm{def}}{=} \left\{ p(Z, \eta) \;\middle|\; \begin{array}{l} \forall m,\, \forall n \in I_{\mathrm{tr}}^m: \; y_{mn}\, \mathbb{E}_{p(Z,\eta)}[Z \eta_m]^{\top} x_{mn} \geq 1 - \xi_{mn} \\ \xi_{mn} \geq 0 \end{array} \right\}. \quad (12)
\]
As in iLSVM, minimizing $U^{MT}(\xi)$ is equivalent to minimizing the hinge-loss $\mathcal{R}_h^{MT}$ of the multiple binary prediction rules, where $\mathcal{R}_h^{MT} = C \sum_{m,\, n \in I_{\mathrm{tr}}^m} \max(0,\, 1 - y_{mn}\, \mathbb{E}_{p(Z,\eta)}[Z \eta_m]^{\top} x_{mn})$. Finally, to obtain more data to estimate the latent $Z$, we also relate it to the observed data by defining the likelihood model
\[
p(x_{mn} | w_{mn}, Z, \lambda_{mn}^2) = \mathcal{N}(x_{mn} | Z w_{mn}, \lambda_{mn}^2 I), \quad (13)
\]
where $w_{mn}$ is a vector. We assume $W$ has an independent prior $\pi(W) = \prod_{mn} \mathcal{N}(w_{mn} | 0, \lambda_{m0}^2 I)$. Fig. 1 (b) illustrates the graphical structure of MT-iLSVM. For testing, we use the same strategy as in iLSVM to do Bayesian inference on both training and test data. The difference is that training data are subject to large-margin constraints, while test data are not. Similarly, the hyper-parameters $\lambda_{m0}^2$ and $\lambda_{mn}^2$ can be set a priori or estimated from data (see Appendix A.1 for details).

3.4 Inference with Truncated Mean-Field Constraints

We briefly discuss how to do regularized Bayesian inference (3) with the large-margin constraints for MT-iLSVM. For iLSVM, a similar procedure applies. To make the problem easier to solve, we use the stick-breaking representation of the IBP, which includes the auxiliary variables $\nu$, and infer the posterior $p(\nu, W, Z, \eta)$. Furthermore, we impose the truncated mean-field constraint that
\[
p(\nu, W, Z, \eta) = p(\eta) \prod_{k=1}^{K} \left( p(\nu_k | \gamma_k) \prod_{d=1}^{D} p(z_{dk} | \psi_{dk}) \right) \prod_{mn} p(w_{mn} | \Phi_{mn}, \sigma_{mn}^2 I), \quad (14)
\]
where $K$ is the truncation level; $p(w_{mn} | \Phi_{mn}, \sigma_{mn}^2 I) = \mathcal{N}(w_{mn} | \Phi_{mn}, \sigma_{mn}^2 I)$; $p(z_{dk} | \psi_{dk}) = \mathrm{Bernoulli}(\psi_{dk})$; and $p(\nu_k | \gamma_k) = \mathrm{Beta}(\gamma_{k1}, \gamma_{k2})$. We first turn the constrained problem into a problem of finding a stationary point using Lagrangian methods, by introducing Lagrange multipliers $\omega$, one for each large-margin constraint as defined in Eq. (12), and $u$ for the nonnegativity constraints of $\xi$. Let $L(p, \xi, \omega, u)$ be the Lagrangian functional. The inference procedure iteratively solves the following two steps (we defer the details to Appendix A.1):

Infer $p(\nu)$, $p(W)$, and $p(Z)$: for $p(W)$, since the prior is also normal, we can easily derive the update rules for $\Phi_{mn}$ and $\sigma_{mn}^2$. For $p(\nu)$, we have the same update rules as in [9]. We defer the details to Appendix A.1.
Now, we focus on $p(Z)$ and provide insights on how the large-margin constraints regularize the procedure of inferring the latent matrix $Z$. Since the large-margin constraints are linear in $p(Z)$, we can get the mean-field update equation as $\psi_{dk} = \frac{1}{1 + e^{-\vartheta_{dk}}}$, where
\[
\vartheta_{dk} = \sum_{j=1}^{k} \mathbb{E}_p[\log \nu_j] - \mathcal{L}_k - \sum_{mn} \frac{1}{2\lambda_{mn}^2} \Big( K \sigma_{mn}^2 + (\Phi_{mn}^k)^2 - 2 x_{mn}^d \Phi_{mn}^k + 2 \Phi_{mn}^k \sum_{j \neq k} \psi_{dj} \Phi_{mn}^j \Big) + \sum_{m,\, n \in I_{\mathrm{tr}}^m} y_{mn}\, \omega_{mn}\, \mathbb{E}_p[\eta_{mk}]\, x_{mn}^d, \quad (15)
\]
where $\mathcal{L}_k$ is a lower bound of $\mathbb{E}_p[\log(1 - \prod_{j=1}^{k} \nu_j)]$ (see Appendix A.1 for details). The last term of $\vartheta_{dk}$ is due to the large-margin posterior constraints as defined in Eq. (12).

Infer $p(\eta)$ and solve for $\omega$ and $\xi$: we optimize $L$ over $p(\eta)$ and can get $p(\eta) = \prod_m p(\eta_m)$, where $p(\eta_m) \propto \pi(\eta_m) \exp\{\eta_m^{\top} \rho_m\}$ and $\rho_m = \sum_{n \in I_{\mathrm{tr}}^m} y_{mn}\, \omega_{mn}\, (\Psi^{\top} x_{mn})$, with $\Psi$ the matrix of the Bernoulli means $\psi_{dk}$. Here, we assume $\pi(\eta_m)$ is standard normal. Then, we have $p(\eta_m) = \mathcal{N}(\eta_m | \rho_m, I)$. Substituting the solution of $p(\eta)$ into $L$, we get $M$ independent dual problems
\[
\max_{\omega} \; -\frac{1}{2} \rho_m^{\top} \rho_m + \sum_{n \in I_{\mathrm{tr}}^m} \omega_{mn} \quad \text{s.t.:} \; 0 \leq \omega_{mn} \leq C, \; \forall n \in I_{\mathrm{tr}}^m, \quad (16)
\]
which (or its primal form) can be efficiently solved with a binary SVM solver, such as SVM-light.
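The per-task dual (16) is a box-constrained quadratic program. A minimal sketch follows (using a generic bounded solver in place of a dedicated SVM solver such as SVM-light; the box $[0, C]$ follows from the hinge-loss dual, and all variable names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def solve_task_dual(V, y, C):
    """Per-task dual (16): max_w -0.5 * ||rho||^2 + sum_n w_n over the box
    0 <= w_n <= C, with rho = sum_n y_n w_n v_n and v_n = Psi^T x_n.

    V: (N, K) array of projected inputs v_n; y: labels in {-1, +1}.
    """
    YV = y[:, None] * V                       # rows are y_n v_n

    def neg_dual(w):
        rho = YV.T @ w
        return 0.5 * rho @ rho - w.sum()

    def grad(w):
        return YV @ (YV.T @ w) - np.ones_like(w)

    res = minimize(neg_dual, x0=np.zeros(len(y)), jac=grad,
                   bounds=[(0.0, C)] * len(y), method="L-BFGS-B")
    return res.x, YV.T @ res.x                # dual variables omega, and rho_m

rng = np.random.default_rng(3)
V = rng.normal(size=(40, 5))
y = rng.choice([-1.0, 1.0], size=40)
omega, rho = solve_task_dual(V, y, C=1.0)
print("active constraints:", int((omega > 1e-8).sum()))
```

The returned $\rho_m$ is exactly the posterior mean of $\eta_m$, so one solve per task also updates $p(\eta_m)$.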
4 Experiments

We present empirical results for both classification and multi-task learning. Our results demonstrate the merits inherited from both Bayesian nonparametrics and large-margin learning.

4.1 Multi-way Classification

We evaluate the infinite latent SVM (iLSVM) for classification on the real TRECVID2003 and Flickr image datasets, which have been extensively evaluated in the context of learning finite latent feature models [8]. TRECVID2003 consists of 1078 video key-frames, and each example has two types of features — a 1894-dimension binary vector of text features and a 165-dimension HSV color histogram. The Flickr image dataset consists of 3411 natural scene images about 13 types of animals (e.g., tiger, cat, etc.) downloaded from the Flickr website. Also, each example has two types of features, including 500-dimension SIFT bag-of-words and 634-dimension real-valued features (e.g., color histogram, edge direction histogram, and block-wise color moments). Here, we consider the real-valued features only, by using normal distributions for $x$. We compare iLSVM with the large-margin Harmonium (MMH) [8], which was shown to outperform many other latent feature models [8], and two decoupled approaches — EFH+SVM and IBP+SVM. EFH+SVM uses the exponential family Harmonium (EFH) [27] to discover latent features and then learns a multi-way SVM classifier. IBP+SVM is similar, but uses an IBP factor analysis model [12] to discover latent features. As finite models, both MMH and EFH+SVM need to pre-specify the dimensionality of latent features. We report their results on classification accuracy and F1 score (i.e., the average F1 score over all possible classes) [32] achieved with the best dimensionality in Table 1. For iLSVM and IBP+SVM, we use the mean-field inference method and present the average performance with 5 randomly initialized runs (see Appendix A.2 for the algorithm and initialization details). We perform 5-fold cross-validation on the training data to select hyperparameters, e.g., $\alpha$ and $C$ (we use the same procedure for MT-iLSVM). We can see that iLSVM achieves comparable performance with the nearly optimal MMH, without needing to pre-specify the latent feature dimension,⁴ and is much better than the decoupled approaches (i.e., IBP+SVM and EFH+SVM).

⁴ We set the truncation level to 300, which is large enough.

Table 1: Classification accuracy and F1 scores on the TRECVID2003 and Flickr image datasets.

                 TRECVID2003                    Flickr
Model            Accuracy       F1 score        Accuracy       F1 score
EFH+SVM          0.565 ± 0.0    0.427 ± 0.0     0.476 ± 0.0    0.461 ± 0.0
MMH              0.566 ± 0.0    0.430 ± 0.0     0.538 ± 0.0    0.512 ± 0.0
IBP+SVM          0.553 ± 0.013  0.397 ± 0.030   0.500 ± 0.004  0.477 ± 0.009
iLSVM            0.563 ± 0.010  0.448 ± 0.011   0.533 ± 0.005  0.510 ± 0.010

Table 2: Multi-label classification performance on the Scene and Yeast datasets.

                   Yeast                                          Scene
Model              Acc             F1-Micro        F1-Macro       Acc             F1-Micro        F1-Macro
yaxue [23]         0.5106          0.3897          0.4022         0.7765          0.2669          0.2816
piyushrai-1 [23]   0.5212          0.3631          0.3901         0.7756          0.3153          0.3242
piyushrai-2 [23]   0.5424          0.3946          0.4112         0.7911          0.3214          0.3226
MT-IBP+SVM         0.5475 ± 0.005  0.3910 ± 0.006  0.4345 ± 0.007 0.8590 ± 0.002  0.4880 ± 0.012  0.5147 ± 0.018
MT-iLSVM           0.5792 ± 0.003  0.4258 ± 0.005  0.4742 ± 0.008 0.8752 ± 0.004  0.5834 ± 0.026  0.6148 ± 0.020

4.2 Multi-task Learning

4.2.1 Description of the Data

Scene and Yeast Data: These datasets are from the UCI repository, and each data example has multiple labels. As in [23], we treat the multi-label classification as a multi-task learning problem, where each label assignment is treated as a binary classification task. The Yeast dataset consists of 1500 training and 917 test examples, each having 103 features, and the number of labels (or tasks) per example is 14. The Scene dataset consists of 1211 training and 1196 test examples, each having 294 features, and the number of labels (or tasks) per example for this dataset is 6.

School Data: This dataset comes from the Inner London Education Authority and has been used to study the effectiveness of schools. It consists of examination records from 139 secondary schools in the years 1985, 1986 and 1987. It is a random 50% sample with 15362 students. The dataset is publicly available and has been extensively evaluated in various multi-task learning methods [4, 7, 30], where each task is defined as predicting the exam scores of students belonging to a specific school, based on four student-dependent features (year of the exam, gender, VR band and ethnic group) and four school-dependent features (percentage of students eligible for free school meals, percentage of students in VR band 1, school gender and school denomination). In order to compare with the above methods, we follow the same setup described in [3, 4]; similarly, we create dummy variables for those features that are categorical, forming a total of 19 student-dependent features and 8 school-dependent features. We use the same 10 random splits⁵ of the data, so that 75% of the examples from each school (task) belong to the training set and 25% to the test set. On average, the training set includes about 80 students per school and the test set about 30 students per school.

⁵ Available at: http://ttic.uchicago.edu/~argyriou/code/index.html

4.2.2 Results

Scene and Yeast Data: We compare with the closely related nonparametric Bayesian methods [23, 28], which were shown to outperform the independent Bayesian logistic regression and a single-task pooling approach [23], and a decoupled method MT-IBP+SVM⁶ that uses an IBP factor analysis model to find shared latent features among multiple tasks and then builds separate SVM classifiers for different tasks. For MT-iLSVM and MT-IBP+SVM, we use the mean-field inference method in Sec. 3.4 and report the average performance with 5 randomly initialized runs (see Appendix A.1 for initialization details).

⁶ This decoupled approach is in fact a one-iteration MT-iLSVM, where we first infer the shared latent matrix Z and then learn an SVM classifier for each task.
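For reference, the F1-Macro score reported in Tables 1 and 2 is the unweighted average of the per-class F1 scores; a minimal computation (our own sketch of the standard definition) is:

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Average F1 over classes (the 'F1-Macro' of Tables 1-2). Per class c:
    precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = 2PR/(P+R), with empty
    denominators treated as zero."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(macro_f1(y_true, y_pred, n_classes=3))
```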
For comparison with [23, 28], we use the overall classification accuracy, F1-Macro and F1-Micro as performance measures. Table 2 shows the results. We can see that the large-margin MT-iLSVM performs much better than the other nonparametric Bayesian methods and than MT-IBP+SVM, which separates the inference of latent features from the learning of the classifiers.

School Data: We use the percentage of explained variance [4] as the measure of regression performance, which is defined as the total variance of the data minus the sum-squared error on the test set, as a percentage of the total variance. Since we use the same settings, we can compare with the state-of-the-art results of Bayesian multi-task learning (BMTL) [4], multi-task Gaussian processes (MTGP) [7], convex multi-task relationship learning (MTRL) [30], and single-task learning (STL) as reported in [7, 30]. For MT-iLSVM and MT-IBP+SVM, we also report the results achieved by using both the latent features (i.e., $Z^{\top} x$) and the original input features $x$ through vector concatenation, and we denote the corresponding methods by MT-iLSVMf and MT-IBP+SVMf, respectively.

Table 3: Percentage of explained variance by various models on the School dataset.

STL         BMTL        MTGP        MTRL        MT-IBP+SVM  MT-iLSVM    MT-IBP+SVMf  MT-iLSVMf
23.5 ± 1.9  29.5 ± 0.4  29.2 ± 1.6  29.9 ± 1.8  20.0 ± 2.9  30.9 ± 1.2  28.5 ± 1.6   31.7 ± 1.1

Table 4: Percentage of explained variance and running time by MT-iLSVM with various training sizes.

                        50%           60%           70%           80%           90%           100%
explained variance (%)  25.8 ± 0.4    27.3 ± 0.7    29.6 ± 0.4    30.0 ± 0.5    30.8 ± 0.4    30.9 ± 1.2
running time (s)        370.3 ± 32.5  455.9 ± 18.6  492.6 ± 33.2  600.1 ± 50.2  777.6 ± 73.4  918.9 ± 96.5

From the results in Table 3, we can see that the multi-task latent SVM (i.e., MT-iLSVM) achieves better results than the existing methods that have been tested in previous studies. Again, the joint MT-iLSVM performs much better than the decoupled method MT-IBP+SVM, which separates the latent feature inference from the training of the large-margin classifiers. Finally, using both the latent features and the original input features boosts the performance slightly for MT-iLSVM, and much more significantly for the decoupled MT-IBP+SVM.

[Figure 2: Sensitivity study of MT-iLSVM: (a) classification accuracy on Yeast against $\sqrt{\alpha}$; (b) classification accuracy on Yeast against $\sqrt{C}$; and (c) percentage of explained variance on School against $C$.]

4.3 Sensitivity Analysis

Figure 2 shows how the performance of MT-iLSVM changes against the hyper-parameter $\alpha$ and the regularization constant $C$ on the Yeast and School datasets. We can see that on the Yeast dataset, MT-iLSVM is insensitive to $\alpha$ and $C$. For the School dataset, MT-iLSVM is stable when $C$ is set between 0.3 and 1. MT-iLSVM is insensitive to $\alpha$ on the School data too, which is omitted to save space. Table 4 shows how the training size affects the performance and running time of MT-iLSVM on the School dataset. We use the first b% (b = 50, 60, 70, 80, 90, 100) of the training data in each of the 10 random splits as the training set and use the corresponding test data as the test set. We can see that as the training size increases, the performance and running time generally increase; MT-iLSVM achieves state-of-the-art performance when using about 70% of the training data. From the running time, we can also see that MT-iLSVM is generally quite efficient by using mean-field inference.
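For concreteness, the explained-variance measure used in Tables 3 and 4 can be computed directly from its definition above; a short sketch:

```python
import numpy as np

def explained_variance_pct(y_true, y_pred):
    """Total variance of the test outputs minus the sum-squared error,
    as a percentage of the total variance (the measure of Tables 3-4)."""
    total_var = np.sum((y_true - y_true.mean()) ** 2)
    sse = np.sum((y_true - y_pred) ** 2)
    return 100.0 * (total_var - sse) / total_var

y = np.array([10.0, 20.0, 30.0, 40.0])
print(explained_variance_pct(y, y + np.array([1.0, -2.0, 0.5, 1.5])))
```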
Finally, we investigate how the performance of MT-iLSVM changes against the hyperparameters $\lambda_{m0}^2$ and $\lambda_{mn}^2$. We initially set $\lambda_{m0}^2 = 1$ and compute $\lambda_{mn}^2$ from observed data. If we further estimate them by maximizing the objective function, the performance does not change much (±0.3% for the average explained variance on the School dataset). We have similar observations for iLSVM.

5 Conclusions and Future Work

We first present a general framework for doing regularized Bayesian inference subject to appropriate constraints, which are imposed directly on the posterior distributions. Then, we particularly concentrate on developing two nonparametric Bayesian models to learn predictive latent features for classification and multi-task learning, respectively, by exploring the large-margin principle to define posterior constraints. Both models allow the latent dimension to be automatically resolved from the data. The empirical results on several real datasets appear to demonstrate that our methods inherit the merits from both Bayesian nonparametrics and large-margin learning. Regularized Bayesian inference offers a general framework for considering posterior regularization in performing nonparametric Bayesian inference. For future work, we plan to study other posterior regularization beyond the large-margin constraints, such as posterior constraints defined on manifold structures [14], and to investigate how posterior regularization can be used in other interesting nonparametric Bayesian models [5, 26].

Acknowledgments

This work was done when JZ was a post-doc fellow at CMU. JZ is supported by the National Key Project for Basic Research of China (No. 2012CB316300) and the National Natural Science Foundation of China (No. 60805023). EX is supported by AFOSR FA95501010247, ONR N000140910758, NSF Career DBI-0546594 and an Alfred P. Sloan Research Fellowship.

References

[1] R. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR, (6):1817–1853, 2005.
[2] C.E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Annals of Statistics, (273):1152–1174, 1974.
[3] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. In NIPS, 2007.
[4] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. JMLR, (4):83–99, 2003.
[5] M.J. Beal, Z. Ghahramani, and C.E. Rasmussen. The infinite hidden Markov model. In NIPS, 2002.
[6] K. Bellare, G. Druck, and A. McCallum. Alternating projections for learning with expectation constraints. In UAI, 2009.
[7] E. Bonilla, K.M.A. Chai, and C. Williams. Multi-task Gaussian process prediction. In NIPS, 2008.
[8] N. Chen, J. Zhu, and E.P. Xing. Predictive subspace learning for multiview data: a large margin approach. In NIPS, 2010.
[9] F. Doshi-Velez, K. Miller, J. Van Gael, and Y.W. Teh. Variational inference for the Indian buffet process. In AISTATS, 2009.
[10] D. Dunson and S. Peddada. Bayesian nonparametric inferences on stochastic ordering. ISDS Discussion Paper, 2, 2007.
[11] K. Ganchev, J. Graca, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. JMLR, (11):2001–2094, 2010.
[12] T.L. Griffiths and Z. Ghahramani.
Infinite latent feature models and the Indian buffet process. In NIPS, 2006.
[13] D. Hoff. Bayesian methods for partial stochastic orderings. Biometrika, 90:303–317, 2003.
[14] S. Huh and S. Fienberg. Discriminative topic modeling based on manifold learning. In KDD, 2010.
[15] T. Jaakkola, M. Meila, and T. Jebara. Maximum entropy discrimination. In NIPS, 1999.
[16] T. Jebara. Multitask sparsity via maximum entropy discrimination. JMLR, (12):75–110, 2011.
[17] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.
[18] M.E. Khan, B. Marlin, G. Bouchard, and K. Murphy. Variational bounds for mixed-data factor analysis. In NIPS, 2010.
[19] P. Liang, M. Jordan, and D. Klein. Learning from measurements in exponential families. In ICML, 2009.
[20] S.N. MacEachern. Dependent nonparametric processes. In the Section on Bayesian Statistical Science of ASA, 1999.
[21] G. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning with weakly labeled data. JMLR, (11):955–984, 2010.
[22] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction. In NIPS, 2009.
[23] P. Rai and H. Daume III. Infinite predictor subspace models for multitask learning. In AISTATS, 2010.
[24] C.E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In NIPS, 2002.
[25] Y.W. Teh, D. Gorur, and Z. Ghahramani. Stick-breaking construction of the Indian buffet process. In AISTATS, 2007.
[26] Y.W. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. JASA, 101(476):1566–1581, 2006.
[27] M. Welling, M. Rosen-Zvi, and G. Hinton. Exponential family harmoniums with an application to information retrieval. In NIPS, 2004.
[28] Y. Xue, D. Dunson, and L. Carin. The matrix stick-breaking process for flexible multi-task learning. In ICML, 2007.
[29] A. Zellner. Optimal information processing and Bayes' theorem. American Statistician, 42:278–280, 1988.
[30] Y. Zhang and D.Y. Yeung. A convex formulation for learning task relationships in multi-task learning. In UAI, 2010.
[31] J. Zhu, A. Ahmed, and E.P. Xing. MedLDA: Maximum margin supervised topic models for regression and classification. In ICML, 2009.
[32] J. Zhu, N. Chen, and E.P. Xing. Infinite SVM: a Dirichlet process mixture of large-margin kernel machines. In ICML, 2011.
From Bandits to Experts: On the Value of Side-Observations

Ohad Shamir
Microsoft Research New England, USA
[email protected]

Shie Mannor
Department of Electrical Engineering, Technion, Israel
[email protected]

Abstract

We consider an adversarial online learning setting where a decision maker can choose an action in every stage of the game. In addition to observing the reward of the chosen action, the decision maker gets side observations on the reward he would have obtained had he chosen some of the other actions. The observation structure is encoded as a graph, where node i is linked to node j if sampling i provides information on the reward of j. This setting naturally interpolates between the well-known "experts" setting, where the decision maker can view all rewards, and the multi-armed bandits setting, where the decision maker can only view the reward of the chosen action. We develop practical algorithms with provable regret guarantees, which depend on non-trivial graph-theoretic properties of the information feedback structure. We also provide partially-matching lower bounds.

1 Introduction

One of the most basic learning settings studied in the online learning framework is learning from experts. In its simplest form, we assume that each round t, the learning algorithm must choose one of k possible actions, which can be interpreted as following the advice of one of k "experts".¹ At the end of the round, the performance of all actions, measured here in terms of some reward, is revealed. This process is iterated for T rounds, and our goal is to minimize the regret, namely the difference between the total reward of the single best action in hindsight and our own accumulated reward. We follow the standard online learning framework, in which nothing whatsoever can be assumed about the process generating the rewards, and they might even be chosen by an adversary who has full knowledge of our learning algorithm.

¹ The more general setup, which is beyond the scope of this paper, considers k experts providing advice for choosing among n actions, where in general n ≠ k [4].

A crucial assumption in this setting is that we get to see the rewards of all actions at the end of each round. However, in many real-world scenarios, this assumption is unrealistic. A canonical example is web advertising, where at any timepoint one may choose only a single ad (or a small number of ads) to display, and observe whether it was clicked, but not whether other ads would have been clicked had they been presented to the user. This partial-information constraint has led to a flourishing literature on multi-armed bandits problems, which model the setting where we can only observe the reward of the action we chose. While this setting has long been studied under stochastic assumptions, the landmark paper [4] showed that this setting can also be dealt with under adversarial conditions, making the setting comparable to the experts setting discussed above. The price in terms of the provable regret is usually an extra $\sqrt{k}$ multiplicative factor in the bound. The intuition for this factor has long been that in the bandit setting, we only get "1/k of the information" obtained in the expert setting (as we observe just a single reward rather than k). While the bandits setting received much theoretical interest, it has also been criticized for not capturing additional side-information we often have on the rewards of the different actions.
This has led to studying richer settings, which make various assumptions on the relationship between the rewards; see below for more details.

In this paper, we formalize and initiate a study of a range of settings that interpolates between the bandits setting and the experts setting. Intuitively, we assume that after choosing some action i, and obtaining the action's reward, we observe not just action i's reward (as in the bandit setting), and not the rewards of all actions (as in the experts setting), but rather some (possibly noisy) information on a subset of the other actions. This subset may depend on action i in an arbitrary way, and may change from round to round. This information feedback structure can be modeled as a sequence of directed graphs $G_1, \ldots, G_T$ (one per round t), so that an edge from action i to action j implies that by choosing action i, "sufficiently good" information is revealed on the reward of action j as well. The case of $G_t$ being the complete graph corresponds to the experts setting. The case of $G_t$ being the empty graph corresponds to the bandit setting. The broad scenario of arbitrary graphs in between the two is the focus of our study.

As a motivating example, consider the problem of web advertising mentioned earlier. In the standard multi-armed bandits setting, we assume that we have no information whatsoever on whether undisplayed ads would have been clicked on. However, in many cases, we do have some side-information. For instance, if two ads i, j are for similar vacation packages in Hawaii, and ad i was displayed and clicked on by some user, it is likely that the other ad j would have been clicked on as well. In contrast, if ad i is for running shoes, and ad j is for wheelchair accessories, then a user who clicked on one ad is unlikely to click on the other. This sort of side-information can be better captured in our setting.

As another motivating example, consider a sensor network where each sensor collects data from a certain geographic location. Each sensor covers an area that may overlap the area covered by other sensors. At every stage a centralized controller activates one of the sensors and receives input from it. The value of this input is modeled as the integral of some "information" in the covered area. Since the area covered by each of the sensors overlaps the area covered by other sensors, the reward obtained when choosing sensor i provides an indication of the reward that would have been obtained when sampling sensor j. A related example comes from ultra wideband communication networks, where every agent can select which channel to use for transmission. When using a channel, the agent senses if the transmission was successful, and also receives some indication of the noise level in other channels that are in adjacent frequency bands [2].

Our results portray an interesting picture, with the attainable regret depending on non-trivial properties of these graphs. We provide two practical algorithms with regret guarantees: the ExpBan algorithm that is based on a combination of existing methods, and the more fundamentally novel ELP algorithm that has superior guarantees. We also study lower bounds for our setting. In the case of undirected graphs, we show that the information-theoretically attainable regret is precisely characterized by the average independence number (or stability number) of the graph, namely the size of its largest independent set.
For the case of directed graphs, we obtain a weaker regret which depends on the average clique-partition number of the graphs. More specifically, our contributions are as follows:

- We formally define and initiate a study of the setting that interpolates between learning with expert advice (with $O(\sqrt{\log(k)\, T})$ regret), which assumes that all rewards are revealed, and the multi-armed bandits setting (with $\tilde{O}(\sqrt{kT})$ regret), which assumes that only the reward of the action selected is revealed. We provide an answer to a range of models in between.

- The framework we consider assumes that by choosing each action, other than just obtaining that action's reward, we can also observe some side-information about the rewards of other actions. We formalize this as a graph $G_t$ over the actions, where an edge between two actions means that by choosing one action, we can also get a "sufficiently good" estimate of the reward of the other action. We consider both the case where $G_t$ changes at each round t, as well as the case that $G_t = G$ is fixed throughout all rounds.

- We establish upper and lower bounds on the achievable regret, which depend on two combinatorial properties of $G_t$: its independence number $\alpha(G_t)$ (namely, the largest number of nodes without edges between them), and its clique-partition number $\bar{\chi}(G_t)$ (namely, the smallest number of cliques into which the nodes can be partitioned).

- We present two practical algorithms to deal with this setting. The first algorithm, called ExpBan, combines existing algorithms in a natural way, and applies only when $G_t = G$ is fixed at all T rounds. Ignoring computational constraints, the algorithm achieves a regret bound of $O(\sqrt{\bar{\chi}(G) \log(k)\, T})$. With computational constraints, its regret bound is $O(\sqrt{c \log(k)\, T})$, where c is the size of the minimal clique partition one can efficiently find for G. However, note that for general graphs, it is NP-hard to find a clique partition for which $c = O(k^{1-\varepsilon})$ for any $\varepsilon > 0$.

- The second algorithm, called ELP, is an improved algorithm, which can handle graphs which change between rounds. For undirected graphs, where sampling i gives an observation on j and vice versa, it achieves a regret bound of $O\big(\sqrt{\log(k) \sum_{t=1}^{T} \alpha(G_t)}\big)$. For directed graphs (where the observation structure is not symmetric), our regret bound is at most $O\big(\sqrt{\log(k) \sum_{t=1}^{T} \bar{\chi}(G_t)}\big)$. Moreover, the algorithm is computationally efficient. This is in contrast to the ExpBan algorithm, which in the worst case cannot efficiently achieve regret significantly better than $O(\sqrt{k \log(k)\, T})$.

- For the case of a fixed graph $G_t = G$, we present an information-theoretic $\Omega(\sqrt{\alpha(G)\, T})$ lower bound on the regret, which holds regardless of computational efficiency.

- We present some simple synthetic experiments, which demonstrate that the potential advantage of the ELP algorithm over other approaches is real, and not just an artifact of our analysis.

1.1 Related Work

The standard multi-armed bandits problem assumes no relationship between the actions. Quite a few papers have studied alternative models, where the actions are endowed with a richer structure. However, in the large majority of such papers, the feedback structure is the same as in the standard multi-armed bandits. Examples include [11], where the actions' rewards are assumed to be drawn from a statistical distribution, with correlations between the actions; and [1, 8], where the actions' rewards are assumed to satisfy some Lipschitz continuity property with respect to a distance measure between the actions.
In terms of other approaches, the combinatorial bandits framework [7] considers a setting slightly similar to ours, in that one chooses and observes the rewards of some subset of actions. However, it is crucially assumed that the reward obtained is the sum of the rewards of all actions in the subset. In other words, there is no separation between earning a reward and obtaining information on its value. Another relevant approach is partial monitoring, which is a very general framework for online learning under partial feedback. However, this generality comes at the price of tractability for all but specific cases, which do not include our model. Our work is also somewhat related to the contextual bandit problem (e.g., [9, 10]), where the standard multi-armed bandits setting is augmented with some side-information provided in each round, which can be used to determine which action to pick. While we also consider additional side-information, it is in a more specific sense. Moreover, our goal is still to compete against the best single action, rather than some set of policies which use this side-information.

2 Problem Setting

Let $[k] = \{1, \ldots, k\}$ and $[T] = \{1, \ldots, T\}$. We consider a set of actions $1, 2, \ldots, k$. Choosing an action i at round t results in receiving a reward $g_i(t)$, which we shall assume without loss of generality to be bounded in [0, 1]. Following the standard adversarial framework, we make no assumptions whatsoever about how the rewards are selected, and they might even be chosen by an adversary. We denote our choice of action at round t as $i_t$. Our goal is to minimize regret with respect to the best single action in hindsight, namely
\[
\max_{i} \sum_{t=1}^{T} g_i(t) - \sum_{t=1}^{T} g_{i_t}(t).
\]
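This regret is straightforward to compute once the reward matrix and the play sequence are known; a small sketch (with synthetic rewards, purely for illustration):

```python
import numpy as np

def regret(rewards, chosen):
    """max_i sum_t g_i(t) - sum_t g_{i_t}(t) for a play sequence.

    rewards: (T, k) matrix of g_i(t); chosen: length-T array of actions i_t.
    """
    best_fixed = rewards.sum(axis=0).max()
    earned = rewards[np.arange(len(chosen)), chosen].sum()
    return best_fixed - earned

rng = np.random.default_rng(0)
T, k = 1000, 4
g = rng.random((T, k))
print(regret(g, rng.integers(0, k, size=T)))   # uniform play: positive regret
```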
Intuitively, one can think of this setting as a sequence of graphs, one graph per round t, which captures the information feedback structure between the actions. Formally, we define Gt to be a graph on the k nodes 1, . . . , k, with an edge from node i to node j if and only if j ? Ni (t). In the case that j ? Ni (t) if and only if i ? Nj (t), for all i, j, we say that Gt is undirected. We will use this graph viewpoint extensively in the remainder of the paper. 3 The ExpBan Algorithm We begin by presenting the ExpBan algorithm (see Algorithm 1 above), which builds on existing algorithms to deal with our setting, in the special case where the graph structure remains fixed throughout the rounds - namely, Gt = G for all t. The idea of the algorithm is to split the actions into c cliques, such that choosing an action in a clique reveals unbiased estimates of the rewards of all the other actions in the clique. By running a standard experts algorithm (such as the exponentially weighted forecaster - see [6, Chapter 2]), we can get low regret with respect to any action in that clique. We then treat each such expert algorithm as a meta-action, and run a standard bandits algorithm (such as the EXP3 [4]) over these c meta-actions. We denote this algorithm as ExpBan, since it combines an experts algorithm with a bandit algorithm. The following result provides a bound on the expected regret of the algorithm. The proof appears in the appendix. Theorem 1. Suppose Gt = G is fixed for all T rounds. If we run ExpBan using the exponentially weighted forecaster and the EXP3 algorithm, then the expected regret is bounded as follows:2 " T # T X X p gj (t) ? E git (t) ? 4b c log(k)T . (1) t=1 t=1 For the optimal clique partition, we have c = ?(G), ? the clique-partition number of G. It is easily seen that ?(G) ? is a number between 1 and k. The case ?(G) ? = 1 corresponds to G being a clique, namely, that choosing any action allows us to estimate the rewards of all other actions. This pcorresponds to the standard experts setting, in which case the algorithm attains the optimal O( log(k)T ) regret. At the other extreme, ?(G) ? = k corresponds to G being the empty graph, 2 Using more sophisticated methods, it is now known that the log(k) factor can be removed (e.g., [3]). However, we will stick with this slightly less tight analysis for simplicity. 4 namely, that choosing any action only reveals the reward of that action. p This corresponds to the standard bandit setting, in which case the algorithm attains the standard O( log(k)kT ) regret. For general graphs, our algorithm interpolates between these regimes, in a way which depends on ?(G). ? While being simple and using off-the-shelf components, the ExpBan algorithm has some disadvantages. First of all, for a general graph G, it is N P -hard to find c ? O(k 1? ) for any  > 0. (This follows from [12] and the fact that the clique-partition number of G equals the chromatic number of its complement.) Thus, with computational constraints, one cannot hope to obtain a bound better ? ? kT ). That being said, we note that this is only a worst-case result, and in practice or for than O( specific classes of graphs, computing a good clique partition might be relatively easy. A second disadvantage of the algorithm is that it is not applicable for an observation structure that changes with time. 4 The ELP Algorithm We now turn to present the ELP algorithm (which stands for ?Exponentially-weighted algorithm with Linear Programming?). 
Like all multi-armed bandits algorithms, it is based on a tradeoff between exploration and exploitation. However, unlike standard algorithms, the exploration component is not uniform over the actions, but is chosen carefully to reflect the graph structure at each round. In fact, the optimal choice of the exploration requires us to solve a simple linear program, hence the name of the algorithm. Below, we present the pseudo-code as well as a couple of theorems that bound the expected regret of the algorithm under appropriate parameter choices. The proofs of the theorems appear in the appendix. The first theorem concerns the symmetric observation case, where if choosing action i gives information on action j, then choosing action j must also give information on i. The second theorem concerns the general case. We note that in both cases the graph Gt may change arbitrarily in time. Algorithm 2 The ELP Algorithm Input: ?, {?(t)}t?[T ] , {si (t)}i?[k],t?[T ] , neighborhood sets {Ni (t)}i?[k],t?[T ] . ? j ? [k] wj (1) := 1/k. for t = 1, . . . , T do ? i ? [k] pi (t) := (1 ? ?(t)) Pkwi (t) + ?(t)si (t) w (k) l=1 l Choose action it with probability pit (t), and receive reward git (t) Compute g?j (t) for all j ? Nit (t) g ? (t) For all j ? [k], let g?j (t) = P j pl (t) if it ? Nj (t), and g?j (t) = 0 otherwise. l?Nj (t) ? j ? [k] wj (t + 1) = wj (t) exp(?? gj (t)) end for 4.1 Undirected Graphs The following theorem provides a regret bound for the algorithm, as well as appropriate parameter choices, in the case of undirected graphs. Later on, we will discuss the case of directed graphs. In a nutshell, the theorem shows that the regret bound depends on the average independence number ?(Gt ) of each graph Gt - namely, the size of its largest independent set. Theorem 2. Suppose that for all t, Gt is an undirected graph. Suppose we run Algorithm 2 using some ? ? (0, 1/2bk), and choosing X {si (t)}i?[k] = argmax min sl (t), P ?i si (t)?0, i si (t)=1 j?[k] l?Nj (t) P (which can be easily done via linear programming) and ?(t) = ?b/ minj?[k] l?Nj (t) sl (t). Then it holds for any fixed action j that " T # T T X X X log(k) . (2) gj (t) ? E git (t) ? 3?b2 ?(Gt ) + ? t=1 t=1 t=1 5 If we choose ? = p log(k)/3b2 P t ?(Gt ), then the bound equals v u T u X bt3 log(k) ?(Gt ). (3) t=1 Comparing Thm. 2 with Thm. 1, we note that for any graph Gt , its independence number ?(Gt ) lower bounds its clique-partition number ?(G ? t ). In fact, the gap between them can be very large (see Sec. 6). Thus, the attainable regret using the ELP algorithm is better than the one attained by the ExpBan algorithm. Moreover, the ELP algorithm is able to deal with time-changing graphs, unlike the ExpBan algorithm. If we take worst-case computational efficiency into account, things are slightly more involved. For the ELP algorithm, the optimal value of ?, needed to obtain Eq. (3), requires knowledge of PT t=1 ?(Gt ), but computing or approximating the ?(Gt ) is NP-hard in the worst case. However, there is a simple fix: we create dlog(k)e copies of the ELP algorithm, where copy i assumes that PT i?1 . Note that one of these values must be wrong by a factor of at most 2, t=1 ?(Gt ) equals 2 so the regret of the algorithm using that value would be larger by a factor of at most 2. Of course, the problem is that we don?t know in advance which of those dlog(k)e copies is the best one. 
But this can be easily solved by treating each such copy as a ?meta-action?, and running a standard multi-armed bandits algorithm (such as EXP3) over these dlog(k)e actions. Note that the same idea was used in the construction of q the ExpBan algorithm. Since there are dlog(k)e meta-actions, the additional regret incurred is O( log2 (k)T ). So up to logarithmic factors in k, we get the same regret as if we could actually compute the optimal value of ?. 4.2 Directed Graphs So far, we assumed that the graphs we are dealing with are all undirected. However, a natural extension of this setting is to assume a directed graph, where choosing an action i may give us information on the reward of action j, but not vice-versa. It is readily seen that the ExpBan algorithm would still work in this setting, with the same guarantee. For the ELP algorithm, we can provide the following guarantee: Theorem 3. Under the conditions of Thm. 2 (with the relaxation that the graphs Gt may be directed), it holds for any fixed action j that " T # T T X X X log(k) gj (t) ? E git (t) ? 3?b2 ?(G ? t ), + . (4) ? t=1 t=1 t=1 where ?(G ? t ) is the clique-partition number of Gt . If we choose ? = the bound equals v u T u X bt3 log(k) ?(G ? ). t p log(k)/3b2 P t ?(G ? t ), then (5) t=1 Note that this bound is weaker than the one of Thm. 2, since ?(Gt ) ? ?(G ? t ) as discussed earlier. We do not know whether this bound (relying on the clique-partition number) is tight, but we conjecture that the independence number, which appears to be the key quantity in undirected graphs, is not the correct combinatorial measure for the case of directed graphs3 . In any case, we note that even with the weaker bound above, the ELP algorithm still seems superior to the ExpBan algorithm, in the sense that it allows us to deal with time-changing graphs, and that an explicit clique decomposition of the graph is not required. Also, we again have the issue of ? which is determined by a quantity which is NP-hard to compute, i.e. ?(G ? t ). However, this can be circumvented using the same trick discussed in the context of undirected graphs. 3 pIt is possible to construct examples where the analysis of the ELP algorithm necessarily leads to an O( k log(k)T ) bound, even when the independence number is 1 6 5 Lower Bound The following theorem provides a lower bound on the regret in terms of the independence number ?(G), for a constant graph Gt = G. Theorem 4. Suppose Gt = G for all t, and that actions which are not linked in G get no side-observations whatsoever between them. Then there exists a (randomized) adversary strategy, p such that for every T ? 374?(G)3 and any learning strategy, the expected regret is at least 0.06 ?(G)T . A proof is provided in the appendix. The intuition of the proof is that if the graph G has ?(G) independent vertices, then an adversary can make this problem as hard as a?standard multi-armed bandits problem, played on ?(G) actions. Using a known lower bound of ?( nT ) for multi-armed bandits on n actions, our result follows4 . For constant undirected graphs, this lower bound matches the regret upper bound for the ELP algorithm (Thm. 2) up to logarithmic factors. For directed graphs, the difference between them boils down to the difference between ?(G) ? and ?(G). For many well-behaved graphs, this gap is rather small. However, for general graphs, the difference can be huge - see the next section for details. 
6 Examples Here, we briefly discuss some concrete examples of graphs G, and show how the regret performance of our algorithms depend on their structure. An interesting issue to notice is the potential gap between the performance of our algorithms, through the graph?s independence number ?(G) and clique-partition number ?(G). ? First, consider the case where there exists a single action, such that choosing it reveals the rewards of all the other actions. In contrast, choosing the other actions only reveal their own reward. At first blush, it may seem that having such a ?super-action?, which reveals everything that happens in the current round, should help us improve our regret. However, the independence number ?(G) of such a graph is easily seen to be k ? 1. Based on our lower bound, we see that this ?super-action? is actually not helpful at all (up to negligible factors). Second, consider the case where the actions are endowed with some metric distance function, and edge (i, j) is in G if and only if the distance between i, j is at most some fixed constant r. We can think of each action i as being in the center of a sphere of radius r, such that the reward of action i is propagated to every other action in that sphere. In this case, ?(G) is essentially the number of nonoverlapping spheres we can pack in G. In contrast, ?(G) ? is essentially the number of spheres we need to cover G. Both numbers shrink rapidly as r increases, improving the regret of our algorithms. However, the sphere covering size can be much larger than the sphere packing size. For example, if the actions are placed as the elements in {0, 1/2, 1}n , we use the l? metric, and r ? (1/2, 1), it is easily seen that the sphere packing number is just 1. In contrast, the sphere covering number is at least 2n = k log3 (2) ? k 0.63 , since we need a separate sphere to cover every element in {0, 1}n . Third, consider the random Erd?os - R?enyi graph G = G(k, p), which is formed by linking every action i to every action j with probability p independently. It is well known that when p is a constant, the independence number ?(G) of this graph is only O(log(k)), whereas ? the clique-partition number ?(G) ? is at least ?(k/ log(k)). This translates to a regret bound of O( kT ) for the Expq Ban algorithm, and only O( directed random graph. 7 log2 (k)T ) for the ELP algorithm. Such a gap would also hold for a Empirical Performance Gap between ExpBan and ELP In this section, we show that the gap between the performance of the ExpBan algorithm and the ELP algorithm can be real, and is not just an artifact of our analysis. 4 We note that if the maximal degree of every node is bounded by d, it is possible to get the lower bound for T ? ?(d2 ?(G)) (as opposed to T ? ?(?(G)3 )); see the proof for details. 7 Average Payoff p = 0.05 p = 0.35 p = 0.65 p = 0.95 0.8 0.6 0.4 1 2 3 Iteration (?10 4 ) 1 2 3 Iteration (?10 4 ) EXP3 1 2 3 Iteration (?10 4 ) ExpBan 1 2 3 Iteration (?10 4 ) ELP Figure 1: Experiments on random graphs. To show this, we performed the following simple experiment: we created a random Erd?os - R?enyi graph over 300 nodes, where each pair of nodes were linked independently with probability p. Choosing any action results in observing the rewards of neighboring actions in the graph. The reward of each action at each round was chosen randomly and independently to be 1 with probability 1/2 and 0 with probability 1/2, except for a single node, whose reward equals 1 with a higher probability of 3/4. 
We then implemented the ExpBan and ELP algorithms in this setting, for T = 30,000. For comparison, we also implemented the standard EXP3 multi-armed bandits algorithm [4], which doesn't use any side-observations. All the parameters were set to their theoretically optimal values. The experiment was repeated for varying p and over 10 independent runs. The results are displayed in Figure 1. The X-axis is the iteration number, and the Y-axis is the mean payoff obtained so far, averaged over the 10 runs (the variance in the numbers was minuscule, and therefore we do not report confidence intervals). For p = 0.05, the graph is rather empty, and the advantage of using side observations is not large. As a result, all 3 algorithms perform roughly the same for this choice of T. As p increases, the value of side-observations increases, and the performance of our two algorithms, which utilize side-observations, improves over the standard multi-armed bandits algorithm. Moreover, for intermediate values of p, there is a noticeable gap between the performance of ExpBan and ELP. This is exactly the regime where the gap between the clique-partition number (governing the regret bound of ExpBan) and the independence number (governing the regret bound for the ELP algorithm) tends to be larger as well⁵. Finally, for large p, the graph is almost complete, and the advantage of ELP over ExpBan becomes small again (since most actions give information on most other actions).

8 Discussion

In this paper, we initiated a study of a large family of online learning problems with side observations. In particular, we studied the broad regime which interpolates between the experts setting and the bandits setting of online learning. We provided algorithms, as well as upper and lower bounds on the attainable regret, with a non-trivial dependence on the information feedback structure.

There are many open questions that warrant further study. First, the upper and lower bounds essentially match only in particular settings (i.e., in undirected graphs, where no side-observations whatsoever, other than those dictated by the graph, are allowed). Can this gap be narrowed or closed? Second, our lower bounds depend on a reduction which essentially assumes that the graph is constant over time. We do not have a lower bound for changing graphs. Third, it remains to be seen whether other online learning results can be generalized to our setting, such as learning with respect to policies (as in EXP4 [4]) and obtaining bounds which hold with high probability. Fourth, the model we have studied assumed that the observation structure is known. In many practical cases, the observation structure may be known just partially or approximately. Is it possible to devise algorithms for such cases?

Acknowledgements. This research was supported in part by the Google Inter-university center for Electronic Markets and Auctions.

⁵ Intuitively, this can be seen by considering the extreme cases: for a complete graph over k nodes, both numbers equal 1, and for an empty graph over k nodes, both numbers equal k. For constant p ∈ (0, 1), there is a real gap between the two, as discussed in Sec. 6.

References

[1] R. Agrawal. The continuum-armed bandit problem. SIAM J. Control and Optimization, 33:1926–1951, 1995.
[2] H. Arslan, Z. N. Chen, and M. G. Di Benedetto. Ultra Wideband Wireless Communication. Wiley-Interscience, 2006.
[3] J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, 2009.
[4] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.
[5] V. Baston. Some cyclic inequalities. Proceedings of the Edinburgh Mathematical Society (Series 2), 19:115–118, 1974.
[6] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[7] N. Cesa-Bianchi and G. Lugosi. Combinatorial bandits. In COLT, 2009.
[8] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In STOC, pages 681–690, 2008.
[9] J. Langford and T. Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In NIPS, 2007.
[10] L. Li, W. Chu, J. Langford, and R. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661–670. ACM, 2010.
[11] P. Rusmevichientong and J. Tsitsiklis. Linearly parameterized bandits. Math. Oper. Res., 35(2):395–411, 2010.
[12] D. Zuckerman. Linear degree extractors and the inapproximability of max clique and chromatic number. Theory of Computing, 3(1):103–128, 2007.
On Strategy Stitching in Large Extensive Form Multiplayer Games

Richard Gibson and Duane Szafron
Department of Computing Science, University of Alberta
Edmonton, Alberta, T6G 2E8, Canada
{rggibson | dszafron}@ualberta.ca

Abstract

Computing a good strategy in a large extensive form game often demands an extraordinary amount of computer memory, necessitating the use of abstraction to reduce the game size. Typically, strategies from abstract games perform better in the real game as the granularity of abstraction is increased. This paper investigates two techniques for stitching a base strategy in a coarse abstraction of the full game tree, to expert strategies in fine abstractions of smaller subtrees. We provide a general framework for creating static experts, an approach that generalizes some previous strategy stitching efforts. In addition, we show that static experts can create strong agents for both 2-player and 3-player Leduc and Limit Texas Hold'em poker, and that a specific class of static experts can be preferred among a number of alternatives. Furthermore, we describe a poker agent that used static experts and won the 3-player events of the 2010 Annual Computer Poker Competition.

1 Introduction

Many sequential decision-making problems are commonly modelled as an extensive form game. Extensive games are very versatile due to their ability to represent multiple agents, imperfect information, and stochastic events. For many real-world problems, however, the extensive form game representation is too large to be feasibly handled by current techniques. To address this limitation, strategies are often computed in abstract versions of the game that group similar states together into single abstract states. For very large games, these abstractions need to be quite coarse, leaving many different states indistinguishable. However, for smaller subtrees of the full game, strategies can be computed in much finer abstractions. Such "expert" strategies can then be pieced together, typically connecting to a "base strategy" computed in the full coarsely-abstracted game. A disadvantage of this approach is that we may make assumptions about the other agents' strategies. In addition, by computing the base strategy and the experts separately, we may lose "cohesion" among the different components.

We investigate stitched strategies in extensive form games, focusing on the trade-offs between the sizes of the abstractions versus the assumptions made and the cohesion among the computed strategies. We define two strategy stitching techniques: (i) static experts that are computed in very fine abstractions with varying degrees of assumptions and little cohesion, and (ii) dynamic experts that are contained in abstractions with lower granularity, but make fewer assumptions and have perfect cohesion. This paper generalizes previous strategy stitching efforts [1, 2, 11] under a more general static expert framework. We use poker as a testbed to demonstrate that, despite recent mixed results, static experts can create much stronger overall agents than the base strategy alone. Furthermore, we show that under a fixed memory limitation, a specific class of static experts are preferred to several alternatives. As a final validation of these results, we describe entries to the 2010 Annual Computer Poker Competition¹ (ACPC) that used static experts to win the 3-player events.
2 Background

An extensive form game [9] is a rooted directed tree, where nodes represent decision states, edges represent actions, and terminal nodes hold end-game utility values for players. For each player, the decision states are partitioned into information sets such that game states within an information set are indistinguishable to the player. Non-singleton information sets arise due to hidden information that is only available to a subset of the players, such as private cards in poker. More formally:

Definition 2.1 (Osborne and Rubinstein [9, p. 200]) A finite extensive game Γ with imperfect information has the following components:

• A finite set N of players.

• A finite set H of sequences, the possible histories of actions, such that the empty sequence is in H and every prefix of a sequence in H is also in H. Z ⊆ H are the terminal histories (those which are not a prefix of any other sequence). A(h) = {a | ha ∈ H} are the actions available after a nonterminal history h ∈ H.

• A function P that assigns to each nonterminal history h ∈ H\Z a member of N ∪ {C}. P is the player function. P(h) is the player who takes an action after the history h. If P(h) = C, then chance determines the action taken after history h. Define H_i := {h ∈ H | P(h) = i}.

• A function f_C that associates with every history h for which P(h) = C a probability measure f_C(·|h) on A(h) (f_C(a|h) is the probability that a occurs given h), where each such probability measure is independent of every other such measure.

• For each player i ∈ N, a partition I_i of H_i with the property that A(h) = A(h′) whenever h and h′ are in the same member of the partition. For I ∈ I_i, we denote by A(I) the set A(h) and by P(I) the player P(h) for any h ∈ I. I_i is the information partition of player i; a set I ∈ I_i is an information set of player i.

• For each player i ∈ N, a utility function u_i from the terminal histories Z to the real numbers ℝ. If N = {1, 2} and u_1 = −u_2, it is a 2-player zero-sum extensive game. Define Δ_{u,i} := max_z u_i(z) − min_z u_i(z) to be the range of the utilities for player i.

A strategy for player i, σ_i, is a function such that for each information set I ∈ I_i, σ_i(I) is a probability distribution over A(I). Let Σ_i be the set of all strategies for player i. For h ∈ I, we define σ_i(h) := σ_i(I). A strategy profile σ consists of a strategy σ_i for each player i ∈ N. We let σ_{−i} refer to all the strategies in σ except σ_i, and denote u_i(σ) to be the expected utility for player i given that all players play according to σ.

In a 2-player zero-sum game, a best response to a player 1 strategy σ_1 is a player 2 strategy σ_2^BR = argmax_{σ_2} u_2(σ_1, σ_2) (similarly for a player 2 strategy σ_2). The best response value of σ_1 is u_2(σ_1, σ_2^BR), which measures the exploitability of σ_1. The exploitability of a strategy tells us how much that strategy loses to a worst-case opponent. Outside of 2-player zero-sum games, the worst-case scenario for player i would be for all other players to minimize player i's utility instead of maximizing their own. In large games, this value is difficult to compute since opponents cannot share private information. Thus, we only investigate exploitability for 2-player zero-sum games.

Counterfactual regret minimization (CFR) [14] is an iterative procedure for computing strategy profiles in extensive form games. In 2-player zero-sum games, CFR produces an approximate Nash equilibrium profile.
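As a concrete illustration of the kind of update CFR performs, its per-information-set rule, regret matching, is sketched below. This is a minimal illustration under our own naming, not the implementation used in this paper; CFR wraps this update in a game-tree traversal that accumulates counterfactual regrets at every information set.

```python
import numpy as np

def regret_matching(cumulative_regret):
    """Map a vector of cumulative regrets (one per action in A(I)) to a
    strategy sigma_i(I): play in proportion to positive regret."""
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    # No positive regret yet: fall back to uniform play over A(I).
    return np.full(len(cumulative_regret), 1.0 / len(cumulative_regret))
```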
In addition, CFR strategies have also been found to compete very well in games with more than 2 players [1]. CFR's memory requirements are proportional to the number of information sets in the game times the number of actions available at an information set.

The extensive form game representation of many real-world problems is too large to feasibly compute a strategy directly. A common approach in these games is to first create an abstract game by combining information sets into single abstract states or by disallowing certain actions:

Definition 2.2 (Waugh et al. [12]) An abstraction for player i is a pair α_i = ⟨α_i^I, α_i^A⟩, where

• α_i^I is a partition of H_i defining a set of abstract information sets coarser than I_i (i.e., every I ∈ I_i is a subset of some set in α_i^I), and

• α_i^A is a function on histories where α_i^A(h) ⊆ A(h) and α_i^A(h) = α_i^A(h′) for all histories h and h′ in the same abstract information set. We will call this the abstract action set.

The null abstraction for player i is φ_i = ⟨I_i, A⟩. An abstraction α is a set of abstractions α_i, one for each player. Finally, for any abstraction α, the abstract game, Γ^α, is the extensive game obtained from Γ by replacing I_i with α_i^I and A(h) with α_i^A(h) when P(h) = i, for all i ∈ N.

¹ http://www.computerpokercompetition.org

[Figure 1: (a) An abstraction of an extensive game, where states connected by a bold curve are in the same information set and thin curves denote merged abstract information sets. In the unabstracted game, player 1 cannot distinguish between whether chance generated b or c and player 2 cannot distinguish between a and b. In the abstract game, neither player can distinguish between any of chance's outcomes. (b) An example of a game Γ′ derived from the unabstracted game Γ in (a) for a dynamic expert strategy. Here, the abstraction from (a) is used as the base abstraction, and the null abstraction is employed on the subtree with G_{1,1} = ∅ and G_{2,1} = {al, bl, cl} (bold states).]

Figure 1a shows an example of an abstracted extensive form game with no action abstraction. By reducing the number of information sets, computing strategies in an abstract game with an algorithm such as CFR requires less memory than computing strategies in the real game. Intuitively, if a strategy profile σ for the abstract game performs well in Γ^α, and if α_i^I is defined such that merged information sets are "strategically similar," then σ is also likely to perform well in Γ. Identifying strategically similar information sets can be delicate though and typically becomes a domain-specific task. Nevertheless, we often would like to have as much granularity in our abstraction as will fit in memory to allow computed strategies to be as diverse as necessary.

3 Strategy Stitching

To achieve abstractions with finer granularity, a natural approach is to break the game up into subtrees, abstract each of the subtrees, and compute a strategy for each abstract subtree independently. We introduce a formalism for doing so that generalizes Waugh et al.'s strategy grafting [11] and two poker-specific methods described in Section 5. First, select a subset S ⊆ N of players. Secondly, for each i ∈ S, compute a base strategy σ_i for playing the full game. Next, divide the game into subtrees:

Definition 3.1 (Waugh et al. [11]) G_i = {G_{i,0}, G_{i,1}, ..., G_{i,p}} is a grafting partition for player i if
• G_i is a partition of H_i (possibly containing empty parts),

• ∀I ∈ I_i, ∃j ∈ {0, 1, ..., p} such that I ⊆ G_{i,j}, and

• ∀j ∈ {1, 2, ..., p}, h ∈ G_{i,j}, and h′ ∈ H_i, if h is a prefix of h′, then h′ ∈ G_{i,j} ∪ G_{i,0}.

For each i ∈ S, choose a grafting partition G_i so that each partition has an equal number of parts p. Then, compute a strategy, or static expert, for each subtree using any strategy computation technique, such as CFR. Finally, since the subtrees are disjoint, create a static expert strategy by combining the static experts without any overlap to the base strategy in the undivided game:

[Figure 2: Two examples of a game Γ^j for a static expert derived from the unabstracted game Γ in Figure 1a. In both (a) and (b), G_{2,j} = {al, bl, cl} (bold states). If player 1 takes action r, player 2 no longer controls his or her decisions. Player 2's actions are instead generated by the base strategy σ_2, computed beforehand. In (a), we have S = {2}. On the other hand, in (b), S = N = {1, 2}, G_{1,j} = ∅, and hence all of player 1's actions are seeded by the base strategy σ_1.]

Definition 3.2 Let S ⊆ N be a nonempty subset of players. For each i ∈ S, let σ_i be a strategy for player i and G_i = {G_{i,0}, G_{i,1}, ..., G_{i,p}} be a grafting partition for player i. For j ∈ {1, 2, ..., p}, define Γ^j to be an extensive game derived from the original game Γ where, for all i ∈ S and h ∈ H_i \ G_{i,j}, we set P(h) = C and f_C(a|h) = σ_i(h, a). That is, each player i ∈ S only controls actions for histories in G_{i,j} and is forced to play according to σ_i elsewhere. Let the static expert of {G_{i,j} | i ∈ S}, σ^j, be a strategy profile of the game Γ^j. Finally, define the static expert strategy for player i, σ_i^S, as

$$\sigma_i^S(h, a) := \begin{cases} \sigma_i(h, a) & \text{if } h \in G_{i,0} \\ \sigma_i^j(h, a) & \text{if } h \in G_{i,j}. \end{cases}$$

We call {σ_i | i ∈ S} the base or seeding strategies and {G_i | i ∈ S} the grafting profile for the static expert strategy σ_i^S.

Figure 2 shows two examples of a game Γ^j for a single static expert. This may be the only subtree for which a static expert is computed (p = 1), or there could be more subtrees contained in the grafting partition(s) (p > 1). Under a fixed memory limitation, we can employ finer abstractions for the subtrees Γ^j than we can in the full game Γ. This is because Γ^j removes some of the information sets belonging to players in S, freeing up memory for computing strategies on the subtrees. When |S| = 1, the static expert approach is identical to strategy grafting [11, Definition 8], with the exception that each static expert need not be an approximate Nash equilibrium. We relax the definition for static experts because Nash equilibria are difficult to compute in multiplayer games, and may not be the best solution concept outside of 2-player zero-sum games anyways. Choosing |S| > 1, however, is dangerous because we fix opponent probabilities and assume that our opponents are "static" at certain locations. For example, in Figure 2b, it may not be wise for player 2 to assume that player 1 must follow σ_1. Doing so can dramatically skew player 2's beliefs about the action generated by chance and hurt the expert's performance against opponents that do not follow σ_1. As we will see in Section 6, having more static experts with |S| > 1 can result in a more exploitable static expert strategy. On the other hand, by removing information sets for multiple players, the static expert approach creates smaller subtrees than strategy grafting does. As a result, we can employ even finer abstractions within the subtrees.
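In code, the combination step of Definition 3.2 is just a dispatch on which part of the grafting partition contains the current history. The sketch below is our own minimal illustration; the helper partition_of and the callable strategies are assumptions, not names defined by the paper.

```python
def static_expert_strategy(base, experts, partition_of):
    """Assemble sigma_i^S from Definition 3.2.

    base(h)        -> action distribution of the seeding strategy sigma_i
    experts[j](h)  -> action distribution of the static expert sigma_i^j
    partition_of(h)-> index j of the part G_{i,j} containing history h
    """
    def sigma_S(h):
        j = partition_of(h)
        # Base strategy on G_{i,0}, the j-th expert on G_{i,j} (j >= 1).
        return base(h) if j == 0 else experts[j](h)
    return sigma_S
```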
Section 6 shows that despite the risks, the abstraction gains often lead to static experts with S = N being preferred.

Regardless of the choice of S, the base strategy lacks "cohesion" with the static experts since its computation is based on its own play at the subtrees rather than the experts' play. Though the experts are identically seeded, the base strategy may want to play towards the expert subtrees more often to increase utility. This observation motivates our introduction of dynamic experts that are computed concurrently with a base. The full extensive game is divided into subtrees and each subtree is supplied its own abstraction:

Definition 3.3 Let α^0, α^1, ..., α^p be abstractions for the game Γ and for each i ∈ N, let G_i = {G_{i,0}, G_{i,1}, ..., G_{i,p}} be a grafting partition for player i satisfying I ∩ G_{i,j} ∈ {∅, I} for all j ∈ {0, ..., p} and I ∈ α_i^{j,I}. Thus, each abstract information set is contained entirely in some part of the grafting partition. Let Γ^α be the abstract game obtained from Γ by replacing I_i with ⋃_{j=0}^{p} {I ∈ α_i^{j,I} | I ⊆ G_{i,j}} and A(h) with α_i^{j,A}(h) when P(h) = i and h ∈ G_{i,j}, for all i ∈ N. Let the dynamic expert strategy for player i, σ_i^α, be a strategy for player i of the game Γ^α. Finally, define the dynamic expert of G_{i,j}, σ_i^j, to be σ_i^α restricted to the histories in G_{i,j}, σ_i^α|_{G_{i,j}}. The abstraction α^0 is denoted as the base abstraction and the dynamic expert σ_i^0 is denoted as the base strategy.

Figure 1b contains an abstract game tree Γ^α for a dynamic expert strategy. We can view a dynamic expert strategy as a strategy computed in an abstraction with differing granularity dependent on the history of actions taken. Note that our definition is somewhat redundant to the definition of abstraction as we are simply defining a new abstraction for Γ based on the abstractions α^0, α^1, ..., α^p. Nonetheless, we supply Definition 3.3 to provide the terms in bold that we will use throughout.

Under memory constraints, a dynamic expert strategy typically sacrifices abstraction granularity in the base strategy to achieve finer granularity in the experts. We hope doing so achieves better performance at parts of the game that we believe may be more important. For instance, importance could depend on the predicted relative frequencies of reaching different subtrees. The base strategy's abstraction is reduced to guarantee perfect cohesion between the base and the experts; the base strategy knows about the experts and can calculate its probabilities "dynamically" during strategy computation based on the feedback from the experts. In Section 6, we contrast static and dynamic experts to compare this trade-off between abstraction size and strategy cohesion.

4 Texas and Leduc Hold'em

A hand of Texas Hold'em poker (or simply Hold'em) begins with each player being dealt two private cards, and two players posting mandatory bets or blinds. There are four betting rounds, the pre-flop, flop, turn, and river where five community cards are successively revealed. Of the players that did not fold, the player with the highest ranked poker hand wins all of the bets. Full rules can be found on-line.² We focus on the Limit Hold'em variant that fixes the bet sizes and the number of bets allowed per round. We denote the players' actions as f (fold), c (check or call), and r (bet or raise).
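To make this notation concrete: a history's betting string is its sequence of player actions with chance's card deals removed, and membership in the subtree named by a betting string (formalized as G_i(b) in Section 6) reduces to a prefix check. This is a toy sketch of ours, not the authors' code; the (actor, action) pair encoding is an assumption.

```python
def betting_sequence(history):
    """b(h): the player actions in h, with chance's actions (actor "C",
    i.e. card deals) removed."""
    return "".join(action for actor, action in history if actor != "C")

def in_expert_subtree(history, b):
    """True iff the history lies in G_i(b), i.e. b is a prefix of b(h)."""
    return betting_sequence(history).startswith(b)

# e.g. [("C", "Qs"), ("1", "c"), ("2", "r")] has b(h) = "cr", so this
# history belongs to the cr-expert subtree but not to an r-expert's.
```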
Leduc Hold'em [10] (or simply Leduc) is a smaller version of Hold'em, played with a six card deck consisting of two Jacks, two Queens, and two Kings with only two betting rounds, pre-flop and flop. Rather than using blinds, antes are posted by all players at the beginning of a hand. Only one private card is dealt to each player and one community card is dealt on the flop. While Leduc is small enough to bypass abstraction, Hold'em is a massive game in terms of the number of information sets; 2-player Limit Hold'em has approximately 3 × 10¹⁴ information sets, and 3-player has roughly 5 × 10¹⁷. Applying CFR to these enormous state spaces necessitates abstraction.

A common abstraction technique in poker is to group many different card dealings into single abstract states or buckets. This is commonly done by ordering all possible poker hands for a specific betting round according to some metric, such as expected hand strength (E[HS]) or expected hand strength squared (E[HS²]), and then grouping hands with similar metric values into the same bucket [7]. Percentile bucketing with N buckets and M hands puts the top M/N hands into 1 bucket, the next best M/N into a second bucket, etc., so that the buckets are approximately equal in size. More advanced bucketing schemes that use multiple metrics and clustering techniques are possible, but our experiments use simple percentile bucketing with no action abstraction.

5 Related Work

Our general framework for applying static experts to any extensive form game captures some previous poker-specific strategy stitching approaches. First, the PsOpti family of agents [2], which play 2-player Limit Hold'em, contain a base strategy called the "pre-flop model" and 7 static experts with S = N, or "post-flop models." Due to resource and technology limitations, the abstractions used to build the pre-flop and post-flop models were quite coarse, making the family no match for today's top agents. Secondly, Abou Risk and Szafron [1] attach 6 static experts with S = N (which they call "heads-up experts") to a base strategy for playing 3-player Limit Hold'em. Each expert focuses on a subtree immediately following a fold action, allowing much finer abstractions for these 2-player scenarios. However, their results were mixed as the stitched strategy was not always better than the base strategy alone. Nonetheless, our positive results for static experts with S = N in Section 6 provide evidence that the PsOpti approach and heads-up experts are indeed credible.

In addition, Gilpin and Sandholm [5] create a poker agent for 2-player Limit Hold'em that uses a 2-phase strategy different from the approaches discussed thus far. The first phase is used to play the pre-flop and flop rounds, and is computed similarly to the PsOpti pre-flop model. For the turn and river rounds, a second phase strategy is computed on-line. One drawback of this approach is that the on-line computations must be quick enough to play in real time. Despite fixing the flop cards, this constraint forced the authors to still employ a very coarse abstraction during the second phase.

Furthermore, there have been a few other related approaches to creating poker agents. While 2-player poker is well studied, Ganzfried and Sandholm [3, 4] developed algorithms for computing Nash equilibria in multiplayer games and applied them to a small 3-player jam/fold poker game.

² http://en.wikipedia.org/wiki/Texas_hold_'em
Additionally, Gilpin et al. [6] use an automated abstraction building tool to dynamically bucket hands in 2-player Limit Hold'em. Here, we are not concerned with equilibrium properties or the abstraction building process itself. In fact, strategy stitching is orthogonal to both strategy computation and abstraction improvements, and could be used in conjunction with more sophisticated techniques.

6 Empirical Evaluation

In this section, we create several stitched strategies in both Leduc and Hold'em using the chance-sampled variant of CFR [14]. CFR is state of the art in terms of memory efficiency for strategy computation, allowing us to employ abstractions with higher granularity than otherwise possible. Results may differ with other techniques for computing strategies and building abstractions. While CFR requires iterations quadratic in the number of information sets to converge [14, Theorem 4], we restrict our resources only in terms of memory. Even though Leduc is small enough to not necessitate strategy stitching, the Leduc experiments were conducted to evaluate our hypothesis that static experts with S = N can improve play. We ran many experiments and for brevity, only a representative sample of the results is summarized.

To be consistent with post-flop models [2] and heads-up experts [1], our grafting profiles are defined only in terms of the players' actions. For each history h ∈ H, define b := b(h) to be the subsequence of h obtained by removing all actions generated by chance. We refer to a b-expert for player i as an expert constructed for the subtree G_i(b) := {h ∈ H_i | b is a prefix of b(h)} containing all histories where the players initially follow b. For example, the experts for the games in Figures 1b, 2a, and 2b are l-experts because the game is split after player 1 takes action l.

Leduc. Our Leduc experiments use three different base abstractions, one of which is simply the null abstraction. The second and third abstractions are the "JQ-K" and "J-QK" abstractions that, on the pre-flop, cannot distinguish between whether the private card is a Jack or Queen, or whether the private card is a Queen or King respectively. In addition, these two abstractions can only distinguish between whether the flop card pairs with the private card or not rather than knowing the identity of the flop card. Because Leduc is such a small game, we do not consider a fixed memory restriction and instead just compare the techniques within the same base abstraction. For both 2-player and 3-player, for each of the three base abstractions, and for each player i, we build a base strategy, a dynamic expert strategy, an S = {i} static expert strategy, and two S = N static expert strategies. Recall choosing S = {i} means that during computation of each static expert, we only fix player i's action probabilities outside of the expert subtree, whereas S = N means that we fix all players outside of the subtree. For 2-player Leduc, we use r, cr, ccr, and cccr-experts for both players. Thus, the base strategy plays until the first raise occurs, at which point an expert takes over for the remainder of the hand. As an exception, only one of our two S = N static expert strategies, named "All," uses all four experts; the other, named "Pre-flop," just uses the r and cr-experts. For 3-player Leduc, we use r, cr, ccr, cccr, ccccr, and cccccr-experts, except the "Pre-flop" static strategies use just the three experts r, cr, and ccr. The null abstraction is employed on every expert subtree.
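As an aside, the percentile bucketing described in Section 4 is straightforward to sketch. The following is our own illustration; the metric values (e.g. E[HS²] scores) are assumed to be precomputed, and this is not the abstraction code used for the experiments.

```python
import numpy as np

def percentile_buckets(metric_values, num_buckets):
    """Assign each of M hands to one of `num_buckets` roughly equal-sized
    buckets, ordered by metric value (bucket 0 holds the weakest hands)."""
    order = np.argsort(metric_values)  # hand indices, weakest to strongest
    buckets = np.empty(len(metric_values), dtype=int)
    for rank, idx in enumerate(order):
        buckets[idx] = rank * num_buckets // len(metric_values)
    return buckets

# e.g. five hands into two buckets:
# percentile_buckets([0.2, 0.9, 0.5, 0.7, 0.1], 2) -> [0, 1, 0, 1, 0]
```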
Table 1: The size, earnings, and exploitability of the 2-player (2p) Leduc strategies in the JQ-K base abstraction, and the size and earnings of the 3-player (3p) strategies in the J-QK base abstraction. The sizes are measured in terms of the maximum number of information sets present within a single CFR computation. Earnings, as described in the text, and exploitability are in milli-antes per hand.

Strategy (2p)          Size    Earns.   Exploit.
Base                    132    24.73    496.31
Dynamic                 444    45.75    159.84
Static.S={i}            226    28.87    167.61
Static.S=N.All          186    29.20    432.74
Static.S=N.Pre-flop     186    37.77    214.44

Strategy (3p)          Size    Earns.
Base                   1890    -68.46
Dynamic                6903    113.04
Static.S={i}           3017     96.14
Static.S=N.All         2145    117.01
Static.S=N.Pre-flop    2145    119.73

Each run of CFR is stopped after 100 million iterations, which for 2-player yields strategies within a milli-ante of equilibrium in the abstract game. Each strategy is evaluated against all combinations and orderings of opponent strategies where all strategies use different base abstractions, and the scores are averaged together. For example, for each of our 2-player strategy profiles σ in the JQ-K base abstraction, we compute (1/2)(u_1(σ_1, σ′_2) + u_2(σ′_1, σ_2)), averaged over all profiles σ′ that use either the null or J-QK base abstraction. Leduc is a small enough game that the utilities can be computed exactly. A selection of these scores, along with 2-player exploitability values, is reported in Table 1.

Firstly, by increasing abstraction granularity, all of the JQ-K strategies employing experts earn more than the base strategy alone. Secondly, Dynamic and Static.S=N earn more overall than Static.S={i}, despite the 2-player Static.S=N being more exploitable due to the opponent action assumptions. In fact, despite requiring much less memory to compute, Static.S=N surprisingly earns more than Dynamic in 3-player Leduc. Finally, we see that only using two pre-flop static experts as opposed to all four reduces the number of dangerous assumptions to provide a stronger and less exploitable strategy. However, as expected, Dynamic and Static.S={i} are less exploitable.

Hold'em. Our Hold'em experiments enforce a fixed memory restriction per run of CFR, which we artificially set to 24 million information sets for 2-player and 162 million information sets for 3-player. We compute stitched strategies of each type using as many percentile E[HS²] buckets as possible within the restriction. Our 2-player abstractions distribute buckets as close to uniformly as possible across the betting rounds while remembering buckets from previous rounds (known as "perfect recall"). Our 3-player abstractions are similar, except they use 169 pre-flop buckets that are forgotten on later rounds (known as "imperfect recall"; see [1] and [13] for more regarding CFR and imperfect recall). For 2-player, our dynamic strategy has just an r-expert, our S = {i} static strategy uses r, cr, ccr, and cccr-experts, and our S = N static strategy employs r and cr-experts. These choices were based on preliminary experiments to make the most effective use of the limited memory available for each stitching approach. Following Abou Risk and Szafron [1], our 3-player stitched strategies all have f, rf, rrf, and rcf-experts as these appear to be the most commonly reached 2-player scenarios [1, Table 4]. Our abstractions range quite dramatically in terms of number of buckets.
For example, in 3-player, our dynamic strategy's base abstraction has just 8 river buckets with 7290 river buckets for each expert, whereas our static strategies have 16 river buckets in the base abstraction with up to 194,481 river buckets for the S = N static rcf-expert abstraction. For reference, all of the 2-player base and experts are built from 720 million iterations of CFR, while we run CFR for 100 million and 5 billion iterations for the 3-player base and experts respectively.

We evaluate our 2-player strategies by playing 500,000 duplicate hands (players play both sides of the dealt cards) of poker between each pair of strategies. In addition to our base and stitched strategies, we also included a base strategy called "Base.797M" in an abstraction with over 797 million information sets that we expected to beat all of the strategies we were evaluating. Furthermore, using a specialized best response tool [8], we computed the exploitability of our 2-player strategies. For 3-player, we play 500,000 triplicate hands (each set of dealt cards played 6 times, one for each of the player seatings) between each combination of 3 strategies. We also included two other strategies: "ACPC-09," the 2009 ACPC 3-player event winner that did not use experts (Abou Risk and Szafron [1] call it "IR16"), and "ACPC-10," a static expert strategy that won a 3-player event at the 2010 ACPC and is outlined at the end of this section. The results are provided in Table 2.

Table 2: Earnings and 95% confidence intervals over 500,000 duplicate hands of 2-player Hold'em per pairing, and over 500,000 triplicate hands of 3-player Hold'em per combination. The exploitability of the 2-player strategies is also provided. All values are in milli-big-blinds per hand.

Strategy (2p)    Earnings          Exploitability
Base             -10.47 ± 1.99     310.04
Dynamic           -4.43 ± 1.98     307.76
Static.S={i}     -13.13 ± 2.00     301.00
Static.S=N        -4.57 ± 1.95     288.82
Base.797M         32.59 ± 2.14     135.43

Strategy (3p)    Earnings
Base              -6.09 ± 0.71
Dynamic           -4.91 ± 0.75
Static.S={i}      -5.20 ± 0.70
Static.S=N         3.06 ± 0.70
ACPC-09          -14.15 ± 0.89
ACPC-10           27.29 ± 0.86

Firstly, in 2-player, we see that Static.S=N and Dynamic outperform Static.S={i} considerably, agreeing with the previous Leduc results. In fact, Static.S={i} fails to even improve upon the base strategy. For 3-player, Static.S=N is noticeably ahead of both Dynamic and Static.S={i} as it is the only strategy, aside from ACPC-10, to win money. By forcing one player to fold, the static experts with S = N essentially reduce the size of the game tree from a 3-player to a 2-player game, allowing many more buckets to be used. This result indicates that at least for poker, the gains in abstraction bucketing outweigh the risks of forced action assumptions and lack of cohesion between the base strategy and the experts. Furthermore, Static.S=N is slightly less exploitable in 2-player than the base strategy and the other two stitched strategies. While there are one and two opponent static actions assumed by the r and cr-experts respectively, trading these few assumptions for an increase in abstraction granularity is beneficial.

In summary, static experts with S = N are preferred to both dynamic and static experts with S = {i} in the experiments we ran. An additional validation of the quality of the static expert approach was provided by the 2010 ACPC. The winning entries in both 3-player events employed static experts with S = N.
The base strategy, computed from 70 million iterations of CFR, used 169, 900, 100, and 25 buckets on each of the respective rounds. Four experts were used, f, rf, rrf, and rcf, computed from 10 billion iterations of CFR, each containing 169, 60,000, 180,000, and 26,160 buckets on the respective rounds. In addition, clustering techniques on strength distribution were used instead of percentile bucketing. Two strategies were created, where one was trained to play slightly more aggressively for the total bankroll event. Each version finished in first place in its respective competition.

7 Conclusions

We discussed two strategy stitching techniques for extensive games, including static experts that generalize strategy grafting and some previous techniques used in poker. Despite the accompanying potential dangers and lack of cohesion, we have shown static experts with S = N outperform the dynamic and static experts with S = {i} that we considered, especially when memory limitations are present. However, additional static experts with several forced actions can lead to a more exploitable strategy. Static experts with S = N are currently our preferred method for creating multiplayer poker strategies and would be our first option for playing other large extensive games.

Future work includes finding a way to create more cohesion between the base strategy and static experts. One possibility is to rebuild the base strategy after the experts have been created so that the base strategy's play is more unified with the experts. In addition, we have yet to experiment with 3-player "hybrid" static experts where |S| = 2. Finally, there are many ways to combine the stitching techniques described in this paper. One possibility is to use a dynamic expert strategy as a base strategy of a static expert strategy. In addition, static experts could themselves be dynamic expert strategies for the appropriate subtrees. Such combinations may produce even stronger strategies than those produced in this paper.

Acknowledgments

We would like to thank Westgrid and Compute Canada for their computing resources that were used during this work. We would also like to thank the members of the Computer Poker Research Group at the University of Alberta for their helpful pointers throughout this project. This research was funded by NSERC and Alberta Ingenuity, now part of Alberta Innovates - Technology Futures.

References

[1] N. Abou Risk and D. Szafron. Using counterfactual regret minimization to create competitive multiplayer poker agents. In AAMAS, pages 159–166, 2010.
[2] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer, T. Schauenberg, and D. Szafron. Approximating game-theoretic optimal strategies for full-scale poker. In IJCAI, pages 661–668, 2003.
[3] S. Ganzfried and T. Sandholm. Computing an approximate jam/fold equilibrium for 3-agent no-limit Texas Hold'em tournaments. In AAMAS, 2008.
[4] S. Ganzfried and T. Sandholm. Computing equilibria in multiplayer stochastic games of imperfect information. In IJCAI, 2009.
[5] A. Gilpin and T. Sandholm. Better automated abstraction techniques for imperfect information games, with application to Texas Hold'em poker. In AAMAS, 2007.
[6] A. Gilpin, T. Sandholm, and T. B. Sørensen. Potential-aware automated abstraction of sequential games, and holistic equilibrium analysis of Texas Hold'em poker. In AAAI, 2007.
[7] M. Johanson. Robust strategies and counter-strategies: Building a champion level computer poker player. Master's thesis, University of Alberta, 2007.
[8] M. Johanson, K. Waugh, M. Bowling, and M. Zinkevich. Accelerating best response calculation in large extensive games. In IJCAI, 2011. To appear.
[9] M. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, Cambridge, Massachusetts, 1994.
[10] F. Southey, M. Bowling, B. Larson, C. Piccione, N. Burch, D. Billings, and C. Rayner. Bayes' bluff: Opponent modelling in poker. In UAI, pages 550–558, 2005.
[11] K. Waugh, M. Bowling, and N. Bard. Strategy grafting in extensive games. In NIPS-22, pages 2026–2034, 2009.
[12] K. Waugh, D. Schnizlein, M. Bowling, and D. Szafron. Abstraction pathologies in extensive games. In SARA, pages 781–788, 2009.
[13] K. Waugh, M. Zinkevich, M. Johanson, M. Kan, D. Schnizlein, and M. Bowling. A practical use of imperfect recall. In SARA, pages 175–182, 2009.
[14] M. Zinkevich, M. Johanson, M. Bowling, and C. Piccione. Regret minimization in games with incomplete information. In NIPS-20, pages 905–912, 2008.
Facial Expression Transfer with Input-Output Temporal Restricted Boltzmann Machines

Matthew D. Zeiler¹, Graham W. Taylor¹, Leonid Sigal², Iain Matthews², and Rob Fergus¹
¹ Department of Computer Science, New York University, New York, NY 10012
² Disney Research, Pittsburgh, PA 15213

Abstract

We present a type of Temporal Restricted Boltzmann Machine that defines a probability distribution over an output sequence conditional on an input sequence. It shares the desirable properties of RBMs: efficient exact inference, an exponentially more expressive latent state than HMMs, and the ability to model nonlinear structure and dynamics. We apply our model to a challenging real-world graphics problem: facial expression transfer. Our results demonstrate improved performance over several baselines modeling high-dimensional 2D and 3D data.

1 Introduction

Modeling temporal dependence is an important consideration in many learning problems. One can capture temporal structure either explicitly in the model architecture, or implicitly through latent variables which can act as a "memory". Feedforward neural networks which incorporate fixed delays into their architecture are an example of the former. A limitation of these models is that temporal context is fixed by the architecture instead of inferred from the data. To address this shortcoming, recurrent neural networks incorporate connections between the latent variables at different time steps. This enables them to capture arbitrary dynamics, yet they are more difficult to train [2].

Another family of dynamical models that has received much attention are probabilistic models such as Hidden Markov Models and more general Dynamic Bayes nets. Due to their statistical structure, they are perhaps more interpretable than their neural-network counterparts. Such models can be separated into two classes [19]: tractable models, which permit an exact and efficient procedure for inferring the posterior distribution over latent variables, and intractable models which require approximate inference. Tractable models such as Linear Dynamical Systems and HMMs are widely applied and well understood. However, they are limited in the types of structure that they can capture. These limitations are exactly what permit simple exact inference. Intractable models, such as Switching LDS, Factorial HMMs, and other more complex variants of DBNs, permit more complex regularities to be learned from data. This comes at the cost of using approximate inference schemes, for example, Gibbs sampling or variational inference, which introduce either a computational burden or poorly approximate the true posterior.

In this paper we focus on Temporal Restricted Boltzmann Machines [19, 20], a family of models that permits tractable inference but allows much more complicated structure to be extracted from time series data. Models of this class have a number of attractive properties: 1) They employ a distributed state space where multiple factors interact to explain the data; 2) They permit nonlinear dynamics and multimodal predictions; and 3) Although maximum likelihood is intractable for these models, there exists a simple and efficient approximate learning algorithm that works well in practice.

We concentrate on modeling the distribution of an output sequence conditional on an input sequence. Recurrent neural networks address this problem, though in a non-probabilistic sense. The Input-Output HMM [3] extends HMMs by conditioning both their dynamics and emission model on an input sequence.
However, the IOHMM is representationally limited by its simple discrete state in the same way as an HMM. Therefore we extend TRBMs to cope with input-output sequence pairs. Given the conditional nature of a TRBM (its hidden states and observations are conditioned on short histories of these variables), conditioning on an external input is a natural extension to this model.

Several real-world problems involve sequence-to-sequence mappings. This includes motion-style transfer [9], economic forecasting with external indicators [13], and various tasks in natural language processing [6]. Sequence classification is a special case of this setting, where a scalar target is conditioned on an input sequence. In this paper, we consider facial expression transfer, a well-known problem in computer graphics. Current methods considered by the graphics community are typically linear (e.g., methods based on blendshape mapping) and they do not take into account dynamical aspects of the facial motion itself. This makes it difficult to retarget the facial articulations involved in speech. We propose a model that can encode a complex nonlinear mapping from the motion of one individual to another which captures facial geometry and dynamics of both source and target.

2 Related work

In this section we discuss several latent variable models which can map an input sequence to an output sequence. We also briefly review our application field: facial expression transfer.

2.1 Temporal models

Among probabilistic models, the Input-Output HMM [3] is most similar to the architecture we propose. Like the HMM, the IOHMM is a generative model of sequences, but it models the distribution of an output sequence conditional on an input, while the HMM simply models the distribution of an output sequence. The IOHMM is also trained with a more discriminative-style EM-based learning paradigm than HMMs. A similarity between IOHMMs and TRBMs is that in both models, the dynamics and emission distributions are formulated as neural networks. However, the IOHMM state space is a multinomial while TRBMs have binary latent states. A K-state TRBM can thus represent the history of a time series using 2^K state configurations, while IOHMMs are restricted to K settings.

The Continuous Profile Model [12] is a rich and robust extension of dynamic time warping that can be applied to many time series in parallel. The CPM has a discrete state space and requires an input sequence; it is therefore a type of conditional HMM. However, unlike the IOHMM and our proposed model, the input is unobserved, making learning completely unsupervised.

Our approach is also related to the many proposed techniques for supervised learning with structured outputs. The problem of simultaneously predicting multiple, correlated variables has received a great deal of recent attention [1]. Many of these models, including the one we propose, are formally defined as undirected graphs whose potential functions are functions of some input. In Graph Transformer Networks [11] the dependency structure on the outputs is chosen to be sequential, which decouples the graph into pairwise potentials. Conditional Random Fields [10] are a special case of this model with linear potential functions. These models are trained discriminatively, typically with gradient descent, whereas our model is trained generatively using an approximate algorithm.
2.2 Facial expression transfer

Facial expression transfer, also called motion retargeting or cross-mapping, is the act of adapting the motion of an actor to a target character. It, as well as the related fields of facial performance capture and performance-driven animation, has been a very active research area over the last several years. According to a review by Pighin [15], the two most important considerations for this task are facial model parameterization (called "the rig" in the graphics industry) and the nature of the chosen cross-mapping.

A popular parameterization is "blendshapes", where a rig is a set of linearly combined facial expressions, each controlled by a scalar weight. Retargeting amounts to estimating a set of blending weights at each frame of the source data that accurately reconstructs the target frame. There are many different ways of selecting blendshapes, from simply selecting a set of sufficient frames from the data, to creating models based on principal components analysis. Another common parameterization is to simply represent the face by its vertex, polygon, or spline geometry. The downside of this approach is that this representation has many more degrees of freedom than are present in an actual facial expression.

A linear function is the most common choice for cross-mapping. While it is simple to estimate from data, it cannot produce the subtle nonlinear motion required for realistic graphics applications. An example of this approach is [5], which uses a parametric model based on eigen-points to reliably synthesize simple facial expressions but ultimately fails to capture more subtle details. Vlasic et al. [23] have proposed a multilinear mapping where variation in appearance across the source and target is explicitly separated from the variation in facial expression. None of these models explicitly incorporates dynamics into the mapping, which is a limitation addressed by our approach. Finally, we note that Susskind et al. [18] have used RBMs for facial expression generation, but not retargeting. Their work is focused on static rather than temporal data.

3 Modeling dynamics with Temporal Restricted Boltzmann Machines

In this section we review the Temporal Restricted Boltzmann Machine. We then introduce the Input-Output Temporal Restricted Boltzmann Machine, which extends the architecture to model an output sequence conditional on an input sequence.

3.1 Temporal Restricted Boltzmann Machines

A Restricted Boltzmann Machine [17] is a bipartite Markov Random Field consisting of a layer of stochastic observed variables ("visible units") connected to a layer of stochastic latent variables ("hidden units"). The absence of connections between hidden units ensures they are conditionally independent given a setting of the visible units, and vice versa. This simplifies inference and learning.

The RBM can be extended to model temporal data by conditioning its visible units and/or hidden units on a short history of their activations. This model is called a Temporal Restricted Boltzmann Machine [19]. Conditioning the model on the previous settings of the hidden units complicates inference. Although one can approximate the posterior distribution with the filtering distribution (treating the past settings of the hidden units as fixed), we choose to use a simplified form of the model which conditions only on previous visible states [20]. This model inherits the most important computational properties of the standard RBM: simple, exact inference and efficient approximate learning.
RBMs typically have binary observed variables and binary latent variables, but to model real-valued data (e.g., the parameterization of a face) we can use a modified form of the TRBM with conditionally independent linear-Gaussian observed variables [7]. The model, depicted in Fig. 1 (left), defines a joint probability distribution over a real-valued representation of the current frame of data, v_t, and a collection of binary latent variables, h_t, h_j \in {0, 1}:

    p(v_t, h_t | v_{<t}) = \exp(-E(v_t, h_t | v_{<t})) / Z(v_{<t}).    (1)

For notational simplicity, we concatenate a short history of data at t-1, ..., t-N into a vector which we call v_{<t}. The distribution specified by Eq. 1 is conditional on this history and normalized by a quantity Z which is intractable to compute exactly (doing so would require integrating over the joint space of all possible output configurations and all settings of the binary latent variables) but is needed for neither inference nor learning. The joint distribution is characterized by an "energy function":

    E(v_t, h_t | v_{<t}) = \frac{1}{2} \sum_i (v_{i,t} - \hat{a}_{i,t})^2 - \sum_j h_{j,t} \hat{b}_{j,t} - \sum_{ij} W_{ij} v_{i,t} h_{j,t}    (2)

which captures pairwise interactions between variables, assigning high energy to improbable configurations and low energy to probable configurations. In the first term, each visible unit contributes a quadratic penalty that depends on its deviation from a "dynamic mean" determined by the history:

    \hat{a}_{i,t} = a_i + \sum_k A_{ki} v_{k,<t}    (3)

where k indexes the history vector. Weight matrix A and offset vector a (with elements a_i) parameterize the autoregressive relationship between the history and the current frame of data. Each hidden unit h_j contributes a linear offset to the energy which is also a function of the history:

    \hat{b}_{j,t} = b_j + \sum_k B_{kj} v_{k,<t}.    (4)

Weight matrix B and offset vector b (with elements b_j) parameterize the relationship between the history and the latent variables. The final term of Eq. 2 is a bi-linear constraint on the interaction between the current setting of the visible units and hidden units, characterized by matrix W.

The density for observation v_t conditioned on the past can be expressed by marginalizing out the binary hidden units in Eq. 1:

    p(v_t | v_{<t}) = \sum_{h_t} p(v_t, h_t | v_{<t}) = \sum_{h_t} \exp(-E(v_t, h_t | v_{<t})) / Z(v_{<t}),    (5)

while the probability of observing a sequence, v_{(N+1):T}, given an N-frame history v_{1:N}, is simply the product of all the local conditional probabilities up to time T, the length of the sequence:

    p(v_{(N+1):T} | v_{1:N}) = \prod_{t=N+1}^{T} p(v_t | v_{<t}).    (6)

The TRBM has been used to generate and denoise sequences [19, 20], as well as a prior in multiview person tracking [22]. In all cases, it requires an initialization, v_{1:N}, to perform these tasks. Alternatively, by learning a prior model of v_{1:N}, it could easily be extended to model sequences non-conditionally, i.e., defining p(v_{1:T}).
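The conditional structure of Eqs. 1-4 is simple to compute. The following sketch is only an illustration under our own assumptions (the use of NumPy, the array names, and the shapes are ours, not the authors'): it forms the dynamic biases from the history and exposes the two factorial conditionals that make inference in this model exact.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def trbm_conditionals(v_hist, W, A, B, a, b):
    """Dynamic biases and conditionals of a linear-Gaussian TRBM (Eqs. 1-4).

    v_hist : concatenated history v_<t of the N previous frames, shape (N*D,)
    W      : visible-hidden weights, shape (D, H)
    A, B   : history weights, shapes (N*D, D) and (N*D, H)
    a, b   : static offsets, shapes (D,) and (H,)
    """
    a_hat = a + v_hist @ A   # dynamic mean of the visibles (Eq. 3)
    b_hat = b + v_hist @ B   # dynamic hidden bias (Eq. 4)

    def p_h_given_v(v_t):
        # Hidden units are conditionally independent Bernoulli variables.
        return sigmoid(v_t @ W + b_hat)

    def sample_v_given_h(h_t, rng):
        # Unit-variance Gaussian around the hidden-driven mean.
        return a_hat + h_t @ W.T + rng.standard_normal(len(a_hat))

    return a_hat, b_hat, p_h_given_v, sample_v_given_h
```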
3.2 Input-Output Temporal Restricted Boltzmann Machines

Ultimately we are interested in learning a probabilistic mapping from an input sequence, s_{1:T}, to an output sequence, v_{1:T}. In other words, we seek a model that defines p(v_{1:T} | s_{1:T}). However, the TRBM only defines a distribution over an output sequence, p(v_{1:T}). Extending this model to learn an input-output mapping is the primary contribution of this paper. Without loss of generality, we will assume that in addition to having access to the complete history of the input, we also have access to the first N frames of the output. Therefore we seek to model p(v_{(N+1):T} | v_{1:N}, s_{1:T}).

By placing an Nth-order Markov assumption on the current output, v_t, that is, assuming conditional independence on all other variables given an N-frame history of v_t and an (N+1)-frame history of the input (up to and including time t), we can operate in an online setting:

    p(v_{(N+1):T} | v_{1:N}, s_{1:T}) = \prod_{t=N+1}^{T} p(v_t | v_{<t}, s_{\leq t}),    (7)

where we have used the shorthand s_{\leq t} to describe a vector that concatenates a window over the input at time t, t-1, ..., t-N. Note that in an offline setting, it is simple to generalize the model by conditioning the term inside the product on an arbitrary window of the source (which may include source observations past time t).

[Figure 1: Left: A Temporal Restricted Boltzmann Machine. Middle: An Input-Output Temporal Restricted Boltzmann Machine. Right: A factored third-order IOTRBM (FIOTRBM).]

We can easily adapt the TRBM to model p(v_t | v_{<t}, s_{\leq t}) by modifying its energy function to incorporate the input. The general form of the energy function remains the same as Eq. 2, but it is now also conditioned on s_{\leq t} by redefining the dynamic biases (Eqs. 3 and 4) as follows:

    \hat{a}_{i,t} = a_i + \sum_k A_{ki} v_{k,<t} + \sum_l P_{li} s_{l,\leq t}    (8)

    \hat{b}_{j,t} = b_j + \sum_k B_{kj} v_{k,<t} + \sum_l Q_{lj} s_{l,\leq t}    (9)

where l is an index over elements of the input vector. Therefore the matrix P ties the input linearly to the output (much like existing simple models), but the matrix Q also allows the input to interact nonlinearly with the output through the latent variables h. We call this model an Input-Output Temporal Restricted Boltzmann Machine (IOTRBM). It is depicted in Fig. 1 (middle).

A desirable criterion for training the model is to maximize the conditional log likelihood of the data:

    L = \sum_{t=N+1}^{T} \log p(v_t | v_{<t}, s_{\leq t}).    (10)

However, the gradient of Eq. 10 with respect to the model parameters \theta = {W, A, B, P, Q, a, b} is difficult to compute analytically due to the normalization constant Z. Therefore, Contrastive Divergence (CD) learning is typically used in place of maximum likelihood. It follows the approximate gradient of an objective function that is the difference between two Kullback-Leibler divergences [8]. It is widely used in practice and tends to produce good generative models [4]. The CD updates for the IOTRBM have a common form (see the supplementary material for details):

    \Delta\theta_i \propto \sum_{t=N+1}^{T} \left[ \left\langle -\frac{\partial E(v_t, h_t | v_{<t}, s_{\leq t})}{\partial \theta_i} \right\rangle_{data} - \left\langle -\frac{\partial E(v_t, h_t | v_{<t}, s_{\leq t})}{\partial \theta_i} \right\rangle_{recon} \right]    (11)

where \langle \cdot \rangle_{data} is an expectation with respect to the training data distribution, and \langle \cdot \rangle_{recon} is the M-step reconstruction distribution as obtained by alternating Gibbs sampling, starting with the visible units clamped to the training data. The input and output history stay fixed during Gibbs sampling. CD requires two main operations: 1) sampling the latent variables, given a window of the input and output,

    p(h_{j,t} = 1 | v_t, v_{<t}, s_{\leq t}) = \left( 1 + \exp\left( -\sum_i W_{ij} v_{i,t} - \hat{b}_{j,t} \right) \right)^{-1},    (12)

and 2) reconstructing the output data, given the latent variables:

    v_{i,t} | h_t, v_{<t}, s_{\leq t} \sim N\left( v_{i,t} ; \sum_j W_{ij} h_{j,t} + \hat{a}_{i,t}, 1 \right).    (13)

Eqs. 12 and 13 are alternated M times to arrive at the M-step quantities used in the weight updates. More details are given in Sec. 4.
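As a concrete illustration of this training loop, here is a minimal sketch of a single CD update of W. It is our own simplification rather than the authors' implementation: names, shapes, and the plain gradient step are assumptions, the dynamic biases are assumed precomputed from Eqs. 8-9, and the Gaussian reconstruction of Eq. 13 is replaced by its conditional mean.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_update_W(v_t, a_hat, b_hat, W, lr=1e-3, n_gibbs=1, rng=None):
    """One CD-M update of W for a single training case (a sketch).

    v_t          : current output frame clamped to data, shape (D,)
    a_hat, b_hat : dynamic biases from Eqs. 8-9, shapes (D,) and (H,)
    The input and output histories stay fixed throughout, entering only
    through a_hat and b_hat. Updates for A, B, P, Q, a, and b follow the
    same data-minus-reconstruction pattern of Eq. 11.
    """
    rng = rng or np.random.default_rng()

    # Positive phase: hidden probabilities with visibles clamped (Eq. 12).
    h_data = sigmoid(v_t @ W + b_hat)
    positive = np.outer(v_t, h_data)

    # Negative phase: n_gibbs steps of alternating Gibbs sampling.
    v_recon = v_t
    for _ in range(n_gibbs):
        h_prob = sigmoid(v_recon @ W + b_hat)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Eq. 13 reconstruction; we use the conditional mean here rather
        # than drawing the unit-variance Gaussian sample (a simplification).
        v_recon = a_hat + h_sample @ W.T
    h_recon = sigmoid(v_recon @ W + b_hat)
    negative = np.outer(v_recon, h_recon)

    return W + lr * (positive - negative)
```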
3.3 Factored Third-order Input-Output Temporal Restricted Boltzmann Machines

In an IOTRBM, the input and target history can only modify the hidden units and current output through additive biases. There has been recent interest in exploring higher-order RBMs in which variables interact multiplicatively [14, 16, 21]. Fig. 1 (right) shows an IOTRBM whose parameters W, Q, and P have been replaced by a three-way weight tensor defining a multiplicative interaction between the three sets of variables. The introduction of the tensor makes the number of model parameters cubic, and therefore we factor the tensor into three matrices: W^s, W^h, and W^v. These parameters connect the input, hidden units, and current target, respectively, to a set of deterministic units which modulate the connections between variables. The introduction of these factors corresponds to a kind of low-rank approximation to the original interaction tensor that uses O(K^2) parameters instead of O(K^3).

The energy function of this model is:

    E(v_t, h_t | v_{<t}, s_{\leq t}) = \frac{1}{2} \sum_i (v_{i,t} - \hat{a}_{i,t})^2 - \sum_j h_{j,t} \hat{b}_{j,t} - \sum_f \sum_{ijl} W^v_{if} W^h_{jf} W^s_{lf} v_{i,t} h_{j,t} s_{l,\leq t}    (14)

where f indexes factors and \hat{a}_{i,t} and \hat{b}_{j,t} are defined by Eqs. 3 and 4 respectively. Weight updates all have the same form as Eq. 11 (see the supplementary material for details). The conditional distribution of the latent variables given the other variables becomes

    p(h_{j,t} = 1 | v_t, v_{<t}, s_{\leq t}) = \left( 1 + \exp\left( -\sum_f W^h_{jf} \sum_i W^v_{if} v_{i,t} \sum_l W^s_{lf} s_{l,\leq t} - \hat{b}_{j,t} \right) \right)^{-1}    (15)

and the reconstruction distribution becomes

    v_{i,t} | h_t, v_{<t}, s_{\leq t} \sim N\left( v_{i,t} ; \sum_f W^v_{if} \sum_j W^h_{jf} h_{j,t} \sum_l W^s_{lf} s_{l,\leq t} + \hat{a}_{i,t}, 1 \right).    (16)
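A practical point worth making explicit is that the factored conditionals never require materializing the three-way tensor: each one touches the parameters only through per-factor inner products. A minimal sketch, under our own (assumed) naming and shape conventions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fiotrbm_conditionals(v_t, s_win, Wv, Wh, Ws, a_hat, b_hat):
    """Conditionals of the factored third-order model (Eqs. 15-16).

    v_t        : current output frame, shape (D,)
    s_win      : concatenated input window s_<=t, shape (L,)
    Wv, Wh, Ws : factor matrices, shapes (D, F), (H, F), (L, F)
    a_hat, b_hat : dynamic biases from Eqs. 3-4, shapes (D,) and (H,)
    """
    f_s = s_win @ Ws   # per-factor input projections, shape (F,)
    f_v = v_t @ Wv     # per-factor output projections, shape (F,)

    # Eq. 15: the input gates the visible-hidden interaction factor by factor.
    p_h = sigmoid(Wh @ (f_v * f_s) + b_hat)

    def recon_mean(h_t):
        # Eq. 16: mean of the Gaussian reconstruction given a hidden sample.
        return Wv @ ((h_t @ Wh) * f_s) + a_hat

    return p_h, recon_mean
```

The design point is that f_s and f_v are each computed once per step, so the multiplicative gating costs little more than the pairwise model while still letting the input modulate every visible-hidden interaction.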
4 Experiments

We evaluate the IOTRBM on two facial expression transfer datasets, one based on 2D motion capture and the other on 3D motion capture. On both datasets we compare our model against three baselines:

Linear regression (LR): We perform a regularized linear regression between each frame of the input and each frame of the output. The model is solved analytically by least squares. The regularization parameter is set by cross-validation on the training set.

Nth-order autoregressive model (AR): This model improves on linear regression by also considering linear dynamics through the history of the input and output. (It considers the history of the source when predicting the target, so it is not purely autoregressive.) Again through regularized least squares, we fit a matrix that maps from a concatenation of the (N+1)-frame input window s_{\leq t} and the N-frame target window v_{<t}.

Multilayer perceptron (MLP): A nonlinear model with one deterministic hidden layer of the same cardinality as the IOTRBM. The input is the concatenation of the source and target history; the output is the current target frame. We train with a nonlinear conjugate gradient method.

These baselines were chosen to highlight the main differences of our approach from the majority of techniques proposed for this application, namely the consideration of dynamics and the use of a nonlinear mapping through latent variables. We also tried an IORBM, that is, an IOTRBM with no target history. It consistently performed worse than the IOTRBM, and we do not report its results.

Details of learning. All models saw a window of 4 input frames (3 previous + 1 current) and 6 previous output frames, with the exception of linear regression, which only saw the current input. For the IOTRBM models, we found that initializing the parameters A and P to the solution found by the autoregressive model gave slightly better results. All other parameters were initialized to small random values. For CD learning we set the learning rates for A and P to 10^-6 and for all other parameters to 10^-3. This was done to prevent strong correlations from dominating early in learning. All parameters used a fixed weight decay of 0.02 and momentum of 0.75. As suggested by [21], we added a small amount of Gaussian noise (sigma = 0.1) to the output history to make the model more robust to unseen outputs during prediction (recall that the model sees true outputs at training time, but is fed back predictions at test time).

4.1 2D facial expression transfer

The first dataset we consider consists of facial motion capture of two subjects who were asked to speak the same phrases. It has 186 trials, totaling 10414 frames per subject. Each frame is 180-dimensional, representing the x and y positions of 90 facial markers. Each pair of sequences has been manually time-aligned based on a phonetic transcription, so the sequences are synchronized between subjects.

Table 1: 2D dataset. Mean RMS marker error (mm) on test output sequences.

    Model              S1    S2    S3    S4    S5    S6    Mean
    Linear regression  6.19  6.18  6.19  5.85  6.13  6.34  6.15 ± 0.15
    Autoregressive     5.43  5.22  5.67  5.37  5.37  5.76  5.47 ± 0.20
    MLP                5.30  5.28  5.76  5.31  5.28  5.31  5.37 ± 0.19
    IOTRBM             5.31  5.27  5.71  5.14  5.17  5.08  5.28 ± 0.22
    FIOTRBM            5.41  5.43  5.76  5.42  5.45  5.46  5.49 ± 0.13

Table 2: 2D dataset. Mean RMS error (in mm) under noisy input and output history (Split 6). Column groups give the standard deviation of the added noise; N/A marks the output-noise conditions, which do not apply to linear regression since it uses no history.

    Model              Input noise          Output noise         Input & output noise
                       0.01   0.1    1      0.01   0.1    1      0.01   0.1    1
    Linear regression  6.48   15.05  136.2  N/A    N/A    N/A    N/A    N/A    N/A
    Autoregressive     5.83   10.48  84.40  5.78   7.24   36.19  5.85   11.26  94.35
    MLP                5.40   5.42   6.80   5.40   5.43   6.37   5.40   5.43   7.55
    IOTRBM             5.06   5.07   5.39   5.07   5.18   8.48   5.07   5.17   8.57
    FIOTRBM            5.46   5.46   5.66   5.46   5.46   5.56   5.46   5.46   5.82

Preprocessing. We found the original data to exhibit significant random relative motion between the two faces throughout the entire sequences, which could not reasonably be modeled. Therefore, we transformed the data with an affine transform on all markers in each frame such that a select few nose and skull markers per frame (stationary facial locations) were approximately fixed relative to the first frame of the source sequences. Both the input and output were reduced to 30 dimensions by retaining only their first 30 principal components. This maintained 99.9% of the variance in the data. Finally, the data was normalized to have zero mean and scaled by the average standard deviation of all the elements in the training set.

We evaluate the various methods on 6 random arbitrary splits of the dataset. In each case, 150 complete sequences are retained for training and the remaining 36 sequences are used for testing. Each model is presented with the first 6 frames of the true test output and successive 4-frame windows of the true test input. The exception is the linear regression model, which only sees the current input. Therefore prediction is measured from the 7th frame onward. The IOTRBM produces its final output by initializing its visible units with the previous output frame plus a small amount of Gaussian noise and then performing 30 alternating Gibbs steps. At the last step, we do not sample the hidden units. This predicted output frame then becomes the most recent frame in the output history, and we iterate forward, as sketched below.
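The forward iteration just described can be outlined as follows. This is a pseudocode-level sketch under our own assumptions: gibbs_predict is a hypothetical wrapper around the model's 30 alternating Gibbs steps (Eqs. 12-13, with the hidden units left unsampled at the last step), and the noise level is a placeholder.

```python
import numpy as np

def retarget(s_seq, v_init, gibbs_predict, n_hist=6, in_win=4, sigma=0.1,
             rng=None):
    """Iterative retargeting of a test sequence (a sketch, not the paper's code).

    s_seq         : input frames, shape (T, L)
    v_init        : the first n_hist true output frames, shape (n_hist, D)
    gibbs_predict : assumed wrapper mapping (v_hist, s_win, v_start) to the
                    next output frame of shape (D,)
    """
    rng = rng or np.random.default_rng()
    out = [np.asarray(v) for v in v_init]
    for t in range(n_hist, len(s_seq)):
        v_hist = np.concatenate(out[t - n_hist:t])           # 6 previous outputs
        s_win = np.concatenate(s_seq[t - in_win + 1:t + 1])  # 3 previous + current input
        # Initialize the visibles at the previous output frame plus noise.
        v_start = out[-1] + sigma * rng.standard_normal(out[-1].shape)
        out.append(gibbs_predict(v_hist, s_win, v_start))
    return np.stack(out)
```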
The results reported use an IOTRBM with 30 hidden units. We also tried a model with 100 hidden units, which performed slightly worse. Finally, we include the performance of a factored, third-order IOTRBM. This model used 30 hidden units and 50 factors.

We report RMS marker error in mm, where the mean is taken over all markers, frames, and test sequences (Table 1). Not surprisingly, the IOTRBM consistently outperforms linear regression. In all but two splits (where performance is comparable) the IOTRBM outperforms the AR model. Mean performance over the splits shows an advantage to our approach. This is also qualitatively apparent in videos we have attached as supplementary material that superimpose the true target with predictions from the model. We encourage the reader to view the attached videos, as certain aesthetic properties, such as the tradeoff between smoothness and responsiveness, are not captured by RMS error. We observed that on the 2D dataset, the FIOTRBM had no advantage over the simpler IOTRBM.

To compare the robustness of each model to corrupted inputs or outputs, we added various amounts of white Gaussian noise to the input window, the output history initialization, or both during retargeting with a trained model. This was performed for data split S6 (though we observed similar results for other splits). The performance of each model is given in Table 2. The IOTRBM generally outperforms the baseline models in the presence of noise. This is most apparent in the case of input noise: the scenario we would most likely find in practice. However, under low to moderate output noise, we note that the IOTRBM is robust, to the point that it does not even require a valid N-frame output initialization to produce a sensible retargeting. Interestingly, we also observe the FIOTRBM performing well under high-noise conditions.

4.2 3D facial expression transfer

The second dataset we consider consists of facial motion capture data of two subjects asked to perform a set of isolated facial movements based on FACS. The movements are more exaggerated than the speech performed in the 2D set. The dataset consists of two trials, totaling 1050 frames per subject. In contrast to the 2D set, the marker set used differs between subjects. The first subject has 313 markers (939 dimensions per frame) and the second subject has 332 markers (996 dimensions per frame). There is no correspondence between marker sets.

Preprocessing. The 3D data was not spatially aligned. Both the input and output were PCA-reduced to 50 dimensions (99.9% of variance). We then normalized in the same way as for the 2D data.

We evaluate performance on 5 random splits of the 3D dataset, shown in Table 3. The IOTRBM and FIOTRBM models considered have identical architectures to the ones used for the 2D data. We found empirically that increasing the noise level of the output history to sigma = 1 improved generalization on this smaller dataset.

Table 3: 3D dataset. Mean RMS marker error (mm) on test output sequences.

    Model           S1    S2    S3    S4    S5    Mean
    Autoregressive  2.12  2.98  2.44  2.26  2.46  2.45 ± 0.33
    MLP             1.98  1.58  1.69  1.51  1.39  1.63 ± 0.22
    IOTRBM          1.98  2.62  2.37  2.11  2.27  2.27 ± 0.25
    FIOTRBM         1.70  1.54  1.55  1.42  1.48  1.54 ± 0.10

[Figure 2: Retargeting with the third-order factored TRBM. Every 30th frame is shown. The top row shows the input; the bottom row shows the true target (circles) and the prediction from our model (crosses). This figure is best viewed in electronic form and zoomed.]

Similar to the experiments with 2D data, the IOTRBM consistently outperforms the autoregressive model.
However, it does not outperform the MLP. Interestingly, the factored, third-order model considerably improves on the performance of both the standard IOTRBM and the MLP. Fig. 2 visualizes the predictions made by the FIOTRBM. We also refer the reader to videos included as supplementary material. These demonstrate a qualitative improvement of our models over the baselines considered.

5 Conclusion

We have introduced the Input-Output Temporal Restricted Boltzmann Machine, a probabilistic model for learning mappings between sequences. We presented two variants of the model, one with pairwise and one with third-order multiplicative interactions. Our experiments so far are limited to dynamic facial expression transfer, but nothing restricts the model to this domain. Current methods for facial expression transfer are unable to factor out style in the retargeted motion, making it difficult to adjust the emotional content of the resulting facial animation. We are therefore interested in exploring extensions of our model that include style-based contextual variables (cf. [21]).

Acknowledgements

The authors thank Rafael Tena and Sarah Hilder for assisting with data collection and annotation.

Matlab code is available at: http://www.matthewzeiler.com/pubs/nips2011/

References

[1] G. H. Bakir, T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, and S. V. N. Vishwanathan. Predicting Structured Data. MIT Press, 2007.
[2] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.
[3] Y. Bengio and P. Frasconi. An input/output HMM architecture. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Proc. NIPS 7, pages 427-434, 1995.
[4] M. Carreira-Perpinan and G. Hinton. On contrastive divergence learning. In AISTATS, pages 59-66, 2005.
[5] E. Chuang and C. Bregler. Performance driven facial animation using blendshape interpolation. Technical report, Stanford University, 2002.
[6] R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, pages 160-167, 2008.
[7] Y. Freund and D. Haussler. Unsupervised learning of distributions of binary vectors using 2-layer networks. In Proc. NIPS 4, 1992.
[8] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural Comput, 14(8):1771-1800, 2002.
[9] E. Hsu, K. Pulli, and J. Popović. Style translation for human motion. ACM Trans. Graph., 24(3):1082-1089, 2005.
[10] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, pages 282-289, 2001.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[12] J. Listgarten, R. Neal, S. Roweis, and A. Emili. Multiple alignment of continuous time series. In Proc. NIPS 17, 2005.
[13] A. Mateo, A. Muñoz, and J. García-González. Modeling and forecasting electricity prices with input/output hidden Markov models. IEEE Trans. on Power Systems, 20(1):13-24, 1995.
[14] R. Memisevic and G. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Comput, 22(6):1473-92, 2010.
[15] F. Pighin and J. P. Lewis. Facial motion retargeting. In ACM SIGGRAPH 2006 Courses, SIGGRAPH '06, New York, NY, USA, 2006. ACM.
[16] M. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In Proc. CVPR, pages 2551-2558, 2010.
[17] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart, J. L. McClelland, et al., editors, Parallel Distributed Processing: Volume 1: Foundations, pages 194-281. MIT Press, Cambridge, MA, 1986.
[18] J. Susskind, G. Hinton, J. Movellan, and A. Anderson. Generating facial expressions with deep belief nets. In Affective Computing, Focus on Emotion Expression, Synthesis and Recognition. I-TECH Education and Publishing, 2008.
[19] I. Sutskever and G. Hinton. Learning multilevel distributed representations for high-dimensional sequences. In Proc. AISTATS, 2007.
[20] G. W. Taylor, G. E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. In Proc. NIPS 19, 2007.
[21] G. Taylor and G. Hinton. Factored conditional restricted Boltzmann machines for modeling motion style. In Proc. ICML, pages 1025-1032, 2009.
[22] G. Taylor, L. Sigal, D. Fleet, and G. Hinton. Dynamical binary latent variable models for 3D human pose tracking. In Proc. CVPR, 2010.
[23] D. Vlasic, M. Brand, H. Pfister, and J. Popović. Face transfer with multilinear models. In ACM SIGGRAPH 2005, pages 426-433, 2005.
An Unsupervised Decontamination Procedure for Improving the Reliability of Human Judgments

Michael C. Mozer^1, Benjamin Link^1, Harold Pashler^2
^1 Dept. of Computer Science, University of Colorado
^2 Dept. of Psychology, UCSD

Abstract

Psychologists have long been struck by individuals' limitations in expressing their internal sensations, impressions, and evaluations via rating scales. Instead of using an absolute scale, individuals rely on reference points from recent experience. This relativity of judgment limits the informativeness of responses on surveys, questionnaires, and evaluation forms. Fortunately, the cognitive processes that map stimuli to responses are not simply noisy, but rather are influenced by recent experience in a lawful manner. We explore techniques to remove sequential dependencies, and thereby decontaminate a series of ratings to obtain more meaningful human judgments. In our formulation, the problem is to infer latent (subjective) impressions from a sequence of stimulus labels (e.g., movie names) and responses. We describe an unsupervised approach that simultaneously recovers the impressions and parameters of a contamination model that predicts how recent judgments affect the current response. We test our iterated impression inference, or I3, algorithm in three domains: rating the gap between dots, the desirability of a movie based on an advertisement, and the morality of an action. We demonstrate significant objective improvements in the quality of the recovered impressions.

1 Introduction

Individuals are often asked to convey their opinions and sentiments in the form of quantitative judgments. On a 1-5 scale, how much did you enjoy the movie Kung Fu Panda? How many stars would you give the Olive Garden restaurant? How bad is the pain in your back, where 1 means no pain and 10 means unbearable? What grade should you assign to the term paper on "Consciousness and Commander Data"? On a Likert scale (ranging from strongly disagree to strongly agree), what is your attitude toward the statement "NIPS should stay in North America"?

Researchers in the social sciences have developed methods for minimizing response bias of various sorts (Bagozzi, 1994). Response bias can occur from the wording of questions, respondents trying to portray themselves in a certain way, individual differences in the use of the response scale (e.g., extreme responding versus midpoint responding), or even cultural variation in the ideal rating-scale granularity (Chami-Castaldi, Reynolds, & Wallace, 2008).

An additional influence on responses is the sequential ordering of items to be judged. To illustrate, suppose you are asked to make a series of moral judgments concerning various actions using a 1-10 scale, with a rating of 1 indicating "not particularly bad or wrong" and a rating of 10 indicating "extremely evil". When individuals are shown the series on the left, their ratings of item (3) tend to be higher than the identical item (3') in the series on the right (Parducci, 1968).

    (1) Stealing a towel from a hotel            (1') Testifying falsely for pay
    (2) Keeping a dime you find on the ground    (2') Using guns on striking workers
    (3) Poisoning a barking dog                  (3') Poisoning a barking dog

Individuals seem incapable of making absolute judgments, and instead recent experience provides reference points with respect to which relative judgments are made (e.g., Laming, 1984; Parducci,
These sequential dependencies in judgment arise in almost every task in which an individual is asked to make a series of responses, such as filling out surveys, questionnaires, and evaluations (e.g., usability ratings, pain assessment inventories). Every faculty member is aware of drift in grading that necessitates comparing papers graded early on a stack with those graded later. Sequential effects have been demonstrated in domains as varied as legal reasoning and jury evidence interpretation (Furnham, 1986; Hogarth & Einhorn, 1992) and clinical assessments (Mumma & Wilson, 2006). Sequential dependencies are observed in even the simplest of laboratory tasks involving the rating of unidimensional stimuli, such as the loudness of a tone or the length of a line. Individuals? ability to rate even these simple stimuli is surprisingly poor compared to their ability to discriminate the same stimuli. Across many domains, responses convey not much more than two bits of mutual information with the stimulus (Stewart et al., 2005). The poor transmission of information is attributed in large part to the contamination from recent trials. (A trial is psychological jargon for a single judgment of a stimulus.) We (Mozer et al., 2010) have surveyed the empirical and theoretical literature on sequential effects (e.g., DeCarlo & Cross, 1990; Parducci, 1965; Petrov & Anderson, 2005; Stewart et al., 2005), and mention here findings relevant for understanding their mechanistic basis. The influence of recent trials is exerted by both stimuli and responses. We?ll refer to the relevant recent history as the context. One interpretation of the joint effect of stimuli and responses is that individuals form their current response by analogy to recent trials: they determine a response to the current stimulus that has the same relationship as the previous response had to the previous stimulus. The immediately preceding trial has the strongest influence on responses, and the influence of further back trials typically falls off exponentially. Linear autoregression models have done a reasonable job of accounting for sequential dependencies, though many theories include nonlinearities, e.g., memory based anchors, and generalization from previous trials that depends on the similarity of the current stimulus to the previous stimuli. 2 Decontamination Models Because responses are influenced by recent context, they are not as pure a measure of an individual?s stimulus appraisal as one might wish for. In the applied psychology literature, techniques have been explored to mitigate judgment relativity effects, such as increasing the number of response categories and varying the type and frequency of anchors (Mumma & Wilson, 2006; Wedell, Parducci, & Lane, 1990). In previous work (Mozer et al., 2010), we proposed an alternative: an algorithmic technique that decontaminates responses to remove contextual influences. By removing the contamination from previous trials, we recover ratings that are more meaningful than are the raw ratings. We focused on a task in which an objective ground truth exists in order that we could assess the quality of the ratings. The task involved judging the gap between two dots that appear on a monitor. In our experiments, ten equally spaced gaps were used, and we asked participants to rate the gaps on a 1-10 scale. 
Even though there is a one-to-one mapping between stimuli and responses, and participants were shown all gaps prior to the start of the experiment, participants still show strong sequential dependencies in their ratings. Our decontamination procedure obtains ratings that better reflect ground truth than the reported ratings by about 5%. (This improvement is purely due to desequencing, that is, the removal of sequential effects. The improvement rises to about 20% with debiasing and decompressing the ratings.)

Our framework assumes that an external stimulus (the dot pairs in our experiment) maps to an internal mental representation we refer to as the impression, and this impression is then mapped to a response. We treat the mapping from stimulus to impression as veridical, and contamination from recent trials as occurring in the mapping from impression to response. The term sensation might be preferred over impression if the judgment task is purely perceptual, and the term evaluation might be preferred in a domain involving higher cognition, but we will use impression as the generic term for the internal representation. The goal of decontamination is to recover the (latent) impression from the sequence of ratings and stimuli.

To introduce some notation, S_t denotes the stimulus presented on trial t, \phi_{S_t} denotes the impression associated with the stimulus, and the corresponding rating or response is R_t. S_t is a unique label associated with each distinct stimulus. Importantly, we do not assume to have any metric or featural information about the stimulus; we will simply index the n distinct stimuli with integers, 1, 2, ..., n. We denote the impression associated with stimulus s as \phi_s. Decontamination involves discovering \Phi \equiv \{\phi_1, \phi_2, ..., \phi_n\} given a stimulus sequence \{S_1, S_2, ..., S_T\} and the corresponding response sequence \{R_1, R_2, ..., R_T\}, where each of the n distinct stimuli is presented at least once in the stimulus sequence, i.e., \forall s \in \{1, 2, ..., n\}, \exists t : S_t = s.

In our previous work, we utilized ground truth not only to evaluate the quality of decontamination procedures but also for training decontamination models. That is, we adopted a supervised training paradigm in which the ground truth provided the target impression values. We built a single model for all participants. One group of participants was used for training the model, and another group for testing. We explored a range of models, including linear and nonlinear regression, look-up tables, hybrid models, and these same models embedded in conditional random fields (CRFs). With their more powerful inference techniques, the CRF-based models performed the best.

Ground truth is known for stimuli that vary along a unidimensional perceptual continuum, e.g., gaps between points, pitches of tones. However, interesting and realistic judgment tasks often involve stimuli that vary along dimensions that are ill defined and even inherently subjective (i.e., a universal ground truth does not exist). Even perceptual tasks may have this character, e.g., smell or taste evaluation. And in cognitive preference tasks, e.g., the rating of movies or music, not only are the stimulus dimensions unknown but the critical dimensions and preferences may vary from one individual to the next. In complex, cognitive domains, the only means of obtaining ground truth for an individual is to ask the individual to rate the same stimulus in many contexts and to average the ratings to eliminate the "noise" due to sequential effects.
Although training data can in principle be obtained, the cost is nontrivial. In our simple gap-measurement task, even with twenty ratings of each gap, the error in the impression obtained by debiasing, decompressing, and averaging ratings was still nonzero, and dropped with each subsequent rating incorporated into the average.

2.1 The Iterated Impression Inference (I3) Algorithm

Given the challenge of collecting sufficient ground-truth data for supervised training of decontamination models, our goal in this paper is to develop an unsupervised technique for decontamination of rating sequences. Our technique involves simultaneously inferring the set of impressions, \Phi, and the parameters \theta of a contamination model C_\theta that predicts the response at time t given the current stimulus and context:

    \hat{R}_t = C_\theta(\phi_{S_t}, ..., \phi_{S_{t-h}}, R_{t-1}, ..., R_{t-h}),

where \hat{R}_t denotes the prediction and h is the number of trials of history (the context) used to make the prediction. In the style of the EM algorithm, our approach is a straightforward iteration between inference on the latent variables and updating the model parameters:

1. Define the baseline estimate of the impression for each stimulus s to be the average rating given on all trials when s is presented, i.e.,

       \phi_s^0 = E_{\{t : S_t = s\}}[R_t],

   where the superscript 0 associated with \phi indicates the iteration of the algorithm, and E is the expectation over a set of trial indices.

2. Given the impressions determined on the previous iteration i, \Phi^i, train a new contamination model for iteration i+1, C_{\theta^{i+1}}, by searching for model parameters \theta^{i+1} that minimize the mean squared error, MSE(\Phi^i, \theta^{i+1}), defined as

       MSE(\Phi, \theta) = E_t\left[ \left( C_\theta(\phi_{S_t}, ..., \phi_{S_{t-h}}, R_{t-1}, ..., R_{t-h}) - R_t \right)^2 \right].

3. Given the updated contamination model, C_{\theta^{i+1}}, search for a new set of impressions, \Phi^{i+1}, that minimize the mean squared-error criterion MSE(\Phi^{i+1}, \theta^{i+1}).
As the regressand at step 3, we use the deviation of the impression at iteration i from the baseline impression, i.e., ?i ? ?0 . Consequently, regularization penalizes large deviations from the baseline impressions, and a large ridge parameter prevents the impressions from wandering too far from the baseline. Overfitting is avoided because the baseline impressions are grounded in the ratings. 3 Simulations We describe a series of simulations using I3 to decontaminate both artificial sequences and actual rating sequences from behavioral experiments. In all cases, ratings are integers on a 1?10 scale. The impressions are on the same scale, but are allowed to be continuous in [1, 10]. With ||?|| = 2h + 2, ||?|| = n, and a context of h trials required before the model can be used, we need a T ? 3h + n + 2 trial sequence to constrain the model from the data. In all our experiments, two complete passes through the (randomly ordered) set of stimuli is sufficient to constrain the model parameters, although we could in principle get by with fewer than two complete passes through the stimuli. Data from multiple participants are obtained in each experiment. In principle, we could decontaminate each participant?s data in isolation. However, in the simulations we report, we have chosen to build a single contamination model for all participants, thereby imposing a strong constraint on the ? parameters. An alternative would be a hierarchical Bayesian approach with shared hyperpriors but separate parameter inference for each participant. 3.1 Artificial Data To evaluate I3 under ideal circumstances, we construct artificial data via a generative contamination process that is consistent with the linear form of C, and therefore I3 should be able to perform perfect decontamination given sufficient data. The artificial sequences are generated by drawing randomly from n = 10 stimuli for a total of p passes through the stimulus set such that each stimulus appears exactly once in a series of 10 trials and the total number of trials is T = pn. The impressions associated with each stimulus were randomly drawn from {2, 4, 6, 8}, and responses were generated by an autoregressive model: Rt = ?St + ?St?1 ? Rt?1 , anchored with ?S0 ? R0 ? ?1. For example, the impression sequence {8, 2, 4, 8, 6, 4} would yield response sequence {7, 3, 3, 9, 5, 5}. The stimulus-impression mapping was used in the generative process, but was not provided to I3 . The goal of I3 is to infer both the impressions ? and the model parameters ?. Figure 1 shows the results of 50 replications of the simulation in which we vary the number of participants, from 1 to 5, and the number of passes, p, from 2 to 10. For each replication, we compute the mean squared-error (MSE) between the true impressions and ?0 ?the baseline recovered impression that is obtained by averaging ratings across stimulus presentations. We also compute the MSE between the true impressions and ?? ?the impressions recovered by I3 . The percentage improvement due to I3 is displayed on the ordinate of the graph. Even with only one participant and T = 20 trials, I3 reduces the error in the reconstructed impression by 65%. With 3 or more 4 % reduction in impression MSE 100 95 90 85 80 Figure 1: Benefit of decontamination (% reduction in MSE for decontamination over baseline) for the artificial data set as a function of number of stimulus presentations and number of individuals providing data. Error bars are ?1 SEM. 
1 participant 2 participants 3 participants 4 participants 5 participants 75 70 65 2 4 6 8 10 # presentations of each stimulus participants, and T = 50 trials, the error is reduced by 95%. To emphasize, this reduction is due to the use of decontamination and reflects the improvement over a baseline which is the best one can do without considering sequential dependencies, treating variability in responses to the same stimulus simply as noise to be eliminated by averaging. Comparison of the curves for, say, one versus two participants in Figure 1 indicates a benefit of combining data to build a single contamination model for all participants, assuming, of course, that the participants share a common underlying contamination process. 3.2 Gap-Estimation Task Next, we decontaminate data from the gap-estimation task described earlier (Mozer et al., 2010). The reason for using this task is that it provides ground truth, which can be used for evaluating the quality of the recovered impressions even though I3 does not use ground truth impressions in training. The experiment consists of 180 trials in which pairs of dots were presented and participants were asked to estimate the gap between dots. In every block of 10 consecutive trials, the set of distinct gaps were shown in random order. Data were collected from 76 participants. Because the trials are blocked, we can vary the number of passes, p, through the stimulus set used for decontamination by selecting only the first T = 10p trials of the experiment. Figure 2a shows the MSE associated with ?0 , the baseline impression, and ?? , the impression recovered by I3 , as a function of p. The key feature of the curves is that increasing p?obtaining more ratings of each stimulus?produces a steep drop in MSE. Surprisingly, even with 18 ratings of the same stimulus, there is still a significant discrepancy between ground truth and the mean rating provided by the participant. This result is all the more surprising considering that we postprocess the recovered impressions to use the full response range; without this postprocessing, the error is larger yet. Although I3 produces impressions closer to ground truth than the baseline for p between 2 and 12, Figure 2a belies the magnitude of the difference. The solid green curve of Figure 2b shows the percentage reduction in MSE due to I3 . When only a small number of ratings of each stimulus is available, I3 obtains a roughly 10% reduction in MSE by accounting for sequential dependencies. We can evaluate the significance of this reduction by asking what percentage reduction in MSE would be obtained if we simply collected p + 1 ratings of each stimulus instead of p. The benefit of additional data is shown in the dashed blue curve of Figure 2b. For small p, collecting additional data is more valuable than decontaminating the existing data, but once we have roughly p = 4 samples, the benefit of decontamination is comparable to the benefit obtained from collecting an additional round of ratings. However, the cost of collecting additional ratings can be quite high if you consider large data sets, e.g., music and movies. Figure 2c depicts the distribution of errors for individual participants and stimuli, where the red points indicate the (signed) error in the baseline impression for p = 3, with one point per participant and stimulus, and the green points indicate the error in the impressions recovered by I3 for p = 3. We have jittered the horizontal position of the points a bit to make it easier to see the densities. 
It's clear that the large errors are reduced by decontamination, and the other points appear more tightly clustered around zero. (The errors for the endpoints of the stimulus continuum are small because of the rescaling we mentioned previously.)

Figure 2: (a) Mean squared error of the baseline impressions (x^0) and impressions recovered by I3 (x^*) on the gap-estimation task as a function of the number of stimulus presentations included in the modeling. (b) Percentage reduction in MSE for I3 over baseline (green line); benefit of adding an additional stimulus presentation to the data set (dashed blue). (c) Distribution of errors for individual participants and stimuli.

Figure 3: Movie ads used for the rating task. Examples are shown of the action, comedy, drama, family, and sports genres.

3.3 Movie Advertisement Evaluation

Having established the value of decontamination in a domain where performance can be assessed relative to objective truth, we move on to examine judgments in more complex, subjective domains. We conducted a web-based experiment in which participants were asked to indicate their desire to see a movie based solely on a movie poster of the sort that typically appears on a DVD jacket and shows images from the movie, the movie title, and sometimes quotes from reviews (Figure 3). Participants were asked to rate each movie on a 1-10 scale, where 1 means "would never watch this movie" and 10 means "can't wait to see it". The rating task here should not be confused with the more typical rating task of indicating enjoyment of a previously viewed film; this sort of task might be used by film marketers who attempt to design advertisements to have broad appeal. We selected 50 relatively obscure movies from the Internet Movie Database (IMDb.com). Obscurity was determined by a small number of user ratings on IMDb. We polled participants during the experiment to verify that the films were generally unfamiliar. We chose 10 movies from each of five genres: action, comedy, drama, family, and sports. The movies varied in their mean IMDb rating and in their release year, from 1947 to 2007. Participants were asked to rate each movie four times for a total of T = 200 trials. The trials were blocked such that each movie was presented exactly once every 50 trials. The movies within a block were ordered randomly with the constraint that consecutive films were always drawn from different genres. We collected data from 120 participants in the United States using Mechanical Turk, rejecting five whose ratings looked suspicious on several criteria. Ordinarily, tasks for Mechanical Turk workers are defined to be a single trial; we set up a javascript sequence of 200 trials that had to be completed by the worker to receive payment. We required that participants respond on each trial within 15 seconds to ensure a steady rate of responding. Because of the large stimulus set in this experiment (in contrast to the previous experiment with just 10 items), we had sufficient data to perform model selection by cross validation. We split the data from each participant into 100 trials for training and 100 trials for testing. Within the 100 training trials, we used trials 1-80 for training I3 and trials 81-100 as a validation set for model selection.
We searched over two hyperparameters previously described: h, the number of contextual trials to include in the model, and the ridge (regularization) parameter. We say more about h shortly.

Figure 4: Regression model coefficients for impression (open circles) and response (closed circles) terms, for trial lags 1-10.

One way to evaluate the quality of the contamination model and impressions inferred by I3 is to use the model to predict ratings in the test set. Even though our goal is to decontaminate ratings, if we are successful at this goal we should be able to predict the consequence of context on ratings. We predicted ratings in the test set either with the baseline impressions, x^0, or with the combination of the contamination model and the recovered impressions, x^*. We obtained a 5.18% reduction in the test-set prediction error with I3 over the baseline approach that does not exploit sequential dependencies. Of this reduction in error, about 2/3 (3.32%) was attributable to modeling sequential dependencies and 1/3 (1.86%) to iterative impression inference. Having established the quality of the model using the test set, we can ask a more substantive question about how the model treats movie genre. Intuitively, one would expect individuals to cluster their ratings within genre. Some individuals will love dramas and hate comedies; others will have the reverse preferences. This clustering is present in the data. Using the baseline impressions x^0, we computed the ratio of intra- to inter-genre impression variance and compared it to a shuffled measure in which we randomly reassigned movies to genres. The original data has a variance ratio of 8.41, whereas the shuffled data has a ratio of 19.8, a highly significant difference (p < .0001) by a sign test with participants as the random factor. A further indication of the quality of the inferred impressions, x^*, is that they obtain an even more compact clustering by genre: the variance ratio for the inferred impressions is 8.04, a reliable 4.40% reduction in the ratio (p < .0005 by a sign test). The model we present here used h = 10 time steps of history, chosen by cross validation. We searched h in the range 1-10, but didn't go beyond 10 because each additional time step causes the loss of a training trial. We were surprised to find this amount of context showing a benefit, but we believe we have an explanation that suggests the model is capturing slow drift as well as very local influences of the sequence. Figure 4 shows the model coefficients for impression and response terms for lags 1-10. (The lag-0 impression coefficient is 1.0.) To a first order, the impression and response coefficients are symmetric, with most of the impression coefficients being negative. If one thinks of the impression as the average response to a stimulus, then these weights will tend to lower the predicted response on the current trial if recent trials have produced stimulus-conditioned responses that are lower than the average stimulus-conditioned response, and vice versa, i.e., slow drift. Beyond this first-order analysis of the weights, note that the impression and response coefficients are not entirely symmetric: the impression coefficients tend to have smaller magnitudes. Further, the overall magnitude is larger at lag 1, and the lag 1 and 2 coefficients have flipped signs, a typical pattern reflecting assimilation to trial t - 1 and contrast with trial t - 2 (DeCarlo & Cross, 1990).
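A minimal sketch of the lagged regression behind these coefficients: the lag-0 impression coefficient is pinned at 1.0, and ridge regression fits the impression and response terms at lags 1 through h. The design-matrix layout and the use of scikit-learn's Ridge are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_contamination(impressions, responses, h, ridge):
    """impressions, responses: length-T arrays for one participant's trial
    sequence; h: number of lagged trials; ridge: regularization strength."""
    T = len(responses)
    X, y = [], []
    for t in range(h, T):
        lagged = np.concatenate([impressions[t - h:t][::-1],   # impression terms, lags 1..h
                                 responses[t - h:t][::-1]])    # response terms, lags 1..h
        X.append(lagged)
        y.append(responses[t] - impressions[t])                # residual after the lag-0 impression
    model = Ridge(alpha=ridge).fit(np.asarray(X), np.asarray(y))
    return model.coef_[:h], model.coef_[h:]                    # impression and response coefficients
```

Cross validation over h and the ridge parameter, as described above, would simply loop this fit over candidate values and score rating predictions on held-out trials.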
3.4 Morality Judgments

We conducted a final experiment in which participants were asked to rate the morality of various actions, like the Parducci experiment cited in the introduction of this paper. We concocted 25 actions, ranging from the relatively inoffensive (picking two lemons from your neighbors' tree without their permission) to the questionable (failing to report $3000 in cash earnings on a tax return) to the unimaginable (sentencing a rape victim to death to prevent her from carrying a child to term). The experiment consisted of T = 100 trials, in blocks of 25 trials in which each action was presented exactly once. Fifty participants were enlisted using Mechanical Turk, using the same procedure for collecting a sequence of ratings as we used for the movie-ad experiment. Three additional participants were rejected due to suspicious patterns in their data (e.g., all items assigned the same rating). Seventy-five trials were used for training the model, and 25 for testing. Of the 75 training trials, 50 were used for setting model parameters and 25 were used for model selection on h and the ridge parameter. In this data set, h = 4 yielded the best validation (rating prediction) performance. As summarized in Table 1, we found a benefit for decontamination in these data, although perhaps the magnitudes are a bit smaller than in the movie-ad data. The reduction in error in the test set that comes about by predicting responses using I3, relative to using the baseline impressions and not accounting for sequential dependencies, is 4.5%. Beyond this basic verification that the inferred impressions are valid, we hypothesized that in the domain of moral judgments, in contrast to movie-ad sentiment, there should be a strong consensus among individuals within the same culture. Consequently, impressions gleaned from the ratings should show a high degree of interrater agreement. We can measure the interrater agreement as the ratio of the variance of ratings of an item over participants to the variance of mean ratings over items. By this measure, the impressions inferred by I3, x^*, are superior to the baseline impressions, x^0, in that the interrater agreement measure improves by 2.13%. Although this improvement is small, it is highly consistent across items (p < .005 by a sign test with items as the random factor). In this experiment, there are only 25 items, and many of them are quite distinctive and clearly at the ends of any continuum of actions. Consequently, participants are likely to remember not only having rated items previously, but also the ratings that they assigned. To the extent that memory is playing a role in this experiment, it will diminish sequential effects and the potential of a model like I3 to improve the quality of inferred impressions. It seems advisable in future work to use larger sets of items, or to impose a waiting period between passes through the items.

4 Discussion

In this paper, we posed the challenge of improving the quality of human judgments by partialing out contextual contamination. Although both the phenomenon and theory of sequential dependencies have been studied in the psychology literature for over half a century, our work is aimed at the more practical concern of mitigating the influence of recent trials, in order to remove an important source of uncontrolled variability in the data. In this work, we've tried to assess the practical utility of decontamination.
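A one-function sketch of the interrater-agreement measure used above, assuming impressions are arranged as a participants-by-items matrix; the ratio is treated here as smaller-is-better, i.e., less disagreement per item relative to the spread of item means.

```python
import numpy as np

def interrater_ratio(impressions):
    """impressions: (n_participants, n_items) matrix of inferred impressions."""
    within_item = impressions.var(axis=0).mean()    # variance over participants, averaged per item
    between_items = impressions.mean(axis=0).var()  # variance of the item means
    return within_item / between_items
```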
In the gap-estimation task, we showed that decontamination reduced the mismatch between ratings and ground truth about as much as using an additional round of ratings to smooth out the average. With a large set of items to be rated, the time savings can be significant. In the movie-ad and morality tasks, we showed a roughly 5% improvement in rating predictability with decontamination, a nontrivial improvement considering that the Netflix prize was aimed at obtaining a 10% improvement in total. Further, decontamination recovered impressions that were more sensible and therefore more meaningful, in the sense that the impressions were more consistent within genres for movie ads and more consistent across respondents for morality judgments. I3 can readily be incorporated into current web-based and pencil-and-paper surveys if respondents are asked to rate some items more than once. Rating the items twice is sufficient in the tasks we studied to show a benefit of decontamination, but in principle, the model parameters can be constrained with fewer than two complete passes through the items. Further, if a training pool of subjects is used to constrain model parameters (e.g., to set the regularization parameter or to establish priors on the model coefficients), it's conceivable that decontamination will work without requiring much more than a single rating per item. This final point suggests an obvious avenue for further research: exploring more sophisticated, Bayesian approaches that can better exploit cross-participant constraints to improve the quality of decontamination or reduce the amount of data that needs to be collected to perform decontamination.

Table 1: Summary of results from the movie-ad and morality experiments

Experiment   Reduction in test-set   Improvement in clustering   Improvement in
             prediction error        of genre ratings            interrater reliability
Movie ads    5.18%                   4.40% (p < .0005)           --
Morality     4.46%                   --                          2.13% (p < .005)

Acknowledgments

This research was supported by NSF grants BCS-0339103 and BCS-720375.

References

Bagozzi, R. P. (1994). Measurement in marketing research: Basic principles of questionnaire design. In R. P. Bagozzi (Ed.), Principles of marketing research. Massachusetts, USA: Basil Blackwell Ltd.
Bai, E.-W., & Li, D. (2004). Convergence of the iterative Hammerstein system identification algorithm. IEEE Transactions on Automatic Control, 49, 1929-1940.
Bai, E.-W., & Liu, Y. (2006). Least squares solutions of bilinear equations. Systems and Control Letters, 55, 466-472.
Chami-Castaldi, E., Reynolds, N., & Wallace, J. (2008). Individualised rating-scale procedure: A means of reducing response style contamination in survey data? Electronic Journal of Business Research Methods, 6, 9-20.
DeCarlo, L. T., & Cross, D. V. (1990). Sequential effects in magnitude scaling: Models and theory. Journal of Experimental Psychology: General, 119, 375-396.
Furnham, A. (1986). The robustness of the recency effect: Studies using legal evidence. Journal of General Psychology, 113, 351-357.
Hogarth, R. M., & Einhorn, H. J. (1992). Order effects in belief updating: The belief adjustment model. Cognitive Psychology, 24, 1-55.
Laming, D. R. J. (1984). The relativity of "absolute" judgements. Journal of Mathematical and Statistical Psychology, 37, 152-183.
Mozer, M. C., Pashler, H., Wilder, M., Lindsey, R., Jones, M. C., & Jones, M. N. (2010). Decontaminating human judgments to remove sequential dependencies. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, & A. Culotta (Eds.), (pp. 1705-1713). La Jolla, CA: NIPS Foundation.
Mumma, G. H., & Wilson, S. B. (2006). Procedural debiasing of primacy/anchoring effects in clinical-like judgments. Journal of Clinical Psychology, 51, 841-853.
Parducci, A. (1965). Category judgment: A range-frequency model. Psychological Review, 72, 407-418.
Parducci, A. (1968). The relativism of absolute judgment. Scientific American, 219, 84-90.
Petrov, A. A., & Anderson, J. R. (2005). The dynamics of scaling: A memory-based anchor model of category rating and identification. Psychological Review, 112, 383-416.
Stewart, N., Brown, G. D. A., & Chater, N. (2005). Absolute identification by relative judgment. Psychological Review, 112, 881-911.
Wedell, D. H., Parducci, A., & Lane, M. (1990). Reducing the dependence of clinical judgment on the immediate context: Effects of number of categories and type of anchors. Journal of Personality and Social Psychology, 58, 319-329.
Further Studies of a Model for the Development and Regeneration of Eye-Brain Maps

J.D. Cowan & A.E. Friedman
Department of Mathematics, Committee on Neurobiology, and Brain Research Institute, The University of Chicago, 5734 S. Univ. Ave., Chicago, Illinois 60637

Abstract

We describe a computational model of the development and regeneration of specific eye-brain circuits. The model comprises a self-organizing map-forming network which uses local Hebb rules, constrained by (genetically determined) molecular markers. Various simulations of the development and regeneration of eye-brain maps in fish and frogs are described, in particular successful simulations of experiments by Schmidt, Cicerone, and Easter; Meyer; and Yoon.

1 INTRODUCTION

In a previous paper published in last year's proceedings (Cowan & Friedman, 1990) we outlined a new computational model for the development and regeneration of eye-brain maps. We indicated that such a model can simulate the results of a number of the more complicated surgical manipulations carried out on the visual pathways of goldfish and frogs. In this paper we describe in more detail some of these experiments, and our simulations of them.

1.1 EYE-BRAIN MAPS

We refer to figure 1 from the previous paper, which shows the retinal map found in the optic lobe or tectum of a fish or frog. The map is topological, i.e., neighborhood relationships in the retina are preserved in the optic tectum. As is well known, nearly 50 years ago Sperry (1944) showed that such maps are quite precise and specific, in that maps (following optic nerve sectioning and eye rotation) regenerate in such a way that optic nerve fibers reconnect, more or less, to their previous tectal sites. Some 20 years ago Gaze and Sharma (1970) and Yoon (1972) found evidence for plasticity in the expanded and compressed "maps" which regenerate following eye and brain lesions in goldfish. There are now many experiments which indicate that the regeneration of connections involves both specificity and plasticity.

1.2 EXPANDED MAPS

Such properties are seen in a series of more complicated experiments involving the expansion of a half-eye map to a whole tectum. These experiments were carried out by Schmidt, Cicerone and Easter (1978) on goldfish: following the expansion of retinal fibers from a half-eye over an entire (contralateral) tectum, and subsequent sectioning of those fibers, retinal fibers diverted from the other (intact) eye are found to expand over the tectum as if they, too, were from a half-eye. This has been interpreted to imply that the tectum has no intrinsic positional markers to provide cues for incoming fibers, and that all its subsequent markers come from the retina (Chung & Cooke, 1978). However, Schmidt et al. also found that the diverted fibers map normally. Figure 4 of the previous paper shows the result.

1.3 COMPRESSED MAPS

Compression is found in maps from entire eyes to ablated half tecta (Gaze & Sharma, 1970; Sharma & Gaze, 1971; Yoon, 1972). There has been considerable controversy concerning the results. Recently Meyer (1982) has shown that although electrophysiological techniques seem to provide evidence for smoothly expanded and compressed maps, autoradiographic techniques do not. Instead of a smooth map there are patches, and in many cases no real expansion or compression is seen in irradiated sections, at least not initially. An experiment by Yoon (1976) is relevant here.
Yoon noticed that in the early stages of map formation under such conditions, the map is normal. Only after some considerable time does a compressed map form. However, if the fibers are sectioned (cut) and allowed to regenerate a second time, compression is immediate. This result was challenged (Cook, 1979), but it was subsequently confirmed by Schmidt (1983).

1.4 MISMATCHED MAPS

In mismatch experiments, a half retina is confronted with an inappropriate half tectum. In Yoon's classic "mismatch" experiment (Yoon, 1972), fibers from a half-eye fragment are confronted with the "wrong" half-tectum: the resulting map is normally oriented, even though this involves displacement of retinal fibers from near the tectal positions they would normally occupy. About 12 years ago Meyer (1979) carried out another important mismatch experiment in which the left half of an eye and its attached retinal fibers were surgically removed, leaving an intact normal half-eye map. At the same time the right half of the other eye and its attached fibers were removed, and the fibers from the remaining half-eye were allowed to innervate the tectum with the left-half eye map. The result is shown in figure 5 of our previous paper. Fibers from the right half-retina, labelled 1 through 5, would normally make contact with the corresponding tectal neurons. Instead they make contact with neurons 6 through 10, but in a reversed orientation. Meyer interprets this result to mean that optic nerve fibers show a tendency to aggregate with their nearest retinal neighbors.

2 THE MODEL

We introduced our model in last year's NIPS proceedings (Cowan & Friedman, 1990). We here repeat some of the details. Let s_ij be the strength or weight of the synapse made by the ith retinal fiber with the jth tectal cell. Then the following system of differential equations expresses the changes in s_ij:

ds_ij/dt = {λ_j + c_ij [μ_ij + (r_i − α) t_j]} s_ij − (1/2) s_ij ( T^{-1} Σ_k {λ_j + c_kj [μ_kj + (r_k − α) t_j]} s_kj + R^{-1} Σ_l {λ_l + c_il [μ_il + (r_i − α) t_l]} s_il )    (1)

where i = 1, 2, ..., N_r, the number of retinal ganglion cells, and j = 1, 2, ..., N_t, the number of tectal neurons; c_ij is the "stickiness" of the ijth contact, r_i denotes retinal activity, t_j = Σ_i s_ij r_i is the corresponding tectal activity, and α is a constant measuring the rate of receptor destabilization (see Whitelaw & Cowan (1981) for details). In addition, both retinal and tectal elements have fixed lateral inhibitory contacts. The dynamics described by eqn. 1 is such that both Σ_i s_ij and Σ_j s_ij tend to constant values T and R respectively, where T is the total amount of tectal receptor material available per neuron, and R is the total amount of axonal material available per retinal ganglion cell: thus if s_ij increases anywhere in the net, other synapses made by the ith fiber will decrease, as will other synapses on the jth tectal neuron. In the current terminology, this process is referred to as "winner-take-all". In addition, λ_j represents a general nonspecific growth of retinotectal contacts, presumed to be controlled and modulated by nerve growth factor (Campenot, 1982). Recent observations (Davies et al., 1987) indicate that the first fibers to reach a given target neuron stimulate it to produce NGF, which in turn causes more fiber growth. We therefore set λ_j = T^{-1} Σ_i s_ij λ, where λ is a constant.
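A numerical sketch of eqn. (1): one explicit Euler step of the weight dynamics, including the two competition terms that drive Σ_i s_ij toward T and Σ_j s_ij toward R. The vectorized layout, the step size, and the non-negativity clip are assumptions made for illustration, not the authors' simulation code.

```python
import numpy as np

def euler_step(s, c, mu, r, lam, alpha, T, R, dt=0.01):
    """One Euler step of eqn. (1). s, c, mu: (Nr, Nt) arrays of synaptic
    weights, stickiness coefficients, and random depolarizations; r: (Nr,)
    retinal activities; lam: (Nt,) nonspecific growth terms lambda_j."""
    t = s.T @ r                                    # tectal activity t_j = sum_i s_ij r_i
    g = (lam[None, :] + c * (mu + (r[:, None] - alpha) * t[None, :])) * s
    ds = g - 0.5 * s * (g.sum(axis=0, keepdims=True) / T     # competition for tectal receptor material
                        + g.sum(axis=1, keepdims=True) / R)  # competition for axonal material
    return np.clip(s + dt * ds, 0.0, None)         # weights stay non-negative
```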
Σ_i s_ij is the instantaneous amount of receptor material used to make contacts, and T is the total amount available, so λ_j → λ as the jth neuron becomes innervated. The coefficient μ_ij represents a postulated random depolarization which occurs at synapses due to the quantal release of neurotransmitter, the analog of end-plate potentials (Walmsley et al., 1987). Thus even if r_i = 0, map formation can still occur. However, the resulting maps are not as sharp as those formed in the presence of retinal activity. Of course if μ_ij = 0, as might be the case if α-bungarotoxin is administered, then ds_ij/dt = λ_j (1 − s_ij) and s_ij → 1, i.e., all synapses are of equal strength. It is the coefficients c_ij which determine the nature of the solution to eqn. 1. These coefficients express the contact adhesion strengths of synapses. We suppose that such adhesions are generated by fixed distributions of molecules embedded in neural surface membranes. We postulate that the tips of retinal axons and the surfaces of tectal cells display at least two molecular species, labelled a and b, such that c_ij = Σ β_ab a_i b_j, where the sum is over all possible combinations aa, ab, etc. A number of possibilities exist in the choice of β_ab and of the spatial distribution of a and b. One possibility that is consistent with most of the assays which have been carried out (Trisler & Collins, 1987; Bonhoffer and Huff, 1980; Halfter, Claviez & Schwarz, 1981; Boenhoffer & Huff, 1985) is β_aa = β_bb > 0 > β_ab = β_ba, in which each species prefers itself and repels the other, the so-called homophilic case, with a_i and b_i as shown in figure 1.

Figure 1: Postulated distribution of sticky molecules in the retina. A similar distribution is supposed to exist in the tectum.

The mismatch and compound-eye experiments indicate that map formation depends in part on a tendency for fibers to stick to their retinal neighbors, in addition to their tendency to stick to tectal cell surfaces. We therefore append to c_ij the term Σ'_k s̄_kj f_ik, where s̄_kj is a local average of s_kj and its nearest tectal neighbors, f_ik measures the mutual stickiness of the ith and kth retinal fibers, and Σ'_k means Σ_{k ≠ i}. Fig. 2 shows the postulated form of f_ik. (Again we suppose this stickiness is produced by the interaction of two molecular species, etc.; specifically, the neural contact adhesion molecules (nCAM) of the sort discovered by Edelman (1983), which seem to mediate the fiber-fiber adhesion observed in tissue cultures by Boenhoffer & Huff (1985), but we do not go into the details.)

Figure 2: The f_ik surface. Retinal fibers are attracted only to themselves or to their immediate retinal neighbors.

Meyer's mismatch experiment also indicates that existing fiber projections tend to exclude other fibers, especially inappropriate ones, from innervating occupied areas. One way to incorporate such geometric effects is to suppose that each fiber which establishes contact with a tectal neuron occludes tectal markers there by a factor proportional to its synaptic weight s_ij. Thus we subtract from the coefficient c_ij a fraction proportional to T^{-1} Σ'_k s_kj. With the introduction of occlusion effects and fiber-fiber interactions, it becomes apparent that debris in the form of degenerating fiber fragments adhering to tectal cells, following optic nerve sectioning, can also influence map formation.
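A sketch of the homophilic scheme: build c_ij = Σ β_ab a_i b_j from opposing linear marker gradients, with like species attracting (β_aa = β_bb > 0) and unlike species repelling (β_ab = β_ba < 0). The linear gradient shapes and unit β magnitudes are assumptions chosen to match figure 1 qualitatively.

```python
import numpy as np

def adhesion_matrix(Nr, Nt, beta_same=1.0, beta_diff=-1.0):
    """Homophilic contact adhesions c_ij for Nr retinal fibers, Nt tectal cells."""
    a_r = np.linspace(0.0, 1.0, Nr); b_r = 1.0 - a_r     # retinal gradients of species a, b
    a_t = np.linspace(0.0, 1.0, Nt); b_t = 1.0 - a_t     # tectal gradients of species a, b
    return (beta_same * (np.outer(a_r, a_t) + np.outer(b_r, b_t))
            + beta_diff * (np.outer(a_r, b_t) + np.outer(b_r, a_t)))
```

With this choice, c_ij is largest along the diagonal i ≈ j, which is what biases the dynamics toward a normally oriented map.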
Incoming nerve fibers can stick to debris, and debris can occlude markers. There are in fact four possibilities: debris can occlude tectal markers, markers on other debris, or markers on incoming fibers; and incoming fibers can occlude markers on debris. All these possibilities can be included in the dependence of c_ij on s_ij, s_kj, etc. Note that such debris is supposed to decay, and eventually disappear.

3 SIMULATIONS

The model which results from all these modifications and extensions is much more complex in its mathematical structure than any of the previous models. However, computer simulation studies show it to be capable of correctly reproducing the observed details of almost all the experiments cited above. For purposes of illustration we consider the problem of connecting a line of N_r retinal cells to a line of N_t tectal cells. The resulting maps can then be represented by two-dimensional matrices, in which the area of the square at the ijth intersection represents the weight of the synapse between the ith retinal fiber and the jth tectal cell. The normal retino-tectal map is represented by large squares along the matrix diagonal (see Whitelaw & Cowan (1981) for terminology and further details).

3.1 THE SCHMIDT ET AL. EXPERIMENT

Figure 3, for example, shows a simulation of the retinal "induction" experiments of Schmidt et al. This simulation generated both an expanded map and a nearly normal patch, the two interacting to form patches. These effects occur because some incoming retinal fibers stick to debris left over from the previous expanded map, and other fibers stick to non-occluded tectal markers. The fiber-fiber markers control the regeneration of the expanded map, whereas the retino-tectal markers control the formation of the nearly normal map.

Figure 3: Simulation of the Schmidt et al. retinal induction experiment. A nearly normal map is intercalated into an expanded map.

Figure 4: Simulation of the Yoon second compression experiment (see text for details).

3.2 THE YOON SECOND COMPRESSION EXPERIMENT

Yoon's demonstration of immediate second compression can also be simulated. Figure 4 shows details of the simulation. At an early stage just after the first cut, both a normal and a compressed map are forming. The normal map eventually disappears, leaving only a compressed map. After the second cut, however, a compressed map forms immediately. Again it is the debris carrying fiber-fiber markers that controls map formation.

3.3 THE MEYER MISMATCH EXPERIMENT

It is evident that fiber-fiber interactions are important in controlling map formation. The Meyer mismatch experiment shows this quite clearly. A simulation of this experiment also shows the effect. If f_ik, the mutual stickiness of neighboring fibers, is not strong enough, retino-tectal markers dominate, and the mismatched map forms with normal polarity. However, if f_ik is large enough, Meyer's result is found: the mismatched map forms with a reversed polarity. Figure 5 shows the details.
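As a small illustrative readout of such matrix maps (not the authors' code), each fiber's dominant tectal target can be taken as the argmax of its row of weights; a monotone target sequence then corresponds to a normally or reversely oriented map, and anything else to a patchy one.

```python
import numpy as np

def map_orientation(s):
    """s: (Nr, Nt) final weight matrix from a simulation run."""
    targets = s.argmax(axis=1)       # strongest tectal site per retinal fiber
    d = np.diff(targets)
    if np.all(d >= 0):
        return "normal polarity"
    if np.all(d <= 0):
        return "reversed polarity"
    return "patchy / mixed map"
```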
Figure 5: Simulation of the Meyer mismatch experiment (see text for details).

4 CONCLUSIONS

The model we have outlined generates correctly oriented retinotopic maps. It permits the simulation of a large number of experiments, and provides a consistent explanation of almost all of them. In particular it shows how the apparent induction of central markers by peripheral effects, as seen in the Schmidt et al. experiment, can be produced by the effects of debris, as can Yoon's observations of immediate second compression. Affinity markers are seen to play a key role in such effects, as they do in the polarity reversal seen in Meyer's experiment. In summary, much of the complexity of the many regeneration experiments which have been carried out in the last fifty years can be understood in terms of the effects produced by contact adhesion molecules with differing affinities, acting to control an activity-dependent self-organizing mechanism.

Acknowledgements

We thank The University of Chicago Brain Research Foundation for partial support of this work.

References

Boenhoffer, F. & Huf, J. (1980), Nature, 288, 162-164; (1985), Nature, 315, 409-411.
Campenot, R.B. (1982), Develop. Biol., 93, 1.
Chung, S.-H. & Cooke, J.E. (1978), Proc. Roy. Soc. Lond. B, 201, 335-373.
Cowan, J.D. & Friedman, A.E. (1990), Advances in NIPS, 2, Ed. D.S. Touretzky, Morgan Kaufmann, 92-99.
Cook, J.E. (1979), J. Embryol. Exp. Morphol., 52, 89-103.
Davies, A.M., Bandtlow, C., Heumann, R., Korsching, S., Rohrer, H. & Thoenen, H. (1987), Nature, 326, 353-358.
Edelman, G.M. (1983), Science, 219, 450-454.
Gaze, R.M. & Sharma, S.C. (1970), Exp. Brain Res., 10, 171-181.
Halfter, W., Claviez, M. & Schwarz, U. (1981), Nature, 292, 67-70.
Meyer, R.L. (1979), Science, 205, 819-821; (1982), Curr. Top. Develop. Biol., 17, 101-145.
Schmidt, J.T. (1983), J. Embryol. Exp. Morphol., 77, 39-51.
Schmidt, J.T., Cicerone, C.M. & Easter, S.S. (1978), J. Comp. Neurol., 177, 257-288.
Sharma, S.C. & Gaze, R.M. (1971), Arch. Ital. Biol., 109, 357-366.
Sperry, R.W. (1944), J. Neurophysiol., 7, 57-69.
Trisler, D. & Collins, F. (1987), Science, 237, 1208-1210.
Walmsley, B., Edwards, F.R. & Tracey, D.J. (1987), J. Neurosci., 7(4), 1037-1046.
Whitelaw, V.A. & Cowan, J.D. (1981), J. Neurosci., 1(12), 1369-1387.
Yoon, M. (1972), Amer. Zool., 12, 106; Exp. Neurol., 37, 451-462; (1976), J. Physiol. Lond., 257, 621-643.
High-Dimensional Graphical Model Selection: Tractable Graph Families and Necessary Conditions

Anima Anandkumar, Dept. of EECS, Univ. of California, Irvine, CA, 92697, a.anandkumar@uci.edu
Vincent Y.F. Tan, Dept. of ECE, Univ. of Wisconsin, Madison, WI, 53706, vtan@wisc.edu
Alan S. Willsky, Dept. of EECS, Massachusetts Inst. of Technology, Cambridge, MA, 02139, willsky@mit.edu

Abstract

We consider the problem of Ising and Gaussian graphical model selection given n i.i.d. samples from the model. We propose an efficient threshold-based algorithm for structure estimation based on conditional mutual information thresholding. This simple local algorithm requires only low-order statistics of the data and decides whether two nodes are neighbors in the unknown graph. We identify graph families for which the proposed algorithm has low sample and computational complexities. Under some transparent assumptions, we establish that the proposed algorithm is structurally consistent (or sparsistent) when the number of samples scales as n = Ω(J_min^{-4} log p), where p is the number of nodes and J_min is the minimum edge potential. We also develop novel non-asymptotic techniques for obtaining necessary conditions for graphical model selection.

Keywords: Graphical model selection, high-dimensional learning, local-separation property, necessary conditions, typical sets, Fano's inequality.

1 Introduction

The formalism of probabilistic graphical models can be employed to represent dependencies among a large set of random variables in the form of a graph [1]. An important challenge in the study of graphical models is to learn the unknown graph using samples drawn from the graphical model. The general structure estimation problem is NP-hard [2]. In the high-dimensional regime, structure estimation is even more difficult since the number of available observations is typically much smaller than the number of dimensions (or variables). One of the goals is to characterize tractable model classes for which consistent graphical model selection can be guaranteed with low computational and sample complexities. The seminal work by Chow and Liu [3] proposed an efficient algorithm for maximum-likelihood structure estimation in tree-structured graphical models by reducing the problem to a maximum-weight spanning tree problem. A more recent approach for efficient structure estimation is based on convex relaxation [4-6]. The success of such methods typically requires certain "incoherence" conditions to hold. However, these conditions are NP-hard to verify for general graphical models. We adopt an alternative paradigm in this paper and instead analyze a simple local algorithm which requires only low-order statistics of the data and makes decisions on whether two nodes are neighbors in the unknown graph. We characterize the class of Ising and Gaussian graphical models for which we can guarantee efficient and consistent structure estimation using this simple algorithm. The class of graphs is based on a local-separation property and includes many well-known random graph families, including locally tree-like graphs such as large-girth graphs, the Erdős-Rényi random graphs [7] and power-law graphs [8], as well as graphs with short cycles such as bounded-degree graphs and small-world graphs [9]. These graphs are especially relevant in modeling social networks [10, 11].
1.1 Summary of Results

We propose an algorithm for structure estimation, termed conditional mutual information thresholding (CMIT), which computes the minimum empirical conditional mutual information of a given node pair over conditioning sets of bounded cardinality η. If the minimum exceeds a given threshold (depending on the number of samples n and the number of nodes p), the node pair is declared an edge. This test has a low computational complexity of O(p^{η+2}) and requires only low-order statistics (up to order η + 2) when η is small. The parameter η is an upper bound on the size of local vertex separators in the graph, and is small for many common graph families, as discussed earlier. We establish that under a set of mild and transparent assumptions, structure learning is consistent in high dimensions for CMIT when the number of samples scales as n = Ω(J_min^{-4} log p), for a p-node graph, where J_min is the minimum (absolute) edge potential in the model. We also develop novel techniques to obtain necessary conditions for consistent structure estimation of Erdős-Rényi random graphs. We obtain non-asymptotic bounds on the number of samples n in terms of the expected degree and the number of nodes of the model. The techniques employed are information-theoretic in nature and combine the use of Fano's inequality and the so-called asymptotic equipartition property. Our results have many ramifications: we explicitly characterize the tradeoff between various graph parameters such as the maximum degree, girth, and the strength of edge potentials for efficient and consistent structure estimation. We draw connections between structure learning and the statistical physical properties of the model: learning is fundamentally related to the absence of long-range dependencies in the model, i.e., the regime of correlation decay. The notion of correlation decay on Ising models has been extensively characterized [12], but its connections to structure learning have only been explored in a few recent works (e.g., [13]). This work establishes that consistent structure learning is feasible under a slightly weaker condition than the usual notion of correlation decay for a rich class of graphs. Moreover, we show that the Gaussian analog of correlation decay is the so-called walk-summability condition [14]. This is a somewhat unexpected and surprising connection since walk-summability is a condition to characterize the performance of inference algorithms such as loopy belief propagation (LBP). Our work demonstrates that both successful inference and learning hinge on similar properties of the Gaussian graphical model.

2 Preliminaries

2.1 Graphical Models

A p-dimensional graphical model is a family of p-dimensional multivariate distributions Markov on some undirected graph G = (V, E) [1]. Each node in the graph i ∈ V is associated with a random variable X_i taking values in a set X. We consider both discrete (in particular Ising) models, where X is a finite set, and Gaussian models, where X = R. The set of edges E captures the set of conditional-independence relationships among the random variables. More specifically, the vector of random variables X := (X_1, ..., X_p) with joint distribution P satisfies the global Markov property with respect to a graph G if, for all disjoint sets A, B ⊂ V, we have

P(x_A, x_B | x_S) = P(x_A | x_S) P(x_B | x_S),    (1)

where the set S is a separator between A and B. (A set S ⊂ V is a separator of sets A and B if the removal of nodes in S separates A and B into distinct components.) The Hammersley-Clifford theorem states that under the positivity condition, given by P(x) > 0 for all x ∈ X^p [1], the model P satisfies the global Markov property with respect to a graph G if and only if it factorizes according to the cliques of G.
We consider the class of Ising models, i.e., binary pairwise models which factorize according to the edges of the graph. More precisely, the probability mass function (pmf) of an Ising model is

P(x) ∝ exp( (1/2) x^T J_G x + h^T x ),    x ∈ {−1, 1}^p.    (2)

For Gaussian graphical models, the probability density function (pdf) is of the form

f(x) ∝ exp( −(1/2) x^T J_G x + h^T x ),    x ∈ R^p.    (3)

In both cases, the matrix J_G is called the potential or information matrix and h the potential vector. For both Ising and Gaussian models, the sparsity pattern of the matrix J_G corresponds to that of the graph G, i.e., J_G(i, j) = 0 if and only if (i, j) ∉ G. We assume that the potentials are uniformly bounded above and below:

J_min ≤ |J_G(i, j)| ≤ J_max,    for all (i, j) ∈ G.    (4)

Our results on structure learning depend on J_min and J_max, which is fairly natural: intuitively, models with edge potentials which are "too small" or "too large" are harder to learn than those with comparable potentials, i.e., homogeneous models. Notice that the conventional parameterizations for the Ising models in (2) and the Gaussian models in (3) are slightly different. Without loss of generality, for the Ising model we assume that J(i, i) = 0 for all i ∈ V. On the other hand, in the Gaussian setting we assume that the diagonal elements of the inverse covariance (or information) matrix J_G are normalized to unity (J(i, i) = 1, i ∈ V), and that J_G can be decomposed as J_G = I − R_G, where R_G is the matrix of partial correlation coefficients [14]. We consider the problem of structure learning, which involves the estimation of the edge set of the graph G given n i.i.d. samples X_1, ..., X_n drawn either from the Ising model in (2) or the Gaussian model in (3). We consider the high-dimensional regime, where both p and n grow simultaneously; typically, the growth of p is much faster than that of n.
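For concreteness, here is a minimal Gibbs-sampling sketch for drawing an approximate sample from the Ising model in (2); under that parameterization, the conditional of each X_i given the rest is P(X_i = +1 | x_{−i}) = 1 / (1 + exp(−2(h_i + Σ_j J_ij x_j))). The sweep count and initialization are arbitrary illustrative choices.

```python
import numpy as np

def gibbs_ising(J, h, n_sweeps=1000, rng=None):
    """Approximate sample from P(x) ~ exp(0.5 x'Jx + h'x), x in {-1,+1}^p."""
    rng = rng or np.random.default_rng(0)
    p = len(h)
    x = rng.choice([-1, 1], size=p)
    for _ in range(n_sweeps):
        for i in range(p):
            field = h[i] + J[i] @ x - J[i, i] * x[i]   # local field; J_ii = 0 by assumption
            prob_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1 if rng.random() < prob_plus else -1
    return x
```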
2.2 Tractable Graph Families

We consider the class of graphical models Markov on a graph G_p belonging to some ensemble G(p) of graphs with p nodes. We emphasize that in our formulation the graph ensemble G(p) can be either deterministic or random; in the latter case, we also specify a probability measure over the set of graphs in G(p). In the random setting, we say that almost every (a.e.) graph G ∈ G(p) satisfies a certain property Q (for example, connectedness) if lim_{p→∞} P[G_p satisfies Q] = 1. In other words, the property Q holds asymptotically almost surely (a.a.s.) with respect to the random graph ensemble G(p). (The term a.a.s. does not apply to deterministic graph ensembles G(p), where no randomness is assumed; in that setting, we assume that the property Q holds for every graph in the ensemble.) Intuitively, this means that graphs that have a vanishing probability of occurrence as p → ∞ are ignored. Our conditions and theoretical guarantees will be based on this notion for random graph ensembles. We now characterize the ensemble of graphs amenable to consistent structure estimation. For γ ∈ N, let B_γ(i; G) denote the set of vertices within distance γ from node i with respect to graph G. Let H_{γ,i} := G(B_γ(i; G)) denote the subgraph of G spanned by B_γ(i; G), in which, in addition, we retain the nodes not in B_γ(i; G) (and remove the corresponding edges).

Definition 1 (γ-Local Separator) Given a graph G, a γ-local separator S_γ(i, j) between i and j, for (i, j) ∉ G, is a minimal vertex separator (i.e., a separator of smallest cardinality) with respect to the subgraph H_{γ,i}. The parameter γ is referred to as the path threshold for local separation.

In other words, the γ-local separator S_γ(i, j) separates nodes i and j with respect to paths in G of length at most γ. We now characterize the ensemble of graphs based on the size of local separators.

Definition 2 ((η, γ)-Local Separation Property) An ensemble of graphs G(p; η, γ) satisfies the (η, γ)-local separation property if, for a.e. G_p ∈ G(p; η, γ),

max_{(i,j) ∉ G_p} |S_γ(i, j)| ≤ η.    (5)

In Section 3, we propose an efficient algorithm for graphical model selection when the underlying graph belongs to a graph ensemble G(p; η, γ) with sparse local node separators (i.e., with small η). Below we provide examples of three graph families which satisfy (5) for small η.

(Example 1) Bounded Degree: Any (deterministic or random) ensemble of degree-bounded graphs G_Deg(p, Δ) satisfies the (η, γ)-local separation property with η = Δ and every γ ∈ N. Thus, our algorithm consistently recovers graphs with small (bounded) degrees (Δ = O(1)). This case was considered previously in several works, e.g., [15, 16].

(Example 2) Bounded Local Paths: The (η, γ)-local separation property also holds when there are at most η paths of length at most γ in G between any two nodes (henceforth termed the (η, γ)-local paths property). In other words, there are at most η − 1 overlapping cycles of length smaller than 2γ (two cycles are said to overlap if they have common vertices). Thus, a graph with girth g (the length of the shortest cycle) satisfies the (η, γ)-local separation property with η = 1 and γ = g. For example, the bipartite Ramanujan graph [17, p. 107] and the random Cayley graphs [18] have large girths. The girth condition can be weakened to allow for a small number of short cycles, while not allowing for overlapping cycles. Such graphs are termed locally tree-like. For instance, the ensemble of Erdős-Rényi graphs G_ER(p, c/p), where an edge between any node pair appears with probability c/p, independently of other node pairs, is locally tree-like. It can be shown that G_ER(p, c/p) satisfies the (η, γ)-local separation property with η = 2 and γ ≤ log p / (4 log c) a.a.s. Similar observations apply for the more general scale-free or power-law graphs [8, 19]. Along similar lines, the ensemble of Δ-random regular graphs, denoted by G_Reg(p, Δ), which is the uniform ensemble of regular graphs with degree Δ, has no overlapping cycles of length at most Θ(log_{Δ−1} p) a.a.s. [20, Lemma 1].

(Example 3) Small-World Graphs: The class of hybrid or augmented graphs [8, Ch. 12] consists of graphs which are the union of two graphs: a "local" graph having short cycles and a "global" graph having small average distances between nodes. Since the hybrid graph is the union of these local and global graphs, it simultaneously has large degrees and short cycles. The simplest model G_Watts(p, d, c/p), first studied by Watts and Strogatz [9], consists of the union of a d-dimensional grid and an Erdős-Rényi random graph with parameter c. One can check that a.e. graph G ∈ G_Watts(p, d, c/p) satisfies the (η, γ)-local separation property in (5) with η = d + 2 and γ ≤ log p / (4 log c). Similar observations apply for the more general hybrid graphs studied in [8, Ch. 12].
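As a sketch of Definition 1 in code, restrict attention to the radius-γ ball around node i and take a minimum vertex cut separating i from j inside it. Treating networkx's ego graph as H_{γ,i}, and assuming (i, j) is a non-edge as in the definition, are implementation assumptions.

```python
import networkx as nx

def local_separator(G, i, j, gamma):
    """Gamma-local separator S_gamma(i, j) for a non-adjacent pair (i, j)."""
    H = nx.ego_graph(G, i, radius=gamma)      # B_gamma(i; G) with its induced edges
    if j not in H or not nx.has_path(H, i, j):
        return set()                          # i and j already separated within the ball
    return nx.minimum_node_cut(H, i, j)       # smallest vertex set cutting i from j
```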
3 Method and Guarantees

3.1 Assumptions

(A1) Scaling Requirements: We consider the asymptotic setting where both the number of variables (nodes) $p$ and the number of samples $n$ go to infinity. We assume that the parameters $(n, p, J_{\min})$ scale in the following fashion (the notations $\Omega(\cdot)$, $\Theta(\cdot)$, $o(\cdot)$ and $O(\cdot)$ refer to asymptotics as the number of variables $p \to \infty$):

$$n = \Omega\big(J_{\min}^{-4}\,\log p\big). \quad (6)$$

We require the number of nodes $p \to \infty$ in order to exploit the local separation properties of the class of graphs under consideration.

(A2a) Strict Walk-summability for Gaussian Models: The Gaussian graphical model Markov on almost every $G_p \in \mathcal{G}(p)$ is $\alpha$-walk summable, i.e.,

$$\|\bar{R}_{G_p}\| \le \alpha < 1, \quad (7)$$

where $\alpha$ is a constant (i.e., not a function of $p$) and $\bar{R}_{G_p} := [|R_{G_p}(i,j)|]$ is the entrywise absolute value of the partial correlation matrix $R_{G_p}$. In addition, $\|\cdot\|$ denotes the spectral norm, which for symmetric matrices is given by the maximum absolute eigenvalue.

(A2b) Bounded Potentials for Ising Models: The Ising model Markov on a.e. $G_p \in \mathcal{G}(p)$ has its maximum absolute potential below a threshold $J^*$. More precisely,

$$\alpha := \frac{\tanh J_{\max}}{\tanh J^*} < 1. \quad (8)$$

Furthermore, the ratio $\alpha$ in (8) is not a function of $p$. See [21, 22] for an explicit characterization of $J^*$ for specific graph ensembles.

(A3) Local-Separation Property: We assume that the ensemble of graphs $\mathcal{G}(p;\eta,\gamma)$ satisfies the $(\eta,\gamma)$-local separation property with $\eta, \gamma \in \mathbb{N}$ satisfying

$$\eta = O(1), \qquad J_{\min}^{-1}\,\alpha^{\gamma} = \tilde{o}(1), \quad (9)$$

where $\alpha$ is given by (7) for Gaussian models and by (8) for Ising models. (We say that two sequences $f(p)$, $g(p)$ satisfy $f(p) = \tilde{o}(g(p))$ if $f(p)\log p / g(p) \to 0$ as $p \to \infty$.) We can weaken the second requirement in (9) to $J_{\min}^{-1}\alpha^{\gamma} = o(1)$ for deterministic graph families (rather than random graph ensembles).

(A4) Edge Potentials: The edge potentials $\{J_{i,j},\ (i,j) \in G\}$ of the Ising model are assumed to be generically drawn from $[-J_{\max}, -J_{\min}] \cup [J_{\min}, J_{\max}]$, i.e., our results hold except for a set of Lebesgue measure zero. We also characterize specific classes of models where this assumption can be removed and all choices of edge potentials are allowed. See [21, 22] for details.

The above assumptions are very general and hold for a rich class of models. Assumption (A1) stipulates the scaling of the number of samples required for consistent structure estimation. Assumptions (A2) and (A4) impose constraints on the model parameters. Assumption (A3) requires the local-separation property described in Section 2.2, with the path threshold $\gamma$ satisfying (9). We now provide examples of graphs where the above assumptions are met.

Gaussian Models on Girth-bounded Graphs: Consider the ensemble of graphs $\mathcal{G}_{\mathrm{Deg,Girth}}(p;\Delta,g)$ with maximum degree $\Delta$ and girth $g$. We now derive a relationship between $\Delta$ and $g$ for the above assumptions to hold. It can be established that for the walk-summability condition in (A2a) to hold for Gaussian models, we require $J_{\max} = O(1/\Delta)$. When the minimum edge potential achieves this bound ($J_{\min} = \Theta(1/\Delta)$), a sufficient condition for (A3) to hold is given by

$$\Delta\,\alpha^{g} = o(1). \quad (10)$$

In (10), we notice a natural tradeoff between the girth and the maximum degree of the graph ensemble for successful estimation under our framework: graphs with large degrees can be learned efficiently if their girths are large. Indeed, in the extreme case of trees, which have infinite girth, (10) places no constraint on the node degrees for consistent graphical model selection; recall that the Chow-Liu algorithm [3] is an efficient method for model selection on tree-structured graphical models. Note that the condition in (10) allows the maximum degree bound $\Delta$ to grow with the number of nodes as long as the girth $g$ also grows appropriately. For example, if the maximum degree scales as $\Delta = O(\mathrm{poly}(\log p))$ and the girth scales as $g = \Omega(\log\log p)$, then (10) is satisfied. This implies that graphs with fairly large degrees and short cycles can be recovered consistently using the algorithm in Section 3.2.
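Both the walk-summability assumption (A2a) and the tradeoff condition (10) are easy to evaluate numerically for a given information matrix; a toy check on a length-$p$ cycle (degree $\Delta = 2$, girth $g = p$), assuming numpy is available:

```python
import numpy as np

def walk_summability_alpha(J):
    """Spectral norm of the entrywise absolute matrix R_bar = |I - J_G|;
    the model is alpha-walk-summable iff this value is < 1, cf. (7)."""
    Rbar = np.abs(np.eye(J.shape[0]) - J)
    return np.linalg.norm(Rbar, 2)   # = max |eigenvalue|, Rbar symmetric

# Toy check on a p-cycle with uniform potential rho on every edge:
p, rho = 30, 0.3
J = np.eye(p)
for i in range(p):
    J[i, (i + 1) % p] = J[(i + 1) % p, i] = -rho
alpha = walk_summability_alpha(J)    # here alpha = 2 * rho
print(alpha, 2 * alpha ** p)         # Delta * alpha**g, the quantity in (10)
```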
Gaussian Models on Erdős-Rényi and Small-World Graphs: We can also conclude that a.e. Erdős-Rényi graph $G \in \mathcal{G}_{\mathrm{ER}}(p,c/p)$ satisfies (9) with $\eta = 2$ when $c = O(\mathrm{poly}(\log p))$, under the best possible scaling for $J_{\min}$ subject to the walk-summability constraint in (7). Similarly, the small-world ensemble $\mathcal{G}_{\mathrm{Watts}}(p,d,c/p)$ satisfies (9) with $\eta = d + 2$, when $d = O(1)$ and $c = O(\mathrm{poly}(\log p))$.

Ising Models: For Ising models, the best possible scaling of the minimum edge potential $J_{\min}$ is $J_{\min} = \Theta(J^*)$, for the threshold $J^*$ defined in (8). For the ensemble of graphs $\mathcal{G}_{\mathrm{Deg,Girth}}(p;\Delta,g)$ with degree $\Delta$ and girth $g$, we can establish that $J^* = \Theta(1/\Delta)$. When the minimum edge potential achieves this threshold, i.e., $J_{\min} = \Theta(1/\Delta)$, we end up with a requirement similar to (10) for Gaussian models. Similarly, for both the Erdős-Rényi graph ensemble $\mathcal{G}_{\mathrm{ER}}(p,c/p)$ and the small-world ensemble $\mathcal{G}_{\mathrm{Watts}}(p,d,c/p)$, we can establish that the threshold is $J^* = \Theta(1/c)$, and thus the observations made for the Gaussian setting hold for the Ising model as well.

3.2 Conditional Mutual Information Threshold Test

Our structure learning procedure is known as the Conditional Mutual Information Threshold Test (CMIT). Let $\mathrm{CMIT}(\mathbf{x}^n; \xi_{n,p}, \eta)$ be the output edge set from CMIT given $n$ i.i.d. samples $\mathbf{x}^n$, a threshold $\xi_{n,p}$ and a constant $\eta \in \mathbb{N}$. The conditional mutual information test proceeds as follows: one computes the empirical conditional mutual information (obtained by first computing the empirical distribution and then computing its conditional mutual information) for each node pair $(i,j) \in V^2$ and finds the conditioning set which achieves the minimum over all subsets of cardinality at most $\eta$,

$$\min_{S \subseteq V\setminus\{i,j\},\ |S|\le\eta} \widehat{I}(X_i; X_j \mid \mathbf{X}_S), \quad (11)$$

where $\widehat{I}(X_i; X_j \mid \mathbf{X}_S)$ denotes the empirical conditional mutual information of $X_i$ and $X_j$ given $\mathbf{X}_S$. If the above minimum value exceeds the given threshold $\xi_{n,p}$, then the node pair is declared an edge. Recall that the conditional mutual information satisfies $I(X_i; X_j \mid \mathbf{X}_S) = 0$ iff, given $\mathbf{X}_S$, the random variables $X_i$ and $X_j$ are conditionally independent. Thus, (11) seeks to identify non-neighbors, i.e., node pairs which can be separated in the unknown graph $G$. However, since we constrain the conditioning set to $|S| \le \eta$ in (11), the optimal conditioning set may not form an exact separator. Despite this restriction, we establish that the above test can correctly classify the edges and non-neighbors using a suitable threshold $\xi_{n,p}$, subject to the assumptions (A1)-(A4).
The threshold $\xi_{n,p}$ is chosen as a function of the number of nodes $p$, the number of samples $n$, and the minimum edge potential $J_{\min}$ as follows:

$$\xi_{n,p} = O\big(J_{\min}^2\big), \qquad \xi_{n,p} = \Omega\big(\alpha^{2\gamma}\big), \qquad \xi_{n,p} = \Omega\Big(\sqrt{\tfrac{\log p}{n}}\Big), \quad (12)$$

where $\gamma$ is the path threshold in (5) for $(\eta,\gamma)$-local separation to hold, and $\alpha$ is given by (7) and (8). The computational complexity of the CMIT algorithm is $O(p^{\eta+2})$; thus the algorithm is computationally efficient for small $\eta$. Moreover, the algorithm only uses statistics of order $\eta + 2$, in contrast to the convex-relaxation approaches [4-6], which typically use higher-order statistics.

Theorem 1 (Structural consistency of CMIT) Assume that (A1)-(A4) hold. Given a Gaussian graphical model or an Ising model Markov on a graph $G_p \in \mathcal{G}(p;\eta,\gamma)$, $\mathrm{CMIT}(\mathbf{x}^n;\xi_{n,p},\eta)$ is structurally consistent. In other words,

$$\lim_{n,p\to\infty} P\big[\mathrm{CMIT}(\{\mathbf{x}^n\};\xi_{n,p},\eta) \neq G_p\big] = 0. \quad (13)$$

Consistency guarantee: The CMIT algorithm consistently recovers the structure of the graphical models with probability tending to one, and the probability measure in (13) is with respect to both the graph and the samples.

Sample complexity: The sample complexity of CMIT scales as $\Omega(J_{\min}^{-4}\log p)$ and is favorable when the minimum edge potential $J_{\min}$ is large. This is intuitive, since the edges have stronger potentials when $J_{\min}$ is large. On the other hand, $J_{\min}$ cannot be arbitrarily large, due to assumption (A2); the minimum sample complexity is attained when $J_{\min}$ achieves this upper bound. It can be established that for both Gaussian and Ising models Markov on a degree-bounded graph ensemble $\mathcal{G}_{\mathrm{Deg}}(p,\Delta)$ with maximum degree $\Delta$ and satisfying assumption (A3), the minimum sample complexity is $n = \Omega(\Delta^4\log p)$, i.e., when $J_{\min} = \Theta(1/\Delta)$. We have improved guarantees for the Erdős-Rényi random graphs $\mathcal{G}_{\mathrm{ER}}(p,c/p)$. In the Gaussian setting, the minimum sample complexity can be improved to $n = \Omega(\Delta^2\log p)$, i.e., when $J_{\min} = \Theta(1/\sqrt{\Delta})$, where the maximum degree scales as $\Delta = \Theta(\log p/\log\log p)$ [7]. On the other hand, for Ising models, the minimum sample complexity can be further improved to $n = \Omega(c^4\log p)$, i.e., when $J_{\min} = \Theta(J^*) = \Theta(1/c)$. Note that $c/2$ is the expected degree of the $\mathcal{G}_{\mathrm{ER}}(p,c/p)$ ensemble. Specifically, when the Erdős-Rényi random graphs have a bounded average degree ($c = O(1)$), we obtain a minimum sample complexity of $n = \Omega(\log p)$ for structure estimation of Ising models. Recall that the sample complexity of learning tree models is $\Omega(\log p)$ [23]; thus, the complexity of learning sparse Erdős-Rényi random graphs is akin to that of learning trees in certain parameter regimes.

The sample complexity of structure estimation can be improved to $n = \Omega(J_{\min}^{-2}\log p)$ by employing empirical conditional covariances for Gaussian models, and empirical conditional variation distances for Ising models, in place of empirical conditional mutual information. However, to present a unified framework for Gaussian and Ising models, we present the CMIT here. See [21, 22] for details.

Comparison with convex-relaxation approaches: We now compare our approach for structure learning with convex-relaxation methods. The work by Ravikumar et al. [5] employs an $\ell_1$-penalized likelihood estimator and, under the so-called incoherence conditions, has sample complexity $n = \Omega((\Delta^2 + J_{\min}^{-2})\log p)$. Our sample complexity (using conditional covariances), $n = \Omega(J_{\min}^{-2}\log p)$, is the same in terms of $J_{\min}$, while there is no explicit dependence on the maximum degree $\Delta$. Similarly, we match the neighborhood-based regression method of Meinshausen and Bühlmann [24], under more transparent conditions.
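For jointly Gaussian variables the conditional mutual information has the closed form $-\tfrac{1}{2}\log(1-\rho_{ij\cdot S}^2)$ (in nats), where $\rho_{ij\cdot S}$ is the partial correlation of $(i,j)$ given $S$; this makes a brute-force sketch of the CMIT test for the Gaussian case straightforward. The sketch below is ours (assuming numpy; practical only for small $p$ and $\eta$, with the threshold $\xi$ left as an input):

```python
import numpy as np
from itertools import combinations

def empirical_cmi_gaussian(Sig, i, j, S):
    """I(Xi; Xj | XS) = -0.5 * log(1 - rho^2), where rho is the partial
    correlation of (i, j) given S under the empirical covariance Sig."""
    idx = [i, j] + list(S)
    K = np.linalg.inv(Sig[np.ix_(idx, idx)])   # precision of the sub-block
    rho = -K[0, 1] / np.sqrt(K[0, 0] * K[1, 1])
    return -0.5 * np.log1p(-rho ** 2)

def cmit(X, xi, eta):
    """CMIT(x^n; xi_{n,p}, eta): declare (i, j) an edge iff the minimum
    empirical conditional MI over all |S| <= eta exceeds the threshold."""
    n, p = X.shape
    Sig = np.cov(X, rowvar=False)
    edges = set()
    for i, j in combinations(range(p), 2):
        others = [k for k in range(p) if k not in (i, j)]
        score = min(empirical_cmi_gaussian(Sig, i, j, S)
                    for r in range(eta + 1)
                    for S in combinations(others, r))
        if score > xi:
            edges.add((i, j))
    return edges
```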
For structure estimation of Ising models, the work in [6] considers $\ell_1$-penalized logistic regression, which has a sample complexity of $n = \Omega(\Delta^3\log p)$ for a degree-bounded ensemble $\mathcal{G}_{\mathrm{Deg}}(p,\Delta)$ satisfying certain "incoherence" conditions. The sample complexity of CMIT, given by $n = \Omega(\Delta^4\log p)$, is slightly worse, while the modified algorithm described above has a sample complexity of $n = \Omega(\Delta^2\log p)$ for general degree-bounded ensembles. Additionally, under the CMIT algorithm, we can guarantee an improved sample complexity of $n = \Omega(c^4\log p)$ for Erdős-Rényi random graphs $\mathcal{G}_{\mathrm{ER}}(p,c/p)$ and small-world graphs $\mathcal{G}_{\mathrm{Watts}}(p,d,c/p)$, since the average degree $c/2$ is typically much smaller than the maximum degree $\Delta$. Moreover, note that the incoherence conditions stated in [6] are NP-hard to establish for general models, since they involve the partition function of the model. In contrast, our conditions are transparent and relate to the statistical-physical properties of the model. Finally, our algorithm is local and requires only low-order statistics, while the method in [6] requires full-order statistics.

Proof outline: We first analyze the scenario when exact statistics are available. (i) We establish that for any two non-neighbors $(i,j) \notin G$, the minimum conditional mutual information in (11) (based on exact statistics) does not exceed the threshold $\xi_{n,p}$. (ii) Similarly, we establish that the conditional mutual information in (11) exceeds the threshold $\xi_{n,p}$ for all neighbors $(i,j) \in G$. (iii) We then extend these results to the empirical versions using concentration bounds. See [21, 22] for details. The main challenge in our proof is step (i). To this end, we analyze the conditional mutual information when the conditioning set is a local separator between $i$ and $j$, and establish that it decays as $p \to \infty$. The techniques used to establish this for Ising and Gaussian models are different: for Ising models, we employ the self-avoiding walk (SAW) tree construction [25]; for Gaussian models, we use techniques from walk-sum analysis [14].

4 Necessary Conditions for Model Selection

In the previous sections, we proposed and analyzed efficient algorithms for learning the structure of graphical models. We now derive necessary conditions for consistent structure learning, focusing on the ensemble of Erdős-Rényi graphs $\mathcal{G}_{\mathrm{ER}}(p,c/p)$. For the class of degree-bounded graphs $\mathcal{G}_{\mathrm{Deg}}(p,\Delta)$, necessary conditions on the sample complexity have been characterized previously [26] by considering a certain (restricted) set of ensembles. However, a naïve application of such bounds (based on Fano's inequality [27, Ch. 2]) turns out to be too weak for the class of Erdős-Rényi graphs $\mathcal{G}_{\mathrm{ER}}(p,c/p)$. We provide novel necessary conditions for structure learning of Erdős-Rényi graphs; our techniques may also be applicable to other classes of random graphs.

Recall that the graph $G$ is drawn from the ensemble of Erdős-Rényi graphs $\mathcal{G}_{\mathrm{ER}}(p,c/p)$. Given $n$ i.i.d. samples $\mathbf{X}^n := (\mathbf{X}_1,\ldots,\mathbf{X}_n) \in (\mathcal{X}^p)^n$, the task is to estimate $G$ from $\mathbf{X}^n$; denote the estimated graph by $\widehat{G} := \widehat{G}(\mathbf{X}^n)$. We wish to derive tight necessary conditions on the number of samples $n$ (as a function of the average degree $c/2$ and the number of nodes $p$) so that the probability of error $P_e^{(p)} := P(\widehat{G}(\mathbf{X}^n) \neq G) \to 0$ as the number of nodes $p$ tends to infinity. Again, note that the probability measure $P$ is with respect to both the Erdős-Rényi graph and the samples.

Discrete Graphical Models: Let $H_b(q) := -q\log_2 q - (1-q)\log_2(1-q)$ be the binary entropy function. For the Ising model, or more generally any discrete model where each random variable $X_i \in \mathcal{X} = \{1,\ldots,|\mathcal{X}|\}$, we can demonstrate the following:

Theorem 2 (Weak Converse for Discrete Models) For a discrete graphical model Markov on $G \sim \mathcal{G}_{\mathrm{ER}}(p,c/p)$, if $P_e^{(p)} \to 0$, it is necessary for $n$ to satisfy

$$n \;\ge\; \frac{p-1}{2}\,\frac{H_b\!\big(\tfrac{c}{p}\big)}{\log_2|\mathcal{X}|} \;-\; \frac{1}{p\,\log_2|\mathcal{X}|}. \quad (14)$$

The above bound does not involve any asymptotic notation, and shows transparently how $n$ has to depend on $p$, $c$ and $|\mathcal{X}|$ for consistent structure learning. Note that if the cardinality $|\mathcal{X}|$ of the random variables is large, then the necessary sample complexity is small, which makes intuitive sense from a source-coding perspective. Moreover, the above bound states that more samples are required as the average degree $c/2$ increases. Our bound involves only the average degree $c/2$, and not the maximum degree of the graph, which is typically much larger than $c$ [7].
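The bound (14) depends on its inputs in a simple, directly computable way; a few illustrative lines to evaluate it numerically (ours, assuming numpy; function names are hypothetical):

```python
import numpy as np

def Hb(q):
    """Binary entropy in bits."""
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

def n_lower_bound(p, c, card):
    """Evaluate the necessary condition (14) for G ~ G_ER(p, c/p) with
    alphabet size card = |X|."""
    return (p - 1) * Hb(c / p) / (2 * np.log2(card)) - 1 / (p * np.log2(card))

for p in (100, 1000, 10000):
    print(p, n_lower_bound(p, c=4.0, card=2))   # grows like (c/2) log2(p)
```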
Gaussian Graphical Models: We now turn our attention to the Gaussian analogue of Theorem 2 under a similar setup. We assume that the $\alpha$-walk-summability condition in assumption (A2a) holds. We are then able to demonstrate the following:

Theorem 3 (Weak Converse for Gaussian Models) For an $\alpha$-walk-summable Gaussian graphical model Markov on $G \sim \mathcal{G}_{\mathrm{ER}}(p,c/p)$ as $p \to \infty$, if $P_e^{(p)} \to 0$, we require

$$n \;\ge\; \frac{p-1}{2}\,\frac{H_b\!\big(\tfrac{c}{p}\big)}{\log_2\!\big(2\pi e\,\tfrac{1}{1-\alpha} + 1\big)} \;-\; \frac{1}{p\,\log_2\!\big(2\pi e\,\tfrac{1}{1-\alpha} + 1\big)}. \quad (15)$$

As with Theorem 2, the above bound does not involve any asymptotic notation, and similar intuitions hold as before. There is a natural logarithmic dependence on $p$ and a linear dependence on the average degree parameter $c$. Finally, the dependence on $\alpha$ can be explained as follows: any $\alpha$-walk-summable model is also $\beta$-walk-summable for all $\beta > \alpha$; thus, the class of $\beta$-walk-summable models contains the class of $\alpha$-walk-summable models. This results in a looser bound in (15) for large $\alpha$.

Analysis tools: Our analysis tools are information-theoretic in nature. A common tool for deriving necessary conditions is Fano's inequality [27, Ch. 2], which lower-bounds the probability of error $P_e^{(p)}$ as a function of the conditional entropy $H(G|\mathbf{X}^n)$ and the size of the set of all graphs with $p$ nodes. However, a naïve application of Fano's inequality results in a trivial lower bound, since the set of all graphs that can be realized by $\mathcal{G}_{\mathrm{ER}}(p,c/p)$ is "too large." To ameliorate this problem, we apply Fano's inequality to the set of typical graphs rather than to all graphs. The set of typical graphs has small cardinality but high probability when $p$ is large. The novelty of our proof lies in the combined use of typicality and Fano's inequality to derive necessary conditions for structure learning. We can show that (i) the probability of the typical set tends to one as $p \to \infty$, (ii) the graphs in the typical set are almost uniformly distributed (the asymptotic equipartition property), and (iii) the cardinality of the typical set is small relative to the set of all graphs. These properties are used to prove Theorems 2 and 3.

5 Conclusion

In this paper, we adopted a novel and unified paradigm for graphical model selection. We presented a simple local algorithm for structure estimation with low computational and sample complexities, under a set of mild and transparent conditions. This algorithm succeeds on a wide range of graph ensembles, such as the Erdős-Rényi ensemble, small-world networks, etc.
We also employed novel information-theoretic techniques for establishing necessary conditions for graphical model selection.

Acknowledgement: The first author is supported by setup funds at UCI and in part by the AFOSR under Grant FA9550-10-1-0310; the second author is supported by A*STAR, Singapore; and the third author is supported in part by the AFOSR under Grant FA9550-08-1-1080.

References
[1] S. Lauritzen, Graphical Models. Clarendon Press, 1996.
[2] D. Karger and N. Srebro, "Learning Markov networks: maximum bounded tree-width graphs," in Proc. of ACM-SIAM Symposium on Discrete Algorithms, 2001, pp. 392-401.
[3] C. Chow and C. Liu, "Approximating discrete probability distributions with dependence trees," IEEE Trans. on Information Theory, vol. 14, no. 3, pp. 462-467, 1968.
[4] A. d'Aspremont, O. Banerjee, and L. El Ghaoui, "First-order methods for sparse covariance selection," SIAM J. Matrix Anal. & Appl., vol. 30, no. 56, 2008.
[5] P. Ravikumar, M. Wainwright, G. Raskutti, and B. Yu, "High-dimensional covariance estimation by minimizing l1-penalized log-determinant divergence," arXiv preprint arXiv:0811.3628, 2008.
[6] P. Ravikumar, M. Wainwright, and J. Lafferty, "High-dimensional Ising model selection using l1-regularized logistic regression," Annals of Statistics, 2008.
[7] B. Bollobás, Random Graphs. Academic Press, 1985.
[8] F. Chung and L. Lu, Complex Graphs and Networks. Amer. Mathematical Society, 2006.
[9] D. Watts and S. Strogatz, "Collective dynamics of small-world networks," Nature, vol. 393, no. 6684, pp. 440-442, 1998.
[10] M. Newman, D. Watts, and S. Strogatz, "Random graph models of social networks," Proc. of the National Academy of Sciences of the United States of America, vol. 99, no. Suppl 1, 2002.
[11] R. Albert and A. Barabási, "Statistical mechanics of complex networks," Reviews of Modern Physics, vol. 74, no. 1, p. 47, 2002.
[12] H. Georgii, Gibbs Measures and Phase Transitions. Walter de Gruyter, 1988.
[13] J. Bento and A. Montanari, "Which graphical models are difficult to learn?" in Proc. of Neural Information Processing Systems (NIPS), Vancouver, Canada, Dec. 2009.
[14] D. Malioutov, J. Johnson, and A. Willsky, "Walk-sums and belief propagation in Gaussian graphical models," J. of Machine Learning Research, vol. 7, pp. 2031-2064, 2006.
[15] G. Bresler, E. Mossel, and A. Sly, "Reconstruction of Markov random fields from samples: some observations and algorithms," in Intl. Workshop APPROX: Approximation, Randomization and Combinatorial Optimization. Springer, 2008, pp. 343-356.
[16] P. Netrapalli, S. Banerjee, S. Sanghavi, and S. Shakkottai, "Greedy learning of Markov network structure," in Proc. of Allerton Conf. on Communication, Control and Computing, Monticello, USA, Sept. 2010.
[17] F. Chung, Spectral Graph Theory. Amer. Mathematical Society, 1997.
[18] A. Gamburd, S. Hoory, M. Shahshahani, A. Shalev, and B. Virag, "On the girth of random Cayley graphs," Random Structures & Algorithms, vol. 35, no. 1, pp. 100-117, 2009.
[19] S. Dommers, C. Giardinà, and R. van der Hofstad, "Ising models on power-law random graphs," Journal of Statistical Physics, pp. 1-23, 2010.
[20] B. McKay, N. Wormald, and B. Wysocka, "Short cycles in random regular graphs," The Electronic Journal of Combinatorics, vol. 11, no. R66, p. 1, 2004.
[21] A. Anandkumar, V. Y. F. Tan, and A. S. Willsky, "High-dimensional structure learning of Ising models: tractable graph families," preprint, available on arXiv:1107.1736, June 2011.
[22] A. Anandkumar, V. Y. F. Tan, and A. S. Willsky, "High-dimensional Gaussian graphical model selection: tractable graph families," preprint, arXiv:1107.1270, June 2011.
[23] V. Tan, A. Anandkumar, and A. Willsky, "Learning Markov forest models: analysis of error rates," J. of Machine Learning Research, vol. 12, pp. 1617-1653, May 2011.
[24] N. Meinshausen and P. Bühlmann, "High dimensional graphs and variable selection with the lasso," Annals of Statistics, vol. 34, no. 3, pp. 1436-1462, 2006.
[25] D. Weitz, "Counting independent sets up to the tree threshold," in Proc. of ACM Symp. on Theory of Computing, 2006, pp. 140-149.
[26] W. Wang, M. Wainwright, and K. Ramchandran, "Information-theoretic bounds on model selection for Gaussian Markov random fields," in IEEE International Symposium on Information Theory Proceedings (ISIT), Austin, TX, June 2010.
[27] T. Cover and J. Thomas, Elements of Information Theory. John Wiley & Sons, Inc., 2006.
Information Rates and Optimal Decoding in Large Neural Populations

Kamiar Rahnama Rad and Liam Paninski
Department of Statistics, Columbia University
{kamiar,liam}@stat.columbia.edu
http://www.stat.columbia.edu/~liam/research/pubs/kamiar-ss-info.pdf

Abstract

Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. In this paper, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations carrying a finite total amount of information, the full spiking population response is asymptotically as informative as a single observation from a Gaussian process whose mean and covariance can be characterized explicitly in terms of network and single neuron properties. The Gaussian form of this asymptotic sufficient statistic allows us in certain cases to perform optimal Bayesian decoding by simple linear transformations, and to obtain closed-form expressions of the Shannon information carried by the network. One technical advantage of the theory is that it may be applied easily even to non-Poisson point process network models; for example, we find that under some conditions, neural populations with strong history-dependent (non-Poisson) effects carry exactly the same information as do simpler equivalent populations of non-interacting Poisson neurons with matched firing rates. We argue that our findings help to clarify some results from the recent literature on neural decoding and neuroprosthetic design.

Introduction

It has long been argued that many key questions in neuroscience can best be posed in information-theoretic terms; the efficient coding hypothesis discussed in [2, 3, 1] represents perhaps the best-known example. Answering these questions quantitatively requires us to compute the Shannon information rate of neural channels, whether numerically using experimental data or analytically in mathematical models. In many cases it is useful to exploit connections with "ideal observer" analysis, in which the performance of an optimal Bayesian decoder places fundamental bounds on the performance of any biological system given access to the same neural information. However, the non-linear, non-Gaussian, and correlated nature of neural responses has hampered the development of this theory, particularly in the case of high-dimensional and/or time-varying stimuli.

The neural decoding literature is far too large to review systematically here; instead, we will focus our attention on work which has attempted to develop an analytical theory to simplify these complex decoding and information-rate problems. Two limiting regimes have received significant analytical attention in the neuroscience literature. In the "high-SNR" regime, $n \to \infty$, where $n$ is the number of neurons encoding the signal of interest: if the information rate of each neuron is bounded away from zero and neurons respond in a conditionally weakly-dependent manner given the stimulus, then the total information provided by the neural population becomes infinite, and the error rate of any reasonable neural decoder tends to zero.
For discrete stimuli, the Shannon information is effectively determined in this asymptotic limit by a simpler quantity known as the Chernoff information [9]; for continuous stimuli, maximum likelihood estimation is asymptotically optimal, and the asymptotic Shannon information is controlled by the Fisher information [8, 7]. On the other hand, we can consider the "low-SNR" limit, where only a few neurons are observed and each neuron is asymptotically weakly tuned to the stimulus. In this limit, the Shannon information tends to zero, and under certain conditions the optimal Bayesian estimator (which can be strongly nonlinear in general) can be approximated by a simpler linear estimator; see [5] and more recently [16] for details.

In this paper, we study information transmission and optimal decoding in what we would argue is a more biologically-relevant "intermediate" regime, where $n$ is large but the total amount of information provided by the population remains finite, and the problem of decoding the stimulus given the population neural activity remains nontrivial.

Likelihood in the intermediate regime: the inhomogeneous Poisson case

For clarity, we begin by analyzing the information in a simple population of neurons, represented as inhomogeneous Poisson processes that are conditionally independent given the stimulus. We will extend our analysis to more general neural populations in the next section. In response to the stimulus, at each time step $t$ neuron $i$ fires with probability $\lambda_i(t)\,dt$, where the rate is given by

$$\lambda_i(t) = f\big[b_i(t) + \varepsilon\,\phi_{i,t}(\theta)\big], \quad (1)$$

where $f(\cdot)$ is a smooth rectifying nonlinearity and $\varepsilon$ is a gain factor controlling each neuron's sensitivity. The baseline firing rate is determined by $b_i(t)$ and is independent of the input signal. The true stimulus at time $t$ is denoted $\theta_t$, and $\theta$ abbreviates the time-varying stimulus $\theta_{0:T}$ on the time interval $[0, T\,dt]$. The term $\phi_{i,t}(\theta)$ summarizes the dependence of the neuron's firing rate on $\theta$; depending on the setting, this term may represent e.g. a tuning curve or a spatiotemporal filter applied to the stimulus (see examples below). The likelihood includes all the information about the stimulus encoded in the population's spiking response. Neuron $i$'s response at time step $t$ is denoted by the binary variable $r_i(t)$. The loglikelihood at a parameter value $\theta$ (which may differ from the true parameter) is given by the standard point-process formula [21]:

$$L_\theta(r) := \log p(r\mid\theta) = \sum_{i=1}^n \sum_{t=0}^T r_i(t)\log\lambda_i(t) - \lambda_i(t)\,dt. \quad (2)$$

This expression can be expanded around $\varepsilon = 0$:

$$L_\theta(r) = L_\theta(r)\big|_{\varepsilon=0} + \varepsilon\,\frac{\partial L_\theta(r)}{\partial\varepsilon}\Big|_{\varepsilon=0} + \frac{\varepsilon^2}{2}\,\frac{\partial^2 L_\theta(r)}{\partial\varepsilon^2}\Big|_{\varepsilon=0} + O(n\varepsilon^3),$$

where

$$\frac{\partial L_\theta(r)}{\partial\varepsilon}\Big|_{\varepsilon=0} = \sum_{i,t}\phi_{i,t}(\theta)\Big[r_i(t)\,\frac{f'}{f}\big(b_i(t)\big) - f'\big(b_i(t)\big)\,dt\Big],$$

$$\frac{\partial^2 L_\theta(r)}{\partial\varepsilon^2}\Big|_{\varepsilon=0} = \sum_{i,t}\phi_{i,t}^2(\theta)\Big[r_i(t)\Big(\frac{f'}{f}\Big)'\big(b_i(t)\big) - f''\big(b_i(t)\big)\,dt\Big].$$

Let $r_i$ denote the vector representation of the $i$-th neuron's spike train and let

$$g_i(r_i) := \Big[r_i(1)\tfrac{f'}{f}(b_i(1)) - f'(b_i(1))\,dt,\ \ldots,\ r_i(T)\tfrac{f'}{f}(b_i(T)) - f'(b_i(T))\,dt\Big]^T,$$

$$h_i(r_i) := \Big[r_i(1)\big(\tfrac{f'}{f}\big)'(b_i(1)) - f''(b_i(1))\,dt,\ \ldots,\ r_i(T)\big(\tfrac{f'}{f}\big)'(b_i(T)) - f''(b_i(T))\,dt\Big]^T,$$

$$\phi_i(\theta) := \big[\phi_{i,1}(\theta),\ \phi_{i,2}(\theta),\ \ldots,\ \phi_{i,T}(\theta)\big]^T;$$

then

$$L_\theta(r) = L_\theta(r)\big|_{\varepsilon=0} + \varepsilon\sum_{i=1}^n \phi_i(\theta)^T g_i(r_i) + \frac{\varepsilon^2}{2}\sum_{i=1}^n \phi_i(\theta)^T\,\mathrm{diag}[h_i(r_i)]\,\phi_i(\theta) + O(n\varepsilon^3).$$

(With a slight abuse of notation, we use $T$ for both the total number of time steps and the transpose operation; the difference is clear from the context.)
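A minimal numerical version of the log-likelihood (2), specialized to the exponential nonlinearity $f(\cdot) = \exp(\cdot)$ so that $\log\lambda = b + \varepsilon\phi$ (a sketch of ours, assuming numpy; all sizes and rates are illustrative):

```python
import numpy as np

def loglik_exp(r, b, phi, eps, dt):
    """Discrete-time point-process log-likelihood (2) with exponential
    nonlinearity: lam_i(t) = exp(b_i(t) + eps * phi_{i,t}(theta)), so
    log lam = b + eps * phi.  r, b, phi are (n, T) arrays holding spikes,
    baselines, and encoding terms."""
    u = b + eps * phi
    return np.sum(r * u - np.exp(u) * dt)

rng = np.random.default_rng(0)
n, T, dt = 100, 200, 1e-3
b = np.full((n, T), np.log(20.0))      # 20 Hz baseline log-rate
phi = rng.standard_normal((n, T))      # encoding terms phi_{i,t}(theta)
eps = 1.0 / np.sqrt(n)                 # intermediate-SNR scaling eps ~ n^{-1/2}
r = rng.poisson(np.exp(b + eps * phi) * dt)
print(loglik_exp(r, b, phi, eps, dt))
```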
This second-order loglikelihood expansion is standard in likelihood theory [24]; as usual, the first term is constant in $\theta$ and can therefore be ignored, while the third (quadratic) term controls the curvature of the loglikelihood at $\varepsilon = 0$, and scales as $n\varepsilon^2$. In the high-SNR regime discussed above, where $n \to \infty$ and $\varepsilon$ is fixed, the likelihood becomes sharply peaked at $\theta$ (and therefore the Fisher information, which may be understood as the curvature of the log-likelihood at $\theta$, controls the asymptotics of the estimation error in the case of continuous stimuli), and estimation of $\theta$ becomes easy; in the low-SNR regime, we fix $n$ and consider the $\varepsilon \to 0$ limit.

Now, finally, we can more precisely define the "intermediate" SNR regime: we focus on the case of large populations ($n \to \infty$), but in order to keep the total information in a finite range we need to scale the sensitivity as $\varepsilon \propto n^{-1/2}$. In this setting, the error term $O(n\varepsilon^3) = O(n^{-1/2}) = o(1)$ can be neglected, and the law of large numbers (LLN) implies that

$$\frac{\varepsilon^2}{2}\,\frac{\partial^2 L_\theta(r)}{\partial\varepsilon^2}\Big|_{\varepsilon=0} \;\to\; \frac{1}{2n}\sum_i E_{r|\theta}\Big[\phi_i(\theta)^T\,\mathrm{diag}[h_i(r_i)]\,\phi_i(\theta)\Big];$$

consequently, the quadratic term becomes independent of the observed spike train and is therefore void of information about $\theta$. So the first-derivative term is the only part of the likelihood that depends on both the neural activity and $\theta$, and may therefore be considered a sufficient statistic in this asymptotic regime: all the information about the stimulus is summarized in

$$\varepsilon\,\frac{\partial L_\theta(r)}{\partial\varepsilon}\Big|_{\varepsilon=0} = \frac{1}{\sqrt n}\sum_i \phi_i(\theta)^T g_i(r_i). \quad (3)$$

We may further apply the central limit theorem (CLT) to this sum of independent random vectors to conclude that this term converges to a Gaussian process indexed by $\theta$ (under mild technical conditions that we will ignore here, for clarity). Thus this model enjoys the local asymptotic normality property observed in many parametric statistical models [24]: all of the information in the data can be summarized asymptotically by a sufficient statistic with a sampling distribution that turns out to be Gaussian.

Example: Linearly filtered stimuli and state-space models

In many cases neurons are modeled in terms of simple rectified linear filters responding to the stimulus. We can handle this case easily using the language introduced above, if we let $K_i$ denote the matrix implementing the transformation $(K_i\theta)_t = \phi_{i,t}(\theta)$, the projection of the stimulus onto the $i$-th neuron's stimulus filter. Then

$$\frac{\partial L_\theta(r)}{\partial\varepsilon}\Big|_{\varepsilon=0} = \frac{1}{\sqrt n}\sum_{i=1}^n \theta^T K_i^T\,\mathrm{diag}\Big[\frac{f_i'}{f_i}\Big]\big(r_i - f_i\,dt\big) \;:=\; \theta^T\ell(r),$$

where $f_i$ stands for the vector version of $f[b_i(t)]$. Thus all the information in the population spike train can be summarized in the random vector $\ell(r)$, which is a simple linear function of the observed spike-train data. This vector has an asymptotic Gaussian distribution, with mean and covariance

$$E_{r|\theta}\big(\ell(r)\big) = \frac{1}{n}\sum_{i=1}^n K_i^T\,\mathrm{diag}\Big[\frac{(f_i')^2}{f_i}\,dt\Big]K_i\,\theta + O\Big(\frac{1}{\sqrt n}\Big) \;=\; J\theta + O\Big(\frac{1}{\sqrt n}\Big),$$

$$J := \mathrm{cov}_{r|\theta}\big(\ell(r)\big) = \frac{1}{n}\sum_{i=1}^n K_i^T\,\mathrm{diag}\Big[\frac{(f_i')^2}{f_i}\,dt\Big]K_i + O\Big(\frac{1}{\sqrt n}\Big).$$

Thus, the neural population's non-linear and temporally dynamic response to the stimulus is as informative in this intermediate regime as a single observation from a standard Gaussian experiment, in which the parameter $\theta$ is filtered linearly by $J$ and corrupted by Gaussian noise.
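To see this in action, the following sketch (ours; assuming numpy, with $f = \exp$ so that $f'/f = 1$ and $(f')^2/f = f$) simulates a population with random filters $K_i$, forms $\ell(r)$, and compares it to $J\theta$; the stimulus is held static for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, d, dt = 2000, 100, 5, 1e-3
eps = 1.0 / np.sqrt(n)                      # intermediate-SNR gain
theta = rng.standard_normal(d)              # static d-dimensional stimulus
b = np.log(20.0)                            # common baseline log-rate (20 Hz)
K = rng.standard_normal((n, T, d))          # per-neuron filters K_i (T x d)

# With f = exp, f'/f = 1 and (f')^2/f = f, so the formulas simplify.
lam0 = np.exp(b)                                        # rate at eps = 0
r = rng.poisson(np.exp(b + eps * (K @ theta)) * dt)     # (n, T) spike counts

# Sufficient statistic ell(r) = n^{-1/2} sum_i K_i' (r_i - f_i dt):
ell = (K.transpose(0, 2, 1) @ (r - lam0 * dt)[..., None]).sum(0)[:, 0] / np.sqrt(n)
# Effective filter J = n^{-1} sum_i K_i' diag(f dt) K_i:
J = np.einsum('ita,itb->ab', K, K) * lam0 * dt / n

print(np.c_[ell, J @ theta])     # ell is approximately N(J theta, J)
```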
(Note that if we consider each Ki as a random sample from some distribution of filters, then J will converge by the law of large numbers to a matrix we can compute explicitly.) Thus in many cases we can perform optimal Bayesian decoding of ? given the spike trains quite easily. For example, if ? has a zero mean Gaussian prior distribution with covariance C? , then the posterior mean and the maximum-a-posteriori (MAP) estimate is well-known and coincides with the optimal linear estimate (OLE): ??OLE (r) = E(?|r) = (J + C??1 )?1 ?(r). (4) We may compute the Shannon information I(? : r) between r and ? in a similarly direct fashion. We know that, asymptotically, the sufficient statistic ?(r) is as informative as the full population response r I(? : r) = I(? : ?(r)). In the case that the prior of ? is Gaussian, as above, then the information can therefore be computed quite explicitly via standard formulas for the linear-Gaussian channel [9]: I(? : ?(r)) = 1 log det(I + JC? ). 2 (5) To summarize, when the encodings #i,t (?) are linear in ?, and we are in the intermediate-SNR regime, and the parameter ? has a Gaussian prior distribution, then the optimal Bayesian estimate is obtained by applying a linear transformation to the sufficient statistic ?(r) which itself is linear in the spike train, and the mutual information between the stimulus and full population response has a particularly simple form. These results help to extend previous theoretical studies [5, 18, 20, 16] demonstrating that in some cases linear decoding can be optimal, and also shed some light on recent experimental studies indicating that optimal linear and nonlinear Bayesian estimators often have similar performance in practice [13, 12]. To work through a concrete example, consider the case that the temporal sequence of parameter values ?t is generated by an autoregressive process: ?t+1 = A?t + ?t ?t ? N (0, R), for a stable dynamics matrix A and positive-semidefinite covariance matrix R. Further assume that the observation matrices Ki act instantaneously, i.e., Ki is block-diagonal with blocks Ki,t , and therefore the responses are modeled as ri (t) ? P oiss[f (bi (t) + "Ki,t ?t )dt]. Thus ? and the responses r together represent a state-space model. This framework has been shown to lead to state-of-the-art performance in a wide variety of neural data analysis settings [14]. To understand optimal inference in this class of models in the intermediate SNR regime, we may follow the recipe outlined above: we see that the asymptotic sufficient statistic in this model can be represented as ?t = Jt ?t + "t "t ? N (0, Jt ), where the effective filter matrix J defined above is block-diagonal (due to the block-diagonal structure of the filter matrices Ki ), with blocks we have denoted Jt . Thus ?t represents observations from a linear-Gaussian state-space model, i.e., a Kalman filter model [17]. Optimal decoding of ? given the observation sequence ?1:T can therefore be accomplished via the standard forward-backward Kalman filter-smoother [10]; see Fig. 1 for an illustration. The information rate limT ?? I(?0:T : r0:T ) = limT ?? I(?0:T : ?(r)0:T ) may be computed via similar recursions in the stationary case (i.e., when Jt is constant in time). The result may be expressed most explicitly in terms of a matrix which is the solution of a Riccati equation involving the effective Kalman model parameters; the details are provided in the appendix. 
Nonlinear examples: orientation coding, place fields, and small-time expansions While the linear setting discussed above can handle many examples of interest, it does not seem general enough to cover two well-studied decoding problems: inferring the orientation of a visual 4 stimulus from a population of cortical neurons [19, 4], or inferring position from a population of hippocampal or entorhinal neurons [6]. In the former case, the stimulus is a phase variable, and therefore does not fit gracefully into the linear setting described above; in the latter case, place fields and grid fields are not well-approximated as linear functions of position. If we apply our general theory in these settings, the interpretation of the encoding function #i (?) does not change significantly: #i (?) could represent the tuning curve of neuron i as a function of the orientation of the visual stimulus, or of the animal?s location in space. However, without further assumptions the limiting sufficient statistic, which is a weighted sum of these encoding functions #i (?) (recall eq. 3) may result in an infinite-dimensional Gaussian process, which may be computationally inconvenient. To simplify matters somewhat, we can introduce a mild assumption on the tuning functions #i (?). Let?s assume that these functions may be expressed in some low-dimensional basis: #i (?) = Ki ?(?), for some vectors Ki , and ?(?) is defined to map ? into an mT -dimensional space which is usually smaller than dim(?) = dim(?t )T . This finite-basis assumption is very natural: in the orientation example, tuning curves are periodic in the angle ?t and are therefore typically expressed as sums of a few Fourier functions; similarly, two-dimensional finite Fourier or Zernike bases are often used to represent grid or place fields [6]. The key point here is that we may now simply follow the derivation of the last section with ?(?) in place of ?; we find that the sufficient statistic may be represented asymptotically as an mT -dimensional Gaussian vector with mean J and covariance J?(?), with J defined as in the preceding section. We should note that this nonlinear case does remain slightly more complicated than the linear case in one respect: while the likelihood with respect to ?(?) reduces to something very simple and tractable, the prior (which is typically defined as a function of ?) might be some complicated function of the remapped variable ?(?). So in most interesting nonlinear cases we can no longer compute the optimal Bayesian decoder or the Shannon information rate analytically. However, our approach does lead to a major simplification in numerical investigations into theoretical coding issues. For example, to examine the coding efficiency of a population of neurons encoding an orientation variable in this intermediate SNR regime we do not need to simulate the responses of the entire population (which would involve drawing nT random variables, for some large population size n); instead, we only need to draw a single equivalent mT -dimensional Gaussian vector ?(r), and quantify the decoding performance based on the approximate loglikelihood 1 1 L? (r) = L? (r)|"=0 + ?(?)T ?(r) + ?(?)T J?(?) + O( ? ), 2 n which as emphasized above has a simple quadratic form as a function of ?(?). Since m can typically be chosen to be much smaller than n, this approach can result in significant computational savings. 
We now switch gears slightly and examine another related intermediate regime in which nonlinear encoding plays a key role: instead of letting the sensitivity $\varepsilon$ of each neuron become small (in order to keep the total information in the population finite), we could instead keep the sensitivity constant and let the time period over which we observe the population scale inversely with the population size $n$. This short-time limit is sensible in some physiological and psychophysical contexts [22] and was examined analytically in [15] to study the impact of inter-neuron dependencies on information transmission.

Our methods can also be applied to this short-time limit. We begin by writing the loglikelihood of the observed spike count vector $r$ in a single time bin of length $dt$:

$$L_\theta(r) := \log p(r\mid\theta) = \sum_i r_i\log f\big[b_i + \phi_i(\theta)\big] - f\big[b_i + \phi_i(\theta)\big]\,dt.$$

The second term does not depend on $r$; therefore, all information in $r$ about $\theta$ resides in the sufficient statistic

$$\ell_\theta(r) := \sum_i r_i\log f\big[b_i + \phi_i(\theta)\big].$$

Since the $i$-th neuron fires with probability $f[b_i + \phi_i(\theta)]\,dt$, the mean of $\ell_\theta(r)$ scales with $n\,dt$, and it is clear that $dt = 1/n$ is a natural scaling of the time bin. With this scaling $\ell_\theta(r)$ converges to a Gaussian stochastic process with mean

$$E_{r|\theta}\big[\ell_\theta(r)\big] = \frac{1}{n}\sum_i f\big[b_i + \phi_i(\theta)\big]\log f\big[b_i + \phi_i(\theta)\big]$$

and covariance

$$\mathrm{cov}_{r|\theta}\big[\ell_\theta(r), \ell_{\theta'}(r)\big] = \frac{1}{n}\sum_i f\big[b_i + \phi_i(\theta)\big]\,\Big(\log f\big[b_i + \phi_i(\theta)\big]\Big)\Big(\log f\big[b_i + \phi_i(\theta')\big]\Big),$$

where we have used the fact that the variance of a Poisson random variable coincides with its mean. In general, this limiting Gaussian process will be infinite-dimensional. However, if we choose the exponential nonlinearity ($f(\cdot) = \exp(\cdot)$) and the encoding functions $\phi_i(\theta)$ are of the finite-dimensional form considered above, $\phi_i(\theta) = K_i^T\Phi(\theta)$, then the $\log f[b_i + \phi_i(\theta)]$ term in the definition of $\ell_\theta(r)$ simplifies: in this case, all information about $\theta$ is captured by the sufficient statistic

$$\ell(r) = \sum_i r_i K_i.$$

If we again let $dt = 1/n$, then we find that $\ell(r)$ converges to a finite-dimensional Gaussian random vector with mean and covariance

$$E_{r|\theta}\big[\ell(r)\big] = \frac{1}{n}\sum_i f\big(b_i + K_i^T\Phi(\theta)\big)K_i; \qquad \mathrm{cov}_{r|\theta}\big[\ell(r)\big] = \frac{1}{n}\sum_i f\big(b_i + K_i^T\Phi(\theta)\big)K_iK_i^T;$$

again, if the filters $K_i$ are modeled as independent draws from some fixed distribution, then the above normalized sums converge to their expectations, by the LLN. Thus, as in the intermediate-SNR regime, we see that inference can be dramatically simplified in this short-time setting.

Likelihood in the intermediate regime: non-Poisson effects

We conclude by discussing the generalization to non-Poisson networks with interneuronal dependencies and nontrivial correlation structure. We generalize the rate equation (1) to

$$\lambda_i(t) = f_i\big[b_i(t) + \varepsilon\,\phi_{i,t}(\theta)\ \big|\ \mathcal{H}_t\big],$$

where $\mathcal{H}_t$ stands for the spiking activity of all neurons prior to time $t$: $\mathcal{H}_t = \{r_i(t')\}_{t'<t,\,1\le i\le n}$. Note that the influence of spiking history may be different for each neuron: refractory periods, self-inhibition and coupling between neurons can be formulated through an appropriate dependence of $f_i(\cdot)$ on $\mathcal{H}_t$. We begin, as usual, by expanding the log-likelihood; the basic point-process likelihood (eq. 2) remains valid. Let $g_i(r)$ and $h_i(r)$ denote the vector versions of

$$r_i(t)\,\frac{f'}{f}\big(b_i(t)\,\big|\,\mathcal{H}_t\big) - f_i'\big(b_i(t)\,\big|\,\mathcal{H}_t\big)\,dt \quad\text{and}\quad r_i(t)\Big(\frac{f'}{f}\Big)'\big(b_i(t)\,\big|\,\mathcal{H}_t\big) - f_i''\big(b_i(t)\,\big|\,\mathcal{H}_t\big)\,dt,$$

respectively, analogously to the Poisson case. Then the first and second terms in the expansion of the loglikelihood may be written as

$$\varepsilon\,\frac{\partial L_\theta(r)}{\partial\varepsilon}\Big|_{\varepsilon=0} = \varepsilon\sum_i \phi_i(\theta)^T g_i(r) \quad\text{and}\quad \frac{\varepsilon^2}{2}\,\frac{\partial^2 L_\theta(r)}{\partial\varepsilon^2}\Big|_{\varepsilon=0} = \frac{\varepsilon^2}{2}\sum_i \phi_i(\theta)^T\,\mathrm{diag}[h_i(r)]\,\phi_i(\theta),$$
as before. For independent neurons, the log-likelihood was composed of normalized sums of independent random variables that converged to a Gaussian process, by the CLT. In the history-dependent, coupled case, $g_i(r)$ and $h_i(r)$ depend not only on the $i$-th neuron's activity $r_i$, but rather on the whole network history. Nonetheless, under technical conditions on the network's dependence structure (to ensure that the firing rates and correlations in the network remain bounded), we may still exploit versions of the LLN and CLT. Thus, under conditions ensuring the validity of the LLN, we may conclude that, as before, the second-order term $\frac{\varepsilon^2}{2}\frac{\partial^2 L_\theta(r)}{\partial\varepsilon^2}\big|_{\varepsilon=0}$ converges to its expectation under the intermediate $\varepsilon \propto n^{-1/2}$ scaling, and therefore carries no information about $\theta$. When we discard this second-order term, along with higher-order terms that are negligible in the intermediate-SNR, large-$n$ limit, we are left once again with the gradient term $\varepsilon\frac{\partial L_\theta(r)}{\partial\varepsilon}\big|_{\varepsilon=0} = \frac{1}{\sqrt n}\sum_i \phi_i(\theta)^T g_i(r)$, which under appropriate conditions (ensuring the validity of a CLT) will converge to a Gaussian process limit whose mean and covariance we can often compute analytically.

Let's turn to a specific example, in order to make these claims somewhat more concrete. Consider a network with weak couplings and possibly strong self-inhibition and history dependence; more precisely, we assume that interneuronal conditional cross-covariances are weak, given the stimulus: $\mathrm{cov}[r_i(t), r_j(t+\tau)\mid\theta] = O(n^{-1})$ for $i \neq j$. See, e.g., [11, 23] for further discussion of this condition, which is satisfied for many spiking networks in which the synaptic weights scale uniformly as $O(n^{-1})$. For simplicity, we will also restrict our attention to linear encoding functions, though generalizations to the nonlinear case are straightforward. Thus, as before, let $K_i$ denote the matrix implementing the transformation $(K_i\theta)_t = \phi_{i,t}(\theta)$, the projection of the stimulus onto the $i$-th neuron's stimulus filter. Then

$$\frac{\partial L_\theta(r)}{\partial\varepsilon}\Big|_{\varepsilon=0} = \frac{1}{\sqrt n}\sum_{i=1}^n \theta^T K_i^T\,\mathrm{diag}\Big[\frac{f_i'}{f_i}\Big]\big(r_i - f_i\,dt\big),$$

where $f_i$ stands for the vector version of $f_i[b_i(t)\mid\mathcal{H}_t]$; in other words, the $t$-th entry of $f_i\,dt$ is the probability of observing a spike in the interval $[t, t+dt]$, given the network spiking history $\mathcal{H}_t$, in the absence of input. Our sufficient statistic is therefore exactly as in the Poisson setting,

$$\ell(r) := \frac{1}{\sqrt n}\sum_{i=1}^n K_i^T\,\mathrm{diag}\Big[\frac{f_i'}{f_i}\Big]\big(r_i - f_i\,dt\big), \quad (6)$$

except for the history-dependence induced through the redefinition of $f_i$. Computing the necessary means and covariances in this case requires more work than in the Poisson case; see the appendix for details. It is helpful (though not necessary) to make the stationarity assumption $b_i(t) \equiv b_i$, which implies in this setting that $E\big[(f_i')^2/f_i\big]$ can also be chosen to be time-invariant; in this case the limiting covariance and mean of the sufficient statistic are given by

$$J := \mathrm{cov}_{r|\theta}\big[\ell(r)\big] = \frac{1}{n}\sum_{i=1}^n K_i^T\,\mathrm{diag}\Big[E_{r|\theta=0}\Big(\frac{(f_i')^2}{f_i}\Big)\,dt\Big]K_i; \qquad E_{r|\theta}\big[\ell(r)\big] = J\theta,$$

where the expectations are over the spontaneous network activity in the absence of any input. In short, once again, we have $\ell(r) \to_D \mathcal{N}(J\theta, J)$. Analytically, the only challenge here is to compute the expectations in the definition of $J$. In many cases this can be done analytically (e.g., in any population of uncoupled renewal-process neurons), or by using mean-field theory [23], or numerically by simply calculating the mean firing rate of the network in the undriven state $\theta = 0$.

We examine this convergence quantitatively in Fig. 1. In this case the stimulus $\theta_t$ was a sample path from a one-dimensional autoregressive (AR(1)) process. Spikes were generated according to

$$\lambda_i(t) = \lambda_o\,\exp\Big(\frac{\theta_t}{\sqrt n} + \sum_{j=1}^n w_{ji}\,I_j(t)\Big)\,\mathbf{1}_{\tau_i(t) > \tau_{\mathrm{ref}}},$$

where $I_j(t)$ is the synaptic input from the $j$-th cell (generated by convolving the spike train $r_j$ with an exponential of time constant 20 ms), $w_{ji}$ is the synaptic weight matrix coupling the output of neuron $j$ to the input of neuron $i$, and $\tau_i(t)$ is the time since the last spike; thus $\mathbf{1}_{\tau_i(t) > \tau_{\mathrm{ref}}}$ enforces the absolute refractory period $\tau_{\mathrm{ref}}$, which was set to 2 ms here. Since the encoding filters $K_i$ act instantaneously in this model ($K_i$ can be represented as a delta function, weighted by $n^{-1/2}$), the observed spike trains can be considered observations from a state-space model, as described above. The weights $w_{ji}$ were generated randomly from a uniform distribution on the interval $[-5/n, 5/n]$, with self-weights $w_{ii} = 0$ and $\sum_j w_{ji} = 0$ to enforce detailed balance in the network. Note that, while the interneuronal coupling is weak in this example, the autocorrelation in these spike trains is quite strong on short time scales, due to the absolute refractory effect.

We compared two estimators of $\theta$: the full (nonlinear) MAP estimate $\hat\theta_{MAP} = \arg\max_\theta p(\theta\mid r)$, which we computed using the fast direct optimization methods described in [14], and the limiting optimal estimator $\hat\theta_\infty := (J + C_\theta^{-1})^{-1}\ell(r)$. Note that $J$ is diagonal; we computed the expectations in the definition of $J$ using the numerical approach described above in this simulation, though in other simulations (with uncoupled renewal-model populations) we checked that the fully analytical approach gave the correct solution. In addition, $C_\theta^{-1}$ is tridiagonal in this state-space setting; thus the linear matrix equation in eq. (4) can be solved efficiently in $O(T)$ time using standard tridiagonal matrix solvers.
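A sketch of that $O(T)$ solve for a scalar AR(1) prior (ours, assuming numpy and scipy are available; the AR(1) precision matrix is standard, and all parameter values are illustrative):

```python
import numpy as np
from scipy.linalg import solve_banded

def linear_decoder_ar1(ell, Jdiag, a, s2):
    """Solve (J + C_theta^{-1}) theta_hat = ell in O(T), for a scalar
    AR(1) prior theta_{t+1} = a*theta_t + N(0, s2) with theta_1 ~ N(0, s2);
    J is diagonal with entries Jdiag, and C_theta^{-1} is tridiagonal."""
    T = len(ell)
    main = np.asarray(Jdiag, dtype=float) + (1.0 + a ** 2) / s2
    main[-1] = Jdiag[-1] + 1.0 / s2          # boundary term of the precision
    off = np.full(T - 1, -a / s2)
    ab = np.zeros((3, T))                    # banded storage for solve_banded
    ab[0, 1:] = off                          # super-diagonal
    ab[1, :] = main                          # main diagonal
    ab[2, :-1] = off                         # sub-diagonal
    return solve_banded((1, 1), ab, ell)

# Toy use: T = 500 time bins, constant J_t = 2, slow AR(1) prior.
T = 500
theta_hat = linear_decoder_ar1(np.random.randn(T), np.full(T, 2.0), 0.98, 0.01)
```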
We examine this convergence quantitatively in Fig. 1. In this case the stimulus ?t was a sample path from a one-dimensional autoregressive (AR(1)) process. Spikes were generated according to ? ? n ! ? t ?i (t) = ?o exp ? ? + wji Ij (t)? 1?i (t)>?ref , n j=1 where Ij (t) is the synaptic input from the j-th cell (generated by convolving the spike train rj with an exponential of time constant 20 ms), wji is the synaptic weight matrix coupling the output of neuron j to the input of neuron i, ?i (t) is the time since the last spike; therefore, 1?i (t)>?ref enforces the absolute refractory period ?ref , which was set to be 2 ms here. Since the encoding filters Ki act instantaneously in this model (Ki can be represented as a delta function, weighted by n?1/2 ), the observed spike trains can be considered observations from a state-space model, as described above. The weights wji were generated3 randomly from a uniform distribution on the interval ?[5/n, 5/n], with self-weights wii = 0, and j wji = 0 to enforce detailed balance in the network. Note that, while the interneuronal coupling is weak in this example, the autocorrelation in these spike trains is quite strong on short time scales, due to the absolute refractory effect. We compared two estimators of ?: the full (nonlinear) MAP estimate ??MAP = arg max? p(?|r), which we computed using the fast direct optimization methods described in [14], and the limiting optimal estimator ??? := (J + C??1 )?1 ?(r). Note that J is diagonal; we computed the expectations in the definition of J using the numerical approach described above in this simulation, though in 7 spike train(s) with 2ms refractory period, 20ms synaptic time constant and baseline rate 30Hz stimuli 5 sufficient statistics ?(r) 2.5 2 n=1 0 1.5 1 0.5 ?5 0 ? ?MAP ? ? 0.4 5 0.3 n=5 0 0.2 0.1 ?5 0 0.4 5 n = 20 0.3 0 0.2 0.1 ?5 0 0 0.05 0.1 time(sec) 0.15 0.2 0 0.05 0.1 time(sec) 0.15 0.2 0 0.05 0.1 0.15 0.2 time(sec) Figure 1: The left panels show the true stimulus (green), MAP estimate (red) and the limiting optimal estimator ??? := (J + C??1 )?1 ?(r) (blue) for various population sizes n. The middle panels show the spike trains used to compute these estimates. The right panels show the sufficient statistics ?(r) used to compute ??? . Note that the same true stimulus was used in all three simulations. As n increases, the linear decoder converges to the MAP estimate, despite the nonlinear and correlated nature of the network model generating the spike trains (see main text for details). other simulations (with uncoupled renewal-model populations) we checked that the fully-analytical approach gave the correct solution. In addition, C??1 is tridiagonal in this state-space setting; thus the linear matrix equation in eq. (4) can be solved efficiently in O(T ) time using standard tridiagonal matrix solvers. We find that, as predicted, the full nonlinear Bayesian estimator ??MAP approaches the limiting optimal estimator ??? as n becomes large; n = 20 is basically sufficient in this case, although of course the convergence will be slower for larger values of the gain factor " (or, equivalently, larger filters Ki or larger values of the variance of ?t ). We conclude with a few comments about these results. First, note that the covariance matrix J we have computed here coincides almost exactly with what we computed previously in the Poisson case. Indeed, we can make this connection much more precise: we can always choose an equivalent Poisson network with rates defined so that the Er|?=0 [(fi! 
Since $J$ determines the information rate completely, we conclude that for any weakly-coupled network there is an equivalent Poisson network which conveys exactly the same information in the intermediate regime. However, note that the sufficient statistic $\ell(r)$ is different in the Poisson and non-Poisson settings, since the $f'/f$ term linearly reweights the observed spikes, depending on how likely they were given the history; thus the optimal Bayesian decoder incorporates non-Poisson effects explicitly. A number of interesting questions remain open. For example, while we expect a LLN and CLT to continue to hold in many cases of strong, structured interneuronal coupling, computing the asymptotic mean and covariance of the sufficient statistic $\ell(r)$ may be more challenging in such cases, and new phenomena may arise.

References
[1] J. Atick. Could information theory provide an ecological theory of sensory processing? Network: Computation in Neural Systems, pages 213-251, May 1992.
[2] F. Attneave. Some informational aspects of visual perception. Psychological Review, 1954.
[3] H. B. Barlow. Possible principles underlying the transformation of sensory messages. Sensory Communication, pages 217-234, 1961.
[4] P. Berens, A. S. Ecker, S. Gerwinn, A. S. Tolias, and M. Bethge. Reassessing optimal neural population codes with neurometric functions. Proceedings of the National Academy of Sciences, 108:4423-4428, 2011.
[5] W. Bialek and A. Zee. Coding and computation with neural spike trains. Journal of Statistical Physics, 59:103-115, 1990.
[6] E. Brown, L. Frank, D. Tang, M. Quirk, and M. Wilson. A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. Journal of Neuroscience, 18:7411-7425, 1998.
[7] N. Brunel and J.-P. Nadal. Mutual information, Fisher information, and population coding. Neural Computation, 10(7):1731-1757, 1998.
[8] B. Clarke and A. Barron. Information-theoretic asymptotics of Bayes methods. IEEE Transactions on Information Theory, 36:453-471, 1990.
[9] T. Cover and J. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[10] J. Durbin and S. Koopman. Time Series Analysis by State Space Methods. Oxford University Press, 2001.
[11] I. Ginzburg and H. Sompolinsky. Theory of correlations in stochastic neural networks. Phys. Rev. E, 50(4):3171-3191, 1994.
[12] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski. Population decoding of motor cortical activity using a generalized linear model with hidden states. Journal of Neuroscience Methods, 2011.
[13] J. Macke, L. Büsing, J. Cunningham, B. Yu, K. Shenoy, and M. Sahani. Modelling low-dimensional dynamics in recorded spiking populations. COSYNE, 2011.
[14] L. Paninski, Y. Ahmadian, D. Ferreira, S. Koyama, K. Rahnama Rad, M. Vidne, J. Vogelstein, and W. Wu. A new look at state-space models for neural data. Journal of Computational Neuroscience, 29(1):107-126, 2010.
[15] S. Panzeri, S. Schultz, A. Treves, and E. Rolls. Correlations and the encoding of information in the nervous system. Proceedings of the Royal Society London B, 266(1423):1001-1012, 1999.
[16] J. Pillow, Y. Ahmadian, and L. Paninski. Model-based decoding, information estimation, and change-point detection in multi-neuron spike trains. Neural Computation, 23(1):1-45, January 2011.
[17] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models.
Neural Computation, 11:305-345, 1999.
[18] E. Salinas and L. Abbott. Vector reconstruction from firing rates. Journal of Computational Neuroscience, 1:89-107, 1994.
[19] H. S. Seung and H. Sompolinsky. Simple models for reading neuronal population codes. Proceedings of the National Academy of Sciences, 90:10749-10753, 1993.
[20] H. Snippe. Parameter extraction from population codes: A critical assessment. Neural Computation, 8:511-529, 1996.
[21] D. Snyder and M. Miller. Random Point Processes in Time and Space. Springer-Verlag, 1991.
[22] S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520-522, 1996.
[23] T. Toyoizumi, K. Rahnama Rad, and L. Paninski. Mean-field approximations for coupled populations of generalized linear model spiking neurons with Markov refractoriness. Neural Computation, 21:1203-1243, 2009.
[24] A. van der Vaart. Asymptotic Statistics. Cambridge University Press, Cambridge, 1998.
Evaluating the inverse decision-making approach to preference learning

Alan Jern, Department of Psychology, Carnegie Mellon University, [email protected]
Christopher G. Lucas, Department of Psychology, Carnegie Mellon University, [email protected]
Charles Kemp, Department of Psychology, Carnegie Mellon University, [email protected]

Abstract
Psychologists have recently begun to develop computational accounts of how people infer others' preferences from their behavior. The inverse decision-making approach proposes that people infer preferences by inverting a generative model of decision-making. Existing data sets, however, do not provide sufficient resolution to thoroughly evaluate this approach. We introduce a new preference learning task that provides a benchmark for evaluating computational accounts and use it to compare the inverse decision-making approach to a feature-based approach, which relies on a discriminative combination of decision features. Our data support the inverse decision-making approach to preference learning.

A basic principle of decision-making is that knowing people's preferences allows us to predict how they will behave: if you know your friend likes comedies and hates horror films, you can probably guess which of these options she will choose when she goes to the theater. Often, however, we do not know what other people like and we can only infer their preferences from their behavior. If you know that a different friend saw a comedy today, does that mean that he likes comedies in general? The conclusion you draw will likely depend on what else was playing and what movie choices he has made in the past.

A goal for social cognition research is to develop a computational account of people's ability to infer others' preferences. One computational approach is based on inverse decision-making. This approach begins with a model of how someone's preferences lead to a decision. Then, this model is inverted to determine the most likely preferences that motivated an observed decision. An alternative approach might simply learn a functional mapping between features of an observed decision and the preferences that motivated it. For instance, in your friend's decision to see a comedy, perhaps the more movie options he turned down, the more likely it is that he has a true preference for comedies. The difference between the inverse decision-making approach and the feature-based approach maps onto the standard dichotomy between generative and discriminative models.

Economists have developed an instance of the inverse decision-making approach known as the multinomial logit model [1] that has been widely used to infer consumers' preferences from their choices. This model has recently been explored as a psychological model [2, 3, 4], but there are few behavioral data sets for evaluating it as a model of how people learn others' preferences. Additionally, the data sets that do exist tend to be drawn from the developmental literature, which focuses on simple tasks that collect only one or two judgments from children [5, 6, 7]. The limitations of these data sets make it difficult to evaluate the multinomial logit model with respect to alternative accounts of preference learning like the feature-based approach. In this paper, we use data from a new experimental task that elicits a detailed set of preference judgments from a single participant in order to evaluate the predictions of several preference learning models from both the inverse decision-making and feature-based classes.
Our task requires each participant to sort a large number of observed decisions on the basis of how strongly they indicate a preference for a chosen item. Because the number of decisions is large and these decisions vary on multiple dimensions, predicting how people will order them offers a challenging benchmark on which to compare computational models of preference learning. Data sets from these sorts of detailed tasks have proved fruitful in other domains. For example, data reported by Shepard, Hovland, and Jenkins [8]; Osherson, Smith, Wilkie, López, and Shafir [9]; and Wasserman, Elek, Chatlosh, and Baker [10] have motivated much subsequent research on category learning, inductive reasoning, and causal reasoning, respectively.

[Figure 1: (a)-(c) Examples of the decisions used in the experiments. Each column represents one option and the boxes represent different effects. The chosen option is indicated by the black rectangle. (d) Features used by the weighted feature and ranked feature models: 1. Number of chosen effects (-/+); 2. Number of forgone effects (+/+); 3. Number of forgone options (+/+); 4. Number of forgone options containing x (-/-); 5. Max/min number of effects in a forgone option (+/-); 6. Is x in every option? (-/-); 7. Chose only option with x? (+/+); 8. Is x the only difference between options? (+/+); 9. Do all options have same number of effects? (+/+); 10. Chose option with max/min number of effects? (-/-). Features 5 and 10 involved maxima in Experiment 1, which focused on all positive effects, and minima in Experiment 2, which focused on all negative effects. The signs in parentheses indicate the direction of the feature that suggests a stronger preference in Experiment 1 / Experiment 2.]

We first describe our preference learning task in detail. We then present several inverse decision-making and feature-based models of preference learning and compare these models' predictions to people's judgments in two experiments. The data are well predicted by models that follow the inverse decision-making approach, suggesting that this computational approach may help explain how people learn others' preferences.

1 Multi-attribute decisions and revealed preferences

We designed a task that can be used to elicit a large number of preference judgments from a single participant. The task involves a set of observed multi-attribute decisions, some examples of which are represented visually in Figure 1. Each decision is among a set of options and each option produces a set of effects. Figure 1 shows several decisions involving a total of five effects distributed among up to five options. The differently colored boxes represent different effects and the chosen option is marked by a black rectangle. For example, 1a shows a choice between an option with four effects and an option with a single effect; here, the decision maker chose the second option. In our task, people are asked to rank a large number of these decisions by how strongly they suggest that the decision maker had a preference for a particular effect (e.g., effect x in Figure 1). By imposing some minimal constraints, the space of unique multi-attribute decisions is finite and we can obtain rankings for every decision in the space. For example, Figure 2c shows a complete list of 47 unique decisions involving up to five effects, subject to several constraints described later. Three of these decisions are shown in Figure 1.
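To make the task representation concrete, here is a minimal Python sketch of how a decision and two of the Figure 1d features might be encoded; the class name, field choices, and example values are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One observed decision: each option is a set of effect labels."""
    options: list[frozenset[str]]
    chosen: int                      # index of the chosen option

def n_chosen_effects(d: Decision) -> int:
    """Feature 1 in Figure 1d: number of effects in the chosen option."""
    return len(d.options[d.chosen])

def n_forgone_options_with_x(d: Decision, x: str = "x") -> int:
    """Feature 4 in Figure 1d: forgone options that contain effect x."""
    return sum(1 for i, opt in enumerate(d.options)
               if i != d.chosen and x in opt)

# Decision 1a from Figure 1: a four-effect option forgone, single-effect
# option {x} chosen.
d1a = Decision(options=[frozenset("abcd"), frozenset("x")], chosen=1)
assert n_chosen_effects(d1a) == 1
```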
If all the effects are positive (pieces of candy, for example), the first decision (1a) suggests a strong preference for candy x, because the decision maker turned down four pieces in favor of one. The second decision (1b), however, offers much weaker evidence because nearly everyone would choose four pieces of candy over one, even without a specific preference for x. The third decision (1c) provides evidence that is strong but perhaps not quite as strong as the first decision. When all effects are negative (like electric shocks at different body locations), decision makers may still find some effects more tolerable than others, but different inferences are sometimes supported. For example, for negative effects, 1a provides weak evidence that x is relatively tolerable because nearly everyone would choose one shock over four.

2 A computational account of preference learning

We now describe a simple computational model for learning a person's preferences after observing that person make a decision like the ones in Figure 1. We assume that there are n available options $\{o_1, \ldots, o_n\}$, each of which produces one or more effects from the set $\{f_1, f_2, \ldots, f_m\}$. For simplicity, we assume that effects are binary. Let $u_i$ denote the utility the decision maker assigns to effect $f_i$. We begin by specifying a model of decision-making that makes the standard assumptions that decision makers tend to choose things with greater utility and that utilities are additive. That is, if $f_j$ is a binary vector indicating the effects produced by option $o_j$ and $u$ is a vector of utilities assigned to each of the m effects, then the total utility associated with option $o_j$ can be expressed as $U_j = f_j^\top u$. We complete the specification of the model by applying the Luce choice rule [11], a common psychological model of choice behavior, as the function that chooses among the options:

$$p(c = o_j \mid u, F) = \frac{\exp(U_j)}{\sum_{k=1}^{n} \exp(U_k)} = \frac{\exp(f_j^\top u)}{\sum_{k=1}^{n} \exp(f_k^\top u)} \qquad (1)$$

where c denotes the choice made. This model can predict the choice someone will make among a specified set of options, given the utilities that person assigns to the effects in each option. To obtain estimates of someone's utilities, we invert this model by applying Bayes' rule:

$$p(u \mid c, F) = \frac{p(c \mid u, F)\, p(u)}{p(c \mid F)} \qquad (2)$$

where $F = \{f_1, \ldots, f_n\}$ specifies the available options and their corresponding effects. This is the multinomial logit model [1], a standard econometric model. In order to apply Equation 2 we must specify a prior p(u) on the utilities. We adopt a standard approach that places independent Gaussian priors on the utilities: $u_i \sim N(\mu, \sigma^2)$. For decisions where effects are positive (like candies) we set $\mu = 2\sigma$, which corresponds to a prior distribution that places approximately 2% of the probability mass below zero. Similarly, for negative effects (like electric shocks) we set $\mu = -2\sigma$.
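A minimal sketch of this model, assuming decisions are encoded as (option-by-effect matrix, chosen index) pairs. The posterior mean is estimated by weighting prior samples by the likelihood, i.e., importance sampling with the prior as proposal, which matches the strategy described for the model fits later in the paper; everything else (function names, sample counts) is illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def choice_prob(u, F):
    """Luce choice rule (Eq. 1): P(c = o_j | u, F) for every option j.

    F : (n_options, m_effects) binary matrix; row j marks the effects of o_j.
    u : (m_effects,) vector of utilities.
    """
    U = F @ u
    return np.exp(U - logsumexp(U))

def posterior_mean_utility(decisions, mu, sigma, n_samples=100_000, seed=0):
    """Monte Carlo estimate of E[u | decisions], via Eq. 2."""
    rng = np.random.default_rng(seed)
    m = decisions[0][0].shape[1]
    u = rng.normal(mu, sigma, size=(n_samples, m))   # prior samples
    log_w = np.zeros(n_samples)
    for F, c in decisions:             # c = index of the chosen option
        U = u @ F.T                    # (n_samples, n_options)
        log_w += U[:, c] - logsumexp(U, axis=1)      # log-likelihood of c
    w = np.exp(log_w - logsumexp(log_w))             # normalized weights
    return w @ u
```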
2.1 Ordering a set of observed decisions

Equation 2 specifies a posterior probability distribution over utilities for a single observed decision but does not provide a way to compare the inferences drawn from multiple decisions for the purposes of ordering them. Suppose we are interested in a decision maker's preference for effect x and we wish to order a set of decisions by how strongly they support this preference. Two criteria for ordering the decisions are as follows:

Absolute utility: $E(u_x \mid c, F) = E_{u_x}\!\left[\dfrac{p(c \mid u_x, F)\, p(u_x)}{p(c \mid F)}\right]$

Relative utility: $p(\forall j\; u_x \ge u_j \mid c, F) = \dfrac{p(c \mid \forall j\; u_x \ge u_j, F)\, p(\forall j\; u_x \ge u_j)}{p(c \mid F)}$

The absolute utility model orders decisions by the mean posterior utility for effect x. This criterion is perhaps the most natural way to assess how much a decision indicates a preference for x, but it requires an inference about the utility of x in isolation, and research suggests that people often think about the utility of an effect only in relation to other salient possibilities [12]. The relative utility model applies this idea to preference learning by ordering decisions based on how strongly they suggest that x has a greater utility than all other effects. The decisions in Figures 1b and 1c are cases where the two models lead to different predictions. If the effects are all negative (e.g., electric shocks), the absolute utility model predicts that 1b provides stronger evidence for a tolerance for x because the decision maker chose to receive four shocks instead of just one. The relative utility model predicts that 1c provides stronger evidence because 1b offers no way to determine the relative tolerance of the four chosen effects with respect to one another.

Like all generative models, the absolute and relative models incorporate three qualitatively different components: the likelihood term $p(c \mid u, F)$, the prior p(u), and the reciprocal of the marginal likelihood $1/p(c \mid F)$. We assume that the total number of effects is fixed in advance and, as a result, the prior term will be the same for all decisions that we consider. The two other components, however, will vary across decisions. The inverse decision-making approach predicts that both components should influence preference judgments, and we will test this prediction by comparing our two inverse decision-making models to two alternatives that rely on only one of these components as an ordering criterion:

Representativeness: $p(c \mid \forall j\; u_x \ge u_j, F)$

Surprise: $1/p(c \mid F)$

The representativeness model captures how likely the observed decision would be if the utility for x were high, and previous research has shown that people sometimes rely on a representativeness computation of this kind [13]. The surprise model captures how unexpected the observed decision is overall; surprising decisions may be best explained in terms of a strong preference for x, but unsurprising decisions provide little information about x in particular.

2.2 Feature-based models

We also consider a class of feature-based models that use surface features to order decisions. The ten features that we consider are shown in Figure 1d, where x is the effect of interest. As an example, the first feature specifies the number of effects chosen; because x is always among the chosen effects, decisions where few or no other effects belong to the chosen option suggest the strongest preference for x (when all effects are positive). This and the second feature were previously identified by Newtson [14]; we included the eight additional features shown in Figure 1d in an attempt to include all possible features that seemed both simple and relevant. We consider two methods for combining this set of features to order a set of decisions by how strongly they suggest a preference for x. The first model is a standard linear regression model, which we refer to as the weighted feature model. The model learns a weight for each feature, and the rank of a given decision is determined by a weighted sum of its features. The second model is a ranked feature model that sorts the observed decisions with respect to a strict ranking of the features: the top-ranked feature corresponds to the primary sort key, the second-ranked feature to the secondary sort key, and so on. For example, suppose that the top-ranked feature is the number of chosen effects and the second-ranked feature is the number of forgone options. Sorting the three decisions in Figure 1 according to this criterion produces the ordering 1a, 1c, 1b, as in the sketch below.
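A minimal sketch of the ranked feature model's lexicographic sort. The feature values assigned to decisions 1a-1c below are hypothetical; only the sorting idea comes from the paper.

```python
def ranked_feature_sort(decisions, feature_order, signs):
    """Sort decisions lexicographically by a strict ranking of features.

    decisions     : list of dicts mapping feature name -> value.
    feature_order : feature names, primary sort key first.
    signs         : -1 if larger values mean stronger evidence, else +1,
                    so ascending order puts the strongest evidence first.
    """
    return sorted(decisions,
                  key=lambda d: tuple(signs[f] * d[f] for f in feature_order))

# Hypothetical feature values, assuming positive effects (fewer chosen
# effects and more forgone options indicate a stronger preference):
decisions = [
    {"id": "1b", "n_chosen_effects": 4, "n_forgone_options": 1},
    {"id": "1a", "n_chosen_effects": 1, "n_forgone_options": 3},
    {"id": "1c", "n_chosen_effects": 1, "n_forgone_options": 2},
]
order = ranked_feature_sort(
    decisions,
    feature_order=["n_chosen_effects", "n_forgone_options"],
    signs={"n_chosen_effects": +1, "n_forgone_options": -1},
)
print([d["id"] for d in order])   # -> ['1a', '1c', '1b']
```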
This notion of sorting items on the basis of ranked features has been applied before to decision-making [15, 16] and other domains of psychology [17], but we are not aware of any previous applications to preference learning.

Although our inverse decision-making and feature-based models represent two very different approaches, both may turn out to be valuable. An inverse decision-making approach may be the appropriate account of preference learning at Marr's [18] computational level, and a feature-based approach may capture the psychological processes by which the computational-level account is implemented. Our goal, therefore, is not necessarily to accept one of these approaches and dismiss the other. Instead, we entertain three distinct possibilities. First, both approaches may account well for the data, which would support the idea that they are valid accounts operating at different levels of analysis. Second, the inverse decision-making approach may offer a better account, suggesting that process-level accounts other than the feature-based approach should be explored. Finally, the feature-based approach may offer a better account, suggesting that inverse decision-making does not constitute an appropriate computational-level account of preference learning.

3 Experiment 1: Positive effects

Our first experiment focuses on decisions involving only positive effects. The full set of 47 decisions we used is shown in Figure 2c. This set includes every possible unique decision with up to five different effects, subject to the following constraints: (1) one of the effects (effect x) must always appear in the chosen option, (2) there are no repeated options, (3) each effect may appear in an option at most once, (4) only effects in the chosen option may be repeated in other options, and (5) when effects appear in multiple options, the number of effects is held constant across options. The first constraint is necessary for the sorting task, the second two constraints create a finite space of decisions, and the final two constraints limit attention to what we deemed the most interesting cases.

Method. 43 Carnegie Mellon undergraduates participated for course credit. Each participant was given a set of cards, with one decision printed on each card. The decisions were represented visually as in Figure 1 but without the letter labels.
[Figure 2: (a) Comparison between the absolute utility model rankings and the mean human rankings for Experiment 1. Each point represents one decision, numbered with respect to the list in panel c. (b) Comparison between the mean human rankings in Experiments 1 and 2. In both scatter plots, the solid diagonal lines indicate a perfect correspondence between the two sets of rankings. (c) The complete set of decisions, ordered by the mean human rankings from Experiment 1: 1. dcbax; 2. cbax; 3. bax; 4. ax; 5. x; 6. dcax | bcax; 7. dx | cx | bx | ax; 8. cax | bax; 9. bdx | bcx | bax; 10. dcx | bax; 11. bx | ax; 12. bdx | cax | bax; 13. cx | bx | ax; 14. d | cbax; 15. c | bax; 16. b | ax; 17. d | c | bax; 18. dc | bax; 19. c | b | ax; 20. dc | bx | ax; 21. bdc | bax; 22. ad | cx | bx | ax; 23. d | c | b | ax; 24. bad | bcx | bax; 25. ac | bx | ax; 26. cb | ax; 27. cbad | cbax; 28. dc | b | ax; 29. ad | ac | bx | ax; 30. ab | ax; 31. bad | bax; 32. dc | ab | ax; 33. dcb | ax; 34. a | x; 35. bad | bac | bax; 36. ac | ab | ax; 37. ad | ac | ab | ax; 38. b | a | x; 39. ba | x; 40. c | b | a | x; 41. cb | a | x; 42. d | c | b | a | x; 43. cba | x; 44. dc | ba | x; 45. dc | b | a | x; 46. dcb | a | x; 47. dcba | x. Options are separated by vertical bars and the chosen option is always at the far right. Participants were always asked about a preference for effect x.]

Participants were told that the effects were different types of candy and each option was a bag containing one or more pieces of candy. They were asked to sort the cards by how strongly each decision suggested that the decision maker liked a particular target candy, labeled x in Figure 2c. They sorted the cards freely on a table but reported their final rankings by writing them on a sheet of paper, from weakest to strongest evidence. They were instructed to order the cards as completely as possible, but were told that they could assign the same ranking to a set of cards if they believed those cards provided equal evidence.

3.1 Results

Two participants were excluded as outliers based on the criterion that their rankings for at least five decisions were at least three standard deviations from the mean rankings. We performed a hierarchical clustering analysis of the remaining 41 participants' rankings using rank correlation as a similarity metric. Participants' rankings were highly correlated: cutting the resulting dendrogram at 0.2 resulted in one cluster that included 33 participants, and the second largest cluster included only 3 participants. Thus, we grouped all participants together and analyzed their mean rankings. The 0.2 threshold was chosen because it produced the most informative clustering in Experiment 2.

[Figure 3: Comparison between human rankings in both experiments (rows: Experiment 1, positive effects; Experiment 2, negative effects) and predicted rankings from four models (columns: absolute utility, relative utility, representativeness, surprise), with the MAE reported in each panel. The solid diagonal lines indicate a perfect correspondence between human and model rankings.]

Inverse decision-making models. We implemented the inverse decision-making models using importance sampling with 5 million samples drawn from the prior distribution p(u). Because all the effects were positive, we used a prior on utilities that placed nearly all probability mass above zero ($\mu = 4$, $\sigma = 2$). The mean human rankings are compared with the absolute utility model rankings in Figure 2a, and the mean human rankings are listed in order in 2c. Fractional rankings were used for both the human data and the model predictions. The human rankings in the figure are the means of participants' fractional rankings, computed as in the sketch below.
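As a concrete illustration of the comparison metric, the following sketch computes fractional (tie-aware) rankings and the mean absolute error between a model's rankings and the mean human rankings. scipy's rankdata with method="average" gives exactly the fractional ranking; the variable names are illustrative.

```python
import numpy as np
from scipy.stats import rankdata

def fractional_ranking(scores):
    """Fractional ranks: tied items share the average of their ranks."""
    return rankdata(scores, method="average")

def ranking_mae(model_scores, mean_human_ranks):
    """MAE between a model's fractional ranking and mean human rankings."""
    model_ranks = fractional_ranking(model_scores)
    return np.mean(np.abs(model_ranks - mean_human_ranks))
```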
The first row of Figure 3 contains similar plots that allow comparison of the four models we considered. In these plots, the solid diagonal lines indicate a perfect correspondence between model and human rankings. Thus, the largest deviations from this line represent the largest deviations in the data from the model's predictions. Figure 3 shows that the absolute and relative utility models make virtually identical predictions and both models provide a strong account of the human rankings as measured by mean absolute error (MAE = 2.3 in both cases). Moreover, both models correctly predict the highest ranked decision and the set of lowest ranked decisions. The only clear discrepancy between the model predictions and the data is the cluster of points at the lower left, labeled as Decisions 6-13 in Figure 2a. These are all cases in which effect x appears in all options and therefore these decisions provide no information about a decision maker's preference for x. Consequently, the models assign the same ranking to this group as to the group of decisions in which there is only a single option (Decisions 1-5). Although people appeared to treat these groups somewhat differently, the models still correctly predict that the entire group of decisions 1-13 is ranked lower than all other decisions.

The surprise and representativeness models do not perform nearly as well (MAE = 7.0 and 17.8, respectively). Although the surprise model captures some of the general trends in the human rankings, it makes several major errors. For example, consider Decision 7: dx | cx | bx | ax. This decision provides no information about a preference for x because it appears in every option. The decision is surprising, however, because a decision maker choosing at random from these options would make the observed choice only 1/4 of the time. The representativeness model performs even worse, primarily because it does not take into account alternative explanations for why an option was chosen, such as the fact that no other options were available (e.g., Decision 1 in Figure 2c). The failure of these models to adequately account for the data suggests that both the likelihood $p(c \mid u, F)$ and the marginal likelihood $p(c \mid F)$ are important components of the absolute and relative utility models.

Feature-based models. We compared the performance of the absolute and relative utility models to our two feature-based models: the weighted feature and ranked feature models. For each participant, we considered every subset of features in Figure 1d (for the ranked feature model, a maximum of six features was considered because considering more features was computationally intractable) in order to determine the minimum number of features needed by the two models to achieve the same level of accuracy as the absolute utility model, as measured by mean absolute error. The results of these analyses are shown in Figure 4.

[Figure 4: Results of the feature-based model analysis from Experiment 1 for (a) the weighted feature models and (b) the ranked feature models. The histograms show, for each participant, the minimum number of features needed to match the accuracy (measured by MAE) of the absolute utility model.]

For the majority of participants, at least four features were needed by both models to match the accuracy of the absolute utility model. For the weighted feature model, 14 participants could not be fit as well as the absolute utility model even when all ten features were considered. These results indicate that a feature-based account of people's inferences in our task must be supplied with a relatively large number of features.
By contrast, the inverse decision-making approach provides a relatively parsimonious account of the data.

4 Experiment 2: Negative effects

Experiment 2 focused on a setting in which all effects are negative, motivated by the fact that the inverse decision-making models predict several major differences in orderings when effects are negative rather than positive. For instance, the absolute utility model's relative rankings of the decisions in Figures 1a and 1b are reversed when all effects are negative rather than positive.

Method. 42 Carnegie Mellon undergraduates participated for course credit. The experimental design was identical to Experiment 1 except that participants were told that the effects were electric shocks at different body locations. They were asked to sort the cards on the basis of how strongly each decision suggested that the decision maker finds shocks at the target location relatively tolerable. The model predictions were derived in the same way as for Experiment 1, but with a prior distribution on utilities that placed nearly all probability mass below zero ($\mu = -4$, $\sigma = 2$) to reflect the fact that effects were all negative.

4.1 Results

Three participants were excluded as outliers by the same criterion applied in Experiment 1. The resulting mean rankings are compared with the corresponding rankings from Experiment 1 in Figure 2b. The figure shows that responses based on positive and negative effects were substantially different in a number of cases. Figure 3 shows how the mean rankings compare to the predictions of the four models we considered. Although the relative utility model is fairly accurate, no model achieves the same level of accuracy as the absolute and relative utility models in Experiment 1. In addition, the relative utility model provides a poor account of the responses of many individual participants. To better understand responses at the individual level, we repeated the hierarchical clustering analysis described in Experiment 1 (sketched below), which revealed that 29 participants could be grouped into one of four clusters, with the remaining participants each in their own clusters. We analyzed these four clusters independently, excluding the 10 participants that could not be naturally grouped. We compared the mean rankings of each cluster to the absolute and relative utility models, as well as all one- and two-feature weighted feature and ranked feature models. Figure 5 shows that the mean rankings of participants in Cluster 1 (N = 8) were best fit by the absolute utility model, the mean rankings of participants in Cluster 2 (N = 12) were best fit by the relative utility model, and the mean rankings of participants in Clusters 3 (N = 3) and 4 (N = 6) were better fit by feature-based models than by either the absolute or relative utility models.

[Figure 5: Comparison between human rankings for four clusters of participants identified in Experiment 2 and predicted rankings from three models (absolute utility, relative utility, and the best-fitting two-factor weighted feature model), with the MAE reported in each panel. Each point in the plots corresponds to one decision and the solid diagonal lines indicate a perfect correspondence between human and model rankings. The third row shows the predictions of the best-fitting two-factor weighted feature model for each cluster; the factor pairs used across the four clusters are {3, 8}, {6, 7}, {1, 3}, and {1, 8}, where the factor numbers refer to Figure 1d.]
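A sketch of the clustering step, assuming participants' rankings are stacked in a matrix. Using Spearman rank correlation as the similarity and cutting the dendrogram at 0.2 mirror the description above; the choice of average linkage is an assumption, since the paper does not specify the linkage.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_participants(rankings, cut=0.2):
    """Cluster participants by rank correlation of their decision rankings.

    rankings : (n_participants, n_decisions) array of fractional ranks.
    Distance = 1 - Spearman correlation; the dendrogram is cut at `cut`.
    """
    corr, _ = spearmanr(rankings, axis=1)   # participant-by-participant matrix
    dist = 1.0 - np.atleast_2d(corr)
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=cut, criterion="distance")
```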
To examine how well the models accounted for individuals' rankings within each cluster, we compared the predictions of the inverse decision-making models to the best-fitting two-factor feature-based model for each participant. In Cluster 1, 7 out of 8 participants were best fit by the absolute utility model; in Cluster 2, 8 out of 12 participants were best fit by the relative utility model; in Clusters 3 and 4, all participants were better fit by feature-based models. No single feature-based model provided the best fit for more than two participants, suggesting that participants not fit well by the inverse decision-making models were not using a single alternative strategy. Applying the feature-based model analysis from Experiment 1 to the current results revealed that the weighted feature model required an average of 6.0 features to match the performance of the absolute utility model for participants in Cluster 1, and an average of 3.9 features to match the performance of the relative utility model for participants in Cluster 2. Thus, although a single model did not fit all participants well in the current experiment, many participants were fit well by one of the two inverse decision-making models, suggesting that this general approach is useful for explaining how people reason about negative effects as well as positive effects.

5 Conclusion

In two experiments, we found that an inverse decision-making approach offered a good computational account of how people make judgments about others' preferences. Although this approach is conceptually simple, our analyses indicated that it captures the influence of a fairly large number of relevant decision features. Indeed, the feature-based models that we considered as potential process models of preference learning could only match the performance of the inverse decision-making approach when supplied with a relatively large number of features. We feel that this result rules out the feature-based approach as psychologically implausible, meaning that alternative process-level accounts will need to be explored. One possibility is sampling, which has been proposed as a psychological mechanism for approximating probabilistic inferences [19, 20]. However, even if process models that use large numbers of features are considered plausible, the inverse decision-making approach provides a valuable computational-level account that helps to explain which decision features are informative.

Acknowledgments
This work was supported in part by the Pittsburgh Life Sciences Greenhouse Opportunity Fund and by NSF grant CDI-0835797.

References
[1] D. McFadden. Conditional logit analysis of qualitative choice behavior. In P. Zarembka, editor, Frontiers in Econometrics. Academic Press, New York, 1973.
[2] C. G. Lucas, T. L. Griffiths, F. Xu, and C. Fawcett. A rational model of preference learning and choice prediction by children. In Proceedings of Neural Information Processing Systems 21, 2009.
[3] L. Bergen, O. R. Evans, and J. B. Tenenbaum. Learning structured preferences. In Proceedings of the 32nd Annual Conference of the Cognitive Science Society, 2010.
[4] A. Jern and C. Kemp. Decision factors that support preference learning.
In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 2011.
[5] T. Kushnir, F. Xu, and H. M. Wellman. Young children use statistical sampling to infer the preferences of other people. Psychological Science, 21(8):1134-1140, 2010.
[6] L. Ma and F. Xu. Young children's use of statistical sampling evidence to infer the subjectivity of preferences. Cognition, in press.
[7] M. J. Doherty. Theory of Mind: How Children Understand Others' Thoughts and Feelings. Psychology Press, New York, 2009.
[8] R. N. Shepard, C. I. Hovland, and H. M. Jenkins. Learning and memorization of classifications. Psychological Monographs, 75, Whole No. 517, 1961.
[9] D. N. Osherson, E. E. Smith, O. Wilkie, A. López, and E. Shafir. Category-based induction. Psychological Review, 97(2):185-200, 1990.
[10] E. A. Wasserman, S. M. Elek, D. L. Chatlosh, and A. G. Baker. Rating causal relations: Role of probability in judgments of response-outcome contingency. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19(1):174-188, 1993.
[11] R. D. Luce. Individual choice behavior. John Wiley, 1959.
[12] D. Ariely, G. Loewenstein, and D. Prelec. Tom Sawyer and the construction of value. Journal of Economic Behavior & Organization, 60:1-10, 2006.
[13] D. Kahneman and A. Tversky. Subjective probability: A judgment of representativeness. Cognitive Psychology, 3(3):430-454, 1972.
[14] D. Newtson. Dispositional inference from effects of actions: Effects chosen and effects forgone. Journal of Experimental Social Psychology, 10:489-496, 1974.
[15] P. C. Fishburn. Lexicographic orders, utilities and decision rules: A survey. Management Science, 20(11):1442-1471, 1974.
[16] G. Gigerenzer and P. M. Todd. Fast and frugal heuristics: The adaptive toolbox. Oxford University Press, New York, 1999.
[17] A. Prince and P. Smolensky. Optimality Theory: Constraint Interaction in Generative Grammar. Wiley-Blackwell, 2004.
[18] D. Marr. Vision. W. H. Freeman, San Francisco, 1982.
[19] A. N. Sanborn, T. L. Griffiths, and D. J. Navarro. Rational approximations to rational models: Alternative algorithms for category learning. Psychological Review, 117:1144-1167, 2010.
[20] L. Shi and T. L. Griffiths. Neural implementation of Bayesian inference by importance sampling. In Proceedings of Neural Information Processing Systems 22, 2009.
Target Neighbor Consistent Feature Weighting for Nearest Neighbor Classification

Ichiro Takeuchi, Department of Engineering, Nagoya Institute of Technology, [email protected]
Masashi Sugiyama, Department of Computer Science, Tokyo Institute of Technology, [email protected]

Abstract
We consider feature selection and weighting for nearest neighbor classifiers. A technical challenge in this scenario is how to cope with discrete updates of nearest neighbors when the feature space metric is changed during the learning process. This issue, called the target neighbor change, was not properly addressed in the existing feature weighting and metric learning literature. In this paper, we propose a novel feature weighting algorithm that can exactly and efficiently keep track of the correct target neighbors via sequential quadratic programming. To the best of our knowledge, this is the first algorithm that guarantees the consistency between target neighbors and the feature space metric. We further show that the proposed algorithm can be naturally combined with regularization path tracking, allowing computationally efficient selection of the regularization parameter. We demonstrate the effectiveness of the proposed algorithm through experiments.

1 Introduction

Nearest neighbor (NN) classifiers are among the classical and perhaps the simplest non-linear classification algorithms. Nevertheless, they have gathered considerable attention again recently since they have been demonstrated to be highly useful in state-of-the-art real-world applications [1, 2]. For further enhancing the accuracy and interpretability of NN classifiers, feature extraction and feature selection are highly important. Feature extraction for NN classifiers has been addressed under the name of metric learning [3-6], while feature selection for NN classifiers has been studied under the name of feature weighting [7-11].

One of the fundamental approaches to feature extraction/selection for NN classifiers is to learn the feature metric/weights so that instance pairs in the same class ("must-link") are close and instance pairs in other classes ("cannot-link") are far apart [12, 13]. Although this approach tends to provide simple algorithms, it does not have a direct connection to the classification loss for NN classifiers, and thus its validity is not clear. However, directly incorporating the NN classification loss involves a significant technical challenge called the target neighbor (TN) change. To explain this, let us consider binary classification by a 3NN classifier (see Figure 1). Since the classification result is determined by the majority vote from the 3 nearest instances, the classification loss is defined using the distance to the 2nd nearest instance in each class (which is referred to as a TN; see Section 2 for details). However, since "nearest" instances generally change when the feature metric/weights are updated, TNs must also be updated to be kept consistent with the learned feature metric/weights during the learning process.

Although the TN change is a fundamental requirement in feature extraction/selection for NN classifiers, existing methods did not handle this issue properly. For example, in a seminal feature weighting method called Relief [7, 8], the fixed TNs determined based on the uniform weights (i.e., the Euclidean distance) are used throughout the learning process.
[Figure 1: Illustration of target neighbors (TNs). Left: (a) the Euclidean feature space with $w_1 = w_2 = 1/2$; the horizontal feature 1 and the vertical feature 2 are regarded as equally important. Right: (b) a weighted feature space with $w_1 = 2/3$ and $w_2 = 1/3$; the horizontal feature 1 is regarded as more important than the vertical feature 2. An instance $x_0$ in the middle is correctly classified in 3NN classification if the distance to the 2nd nearest instance in the same class (called the 2nd target hit and denoted by $h_0^2$) is smaller than the distance to the 2nd nearest instance in different classes (called the 2nd target miss and denoted by $m_0^2$). In the Euclidean feature space (a), the 2nd target hit/miss are given by $(h_0^2, m_0^2) = (x_2, x_6)$. Since $d(x_0, x_2 \mid w) > d(x_0, x_6 \mid w)$, the instance $x_0$ is misclassified. On the other hand, in the weighted feature space (b), the 2nd target hit/miss are given by $(h_0^2, m_0^2) = (x_1, x_5)$. Since $d(x_0, x_1 \mid w) < d(x_0, x_5 \mid w)$, the instance $x_0$ is correctly classified.]

Thus, the TN-weight consistency is not guaranteed (large-margin metric learning [5] also suffers from the same drawback). The Simba algorithm [9] is a maximum-margin feature weighting method which adaptively updates TNs in the online learning process. However, the TN-weight consistency is still not guaranteed in Simba. I-Relief [10, 11] is a feature weighting method which cleverly avoids the TN change problem by considering a stochastic variant of NN classifiers (neighborhood component analysis [4] also introduced a similar stochastic approximation). However, since the behavior of stochastic NN classifiers tends to be significantly different from that of the original ones, the obtained feature metric/weights are not necessarily useful for the original NN classifiers.

In this paper, we focus on the feature selection (i.e., feature weighting) scenario, and propose a novel method that can properly address the TN change problem. More specifically, we formulate feature weighting as a regularized empirical risk minimization problem, and develop an algorithm that exactly and efficiently keeps track of the correct TNs via sequential quadratic programming. To the best of our knowledge, this is the first algorithm that systematically handles TN changes and guarantees the TN-weight consistency. We further show that the proposed algorithm can be naturally combined with regularization path tracking [14], allowing computationally efficient selection of the regularization parameter. Finally, we demonstrate the effectiveness of the proposed algorithm through experiments.

Throughout the paper, the superscript $\top$ indicates the transpose of vectors or matrices. We use $\mathbb{R}$ and $\mathbb{R}_+$ to denote the sets of real numbers and non-negative real numbers, respectively, while we use $N_n := \{1, \ldots, n\}$ to denote the set of natural numbers up to n. The notations $\mathbf{0}$ and $\mathbf{1}$ indicate vectors or matrices with all 0 and all 1 entries, respectively. The number of elements in a set S is denoted by |S|.

2 Preliminaries

In this section, we formulate the problem of feature weighting for nearest neighbor (NN) classification, and explain the fundamental concept of target neighbor (TN) change.

Consider a classification problem with n training instances and $\ell$ features. Let $x_i := [x_{i1} \ldots x_{i\ell}]^\top \in \mathbb{R}^\ell$ be the i-th training instance and $y_i$ be the corresponding label. The squared Euclidean distance between two instances $x_i$ and $x_{i'}$ is $\sum_{j \in N_\ell} (x_{ij} - x_{i'j})^2$, while the weighted squared Euclidean distance is written as

$$d(x_i, x_{i'} \mid w) := \sum_{j \in N_\ell} w_j (x_{ij} - x_{i'j})^2 = \Delta_{i,i'}^\top w, \qquad (1)$$

where $w := [w_1 \ldots w_\ell]^\top \in [0,1]^\ell$ is an $\ell$-dimensional vector of non-negative weights and $\Delta_{i,i'} := [(x_{i1} - x_{i'1})^2 \ldots (x_{i\ell} - x_{i'\ell})^2]^\top \in \mathbb{R}^\ell$, $(i, i') \in N_n \times N_n$, is introduced for notational simplicity.
We develop a feature weighting algorithm within the framework of regularized empirical risk minimization, i.e., minimizing the linear combination of a loss term and a regularization term. In order to formulate the loss term for NN classification, let us introduce the notion of target neighbors (TNs):

Definition 1 (Target neighbors (TNs)) Define $H_i := \{h \in N_n \mid y_h = y_i, h \neq i\}$ and $M_i := \{m \in N_n \mid y_m \neq y_i\}$ for $i \in N_n$. Given a weight vector w, an instance $h \in H_i$ is said to be the $\kappa$-th target hit of an instance i if it is the $\kappa$-th nearest instance among $H_i$, and $m \in M_i$ is said to be the $\rho$-th target miss of an instance i if it is the $\rho$-th nearest instance among $M_i$, where the distance between instances is measured by the weighted Euclidean distance (1). The $\kappa$-th target hit and $\rho$-th target miss of an instance $i \in N_n$ are denoted by $h_i^\kappa$ and $m_i^\rho$, respectively. Target hits and misses are collectively called target neighbors (TNs).¹

Using TNs, the misclassification rate of a binary kNN classifier when k is odd is formulated as $L_{kNN}(w) := n^{-1} \sum_{i \in N_n} I\{d(x_i, x_{h_i^\kappa} \mid w) > d(x_i, x_{m_i^\rho} \mid w)\}$ with $\kappa = \rho = (k+1)/2$, where $I(\cdot)$ is the indicator function with I(z) = 1 if z is true and I(z) = 0 otherwise. For example, in binary 3NN classification, an instance is misclassified if and only if the distance to the 2nd target hit is larger than the distance to the 2nd target miss (see Figure 1). The misclassification cost of a multiclass problem can also be formulated using TNs similarly, but we omit the details for the sake of simplicity. Since the indicator function $I(\cdot)$ included in the loss function $L_{kNN}(w)$ is hard to deal with directly, we introduce the nearest neighbor (NN) margin² as a surrogate:

Definition 2 (Nearest neighbor (NN) margin) Given a weight vector w, the $(\kappa, \rho)$-neighbor margin is defined as $d(x_i, x_{m_i^\rho} \mid w) - d(x_i, x_{h_i^\kappa} \mid w)$ for $i \in N_n$, $\kappa \in N_{|H_i|}$, and $\rho \in N_{|M_i|}$.

Based on the NN margin, our loss function is defined as $L(w) := n^{-1} \sum_{i \in N_n} \big( d(x_i, x_{h_i^\kappa} \mid w) - d(x_i, x_{m_i^\rho} \mid w) \big)$. By minimizing L(w), the average $(\kappa, \rho)$-neighbor margin over all instances is maximized. This loss function allows us to find feature weights such that the distance to the $\kappa$-th target hit is as small as possible, while the distance to the $\rho$-th target miss is as large as possible.

A regularization term is introduced for incorporating our prior knowledge of the weight vector. Let $\bar{w} \in [0,1]^\ell$ be our prior weight vector, and we use the regularization term of the form $\Omega(w) := \frac{1}{2}\|w - \bar{w}\|_2^2$. For example, if we choose $\bar{w} := \ell^{-1}\mathbf{1}$, it implies that our baseline choice of the feature weights is uniform, i.e., the Euclidean distance metric [6]. Given the loss term L(w) and the regularization term $\Omega(w)$, the feature weighting problem we study in this paper is formulated as

$$\min_{w} \;\; \lambda n^{-1} \sum_{i \in N_n} \big( d(x_i, x_{h_i^\kappa} \mid w) - d(x_i, x_{m_i^\rho} \mid w) \big) + \frac{1}{2}\|w - \bar{w}\|_2^2 \quad \text{s.t.} \quad \mathbf{1}^\top w = 1, \; w \ge \mathbf{0}, \qquad (2)$$

where $\lambda \in \mathbb{R}_+$ is a regularization parameter controlling the balance between the loss term L(w) and the regularization term $\Omega(w)$. The first equality constraint restricts the sum of the weights to one, while the second constraint requires the weights to be non-negative. The former is introduced for fixing the scale of the distance metric.
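As a minimal illustration of these definitions, the following Python sketch computes the TNs $h_i^\kappa$, $m_i^\rho$ and the margin loss L(w) for a given weight vector. It is a direct transcription of Definitions 1-2 and the loss term of (2), not the authors' implementation, and it ignores ties between equidistant instances.

```python
import numpy as np

def weighted_dist(x, X, w):
    """Weighted squared Euclidean distances d(x, x_i | w), Eq. (1)."""
    return ((X - x) ** 2) @ w

def target_neighbors(X, y, w, kappa, rho):
    """Return (h_i^kappa, m_i^rho) for every instance i (Definition 1)."""
    n = len(y)
    hits, misses = np.empty(n, dtype=int), np.empty(n, dtype=int)
    for i in range(n):
        d = weighted_dist(X[i], X, w)
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]   # H_i
        diff = np.where(y != y[i])[0]                            # M_i
        hits[i] = same[np.argsort(d[same])[kappa - 1]]
        misses[i] = diff[np.argsort(d[diff])[rho - 1]]
    return hits, misses

def margin_loss(X, y, w, kappa, rho):
    """L(w): mean over i of d(x_i, h_i^kappa | w) - d(x_i, m_i^rho | w)."""
    hits, misses = target_neighbors(X, y, w, kappa, rho)
    d_hit = np.array([weighted_dist(X[i], X[[h]], w)[0]
                      for i, h in enumerate(hits)])
    d_miss = np.array([weighted_dist(X[i], X[[m]], w)[0]
                       for i, m in enumerate(misses)])
    return np.mean(d_hit - d_miss)
```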
It is important to note that the TNs {(h_i^\alpha, m_i^\beta)}_{i \in N_n} depend on the weights w because the weighted Euclidean distance (1) is used in their definitions. Thus, we need to properly update the TNs in the optimization process. We refer to this problem as the target neighbor change (TN-change) problem. Since TNs change in a discrete fashion with respect to the weights w, the problem (2) has a non-smooth and non-convex objective function. In the next section, we introduce an algorithm for finding a local minimum solution of (2). An advantage of the proposed algorithm is that it monotonically decreases the objective function in (2), while TNs are properly updated so that they are always kept consistent with the feature space metric given by the weights w in the following sense:

Definition 3 (TN-weight Consistency) A weight vector w and n pairs of instances {(h_i^\alpha, m_i^\beta)}_{i \in N_n} are said to be TN-weight consistent if {(h_i^\alpha, m_i^\beta)}_{i \in N_n} are the TNs when the distance is measured by the weighted Euclidean distance (1) using the weights w.

^1 The terminologies target hit and miss were first used in [7], in which only the 1st target hit and miss were considered. We extend them to the \alpha-th target hit and \beta-th target miss for general \alpha and \beta. The terminology target neighbors (TNs) was first used in [5].
^2 The notion of the nearest neighbor margin was first introduced in [9], where only the case of \alpha = \beta = 1 was considered. We use an extended definition with general \alpha and \beta.

Figure 1 illustrates how TNs are defined. In the Euclidean feature space with w_1 = w_2 = 1/2, the 2nd target hit and miss of the instance x_0 are given by (h_0^2, m_0^2) = (x_2, x_6). Since d(x_0, x_2 | w) > d(x_0, x_6 | w), the instance x_0 is misclassified in 3NN classification. On the other hand, in the weighted feature space with (w_1, w_2) = (2/3, 1/3), the 2nd target hit and miss of the instance x_0 are given by (h_0^2, m_0^2) = (x_1, x_5). Since d(x_0, x_1 | w) < d(x_0, x_5 | w) under this weighted metric, the instance x_0 is correctly classified in 3NN classification.

3 Algorithm

The problem (2) can be formulated as a convex quadratic program (QP) if the TNs are regarded as fixed. Based on this fact, our feature weighting algorithm solves a sequence of such QPs, while the TNs are properly updated so that they always remain consistent.

3.1 Active Set QP Formulation

First, we study the problem (2) under the condition that the TNs remain unchanged. Let us define the following sets of indices:

Definition 4 Given a weight vector w and the consistent TNs {(h_i^\alpha, m_i^\beta)}_{i \in N_n}, define the following sets of index pairs for \star being <, =, and >:

H^{[\star]} := {(i, h) \in N_n \times H_i | d(x_i, x_h | w) \star d(x_i, x_{h_i^\alpha} | w)},
M^{[\star]} := {(i, m) \in N_n \times M_i | d(x_i, x_m | w) \star d(x_i, x_{m_i^\beta} | w)}.

They are collectively denoted by (H, M), where H := {H^{[<]}, H^{[=]}, H^{[>]}} and M := {M^{[<]}, M^{[=]}, M^{[>]}}. Furthermore, for each i \in N_n, we define H_i^{[\star]} := {h | (i, h) \in H^{[\star]}} and M_i^{[\star]} := {m | (i, m) \in M^{[\star]}}.

Under the condition that {(h_i^\alpha, m_i^\beta)}_{i \in N_n} remain TN-weight consistent, the problem (2) is written as

min_{w \in R^\ell, \xi \in R^n, \eta \in R^n}  \lambda n^{-1} \sum_{i \in N_n} (\xi_i - \eta_i) + (1/2) ||w - \bar{w}||_2^2    (3a)
s.t.  1^\top w = 1, w \geq 0,    (3b)
d(x_i, x_h | w) \leq \xi_i, (i, h) \in H^{[<]},   d(x_i, x_m | w) \leq \eta_i, (i, m) \in M^{[<]},    (3c)
d(x_i, x_h | w) = \xi_i, (i, h) \in H^{[=]},   d(x_i, x_m | w) = \eta_i, (i, m) \in M^{[=]},    (3d)
d(x_i, x_h | w) \geq \xi_i, (i, h) \in H^{[>]},   d(x_i, x_m | w) \geq \eta_i, (i, m) \in M^{[>]}.    (3e)

In the above, we introduced slack variables \xi_i and \eta_i for i \in N_n, which represent the weighted distances to the target hit and miss, respectively.
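The index sets of Definition 4 can be built directly from the pairwise distances. A sketch follows; exact floating-point equality is used for brevity where an implementation would use a tolerance:

```python
def index_sets(D, y, hits, misses):
    """Partition candidate pairs into the sets of Definition 4 by comparing
    each candidate's weighted distance with the distance to the current TN.
    D is the pairwise distance matrix; hits/misses hold h_i^alpha, m_i^beta."""
    H = {'<': [], '=': [], '>': []}
    M = {'<': [], '=': [], '>': []}
    for i in range(len(y)):
        d_hit, d_miss = D[i, hits[i]], D[i, misses[i]]
        for j in range(len(y)):
            if j != i and y[j] == y[i]:
                key = '<' if D[i, j] < d_hit else ('=' if D[i, j] == d_hit else '>')
                H[key].append((i, j))
            elif y[j] != y[i]:
                key = '<' if D[i, j] < d_miss else ('=' if D[i, j] == d_miss else '>')
                M[key].append((i, j))
    return H, M
```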
In (3), TN-weight consistency is represented by the set of linear constraints (3c)-(3e)^3. Our algorithm handles TN change as a change in the index sets (H, M), and a sequence of convex QPs of the form (3) are (partially) solved every time the index sets (H, M) are updated. We implement this approach using an active set QP algorithm (see Chapter 16 in [15]). Briefly, the active set QP algorithm repeats the following two steps: (step 1) estimate the optimal active set^4, and (step 2) solve an equality-constrained QP, regarding the constraints in the current active set as equality constraints while all other non-active constraints are temporarily disregarded. An advantage of introducing the active set QP algorithm is that TN change can be naturally handled as active set change. Specifically, a change of target hits is interpreted as an exchange of members between H^{[<]} and H^{[=]} or between H^{[>]} and H^{[=]}, while a change of target misses is interpreted as an exchange of members between M^{[<]} and M^{[=]} or between M^{[>]} and M^{[=]}.

^3 Note that the constraints for (H^{[<]}, H^{[=]}, H^{[>]}) in (3c)-(3e) restrict h to remain the target hit of i for all (i, h) \in H^{[=]}, because those closer than the target hit must remain closer and those more distant than the target hit must remain more distant. Similarly, the constraints for (M^{[<]}, M^{[=]}, M^{[>]}) in (3c)-(3e) restrict m to remain the target miss of i for all (i, m) \in M^{[=]}.
^4 A constraint satisfied with equality is called active, and the set of active constraints is called the active set.

3.2 Sequential QP-based Feature Weighting Algorithm

Here, we present our feature weighting algorithm. We first formulate the equality-constrained QP (EQP) of (3). Then we present how to update the EQP by changing the active sets. In order to formulate the EQP of (3), we introduce another pair of index sets Z := {j | w_j = 0} and P := {j | w_j > 0}. Suppose that we currently have a solution (w, \xi, \eta) and the active set (H^{[=]}, M^{[=]}, Z). We first check whether the solution minimizes the loss function (3a) in the subspace defined by the active set. If not, we compute a step (\Delta w, \Delta\xi, \Delta\eta) by solving an EQP:

min_{\Delta w, \Delta\xi, \Delta\eta}  \lambda n^{-1} \sum_{i \in N_n} ((\xi_i + \Delta\xi_i) - (\eta_i + \Delta\eta_i)) + (1/2) ||(w + \Delta w) - \bar{w}||_2^2
s.t.  1^\top (w + \Delta w) = 1,   w_j + \Delta w_j = 0, j \in Z,
\phi_{i,h}^\top (w + \Delta w) = \xi_i + \Delta\xi_i, (i, h) \in H^{[=]},    (4)
\phi_{i,m}^\top (w + \Delta w) = \eta_i + \Delta\eta_i, (i, m) \in M^{[=]}.

The solution of the EQP (4) can be obtained analytically by solving a small linear system (see Supplement A). Next, we decide how far we can move the solution along this direction. We set w \leftarrow w + \tau \Delta w, \xi \leftarrow \xi + \tau \Delta\xi, \eta \leftarrow \eta + \tau \Delta\eta, where \tau \in [0, 1] is the step length determined by the following lemma.

Lemma 5 The maximum step length that satisfies feasibility and TN-weight consistency is given by

\tau := min{ 1,
  min_{j \in P, \Delta w_j < 0} ( -w_j / \Delta w_j ),
  min_{(i,h) \in H^{[<]}, \phi_{i,h}^\top \Delta w > \Delta\xi_i} ( -(\phi_{i,h}^\top w - \xi_i) / (\phi_{i,h}^\top \Delta w - \Delta\xi_i) ),
  min_{(i,h) \in H^{[>]}, \phi_{i,h}^\top \Delta w < \Delta\xi_i} ( -(\phi_{i,h}^\top w - \xi_i) / (\phi_{i,h}^\top \Delta w - \Delta\xi_i) ),
  min_{(i,m) \in M^{[<]}, \phi_{i,m}^\top \Delta w > \Delta\eta_i} ( -(\phi_{i,m}^\top w - \eta_i) / (\phi_{i,m}^\top \Delta w - \Delta\eta_i) ),
  min_{(i,m) \in M^{[>]}, \phi_{i,m}^\top \Delta w < \Delta\eta_i} ( -(\phi_{i,m}^\top w - \eta_i) / (\phi_{i,m}^\top \Delta w - \Delta\eta_i) ) }.    (5)

The proof of the lemma is presented in Supplement B. If \tau < 1, the constraint at which the minimum in (5) is achieved (called the blocking constraint) is added to the active set. For example, if (i, h) \in H^{[>]} achieved the minimum in (5), (i, h) is moved from H^{[>]} to H^{[=]}. We repeat this, adding constraints to the active set, until we reach the solution (w, \xi, \eta) that minimizes the objective function over the current active set.
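Structurally, Lemma 5 is a ratio test. The following schematic sketch is ours, not the paper's implementation; the residuals and rates are assumed to be precomputed from the numerators and denominators in (5):

```python
def max_step(w, dw, P, residuals, rates):
    """Ratio test in the spirit of Lemma 5 (sketch). residuals[k] is the
    slack of inactive constraint k at the current point; rates[k] is how
    fast a unit step along the search direction consumes that slack."""
    tau, blocking = 1.0, None
    for j in P:                                 # keep w_j + tau * dw_j >= 0
        if dw[j] < 0 and -w[j] / dw[j] < tau:
            tau, blocking = -w[j] / dw[j], ('weight', j)
    for k, (res, rate) in enumerate(zip(residuals, rates)):
        if rate > 0 and res / rate < tau:       # constraint k being consumed
            tau, blocking = res / rate, ('pair', k)
    return tau, blocking
```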
Next, we need to consider whether the objective function of (2) can be further decreased by removing constraints from the active set. Our algorithm and the standard active set QP algorithm differ in this operation: in our algorithm, an active constraint is allowed to become inactive only when the \alpha-th target hit remains a member of H^{[=]} and the \beta-th target miss remains a member of M^{[=]}. Let us introduce the Lagrange multipliers \mu \in R^{|Z|}, \nu \in R^{|H^{[=]}|}, and \rho \in R^{|M^{[=]}|} for the 2nd, the 3rd, and the 4th constraint in (4), respectively (see Supplement A for details). Then the following lemma tells us which active constraint should be removed.

Lemma 6 The objective function in (2) can be further decreased, while satisfying feasibility and TN-weight consistency, by removing one of the constraints in the active set according to the following rules^5:

- If \mu_j > 0 for j \in Z, then move {j} to P;
- If \nu_{(i,h)} < 0, |H_i^{[<]}| \leq \alpha - 2 and |H_i^{[=]}| \geq 2 for (i, h) \in H^{[=]}, then move (i, h) to H^{[<]};
- If \nu_{(i,h)} > 0, |H_i^{[>]}| < |H_i| - \alpha and |H_i^{[=]}| \geq 2 for (i, h) \in H^{[=]}, then move (i, h) to H^{[>]};
- If \rho_{(i,m)} < 0, |M_i^{[<]}| \leq \beta - 2 and |M_i^{[=]}| \geq 2 for (i, m) \in M^{[=]}, then move (i, m) to M^{[<]};
- If \rho_{(i,m)} > 0, |M_i^{[>]}| < |M_i| - \beta and |M_i^{[=]}| \geq 2 for (i, m) \in M^{[=]}, then move (i, m) to M^{[>]}.

^5 If multiple active constraints are selected by these rules, the one with the largest absolute Lagrange multiplier is removed from the active set.

The proof of the lemma is presented in Supplement C. The proposed feature weighting algorithm, which we call the Sequential QP-based Feature Weighting (SQP-FW) algorithm, is summarized in Algorithm 1.

Algorithm 1 Sequential QP-based Feature Weighting (SQP-FW) Algorithm
  Inputs: the training instances {(x_i, y_i)}_{i \in N_n}, the neighborhood parameters (\alpha, \beta), the regularization parameter \lambda, and the initial weight vector \bar{w};
  Initialize w \leftarrow \bar{w}, (\xi, \eta) and (H, M, Z, P);
  for t = 1, 2, ... do
    Solve (4) to find (\Delta w, \Delta\xi, \Delta\eta);
    if (\Delta w, \Delta\xi, \Delta\eta) = 0 then
      Compute the Lagrange multipliers \mu, \nu, and \rho;
      if none of the active constraints satisfies the rules in Lemma 6 then
        stop with solution w^* = w;
      else
        Update (H, M, Z, P) according to the rules in Lemma 6;
    else
      Compute the step size \tau as in Lemma 5 and update (w, \xi, \eta);
      if there are blocking constraints then
        Update (H, M, Z, P) by adding one of the blocking constraints in Lemma 5;
  Outputs: a locally optimal vector of feature weights w^*.

The proposed SQP-FW algorithm possesses the following useful properties.

Optimality conditions: We can characterize a local optimal solution of the non-smooth and non-convex problem (2) by the following theorem (its proof is presented in Supplement D):

Theorem 7 (Optimality condition) Consider a weight vector w satisfying 1^\top w = 1 and w \geq 0, the consistent TNs {(h_i^\alpha, m_i^\beta)}_{i \in N_n}, and the index sets (H, M, Z, P). Then, w is a local minimum solution of the problem (2) if and only if the EQP (4) has the solution (\Delta w, \Delta\xi, \Delta\eta) = 0 and there are no active constraints that satisfy the rules in Lemma 6.

This theorem is practically useful because it guarantees that the solution cannot be improved in its neighborhood even if some of the current TNs are replaced with others. Without such an optimality condition, we would have to check all possible combinations of TN changes from the current solution in a trial-and-error manner. The above theorem allows us to avoid such a time-consuming procedure.
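Putting Lemmas 5 and 6 together, the outer loop of Algorithm 1 has the following skeleton. This is a structural sketch only: the four callables and the state object stand in for the paper's subroutines and are assumptions of ours, not its actual interface.

```python
def sqp_fw(w0, state, solve_eqp, multipliers, ratio_test, release_rule,
           max_iter=100_000):
    """Skeleton of Algorithm 1 (SQP-FW). solve_eqp solves EQP (4),
    multipliers returns (mu, nu, rho), ratio_test implements Lemma 5,
    and release_rule applies Lemma 6 (returning None when no rule fires)."""
    w = w0.copy()
    for _ in range(max_iter):
        dw, dxi, deta = solve_eqp(w, state)
        if not any(abs(v).max() > 0 for v in (dw, dxi, deta)):
            move = release_rule(multipliers(w, state), state)
            if move is None:
                return w                    # local optimum by Theorem 7
            state.release(move)             # drop a constraint (Lemma 6)
        else:
            tau, blocking = ratio_test(w, dw, dxi, deta, state)
            w = w + tau * dw
            state.advance(tau, dxi, deta)   # update the slacks xi, eta
            if blocking is not None:
                state.add(blocking)         # add blocking constraint (TN change)
    return w
```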
Finite termination property: It can be shown that the SQP-FW algorithm converges to a local minimum solution characterized by Theorem 7 in a finite number of iterations, based on an argument similar to that on pages 477-478 of [15]. See Supplement E for details.

Computational complexity: When computing the solutions (\Delta w, \Delta\xi, \Delta\eta) and the Lagrange multipliers (\mu, \nu, \rho) by solving the EQP (4), the main computational cost is only several matrix-vector multiplications involving n \times |P| and n \times |Z| matrices, which is linear with respect to n (see Supplement A for details). On the other hand, if the minimum step length \tau is computed naively by Lemma 5, it takes O(n^2 |P|) computations, which could be a bottleneck of the algorithm. However, this bottleneck can be eased by introducing a working set approach: only a fixed number of constraints in the working set are evaluated at each step, while the working set is updated, say, every 100 steps. In our implementation, we introduced such working sets for H^{[>]} and M^{[>]}. For each i \in N_n, these working sets contain, say, only the top 100 nearest instances. This strategy is based on the natural idea that instances outside of the top 100 nearest ones would not become TNs within the next 100 steps. Such a working set strategy reduces the computational complexity of computing the minimum step length \tau to O(n |P|), which is linear with respect to n.

Regularization path tracking: The SQP-FW algorithm can be naturally combined with a regularization path tracking algorithm for computing a path of solutions that satisfy the optimality condition in Theorem 7 for a range of the regularization parameter \lambda. Due to space limitations, we only describe the outline here (see Supplement F for details). The algorithm starts from a local optimal solution for a fixed regularization parameter \lambda. Then, the algorithm continues finding the optimal solutions as \lambda is slightly increased. It can be shown that the local optimal solution of (2) is a piecewise-linear function of \lambda as long as the TNs remain unchanged. If \lambda is further increased, we encounter a point at which the TNs must be updated. Such TN changes can be easily detected and handled because the TN-weight consistency conditions are represented by a set of linear constraints (see (3c)-(3e)), and we already have explicit rules (Lemmas 5 and 6) for updating the constraints. The regularization path tracking algorithm provides an efficient and insightful approach for model selection.

4 Experiments

In this section, we investigate the experimental performance of the proposed algorithm^6.

4.1 Comparison Using UCI Data Sets

First, we compare the proposed SQP-FW algorithm with existing feature weighting algorithms, which handle the TN-change problem in different ways.

- Relief [7, 8]: The Relief algorithm is an online feature weighting algorithm. The goal of Relief is to maximize the average (1, 1)-neighbor margin over instances. The TNs {(h_i^1, m_i^1)}_{i \in N_n} are determined by the initial Euclidean metric and fixed throughout the training process.
- Simba [9]: Simba is also an online algorithm aiming to maximize the average (1, 1)-neighbor margin. The key difference from Relief is that the TNs {(h_i^1, m_i^1)}_{i \in N_n} are updated in each step using the current feature-space metric. The TN-change problem is alleviated in Simba by this reassignment.
- MulRel: To mitigate the TN-weight inconsistency in Relief, we repeat the Relief procedure using the TNs defined by the learned weights of the previous loop (see also [5]).
- NCA-D [4]: Neighborhood component analysis with a diagonal metric, which is essentially the same as I-Relief [10, 11]. Instead of discretely assigning TNs, the probability of an instance being a TN is considered. Using these stochastic neighbors, the average margin is formulated as a continuous (non-convex) function of the weights, by which the TN-change problem is mitigated.

We compared the NN classification performance of these 4 algorithms and the SQP-FW algorithm on the 10 UCI benchmark data sets summarized in Table 1. In each data set, we randomly divided the entire data set into training, validation, and test sets of equal sizes. The number of neighbors k \in {1, 3, 5} was selected based on the classification performance on the validation set. In the SQP-FW algorithm, the neighborhood parameters (\alpha, \beta) and the regularization parameter \lambda were also determined to maximize the classification accuracy on the validation set. The neighborhood parameters (\alpha, \beta) were chosen from {(1, 1), (2, 2), (3, 3)}, while \lambda was chosen from 100 evenly allocated candidates in log scale between 10^{-3} and 10^{0}. The working set strategy was used when n > 1000, with working set size 100 and working set update frequency 100. None of the 4 existing algorithms has explicit hyper-parameters. However, since these algorithms also carry a risk of overfitting, we removed features with small weights, following the recommendation in [7, 11]. We implemented this heuristic for all 4 existing algorithms by optimizing the percentage of eliminated features (chosen from {0%, 1%, 2%, ..., 99%}) based on the classification performance on the validation set. Since Simba and NCA are formulated as non-convex optimization problems and their solutions may be trapped in local minima, we ran these two algorithms from five randomly selected starting points and adopted the solution with the smallest training error. The number of iterations in Relief (and the inner loop of MulRel as well) and Simba was set to 1000, and the number of outer-loop iterations of MulRel was set to 100. The experiments were repeated 10 times with random data splitting, and the average performance is reported. To assess the statistical significance of the differences, a paired-sample t-test was conducted. All features were standardized to have zero mean and unit variance. Table 1 summarizes the results, showing that the SQP-FW algorithm compares favorably with the other methods.

^6 See also Supplement G for an illustration of the behavior of the proposed algorithm using an artificial dataset.

Table 1: Average misclassification rate of the kNN classifier on 10 UCI benchmark data sets.

Abbreviated Data Name    S.S.    ℓ   N.C.  SQP-FW   Relief   Simba    MulRel   NCA-D
Bre. Can. Dia.            569   30    2    *0.040    0.047    0.046    0.056    0.058
Con. Ben.                 208   60    2    *0.221    0.227    0.230    0.294    0.276
Ima. Seg.                2310   18    7     0.052   *0.049    0.061    0.065    0.049
Ionosphere                351   33    2     0.122    0.162    0.115    0.138   *0.097
Pag. Blo. Cla.           5473   10    5     0.046    0.048   *0.044    0.053    0.044
Parkinson                 195   22    2    *0.102    0.117    0.123    0.109    0.128
Pen. Rec. Han. Dig.     10992   16   10    *0.011    0.012    0.012    0.020    0.029
Spambase                 4601   57    2    *0.104    0.108    0.110    0.117    0.112
Wav. Dat. Gen. ver1      5000   21    3    *0.184    0.202    0.217    0.227    0.195
Win. Qua.                6497   11    7    *0.463    0.499    0.471    0.494    0.495

"S.S." and "N.C." stand for sample size and the number of classes, respectively. The asterisk "*" indicates the best among the 5 algorithms, while boldface means no statistical difference from the best (p-value \geq 0.05).
Table 2: Results of the microarray data experiments.

Microarray Data Name    S.S.      ℓ   N.C.  Standard 1NN Error   SQP-FW 1NN Error   Med. #(genes)
Colon Cancer [16]         62   2000    2    0.180 ± 0.059        0.140 ± 0.065           20
Kidney Cancer [17]        74   4224    3    0.075 ± 0.043        0.050 ± 0.038           10
Leukemia [18]             72   7129    2    0.108 ± 0.022        0.088 ± 0.036           14
Prostate Cancer [19]     102  12600    2    0.230 ± 0.048        0.194 ± 0.052           24

"S.S.", "ℓ", and "N.C." stand for sample size, the number of features, and the number of classes, respectively. "Error" represents the misclassification error rate of the 1NN classifier (standard, and weighted with SQP-FW), while "Med. #(genes)" indicates the median number of genes selected by the SQP-FW algorithm over 10 runs.

4.2 Application to Feature Selection in High-Dimensional Microarray Data

In order to illustrate feature selection performance, we applied the SQP-FW algorithm to microarray studies, in which simple classification algorithms are often preferred because the number of features (genes) ℓ is usually much larger than the number of instances (patients) n. Since biologists are interested in identifying a set of genes that governs the difference among biological phenotypes (such as cancer subtypes), selecting a subset of genes that yields good NN classification performance would be practically valuable. For each of the four microarray data sets in Table 2, we divided the entire set into training and test sets with size ratio 2:1 [2]. We compared the test set classification performance of the plain 1NN classifier (without feature weighting) and the weighted 1NN classifier with the weights determined by the SQP-FW algorithm. In the latter, the neighborhood parameters were fixed to \alpha = \beta = 1 and \lambda was determined by 10-fold cross-validation within the training set. We repeated the data splitting 10 times and report the average performance. Table 2 summarizes the results; the median numbers of genes selected (features with nonzero weights) by the SQP-FW algorithm are also reported. Although the improvements in classification performance were not statistically significant (we could not expect much improvement from feature weighting because the misclassification rates of the plain 1NN classifier are already very low), the number of genes used for NN classification can be greatly reduced. The results illustrate the potential advantage of feature selection using the SQP-FW algorithm.

5 Discussion and Conclusion

TN change is a fundamental problem in feature extraction and selection for NN classifiers. Our contribution in this paper was to present a feature weighting algorithm that can systematically handle TN changes and guarantee TN-weight consistency. An important future direction is to generalize our TN-weight consistent feature weighting scheme to feature extraction (i.e., metric learning).

Acknowledgment

IT was supported by MEXT KAKENHI 21200001 and 23700165, and MS was supported by MEXT KAKENHI 23120004.

References

[1] A. S. Das, M. Datar, A. Garg, and S. Rajaram. Google news personalization: Scalable online collaborative filtering. In Proceedings of the 16th International Conference on World Wide Web, pages 271-280. ACM, 2007.
[2] S. Dudoit, J. Fridlyand, and T. P. Speed. Comparison of discrimination methods for the classification of tumors using gene expression data. Journal of the American Statistical Association, 97(457):77-87, 2002.
[3] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 505-512. MIT Press, Cambridge, MA, 2003.
[4] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 513-520. MIT Press, Cambridge, MA, 2005.
[5] K. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor classification. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1473-1480. MIT Press, Cambridge, MA, 2006.
[6] J. Davis, B. Kulis, P. Jain, S. Sra, and I. Dhillon. Information-theoretic metric learning. In Proceedings of the 24th International Conference on Machine Learning, pages 209-216, 2007.
[7] K. Kira and L. Rendell. A practical approach to feature selection. In Proceedings of the 9th International Conference on Machine Learning, pages 249-256, 1992.
[8] I. Kononenko. Estimating attributes: Analysis and extensions of Relief. In Proceedings of the European Conference on Machine Learning, pages 171-182, 1994.
[9] R. Gilad-Bachrach, A. Navot, and N. Tishby. Margin based feature selection - theory and algorithms. In Proceedings of the 21st International Conference on Machine Learning, pages 43-50, 2004.
[10] Y. Sun and J. Li. Iterative Relief for feature weighting. In Proceedings of the 23rd International Conference on Machine Learning, pages 913-920, 2006.
[11] Y. Sun, S. Todorovic, and S. Goodison. Local learning based feature selection for high dimensional data analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1610-1626, 2010.
[12] K. Wagstaff, C. Cardie, S. Rogers, and S. Schroedl. Constrained k-means clustering with background knowledge. In Proceedings of the 18th International Conference on Machine Learning, pages 577-584, 2001.
[13] M. Sugiyama. Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis. Journal of Machine Learning Research, 8:1027-1061, 2007.
[14] T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu. The entire regularization path for the support vector machine. Journal of Machine Learning Research, 5:1391-1415, 2004.
[15] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[16] U. Alon, N. Barkai, D. A. Notterman, K. Gish, et al. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Natl. Acad. Sci. USA, 96:6745-6750, 1999.
[17] H. Sueltmann, A. von Heydebreck, W. Huber, R. Kuner, et al. Gene expression in kidney cancer is associated with novel tumor subtypes, cytogenetic abnormalities and metastasis formation. 8:1027-1061, 2007.
[18] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, et al. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286:531-537, 1999.
[19] D. Singh, P. G. Febbo, K. Ross, D. G. Jackson, et al. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell, 1:203-209, 2002.
On the Completeness of First-Order Knowledge Compilation for Lifted Probabilistic Inference

Guy Van den Broeck
Department of Computer Science, Katholieke Universiteit Leuven
Celestijnenlaan 200A, B-3001 Heverlee, Belgium
[email protected]

Abstract

Probabilistic logics are receiving a lot of attention today because of their expressive power for knowledge representation and learning. However, this expressivity is detrimental to the tractability of inference, when done at the propositional level. To solve this problem, various lifted inference algorithms have been proposed that reason at the first-order level, about groups of objects as a whole. Despite the existence of various lifted inference approaches, there are currently no completeness results about these algorithms. The key contribution of this paper is that we introduce a formal definition of lifted inference that allows us to reason about the completeness of lifted inference algorithms relative to a particular class of probabilistic models. We then show how to obtain a completeness result using a first-order knowledge compilation approach for theories of formulae containing up to two logical variables.

1 Introduction and related work

Probabilistic logic models build on first-order logic to capture relational structure and on graphical models to represent and reason about uncertainty [1, 2]. Due to their expressivity, these models can concisely represent large problems with many interacting random variables. While the semantics of these logics is often defined through grounding the models [3], performing inference at the propositional level is, as for first-order logic, inefficient. This has motivated the quest for lifted inference methods that exploit the structure of probabilistic logic models for efficient inference, by reasoning about groups of objects as a whole and avoiding repeated computations. The first approaches to exact lifted inference upgraded the variable elimination algorithm to the first-order level [4, 5, 6]. More recent work is based on methods from logical inference [7, 8, 9, 10], such as knowledge compilation. While these approaches often yield dramatic improvements in runtime over propositional inference methods on specific problems, it is still largely unclear for which classes of models these lifted inference operators will be useful and for which ones they will eventually have to resort to propositional inference. One notable exception in this regard is lifted belief propagation [11], which performs exact lifted inference on any model whose factor graph representation is a tree.

A first contribution of this paper is that we introduce a notion of domain lifted inference, which formally defines what lifting means, and which can be used to characterize the classes of probabilistic models to which lifted inference applies. Domain lifted inference essentially requires that probabilistic inference run in polynomial time in the domain size of the logical variables appearing in the model. As a second contribution we show that the class of models expressed as 2-WFOMC formulae (weighted first-order model counting with up to 2 logical variables per formula) can be domain lifted using an extended first-order knowledge compilation approach [10]. The resulting approach allows for lifted inference even in the presence of (anti-)symmetric or total relations in a theory. These are extremely common and useful concepts that cannot be lifted by any of the existing first-order knowledge compilation inference rules.
2 Background

We will use standard concepts of function-free first-order logic (FOL). An atom p(t_1, ..., t_n) consists of a predicate p/n of arity n followed by n arguments, which are either constants or logical variables. An atom is ground if it does not contain any variables. A literal is an atom a or its negation \neg a. A clause is a disjunction l_1 \vee ... \vee l_k of literals. If k = 1, it is a unit clause. An expression is an atom, literal or clause. The pred(a) function maps an atom to its predicate and the vars(e) function maps an expression to its logical variables. A theory in conjunctive normal form (CNF) is a conjunction of clauses. We often represent theories by their set of clauses and clauses by their set of literals. Furthermore, we will assume that all logical variables are universally quantified. In addition, we associate a set of constraints with each clause or atom, either of the form X \neq t, where X is a logical variable and t is a constant or variable, or of the form X \in D, where D is a domain, or the negation of these constraints. These define a finite domain for each logical variable. Abusing notation, we will use constraints of the form X = t to denote a substitution of X by t. The function atom(e) maps an expression e to its atoms, now associating the constraints on e with each atom individually. To add the constraint c to an expression e, we use the notation e \wedge c. Two atoms unify if there is a substitution which makes them identical and if the conjunction of the constraints on both atoms with the substitution is satisfiable. Two expressions e_1 and e_2 are independent, written e_1 \perp e_2, if no atom a_1 \in atom(e_1) unifies with an atom a_2 \in atom(e_2).

We adopt the Weighted First-Order Model Counting (WFOMC) [10] formalism to represent probabilistic logic models, building on the notion of a Herbrand interpretation. Herbrand interpretations are subsets of the Herbrand base HB(T), which consists of all ground atoms that can be constructed with the available predicates and constant symbols in T. The atoms in a Herbrand interpretation are assumed to be true. All other atoms in HB(T) are assumed to be false. An interpretation I satisfies a theory T, written as I \models T, if it satisfies all the clauses c \in T. The WFOMC problem is defined on a weighted logic theory T, which is a logic theory augmented with a positive weight function w and a negative weight function \bar{w}, both assigning a weight to each predicate. The WFOMC problem involves computing

wmc(T, w, \bar{w}) = \sum_{I \models T} \prod_{a \in I} w(pred(a)) \prod_{a \in HB(T) \setminus I} \bar{w}(pred(a)).    (1)

3 First-order knowledge compilation for lifted probabilistic inference

3.1 Lifted probabilistic inference

A first-order probabilistic model defines a probability distribution P over the set of Herbrand interpretations H. Probabilistic inference in these models is concerned with computing the posterior probability P(q | e) of a query q given evidence e, where q and e are logical expressions in general:

P(q | e) = \sum_{h \in H, h \models q \wedge e} P(h) / \sum_{h \in H, h \models e} P(h)    (2)
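For intuition, Equation (1) can be evaluated by brute force, enumerating all Herbrand interpretations; the query probability (2) is then a ratio of two such counts, wmc(T \wedge q \wedge e) / wmc(T \wedge e). A sketch of our own (atoms as (predicate, args) tuples, `satisfies` an assumed callable checking I \models T):

```python
from itertools import product

def wfomc_brute_force(herbrand_base, satisfies, w, w_bar):
    """Brute-force WFOMC (Equation (1)). Sums, over every subset I of the
    Herbrand base that satisfies the theory, the product of positive
    weights of true atoms and negative weights of false atoms. The loop is
    exponential in |HB(T)| -- exactly the cost lifted inference avoids."""
    atoms = list(herbrand_base)
    total = 0.0
    for bits in product([False, True], repeat=len(atoms)):
        interpretation = frozenset(a for a, b in zip(atoms, bits) if b)
        if satisfies(interpretation):
            weight = 1.0
            for (predicate, _), is_true in zip(atoms, bits):
                weight *= w[predicate] if is_true else w_bar[predicate]
            total += weight
    return total
```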
We propose one notion of lifted inference for first-order probabilistic models, defined in terms of the computational complexity of inference with respect to the domains of the logical variables. It is clear that other notions of lifted inference are conceivable, especially in the case of approximate inference.

Definition 1 (Domain Lifted Probabilistic Inference). A probabilistic inference procedure is domain lifted for a model m, query q and evidence e iff the inference procedure runs in polynomial time in |D_1|, ..., |D_k| with D_i the domain of the logical variable v_i \in vars(m, q, e).

Domain lifted inference does not prohibit the algorithm from being exponential in the size of the vocabulary, that is, the number of predicates, arguments and constants, of the probabilistic model, query and evidence. In fact, the definition allows inference to be exponential in the number of constants which occur in arguments of atoms in the theory, query or evidence, as long as it is polynomial in the cardinality of the logical variable domains. This definition of lifted inference stresses the ability to efficiently deal with the domains of the logical variables that arise, regardless of their size, and formalizes what seems to be generally accepted in the lifted inference literature.

A class of probabilistic models is a set of probabilistic models expressed in a particular formalism. As examples, consider Markov logic networks (MLN) [12] or parfactors [4], or the weighted FOL theories for WFOMC that we introduced above, when the weights are normalized.

Definition 2 (Completeness). Restricting queries to atoms and evidence to a conjunction of literals, a procedure that is domain lifted for all probabilistic models m in a class of models M and for all queries q and evidence e is called complete for M.

3.2 First-order knowledge compilation

First-order knowledge compilation is an approach to lifted probabilistic inference consisting of the following three steps (see Van den Broeck et al. [10] for details):

1. Convert the probabilistic logical model to a weighted CNF. Converting MLNs or parfactors requires adding new atoms to the theory that represent the (truth) value of each factor or formula.

Figure 1: Friends-smokers example (taken from [10]): (a) the MLN model, a single formula friends(X, Y) \wedge smokes(X) \Rightarrow smokes(Y) with weight 2; (b) the weighted CNF theory, which introduces an auxiliary atom f(X, Y) encoding the truth value of the MLN formula; (c) the weight functions, which assign f the positive weight e^2 and all other positive and negative weights 1; (d) the compiled first-order d-DNNF circuit, built from set-disjunctions, set-conjunctions, deterministic disjunctions, decomposable conjunctions and unit clause leaves.

Example 1. The MLN in Figure 1a assigns a weight to a formula in FOL. Figure 1b represents the same model as a weighted CNF, introducing a new atom f(X, Y) to encode the truth value of the MLN formula. The probabilistic information is captured by the weight functions in Figure 1c.

2. Compile the logical theory into a First-Order d-DNNF (FO d-DNNF) circuit. Figure 1d shows an example of such a circuit. Leaves represent unit clauses. Inner nodes represent the disjunction or conjunction of their children l and r, but with the constraint that disjunctions must be deterministic (l \wedge r is unsatisfiable) and conjunctions must be decomposable (l \perp r).

3. Perform WFOMC inference to compute posterior probabilities. In a FO d-DNNF circuit, WFOMC is polynomial in the size of the circuit and the cardinality of the domains.

To compile the CNF theory into a FO d-DNNF circuit, Van den Broeck et al. [10] propose a set of compilation rules, which we will refer to as CR_1. We will now briefly describe these rules.

Unit Propagation introduces a decomposable conjunction when the theory contains a unit clause. Independence creates a decomposable conjunction when the theory contains independent subtheories. Shannon decomposition applies when the theory contains ground atoms and introduces a deterministic disjunction between two modified theories: one where the ground atom is true, and one where it is false. Shattering splits clauses in the theory until all pairs of atoms represent either a disjoint or identical set of ground atoms.

Example 2. In Figure 2a, the first two clauses are made independent from the friends(X, X) clause and split off in a decomposable conjunction by unit propagation. The unit clause becomes a leaf of the FO d-DNNF circuit, while the other operand requires further compilation.
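Step 3 rests on the fact that, on a d-DNNF, WMC factorizes: deterministic disjunctions add and decomposable conjunctions multiply. A minimal propositional evaluator under a node representation of our own choosing:

```python
def wmc_ddnnf(node, w, w_bar):
    """WMC of a propositional d-DNNF. Nodes are tuples:
    ('leaf', atom, positive), ('and', children) or ('or', children).
    Determinism and decomposability are assumed, not checked: they are
    what make sum/product evaluation correct. Also assumes a smooth
    circuit; otherwise atoms missing from a disjunct must contribute
    extra (w + w_bar) factors."""
    kind = node[0]
    if kind == 'leaf':
        _, atom, positive = node
        return w[atom] if positive else w_bar[atom]
    if kind == 'and':
        result = 1.0
        for child in node[1]:
            result *= wmc_ddnnf(child, w, w_bar)
        return result
    if kind == 'or':
        return sum(wmc_ddnnf(child, w, w_bar) for child in node[1])
    raise ValueError(f"unknown node kind: {kind}")
```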
Independence creates a decomposable conjunction when the theory contains independent subtheories. Shannon decomposition applies when the theory contains ground atoms and introduces a deterministic disjunction between two modified theories: one where the ground atom is true, and one where it is false. Shattering splits clauses in the theory until all pairs of atoms represent either a disjoint or identical set of ground atoms. Example 2. In Figure 2a, the first two clauses are made independent from the friends(X, X) clause and split off in a decomposable conjunction by unit propagation. The unit clause becomes a leaf of the FO d-DNNF circuit, while the other operand requires further compilation. 3 friends(X, Y ) ? dislikes(X, Y ) ? friends(X, Y ) ? likes(X, Y ) friends(X, X) dislikes(X, Y ) ? friends(X, Y ) fun(X) ? ? friends(X, Y ) ? ^ fun(X) ? ? friends(X, Y ) fun(X) ? ? friends(Y, X) _ FunPeople ? People x ? People friends(X, X) friends(X, Y ) ? dislikes(X, Y ), X 6= Y ? friends(X, Y ) ? likes(X, Y ), X 6= Y likes(X, X) dislikes(x, Y ) ? friends(x, Y ) fun(x) ? ? friends(x, Y ) fun(X), X ? FunPeople ? fun(X), X ? / FunPeople fun(X) ? ? friends(X, Y ) fun(X) ? ? friends(Y, X) (a) Unit propagation of friends(X, X) (b) Independent partial grounding (c) Atom counting of fun(X) Figure 2: Examples of compilation rules. Circles are FO d-DNNF inner nodes. White rectangles show theories before and after applying the rule. All variable domains are People. (taken from [10]) Independent Partial Grounding creates a decomposable conjunction over a set of child circuits, which are identical up to the value of a grounding constant. Since they are structurally identical, only one child circuit is actually compiled. Atom Counting applies when the theory contains an atom with a single logical variable X ? D. It explicitly represents the domain D> ? D of X for which the atom is true. It compiles the theory into a deterministic disjunction between all possible such domains. Again, these child circuits are identical up to the value of D> and only one is compiled. Example 3. The theory in Figure 2b is compiled into a decomposable set-conjunction of theories that are independent and identical up to the value of the x constant. The theory in Figure 2c contains an atom with one logical variable: fun(X). Atom counting compiles it into a deterministic setdisjunction over theories that differ in FunPeople, which is the domain of X for which fun(X) is true. Subsequent steps of unit propagation remove the fun(X) atoms from the theory entirely. 3.3 Completeness We will now characterize those theories where the CR 1 compilation rules cannot be used, and where the inference procedure has to resort to grounding out the theory to propositional logic. For these, first-order knowledge compilation using CR 1 is not yet domain lifted. When a logical theory contains symmetric, anti-symmetric or total relations, such as friends(X, Y ) ? friends(Y, X), parent(X, Y ) ? ? parent(Y, X), X 6= Y, ? (X, Y) ? ? (Y, X), or more general formulas, such as enemies(X, Y ) ? ? friend(X, Y ) ? ? friend(Y, X), (3) (4) (5) (6) none of the CR 1 rules apply. Intuitively, the underlying problem is the presence of either: ? Two unifying (not independent) atoms in the same clause which contain the same logical variable in different positions of the argument list. Examples include (the CNF of) Formulas 3, 4 and 5, where the X and Y variable are bound by unifying two atoms from the same clause. ? 
- Two logical variables that bind when unifying one pair of atoms but appear in different positions of the argument list of two other unifying atoms. Examples include Formula 6, whose CNF is

\neg enemies(X, Y) \vee \neg friend(X, Y)
\neg enemies(X, Y) \vee \neg friend(Y, X)

Here, unifying the enemies(X, Y) atoms binds the X variables from both clauses, which appear in different positions of the argument lists of the unifying atoms friend(X, Y) and friend(Y, X).

Both of these properties preclude the use of CR_1 rules. Also in the context of other model classes, such as MLNs, probabilistic versions of the above formulas cannot be processed by CR_1 rules.

Even though first-order knowledge compilation with CR_1 rules does not have a clear completeness result, we can show some properties of theories to which none of the compilation rules apply. First, we need to distinguish between the arity of an atom and its dimension. A predicate with arity two might have atoms with dimension one, when one of the arguments is ground or both are identical.

Definition 3 (Dimension of an Expression). The dimension of an expression e is the number of logical variables it contains: dim(e) = |vars(e)|.

Lemma 1 (CR_1 Postconditions). The CR_1 rules remove all atoms from the theory T which have zero or one logical variable arguments, such that afterwards \forall a \in atom(T): dim(a) > 1. When no CR_1 rule applies, the theory is shattered and contains no independent subtheories.

Proof. Ground atoms are removed by the Shannon decomposition operator followed by unit propagation. Atoms with a single logical variable (including unary relations) are removed by the atom counting operator followed by unit propagation. If T contains independent subtheories, the independence operator can be applied. Shattering is always applied when T is not yet shattered.

4 Extending first-order knowledge compilation

In this section we introduce a new operator which does apply to the theories from Section 3.3.

4.1 Logical variable properties

To formally define the operator we propose, and prove its correctness, we first introduce some mathematical concepts related to the logical variables in a theory (partly after Jha et al. [8]).

Definition 4 (Binding Variables). Two logical variables X, Y are directly binding b(X, Y) if they are bound by unifying a pair of atoms in the theory. The binding relationship b^+(X, Y) is the transitive closure of the directly binding relation b(X, Y).

Example 4. In the theory

\neg p(W, X) \vee \neg q(X)
r(Y) \vee \neg q(Y)
\neg r(Z) \vee s(Z)

the variable pairs (X, Y) and (Y, Z) are directly binding. The variables X, Y and Z are binding. Variable W does not bind to any other variable. Note that the binding relationship b^+(X, Y) is an equivalence relation that defines two equivalence classes: {X, Y, Z} and {W}.

Lemma 2 (Binding Domains). After shattering, binding logical variables have identical domains.

Proof. During shattering (see Section 3.2), when two atoms unify, binding two variables with partially overlapping domains, the atoms' clauses are split up into clauses where the domains of the variables are identical, and clauses where the domains are disjoint and the atoms no longer unify.

Definition 5 (Root Binding Class). A root variable is a variable that appears in all the atoms in its clause. A root binding class is an equivalence class of binding variables where all variables are root.

Example 5. In the theory of Example 4, {X, Y, Z} is a root binding class and {W} is not.
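The binding classes of Definition 4 can be computed with a union-find over unifying atom pairs. A sketch of our own, which naively treats atoms with the same predicate as unifying (ignoring constraints and constants, which the shattered theories considered here would handle):

```python
from itertools import combinations

def binding_classes(clauses):
    """Equivalence classes of b+ (Definition 4) via union-find. Clauses
    are lists of atoms; atoms are (predicate, args) tuples whose args are
    variable names."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    atoms = [a for c in clauses for a in c]
    for _, args in atoms:
        for v in args:
            find(v)                      # register every variable
    for (p1, a1), (p2, a2) in combinations(atoms, 2):
        if p1 == p2:                     # directly binding, position-wise
            for u, v in zip(a1, a2):
                parent[find(u)] = find(v)
    classes = {}
    for v in list(parent):
        classes.setdefault(find(v), set()).add(v)
    return list(classes.values())

# Example 4's theory: {X, Y, Z} bind together, W stands alone.
theory = [[('p', ('W', 'X')), ('q', ('X',))],
          [('r', ('Y',)), ('q', ('Y',))],
          [('r', ('Z',)), ('s', ('Z',))]]
assert sorted(map(sorted, binding_classes(theory))) == [['W'], ['X', 'Y', 'Z']]
```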
4.2 Domain recursion

We will now introduce the new domain recursion operator, starting with its preconditions.

Definition 6. A theory allows for domain recursion when (i) the theory is shattered, (ii) the theory contains no independent subtheories and (iii) there exists a root binding class.

From now on, we will denote with C the set of clauses of the theory at hand and with B a root binding class, guaranteed to exist if C allows for domain recursion. Lemma 2 states that all variables in B have identical domains. We will denote the domain of these variables with D. The intuition behind the domain recursion operator is that it modifies D by making one element explicit: D = D' \cup {x_D} with x_D \notin D'. This explicit domain element is introduced by the SPLIT_D function, which splits clauses w.r.t. the new subdomain D' and element x_D.

Definition 7 (SPLIT_D). For a clause c and a given set of variables V_c \subseteq vars(c) with domain D, let

SPLIT_D(c, V_c) = { c,                                                       if V_c = \emptyset
                  { SPLIT_D(c_1, V_c \ {V}) \cup SPLIT_D(c_2, V_c \ {V}),    if V_c \neq \emptyset    (7)

where c_1 = c \wedge (V = x_D) and c_2 = c \wedge (V \neq x_D) \wedge (V \in D') for some V \in V_c. For a set of clauses C and a set of variables V with domain D: SPLIT_D(C, V) = \bigcup_{c \in C} SPLIT_D(c, V \cap vars(c)).

The domain recursion operator creates three sets of clauses: SPLIT_D(C, B) = C_x \cup C_v \cup C_r, with

C_x = { c \wedge \bigwedge_{V \in B \cap vars(c)} (V = x_D) | c \in C },    (8)
C_v = { c \wedge \bigwedge_{V \in B \cap vars(c)} (V \neq x_D) \wedge (V \in D') | c \in C },    (9)
C_r = SPLIT_D(C, B) \setminus C_x \setminus C_v.    (10)

Proposition 3. The conjunction of the domain recursion sets is equivalent to the original theory: \bigwedge_{c \in C} c \Leftrightarrow \bigwedge_{c \in SPLIT_D(C, B)} c and therefore \bigwedge_{c \in C} c \Leftrightarrow ( \bigwedge_{c \in C_x} c ) \wedge ( \bigwedge_{c \in C_v} c ) \wedge ( \bigwedge_{c \in C_r} c ).
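Definition 7 transcribes directly into code. A sketch, representing a clause as a pair of literals and constraint strings, a representation of our own choosing:

```python
def split_d(clause, variables, x_d, d_prime):
    """SPLIT_D of Definition 7 (sketch). Recursively splits each variable
    V in `variables` into the V = x_D case and the V != x_D, V in D' case;
    constraints are kept as plain strings for readability."""
    if not variables:
        return [clause]
    v, rest = variables[0], variables[1:]
    literals, constraints = clause
    c1 = (literals, constraints + [f"{v} = {x_d}"])
    c2 = (literals, constraints + [f"{v} != {x_d}", f"{v} in {d_prime}"])
    return split_d(c1, rest, x_d, d_prime) + split_d(c2, rest, x_d, d_prime)

# Splitting the CNF of Equation 3 w.r.t. B = {X, Y} yields 2^2 = 4 clauses;
# (8)-(10) then partition them into C_x (X = Y = x_D), C_v (neither equals
# x_D) and C_r (the two mixed cases).
split = split_d((["~friends(X,Y)", "friends(Y,X)"], []), ["X", "Y"], "xD", "D'")
assert len(split) == 4
```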
We will now show that these sets are independent and that their conjunction is decomposable.

Theorem 4. The theories C_x, C_v and C_r are independent: C_x \perp C_v, C_x \perp C_r and C_v \perp C_r.

The proof of Theorem 4 relies on the following lemma.

Lemma 5. If the theory allows for domain recursion, all clauses and atoms contain the same number of variables from B: \exists n, \forall c \in C, \forall a \in atom(C): |vars(c) \cap B| = |vars(a) \cap B| = n.

Proof. Denote with C_n the clauses in C that contain n logical variables from B and with C_n^c its complement in C. If C is nonempty, there is an n > 0 for which C_n is nonempty. Then every atom in C_n contains exactly n variables from B (Definition 5). Since the theory contains no independent subtheories, there must be an atom a in C_n which unifies with an atom a^c in C_n^c, or C_n^c is empty. After shattering, all unifications bind one variable from a to a single variable from a^c. Because a contains exactly n variables from B, a^c must also contain exactly n (Definition 4), and because B is a root binding class, the clause of a^c also contains exactly n, which contradicts the definition of C_n^c. Therefore, C_n^c is empty, and because the variables in B are root, they also appear in all atoms.

Proof of Theorem 4. From Lemma 5, all atoms in C contain the same number of variables from B. In C_x, these variables are all constrained to be equal to x_D, while in C_v and C_r at least one variable is constrained to be different from x_D. An attempt to unify an atom from C_x with an atom from C_v or C_r therefore creates an unsatisfiable set of constraints. Similarly, atoms from C_v and C_r cannot be unified.

Finally, we extend the FO d-DNNF language proposed in Van den Broeck et al. [10] with a new node, the recursive decomposable conjunction \wedge^r, and define the domain recursion compilation rule.

Definition 8 (\wedge^r). The FO d-DNNF node \wedge^r(n_x, n_r, D, D', V) represents a decomposable conjunction between the d-DNNF nodes n_x, n_r and a d-DNNF node isomorphic to the \wedge^r node itself. In particular, the isomorphic operand is identical to the node itself, except for the size of the domain of the variables in V, which becomes one smaller, going from D to D' in the isomorphic operand.

We have shown that the conjunction between the sets C_x, C_v and C_r is decomposable (Theorem 4) and logically equivalent to the original theory (Proposition 3). Furthermore, C_v is identical to C, up to the constraints on the domain of the variables in B. This leads us to the following definition of domain recursion.

Definition 9 (Domain Recursion). The domain recursion compilation rule compiles C into \wedge^r(n_x, n_r, D, D', B), where n_x, n_r are the compiled circuits for C_x, C_r. The third set C_v is represented by the recursion on D, according to Definition 8.

Figure 3: Circuit for the symmetric relation in Equation 3, rooted in a recursive conjunction \wedge^r. Its operands are the circuit n_x compiled from C_x = { \neg friends(x, x) \vee friends(x, x) }, the circuit n_r compiled from C_r = { \neg friends(x, X) \vee friends(X, x), X \neq x;  \neg friends(X, x) \vee friends(x, X), X \neq x }, and the recursive operand, in which the domain Person shrinks to Person \ {x}.

Example 6. Figure 3 shows the FO d-DNNF circuit for Equation 3. The theory is split up into three independent theories: C_r and C_x, shown in Figure 3, and C_v = { \neg friends(X, Y) \vee friends(Y, X), X \neq x, Y \neq x }. The conjunction of these theories is equivalent to Equation 3. Theory C_v is identical to Equation 3, up to the inequality constraints on X and Y.

Theorem 6. Given a function size, which maps domains to their size, the weighted first-order model count of a \wedge^r(n_x, n_r, D, D', V) node is

wmc(\wedge^r(n_x, n_r, D, D', V), size) = wmc(n_x, size)^{size(D)} \prod_{s=0}^{size(D)-1} wmc(n_r, size \cup {D' \mapsto s}),    (11)

where size \cup {D' \mapsto s} adds to the size function the fact that the subdomain D' has cardinality s.

Proof. If C allows for domain recursion, due to Theorem 4, the weighted model count is

wmc(C, size) = { 1,                                                      if size(D) = 0
               { wmc(C_x) \cdot wmc(C_v, size') \cdot wmc(C_r, size'),   if size(D) > 0    (12)

where size' = size \cup {D' \mapsto size(D) - 1}. Unrolling this recursion yields (11).

Theorem 7. The Independent Partial Grounding compilation rule is a special case of the domain recursion rule, where \forall c \in C: |vars(c) \cap B| = 1 (and therefore C_r = \emptyset).
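As a sanity check of Theorem 6, the recursion (12) can be run for the symmetric relation of Equation 3 with all weights set to 1, so that wmc is a plain model count; the factors below are read off the circuit in Figure 3:

```python
def wmc_symmetric_friends(d):
    """Model count of friends(X, Y) => friends(Y, X) over a domain of
    size d, via recursion (12) with unit weights. The factor 2 is
    wmc(C_x): the clause ~friends(x, x) v friends(x, x) is a tautology
    over one atom. The factor 2**(d-1) is wmc(C_r, size'): for each of
    the d - 1 elements x' in D', exactly 2 of the 4 assignments of
    (friends(x, x'), friends(x', x)) satisfy both implications."""
    if d == 0:
        return 1
    return 2 * wmc_symmetric_friends(d - 1) * 2 ** (d - 1)

# Agrees with the closed form: 2**d choices on the diagonal times
# 2**(d*(d-1)/2) choices over unordered off-diagonal pairs.
for d in range(8):
    assert wmc_symmetric_friends(d) == 2 ** d * 2 ** (d * (d - 1) // 2)
```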
4.3 Completeness

In this section, we introduce a class of models for which first-order knowledge compilation with domain recursion is complete.

Definition 10 (k-WFOMC). The class of k-WFOMC consists of WFOMC theories with clauses that have up to k logical variables.

A first completeness result is for 2-WFOMC, using the set of knowledge compilation rules CR_2, which are the rules in CR_1 extended with domain recursion.

Theorem 8 (Completeness for 2-WFOMC). First-order knowledge compilation using the CR_2 compilation rules is a complete domain lifted probabilistic inference algorithm for 2-WFOMC.

Proof. From Lemma 1, after applying the CR_1 rules, the theory contains only atoms with dimension larger than or equal to two. From Definition 10, each clause has dimension smaller than or equal to two. Therefore, each logical variable in the theory is a root variable and, according to Definition 5, every equivalence class of binding variables is a root binding class. Because of Lemma 1, the theory allows for domain recursion, which requires further compilation of two theories: C_x and C_r into n_x and n_r. Both have dimension smaller than 2 and can be lifted by the CR_1 compilation rules.

The properties of 2-WFOMC are a sufficient but not necessary condition for first-order knowledge compilation to be domain lifted. We can obtain a similar result for MLNs or parfactors by reducing them to a WFOMC problem. If an MLN contains only formulae with up to k logical variables, then its WFOMC representation will be in k-WFOMC.

This result for 2-WFOMC is not trivial. Van den Broeck et al. [10] showed in their experiments that counting first-order variable elimination (C-FOVE) [6] fails to lift the "Friends Smoker Drinker" problem, which is in 2-WFOMC. We will show in the next section that the CR_1 rules fail to lift the theory in Figure 4a, which is in 2-WFOMC. Note that there are also useful theories that are not in 2-WFOMC, such as those containing the transitive relation friends(X, Y) \wedge friends(Y, Z) \Rightarrow friends(X, Z).

5 Empirical evaluation

To complement the theoretical results of the previous section, we extended the WFOMC implementation^1 with the domain recursion rule. We performed experiments with the theory in Figure 4a, which is a version of the friends and smokers model [11] extended with the symmetric relation of Equation 3. We evaluate the performance of querying P(smokes(bob)) with increasing domain size, comparing our approach to the existing WFOMC implementation and its propositional counterpart, which first grounds the theory and then compiles it with the c2d compiler [13] to a propositional d-DNNF circuit. We did not compare to C-FOVE [6] because it cannot perform lifted inference on this model.

Figure 4: Symmetric friends and smokers experiment, comparing propositional knowledge compilation (c2d) to WFOMC using compilation rules CR_1 and CR_2 (which includes domain recursion): (a) the MLN model, consisting of the formula smokes(X) \wedge friends(X, Y) \Rightarrow smokes(Y) with weight 2 and the hard constraint friends(X, Y) \Rightarrow friends(Y, X); (b) evaluation runtime in seconds (log scale, 0.01 to 10000) as a function of the number of people (10 to 80), for c2d, WFOMC with CR_1, and WFOMC with CR_2.

Propositional inference quickly becomes intractable when there are more than 20 people. The lifted inference algorithms scale much better. The CR_1 rules can exploit some regularities in the model. For example, they eliminate all the smokes(X) atoms from the theory. They do, however, resort to grounding at a later stage of the compilation process. With the domain recursion rule, there is no need for grounding. This advantage is clear in the experiments: our approach has an almost constant inference time over this range of domain sizes. Note that the runtimes for c2d include compilation and evaluation of the circuit, whereas the WFOMC runtimes only represent evaluation of the FO d-DNNF. After all, propositional compilation depends on the domain size but first-order compilation does not. First-order compilation takes a constant two seconds for both rule sets.

6 Conclusions

We proposed a definition of complete domain lifted probabilistic inference with respect to classes of probabilistic logic models. This definition considers algorithms to be lifted if they are polynomial in the size of the logical variable domains. Existing first-order knowledge compilation turns out not to admit an intuitive completeness result. Therefore, we generalized the existing Independent Partial Grounding compilation rule to the domain recursion rule.
With this one extra rule, we showed that first-order knowledge compilation is complete for a significant class of probabilistic logic models, where the WFOMC representation has up to two logical variables per clause.

Acknowledgments

The author would like to thank Luc De Raedt, Jesse Davis and the anonymous reviewers for valuable feedback. This work was supported by the Research Foundation-Flanders (FWO-Vlaanderen).

¹ http://dtai.cs.kuleuven.be/wfomc/

References

[1] Lise Getoor and Ben Taskar, editors. An Introduction to Statistical Relational Learning. MIT Press, 2007.
[2] Luc De Raedt, Paolo Frasconi, Kristian Kersting, and Stephen Muggleton, editors. Probabilistic inductive logic programming: theory and applications. Springer-Verlag, Berlin, Heidelberg, 2008.
[3] Daan Fierens, Guy Van den Broeck, Ingo Thon, Bernd Gutmann, and Luc De Raedt. Inference in probabilistic logic programs using weighted CNFs. In Proceedings of UAI, pages 256-265, 2011.
[4] David Poole. First-order probabilistic inference. In Proceedings of IJCAI, pages 985-991, 2003.
[5] Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In Proceedings of IJCAI, pages 1319-1325, 2005.
[6] Brian Milch, Luke S. Zettlemoyer, Kristian Kersting, Michael Haimes, and Leslie Pack Kaelbling. Lifted probabilistic inference with counting formulas. In Proceedings of AAAI, pages 1062-1068, 2008.
[7] Vibhav Gogate and Pedro Domingos. Exploiting logical structure in lifted probabilistic inference. In Proceedings of StarAI, 2010.
[8] Abhay Jha, Vibhav Gogate, Alexandra Meliou, and Dan Suciu. Lifted inference seen from the other side: The tractable features. In Proceedings of NIPS, 2010.
[9] Vibhav Gogate and Pedro Domingos. Probabilistic theorem proving. In Proceedings of UAI, pages 256-265, 2011.
[10] Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proceedings of IJCAI, pages 2178-2185, 2011.
[11] Parag Singla and Pedro Domingos. Lifted first-order belief propagation. In Proceedings of AAAI, pages 1094-1099, 2008.
[12] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1):107-136, 2006.
[13] Adnan Darwiche. New advances in compiling CNF to decomposable negation normal form. In Proceedings of ECAI, pages 328-332, 2004.
Data Skeletonization via Reeb Graphs

Xiaoyin Ge   Issam Safa   Mikhail Belkin   Yusu Wang
Computer Science and Engineering Department
The Ohio State University
gex,safa,mbelkin,[email protected]

Abstract

Recovering hidden structure from complex and noisy non-linear data is one of the most fundamental problems in machine learning and statistical inference. While such data is often high-dimensional, it is of interest to approximate it with a low-dimensional or even one-dimensional space, since many important aspects of data are often intrinsically low-dimensional. Furthermore, there are many scenarios where the underlying structure is graph-like, e.g., river/road networks or various trajectories. In this paper, we develop a framework to extract, as well as to simplify, a one-dimensional "skeleton" from unorganized data using the Reeb graph. Our algorithm is very simple, does not require complex optimizations, and can be easily applied to unorganized high-dimensional data such as point clouds or proximity graphs. It can also represent arbitrary graph structures in the data. We also give theoretical results to justify our method. We provide a number of experiments to demonstrate the effectiveness and generality of our algorithm, including comparisons to existing methods, such as principal curves. We believe that the simplicity and practicality of our algorithm will help to promote skeleton graphs as a data analysis tool for a broad range of applications.

1 Introduction

Learning or inferring a hidden structure from discrete samples is a fundamental problem in data analysis, ubiquitous in a broad range of application fields. With the rapid generation of diverse data all across science and engineering, extracting geometric structure is often a crucial first step towards interpreting the data at hand, as well as the underlying process or phenomenon. Recently, there has been a large amount of research in this direction, especially in the machine learning community. In this paper, we consider a simple but important scenario, where the hidden space has a graph-like geometric structure, such as the branching filamentary structures formed by blood vessels. Our goal is to extract such structures from points sampled on and around them. Graph-like geometric structures arise naturally in many fields, both in modeling natural phenomena and in understanding abstract procedures and simulations. However, there has been only limited work on obtaining a general-purpose algorithm to automatically extract skeleton graph structures [2]. In this paper, we present such an algorithm by bringing in a topological concept called the Reeb graph to extract skeleton graphs. Our algorithm is simple, efficient and easy to use. We demonstrate the generality and effectiveness of our algorithm via several applications in both low and high dimensions.

Motivation. Geometric graphs are the underlying structures for modeling many natural phenomena, from river/road networks and root systems of trees to blood vessels and particle trajectories. For example, if we are interested in obtaining the road network of a city, we may send out cars to explore various streets of the city, with each car recording its position using a GPS device. The resulting data is a set of potentially noisy points sampled from the roads in a city. Given these data, the goal is to automatically reconstruct the road network, which is a graph embedded in a two-dimensional space. Indeed, abundant data of this type are available at the open-streets project website [1].
Geometric graphs also arise from many modeling processes, such as molecular simulations. They can sometimes provide a natural platform to study a collection of time-series data, where each time series corresponds to a trajectory in the feature space. These trajectories converge and diverge, which can be represented by a graph. This graph in turn can then be used as a starting point for further processing (such as matching) or inference tasks. Generally, there are a number of scenarios where we wish to extract a one-dimensional skeleton from an input space. The goal in this paper is to develop, as well as to demonstrate the use of, a practical and general algorithm to extract a graph structure from input data of any dimension.

New work. Given a set of points P sampling a hidden domain X, we present a simple and practical algorithm to extract a skeleton graph G for X. The input points P do not have to be embedded: we only need their distance matrix or simply a proximity graph as input to our algorithm. Our algorithm is based on using the so-called Reeb graph to model skeleton graphs. Given a continuous function f : X → ℝ, the Reeb graph tracks the connected components in the level set f⁻¹(a) of f as we vary the value a. It provides a meaningful abstraction of the scalar field f and has been widely used in graphics, visualization, and computer vision (see [6] for a survey). However, it has not yet been applied as a tool for analyzing high-dimensional unorganized input data. By bringing the concept of the Reeb graph to machine learning applications, we can leverage the recent algorithms developed to compute and process Reeb graphs [15, 9]. Moreover, combining the Reeb graph with the so-called Rips complex allows us to obtain theoretical guarantees for our algorithm.

Our algorithm is simple and efficient. There is only one parameter involved, which intuitively specifies the scale at which we look at the data. Our algorithm always outputs a graph G given data. Furthermore, it also computes a map φ : P → G, which maps each sample point to G. Hence we can decompose the input data into sets, each corresponding to a single branch in the skeleton graph. Finally, there is a canonical way to measure the importance of features in the Reeb graph, which allows us to easily simplify the resulting graph. We summarize our contributions as follows: (1) We bring Reeb graphs to the learning community for analyzing high-dimensional unorganized data sets. We have developed accompanying software to not only extract but also process skeleton graphs from data. Our algorithm is simple and robust, always extracting a graph from the input. Our algorithm complements principal curve algorithms and can be used in combination with them. (2) We provide certain theoretical guarantees for our algorithm. We also demonstrate both the effectiveness of our software and the usefulness of skeleton graphs via a set of experiments on diverse datasets. Experimental results show that, despite being simple and general, our algorithm compares favorably to existing graph-extraction algorithms in various settings.

Related work. At a broad level, the graph-extraction problem is related to manifold learning and non-linear dimensionality reduction, which has a rich literature; see e.g. [4, 24, 25, 27]. Manifold learning methods typically assume that the hidden domain has a manifold structure.
An even more general scenario is that the hidden domain is a stratified space, which, intuitively, can be thought of as a collection of manifolds (strata) glued together. Recently, there have been several approaches to learn stratified spaces [5, 14]. However, this general problem is hard and requires algorithms that are both mathematically sophisticated and computationally intensive. In this case, we aim to learn a graph structure, which is simply a one-dimensional stratified space, allowing for simple approaches.

The most relevant previous work related to our graph-extraction problem is a series of results on an elegant concept of principal curves, originally proposed by Hastie and Stuetzle [16, 17]. Intuitively, principal curves are "self-consistent" curves that pass through the middle of the data. Since its original introduction, there has been much work on analyzing and extending the concept and algorithms, as well as on numerous applications. See, e.g., [7, 11, 10, 19, 22, 26, 28, 29] among many others. Below we discuss the results most relevant to the current work.

Original principal curves are simple smooth curves with no self-intersections. In [19], Kégl et al. represented principal curves as polygonal lines and proposed a regularized version of principal curves. They gave a practical algorithm to compute such a polygonal principal curve. This algorithm was later extended in [18] into a principal graph algorithm to compute the skeleton graph of hand-written digits and characters. To the best of our knowledge, this was the first algorithm to explicitly allow self-intersections in the output principal curves. However, this principal graph algorithm could only handle 2D images. Very recently, in [22], Ozertem and Erdogmus proposed a new definition for the principal curve associated to a probability density function. Intuitively, imagining the probability density function as a terrain, their principal curves are the mountain ridges. A rigorous definition can be made in terms of the Hessian of the probability density. Their approach has several nice properties, including connections to the popular mean-shift clustering algorithm. It also allows for certain bifurcations and self-intersections. However, the output of the algorithm is only a collection of points, with neither connectivity information nor the information about which points are junction points (graph nodes) and which points belong to the same arc in the principal graph. Furthermore, the algorithm depends on reliable density estimation from the input data, which is a challenging task for high-dimensional data.

Aanjaneya et al. [2] recently proposed perhaps the first general algorithm to approximate a hidden metric graph from an input graph with theoretical guarantees. While the goal of [2] is to approximate a metric graph, their algorithm can also be used to skeletonize data. The algorithm relies on inspecting the local neighborhood of each point to first classify whether it should be a "branching point" or an "edge point". Although this approach has theoretical guarantees when the sampling is nice and the parameters are chosen correctly, it is often hard to find suitable parameters in practice, and such local decisions tend to be less reliable when the input data are not as nice (such as around a "fat" junction region). In the section on experimental results we show that our algorithm tends to be more robust in practical applications.
Finally, we note that the concept of the Reeb graph has been used in a number of applications in graphics, visualization, and computer vision (see [6] for a survey). However, it has typically been used with mesh structures rather than as a tool for analyzing unorganized point cloud data, especially in high dimensions, where constructing meshes is prohibitively expensive. An exception is the very recent work [20], where the authors propose to use the Reeb graph for point cloud data and show applications for several data sets, still in 3D. The advantage of our approach is that it is based on the Rips complex, which allows for a general and cleaner Reeb graph reconstruction algorithm with theoretical justification (see [9, 15] and Theorem 3.1).

2 Reeb Graphs

We now give a very brief description of the Reeb graph; see Section VI.4 of [12] for a more formal discussion. Let f : X → ℝ be a continuous function defined on a domain X. For each scalar value a ∈ ℝ, the level set f⁻¹(a) = {x ∈ X | f(x) = a} may have multiple connected components. The Reeb graph of f, denoted by Rf(X), is obtained by continuously identifying every connected component in a level set to a single point. In other words, Rf(X) is the image of a continuous surjective map φ : X → Rf(X), where φ(x) = φ(y) if and only if x and y come from the same connected component of a level set of f. Intuitively, as the value a increases, connected components in the level set f⁻¹(a) appear, disappear, split and merge, and the Reeb graph of f tracks such changes. The Reeb graph is an abstract graph. Its nodes indicate changes in the connected components in level sets, and each arc represents the evolution of a connected component before it is merged, killed, or split. See the right figure for an example, where we show (an embedding of) the Reeb graph of the height function f defined on a topological torus. The Reeb graph Rf(X) provides a simple yet meaningful abstraction of the input domain X w.r.t. the function f. [Figure: a torus X with height function f and its Reeb graph Rf(X); two points x, y in the same level-set component have φ(x) = φ(y).]

Computation in the discrete setting. Assume the input domain is modeled by a simplicial complex K. Specifically, a k-dimensional simplex σ is simply the convex combination of k + 1 independent points {v0, ..., vk}, and any simplex formed by a subset of its vertices is called a face of σ. A simplicial complex K is a collection of simplices with the property that if a simplex σ is in K, then any face of it is also in K. A piecewise-linear (PL) function f defined on K is a function with values given at vertices of K and linearly interpolated within each simplex in K. Given a PL function f on K, its Reeb graph Rf(K) is determined by all the 0-, 1- and 2-simplices of K, which are the vertices, edges, and triangles of K. Hence from now on we use only 2-dimensional simplicial complexes.

Given a PL function defined on a simplicial complex domain K, its Reeb graph can be computed efficiently in O(n log n) expected time by a simple randomized algorithm [15], where n is the size of K. In fact, the algorithm outputs the so-called augmented Reeb graph R, which contains the image of all vertices in K under the surjection map φ : K → R introduced earlier. See the figure on the right: the Reeb graph (middle) is an abstract graph with four nodes, while the augmented Reeb graph (on the right) also shows the image of all vertices (i.e., the points p̄ᵢ). From the augmented Reeb graph R, we can easily extract the junction points (graph nodes), the set of points from the input data that should be mapped to each graph arc, and the connectivity between these points along the Reeb graph (e.g., p̄₁, p̄₄, p̄₇ form one arc between p̄₁ and p̄₇).
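To give a feel for the level-set bookkeeping behind this definition, here is a small self-contained Python sketch; it is not the randomized algorithm of [15], but a simplified slab-based approximation in the spirit of the quotient map φ (and of Mapper-style constructions), and all names in it are invented for illustration.

    def slab_reeb_sketch(f, edges, n_slabs):
        """Approximate Reeb-graph structure of a function on a graph.

        f       -- dict: vertex -> function value
        edges   -- list of (u, v) vertex pairs (the 1-skeleton of K)
        n_slabs -- number of equal-width intervals covering the range of f
        """
        lo, hi = min(f.values()), max(f.values())
        width = (hi - lo) / n_slabs or 1.0  # guard against constant f
        slab = {v: min(int((f[v] - lo) / width), n_slabs - 1) for v in f}

        # Union-find over edges whose endpoints lie in the same slab:
        # this identifies points of the same level-set component.
        parent = {v: v for v in f}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in edges:
            if slab[u] == slab[v]:
                parent[find(u)] = find(v)

        node = {v: (slab[v], find(v)) for v in f}   # quotient map, like phi
        arcs = {(node[u], node[v]) for u, v in edges if node[u] != node[v]}
        return set(node.values()), arcs

Vertices in the same slab and the same connected component collapse to a single node, and edges crossing slab boundaries become arcs, mimicking how the quotient identifies components of nearby level sets.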
3 Method

3.1 Basic algorithm

Step 1: Set up complex K. The input data we consider can be a set of points sampled from a hidden domain or a probabilistic distribution, or it can be the distance matrix, or simply the proximity graph, among a set of points. (So the input points do not have to be embedded.) Our goal is to compute (possibly an embedding of) a skeleton graph from the input data. First, we construct an appropriate space approximating the hidden domain that the input points are sampled from. We use a simplicial complex K to model such a space. Specifically, given input sampled points P and the distance matrix of P, we first construct a proximity graph based on either r-neighborhood or k-nearest neighbor (NN) information; that is, a point p ∈ P is connected either to all its neighbors within distance r of p, or to its k-NNs. We add all points in P and all edges from this proximity graph to the simplicial complex K we are building. Next, for any three vertices p1, p2, p3 ∈ P, if they are pairwise connected in the proximity graph, we add the triangle p1p2p3 to K. Note that if the proximity graph is already given as the input, then we simply fill in a triangle whenever all its three edges are in the proximity graph to obtain the target simplicial complex K. We remark that there is only one parameter involved in the basic algorithm, which is the parameter r (if we use r-neighborhoods) or k (if we use k-NNs) to specify the scale at which we look at the input data.

Motivation behind this construction. If the proximity graph is built based on r-neighborhoods, then the above construction is simply that of the so-called Vietoris-Rips complex, which has been widely used in the manifold reconstruction (especially surface reconstruction) community to recover the hidden domain from its point samples. Intuitively, imagine that we grow a ball of radius r around each sample point. The union of these balls roughly captures the hidden domain at scale r. On the other hand, the topological structure of the union of these balls is captured by the so-called Čech complex, which mathematically is the nerve of this union of balls. Hence the Čech complex captures the topology of the hidden domain when the sampling is reasonable (see e.g. [8, 21]). However, the Čech complex is hard to compute, and the Vietoris-Rips complex is a practical approximation of the Čech complex that is much easier to construct. Furthermore, it has been shown that the Reeb graph of a hidden manifold can be approximated with theoretical guarantees from the Rips complex [9].

Step 2: Reeb graph computation. Now we have a simplicial complex K that approximates the hidden domain. In order to extract the skeleton graph using the Reeb graph, we need to define a function g on K that respects its shape. It is also desirable that this function is intrinsic, given that input points may not be embedded. To this end, we construct the function g as the geodesic distance in K to a certain base point b ∈ K. We compute the base point by taking an arbitrary point v ∈ K and choosing b as the point furthest away from v. Intuitively, this base point is an extreme point. If the underlying domain indeed has a branching filamentary structure, then the geodesic distance to b tends to progress along each filament, and branch out at junction points. See the right figure for an example, where the thin curves are level sets of the geodesic distance function to the base point b.
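As a concrete illustration of Steps 1 and 2, the following sketch (illustrative code, not the authors' released software; the helper names are invented) builds the r-neighborhood proximity graph with its filled triangles, and computes g with two Dijkstra sweeps: one to find the base point b, and one to compute the geodesic distances to it.

    import heapq
    from itertools import combinations

    def rips_complex(points, dist, r):
        """Step 1: r-neighborhood proximity graph plus all triangles whose
        three edges are present (a 2-dimensional Vietoris-Rips complex)."""
        n = len(points)
        edges = {(i, j) for i, j in combinations(range(n), 2)
                 if dist(points[i], points[j]) <= r}
        nbrs = {i: set() for i in range(n)}
        for i, j in edges:
            nbrs[i].add(j); nbrs[j].add(i)
        triangles = {(i, j, k) for (i, j) in edges
                     for k in (nbrs[i] & nbrs[j]) if k > j}
        return edges, triangles

    def geodesic_function(n, edges, weight):
        """Step 2: choose the base point b as the vertex furthest from an
        arbitrary start, then return g(x) = shortest-path distance to b
        in the proximity graph (assumes a connected graph)."""
        adj = {i: [] for i in range(n)}
        for i, j in edges:
            w = weight(i, j)
            adj[i].append((j, w)); adj[j].append((i, w))
        def dijkstra(src):
            d = [float("inf")] * n
            d[src] = 0.0
            pq = [(0.0, src)]
            while pq:
                du, u = heapq.heappop(pq)
                if du > d[u]:
                    continue
                for v, w in adj[u]:
                    if du + w < d[v]:
                        d[v] = du + w
                        heapq.heappush(pq, (d[v], v))
            return d
        d0 = dijkstra(0)                         # arbitrary start vertex v
        b = max(range(n), key=lambda i: d0[i])   # extreme point: base b
        return b, dijkstra(b)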
Figure 1: Overview of the algorithm. The input points are light (yellow) shades beneath dark curves. (Left): the augmented Reeb graph output by our algorithm. (Center): after iterative smoothing. (Right): final output after repairing missing links (e.g., top box) and simplification (lower box).

Since the Reeb graph tracks the evolution of the connected components in the level sets, a branching (splitting in the level set) will happen when the level set passes through point v. In our algorithm, the geodesic distance function g to b in K is approximated by the shortest distance in the proximity graph (i.e., the set of edges in K) to b. We then perform the algorithm from [15] to compute the Reeb graph of K with respect to g, and denote the resulting Reeb graph as R. Recall that this algorithm in fact outputs the augmented Reeb graph R. Hence we not only obtain a graph structure, but also the set of input points (together with their connectivity) that are mapped to every graph arc in this graph structure.

Time complexity. The time complexity of the basic algorithm is the sum of the time to compute (A) the proximity graph, (B) the complex K from the proximity graph, (C) the geodesic distance, and (D) the Reeb graph. (A) is O(n²) for high-dimensional data (and can be made near-linear for data in very low dimensions), where n is the number of input points. (B) is O(k³n) if each point takes k neighbors. (C) and (D) take time O(m log n) = O(k³n log n), where m is the size of K. Hence overall, the time complexity is O(n² + k³n log n). For high-dimensional data sets, this is dominated by the O(n²) computation of the proximity graph.

Theoretical guarantees. Given a domain X and a function f : X → ℝ defined on it, the topology (i.e., the number of independent loops) of the Reeb graph Rf(X) may not reflect that of the given domain X. However, in our case, we have the following result, which offers a partial theoretical guarantee for the basic algorithm. Intuitively, the theorem states that if the hidden space is a graph G, and if our simplicial complex K approximates G both in terms of topology (as captured by homotopy equivalence) and metric (as captured by the ε-approximation), then the Reeb graph captures all loops in G. Below, d_Y(·,·) denotes the geodesic distance in domain Y.

Theorem 3.1 Suppose K is homotopy equivalent to a graph G, and h : K → G is the corresponding homotopy. Assume that the metric is ε-approximated under h; that is, |d_K(x, y) − d_G(h(x), h(y))| ≤ ε for any x, y ∈ K. Let R be the Reeb graph of K w.r.t. the geodesic distance function to an arbitrary base point b ∈ K. If ε < l/4, where l is the length of the shortest arc in G, then there is a one-to-one correspondence between loops in R and loops in G.

The proof can be found in the full version [13]. It relies on results and observations from [9]. The above result can be made even stronger: (i) There is not only a one-to-one correspondence between loops in R and in G; the ranges of each pair of corresponding loops are also close. Here, the range of a loop γ w.r.t. a function f is the interval [min_{x∈γ} f(x), max_{x∈γ} f(x)]. (ii) The condition ε < l/4 can be relaxed. Furthermore, even when ε does not satisfy this condition, the reconstructed Reeb graph R can still preserve all loops in G whose range is larger than 2ε.
3.2 Embedding and Visualization

The Reeb graph is an abstract graph. To visualize the skeleton graph, we need to embed it in a reasonable way that reflects the geometry of the hidden domain. To this end, if points are not already embedded in 2D or 3D, we project the input points P to ℝ³ using any standard dimensionality reduction algorithm. We then connect projected points based on their connectivity given in the augmented Reeb graph R. Each arc of the Reeb graph is now embedded as a polygonal curve. To further improve the quality of this curve, we fix its endpoints and iteratively smooth it by repeatedly assigning a point's position to be the average of its neighbors' positions. See Figure 1 for an example.

3.3 Further Post-processing

In practice, data can be noisy, and there may be spurious branches or loops in the constructed Reeb graph R, no matter how we choose the parameter r or k to decide the scale. Following [3], there is a natural way to define "features" in a Reeb graph and measure their "importance". Specifically, given a function f : X → ℝ, imagine we plot its Reeb graph Rf(X) such that the height of each point z ∈ Rf(X) is the function value of all those points in X mapped to z. Now we sweep the Reeb graph bottom-up in increasing order of the function values. As we sweep through a point z, we inspect what happens to the part of the Reeb graph that we have already swept, denoted by R_f^z := {w ∈ Rf(X) | f(w) ≤ f(z)}. When we sweep past a down-fork saddle s, there are two possibilities:

(i) The two branches merged by s belong to different connected components, say C1 and C2, in R_f^s. In this case, we have a branch-feature, where two disjoint lower branches in R_f^s are merged at s. The importance of this feature is the smaller height of the lower branches being merged. Intuitively, this is the amount we have to perturb the function f in order to remove this branch-feature. See the right figure, where the height h of C2 is the importance of this branch-feature.

(ii) The two branches merged by s are already connected below s in R_f^s. In this case, when s connects them again, we create a family of new loops. This is called a loop-feature. Its size is measured as the smallest height of any loop formed by s in R_f^s, where the height of a loop γ is defined as max_{z∈γ} f(z) − min_{z∈γ} f(z). See the right figure, where the dashed loop γ is the thinnest loop created by s.

Now if we sweep Rf(X) top-down, we also obtain branch-features and loop-features captured by up-fork saddles in a symmetric manner. It turns out that these features (and their sizes) correspond to the so-called extended persistence of the Reeb graph Rf(X) with respect to the function f [12]. The size of each feature is called its persistence, as it indicates how persistent this feature is as we perturb the function f. These features and their persistence can be computed in O(n log² n) time, where n is the number of nodes and arcs in the Reeb graph [3]. We can now simplify the Reeb graph by merging features whose persistence value is smaller than a given threshold. This simplification step not only removes noise, but can also be used as a way to look at features at larger scales.

Finally, there may also be missing data causing missing links in the constructed skeleton graph. Hence in post-processing the user can also choose to first fill some missing links before the simplification step. This is achieved by connecting pairs of degree-1 nodes (x, y) in the Reeb graph whose distance d(x, y) is smaller than a certain distance threshold. Here d(x, y) is the input distance between x and y (if the input points are embedded or the distance matrix is given), not the distance in the simplicial complex K constructed by our algorithm. Connecting x and y may either connect two disjoint components in the Reeb graph, thus creating new branch-features, or form new loop-features. See Figure 1. We do not check the size of the new features created when connecting pairs of vertices; small newly-created features will be removed in the subsequent simplification step.
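The branch-feature half of this sweep is essentially zero-dimensional persistence and can be sketched with a single union-find pass. The code below is illustrative (it is not the O(n log² n) algorithm of [3], and all names are invented); it records, at each merge of previously disjoint components, the height of the younger branch as its importance.

    def branch_feature_persistence(f, arcs):
        """Bottom-up sweep: merge Reeb-graph nodes in increasing f-order
        and record, at every merge of two disjoint components, the height
        of the younger branch -- the importance of that branch-feature.

        f    -- dict: node -> function value
        arcs -- list of (u, v) node pairs of the Reeb graph
        """
        parent, birth = {}, {}
        def find(u):
            while parent[u] != u:
                parent[u] = parent[parent[u]]
                u = parent[u]
            return u

        incident = {u: [] for u in f}
        for u, v in arcs:
            incident[u].append(v)
            incident[v].append(u)

        importances = []
        for u in sorted(f, key=f.get):
            parent[u] = u
            birth[u] = f[u]                 # a new branch is born at u
            for v in incident[u]:
                if v not in parent:         # v not yet swept
                    continue
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue                # a loop-feature; not handled here
                # the component with the higher (younger) birth value dies
                young, old = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
                importances.append(f[u] - birth[young])
                parent[young] = old
        return importances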
4 Experimental Results

In this section we first provide comparisons of our algorithm to three existing methods. We then present three sets of experiments to demonstrate the effectiveness of our software and show potential applications of skeleton graph extraction for data analysis.

Experimental Comparisons. We compare our approach with three existing comparable algorithms: (1) the principal graph algorithm (PGA) [18]; (2) the local-density principal curve algorithm (LDPC) [22]; and (3) the metric-graph reconstruction algorithm (MGR) [2]. Note that PGA only works for 2D images. LDPC only outputs a point cloud at the center of the input data, with no connectivity information.
In edge detection from images, it is often easy to identify (disconnected) points potentially lying on an edge. We can then use our Reeb-graph algorithm to connect them into curves. See Figure 1 and 2 (a), (b) for some examples. The yellow (light) shades are input potential edge-points computed by a standard edge-detection algorithm based on Roberts edge filter. Original images are given in the full version [13]. In Figure 2 (c), we are given a set of points sampled from a hidden surface model (gray points), and the goal is to extract (sharp) feature curves from it automatically. We first identify points lying around sharp feature lines using a local differential operator (yellow points) and apply our algorithm to connect them into feature lines/graphs (dark curves). 1 2 Larger ? for kernel density estimation fixes that problem but causes important features to disappear. Tuning the parameters of MGR does not seem to help, see the full version [13] for details. 7 Example 2: Speech data. The input speech data contains utterances of single digits by different speakers. Each utterance is sampled every 10msec with a moving frame. Each sample is represented by the first 13 coefficients resulting from running the standard Perceptual Linear Prediction (PLP) algorithm on the wave file. Given this setup, each utterance of a digit is a trajectory in this 13D feature space. In the left panel, we show the trajectory of an utterance of digit ?1? projected to IR3 . The right panel shows the graph reconstructed by our algorithm by treating the input simply as a set of points (i.e, removing the time sequence information). No postprocessing is performed. Note the main portion of the utterance (the large loop) is well-reconstructed. The cluster of points in the right side corresponds to sampling of silence at the beginning and end of the utterance. This indicates that our algorithm can potentially be used to automatically reconstruct trajectories when the time information is lost. 1 1 0.8 0.8 0.6 0.6 0.4 0.4 0.2 0.2 0 0 ?0.2 ?0.2 ?0.4 ?0.4 1 1 0.5 0.5 0 0 ?0.5 ?5.5 ?5 ?4.5 ?4 ?3.5 ?3 ?2.5 ?0.5 ?2 ?5.5 ?5 ?4.5 ?4 ?3.5 ?3 ?2.5 ?2 Next, we combine three utterances of the digit ?1? and construct the graph from the resulting point cloud shown in the left panel. Each color represents the point cloud coming from one utterance of ?1?. As shown in the right panel, the graph reconstructed by our algorithm automatically aligns these three utterances (curves) in the feature space: well-aligned subcurves are merged into single pieces along the graph skeleton, while divergent portions will appear as branches and loops in the graph (see the loops on the left-side of this picture). We expect that our methods can be used to produce a summary representation for multiple similar trajectories (low and high-dimensional curves), to both align trajectories with no time information and to discover convergent and divergent portions of the trajectories. 1.2 1.2 1 1 0.8 0.8 0.6 0.6 0.4 0.4 0.2 0.2 0 0 ?0.2 ?0.2 ?0.4 ?0.4 1 1 0.5 0.5 0 0 ?0.5 ?0.5 ?1 ?5.5 ?5 ?4.5 ?4 ?3.5 ?3 ?2.5 ?2 ?1 ?5.5 ?5 ?4.5 ?4 ?3.5 ?3 ?2.5 ?2 Example 3: Molecular simulation. The input is a molecular simulation data using the replicaexchange molecular dynamics method [23]. It contains 250K protein conformations, generated by 20 simulation runs, each of which produces a trajectory in the protein conformational space. The figure on the right shows a 3D-projection of the Reeb graph constructed by our algorithm. 
Interestingly, filaments structures can be seen at the beginning of the simulation, which indicates the 20 trajectories at high energy level. As the simulation proceeds, these different simulation runs converge and about 40% of the data points are concentrated in the oval on the right of the figure, which correspond to low-energy conformations. Ideally, simulations at low energy should provide a good sampling in the protein conformational space around the native structure of this protein. However, it turns out that there are several large loops in the Reeb graph close to the native structure (the conformation with lowest energy). Such loop features could be of interest for further investigation. Combining with principal curve algorithms. Finally, our algorithm can be used in combination with principal curve algorithms. In particular, one way is to use our algorithm to first decompose the input data into different arcs of a graph structure, and then use a principal curve algorithm to compute an embedding of this arc in the center of points contributing to it. Alternatively, we can first use the LDPC algorithm [22] to move points to the center of the data, and then perform our algorithm to connect them into a graph structure. Some preliminary results on such combination applied to the hand-written Chinese character can be found in the full version [13]. Acknowledgments. The authors thank D. Chen and U. Ozertem for kindly providing their software and for help with using the software. This work was in part supported by the NSF under CCF-0747082, CCF-1048983, CCF-1116258, IIS-1117707, IIS-0643916. 8 References [1] Open street map. http://www.openstreetmap.org/. [2] M. Aanjaneya, F. Chazal, D. Chen, M. Glisse, L. Guibas, and D. Morozov. Metric graph reconstruction from noisy data. In Proc. 27th Sympos. Comput. Geom., 2011. [3] P. K. Agarwal, H. Edelsbrunner, J. Harer, and Y. Wang. Extreme elevation on a 2-manifold. Discrete and Computational Geometry (DCG), 36(4):553?572, 2006. [4] M. Belkin and P. Niyogi. Laplacian Eigenmaps for dimensionality reduction and data representation. Neural Comp, 15(6):1373?1396, 2003. [5] P. Bendich, B. Wang, and S. Mukherjee. Local homology transfer and stratification learning. In ACMSIAM Sympos. Discrete Alg., 2012. To appear. [6] S. Biasotti, D. Giorgi, M. Spagnuolo, and B. Falcidieno. Reeb graphs for shape analysis and applications. Theor. Comput. Sci., 392:5?22, February 2008. [7] K. Chang and J. Grosh. A unified model for probabilistic principal surfaces. IEEE Trans. Pattern Anal. Machine Intell., 24(1):59?64, 2002. [8] F. Chazal, D. Cohen-Steiner, and A. Lieutier. A sampling theory for compact sets in Euclidean space. Discrete Comput. Geom., 41(3):461?479, 2009. [9] T. K. Dey and Y. Wang. Reeb graphs: Approximation and persistence. In Proc. 27th Sympos. Comput. Geom., pages 226?235, 2011. [10] D Dong and T. J Mcavoy. Nonlinear principal component analysis based on principal curves and neural networks. Computers & Chemical Engineering, 20:65?78, 1996. [11] T. Duchamp and W. Stuetzle. Extremal properties of principal curves in the plane. The Annals of Statistics, 24(4):1511?1520, 1996. [12] H. Edelsbrunner and J. Harer. Computational Topology, An Introduction. Amer. Math. Society, 2010. [13] X. Ge, I. Safa, M. Belkin, and Y. Wang. Data skeletonization via Reeb graphs, 2011. Full version at www.cse.ohio-state.edu/?yusu. [14] G. Haro, G. Randall, and G. Sapiro. Translated poisson mixture model for stratification learning. 
[15] W. Harvey, Y. Wang, and R. Wenger. A randomized O(m log m) time algorithm for computing Reeb graphs of arbitrary simplicial complexes. In Proc. 26th Sympos. Comput. Geom., pages 267-276, 2010.
[16] T. J. Hastie. Principal curves and surfaces. PhD thesis, Stanford University, 1984.
[17] T. J. Hastie and W. Stuetzle. Principal curves. Journal of the American Statistical Association, 84(406):502-516, 1989.
[18] B. Kégl and A. Krzyżak. Piecewise linear skeletonization using principal curves. IEEE Trans. Pattern Anal. Machine Intell., 24:59-74, January 2002.
[19] B. Kégl, A. Krzyżak, T. Linder, and K. Zeger. Learning and design of principal curves. IEEE Trans. Pattern Anal. Machine Intell., 22:281-297, 2000.
[20] M. Natali, S. Biasotti, G. Patanè, and B. Falcidieno. Graph-based representations of point clouds. Graphical Models, 73(5):151-164, 2011.
[21] P. Niyogi, S. Smale, and S. Weinberger. Finding the homology of submanifolds with high confidence from random samples. Discrete Comput. Geom., 39(1-3):419-441, 2008.
[22] U. Ozertem and D. Erdogmus. Locally defined principal curves and surfaces. Journal of Machine Learning Research, 12:1249-1286, 2011.
[23] I.-H. Park and C. Li. Dynamic ligand-induced-fit simulation via enhanced conformational samplings and ensemble dockings: A survivin example. J. Phys. Chem. B, 114:5144-5153, 2010.
[24] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, 2000.
[25] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 2000.
[26] Derek Stanford and Adrian E. Raftery. Finding curvilinear features in spatial point patterns: Principal curve clustering with noise. IEEE Trans. Pattern Anal. Machine Intell., 22(6):601-609, 2000.
[27] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, 2000.
[28] R. Tibshirani. Principal curves revisited. Statistics and Computing, 2:183-190, 1992.
[29] J. J. Verbeek, N. Vlassis, and B. Kröse. A k-segments algorithm for finding principal curves. Pattern Recognition Letters, 23(8):1009-1017, 2002.
Differentially Private M-Estimators

Jing Lei
Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]

Abstract

This paper studies privacy-preserving M-estimators using perturbed histograms. The proposed approach allows the release of a wide class of M-estimators with both differential privacy and statistical utility without knowing a priori the particular inference procedure. The performance of the proposed method is demonstrated through a careful study of the convergence rates. A practical algorithm is given and applied on a real-world data set containing both continuous and categorical variables.

1 Introduction

Privacy-preserving data analysis has received increasing attention in recent years. Among various notions of privacy, differential privacy [1, 2] provides a mathematically rigorous privacy guarantee and protects against essentially all kinds of identity attacks regardless of the auxiliary information that may be available to the attackers. Differential privacy requires that the presence or absence of any individual data record can never greatly change the outcome, and hence the user can hardly learn much about any individual data record from the output. However, designing differentially private statistical inference procedures has been a challenging problem. Differential privacy protects individual data by introducing uncertainty in the outcome, which generally requires the output of any inference procedure to be random even for a fixed input data set. This makes differentially private statistical analysis different from most traditional statistical inference procedures, which are deterministic once the data set is given. Most existing works [3, 4, 5] focus on the interactive data release, where a particular statistical inference problem is chosen a priori and the randomized output for that particular inference is released to the users. In reality, a data release that allows multiple inference procedures is often desired, because real-world statistical analyses usually consist of a series of inferences such as exploratory analysis, model fitting, and model selection, where the exact inference problem in a later stage is determined by the results of previous steps and cannot be determined in advance.

In this work we study M-estimators under a differentially private framework. The proposed method uses perturbed histograms to provide a systematic way of releasing a class of M-estimators in a non-interactive fashion. Such a non-interactive method uses randomization independent of any particular inference procedure; therefore it allows the users to apply different inference procedures on the same synthetic data set without additional privacy compromise. The accuracy of these privacy-preserving estimates has also been studied, and we prove that, under mild conditions on the contrast functions of the M-estimators, the proposed differentially private M-estimators are consistent. As a special case, this approach gives 1/√n-consistent estimates for quantiles, providing a simple and efficient alternative solution to similar problems considered in [4, 5]. Our main condition requires convexity and bounded partial derivatives of the contrast function. The convexity is used to ensure the existence and stability of the M-estimator, whereas the bounded derivative controls the bias caused by the perturbed histogram. In classical theory of M-estimators, a contrast function with bounded derivative implies robustness of the corresponding M-estimator.
This is further evidence of the natural connection between robustness and differential privacy [4]. We also describe an algorithm that is conceptually simple and computationally feasible. It is flexible enough to accommodate continuous, ordinal, and categorical variables at the same time, as demonstrated by its application on a Bay Area housing data set.

1.1 Related Work

The perturbed histogram is first described under the context of differential privacy in [1]. The problem of non-interactive release has also been studied by [6], which targets releasing the differentially private distribution function or the density function in a non-parametric setting. Theoretically, M-estimators could be indirectly obtained from the released density function. However, the more direct perspective taken in this paper leads to an improved rate of convergence as well as an efficient algorithm.

Several aspects of parameter estimation problems have been studied with differential privacy under the interactive framework. In particular, [4] shows that many robust estimators can be made differentially private and that general private estimators can be obtained from compositions of robust location and scale estimators. [5] shows that statistical estimators with generic asymptotic normality can be made differentially private with the same asymptotic variance. Both works involve estimating the inter-quartile range in a differentially private manner, where the algorithm may output "No Response" [4], or the data is assumed to have known upper and lower bounds [5]. In a slightly different context, [3] considers penalized logistic regression as a special case of empirical risk minimization, where the penalized logistic regression coefficients are estimated with differential privacy by minimizing a perturbed objective function. Their method uses a different form of perturbation and is still interactive. It connects with the present paper in the sense that the perturbation is finally expressed in the objective function. Both papers assume convexity, which ensures that the shift in the minimizer is small when the deviation in the objective function is small. We also note that the method in [3] depends on a strictly convex penalty term, which is typically used in high-dimensional problems, while our method works for problems where no penalization is used.

2 Preliminaries

2.1 Definition of Privacy

A database is modeled as a set of data points D = {x1, ..., xn} ∈ X^n, where X is the data universe. In most cases each data entry xi represents the microdata of an individual. We use the Hamming distance to measure the proximity between two databases of the same size. Suppose |D| = |D′|; then the Hamming distance is H(D, D′) = |D \ D′| = |D′ \ D|. The objective of data privacy is to release useful information from the data set while protecting information about any individual data entry.

Definition 1 (Differential Privacy [1]). A randomized function T(D) gives ε-differential privacy if, for all pairs of databases (D, D′) with H(D, D′) = 1 and all measurable subsets E of the image of T:

    log [ P(T ∈ E | D) / P(T ∈ E | D′) ] ≤ ε.   (1)

In the rest of this paper we assume that n, the size of the database, is public.

2.2 The Perturbed Histogram

In most statistical problems, a database D consists of n independent copies of a random variable X with density f(x). For simplicity, we assume X = [0,1]^d. As we will see in Section 3.2, our method can be extended to non-compact X for some important examples. Suppose [0,1]^d is partitioned into cubic cells with equal bandwidth h_n such that k_n = h_n^{-1} is an integer. Denote each cell¹ as B_r = ∏_{j=1}^d [(r_j − 1)h_n, r_j h_n), for all r = (r_1, ..., r_d) ∈ {1, ..., k_n}^d.

¹ To make sure that the B_r's do form a partition of [0,1]^d, the interval should be [(k_n − 1)h_n, 1] when r_j = k_n.
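As a small illustration of this partition (hypothetical helper code, not from the paper), the following computes the cell index of each data point and the histogram counts n_r; points with a coordinate exactly equal to 1 are absorbed into the last cell, matching the footnote above.

    import numpy as np

    def histogram_counts(X, k_n):
        """Count points of X (an n-by-d array with entries in [0, 1]) in
        each of the k_n**d cubic cells B_r with bandwidth h_n = 1 / k_n."""
        n, d = X.shape
        # cell indices r_j in {0, ..., k_n - 1} (0-based); coordinates
        # equal to 1 are clipped into the last cell, as in the footnote
        r = np.minimum((X * k_n).astype(int), k_n - 1)
        counts = np.zeros((k_n,) * d, dtype=int)
        for idx in r:
            counts[tuple(idx)] += 1
        return counts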
Suppose $[0,1]^d$ is partitioned into cubic cells with equal bandwidth $h_n$ such that $k_n = h_n^{-1}$ is an integer. Denote each cell as $B_r = \prod_{j=1}^{d} [(r_j - 1)h_n,\, r_j h_n)$, for all $r = (r_1, \dots, r_d) \in \{1, \dots, k_n\}^d$. (To make sure that the $B_r$'s do form a partition of $[0,1]^d$, the interval should be $[(k_n - 1)h_n, 1]$ when $r_j = k_n$.) The histogram density estimator is then
$$\hat f_{\mathrm{hist}}(x) = h_n^{-d} \sum_r \frac{n_r}{n}\, \mathbf{1}(x \in B_r), \tag{2}$$
where $n_r := \sum_{i=1}^n \mathbf{1}(X_i \in B_r)$ is the number of data points in $B_r$.

Clearly the density estimator described above depends on the data only through the histogram counts $(n_r,\, r \in \{1,\dots,k_n\}^d)$. If we can find a differentially private version of $(n_r,\, r \in \{1,\dots,k_n\}^d)$, then the corresponding density estimator $\hat f$ will also be differentially private by a simple change-of-measure argument. We consider the following perturbed histogram, as described in [1]:
$$\tilde n_r = n_r + z_r, \quad \forall\, r \in \{1,\dots,k_n\}^d, \tag{3}$$
where the $z_r$'s are independent with density $(\epsilon/4)\exp(-\epsilon|z|/2)$. We have

Lemma 2 ([1]). $(\tilde n_r,\, r \in \{1,\dots,k_n\}^d)$ satisfies $\epsilon$-differential privacy.

We call $(\tilde n_r,\, r \in \{1,\dots,k_n\}^d)$ the Perturbed Histogram. Substituting $n_r$ by $\tilde n_r$ in (2), we obtain a differentially private version of $\hat f_{\mathrm{hist}}$:
$$\hat f_{PH}(x) = h_n^{-d} \sum_r \frac{\tilde n_r}{n}\, \mathbf{1}(x \in B_r). \tag{4}$$
In general $\hat f_{PH}$ given by (4) is not a valid density function, since it can take negative values and may not integrate to 1. To avoid these undesirable properties, [6] uses $\tilde n_r^+ = (\tilde n_r \vee 0)$ instead of $\tilde n_r$ and $\tilde n = \sum_r \tilde n_r^+$ instead of $n$, so that the resulting density estimator is non-negative and integrates to 1.

2.3 M-estimators

Given a random variable $X$ with density $f(x)$, the parameter of interest is defined as $\theta^* = \arg\min_\theta M(\theta)$, where $M(\theta) = \int m(x,\theta) f(x)\,dx$, $\theta \in \Theta \subseteq \mathbb{R}^p$, and $m(x,\theta)$ is the contrast function. Assuming $X_i \overset{iid}{\sim} f$, the corresponding M-estimator is usually obtained by minimizing the empirical average of the contrast function:
$$\hat\theta = \arg\min_{\theta \in \Theta} M_n(\theta), \quad \text{where } M_n(\theta) = n^{-1} \sum_{i=1}^n m(X_i, \theta). \tag{5}$$
M-estimators cover many important statistical inference procedures such as sample quantiles, maximum likelihood estimators (MLE), and least squares estimators. Most M-estimators are $1/\sqrt{n}$-consistent and asymptotically normal. For more details about the theory and application of M-estimators, see [7].

3 Differentially private M-estimators

Combining equations (4) and (5) gives a differentially private objective function:
$$M_{n,PH}(\theta) = \int_{[0,1]^d} \hat f_{PH}(x)\, m(x,\theta)\,dx. \tag{6}$$
We wish to use the minimizer of $M_{n,PH}$ as a differentially private estimate of $\theta^*$. Consider the following set of conditions on the contrast function $m(x,\theta)$.

(A1) $g(x,\theta) := \frac{\partial}{\partial\theta} m(x,\theta)$ exists and $|g(x,\theta)| \le C_1$ on $[0,1]^d \times \Theta$.

(A2) $g(x,\theta)$ is Lipschitz in $x$ and $\theta$: $\|g(x_1,\theta) - g(x_2,\theta)\|_2 \le C_2 \|x_1 - x_2\|_2$ for all $\theta$; and $\|g(x,\theta_1) - g(x,\theta_2)\|_2 \le C_2 \|\theta_1 - \theta_2\|_2$ for all $x$.

(A3) $m(x,\theta)$ is convex in $\theta$ for all $x$, and $M(\theta)$ is twice continuously differentiable with $M''(\theta^*) := \int f(x) \frac{\partial}{\partial\theta} g(x,\theta^*)\,dx$ positive definite.

Condition (A1) requires a bounded derivative of the contrast function, which is closely related to the robustness of the corresponding M-estimator [8]. It indicates that any small change in the underlying distribution cannot change the outcome by too much, which is also required implicitly by the definition of differential privacy. Condition (A2) has two parts. The Lipschitz condition on $x$ is used to bound the bias caused by histogram approximation, while the Lipschitz condition on $\theta$
is used to establish a uniform upper bound on the sampling error in $M_n'(\theta) = n^{-1}\sum_i g(x_i,\theta)$, as well as a uniform upper bound on the error caused by the additive Laplacian noises. Condition (A3) requires some curvature in the objective function in a neighborhood of the true parameter, which ensures that the minimizer is stable under small perturbations. The following theorem is our first main result:

Theorem 3. Under conditions (A1)-(A3), if $h_n \asymp (\sqrt{\log n}/n)^{2/(d+2)}$, then there exists a local minimizer $\hat\theta^*_{PH}$ of $M_{n,PH}$ such that
$$|\hat\theta^*_{PH} - \theta^*| = O_P\!\left(n^{-1/2} \vee (\sqrt{\log n}/n)^{2/(d+2)}\right). \tag{7}$$

A proof of Theorem 3 is given in the supplementary material. At a high level, by assumption (A3) it suffices to show (Lemma 9) that $\sup_{\theta \in \Theta_0} |M'_{n,PH}(\theta) - M'(\theta)| = O_P\!\left(1/\sqrt{n} \vee (\sqrt{\log n}/n)^{2/(2+d)}\right)$ for some compact neighborhood $\Theta_0$ of $\theta^*$.

The approximation error of $M'_{n,PH}(\theta)$ can be decomposed into three parts:
$$\int \big(\hat f_{PH}(x) - f(x)\big)\, g(x,\theta)\,dx \;=\; n^{-1}\sum_r z_r h_n^{-d} \int_{B_r} g(x,\theta)\,dx \;+\; n^{-1}\sum_r \Big( n_r h_n^{-d} \int_{B_r} g(x,\theta)\,dx - \sum_{i:\, X_i \in B_r} g(X_i,\theta) \Big) \;+\; n^{-1}\sum_i g(X_i,\theta) - \mathbb{E}\, g(X,\theta). \tag{8}$$
The three terms on the right hand side of (8) correspond to the effect of the Laplace noises added for privacy, the bias caused by using the histogram, and the sampling error, respectively. As in the general theory of histogram estimators, the approximation error depends on the choice of bandwidth $h_n$. Generally speaking, if the bandwidth is small, then the histogram bias term will be small. However, a smaller bandwidth leads to a larger number of cells and hence more Laplacian noise. As a result, there is a trade-off between the histogram bias and the Laplacian noises in the choice of bandwidth. The bandwidth given in Theorem 3 balances these two parts. We also comment on practical choices of $h_n$ in Section 4.

We prove Theorem 3 by investigating the convergence rate of each term on the right hand side of (8). First (Lemma 10), by empirical process theory [9, 10], under conditions (A1) and (A2) the sampling error term in (8) is of order $O_P(1/\sqrt{n})$, uniformly on $\Theta_0$. Second, using the Lipschitz property of $g$, the histogram bias term in (8) is of order $O(h_n)$. Therefore it suffices to show that $\sup_{\theta \in \Theta_0} \big| \sum_r n^{-1} z_r h_n^{-d} \int_{B_r} m(x,\theta)\,dx \big| = O_P\!\left((\sqrt{\log n}/n)\, h_n^{-d/2}\right)$, which can be established using a concentration inequality due to Talagrand [11] (see also [12, Equation 1.3]), together with a covering-net argument (Lemma 11) enabled by the Lipschitz property of $g$ in $\theta$.

3.1 Algorithm based on the perturbed histogram

In practice, exact integration of $\hat f_{PH}(x)\, m(x,\theta)$ over each cell $B_r$ may be computationally expensive, and approximations must be adopted to make the implementation feasible. Note that $\hat f_{PH}(x)$ is piecewise constant. The integration can be simplified by using a piecewise constant approximation of $m(x,\theta)$. Formally, we introduce the following algorithm:

Algorithm 1 (M-estimator using the perturbed histogram)
Input: $D = \{X_1, \dots, X_n\}$, $m(\cdot,\cdot)$, $\epsilon$, $h_n$.
1. Construct the perturbed histogram with bandwidth $h_n$ and privacy parameter $\epsilon$ as in (3).
2. Let $M_{n,PH}(\theta) = n^{-1} \sum_r \tilde n_r\, m(a_r, \theta)$, where $a_r \in [0,1]^d$ is the center of $B_r$, with $a_r(j) = (r_j - 0.5)\, h_n$ for all $1 \le j \le d$.
3. Output $\hat\theta_{PH} = \arg\min_\theta M_{n,PH}(\theta)$.

Compared to the estimator obtained by minimizing the exact integral, the only term in (8) impacted by using $g(a_r,\theta)$ instead of $h_n^{-d} \int_{B_r} g(x,\theta)\,dx$ is the histogram bias term. However, note that
$$\Big| g(a_r,\theta) - h_n^{-d} \int_{B_r} g(x,\theta)\,dx \Big| = O(h_n).$$
As a result, the convergence rate of $\hat\theta_{PH}$ remains the same:

Theorem 4 (Statistical Utility of Algorithm 1).
Under Assumptions (A1-A3), if Mn,P H (?) is giv? en by Algorithm 1 with hn  ( log n/n)2/(2+d) then there exists a local minimizer, ??P H , of Mn,P H (?), such that p ? |??P H ? ?? | = OP (1/ n ? ( log n/n)2/(2+d) ). (9) Example (Logistic regression) We give a concrete example that satisfies (A1)-(A3). Let D = {(Xi , Yi ) ? [0, 1] ? {0, 1} : 1 ? i ? n}, where the conditional distribution of Yi given Xi is Bernoulli with parameterPexp(?Xi )/[1 + exp(?Xi )]. The maximum likelihood estimator for ? is ?MLE = arg min i [??Yi Xi + log(1 + exp(?Xi ))]. Here the contrast function m(x, y; ?) = ??xy + log(1 + exp(?x)) and it is easy to check that (A1)-(A3) hold. In this example X is continuous and Y is binary, so it is only necessary to discretize X when constructing the histogram. To be specific, suppose [0, 1] is partitioned into equal-sized cells (Br , 1 ? r ? kn ) as in the ordinary univariate histogram. The joint histogram for (X, Y ) is constructed by counting the number of data points in each of the product cells Br,j := Br ? {j} for j = 0, 1. See Subsection 4.1 for more details on constructing histograms when there are categorical variables. Note that Theorems 3 and 4 do not guarantee the uniqueness or even existence of a global minimizer for the perturbed objective function Mn,P H (?). This is because sometimes with small probability some perturbed histogram count n ? r can be negative hence the corresponding objective function Mn,P H may not be convex. In our simulation and real data experience, this is usually not a real problem since a similar argument as in Theorem 3 shows that, with high probability, the second 00 00 derivative Mn,P H is uniformly close to M in any compact subset of ?. To completely avoid this issue, one can use thresholding after perturbation as described in the following algorithm. Algorithm 10 (Perturbed histogram with nonnegative counts) Input: D = {X1 , ? ? ? , Xn }, m(?, ?), ?, hn . 1 Construct perturbed histogram with bandwidth hn and privacy parameter ? as in (3). ? n,P H (?) = n?1 P n 2 Let M ? r m(ar , ?), where n ? r = max(? nr , 0). r ? n,P H (?). 3 Output ??P H = arg min M 0 Although the thresholding guarantees that the zero points of Mn,P H (?) is indeed a global minimizer by convexity of Mn,P H (?), it increases the approximation error introduced by the Laplacian noises because now these noises no longer cancel with each other nicely in the first term of the right hand side of equation (8). We have the following utility result for Algorithm 10 : Theorem 5. Under Assumptions (A1-A3) and hn  (log n/n)1/(1+d) , the estimator given by Algorithm 10 satisfies |??P H ? ?? | = OP ((log n/n)1/(1+d) ). Proof. The proof follows essentially from that of Theorem 3, with P a different choice of bandwidth hn . The concentration inequality result no longer holds for r z?r g(ar , ?) where z?r = max(zr , ?nr ), because z?r ?s are not independent. Instead, we consider a direct union bound: supr |? zr | ? supr |zr | = OP (log h?d n ) = OP (log n). Therefore the Laplacian noise term in right hand side of (8) is bounded uniformly for all ? by OP (n?1 h?d n log n). The histogram bias is still O(hn ) as we mentioned in the discussion of Algorithm 1. Therefore the convergence rate is optimized by choosing hn  (log n/n)1/(1+d) . 5 3.2 Non-differentiable contrast functions Now we consider the possibility of relaxing condition (A2). Allowing discontinuity in g(x, ?) is motivated by a class of M-estimators whose contrast functions m(x, ?) 
are non-differentiable on a set of zero measure. An important example is the quantile. For a random variable X ? R1 with cumulative distribution function F (?) and any given ? ? (0, 1), the ? -th quantile of X is q(? ) := F ?1 (? ), which corresponds to an M-estimator with m(x, ?) = (1?? )(x??)? +? (x??)+ (see [13]). Quantiles provide important information about the distribution, including both location (median) and scale (inter-quartile range). The robustness of sample quantiles also makes them good candidates for differentially private data release. Differentially private quantile estimators are indeed major building blocks for some existing privacy preserving statistical estimators [4, 5]. Our result in this subsection shows that perturbed histograms can give simple, consistent, and differentially private quantile estimators. The following set of conditions will suffice for this purpose and the argument is largely the same as Theorem 4: (B1) m(x, ?) is convex and Lipschitz in both x and ?. (B2) M (?) is twice differentiable at ?? with M 00 (?? ) > 0. (B3) ? is compact and convex. Corollary 6 (Statistical utility of Algorithm 1). Under conditions (B1-B3) and hn ? ( log n/n)2/(2+d) , any minimizer ??P H of Mn,P H given by Algorithm 1 satisfies (9).  Proof. The argument is largely the same as the proof of Theorem 3. Here we consider the original objective functions Mn,P H and M instead of their derivatives. By a similar decomposition as in eq. ? ? (8), using the compactness of ?, we have sup? |Mn,P H ?M | = OP (1/ n?( log n/n)?2/(2+d) ). Then the convergence of ??P H follows from the convexity of M . Remark 7. Condition (B3) is the most restrictive one. It requires ? to be bounded. This is because the proof uses the fact that Mn (?) and M (?) are uniformly close for large n, which is usually true for a bounded set of ?. Remark 8. For quantiles the contrast function is piecewise linear, so for most cells in the histogram there would be no approximation error if the data points are approximated by the cell center. The M-estimators for quantiles actually enjoy faster convergence rates. Extension to distributions supported on (??, ?). Recall that we assume X ? [0, 1]d . For quantiles, we have d = 1 and the quantile estimators described above can be extended to any continuous random variable whose density function is supported on (??, ?). Let {Zi , i = 1, . . . , n} be an independent sample from density fZ with fZ (z) > 0, ? z ? R1 . Let ? ? (0, 1) and suppose we want to estimate qZ (? ), the ? -th quantile of Z. To apply our method, define X = exp(Z)/(1 + exp(Z)). Clearly the quantiles are preserved under this monotone transformation. Applying the perturbed histogram quantile estimator on {Xi , i?= 1, . . . , n} we obtain q?X,P H (? ), the differentially private ? -th qunatile of X, which is 1/ n-consistent?by Corollary 6. As a result, the estimate q?Z,P H (? ) := log[? qX,P H (? )/(1 ? q?X,P H (? ))] is a 1/ n-consistent estimator for qZ (? ). 4 4.1 Practical Aspects Complexity and Flexibility From now on we will drop the logarithm terms to simplify presentation. Suppose hn  n?2/(2+d) . d 2d/(2+d) Then the perturbed histogram (? nr : r ? {1, . . . , h?1 ) n } ) can be constructed in O(n time by specifying the corresponding cell for each data point. Once the histogram is construct?d 2d/(2+d) ed, ) weighted data points  following Algorithm 1, we can view it as a set of hn = O(n ?1 d ar , r ? {1, . . . , hn } associated with weights {? 
nr }, where each data point ar is the center of cell Br as defined in Step 2 of Algorithm 1. For M-estimators that allow a close form solution in terms of the minimum sufficient statistics, such as least square regression, Mn,P H (?) (and hence ??P H ) can be calculated in O(n2d/(2+d) ) time. For general M-estimators that require an iterative optimization, such as logistic regression, the Hessian and gradients can be calculated in O(n2d/(2+d) ) 6 time in each iteration. Such a weighted sample representation can be easily implemented using standard data structures in common statistical programming packages such as R and Matlab. Another attractive property of the proposed approach is its flexibility to accommodate different data types. As seen in the logistic regression example in Subsection 3.1, it is straightforward to construct multivariate histograms when some variables are categorical and some are continuous. In such cases it suffices to discretize the continuous variables. To be specific, let (X 1 , . . . , X d1 ) ? Qd2 {1, . . . , kj } be a set [0, 1]d1 be a d1 -dimensional continuous variable and (Y 1 , . . . , Y d2 ) ? j=1 of d2 discrete variables where Y j takes value in {1, . . . , kj }. For any bandwidth h, let {Br , r ? {1, . . . , h?1 }d1 } be the corresponding set of histogram cells in [0, 1]d1 . Then the joint histogram for (X, Y ) is constructed with cells  Br,y , r ? {1, . . . , h?1 }d1 , y ? d2 O {1, . . . , kj } . j=1 Because only the continuous variables have histogram approximation error, the theoretical results developed in Section 3 are applicable with sample size n and dimensionality d1 . 4.2 Improvement by enhanced thresholding In applications such as regression, the multivariate distribution often concentrates on a subset (usually a lower dimensional manifold) of [0, 1]d . Therefore many non-zero cells are artificially created by additive noises. To alleviate this problem, we threshold the histogram with an enhanced cut-off value: n ?r = n ? r 1(? nr ? A log n/?), where A > 0 is a tuning parameter. This is based on the intuition that the maximal noise will be O(log n/?). As shown in the following data example, such a simple thresholding step remarkably improves the accuracy. 4.3 Application to housing price data As an illustration, we apply our method to a housing price data consisting of 348,189 houses sold in San Francisco Bay Area between 2003 and 2006. For each house, the data contains the price, size, year of transaction, and county in which the house is located. The inference problem of interest is to study the relationship between housing price and other variables [14]. In our case, we want to build a simple linear regression model to predict the housing price using the other variables while protecting each individual transaction record with differential privacy. The data set has two continuous variables (price and size), one ordinal variable (year of sale) with 4 levels, and one categorical variable (county) with 9 levels. The preprocessing filters out data points with price outside of the range $105 ? $9 ? 105 or with size larger than 3000 sqft. We also combine small counties that are geologically close and have similar housing prices. After the preprocessing, there are 250,070 data points and the county variable has 6 levels after the combination. For each (year, county) combination, a perturbed histogram is constructed over the two continuous variables with privacy parameter ? and K levels in each continuous dimension. Then there are 4 ? 6 ? 
$K^2$ cells, each having a perturbed histogram count. Using the weighted sample representation described in Subsection 4.1, the perturbed data can be viewed as a data set with $24K^2$ data points weighted by the perturbed histogram counts. A differentially private regression coefficient is obtained by applying weighted least squares regression to this data set.

To assess the performance, the privacy-preserving regression coefficients are compared with those given by the non-private ordinary least squares (OLS) estimates. In particular, we look at the coordinate-wise relative deviance from the OLS coefficients: $\delta = |\hat\theta_{priv}/\hat\theta_{OLS} - 1|$. To account for the randomness of the additive noises, we repeat 100 times and report the root mean square error $\bar\delta = \big(\sum_{i=1}^{100} \delta_i^2 / 100\big)^{1/2}$, where $\delta_i$ is the relative error obtained in the $i$th repetition. The results are summarized in Table 1. We test two values of $\epsilon$, the privacy parameter. Recall that a smaller value of $\epsilon$ indicates a stronger privacy guarantee. For each value of $\epsilon$ we apply both the original Algorithm 1 and the enhanced thresholding described in Subsection 4.2, with tuning parameter $A = 1/2$.

Table 1: Linear regression coefficients using the Bay Area housing data. The second column gives the regression coefficients obtained by the ordinary least squares method without any perturbation. We compare estimates given by (1) the perturbed histogram (PH, Algorithm 1) and (2) the perturbed histogram with enhanced thresholding (THLD) as described in Subsection 4.2. The reported number is the root mean square relative error (in percent) over 100 perturbations as described above. The histograms use $K = 10$ segments in each continuous dimension.

Variable    OLS        ε=0.1 PH   ε=0.1 THLD   ε=1 PH   ε=1 THLD
Intercept   135141     10.6       7.7          7.2      4.4
Size        209        4.7        3.5          3.6      2.3
Year        56375      4.6        2.8          1.0      0.4
County2     -53765     8.0        7.8          1.5      0.7
County3     146593     4.2        2.5          0.8      0.3
County4     -27546     29.8       37.1         2.8      2.1
County5     45828      9.8        7.9          1.4      1.3
County6     -140738    7.1        3.3          1.0      0.4

For $\epsilon = 1$ the coefficients given by the perturbed histogram are close to those given by OLS, with most relative deviances below 5%. When $\epsilon = 0.1$, which is a conservative choice because $\exp(0.1) \approx 1.1$, the perturbed histogram still gives reasonably close estimates, with average deviance below 10% for all parameters except the county dummy variable "County4". This variable has the smallest OLS coefficient among all county dummy variables, so weight fluctuation in the histogram has a relatively larger impact on the relative deviance. Even so, the perturbed histogram still gives an at least qualitatively correct estimate. We also observe that the thresholded histogram gives more accurate estimates for all coefficients except County4 when $\epsilon = 0.1$.

The choice of $K$ should depend on the sample size and dimensionality. Our theory suggests $K = O(n^{2/(2+d)})$, where $d$ is the dimensionality of the histogram and hence equals the number of continuous variables. In this data set $n = 250,070$ and $d = 2$, which suggests $K \approx 500$. This is not a good choice since it produces $24 \times 500^2 = 6 \times 10^6$ cells. Let the number of cells be $c(K)$. In practice, it makes sense to choose $K$ such that the average data count in a cell, $n/c(K)$, is much larger than the maximum additive noise $\max_r |z_r|$, which is $O_P(\log c(K))$. For this data set, when $K = 10$ we have $n/c(K) \approx 100$ and $\log(c(K)) \approx 7.78$.

5 Further Discussions

We demonstrate how histograms can be used as a basic tool for statistical parameter estimation under strong privacy constraints.
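For concreteness, the following is a minimal sketch (Python with NumPy; the function and variable names are our own) of the pipeline used in the experiment above: noisy counts are thresholded as in Subsection 4.2 and then fed to a weighted least squares solve, following the weighted-sample view of Subsection 4.1, with the cell centers playing the role of the data points $a_r$.

```python
import numpy as np

def dp_weighted_ols(noisy_counts, centers, n, eps, A=0.5):
    """Weighted least squares on a perturbed histogram (sketch).

    noisy_counts : (m,) eps-DP histogram counts, one per cell
    centers      : (m, p) cell centers a_r; column 0 = response, rest = features
    n            : true sample size (public, per Section 2.1)
    A            : tuning constant of the enhanced threshold in Subsection 4.2
    """
    # Enhanced thresholding: drop cells whose count is at the noise level,
    # since the maximal Laplace noise is on the order of log(n) / eps.
    w = np.where(noisy_counts >= A * np.log(n) / eps, noisy_counts, 0.0)
    y = centers[:, 0]
    X = np.column_stack([np.ones(len(w)), centers[:, 1:]])  # intercept term
    # Weighted normal equations: (X^T diag(w) X) beta = X^T diag(w) y.
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)
```

As noted in Subsection 4.1, closed-form estimators such as least squares need only a single pass over the weighted cells, so the cost after histogram construction is independent of the raw sample size.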
The perturbed histogram adds to each histogram count a doubleexponential noise with constant parameter depending only on the privacy budget ?. The histogram approximation bias and the additive noise on the cell counts result in a bias-variance trade-off as usually seen for histogram-based methods. Such an algorithm should work well for low-dimensional problems. Solutions to higher dimensional problems are yet to be developed. One possibility is to perturb the minimum sufficient statistics because the dimensionality of minimum sufficient statistics is usually much smaller than the number of histogram cells. For example, in linear regression analysis, it suffices to obtain the first and second moments of all variables in a privacy-preserving way. However, perturbing minimum sufficient statistics would only work for a single estimator and is only possible for interactive release. We are seeing another type of privacy-utility trade-off, where the utility is not only about the rate of convergence, but also about the range of possible analyses allowed by the data releasing mechanism. The perturbed histogram is also related to ?error in variable? inference problems. Suppose the original data is just the histogram, then the perturbed version can be thought as the true histogram counts contaminated by some measurement errors. In this paper we provide consistency results for a class of inference problems in presence of such measurement errors. However, plugging in the perturbed values does not necessarily give the best inference procedure and better alternatives may be possible, see [15] for a hypothesis testing example in contingency tables. An important and challenging question is how to find the optimal inference procedure in presence of such measurement errors. A positive answer to this question will help establish a lower bound of approximation error and better understand the power and limit of perturbed histograms. 8 Acknowledgements Jing Lei was partially supported by NSF Grant BCS-0941518. References [1] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Theory of Cryptography Conference, pages 265?284, 2006. [2] C. Dwork. Differential privacy. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP)(2), pages 1?12, 2006. [3] K. Chaudhuri and C. Monteleoni. Privacy-preserving logistic regression. In Advances in Neural Information Processing Systems, 2008. [4] C. Dwork and J. Lei. Differential privacy and robust statistics. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, 2009. [5] A. Smith. Privacy-preserving statistical estimation with optimal convergence rates. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, 2011. [6] L. Wasserman and S. Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105:375?389, 2010. [7] P. J. Huber and E. M. Ronchetti. Robust Statistics. John Wiley & Sons, Inc., 2nd edition, 2009. [8] F. Hampel, E. Ronchetti, P. Rousseeuw, and W. Stahel. Robust Statistics: The Approach Based on Influence Functions. John Wiley, New York, 1986. [9] A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998. [10] M. Talagrand. Sharper bounds for Gaussian and empirical processes. The Annals of Probability, 22:28?76, 1994. [11] M. Talagrand. A new isoperimetric inequality and the concentration of measure phenomenon. 
Lecture Notes in Mathematics, 1469/1991:94?124, 1991. [12] S. Bobkov and M. Ledoux. Poincar?e?s inequalities and Talagrand?s concentration phenomenon for the exponential distribution. Probability Theory and Related Fields, 107:383?400, 1997. [13] R. Koenker and K. F. Hallock. Quantile regression. Journal of Economic Perspectives, 15:143? 156, 2001. [14] R. K. Pace and R. Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33:291?297, 1997. [15] D. Vu and A. Slavkovic. Differential privacy for clinical trial data: Preliminary evaluations. In Proceedings of the 2009 IEEE International Conference on Data Mining Workshops, 2009. 9
Learning Eigenvectors for Free Wouter M. Koolen Royal Holloway and CWI Wojtek Kot?owski Centrum Wiskunde & Informatica Manfred K. Warmuth UC Santa Cruz [email protected] [email protected] [email protected] Abstract We extend the classical problem of predicting a sequence of outcomes from a finite alphabet to the matrix domain. In this extension, the alphabet of n outcomes is replaced by the set of all dyads, i.e. outer products uu> where u is a vector in Rn of unit length. Whereas in the classical case the goal is to learn (i.e. sequentially predict as well as) the best multinomial distribution, in the matrix case we desire to learn the density matrix that best explains the observed sequence of dyads. We show how popular online algorithms for learning a multinomial distribution can be extended to learn density matrices. Intuitively, learning the n2 parameters of a density matrix is much harder than learning the n parameters of a multinomial distribution. Completely surprisingly, we prove that the worst-case regrets of certain classical algorithms and their matrix generalizations are identical. The reason is that the worst-case sequence of dyads share a common eigensystem, i.e. the worst case regret is achieved in the classical case. So these matrix algorithms learn the eigenvectors without any regret. 1 Introduction We consider the extension of the classical online problem of predicting outcomes from a finite alphabet to the matrix domain. In this extension, the alphabet of n outcomes is replaced by a set of all dyads, i.e. outer products uu> where u is a unit vector in Rn . Whereas classically the goal is to learn as well as the best multinomial distribution over outcomes, in the matrix case we desire to learn the distribution over dyads that best explains the sequence of dyads seen so far. A distribution on dyads is summarized as a density matrix, i.e. a symmetric positive-definite1 matrix of unit trace. Such matrices are heavily used in quantum physics, where dyads represent states. We will show how popular online algorithms for learning multinomials can be extended to learn density matrices. Considerable attention has been placed recently on generalizing algorithms for learning and optimization problems from probability vector parameters to density matrices [17, 19]. Efficient semidefinite programming algorithms have been devised [1] and better approximation algorithms for NP-hard problems have been obtained [2] by employing on-line algorithms that update a density matrix parameter. Also two important quantum complexity classes were shown to collapse based on these algorithms [8]. Even though the matrix generalization led to progress in many contexts, in the original domain of on-line learning, the regret bounds proven for the algorithms in the matrix case are often the same as those provable for the original classical finite alphabet case [17, 19]. Therefore it was posed as an open problem to determine whether this is just a case of loose classical bound or whether there truly exists a ?free matrix lunch? for some of these algorithms [18]. Such algorithms essentially would learn the eigensystem of the data for free without incurring any additional regret. This is non-intuitive, since one would expect a matrix to have n2 parameters and be much harder to learn than an n dimensional parameter vector. 1 We use positive in the non-strict sense, and omit ?symmetric? and ?definite?. Our matrices are real-valued. 1 for trial t = 1, 2, . . . 
, T do Algorithm predicts with density matrix Wt?1 Nature responds with density matrix Xt .  Algorithm incurs loss ? tr Xt log(Wt?1 ) . end for Density matrix prediction for trial t = 1, 2, . . . , T do Algorithm predicts with probability vector ?t?1 Nature responds with outcome xt . Algorithm incurs loss ? log ?t?1,xt . end for Probability vector prediction Figure 1: Protocols In this paper we investigate this frivolously named but deep ?free matrix lunch? question in arguably the simplest context: learning a multinomial distribution. In the classical case, there are n ? 2 outcomes and a distribution is parametrized by an n-dimensional probability vector ?, where ?i is the probability of outcome i. One can view the basePvectors ei as the elementary events and the probability vector as a mixture of these events: ? = i ?i ei . We define a ?matrix generalization? of a multinomial which is parametrized by a density matrix W (positive matrix of unit trace). Now the elementary events are dyads of the form uu> , where u is a unit vector in Rn . Dyads are the representations of states used in quantum physics [20]. A density matrix is a mixture of dyads. Whereas probability vectors represent uncertainty over n basis vectors, density matrices can be viewed as representing uncertainty over infinitely many dyads in Rn . In the classical case, the algorithm predicts at trial t with multinomial ?t?1 . Nature produces an outcome xt ? {1, . . . , n}, and the algorithm incurs loss ? log(?t?1,xt ). The most common heuristic (a.k.a. the Laplace estimator) chooses ?t?1,i proportional to 1 plus the number of previous trials in which outcome i was observed. The on-line algorithms are evaluated by their worst-case regret over data sequences, where the regret is the additional loss of the algorithm over the total loss of the best probability vector chosen in hindsight. In this paper we develop the corresponding matrix setting, where the algorithm predicts with a den> sity matrix Wt?1 , Nature produces a dyad xt x> t , and the algorithm incurs loss ?xt log(Wt?1 )xt . Here log denotes the matrix logarithm. We are particularly interested in how the regret changes when the algorithms are generalized to the matrix case. Surprisingly we can show that for the Laplace as well as the Krichevsky-Trofimov [10] estimators the worst-case regret is the same in the matrix case as it is in the classical case. For the Last-Step Minimax algorithm [16], we can prove the same regret bound for the matrix case that was proven for the classical case. Why are we doing this? Most machine learning algorithms deal with vector parameters. The goal of this line of research is to develop methods for handling matrix parameters. We are used to dealing with probability vectors. Recently a probability calculus was developed for density matrices [20] including various Bayes rules for updating generalized conditionals. The vector problems are typically retained as special cases of the matrix problems, where the eigensystem is fixed and only the vectors of eigenvalues has to be learned. We exhibit for the first time a basic fundamental problem, for which the regret achievable in the matrix case is no higher than the regret achievable in the original vector setting. Paper outline Definitions and notation are given in the next section, followed by proofs of the free matrix lunch for the three discussed algorithms in Section 3. At the core of our proofs is a new technical lemma for mixing quantum entropies. 
We also discuss the minimax algorithm for multinomials due to Shtarkov, and the corresponding minimax algorithm for density matrices. We provide strong experimental evidence that the free matrix lunch holds for this algorithm as well. To put the results into context, we motivate and discuss our choice of the loss function, and compare it to several alternatives in Section 4. More discussion and perspective are provided in Section 5.

2 Setup

The protocols for the classical probability vector prediction problem and the new density matrix prediction problem are displayed side-by-side in Figure 1. We explain the latter problem. Learning proceeds in trials. During trial $t$ the algorithm predicts with a density matrix $W_{t-1}$. We use index $t-1$ to indicate that it is based on the $t-1$ previous outcomes. Then nature responds with an outcome density matrix $X_t$. The discrepancy between prediction and outcome is measured by the matrix entropic loss
$$\ell(W_{t-1}, X_t) := -\operatorname{tr}\!\big(X_t \log(W_{t-1})\big), \tag{1}$$
where $\log$ denotes the matrix logarithm. When the outcome density matrix $X_t$ is a dyad $x_t x_t^\top$, this loss becomes $-x_t^\top \log(W_{t-1})\, x_t$, which is the simplified form of the entropic loss discussed in the introduction. Also, if the prediction density matrix is diagonal, i.e. it has the form $W_{t-1} = \sum_i \omega_{t-1,i}\, e_i e_i^\top$ for some probability vector $\omega_{t-1}$, and the outcome $X_t$ is an eigendyad $e_j e_j^\top$ of the same eigensystem, then this loss simplifies to the classical log loss: $\ell(W_{t-1}, X_t) = -\log(\omega_{t-1,j})$.

The above definition is not the only way to promote the log loss to the matrix domain; in Section 4 we justify this choice. We aim to design algorithms with low regret compared to the best fixed density matrix in hindsight. The loss of the best fixed density matrix can be expressed succinctly in terms of the von Neumann entropy, which is defined for any density matrix $D$ as $H(D) := -\operatorname{tr}(D \log D)$, and the sufficient statistic $S_T = \sum_{t=1}^T X_t$, as follows: $\inf_W \sum_{t=1}^T \ell(W, X_t) = T\, H\!\left(\frac{S_T}{T}\right)$. For fixed data $X_1, \dots, X_T$, the regret of a strategy that issues prediction $W_t$ after observing $X_1, \dots, X_t$ is
$$\sum_{t=1}^T \ell(W_{t-1}, X_t) \;-\; T\, H\!\left(\frac{S_T}{T}\right), \tag{2}$$
and the worst-case regret on $T$ trials is obtained by taking $\sup_{X_1,\dots,X_T}$ over (2). Our aim is to design strategies for density matrix prediction that have low worst-case regret.

3 Free Matrix Lunches

In this section, we show how four popular online algorithms for learning multinomials can be extended to learning density matrices. We start with the simple Laplace estimator, continue with its improved version known as the Krichevsky-Trofimov estimator, and also extend the less known Last Step Minimax strategy, which has even less regret. We prove a version of the free matrix lunch (FML) for all three algorithms. Finally we discuss the minimax algorithm, for which we have experimental evidence that the free matrix lunch holds as well.

3.1 Laplace

After observing classical data with sufficient statistic vector $\sigma_t = \sum_{q=1}^t e_{x_q}$, classical Laplace predicts with the probability vector $\omega_t := \frac{\sigma_t + 1}{t + n}$, consisting of the normalized smoothed counts. By analogy, after observing matrix data with sufficient statistic $S_t = \sum_{q=1}^t X_q$, matrix Laplace predicts with the correspondingly smoothed matrix $W_t := \frac{S_t + I}{t + n}$. Classical Laplace is commonly motivated as either the Bayes predictive distribution w.r.t. the uniform prior or as a loss minimization with virtual outcomes [3]. The latter motivation can be "lifted"
to the matrix domain by adding n virtual outcomes at I/n: ( ) t X St + I Wt = argmin n `(W , I/n) + `(W , Xq ) = . (3) t+n W dens. mat. q=1  The worst-case regret of classical Laplace after T iterations equals log T +n?1 ? (n?1) log(T +1) n?1 (see e.g. [6]). We now show that in the matrix case, no additional regret is incurred. Theorem 1 (Laplace FML). The worst-case regrets of classical and matrix Laplace coincide. Proof. Let Wt? denote the best density matrix for the first t outcomes. The regret (2) of matrix Laplace can be bounded as follows: T X t=1 2 `(Wt?1 , Xt ) ? T X `(WT? , Xt ) ? t=1 T  X  `(Wt?1 , Xt ) ? `(Wt? , Xt ) . t=1 For any positive matrix with eigendecomposition A = 3 P i ?i ai a> i , log(A) := P i log(?i ) ai a> i . (4) Now consider each term in the right-hand sum separately. The tth term equals       St?1 + I St t?1+n ? tr Xt log ? log = log ? tr Xt log(St?1 + I) ? log St . t?1+n t t Note that the first term constitutes the ?classical? part of the per-round regret, while the second term is the ?matrix? part. The matrix part is non-positive since St?1 + I  St , and the logarithm is a matrix monotone operation (i.e. A  B implies log A  log B). By omitting it, we obtain an upper bound on the regret of matrix Laplace, that is tight: for any sequence of identical dyads the matrix part is zero and (4) holds with equality since Wt? = WT? for all t ? T . The same upper bound is also met by classical Laplace on any sequence of identical outcomes [6]. We just showed that matrix Laplace has the same worst-case regret as classical Laplace, albeit matrix Laplace learns a matrix of n2 parameters whereas classical Laplace only learns n probabilities. No additional regret is incurred for learning the eigenvectors. Matrix Laplace can update Wt in O(n2 ) time per trial. The same will be true for our next algorithm. 3.2 Krichevsky-Trofimov (KT) t +1/2 t +I/2 Classical and matrix KT smooth by adding 21 to each count, i.e. ?t := ?t+n/2 and Wt := St+n/2 . The former can again be obtained as the Bayes predictive distribution w.r.t. Jeffreys? prior, the latter as the solution to the matrix entropic loss minimization problem (3) with n/2 virtual outcomes instead of n for Laplace. The leading term in the worst-case regret for classical KT is the optimal 12 log(T ) rate per parameter instead of the log(T ) rate for Laplace. More precisely, classical KT?s worst-case regret after T  +n/2) ?(1/2) n?1 iterations is known to be log ?(T log(T + 1) + log(?) (see e.g. [6]). ?(T +1/2) + log ?(n/2) ? 2 Again we show that no additional regret is incurred in the matrix case. Theorem 2 (KT FML). The worst-case regrets of classical and matrix KT coincide. The proof uses the following key entropy decomposition lemma (proven in Appendix A): P Lemma 1. For positive matrices A, B with A = i ?i ai a> i the eigendecomposition of A: H(A + B) ? n X a> Bai i i=1 tr(B)  H A + tr(B) ai a> i , Proof of Theorem 2. We start by telescoping the regret (2) of matrix KT as follows     T  X  St?1 + Xt St?1 ? tr Xt log(Wt?1 ) ? tH + (t ? 1)H . t t?1 t=1 (5) We bound each term separately. Let us denote the eigendecomposition of St?1 by St?1 = P n > i=1 ?i si si . Notice that since Wt?1 plays in the eigensystem of St?1 , we have: n n X X   ? tr Xt log(Wt?1 ) = ? tr Xt log(?t?1,i ) si s> = ? s> i i Xt si log(?t?1,i ). i=1 i=1 Moreover, it follows from Lemma 1 that:     n X St?1 + Xt St?1 + si s> i > H ? si Xt si H . 
t t i=1 Taking this equality and inequality into account, the tth term in (5) is bounded above by:      n X St?1 + si s> St?1 i > ?t := si Xt si ? log(?t?1,i ) ? tH + (t ? 1)H , t t?1 i=1 which, in turn, is at most:      St?1 + si s> St?1 i + (t ? 1)H . ?t ? sup ? log(?t?1,i ) ? tH t t?1 i 4 (6) In other words the per-round regret increase is largest for one of the eigenvectors of the sufficient statistic St?1 , i.e. for classical data. To get an upper bound, maximize over S0 , . . . , ST ?1 independently, each with the constraint that tr(St ) = t. A particular maximizer is St = t e1 e> 1 , which is the sufficient statistic of the sequence of outcomes all equal to Xt = e1 e> 1 . For this sequence all bounding steps hold with equality. Hence the matrix KT regret is below the classical KT regret. The reverse is obvious. 3.3 Last Step Minimax The bounding technique, developed using Lemma 1 and applied to KT can be used to prove bounds for a much broader class of prediction strategies. The crucial part of the KT proof was showing that each term in the telescoped regret (5) can be bounded above by ?t as defined in (6), in which all matrices share the same eigensystem, and which is hence equivalent to the corresponding classical expression. The only property of the prediction strategy that we used was that it plays in the eigensystem of the past sufficient statistic. Therefore, using the same line of argument, we can show that if for some classical prediction strategy we can obtain a meaningful regret bound by bounding each term in the regret ?t independently, we can obtain the same bound for the corresponding matrix strategy, i.e. its spectral promotion. In particular, we can push this argument to its limit by considering the algorithm designed to minimize ?t in each iteration. This algorithm is known as Last Step Minimax. In fact, the Last Step Minimax (LSM) principle is a general recipe for online prediction, which states that the algorithm should minimize the worst-case regret with respect to the next outcome [16]. In other words, it should act as the minimax algorithm given that the time horizon is one iteration ahead. In the classical case for the multinomial distribution, after observing data with sufficient statistic ?t?1 , classical LSM predicts with ( )  t n X X exp ?tH( ?t?1t +ei ) ? (7) ?t?1 := argmin max `(?, xt ) ? `(?t , xq ) = P ?t?1 +ej  ei . xt ? ) j exp ?tH( q=1 t i=1 | {z } | {z } ? ? log(?t?1,xt ) tH ( tt ) Classical LSM is analyzed in [16] for the Bernoulli (n = 2) case. For our straightforward generalization to the classical multinomial case, the regret is bounded by n?1 2 ln(T + 1) + 1. LSM is therefore slightly better than KT. Applying the Last Step Minimax principle to density prediction, we obtain matrix LSM which issues prediction:     St . Wt?1 := argmin max ? tr Xt log(W ) ? tH X t t W We show that matrix LSM learns the eigenvectors without additional regret. Theorem 3 (LSM FML). The regrets of classical and matrix LSM are at most n?1 2 ln(T + 1) + 1. Proof. We determine the form of Wt?1 . By Sion?s minimax theorem [15]:         St St min max ? tr Xt log(W ) ? tH = max min EP ? tr Xt log(W ) ? tH , P W Xt W t t where P ranges over probability distribution on density matrices Xt . Plugging in the minimizer W = EP [Xt ], the right hand side becomes:      St . (8) max H EP [Xt ] ? EP tH P t Pn Now decompose St?1 as i=1 ?i si s> i . Using Lemma 1, we can bound the second expression inside the maximum: " n     #   n X > X St St?1 + si s> St?1 + si s> i i EP tH ? 
EP t s i Xt s i H = t s> . i EP [Xt ] si H t t t i=1 i=1 5 On the other hand, we know that the entropy does not decrease we replace the argument Pn when > > EP [Xt ] by its pinching (a.k.a. projective measurement) (u E P [Xt ]ui ) ui ui w.r.t. any i i=1 eigensystem ui [12]. Therefore, we have: ! n X  > > H EP [Xt ] ? H (si EP [Xt ]si ) si si = H(p), i=1 where the last entropy is a classical entropy and p is a vector such that pi = s> i EP [Xt ]si . Combining those two results together, we have:      n X  St ?t?1 + ei H EP [Xt ] ? EP tH ? H(p) ? t . pi H t t i=1 Note that we have equality only when the distribution P puts nonzero mass only on the eigenvectors s1 , . . . , sn . This means that when p is fixed, we will maximize (8) by using a distribution with such a property, i.e. P is restricted to the eigensystem of St?1 . This, in turn, means that Wt?1 = EP [Xt ] will play in the eigensystem of St?1 asP well. It follows that Wt?1 is the classical LSM strategy in the eigensystem of St?1 , i.e. Wt?1 = i ?t?1,i si s> i , where ?t?1 are taken as in (7). The proof of the classical LSM guarantee is based on bounding the per-round regret increase:     ?t?1 + ext ?t?1 ?t := ? log(?t?1,xt ) ? tH + (t ? 1)H , t t?1 by choosing the worst case w.r.t. xt and ?t?1 . Since, for matrices, the worst case for the corresponding matrix version of ?t , see (6), is the diagonal case, the whole analysis immediately goes through and we get the same bound as for classical LSM. Note that the bound for LSM is not tight, i.e. there exists no data sequence for which the bound is achieved. Therefore, the bound for matrix LSM is also not tight. This theorem is a weaker FML because it only relates worst-case regret bounds. We have verified experimentally that the actual regrets coincide in dimension n = 2 for up to T = 5 outcomes, using a grid of 30 dyads per trial, with uniformly spaced (x> e1 )2 . So we believe that in fact Conjecture 1 (LSM FML). The worst-case regrets of classical and matrix LSM coincide. To execute the LSM matrix strategy, we need to have the eigendecomposition of the sufficient statistic. For density matrix data Xt , we may need to recompute it each trial in ?(n3 ) time. For dyadic 2 data xt x> t it can be incrementally updated in O(n ) per trial with methods along the lines of [11]. 3.4 Shtarkov Fix horizon T . The minimax algorithm for multinomials, due to Shtarkov [14], minimizes the worstcase regret T ?  X T . (9) inf sup . . . inf sup `(?t?1 , xt ) ? T H T ?0 x1 ?T ?1 xT t=1 After observing data with sufficient statistic ?t and hence with r := T ? t rounds remaining, classical Shtarkov predicts with n X ?r?1 (?t + ei ) ?t := ei ?r (?t ) i=1 where ?r (?) := X 1 ,...,cn Pcn i=1 ci =r r c1 , . . . , c n !   ? + c  exp ?T H . T (10)  The so-called Shtarkov sum ?r can be evaluated in time O n r log(r) using a straightforward extension of the method described in [9] for computing ?T (0), which is based on dynamic programming and Fast Fourier Transforms.  The regret of classical Shtarkov equals log ?T (0) ? n?1 log(T ) ? log(n ? 2) + 1 [6]. This is 2 again better than Last Step Minimax, which is in turn better than KT which dominates Laplace. 6 The minimax algorithm for density matrices, called matrix Shtarkov, optimizes the worst-case regret inf sup . . . inf sup W0 X1 WT ?1 XT T X  `(Wt?1 , Xt ) ? T H t=1 ST T  . 
(11) To this end, after observing data with sufficient statistic St , with r rounds remaining, it predicts with Wt := argmin sup `(W , X) + Rr?1 (St + X), W X where Rr is the tail sequence of inf/sups of (11) of length r. We now argue that the FML holds for matrix Shtarkov. Matrix Shtarkov is surprisingly difficult to analyze. However, we provide a simplifying conjecture that we verified experimentally. A rigorous proof remains an open problem. Our conjecture is that Lemma 1 holds with the entropy H replaced by the minimax regret tail Rr : Conjecture 2. For each integer r, for each pair of positive matrices A and B X a> Bai  i Rr (A + B) ? Rr A + tr(B) ai a> i . tr(B) i Note that this conjecture generalizes Lemma 1, which is retained as the case r = 0. It follows from this conjecture, using the same P argument as for LSM, that matrix Shtarkov predicts in the eigensystem of St , i.e. with Wt = i ?t,i si s> i , where ?t as in (10), and furthermore that Conjecture 3 (Shtarkov FML). The worst-case regrets of classical and matrix Shtarkov coincide. We have verified Conjecture 3 for the matrix Bernoulli case (n = 2) up to T = 5 outcomes, using a grid of 30 dyads per trial, with uniformly spaced (x> e1 )2 . Then assuming that Rr (S) = log(?(?)), where ? are the eigenvalues of S, for each n from 2 to 5 we drew 105 trace pairs uniformly from [0, 10], then drew matrix pairs A and B uniformly at random with those traces. Conjecture 2 always held. Obtaining the FML for the minimax algorithm is mathematically challenging and of academic interest but of minor practical relevance. First, the time horizon T must be specified in advance, so the minimax algorithm can not be used in a purely online fashion. Secondly, the running time is superlinear in the number of rounds remaining, while it is constant for the previous three algorithms. 4 Motivation and Discussion of the Loss Function The matrix entropic loss (1) that we choose as our loss function has a coding interpretation and it is a proper scoring rule. The latter seems to be a necessary condition for the free matrix lunch. Quantum coding Classical log-loss forecasting can be motivated from the point of view of data compression and variable-length coding [7]. In information theory, the Kraft-McMillan inequality states that, ignoring rounding issues, for every uniquely decodable code with a code length function ?, there is a probability distribution ? such that ?i = ? log ?i for all symbols i = 1, . . . , n, and vice versa. Therefore, the log loss can be interpreted as the code length assigned to the observed outcome. Quantum information theory[13, 5] generalizes variable length coding to the quantum/density matrix case. Instead of messages composed of bits, the sender and the receiver exchange messages described by density matrices, and the role analogous to the message length is now played by the dimension of the density matrix. Variable-length quantum coding requires the definition of a code length operator L, which is a positive matrix such that for any density matrix X, tr(XL) gives the expected dimension (?length?) of the message assigned to X. The quantum version of Kraft?s inequality states that, ignoring rounding issues, for every variable-length quantum code with codelength operator L, there exists a density matrix W such that L = ? log W . Therefore, the matrix entropic loss can be interpreted as the (expected) code length of the observed outcome. 
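As a small numerical companion to this discussion (a sketch in Python with NumPy; the function and variable names are our own), the matrix entropic loss (1) can be evaluated through the eigendecomposition of the prediction, and the matrix KT prediction of Section 3.2 is a one-line smoothing of the sufficient statistic. The final assertion checks the reduction to the classical log loss for a diagonal prediction and a basis-vector outcome, as noted in Section 2.

```python
import numpy as np

def matrix_entropic_loss(W, X):
    """loss(W, X) = -tr(X log W), with log the matrix logarithm (sketch).

    Assumes W is a strictly positive density matrix, so log W is defined.
    """
    lam, U = np.linalg.eigh(W)          # W = U diag(lam) U^T
    log_W = (U * np.log(lam)) @ U.T     # sum_i log(lam_i) u_i u_i^T
    return -np.trace(X @ log_W)

def matrix_kt(S, t, n):
    """Matrix Krichevsky-Trofimov prediction W_t = (S_t + I/2) / (t + n/2)."""
    return (S + 0.5 * np.eye(n)) / (t + 0.5 * n)

# Reduction check: diagonal W and outcome dyad e_2 e_2^T give -log(w_2).
w = np.array([0.5, 0.3, 0.2])
X = np.outer(np.eye(3)[1], np.eye(3)[1])
assert np.isclose(matrix_entropic_loss(np.diag(w), X), -np.log(w[1]))
```

Because both predictions play in the eigensystem of the sufficient statistic, the eigendecomposition in the loss and the one in the update can be shared, which is what makes the $O(n^2)$ per-trial updates mentioned in Section 3 possible for dyadic data.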
Proper score function In decision theory, the loss function `(?, x) assessing the quality of predictions is also referred to as a score function. A score function is said to be proper, if for any distribution p on outcomes, the expected loss is minimized by predicting with p itself, i.e. argmin? Ex?p [`(?, x)] = p. Minimization of a proper score function leads to well-calibrated forecasting. The log loss is known to be a proper score function [4]. 7 We will say that a matrix loss function `(W , X) is proper if for any distribution P on density matrix outcomes, the expected loss with respect to P is minimized by predicting with the mean outcome of P , i.e. argminW EX?P [`(W , X)] = EX?P [X].  The matrix entropic loss (1) is proper, for EX?P [? tr(X log W )] = ? tr EX?P [X] log W is minimized at W = EX?P [X] [12]. Therefore, minimization of the matrix entropic loss leads to well-calibrated forecasting, as in the classical case. A second generalization of the log loss to the matrix domain used in quantum physics [12] is the log trace loss `(W , X) := ? log tr(XW ) . Note that here the trace and the logarithm are exchanged compared to (1). The expression tr(XW ) plays an important role in quantum physics as the expected value of a measurement outcome, and for X = xx> , tr(xx> W ) is interpreted as a probability. However, log trace loss is not proper. The counterexample is straightforward: > if we take P uniform on {x1 x> 1 , x2 x2 }, then the minimizer of the expected log trace loss is > W ? (x1 + x2 )(x1 + x2 )> , which differs from EX?P [X] = 12 (x1 x> 1 + x2 x2 ). Also for log trace loss we found an example (not presented) against the FML for the minimax algorithm.  A third generalization of the loss is `(W , X) := ? log tr(X W ) , where denotes the commutative ?product? between matrices that underlies the probability calculus of [20].3 This loss upper bounds the log trace loss. We don?t know whether it is a proper scoring function. However, it equals the matrix entropic loss when X is a dyad. Finally, another loss explored in the on-line learning community is the trace loss `(W , X) := tr(W X). This loss is not a proper scoring function (it behaves like the absolute loss in the vector case) and we have an example that shows that there is no FML for the minimax algorithm in this case (not presented). In summary, for there to exist a FML, properness of the loss function seems to be required. 5 Conclusion We showed that the free matrix lunch holds for the matrix version of the KT estimator. Thus the conjectured free matrix lunch [18] is realized. Our paper raises many open questions. Perhaps the main one is whether the free matrix lunch holds for the matrix minimax algorithm. Also we would like to know what properties of the loss function and algorithm cause the free matrix lunch to occur. From the examples given in this paper it is tempting to believe that you always get a free matrix lunch when upgrading any classical sufficient-statistics-based predictor to a matrix version by just playing this predictor in the eigensystem of the current matrix sufficient statistics. However the following counter example shows that a general reduction must be more subtle: Consider floored KT, which predicts with ?t,i ? b?t,i c + 1/2. For T = 5 trials in dimension n = 2, the worst-case regret is 1.297 for the classical log loss and 1.992 for matrix entropic loss. A Proof of Lemma 1 We prove the following slightly stronger inequality for all ? ? 0. The lemma is the case ? = 1. f (?) := H(A + ?B) ? 
$$f(\gamma) := H(A + \gamma B) \;-\; \sum_{i=1}^n \frac{a_i^\top B a_i}{\operatorname{tr}(B)} \, H\big(A + \gamma \operatorname{tr}(B)\, a_i a_i^\top\big) \;\ge\; 0.$$

Since $f(0) = 0$, it suffices to show that $f'(\gamma) \ge 0$. Since $\frac{\partial H(D)}{\partial D} = -\log(D) - I$,
$$f'(\gamma) = -\operatorname{tr}\big(B \log(A + \gamma B)\big) + \sum_{i=1}^n a_i^\top B a_i \; \operatorname{tr}\!\Big(a_i a_i^\top \log\big(A + \gamma \operatorname{tr}(B)\, a_i a_i^\top\big)\Big)$$
$$= \operatorname{tr}\big(B \log(A + \gamma \operatorname{tr}(B) I)\big) - \operatorname{tr}\big(B \log(A + \gamma B)\big),$$
where the constant $-\operatorname{tr}(B)$ terms contributed by the two derivatives cancel. Since $\operatorname{tr}(B) I \succeq B$, we have $A + \gamma \operatorname{tr}(B) I \succeq A + \gamma B$, and hence the matrix monotonicity of the logarithm implies that $\log(A + \gamma \operatorname{tr}(B) I) \succeq \log(A + \gamma B)$, so that $f'(\gamma) \ge 0$.

³ We can compute $A \odot B$ as the matrix exponential of the sum of the matrix logarithms of $A$ and $B$.

References

[1] S. Arora, E. Hazan, and S. Kale. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In FOCS, pages 339-348, 2005.
[2] S. Arora and S. Kale. A combinatorial, primal-dual approach to semidefinite programs. In STOC, pages 227-236, 2007.
[3] K. S. Azoury and M. K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211-246, 2001.
[4] J. M. Bernardo and A. F. M. Smith. Bayesian Theory. Wiley, 1994.
[5] K. Bostroem and T. Felbinger. Lossless quantum data compression and variable-length coding. Phys. Rev. A, 65(3):032313, 2002.
[6] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[7] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 1991.
[8] R. Jain, Z. Ji, S. Upadhyay, and J. Watrous. QIP = PSPACE. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC, pages 573-582, 2010.
[9] P. Kontkanen and P. Myllymäki. A fast normalized maximum likelihood algorithm for multinomial data. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05), pages 1613-1616, 2005.
[10] R. E. Krichevsky and V. K. Trofimov. The performance of universal encoding. IEEE Transactions on Information Theory, 27(2):199-207, Mar. 1981.
[11] J. T. Kwok and H. Zhao. Incremental eigen decomposition. In Proc. ICANN, pages 270-273, 2003.
[12] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
[13] B. Schumacher and M. D. Westmoreland. Indeterminate-length quantum coding. Phys. Rev. A, 64(4):042304, 2001.
[14] Y. M. Shtarkov. Universal sequential coding of single messages. Problems of Information Transmission, 23(3):3-17, 1987.
[15] M. Sion. On general minimax theorems. Pacific Journal of Mathematics, 8(1):171-176, 1958.
[16] E. Takimoto and M. Warmuth. The last-step minimax algorithm. In Proceedings of the 13th Annual Conference on Computational Learning Theory, pages 100-106, 2000.
[17] K. Tsuda, G. Rätsch, and M. K. Warmuth. Matrix exponentiated gradient updates for on-line learning and Bregman projections. Journal of Machine Learning Research, 6:995-1018, June 2005.
[18] M. K. Warmuth. When is there a free matrix lunch? In Proc. of the 20th Annual Conference on Learning Theory (COLT '07). Springer-Verlag, June 2007. Open problem.
[19] M. K. Warmuth and D. Kuzmin. Online variance minimization. In Proceedings of the 19th Annual Conference on Learning Theory (COLT '06), Pittsburgh, June 2006. Springer-Verlag.
[20] M. K. Warmuth and D. Kuzmin. Bayesian generalized probability calculus for density matrices. Journal of Machine Learning, 78(1-2):63-101, January 2010.
EigenNet: A Bayesian hybrid of generative and conditional models for sparse learning

Feng Yan
Computer Science Dept., Purdue University
West Lafayette, IN 47907, USA

Yuan Qi
Computer Science and Statistics Depts., Purdue University
West Lafayette, IN 47907, USA

Abstract

For many real-world applications, we often need to select correlated variables, such as genetic variations and imaging features associated with Alzheimer's disease, in a high-dimensional space. The correlation between variables presents a challenge to classical variable selection methods. To address this challenge, the elastic net has been developed and successfully applied to many applications. Despite its great success, the elastic net does not exploit the correlation information embedded in the data to select correlated variables. To overcome this limitation, we present a novel hybrid model, EigenNet, that uses the eigenstructures of data to guide variable selection. Specifically, it integrates a sparse conditional classification model with a generative model capturing variable correlations in a principled Bayesian framework. We develop an efficient active-set algorithm to estimate the model via evidence maximization. Experimental results on synthetic data and imaging-genetics data demonstrate the superior predictive performance of the EigenNet over the lasso, the elastic net, and automatic relevance determination.

1 Introduction

In this paper we consider the problem of selecting correlated variables in a high-dimensional space. Among the many variable selection methods, the lasso and the elastic net are two popular choices (Tibshirani, 1994; Zou and Hastie, 2005). The lasso uses an $\ell_1$ regularizer on the model parameters. This regularizer shrinks the parameters towards zero, removing irrelevant variables and yielding a sparse model (Tibshirani, 1994). However, the $\ell_1$ penalty may lead to over-sparsification: given many correlated variables, the lasso often selects only a few of them. This not only degrades its prediction accuracy but also affects the interpretability of the estimated model. For example, based on high-throughput biological data such as gene expression and RNA-seq data, it is highly desirable to select multiple correlated genes associated with a phenotype, since this may reveal underlying biological pathways. Due to its over-sparsification, the lasso may not be suitable for this task. To address this issue, the elastic net has been developed to encourage a grouping effect, where strongly correlated variables tend to be in or out of the model together (Zou and Hastie, 2005). However, the grouping effect is just the result of its composite $\ell_1$ and $\ell_2$ regularizer; the elastic net does not explicitly incorporate correlation information among variables in its model.

In this paper, we propose a new sparse Bayesian hybrid model that utilizes the eigen-information extracted from data for the selection of correlated variables. Specifically, it integrates a sparse conditional classification model with a generative model in a principled Bayesian framework (Lasserre et al., 2006): the conditional model achieves sparsity via automatic relevance determination (ARD) (MacKay, 1991), an empirical Bayesian approach for model sparsification; and the generative model is a latent variable model in which the observations are the eigenvectors of the unlabeled data, capturing correlations between variables.
By integrating these two models, the hybrid model enables the identification of groups of correlated variables, guided by the eigenstructures. At the same time, the model passes information from its conditional part to its generative part, selecting informative eigenvectors for the classification task. Furthermore, using the Bayesian hybrid model, we can automate the estimation of the model hyperparameters.

From the regularization perspective, the new hybrid model naturally generalizes the elastic net using a composite regularizer adaptive to the data eigenstructures. It contains a sparsity regularizer and a directional regularizer that encourages selecting variables associated with eigenvectors chosen by the model. When the variables are independent of each other, the eigenvectors are parallel to the axes and this composite regularizer reduces to the combination of the ARD and an $\ell_2$ regularizer (similar to the composite regularizer of the elastic net). But when some of the input variables are strongly correlated, the regularizer will encourage a classifier aligned with the eigenvectors selected by the model. On one hand, our model is like the elastic net in retaining "all the big fish". On the other hand, our model differs from the elastic net through the guidance from the eigen-information; hence the name EigenNet.

Experiments on synthetic data are presented in Section 5. Our results demonstrate that the EigenNet significantly outperforms the lasso and the elastic net in terms of prediction accuracy. We applied this new approach to two tasks in imaging genetics: i) predicting cognitive function of healthy subjects and AD patients based on brain imaging markers, and ii) classifying healthy and AD subjects based on single-nucleotide polymorphism (SNP) data. Compared to the lasso, the elastic net, and the ARD, our approach achieves improved prediction accuracy.

2 Background: lasso and elastic net

We denote $n$ independent and identically distributed samples as $\mathcal{D} = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n)\}$, where $\mathbf{x}_i$ is a $p$-dimensional vector of input features (i.e., explanatory variables) and $y_i$ is a scalar label (i.e., response). We also denote $[\mathbf{x}_1, \ldots, \mathbf{x}_n]$ by $\mathbf{X}$ and $(y_1, \ldots, y_n)$ by $\mathbf{y}$. Although our presentation focuses on the binary classification problem ($y_i \in \{-1, 1\}$), our approach can be readily applied to other problems, such as regression and survival analysis, by choosing appropriate likelihood functions. For classification, we use a probit model as the data likelihood:
$$p(\mathbf{y}|\mathbf{X}, \mathbf{w}) = \prod_{i=1}^n \Phi(y_i \mathbf{w}^\top \mathbf{x}_i) \qquad (1)$$
where $\Phi(z)$ is the Gaussian cumulative distribution function and $\mathbf{w}$ denotes the classifier.

To identify relevant variables for high-dimensional problems, the lasso (Tibshirani, 1994) uses an $\ell_1$ penalty, effectively shrinking the entries of $\mathbf{w}$ towards zero and pruning irrelevant variables. In a probabilistic framework this penalty corresponds to a Laplace prior distribution:
$$p(\mathbf{w}) = \prod_j \lambda \exp(-\lambda |w_j|) \qquad (2)$$
where $\lambda$ is a hyperparameter that controls the sparsity of the estimated model: the larger the hyperparameter $\lambda$, the sparser the model. As described in Section 1, the lasso may over-penalize relevant variables and hurt its predictive performance, especially when there are strongly correlated variables. To address this issue, the elastic net (Zou and Hastie, 2005) combines $\ell_1$ and $\ell_2$ regularizers to avoid the over-penalization. The combined regularizer corresponds to the following prior distribution: $p(\mathbf{w}) \propto \prod_j \exp(-\lambda_1 |w_j| - \lambda_2 w_j^2)$, where $\lambda_1$ and $\lambda_2$ are hyperparameters.
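As a quick, self-contained illustration of the grouping effect just discussed (our own toy, with arbitrary regularization settings, not an experiment from this paper), the following fits scikit-learn's lasso and elastic net to three nearly identical copies of one underlying signal; the lasso typically concentrates its weight on a single copy, while the elastic net spreads weight across the group:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(1)
n = 100
z = rng.standard_normal(n)
# three strongly correlated copies of the same latent signal z
X = np.column_stack([z + 0.05 * rng.standard_normal(n) for _ in range(3)])
y = z + 0.1 * rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print("lasso coefficients:", np.round(lasso.coef_, 3))  # typically one dominant entry
print("enet  coefficients:", np.round(enet.coef_, 3))   # weight shared over the group
```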
While it is well known that the elastic net tends to select strongly correlated variables together, it does not use the correlation information embedded in the unlabeled data; the selection of correlated variables is merely the byproduct of a less aggressive sparsity regularizer. Besides the elastic net, there are many variants of (and extensions to) the lasso, such as the bridge (Frank and Friedman, 1993) and smoothly clipped absolute deviation (Fan and Li, 2001). These variants modify the $\ell_1$ penalty to improve variable selection, but they do not explicitly use the correlation information embedded in the data.

Figure 1: Toy examples. (a) When the variables $x_1$ and $x_2$ are independent of each other, both the lasso and the EigenNet select only $x_1$. (b) When the variables $x_1$ and $x_2$ are correlated, the lasso selects only one variable. By contrast, guided by the major eigenvector of the data, the EigenNet selects both variables.

3 EigenNet: eigenstructure-guided variable selection

In this section, we propose to use the covariance structure of the data to guide the sparse estimation of the model parameters. First, let us consider the following toy examples.

3.1 Toy examples

Figure 1(a) shows samples from two classes. Clearly the variables $x_1$ and $x_2$ are not correlated, and the lasso or the elastic net can successfully select the relevant variable $x_1$ to classify the data. For the samples in Figure 1(b), the variables $x_1$ and $x_2$ are strongly correlated. Despite the strong correlation, the lasso would select only $x_1$ and ignore $x_2$. The elastic net may select both $x_1$ and $x_2$ if the regularization weight $\lambda_1$ is small and $\lambda_2$ is big, so that the elastic net behaves like an $\ell_2$-regularized classifier. The elastic net, however, does not exploit the fact that $x_1$ and $x_2$ are correlated.

Since the eigenstructure of the data covariance matrix captures correlation information between variables, we propose to not only regularize the classifier to be sparse, but also encourage it to be aligned with certain eigenvector(s) that are helpful for the classification task. Note that although the classical Fisher linear discriminant also uses the data covariance matrix to learn the classifier, it generally does not provide a sparse solution, and is thus not suitable for the task of selecting correlated variables and removing irrelevant ones. For the data in Figure 1(a), since the two eigenvectors are parallel to the horizontal and vertical axes, the EigenNet essentially reduces to the elastic net and selects $x_1$. For the data in Figure 1(b), the principal eigenvector can guide the EigenNet to select both $x_1$ and $x_2$. The minor eigenvector is, however, not useful for the classification task (in general, we need to select which eigenvectors are relevant to classification). We use a Bayesian framework to materialize the above ideas, as described in the following section.

3.2 Bayesian hybrid of conditional and generative models

The EigenNet is a hybrid of conditional and generative models. The conditional component allows us to learn the classifier via "discriminative" training; the generative component captures the correlations between variables; and the two models are glued together via a joint prior distribution, so that the correlation information is used to guide the estimation of the classifier, and the classification task is used to choose or scale the relevant eigenvectors.
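Before turning to the model itself, here is a minimal numerical version of the Figure 1(b) intuition (our own sketch; parameters are arbitrary): for two strongly correlated features, the leading eigenvector of the sample covariance loads on both features almost equally, which is exactly the direction the EigenNet's generative component will encourage the classifier to follow.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 500, 0.9
x1 = rng.standard_normal(n)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # corr(x1, x2) ~ 0.9
X = np.column_stack([x1, x2])

eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
print("principal eigenvector:", eigvecs[:, -1])  # ~ (1, 1)/sqrt(2), up to sign
```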
Our approach is based on the general Bayesian framework proposed by Lasserre et al. (2006), which allows one to combine conditional and generative models in an elegant, principled way. Specifically, for the conditional model we have the same likelihood as (1), $p(\mathbf{y}|\mathbf{X}, \mathbf{w}) = \prod_i \Phi(y_i \mathbf{w}^\top \mathbf{x}_i)$. For the classifier $\mathbf{w}$, we use a Gaussian prior: $p(\mathbf{w}) = \prod_{j=1}^p \mathcal{N}(w_j | 0, \alpha_j^{-1})$. We will describe later how to efficiently learn the precision parameters $\alpha_j$ from the data to obtain a sparse classifier.

To encourage a classifier aligned with certain eigenvectors, we introduce $\tilde{\mathbf{w}}$, a latent vector (tightly) linked to the classifier $\mathbf{w}$, in the generative model:
$$p(\mathbf{V}|\mathbf{s}, \tilde{\mathbf{w}}) = \prod_{j=1}^m \mathcal{N}\big(\mathbf{v}_j \,\big|\, s_j \tilde{\mathbf{w}},\, (\beta_v \lambda_j)^{-1} \mathbf{I}\big) \qquad (3)$$
where $\mathbf{v}_j$ and $\lambda_j$ are the $j$-th eigenvector and eigenvalue of the data covariance matrix, $\beta_v$ is a hyperparameter, and $\mathbf{s} = [s_1, \ldots, s_m]$ are scaling factors for the parameter $\tilde{\mathbf{w}}$. To combat overfitting, we assign a Gamma prior $\mathrm{Gam}(\beta_v | c_0, d_0)$ over $\beta_v$. Note that this generative model encourages $\tilde{\mathbf{w}}$ to align with the major eigenvectors, those with bigger eigenvalues. However, eigenvectors are noisy and not all of them are relevant to the classification task; we need to select the relevant eigenvectors (i.e., the relevant sub-eigenspace) and remove the irrelevant ones.

Figure 2: The graphical model of the EigenNet.

To enable the selection of the relevant eigenvectors, we assign a Laplace prior on $s_j$:
$$p(\mathbf{s}) \propto \prod_{j=1}^m \lambda_s \exp(-\lambda_s |s_j|) \qquad (4)$$
where $\lambda_s$ is a hyperparameter. Finally, to link the conditional and generative models together, we use a prior for $\tilde{\mathbf{w}}$ conditional on $\mathbf{w}$:
$$p(\tilde{\mathbf{w}}|\mathbf{w}) = \mathcal{N}(\tilde{\mathbf{w}}|\mathbf{w}, r\mathbf{I}) \qquad (5)$$
Note that the variance parameter $r$ controls how similar $\mathbf{w}$ and $\tilde{\mathbf{w}}$ are in our joint model. For simplicity, we set $r = 0$ here, so that $p(\tilde{\mathbf{w}}|\mathbf{w}) = \delta(\tilde{\mathbf{w}} - \mathbf{w})$, a point mass at $\tilde{\mathbf{w}} = \mathbf{w}$. The graphical model representation of the EigenNet is given in Figure 2.

3.3 Model estimation

In this section we present how to estimate the model based on an empirical Bayesian approach. Specifically, we will use expectation propagation (EP) (Minka, 2001) to estimate the posterior of the classifier $\mathbf{w}$ (and $\tilde{\mathbf{w}}$), and optimize the marginal likelihood of the joint model over the scaling variables $\mathbf{s}$ and the precision parameters $\boldsymbol{\alpha}$.

First, given the hyperparameter $\beta_v$ and the latent variables $\mathbf{s}$, the posterior distribution of $\mathbf{w}$ is
$$p(\mathbf{w}|\mathbf{y}, \mathbf{X}, \mathbf{s}, \beta_v) \propto \mathcal{N}(\mathbf{w}|\mathbf{0}, \operatorname{diag}(\boldsymbol{\alpha})^{-1}) \prod_i \Phi(y_i \mathbf{w}^\top \mathbf{x}_i) \prod_j \mathcal{N}(\mathbf{v}_j | s_j \mathbf{w}, (\beta_v \lambda_j)^{-1}\mathbf{I}) \qquad (6)$$
$$\propto \mathcal{N}(\mathbf{w}|\mathbf{m}_p, \mathbf{V}_p) \prod_i \Phi(y_i \mathbf{w}^\top \mathbf{x}_i) \qquad (7)$$
where $\mathbf{V}_p = \big(\operatorname{diag}(\boldsymbol{\alpha}) + \beta_v \sum_j \lambda_j s_j^2 \mathbf{I}\big)^{-1}$ and $\mathbf{m}_p = \beta_v \mathbf{V}_p \sum_j \lambda_j s_j \mathbf{v}_j$. We then initialize the EP updates with $p(\mathbf{w}) = \mathcal{N}(\mathbf{w}|\mathbf{m}_p, \mathbf{V}_p)$ and iteratively approximate each likelihood factor $\Phi(y_i \mathbf{w}^\top \mathbf{x}_i)$ by a factor of Gaussian form, $\mathcal{N}(t_i | \mathbf{x}_i^\top \mathbf{w}, h_i^{-1})$. In other words, EP maps the nonlinear, non-Gaussian factor to a Gaussian factor with virtual observation $t_i$ and noise variance $h_i^{-1}$. After the convergence of EP, we obtain both the mean $\mathbf{m}_w$ and the covariance $\mathbf{V}_w$.

Given the approximate posterior $q(\mathbf{w})$, we maximize the variational lower bound over $\beta_v$:
$$\mathcal{L}(\beta_v) = \mathbb{E}_{q_\mathbf{w}}\Big[\sum_j \ln \mathcal{N}(\mathbf{v}_j | s_j \mathbf{w}, (\beta_v \lambda_j)^{-1}\mathbf{I}) + \ln \mathrm{Gam}(\beta_v | c_0, d_0)\Big]$$
$$= \frac{pm}{2} \ln \beta_v - \frac{F}{2}\beta_v + (c_0 - 1)\ln \beta_v - d_0 \beta_v + \text{constant} \qquad (8)$$
where
$$F = \sum_j \lambda_j - 2\Big(\sum_j \lambda_j s_j \mathbf{v}_j\Big)^{\!\top} \mathbf{m}_w + \sum_j \lambda_j s_j^2 \sum_i \big((\mathbf{m}_w)_i^2 + (\mathbf{V}_w)_{i,i}\big).$$
As a result, we have
$$\beta_v = \frac{c_0 - 1 + pm/2}{d_0 + F/2}. \qquad (9)$$

Similarly, we maximize the variational lower bound over $\mathbf{s}$:
$$\mathcal{L}(\mathbf{s}) = \mathbb{E}_{q_\mathbf{w}}\Big[\sum_j \ln \mathcal{N}(\mathbf{v}_j | s_j \mathbf{w}, (\beta_v \lambda_j)^{-1}\mathbf{I})\Big] - \lambda_s \sum_j |s_j| + \text{constant}. \qquad (10)$$
Consequently, for each $j$: if $|\mathbf{v}_j^\top \mathbf{m}_w| < \lambda_s/(\lambda_j \beta_v)$, then $s_j = 0$; otherwise,
$$s_j = \mathrm{Sign}(\mathbf{v}_j^\top \mathbf{m}_w)\, \frac{|\mathbf{v}_j^\top \mathbf{m}_w| - \lambda_s/(\lambda_j \beta_v)}{\sum_i \big((\mathbf{m}_w)_i^2 + (\mathbf{V}_w)_{i,i}\big)}. \qquad (11)$$

To estimate $\boldsymbol{\alpha}$, we develop an active-set method that iteratively maximizes the model marginal likelihood over the elements of $\boldsymbol{\alpha}$. In particular, we use a strategy similar to Tipping and Faul (2003)'s approach: given the approximation factors $\mathcal{N}(\mathbf{t}|\mathbf{X}^\top \mathbf{w}, \operatorname{diag}(\mathbf{h})^{-1})$, the distribution over eigenvectors $\mathcal{N}(\mathbf{v}_j | s_j \mathbf{w}, (\beta_v \lambda_j)^{-1}\mathbf{I})$, and the prior distribution $\mathcal{N}(\mathbf{w}|\mathbf{0}, \operatorname{diag}(\boldsymbol{\alpha})^{-1})$, we can compute and decompose the log marginal likelihood $\mathcal{L}(\boldsymbol{\alpha}) = \log p(\mathbf{y}|\mathbf{X}, \mathbf{s}, \beta_v)$ into two parts, $\mathcal{L}(\alpha_j)$ and $\mathcal{L}(\boldsymbol{\alpha}_{\setminus j})$, where $j$ and $\setminus j$ index the elements of $\boldsymbol{\alpha}$ in the active set and the remaining elements, respectively. Note that because the effective prior over $\mathbf{w}$ becomes $\mathcal{N}(\mathbf{w}|\mathbf{m}_p, \mathbf{V}_p)$ as in (7), instead of the zero-mean prior $\mathcal{N}(\mathbf{w}|\mathbf{0}, \operatorname{diag}(\boldsymbol{\alpha})^{-1})$, we cannot apply the algorithm proposed by Tipping and Faul (2003) directly. Instead, we decompose $\mathcal{L}(\boldsymbol{\alpha})$ into $\mathcal{L}(\alpha_j)$ and $\mathcal{L}(\boldsymbol{\alpha}_{\setminus j})$ as follows. First let us define
$$U_j = \mathbf{t}^\top \operatorname{diag}(\mathbf{h})\, \mathbf{x}^j + \beta_v \sum_{k=1}^m \lambda_k s_k v_{kj} - \mathbf{b}^\top \mathbf{m}_w, \qquad R_j = (\mathbf{x}^j)^\top \operatorname{diag}(\mathbf{h})\, \mathbf{x}^j + \beta_v \sum_{k=1}^m \lambda_k s_k^2 - \mathbf{b}^\top \mathbf{V}_w \mathbf{b},$$
$$r_j = \frac{\alpha_j R_j}{\alpha_j - R_j}, \qquad u_j = \frac{\alpha_j U_j}{\alpha_j - R_j}, \qquad (12)$$
where $\mathbf{b} = \mathbf{X}_a^\top \operatorname{diag}(\mathbf{h})\, \mathbf{x}^j + \beta_v \mathbf{e}_{aj} \sum_{k=1}^m \lambda_k s_k^2$, $\mathbf{x}^j$ is the $j$-th column of the data matrix $\mathbf{X}$, $v_{kj}$ is the $j$-th element of the vector $\mathbf{v}_k$, $\mathbf{X}_a$ contains the columns of $\mathbf{X}$ associated with the currently selected features (indexed by $a$), and $\mathbf{e}_{aj}$ contains the $a$-th elements of the $j$-th row of the identity matrix. Then we have
$$\mathcal{L}(\boldsymbol{\alpha}) = \mathcal{L}(\boldsymbol{\alpha}_{\setminus j}) + \frac{1}{2}\Big(\ln \alpha_j - \ln(\alpha_j + r_j) + \frac{u_j^2}{\alpha_j + r_j}\Big),$$
where $\mathcal{L}(\boldsymbol{\alpha}_{\setminus j})$ does not depend on $\alpha_j$. Therefore, we can directly optimize over $\alpha_j$ without updating $\boldsymbol{\alpha}_{\setminus j}$. Setting the gradient of $\mathcal{L}(\boldsymbol{\alpha})$ with respect to $\alpha_j$ to zero, we obtain the following optimality condition:
$$\text{if } u_j^2 \ge r_j,\quad \alpha_j = \frac{r_j^2}{u_j^2 - r_j}; \qquad \text{if } u_j^2 < r_j,\quad \alpha_j = \infty. \qquad (13)$$
In the latter case we remove the $j$-th feature if it is currently in the model.

These active-set updates are very efficient, because during each iteration we only deal with a reduced model defined on the currently selected features. This approach significantly reduces the computational cost of EP from $O(np^2)$ to $O(nl^2)$, where $l$ is the biggest model size encountered during the active-set iterations. The empirical Bayesian estimation procedure of the EigenNet is summarized in Algorithm 1.

Algorithm 1: The empirical Bayesian estimation algorithm
1. Initialize the model to contain a small fraction of the features and initialize the parameters: $\mathbf{s} = \mathbf{0}$, $\beta_v = 1$, $\mathbf{t} = \mathbf{0}$, $\mathbf{h} = \boldsymbol{\infty}$.
2. Run EP to obtain the initial mean and covariance $\mathbf{m}_w$ and $\mathbf{V}_w$.
3. Loop until convergence or until reaching the maximum number of iterations:
4. Loop over the $j$-th active set:
   a. Update $\alpha_j$ via (12) and (13).
   b. If $u_j^2 < r_j$, remove the features in the $j$-th active set from the model.
   c. Update the posterior mean $\mathbf{m}_w$ and covariance $\mathbf{V}_w$ based on EP.
   d. Optimize the precision parameter $\beta_v$ via (9).
   e. Optimize the scaling factors $\mathbf{s}$ via (11).
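To make the pruning logic of (12)-(13) concrete, here is a minimal sketch of the single-precision update (our own simplification; the function and variable names are ours, not the authors' code). It assumes $U_j$ and $R_j$ have already been computed from the current EP posterior as in (12), and it omits the EP refit in steps (c)-(e) of Algorithm 1.

```python
import numpy as np

def update_alpha(U_j, R_j, alpha_j):
    """One active-set update of the ARD precision alpha_j, following Eqs. (12)-(13).
    Returns the new alpha_j; np.inf means feature j is pruned from the model."""
    if np.isinf(alpha_j):            # feature j currently outside the model:
        r_j, u_j = R_j, U_j          # the alpha_j -> inf limit of Eq. (12)
    else:                            # discount feature j's own contribution
        r_j = alpha_j * R_j / (alpha_j - R_j)
        u_j = alpha_j * U_j / (alpha_j - R_j)
    if u_j**2 > r_j:                 # evidence supports keeping (or adding) feature j
        return r_j**2 / (u_j**2 - r_j)
    return np.inf                    # Eq. (13), second case: prune feature j
```

In a full implementation this would sit inside step (a) of Algorithm 1, with $U_j$ and $R_j$ recomputed after every EP refit.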
Figure 3: Visualization of the lasso, the elastic net, the EigenNet, and the true classifier weights. We used 80 training samples with 40 features. The test error rates of the lasso, the elastic net, and the EigenNet on 2000 test samples are 0.297, 0.245, and 0.137, respectively.

4 Related work

The EigenNet is related to the classical eigenface approaches (Turk and Pentland, 1991; Sirovich and Kirby, 1987). The eigenface approach learns a model in the subspace spanned by the major eigenvectors of the data covariance matrix. The EigenNet also uses the eigen-subspace to guide model estimation. However, unlike the eigenface approach, the EigenNet adaptively selects eigenvectors and learns a sparse classifier.

There are Bayesian versions of the lasso and the elastic net. Bayesian lasso (Park and Casella, 2008) puts a hyper-prior on the regularization coefficient and uses a Gibbs sampler to jointly sample both the regression weights and the regularization coefficient. Using a similar treatment to Bayesian lasso, the Bayesian elastic net (Li and Lin, 2010) samples the two regularization coefficients simultaneously, potentially avoiding the "double shrinkage" problem described in the original elastic net paper (Zou and Hastie, 2005). Like the EigenNet, these methods are grounded in a Bayesian framework, sharing the benefit of obtaining posterior distributions for handling estimation uncertainty. However, Bayesian lasso and Bayesian elastic net are presented for regression problems (though they can certainly be generalized to classification problems) and do not use the eigen-information embedded in the data. The EigenNet, by contrast, selects the eigen-subspace and uses it to guide classification.

Group lasso (Jacob et al., 2009) enforces sparsity on groups of predictors: an entire group of correlated predictors may be retained or pruned together. However, applying the idea of group lasso to the EigenNet faces several difficulties. First, this approach will not give (approximately) sparse classifiers unless we truncate the eigenvectors; if we use truncation, we need to decide what threshold to use for each eigenvector, itself a difficult task. Second, it would be hard to tune all the regularization coefficients associated with all the major eigenvectors; cross-validation would not suffice. By contrast, our classifier is sparse because of the ARD effect. More importantly, the latent variables $s_j$ in our model are automatically estimated from the data, deciding how important each eigenvector is for the classification task in a principled Bayesian framework.

5 Experimental results

We evaluated the new sparse Bayesian model, the EigenNet, on both synthetic and real data, and compared it with three representative variable selection methods: the lasso, the elastic net, and an ARD approach (Qi et al., 2004). For the lasso and the elastic net, we used the Glmnet software package, which uses cyclical coordinate descent in a pathwise fashion.¹ Like the EigenNet, the ARD approach also uses EP to approximate the model marginal likelihood. For the lasso and the elastic net, we used cross-validation to tune the hyperparameters; for the EigenNet, we estimated $\beta_v$ from the data and tuned $\lambda_s$ by cross-validation.

¹ http://www-stat.stanford.edu/~tibs/glmnet-matlab/

Figure 4: Predictive performance on synthetic datasets, plotted against the number of training examples. (a) and (b): classification (test error rate) on independent and correlated features, respectively; (c) and (d): regression (RMSE) on independent and correlated features, respectively. The results were averaged over 10 runs. For the data with independent features, the EigenNet outperforms the alternative methods when the number of training samples is small; for data with correlated features, the EigenNet outperforms the alternative methods consistently.
5.1 Visualization of estimated classifiers

First, we tested these methods on synthetic data that contain correlated features. We sampled 40-dimensional data points, each of which contains two groups of correlated variables; the correlation coefficient between variables in each group is 0.81, and there are 4 variables in each group. We set the values of the classifier weights to 5 in one group and to -5 in the other, and generated the bias term randomly from a standard Gaussian distribution. We set the number of training points to 80. Figure 3 shows the estimated classifiers and the true classifier used to produce the data labels. Unlike the lasso and the elastic net, the EigenNet clearly identifies the two groups of correlated variables, coming very close to the ground truth. As a result, on 2000 test points the EigenNet achieves the lowest prediction error rate, 0.137, while the test error rates of the lasso and the elastic net are 0.297 and 0.245, respectively.

5.2 Experiments on synthetic data

Next we systematically compared these methods for classification and regression on synthetic datasets containing correlated features and containing independent features. (Although the presentation so far has focused on classification, we can easily implement the EigenNet for regression; since we can compute the marginal likelihood exactly, the EP approximation is not needed for regression.) To generate data with correlated variables we used a procedure similar to that of the visualization example: we sampled 40-dimensional data points, each of which contains two groups of correlated variables, with correlation coefficient 0.81 within each group and 4 variables per group. However, unlike the previous example, where the classifier weights are the same for the correlated variables, here we set the weights within the same group to have the same sign but different random values. We varied the number of training points from 10 to 80 and tested all the methods. For the datasets with independent features, we followed the same procedure except that the features are independently sampled. We ran the experiments 10 times.

Figure 4 shows the results averaged over 10 runs. We do not report the standard errors, since they are very small. For the datasets with independent features, the EigenNet outperforms the alternative methods when the number of training examples is small (probably because in this case the eigenspace has a smaller dimension than that of the classifier, effectively controlling the model flexibility); with more training examples, it is unsurprising to see all the methods perform quite similarly. For the data with correlated features, although the results of the elastic net appear to overlap with those of the lasso in the figure, the elastic net often outperforms the lasso by a small margin; the EigenNet, in turn, consistently and significantly outperforms both the lasso and the elastic net. The improved predictive performance of the EigenNet reflects the benefit of using the valuable correlation information to aid model estimation.

5.3 Application to imaging genetics

Imaging genetics is an emerging research area where imaging markers and genetic variations (e.g., SNPs) are used to study neurodegenerative diseases, in particular, Alzheimer's disease (AD).
Figure 5: Imaging genetics applications: (a) prediction of the ADAS-Cog score (root-mean-square error) based on 14 imaging features, and (b) classification of healthy and AD subjects (classification error rate) based on 2000 SNPs. The error bars represent the standard errors.

We applied the EigenNet to two critical problems in imaging genetics and compared its performance with that of the alternative sparse learning methods. First, we considered a regression problem in which the predictors are imaging features, generated by Holland et al. (2009) for ADNI, comprising volumes measured in 14 brain regions of interest (ROIs), including the whole brain, the ventricles, the hippocampus, etc. We used these imaging features to predict the ADAS-Cog score, which is widely used to assess the cognitive function of AD patients. It is hypothesized that the brain ROI volumes are associated with the ADAS-Cog score, but this association has not been rigorously studied by statistical learning methods. After removing missing entries, we obtained data for 726 subjects, including healthy people, people with mild cognitive impairment (MCI), and AD patients. We then applied the lasso, the elastic net, and the EigenNet to this prediction task, randomly selecting 508 training samples and 218 test samples 50 times. The results are shown in Figure 5(a).

Second, we used SNP data to classify a subject into the healthy group or the AD group. We chose the top 2000 SNPs associated with AD based on a simple statistical test. There are 374 subjects in total (roughly the same number in each class). We compared the EigenNet with the lasso and the elastic net, as well as with the ARD approach, since the latter corresponds to the EigenNet's conditional component. We randomly split the dataset into 262 training and 112 test samples 10 times. The results are summarized in Figure 5(b). As the figure shows, for both the regression and the classification problems, the EigenNet outperforms the alternative methods significantly.

6 Conclusions

In this paper, we have presented a novel sparse Bayesian hybrid model that selects correlated variables for regression and classification. It integrates a sparse conditional ARD model with a latent variable model for the eigenvectors. Within this hybrid model, one could explore other latent variable models, such as sparse projection methods (Guan and Dy, 2009; Archambeau and Bach, 2009); these models can better deal with noise in the unlabeled data and improve the selection of interdependent features (i.e., predictors). Furthermore, if we have prior knowledge about the interdependence between features, such as linkage disequilibrium between SNPs, we could easily incorporate it into our model. Thus, our model provides an elegant framework for integrating complex data generation processes and domain knowledge in sparse learning.

7 Acknowledgments

The authors thank the anonymous reviewers and T. S. Jaakkola for constructive suggestions. This work was supported by NSF IIS-0916443, NSF CAREER award IIS-1054903, and the Center for Science of Information (CSoI), an NSF Science and Technology Center, under grant agreement CCF-0939370.

References

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1994.

Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net.
Journal of the Royal Statistical Society B, 67:301-320, 2005.

Julia A. Lasserre, Christopher M. Bishop, and Thomas P. Minka. Principled hybrids of generative and discriminative models. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pages 87-94, 2006.

David J. C. MacKay. Bayesian interpolation. Neural Computation, 4:415-447, 1991.

Ildiko E. Frank and Jerome H. Friedman. A statistical view of some chemometrics regression tools. Technometrics, 35(2):109-135, 1993.

Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348-1360, 2001.

Thomas P. Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362-369, 2001.

Michael E. Tipping and Anita C. Faul. Fast marginal likelihood maximisation for sparse Bayesian models. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.

Matthew Turk and Alex Pentland. Eigenfaces for recognition. J. Cognitive Neuroscience, 3:71-86, 1991.

L. Sirovich and M. Kirby. Low-dimensional procedure for the characterization of human faces. J. Opt. Soc. Am. A, 4(3):519-524, 1987.

Trevor Park and George Casella. The Bayesian lasso. Journal of the American Statistical Association, 103(482):681-686, 2008.

Qing Li and Nan Lin. The Bayesian elastic net. Bayesian Analysis, 5(1):151-170, 2010.

Laurent Jacob, Guillaume Obozinski, and Jean-Philippe Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.

Yuan Qi, Thomas P. Minka, Rosalind W. Picard, and Zoubin Ghahramani. Predictive automatic relevance determination by expectation propagation. In Proceedings of the Twenty-first International Conference on Machine Learning, pages 671-678, 2004.

Dominic Holland, James B. Brewer, Donald J. Hagler, Christine Fennema-Notestine, and Anders M. Dale. Subregional neuroanatomical change as a biomarker for Alzheimer's disease. Proceedings of the National Academy of Sciences, 106(49):20954-20959, 2009.

Yue Guan and Jennifer Dy. Sparse probabilistic principal component analysis. JMLR W&CP: AISTATS, 5, 2009.

Cédric Archambeau and Francis Bach. Sparse probabilistic projections. In Advances in Neural Information Processing Systems 21, 2009.
Reconstructing Patterns of Information Diffusion from Incomplete Observations∗

Jon Kleinberg
Department of Computer Science, Cornell University
Ithaca, NY 14853

Flavio Chierichetti
Department of Computer Science, Cornell University
Ithaca, NY 14853

David Liben-Nowell
Department of Computer Science, Carleton College
Northfield, MN 55057

∗ A full version of this paper is available from the authors' Web pages.

Abstract

Motivated by the spread of on-line information in general and on-line petitions in particular, recent research has raised the following combinatorial estimation problem. There is a tree $T$ that we cannot observe directly (representing the structure along which the information has spread), and certain nodes randomly decide to make their copy of the information public. In the case of a petition, the list of names on each public copy of the petition also reveals a path leading back to the root of the tree. What can we conclude about the properties of the tree we observe from these revealed paths, and can we use the structure of the observed tree to estimate the size of the full unobserved tree $T$? Here we provide the first algorithm for this size estimation task, together with provable guarantees on its performance. We also establish structural properties of the observed tree, providing the first rigorous explanation for some of the unusual structural phenomena present in the spread of real chain-letter petitions on the Internet.

1 Introduction

The on-line domain is a rich environment for observing social contagion: the tendency of new information, ideas, and behaviors to spread from person to person through a social network [1, 4, 6, 10, 12, 14, 17, 19]. When a link, invitation, petition, or other on-line item passes between people in the network, it is natural to model its spread using a tree structure: each person has the ability to pass the item to one or more others who haven't yet received it, producing a set of "offspring" in this tree. Recent work has considered such tree structures in the context of on-line conversations [13], chain letters [5, 9, 16], on-line product recommendations [11, 15], and other forms of forwarded e-mail [18]. These types of trees encode enormous detail about the process by which information spreads, but it has been a major methodological challenge to infer properties of their structure from the incomplete pictures of them that on-line data provides. Specifically, how do we reconstruct the paths followed by an on-line item using our incomplete observations, and how do we estimate from these observations the total number of people who encountered the item?

A fundamental type of social contagion is one in which the item, by its very nature, accumulates information about the paths it follows as it travels through the social network. A canonical example of such a self-recording item is an on-line petition that spreads virally by e-mail, in other words, a chain-letter petition [9, 16]. Each recipient who wants to take part in the petition signs his or her name to it and forwards copies of it to friends. In this way, each copy of the petition contains a growing list of names that corresponds to a path all the way back to the initiator of the petition. Such petitions are a central ingredient in broader forms of Internet-based activism, a topic of considerable current interest [7, 8]. In the remainder of this discussion, we will refer to the item being spread as a "petition,"
although more generally we are considering any item with this self-recording structure.

Reconstructing the Spread of Social Contagion. Liben-Nowell and Kleinberg studied the following framework for reconstructing the spread of chain-letter petitions [16]. Empirical analyses of large-scale petitions suggest that the spreading pattern can be reasonably modeled as a tree $T$; although there are a small number of deviations, almost all participants sign a copy of the petition exactly once (even if they receive it multiple times), and so we can view the person from whom they received this copy as their parent in $T$. The originator of the petition is the root of $T$. For a given petition, the tree $T$ is the structure we wish to understand, because it captures how the message spreads through the population. But in general we cannot hope to observe $T$: assuming that the petition spreads through individuals' e-mail accounts, hosted by multiple providers, there is no single organization that has all the information needed to reconstruct $T$.¹ Instead, we must obtain information about $T$ indirectly, via a revelation mechanism that we can model as follows. For each person $v$ who signs the petition, there is a small probability $\delta > 0$ that $v$ will also publicly post their copy of it. In this case, we say that the node $v$ is exposed. When $v$ is exposed, we see not only that $v$ belongs to $T$, but we also see $v$'s path all the way back to the root $r$ of $T$, due to the list of names on $v$'s copy of the petition. Thus, if the set of people who post their copy of the petition is $\{v_1, v_2, \ldots, v_s\}$, then the subtree $T'$ of $T$ that we are able to observe consists precisely of the union of the $r$-to-$v_i$ paths in $T$ (for $i = 1, 2, \ldots, s$).²

We refer to this process as the $\delta$-sampling of a tree $T$: each node $v$ is exposed independently with probability $\delta$, and then all nodes on any path from the root to an exposed node (including the exposed nodes themselves) are revealed. This results in an observed tree, consisting of all revealed nodes, given by a random variable $T_\delta$ drawn from the set of all possible subtrees of $T$. Understanding the relationship between $T_\delta$ and $T$ is a fundamental question, since empirically we are often in a setting where we can observe $T_\delta$ and want to reason about properties of the larger unobserved tree $T$.

Properties of $\delta$-Sampling: Some Basic Questions. This is the basic issue we address in this paper: to understand the observation of a tree under $\delta$-sampling. In Liben-Nowell and Kleinberg's work, they looked at large trees revealed via the public posting of chain-letter petitions on the Internet (the real-life process that is mathematically abstracted by $\delta$-sampling), and they identified some unexpected and recurring empirical properties in the observed trees. In particular, the observed trees that they reconstructed had a very large single-child fraction: the fraction of nodes with only one child was above 94%. The resulting trees had a narrow, "stringy" appearance, owing to long chains of these single-child nodes; this led naturally to the question of why the patterns of chain-letter diffusion were giving rise to such structures. Possible answers were hypothesized in subsequent work. In particular, Golub and Jackson proposed an explanation based on computer simulation [9]; they studied a model for generating trees $T$ using a Galton-Watson branching process [3], and they showed that for branching processes near the critical value for extinction, $\delta$-sampling with small values of $\delta$ produced large single-child fractions in simulations.
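The $\delta$-sampling process itself is simple to simulate. The sketch below is ours, in the spirit of the Golub-Jackson experiments just mentioned; the Poisson(1) offspring law (an exactly critical Galton-Watson process) and all size thresholds are our own choices. It draws a Galton-Watson tree, applies $\delta$-sampling, and reports the single-child fraction of the revealed tree:

```python
from collections import Counter
import numpy as np

def galton_watson_parents(rng, max_nodes=100_000):
    """Parent array of a (truncated) critical Galton-Watson tree, Poisson(1) offspring."""
    parent, frontier = [-1], [0]                 # node 0 is the root
    while frontier and len(parent) < max_nodes:
        v = frontier.pop()
        for _ in range(rng.poisson(1.0)):
            parent.append(v)
            frontier.append(len(parent) - 1)
    return parent

def delta_sample(parent, delta, rng):
    """Expose each node w.p. delta; reveal every node on a root-to-exposed-node path."""
    revealed = set()
    for v in np.nonzero(rng.random(len(parent)) < delta)[0]:
        v = int(v)
        while v != -1 and v not in revealed:
            revealed.add(v)
            v = parent[v]
    return revealed

rng = np.random.default_rng(0)
parent = []
while len(parent) < 20_000:                      # retry until the process survives long
    parent = galton_watson_parents(rng)          # (critical GW trees are finite a.s.)
for delta in (0.1, 0.01, 0.001):
    rev = delta_sample(parent, delta, rng)
    nkids = Counter(parent[v] for v in rev if parent[v] != -1)
    frac = sum(1 for v in rev if nkids[v] == 1) / max(1, len(rev))
    print(f"delta={delta}: |T_delta|={len(rev)}, single-child fraction={frac:.2f}")
```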
This line of work has left open a number of questions, of which two principal ones are the following. First, can we provide a formal connection between $\delta$-sampling and the single-child fraction, and can we characterize the types of trees on which this connection holds (whether generated by branching processes or otherwise)? Second, existing work on this topic has so far not provided any framework capable of addressing what is perhaps the most basic question about $\delta$-sampling: given a tree $T_\delta$ with its set of exposed nodes indicated, i.e., a single outcome of the $\delta$-sampling process, can we infer the number of nodes in the original tree $T$? (Note that we must do this inference without knowing the value of $\delta$.) This second question is a central issue in the sense that one generally asks, given partial observations of diffusion-based activism, for an estimate of the total number of people who were involved.

¹ Some petitions are hosted by a single Web site, rather than relying on social contagion; however, our focus here is on those that spread via person-to-person communication.
² In practice, there is a separate algorithmic question inherent in constructing this union in the presence of noise that makes different copies of the lists slightly typographically different from each other [5, 16], but this noise-correction process can be treated as a "black box" for our purposes here.

Our Results: Single-Child Fractions and Size Estimation. In this paper, we provide answers to both of these questions. First, we prove that $\delta$-sampling with small $\delta$ produces a large single-child fraction in all bounded-degree trees. We do not require any assumption that the unobserved tree $T$ arises from a branching process; the tree may be arbitrary as long as the degrees are bounded. More precisely, we show that for every natural number $k$ there is a function $f_k(x)$ with $\lim_{x \to 0^+} f_k(x) = 0$, such that if $T$ has a maximum of $k$ children at any node, then $T_\delta$ has a single-child fraction of at least $1 - f_k(\delta)$ with high probability.³ This result shows how the long stringy structures observed by Liben-Nowell and Kleinberg are a robust property of the process by which these structures were observed, essentially independently of what we assume (beyond a degree bound) about the structure of the unobserved tree.

³ Note that for simple reasons we need the given conditions on both $k$ and $\delta$. Indeed, if we don't bound the maximum number of children at any node, then the star graph (a single node with $n - 1$ children) has a single-child fraction of 0 with high probability for any non-trivial $\delta$. And if we don't consider the case of $\delta \to 0$, then each node with multiple children has a constant probability of having several of them made public, and hence the single-child fraction cannot converge to 1 unless the original tree was composed almost exclusively of single-child nodes to begin with.

Second, we consider the problem of estimating the size of $T$, which we define as the number of nodes it contains. In the basic formulation of the problem, we ask: given a single draw of $T_\delta$, with its set of exposed nodes indicated, can we estimate the size of $T$ to within a $1 \pm \epsilon$ factor with high probability, for any constant $\epsilon > 0$? Here we show that this is possible for any bounded-degree tree, as well as for trees of unbounded degree that satisfy certain structural conditions. Following our analysis of the estimation problem, we also consider the closely connected issue of concentration, which is related to estimation but distinct from it. Specifically, we ask: is the size of $T_\delta$ (a numerical random variable derived from $T_\delta$ itself) concentrated near its mean? For sufficiently small $\delta$ the answer is no, and we give a bound on the threshold for $\delta$, tight to within an exponentially smaller term, at which concentration begins to hold. We note that concentration is a fundamentally different issue from estimation, in the sense that for estimation to be possible it is not sufficient that the size of $T_\delta$ be concentrated as a random variable.⁴

⁴ For example, if $T_\delta$ is simply a star with $s$ leaves, this observed tree is consistent with $\delta$-sampling applied to any $n$-leaf star, for any value of $n \ge s$ and with $\delta = s/n$.
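Returning to the estimation question: the difficulty is the unknown $\delta$, which the following toy calculation (ours, not the paper's estimator) isolates. The number of exposed nodes $s$ is Binomial$(n, \delta)$, so $s/\delta$ would already be an accurate estimate if $\delta$ were known; yet, as footnote 4 above illustrates with the star, the count alone cannot separate $(n, \delta)$ from $(n', s/n')$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, delta = 1_000_000, 0.0005        # hypothetical true size and exposure rate
s = rng.binomial(n, delta)          # number of exposed (publicly posted) copies
print(f"s = {s}, s/delta = {s / delta:,.0f}, true n = {n:,}")
# With delta unknown, any n' >= s paired with delta' = s/n' is equally consistent
# with the count alone, so an estimator must exploit the shape of T_delta.
```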
be concentrated as a random variable. (Footnote 4: For example, if T_ρ is simply a star with s leaves, this observed tree is consistent with ρ-sampling applied to any n-leaf star, for any value of n ≥ s and with ρ = s/n.)

Using our methodology, we provide the first estimate for the reach of the Iraq-War protest chain letter studied by Liben-Nowell and Kleinberg: while the tree structure and rate of posting are at the limit of the parameters that can be handled, our framework estimates that their observed tree of 18,119 signers is a subtree of a larger unobserved tree with approximately 173,000 signers, which in total generated roughly 3.5 million copies of the e-mailed petition when both signers and non-signers are considered.

Our Results: Extensions of the Basic Model. Finally, we prove results for several extensions to our model. First, while we have focused on the case in which there is a fixed underlying tree T which is then sampled using randomization, we can also define a model in which both T and the sampling are the result of randomization; in particular, we consider a case in which T is first generated from a critical Galton-Watson process [3], and then ρ-sampling is applied to the generated tree T. For this model, we show that as long as the offspring distribution of the Galton-Watson process has finite variance and unit expectation, we can estimate the size of the unobserved tree T. Note that this allows for unbounded degrees (i.e., an offspring distribution with unbounded support), provided that the variance of this distribution is bounded.

A further extension relaxes the assumption that when a node v makes its copy of the petition public, the path is revealed all the way back to the root. Instead of the full path being visible, one can alternately consider a situation in which only the previous ℓ names on the petition are preserved, and hence the observed tree can only be reconstructed if it is possible to piece these snippets of length ℓ together. (Footnote 5: This version of the problem also arises naturally if we assume that individuals are not explicitly signing a petition, but that each forwarded message includes copies of the previous messages to a depth of ℓ.) Here we can show that size estimation is possible provided that ℓ is at least ρ^{-1} times a logarithmic factor, and this bound is asymptotically tight. Thus, our estimation methods work even when the data provides much less than a full path back to the root for each node made public. Due to space limitations, we defer details of this extension to the full version of the paper.

2 Single-Child Fraction

We begin by showing that in any bounded-degree tree, the fraction of single-child nodes converges to 1 as the sampling rate ρ goes to 0. The plan for this proof is as follows. First of all, let the unobserved tree T have n nodes, each having at most k children. Let us say that v is a branching node if it has more than one child. (That is, we partition the nodes of the tree into three disjoint categories: leaves, single-child nodes, and branching nodes.)
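To make the process concrete, here is a minimal Python sketch (ours, not from the paper; the parent-array representation and the function names are illustrative assumptions) that draws one outcome of ρ-sampling and tallies the three node categories in the revealed tree.

    import random
    from collections import Counter

    def rho_sample(parent, rho, rng=random):
        """One draw of the rho-sampling process on a tree given as a parent
        array (parent[v] is the parent of node v; parent[root] is None).
        Returns (revealed, exposed): the union of root-to-exposed-node paths,
        and the set of exposed nodes itself."""
        exposed = {v for v in range(len(parent)) if rng.random() < rho}
        revealed = set()
        for v in exposed:
            u = v
            while u is not None and u not in revealed:
                revealed.add(u)          # walk the path back toward the root
                u = parent[u]
        return revealed, exposed

    def node_categories(parent, revealed):
        """Partition the revealed tree into leaves, single-child nodes and
        branching nodes, counting children *within* the revealed tree."""
        nchildren = Counter(parent[v] for v in revealed if parent[v] is not None)
        leaves = sum(1 for v in revealed if nchildren[v] == 0)
        single = sum(1 for v in revealed if nchildren[v] == 1)
        return leaves, single, len(revealed) - leaves - single

On, say, a large binary tree (parent = [None] + [(v - 1) // 2 for v in range(1, 2**15)]), the single-child count returned by node_categories dominates more and more as ρ shrinks, in line with Corollary 2.3 below.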
In any bounded-degree tree, the number of leaves and the number of branching nodes are within a constant factor of each other; in particular, this will be true in the revealed tree T_ρ. Now, all leaves in T_ρ are nodes that are exposed (i.e., made public) by the ρ-sampling process, so in expectation T_ρ has at most ρn leaves (and we can bound the probability that the actual number exceeds this by more than a small factor). Thus, there will also be O(ρn) branching nodes, and all other nodes in T_ρ must be single-child nodes. Thus, the key to the argument is Theorem 2.1, which asserts that with high probability, T_ρ has Θ(ρn log_k ρ^{-1}) nodes in total. Since there are only O(ρn) leaves and branching nodes, the remaining nodes must be single-child nodes; and since the size of T_ρ exceeds O(ρn) by a factor of Θ(log_k ρ^{-1}), the fraction of single-child nodes in T_ρ must therefore converge to 1 as ρ goes to 0.

Complete proofs of all the results in this paper are given in the full version; due to space limitations, we are not able to include them here. Where space permits, we will briefly summarize some of the proofs in the present version. For Theorem 2.1, the key is to show that in any bounded-degree tree T, we can identify Θ(ρn) many disjoint sub-trees T_1, T_2, T_3, ..., each of size Θ(ρ^{-1}). We then argue that in a constant fraction of these trees T_i, a node of T_i at distance at least Ω(log_k ρ^{-1}) from T_i's root will be exposed, which will result in the appearance of Ω(log_k ρ^{-1}) nodes in T_i.

Theorem 2.1. Let T be a rooted n-node tree, and suppose that no node in T has more than k ≥ 2 children. (Footnote 6: If, in T, each node has at most one child, then T is a path, in which case an easy argument shows that almost every node will be revealed, and that necessarily only one of the revealed nodes will not have one child. Still, this case is covered by the theorem: just choose k = 2.) Let ρ ≤ k^{-γ}, for any constant γ > 2. Let T_ρ be the random subtree of T revealed by the ρ-sampling process, and let X_ρ be the number of its internal nodes. Then

    Pr[ X_ρ ≥ Ω(ρn log_k ρ^{-1}) ] ≥ 1 − e^{-Ω(ρn)}.

We now follow the plan outlined at the beginning of this section, using this theorem to conclude that the fraction of single-child nodes converges to 1. Theorem 2.1 provided the main step; from here, we simply argue, in Theorem 2.2, that T_ρ will have at most O(ρn) branching nodes with high probability.

Theorem 2.2. Given a tree T on n nodes, a sampling rate ρ, and a number M ≤ n, let p be the probability that the size of the tree T_ρ revealed by the ρ-sampling process is at most M. Let m be the number of nodes in T_ρ and m_1 be the number of single-child nodes in T_ρ. Then,

    Pr[ m_1 ≥ (1 − O(ρn/M)) · m ] ≥ 1 − e^{-Ω(ρn)} − p.

Now, using Theorems 2.1 and 2.2, we obtain the main result about single-child nodes as the following corollary.

Corollary 2.3. Let T be a rooted n-node tree, and suppose that no node in T has more than k ≥ 2 children. Let ρ ≤ k^{-γ}, for any constant γ > 1. Let T_ρ be the random subtree of T revealed by the ρ-sampling process. Let m and m_1 be, respectively, the number of nodes, and the number of nodes with exactly one child, in T_ρ. Then

    Pr[ m_1 ≥ (1 − O(1/log_k ρ^{-1})) · m ] ≥ 1 − e^{-Ω(ρn)}.

For concreteness, observe that if we choose ρ
= k^{-Θ(1/ε)} in Corollary 2.3, we obtain that the fraction of single-child nodes in the revealed tree will approach 1 − O(ε) with probability 1 − exp(−Ω(ρn)).

3 Estimation

As before, given an unknown tree T, let T_ρ be the tree revealed by the ρ-sampling process. In this section, we focus on the problem of size estimation: we present an algorithm which can be used as an unbiased estimator ρ̂ for ρ, and then we estimate the size n of the full unobserved tree.

Let V = V(T_ρ) be the set of nodes of T_ρ, let L ⊆ V be the set of its leaves, and let E ⊆ V be the set of its nodes that were exposed. (Observe that L ⊆ E.) For the unbiased estimator ρ̂, we consider the set of all nodes "above" the leaves of T_ρ, that is, internal nodes on a path from a leaf of T_ρ to the root, and we use the empirical fraction of exposures in this set as our value for ρ̂. After establishing that ρ̂ is an unbiased estimator, we show that the probability of a large deviation between ρ̂ and ρ decreases exponentially in |V − L|, the number of internal nodes of T_ρ. Thus, to show a high-probability bound for our size estimate, we need to establish a lower bound on the number of internal nodes of T_ρ, which will be the final step in the analysis.

We begin by describing an algorithm to produce the estimator ρ̂, and a corresponding estimator n̂ for the size of T.

- If |V| = 0 then return ρ̂ = 0; and if |V| = 1 then return ρ̂ = 1.
- Otherwise return ρ̂ = (|E| − |L|)/(|V| − |L|). If |E| > |L|, also return n̂ = |E|/ρ̂ = |E| · (|V| − |L|)/(|E| − |L|).

Observe that the algorithm is well-defined since, if |V| ≥ 2, then V will contain T_ρ's root, which will not be contained in L, and therefore |V| − |L| ≥ 1. For the following analysis of the algorithm observe that, since L ⊆ E and L ⊆ V, we have |E| − |L| = |E − L| and |V| − |L| = |V − L|.

We begin by showing that ρ̂ is an unbiased estimator for ρ. Following the plan outlined above, we consider the independent exposures of all nodes that lie above the leaves of T_ρ, resulting in the set E − L ⊆ V − L. Because exposure decisions are made independently at each node, Chernoff bounds provide us with a concentration result.

Lemma 3.1. ρ̂ is an unbiased estimator for ρ. Furthermore, if |V| ≥ 2,

    Pr[ |ρ̂ − ρ| ≥ ε · ρ ] ≤ 2e^{-ε²ρ|V−L|/3}.

We now transfer our bound on |ρ̂ − ρ| to a bound on |n̂ − n|. For this, it suffices to combine three relationships among these quantities: (i) n̂ = |E|/ρ̂ by definition; (ii) |ρ̂ − ρ| ≤ ε · ρ with high probability, by Lemma 3.1; and (iii) ||E| − ρn| ≤ ερn with high probability via Chernoff bounds, since the exposure decisions consist of n independent coin flips, each of probability ρ. Putting these together, we have the following corollary of Lemma 3.1.

Corollary 3.2. If |V| ≥ 2, then the size n of the unknown tree T satisfies

    Pr[ |n − n̂| ≤ εn ] ≥ 1 − e^{-Ω(ε²ρ|V−L|)}.

3.1 Trees with sublinear maximum degree

Our bounds thus far show that n̂ is close to n with a probability that decreases exponentially in the number of internal nodes |V − L|. We now investigate cases under which we can replace this upper bound on the probability by a more powerful one that decreases exponentially in a function that depends directly on n. To do this, we require a theorem that guarantees that the number of internal nodes is at least an explicit function of n; this function can then be used in place of |V − L| in the probability bounds.
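The estimation algorithm above translates directly into code; a minimal sketch (ours), reusing the simulator sketched in Section 2:

    from collections import Counter

    def estimate(V, L, E):
        """The estimators of Section 3: the empirical exposure rate rho_hat
        over the internal nodes of the observed tree, and n_hat = |E|/rho_hat.
        V, L, E are the node, leaf and exposed sets of the observed tree."""
        if len(V) == 0:
            return 0.0, None
        if len(V) == 1:
            return 1.0, None
        rho_hat = (len(E) - len(L)) / (len(V) - len(L))
        n_hat = len(E) / rho_hat if len(E) > len(L) else None
        return rho_hat, n_hat

    # Example on a simulated draw: n_hat should come out near len(parent).
    revealed, exposed = rho_sample(parent, rho=0.01)
    nchildren = Counter(parent[v] for v in revealed if parent[v] is not None)
    leaves = {v for v in revealed if nchildren[v] == 0}
    rho_hat, n_hat = estimate(revealed, leaves, exposed)

Corollary 3.2 then quantifies how fast n_hat concentrates around the true size as the number of internal nodes |V − L| grows.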
Our main result for this purpose is the following; in many respects, the bound it establishes is less refined than the bound from Theorem 2.1, but it is useful for obtaining a guarantee for the estimation procedure. The crux of the proof is to show that if a node v has k_v children and ρk_v ≤ 1, then the probability that v is revealed is at least a constant times the expected number of the exposed children of v, that is, Ω(ρk_v); if, instead, ρk_v > 1, then the probability that v is revealed is Ω(1). The result then follows from a bound on the number of nodes of degree greater than ρ^{-1}, linearity of expectation, and Chernoff bounds.

Theorem 3.3. Let T be a rooted n-node tree, and suppose that no node in T has more than k ≥ 1 children. Let T_ρ be the random subtree of T revealed by the ρ-sampling process. Then, the number X_ρ of internal nodes of T_ρ satisfies

    Pr[ X_ρ ≥ ((1 − e^{-1})/2) · min(k^{-1}, ρ) · (n − 1) ] ≥ 1 − e^{-Ω(n · min(k^{-1}, ρ))}.

Using this theorem, we can directly replace the bound from Corollary 3.2 with one that is an explicit function of n. Specifically, the next result follows directly from Corollary 3.2 and Theorem 3.3.

Corollary 3.4. Let T be a rooted n-node tree, and suppose that no node in T has more than k ≥ 1 children. Then, the event

    (1 − ε)n ≤ n̂ ≤ (1 + ε)n

happens with probability at least 1 − e^{-Ω(ε²ρ · min(ρ, k^{-1}) · n)}.

The smallest ρ that Corollary 3.4 can tolerate is roughly n^{-1/2}: if ρ ≥ √(ln(δ^{-1})/(ε²n)), and no node in the unknown tree has more than ρ^{-1} children (observe that ρ^{-1} ≲ √n), then with probability 1 − δ, the n̂ returned by the estimator is within a multiplicative 1 ± ε factor of the actual n.

3.2 Trees Arising from Branching Processes

We observe that Corollary 3.2 can also be used, just as in Section 3.1, for critical branching processes, i.e., those whose offspring distributions have unit expectation. (We also require finite variance.) The main fact we require about such branching processes is that the height of a uniformly chosen node from a branching process tree (with offspring distribution having finite variance, unit expectation, and conditioned on being of size n) is at least Ω(n^{1/2−ε}) with high probability [2]. Now, since |V − L| is at least the length of the path joining a uniformly chosen node to the root, if we choose ρ ≥ Θ(n^{-1/2+2ε}) it holds that |V − L| ≥ Ω(ρ^{-1}), and Corollary 3.2 can be applied to obtain a concentration result for n̂.

4 Concentration

As we observed in previous sections, the size of T_ρ plays a prominent role in determining both the fraction of single-child nodes and the size of the unknown tree T. In this section we prove some concentration results on the quantity |T_ρ|; that is, we will bound the probability that |T_ρ| is far from its mean, over random outcomes of the ρ-sampling process applied to the underlying tree T.

To begin with, the mean E[|T_ρ|] depends not just on |T| but also on the structure of T. However, it has a simple formulation in terms of this structure, as shown by the following claim, which is a direct application of linearity of expectation.

Observation 4.1. Let T be a rooted tree, and let T_ρ be the random subtree of T revealed by the ρ-sampling process. Then, if |T_v| denotes the size of the subtree of T rooted at v,

    E[|T_ρ|] = Σ_{v∈T} (1 − (1 − ρ)^{|T_v|}).

Our main result on concentration gives a value of ρ above which |T_ρ| has a high probability of being near its mean. The proof requires an intricate balancing of two kinds of nodes: those "high" in T, with many descendants, and those "low"
in T, with few descendants. If there are many low nodes, then since their probabilities of being revealed behave relatively independently, we have concentration; if there are only a few low nodes, then we have concentration simply from the fact that most of the high nodes will be revealed in almost all outcomes of the ρ-sampling process.

Theorem 4.2. Let T be a rooted tree on n nodes, with height at most H. Let T_ρ be the random subtree of T revealed by the ρ-sampling process. Let m be the size of T_ρ. Then for any ε, δ bounded above by some constant, and for any

    ρ ≥ min( ln²(n/δ) / (√n · ε³) , H · ln³(n/δ) / (n · ε³) ),

it holds that Pr[ |m − E[m]| ≤ ε · E[m] ] ≥ 1 − δ.

Note that the theorem requires a lower bound on the value of ρ, and we now show why this bound is necessary. In particular, we observe how the theorem does not hold if ρ = o(n^{-1/2}). To do this, let T be a tree whose root is connected directly to n − 1 − ⌈ρ^{-1}⌉ leaves and also to a path of length ⌈ρ^{-1}⌉. Then T_ρ will not contain any node in the path with probability

    (1 − ρ)^{⌈ρ^{-1}⌉} → e^{-1}   as ρ → 0.

If T_ρ does not contain any node in the path, then it will only contain nodes adjacent to the root. Since there are Θ(n) of these nodes, it follows from Chernoff bounds that T_ρ will contain at most U = O(ρn) many nodes. On the other hand, the probability that T_ρ will contain exactly one node in the path is

    ⌈ρ^{-1}⌉ · ρ · (1 − ρ)^{⌈ρ^{-1}⌉−1} → e^{-1}   as ρ → 0.

Under this conditioning, the single node in the path is uniformly distributed over the path itself, so with probability one half it lies in the lower half of the path, causing the upper half of the path to be revealed. Hence with constant probability at least L = Ω(ρ^{-1}) nodes will be revealed. We have shown that with constant probability the size of T_ρ will be at most U, and with constant probability the size of T_ρ will be at least L. If ρ = o(n^{-1/2}), we have

    L/U = Ω(ρ^{-2}n^{-1}) = ω(n)/n = ω(1),

from which it follows that the number of nodes of T_ρ is not concentrated when ρ = o(n^{-1/2}).

5 The Iraq-War Petition

Using the framework developed in the previous sections, we now turn to the anti-war petition studied by Liben-Nowell and Kleinberg. The petition, which protested the impending US-led invasion of Iraq, spread widely via e-mail in 2002-2003. The Iraq-War tree observed by Liben-Nowell and Kleinberg, after they did some mild preprocessing to clean the data, was deep and narrow, and contained the characteristic "stringy" pattern analyzed in Section 2, with over 94% of nodes having exactly one child.

The observed Iraq-War tree contained |V| = 18,119 nodes and |E| = 620 exposed nodes, of which |L| = 557 were exposed leaves. Using this information, we can apply the algorithm from Section 3: we estimate the posting probability as ρ̂ = (620 − 557)/(18119 − 557) ≈ 0.00359, and we estimate the size of the unobserved Iraq-War tree to be n̂ = |E|/ρ̂ ≈ 172,832.38 signatories.

We can also apply the results of Section 3 to analyze the error in our estimate n̂. For this purpose, we pose the question concretely as follows: if the observed Iraq-War tree arose via ρ-sampling from an arbitrary unobserved tree T of size n, what is the probability of the event that the estimate n̂ produced by our algorithm lies in the interval [n/2, 2n]? (Recall that our estimation algorithm is deterministic; the probability here is taken over the random choices of nodes exposed by the ρ-sampling process applied to the arbitrary fixed tree T.)
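The point estimates quoted above amount to two lines of arithmetic (an illustration only; the counts are those reported in the text):

    V, E, L = 18119, 620, 557            # observed nodes, exposed nodes, exposed leaves
    rho_hat = (E - L) / (V - L)          # = 63/17562 ~ 0.00359
    n_hat = E / rho_hat                  # ~ 172,832.38 estimated signers
    print(rho_hat, n_hat)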
We use a careful analysis (tight to within constants) to show that the estimate n̂ is quite accurate, as indicated by the following theorem.

Theorem 5.1. For any tree T of size n, assuming the observed Iraq-War tree was produced via ρ-sampling of T, the probability of the event that n̂ lies in the interval [n/2, 2n] is at least 95%.

In addition to the number of signers of the petition, it is also of interest to determine the total number of e-mail messages generated by the spread of the petition. For this purpose, we first need to estimate the distribution of the number of recipients of an e-mailed copy of the petition. To estimate this distribution, we collected a dataset of 147 copies of e-mail petitions with intact e-mail headers. In addition to data from the Iraq-War petition, these 147 copies include two other widely circulated petitions, supporting National Public Radio (NPR) and Mothers Against Drunk Driving (MADD). For each of these 147 e-mails, we counted the number of e-mail addresses to which the message was sent, including both direct and CCed recipients. (E-mails that were sent to mailing lists instead of to a list of individuals were not included in the set of 147.) The mean number of addressees was 20.37 (with standard deviation 20.60), the median was 14, and the maximum was 141.

In addition to using the length of recipient lists to check the conditions needed for our theoretical results, we can also use these numbers to estimate the total reach of the Iraq-War petition. A person who signs the petition forwards the petition, on average, to 20.37 other addressees. Thus, by linearity of expectation, we can estimate that the ≈ 172,832.38 signers in the unobserved tree sent a total of ≈ 3,520,595.58 chain-letter e-mails in the Iraq-War petition.

Finally, the ρ-sampling process is a very simple abstraction of the process by which a widely circulated message becomes public, and with further inspection of the Iraq-War tree observed by Liben-Nowell and Kleinberg, we can begin to identify potential limitations of the basic ρ-sampling model. Principally, we have been assuming that each individual signatory of the petition exposes her petition copy independently with probability ρ. However, the assumption of independence of nodes' exposure events, while useful as an analytical abstraction, appears to be too simple to capture all the properties we see in the exposure events for the real data. One of the most common mechanisms that exposes a petition e-mail is when that e-mail is sent to a mailing list that archives its messages on the Web. When one person exposes her petition copy by sending it to a mailing list, then her friends are more likely to expose their petition copies by sending to the same list again, because they are more likely to be members of that same list (because of homophily) or because they "reply to all" (including the list) with their petition copy.

We can quantify this independence issue explicitly by noting that many of the exposed internal nodes in the observed Iraq-War tree are close to the leaves of the tree. In particular, 48 of the 63 exposed internal nodes are within 10 hops of a leaf, out of only 5351 total such nodes. Thus the exposure rate for internal nodes within distance 10 of a leaf is 48/5351 ≈ 0.00897, while the exposure rate for internal nodes more than distance 10 from any leaf is 15/12211 ≈ 0.00123.
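The reach estimate and the exposure-rate comparison are equally direct arithmetic (numbers from the text):

    n_hat = 172_832.38
    emails = n_hat * 20.37               # ~ 3,520,595 chain-letter e-mails
    near = 48 / 5351                     # exposure rate within 10 hops of a leaf, ~ 0.00897
    far = 15 / 12211                     # exposure rate farther from the leaves, ~ 0.00123
    print(emails, near / far)            # the roughly 7x gap quantifies the non-independence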
6 Conclusion

When information spreads through a social network, it often does so along a branching structure that can be reasonably modeled as a tree; but when we observe this spreading process, we frequently see only a portion of the full tree. In this work, we have developed techniques that allow us to reason about the full tree along which the information spreads from the portion that is observed; as a consequence, we are able to propose estimates for the size of a network cascade from a sample of it, and to deduce certain structural properties of the tree that it produces.

When we apply these techniques to data such as the Iraq-War petition in Section 5, our conclusions must clearly be interpreted in light of the model's underlying assumptions. Among these assumptions, the requirement of bounded degree may generally be fairly mild, since it essentially requires the tree of interest simply to be large enough compared to the number of children at any one node. Arguably more restrictive is the assumption that each node makes an independent decision about posting its copy of the information, and with the same fixed probability ρ. It is an interesting direction for further work to consider how one might perform comparable analyses with a relaxed version of these underlying assumptions, as well as the extent to which estimations of the type we have pursued here are robust in the face of different variations on the assumptions.

Acknowledgements. Supported in part by the MacArthur Foundation, a Google Research Grant, a Yahoo! Research Alliance Grant, and NSF grants IIS-0910664, CCF-0910940, and IIS-1016099.

References

[1] E. Adar, L. Zhang, L. A. Adamic, and R. M. Lukose. Implicit structure and the dynamics of blogspace. In Workshop on the Weblogging Ecosystem, 2004.
[2] D. Aldous. The continuum random tree II: An overview. In M. T. Barlow and N. H. Bingham, editors, Stochastic Analysis, pages 23-70. Cambridge University Press, 1991.
[3] K. B. Athreya and P. E. Ney. Branching Processes. Dover, 2004.
[4] E. Bakshy, B. Karrer, and L. A. Adamic. Social influence and the diffusion of user-created content. In Proc. 10th ACM Conference on Electronic Commerce, pages 325-334, 2009.
[5] C. H. Bennett, M. Li, and B. Ma. Chain letters and evolutionary histories. Scientific American, 288(6):76-79, June 2003.
[6] M. Cha, A. Mislove, and P. K. Gummadi. A measurement-driven analysis of information propagation in the Flickr social network. In Proc. 18th International World Wide Web Conference, pages 721-730, 2009.
[7] J. Earl. The dynamics of protest-related diffusion on the web. Information, Communication, and Society, 13(26):209-225, 2010.
[8] R. K. Garrett. Protest in an information society: A review of literature on social movements and new ICTs. Information, Communication, and Society, 9(2):202-224, 2006.
[9] B. Golub and M. O. Jackson. Using selection bias to explain the observed structure of internet diffusions. Proc. Natl. Acad. Sci. USA, 107(24):10833-10836, 15 June 2010.
[10] D. Gruhl, R. V. Guha, D. Liben-Nowell, and A. Tomkins. Information diffusion through blogspace. In Proc. 13th International World Wide Web Conference, 2004.
[11] J. L. Iribarren and E. Moro. Impact of human activity patterns on the dynamics of information diffusion. Physical Review Letters, 103(3), July 2009.
[12] J. Kleinberg. Cascading behavior in networks: Algorithmic and economic issues. In N. Nisan, T. Roughgarden, É. Tardos, and V. Vazirani, editors, Algorithmic Game Theory, pages 613-632.
Cambridge University Press, 2007.
[13] R. Kumar, M. Mahdian, and M. McGlohon. Dynamics of conversations. In Proc. 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 553-562, 2010.
[14] J. Leskovec, L. Adamic, and B. Huberman. The dynamics of viral marketing. ACM Transactions on the Web, 1(1), May 2007.
[15] J. Leskovec, A. Singh, and J. M. Kleinberg. Patterns of influence in a recommendation network. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 380-389, 2006.
[16] D. Liben-Nowell and J. Kleinberg. Tracing information flow on a global scale using Internet chain-letter data. Proc. Natl. Acad. Sci. USA, 105(12):4633-4638, Mar. 2008.
[17] E. Sun, I. Rosenn, C. Marlow, and T. M. Lento. Gesundheit! Modeling contagion through Facebook News Feed. In Proc. 3rd International Conference on Weblogs and Social Media, 2009.
[18] D. Wang, Z. Wen, H. Tong, C.-Y. Lin, C. Song, and A.-L. Barabási. Information spreading in context. In Proc. 20th International World Wide Web Conference, pages 735-744, 2011.
[19] F. Wu, B. A. Huberman, L. A. Adamic, and J. R. Tyler. Information flow in social groups. Physica A, 337(1-2):327-335, 2004.
Grouping Contours by Iterated Pairing Network

Amnon Shashua
M.I.T. Artificial Intelligence Lab., NE43-737 and Department of Brain and Cognitive Science, Cambridge, MA 02139

Shimon Ullman

Abstract

We describe in this paper a network that performs grouping of image contours. The input to the net are fragments of image contours, and the output is the partitioning of the fragments into groups, together with a saliency measure for each group. The grouping is based on a measure of overall length and curvature. The network decomposes the overall optimization problem into independent optimal pairing problems performed at each node. The resulting computation maps into a uniform locally connected network of simple computing elements.

1 The Problem: Contour Grouping

A problem that often arises in visual information processing is the linking of contour fragments into optimal groups. For example, certain subsets of contours spontaneously form perceptual groups, as illustrated in Fig. 1, and are often detected immediately without scanning the image in a systematic manner. Grouping processes of this type are likely to play an important role in object recognition by segmenting the image and selecting image structures that are likely to correspond to objects of interest in the scene. We propose that some form of autonomous grouping is performed at an early stage, based on geometrical characteristics that are independent of the identity of the objects to be selected. The grouping process is governed by the notion of saliency, in a way that priority is given to forming salient groups at the expense of potentially less salient ones. This general notion can again be illustrated by Fig. 1; it appears that certain groups spontaneously emerge, while grouping decisions concerning the less salient parts of the image may remain unresolved. As we shall see, the computation below exhibits a similar behavior.

We define a grouping of the image contours as the formation of a set of disjoint
Our definition of the problem is related to, but not identical with, problems studied in the past under headings of "perceptual organization" , "segmentation" , "cueing" and "figure-ground separat.ion". In our definition of grouping, local grouping decisions based on collinearity of neighboring edge segments may be overridden in favor of more global decisions that are governed by the overall saliency of the groups. The paper introduces a novel grouping method having the following properties: (i) the grouping optimizes an overall saliency measure, (ii) the optimization problem is mapped onto a uniform locally connected network of simple computing elements, and (iii) the network's architecture and its computation are different in several respects from traditional neural network models. 2 Optimal Grouping For the purpose of grouping it is convenient to consider the image as a graph of edge elements. The vertices of the graph correspond to image pixels, and the arcs to elementary edge fragments. The input to the grouping problem is a contour image, represented by a su bset E r of the elements in the graph. A path in the graph corresponds to a contour in the image having any number of gaps. This implies that the grouping process implicitly bridges across gaps. This filling-in process is critical to any grouping scheme as demonst.rated by the circles in Fig. 1. The emphasis in this paper is on 1-D chains of elements such as objects' bounding contours. Grouping is therefore a collections of chains of AI, ... , Am such that Ai n Aj 0 i;/; j and Uj Ai 2 Er. To define an optimal grouping we will define a function F(A) that measures the quality of a group A. An optimal grouping is then a grouping that maximizes L~I F(Ai) over all possible groupings of the elements. = Grouping Contours by Iterated Pairing Network 2.1 The Quality Measure of a Group, F(A) The definition of the measure F(A) is motivated by both perceptual and computational considerations. In agreement with perceptual observations, it is defined to favor long smooth contours. Its form is also designed to facilitate distributed multistage optimization, as discussed below. To define F(A) of a chain of elements A = {e1' ... , em}, consider first a single element ei, and the n preceding elements in the chain. We use first a quantity Sn (i) which is the contribution of the n preceding elements to ei, which is: i j=max{1.i-n} is defined as 1 when ej corresponds to a contour fragment in the image and 0 for gaps. sn(i) is therefore simply a weighted sum of the contributions of the elements in the chain. The weighting factor Gij is taken to be a decreasing function of the total curvature of the pa.th l i j between elements ei and ej. This will lead to a grouping that prefers curves with small overall curvature over wiggly ones. Gij is given by the formula: Uj Gij = e- J ( dB)2 d -raj Tr 6 The exponent is the squared total curvature of the path between elements ei and ej, and the resulting Gij lies between 0 (highly curved contour) and 1 (straight line). For a discrete sampling of the curve, Gij can be approximated by the product: j+l Gij = II !p.p-1 p=i where !P.q is referred to as the coupling constant between adjacent elements e p and eq and is given by !P.q e- atan ~ where (l' is the angle measuring the orientation difference between p and q [3]. In a similar manner, one can define Sn(i) , the contribution of the n elements following ei in the chain. Sn(i) sn(i) + sn(i) Ui measures the contribution to element ei from both direction. 
In a similar manner, one can define s̄_n(i), the contribution of the n elements following e_i in the chain. Then S_n(i) = s_n(i) + s̄_n(i) − σ_i measures the contribution to element e_i from both directions. S_n(i) increases monotonically with the length and low total curvature of the curve passing through element e_i. The overall quality of the chain A is finally given by

    F_n(A) = Σ_{i=1}^{m} S_n(i)

F_n(A) increases quadratically with the size of A and is non-linear with respect to the total curvature of A. Maximizing Σ_i F(A_i) over all possible groupings will, therefore, prefer groups that are long and smooth. As n increases, the measure F_n will depend on larger portions of the curve surrounding each element, resulting in a finer discrimination between groups. In practice, we limit the measure to a finite n, and the optimal grouping is defined as:

    Γ_n = argmax_{A_1, ..., A_m} Σ_{i=1}^{m} F_n(A_i)

where the max is taken over all possible groupings. That is, we are looking for a grouping that will maximize the overall criterion function based on length and smoothness.

3 The Optimization Approach

Optimizing Γ_n is a nonlinear problem with an energy landscape that can be quite complex, making it difficult to find a global optimum, or even good local optima, using straightforward gradient descent methods. We define below a computation that proceeds in two stages, a saliency stage and a pairing stage, of n steps each. In the saliency stage we compute, by iterating a local computation, optimal values of S_n(i) for all elements in the graph. These values are an upper bound on the saliency values achievable by any grouping. In the pairing stage we further update S_n(i) by repeatedly forming local pairings of elements at each node of the graph. The details of both stages are given below.

3.1 Saliency Stage

For any given grouping A_1, ..., A_m, because the groups are disjoint, we have that

    Σ_{j=1}^{m} F_n(A_j) = Σ_{i=1}^{N} S_n(i) ≤ Σ_{i=1}^{N} max_{γ_i} S_n(i)

where N is the number of elements in the graph and γ_i is a curve passing through element e_i. We denote by S_n(i) the saliency of element e_i with respect to a curve γ_i. We therefore have that the maximal saliency value S*_n(i) = max_{γ_i} S_n(i) is an upper bound on the saliency value element e_i receives in the optimal grouping Γ_n.

We define a local computation on the grid of elements such that each element e_i computes the maximal S_n(i) by iterating the following simple computation, at each step taking the maximal contribution of its neighbors:

    s_0(i) = σ_i
    s_{n+1}(i) = σ_i + max_j s_n(j) f_{ij}        (1)

where this computation is performed by all elements in parallel. It can be shown that at the n-th iteration s_n(i) is maximal over all possible curves of length n, having any number of gaps, that come into e_i. Since S_n(i) = s_n(i) + s̄_n(i) − σ_i, we have found the maximal S_n(i) as well. For further details on the properties of this computation, see [3]. Note that since the computation is carried out by all elements of the net, including gaps (σ equals 0), the gaps are filled in as a by-product of the computation. One can show that the filling-in contour between two end-elements has the smallest overall curvature, and therefore has the shape of a cubic spline.
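A minimal sketch of the saliency stage, eq. (1) (ours; it assumes the coupling constants f[i][j] and neighbor lists have been precomputed, with sigma[i] in {0, 1} the contour/gap indicator):

    def saliency_stage(sigma, neighbors, f, n_iters):
        """Iterate s_{n+1}(i) = sigma_i + max_j s_n(j) * f[i][j]   (eq. 1).
        neighbors[i] lists the elements j that can precede i on a curve;
        all elements update in parallel, so gaps (sigma = 0) fill in too."""
        s = [float(x) for x in sigma]                       # s_0(i) = sigma_i
        for _ in range(n_iters):
            s = [sigma[i] + max((s[j] * f[i][j] for j in neighbors[i]), default=0.0)
                 for i in range(len(sigma))]
        return s

Keeping, for each element, the argmax neighbor alongside s recovers the curve that realizes the saliency.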
3.2 Pairing Stage

Given the optimal saliency values S*_n(i) computed at the saliency stage, we would like next to find a near-optimal grouping Γ_n. We first note the one-to-one correspondence between a grouping and a pairing of elements at each node of the graph. We define a pairing to be a partition of the k elements around a node P into ⌈k/2⌉ disjoint pairs. A pairing performed over all nodes of the net creates an equivalence relation over the elements of the net and therefore, by transitivity, determines a grouping.

We therefore proceed by selecting a pairing at each node of the net that will yield a near-optimal grouping Γ_n. Given s_n(i), the optimal saliency values computed by (1), and a pairing at node P, we update the saliency values by

    s_{n+1}(i) = σ_i + s_n(j) f_{ij}        (2)

where e_i and e_j are pairs determined by the pairing. This computation is exactly like (1), with the exception that (2) is applied to a fixed pairing, while in (1) each element selects the neighbor with maximal contribution. Further applications of pairing followed by (2) allow the results of pairing decisions to propagate along curves and influence other pairing decisions. This gives rise to the notion of iterated pairings, a repetitive pairing procedure applied simultaneously over all nodes of the graph, followed by the saliency computation (2). We define below a pairing procedure that identifies salient groups in contour images.

For every node P in the graph with elements e_1, ..., e_k coming into P, the values s_n(i), i = 1, ..., k computed by (1) are measured along optimal, not necessarily disjoint, curves A_1, ..., A_k of length n each. An optimal pairing at node P is defined as a disjoint pairing that concatenates A_1, ..., A_k into ⌈k/2⌉ curves such that the sum of their quality measures F(·) is maximal. Because F is defined to prefer smooth curves, and because of its non-linearity with respect to total curvature, an optimal pairing agrees with the notion of forming salient groups at the expense of potentially less salient ones. The following proposition shows that an optimal pairing can be determined locally, without the need to evaluate the quality measure of the concatenated curves.

Proposition 1 For a given node P, let e_1, ..., e_k be the elements around P, let A_1, ..., A_k be curves coming into P that are associated with the non-zero saliency values s_n(1), ..., s_n(k) with sufficiently large n (at least twice the largest chain A_i), let π be a permutation of the indices (1, ..., k) and let J = {(1, 2), (3, 4), ..., (k−1, k)}. Then

    argmax_π Σ_{(i,j)∈J} F_n(A_{π(i)} A_{π(j)}) = argmax_π Σ_{(i,j)∈J} w_{π(i)π(j)}

where A_iA_j stands for the concatenation of curves A_i, A_j, and w_{ij} = f_{ij}(s_n(i)c_n(j) + s_n(j)c_n(i)), where c_n is defined as c_n(i) = Σ_k G_{ki}, the sum being taken over all elements k in the chain A_i.

Proof: This is merely a calculation. F_n(A_iA_j), the measure of group-saliency of the chain A_iA_j, is equal to F_n(A_i) + F_n(A_j) + w_{ij}. Finally, without loss of generality, we can assume that k is even, because we can always add another element with zero weights attached to it.

Proposition 1 shows that an optimal pairing of elements can be determined locally on the basis of the saliency values computed in (1). One way to proceed is therefore the following. The quantities c_n, and therefore w_{ij}, can be accumulated and computed during computation (1). Then, the optimal pairing is computed at every node. Finding an optimal pairing is equivalent to finding an optimal weighted match in a general graph [2], with weights w_{ij}. The weighted matching problem on graphs has a polynomial algorithm due to Edmonds [1], and therefore its implementation is not unwieldy. Below we describe an alternative and more biologically plausible scheme that can be implemented in a simple network using iterative local computations. The computation is in fact almost identical to the saliency computation described in (1).
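Since Proposition 1 reduces the optimal pairing at a node to a maximum-weight matching on its k elements, the first option can lean on an off-the-shelf matcher; a sketch using networkx (our library choice; the weights of Proposition 1 are assumed given):

    import networkx as nx

    def optimal_pairing(weights):
        """weights[(i, j)] = w_ij from Proposition 1, for i < j.
        Returns a set of disjoint pairs of maximal total weight, via the
        blossom algorithm of Edmonds [1]."""
        G = nx.Graph()
        G.add_weighted_edges_from((i, j, w) for (i, j), w in weights.items())
        return nx.max_weight_matching(G, maxcardinality=True)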
Since the saliency values s_n computed by (1) are an upper bound on the final values achievable in any grouping, we would like to find a pairing that will preserve these values as closely as possible. Suppose that at P, e_i receives its maximal contribution from e_j, and at the same time e_i provides the maximal contribution to e_j ('mutual neighbors'). When performing local pairing at P, it is reasonable to select e_i and e_j as a pair. Note that although this is a local decision at P, the values s_n(i) and s_n(j) already take into account the contributions of extended curves. The remaining elements undergo another round of saliency selection and pairing of mutual neighbors, until all elements at P are paired. The following proposition shows that this pairing process is well behaved, in the sense that at each selection round there will always be at least one pair that mutually select each other. We therefore have that the number of selection rounds is bounded by ⌈k/2⌉, where k is the number of elements having non-zero saliency value coming into node P.

Proposition 2 Let x_1, ..., x_k be k positive real numbers, let w_{ij} = w_{ji}, i, j = 1, ..., k, be positive weights, and let b_i = argmax_j x_j w_{ij}. Then there exist i, j such that b_i = j and b_j = i (i and j are mutual neighbors).

Proof: by induction on k. For k = 3, assume there exists a cycle in the selection pattern. For any given cycle we can renumber the indices such that b_1 = 2, b_2 = 3 and b_3 = 1. Let w_i stand for w_{i−1,i}, where w_1 = w_{k,1}. We get (i) x_2w_2 > x_3w_1, (ii) x_3w_3 > x_1w_2 and (iii) x_1w_1 > x_2w_3. From (ii) and (iii) we get an inequality that contradicts (i). For the induction hypothesis, assume the claim holds for arbitrary k − 1. We must show that the claim holds for k. Given the induction hypothesis, we must show that there is no selection pattern that will give rise to a cycle of size k. Assume in contradiction that such a cycle exists. For any given cycle of size k we can renumber the indices such that b_i = i + 1 and b_k = 1, which implies that x_{b_i} w_{i,b_i} > x_j w_{ij} for all j ≠ b_i. In particular we have the following k inequalities: x_i w_i > x_{i−2} w_{i−1}, for i = 1, ..., k (indices taken cyclically). From the k − 1 inequalities corresponding to i = 2, ..., k we get, by transitivity, that x_1w_1 < x_{k−1}w_k, which contradicts the remaining inequality, the one corresponding to i = 1.

3.3 Summary of Computation

The optimization is mapped onto a locally connected network with a simple uniform computation. The computation consists of the following steps. (i) Compute the saliency S*_n of each line element using the computation defined in (1). (ii) At each node perform a pairing of the line elements at the node. The pairing is performed by repeatedly selecting mutual neighbors. (iii) Update at each node the values s_n based on the newly formed pairing (eq. 2). (iv) Go back to step 2. These iterated pairings allow pairing decisions to propagate along maximally salient curves and influence other pairing decisions. In the implementation, the number of iterations n is equal in both stages, and as n increases the pairing becomes finer, resulting in a finer discrimination between groups. During the computation the more salient groups emerge first; the less salient groups require additional iterations. Although the process is not guaranteed to converge to an optimal solution, it is a very simple computation that yields good results in practice. Some examples are shown in the next section.
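Steps (ii)-(iii) can be sketched as follows (our own rendering; s and f are the saliencies and couplings at one node, and Proposition 2 guarantees that each round pairs off at least two elements, so the loop finishes in at most ⌈k/2⌉ rounds):

    def pair_by_mutual_neighbors(items, s, f):
        """Pair elements i, j at a node when each provides the other's
        maximal contribution s(j) * f[i][j] ('mutual neighbors')."""
        remaining, pairs = set(items), []
        while len(remaining) > 1:
            best = {i: max((j for j in remaining if j != i),
                           key=lambda j: s[j] * f[i][j])
                    for i in remaining}
            mutual = [(i, j) for i, j in best.items() if best[j] == i and i < j]
            if not mutual:        # ruled out by Proposition 2 for generic weights
                break
            for i, j in mutual:
                pairs.append((i, j))   # eq. (2) then updates s along each pair
                remaining -= {i, j}
        return pairs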
[Figure 2: Results after 30 iterations of saliency and pairing on a net of size 128 × 128 with 16 elements per node. Images from left to right display the saliency map following the saliency and pairing stages and a number of strongest groups. The saliency of elements in the display is represented in terms of brightness and width: increased saliency measure corresponds to increased brightness and width of the element in the display.]

3.4 Examples

Fig. 2 shows the results of the network applied to the images in Fig. 1. The saliency values following the saliency and pairing stages illustrate that perceptually salient curves are also associated with high saliency values (see also [3]). Finally, in these examples, the highest saliency value of each group has been propagated along all elements of the group, so that each group is now associated with a single saliency value. A number of the strongest groups have been pulled out, showing the close correspondence of these groups to objects of interest in the images.

Acknowledgments

This work was supported by NSF grant IRI-8900267. Part of the work was done while A.S. was visiting the exploratory vision group at IBM research center, Yorktown Heights.

References

[1] J. Edmonds. Paths, trees, and flowers. Can. J. Math., 1:263-271, 1965.
[2] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, New Jersey, 1982.
[3] A. Shashua and S. Ullman. Structural saliency: The detection of globally salient structures using a locally connected network. In Proceedings of the 2nd International Conference on Computer Vision, pages 321-327, 1988.
Expressive Power and Approximation Errors of Restricted Boltzmann Machines

Guido F. Montúfar¹, Johannes Rauh¹, and Nihat Ay¹,²
¹ Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
² Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501, USA
{montufar,jrauh,nay}@mis.mpg.de

Abstract

We present explicit classes of probability distributions that can be learned by Restricted Boltzmann Machines (RBMs) depending on the number of units that they contain, and which are representative for the expressive power of the model. We use this to show that the maximal Kullback-Leibler divergence to the RBM model with n visible and m hidden units is bounded from above by (n−1) − log(m+1). In this way we can specify the number of hidden units that guarantees a sufficiently rich model containing different classes of distributions and respecting a given error tolerance.

1 Introduction

A Restricted Boltzmann Machine (RBM) [24, 10] is a learning system consisting of two layers of binary stochastic units, a hidden layer and a visible layer, with a complete bipartite interaction graph. RBMs are used as generative models to simulate input distributions of binary data. They can be trained in an unsupervised way, and more efficiently than general Boltzmann Machines, which are not restricted to have a bipartite interaction graph [11, 6]. Furthermore, they can be used as building blocks to progressively train and study deep learning systems [13, 4, 16, 21]. Hence, RBMs have received increasing attention in the past years.

An RBM with n visible and m hidden units generates a stationary distribution on the states of the visible units which has the following form:

    p_{W,C,B}(v) = (1/Z_{W,C,B}) Σ_{h∈{0,1}^m} exp( hᵀWv + Cᵀh + Bᵀv )    for all v ∈ {0,1}ⁿ,

where h ∈ {0,1}^m denotes a state vector of the hidden units, W ∈ R^{m×n}, C ∈ R^m and B ∈ R^n constitute the model parameters, and Z_{W,C,B} is a corresponding normalization constant. In the sequel we denote by RBM_{n,m} the set of all probability distributions on {0,1}ⁿ which can be approximated arbitrarily well by a visible distribution generated by the RBM with m hidden and n visible units, for an appropriate choice of the parameter values.

As shown in [21] (generalizing results from [15]), RBM_{n,m} contains any probability distribution if m ≥ 2^{n−1} − 1. On the other hand, if RBM_{n,m} equals the set P of all probability distributions on {0,1}ⁿ, then it must have at least dim(P) = 2ⁿ − 1 parameters, and thus at least ⌈2ⁿ/(n+1)⌉ − 1 hidden units [21]. In fact, in [8] it was shown that for most combinations of m and n the dimension of RBM_{n,m} (as a manifold, possibly with singularities) equals either the number of parameters or 2ⁿ − 1, whichever is smaller. However, the geometry of RBM_{n,m} is intricate, and even an RBM of dimension 2ⁿ − 1 is not guaranteed to contain all visible distributions; see [20] for counterexamples.

In summary, an RBM that can approximate any distribution arbitrarily well must have a very large number of parameters and hidden units. In practice, training such a large system is not desirable or even possible. However, there are at least two reasons why in many cases this is not necessary:
For example, the set of optimal policies in reinforcement learning [25], the set of dynamics kernels that maximize predictive information in robotics [26], or the information flow in neural networks [3] are contained in very low dimensional manifolds; see [2]. On the other hand, it is usually very hard to describe mathematically a set containing the optimal solutions to general problems, or a set of interesting probability distributions (for example the class of distributions generating natural images). Furthermore, although RBMs are parametric models and any choice of the parameters yields a probability distribution, in general it is difficult to specify this resulting probability distribution explicitly (or even to estimate it [18]). Due to these difficulties the number of hidden units m is often chosen on the basis of experience [12], or m is considered as a hyperparameter which is optimized by extensive search, depending on the distributions to be simulated by the RBM.

In this paper we give an explicit description of classes of distributions that are contained in RBM_{n,m}, and which are representative for the expressive power of this model. Using this description, we estimate the maximal Kullback-Leibler divergence between an arbitrary probability distribution and the best approximation within RBM_{n,m}.

This paper is organized as follows: Section 2 discusses the different kinds of errors that appear when an RBM learns. Section 3 introduces the statistical models studied in this paper. Section 4 studies submodels of RBM_{n,m}. An upper bound of the approximation error for RBMs is found in Section 5.

2 Approximation Error
When training an RBM to represent a distribution p, there are mainly three contributions to the discrepancy between p and the state of the RBM after training:
1. Usually the underlying distribution p is unknown and only a set of samples generated by p is observed. These samples can be represented as an empirical distribution p_Data, which usually is not identical with p.
2. The set RBM_{n,m} does not contain every probability distribution, unless the number of hidden units is very large, as we outlined in the introduction. Therefore, we have an approximation error given by the distance of p_Data to the best approximation p_RBM^Data contained in the RBM model.
3. The learning process may yield a solution p̃_RBM^Data in RBM_{n,m} which is not the optimum p_RBM^Data. This occurs, for example, if the learning algorithm gets trapped in a local optimum, or if it optimizes an objective different from maximum likelihood, e.g. contrastive divergence (CD), see [6].

In this paper we study the expressive power of the RBM model and the Kullback-Leibler divergence from an arbitrary distribution to its best representation within the RBM model. Estimating the approximation error is difficult, because the geometry of the RBM model is not sufficiently understood. Our strategy is to find subsets M ⊆ RBM_{n,m} that are easy to describe. Then the maximal error when approximating probability distributions with an RBM is upper bounded by the maximal error when approximating with M.

Consider a finite set X. A real valued function on X can be seen as a real vector with |X| entries. The set P = P(X) of all probability distributions on X is a (|X| − 1)-dimensional simplex in R^{|X|}. There are several notions of distance between probability distributions, and in turn for the error in the representation (approximation) of a probability distribution.
One possibility is to use the induced distance of the Euclidean space R^{|X|}. From the point of view of information theory, a more meaningful distance notion for probability distributions is the Kullback-Leibler divergence:

\[ D(p\|q) := \sum_{x} p(x) \log \frac{p(x)}{q(x)} . \]

In this paper we use the base-2 logarithm. The Kullback-Leibler (KL) divergence is non-negative and vanishes if and only if p = q. If the support of q does not contain the support of p, it is defined as ∞. The summands with p(x) = 0 are set to 0. The KL-divergence is not symmetric, but it has nice information theoretic properties [14, 7].

Figure 1 (image omitted; panels range from q = p on the left to the uniform q on the right, with a "relative error" scale from 0 to 1): This figure gives an intuition on what the size of an error means for probability distributions on images with 16 × 16 pixels. Every column shows four samples drawn from the best approximation q of the distribution p = ½(δ_{(1...1)} + δ_{(0...0)}) within a partition model with 2 randomly chosen cubical blocks, containing (0...0) and (1...1), of cardinality from 1 (first column) to |X|/2 (last column). As a measure of error ranging from 0 to 1 we take D(p‖q)/D(p‖1/|X|). The last column shows samples from the uniform distribution, which is, in particular, the best approximation of p within RBM_{n,0}. Note that an RBM with 1 hidden unit can approximate p with arbitrary accuracy, see Theorem 4.1.

If E ⊆ P is a statistical model and if p ∈ P, then any probability distribution p_E ∈ Ē satisfying D(p‖p_E) = D(p‖E) := min{D(p‖q) : q ∈ Ē} is called a (generalized) reversed information projection, or rI-projection. Here, Ē denotes the closure of E. If p is an empirical distribution, then one can show that any rI-projection is a maximum likelihood estimate.

In order to assess an RBM or some other model M we use the maximal approximation error with respect to the KL-divergence when approximating arbitrary probability distributions using M:

\[ D_M := \max \{ D(p\|M) : p \in P \} . \]

For example, the maximal KL-divergence to the uniform distribution 1/|X| is attained by any Dirac delta distribution δ_x, x ∈ X, and amounts to:

\[ D_{\{1/|X|\}} = D(\delta_x \| 1/|X|) = \log |X| . \qquad (1) \]

3 Model Classes
3.1 Exponential families and product measures
In this work we only need a restricted class of exponential families, namely exponential families on a finite set with uniform reference measure. See [5] for more on exponential families. The boundary of discrete exponential families is discussed in [23], which uses a similar notation.

Let A ∈ R^{d×|X|} be a matrix. The columns A_x of A will be indexed by x ∈ X. The rows of A can be interpreted as functions on X. The exponential family E_A with sufficient statistics A consists of all probability distributions of the form p_θ, θ ∈ R^d, where

\[ p_\theta(x) = \frac{\exp(\theta^\top A_x)}{\sum_{\bar{x}} \exp(\theta^\top A_{\bar{x}})} \qquad \text{for all } x \in X . \]

Note that any probability distribution in E_A has full support. Furthermore, E_A is in general not a closed set. The closure Ē_A (with respect to the usual topology on R^X) will be important in the following. Exponential families behave nicely with respect to rI-projection: any p ∈ P has a unique rI-projection p_E to E_A.

The most important exponential families in this work are the independence models. The independence model of n binary random variables consists of all probability distributions on {0,1}^n that factorize:

\[ \overline{E}_n = \Big\{ p \in P(X) : p(x_1, \ldots, x_n) = \prod_{i=1}^{n} p_i(x_i) \ \text{for some } p_i \in P(\{0,1\}) \Big\} . \]

It is the closure of an n-dimensional exponential family E_n. This model corresponds to the RBM model with no hidden units.
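Since everything that follows is measured in this divergence, a minimal implementation with the stated conventions (base-2 logarithm, summands with p(x) = 0 set to 0, infinite value when the support of q does not contain the support of p) may be handy. This is our own sketch, not code from the paper:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in bits, with the conventions
    used above: 0 log 0 = 0, and D = infinity when supp(p) is not
    contained in supp(q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Sanity check of equation (1): a Dirac delta attains D = log|X| against
# the uniform distribution.
size = 8
delta = np.eye(size)[0]
print(kl(delta, np.full(size, 1 / size)))  # -> 3.0 = log2(8)
```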
An element of the independence model is called a product distribution.

Lemma 3.1 (Corollary 4.1 of [1]) Let E_n be the independence model on {0,1}^n. If n > 0, then D_{E_n} = n − 1. The global maximizers are the distributions of the form ½(δ_x + δ_y), where x, y ∈ {0,1}^n satisfy x_i + y_i = 1 for all i.

This result should be compared with (1). Although the independence model is much larger than the set {1/|X|}, the maximal divergence decreases only by 1. As shown in [22], if E is any exponential family of dimension k, then D_E ≥ log(|X|/(k+1)). Thus, this notion of distance is rather strong. The exponential families satisfying D_E = log(|X|/(k+1)) are partition models; they will be defined in the following section.

3.2 Partition models and mixtures of products with disjoint supports
The mixture of m models M_1, ..., M_m ⊆ P is the set of all convex combinations

\[ p = \sum_i \lambda_i p_i , \quad \text{where } p_i \in M_i ,\ \lambda_i \ge 0 ,\ \sum_i \lambda_i = 1 . \qquad (2) \]

In general, mixture models are complicated objects. Even if all models M_1 = · · · = M_m are equal, it is difficult to describe the mixture [17, 19]. The situation simplifies considerably if the models have disjoint supports. Note that given any partition ξ = {X_1, ..., X_m} of X, any p ∈ P can be written as p(x) = p^{X_i}(x) p(X_i) for all x ∈ X_i and i ∈ {1,...,m}, where p^{X_i} is a probability measure in P(X_i) for all i.

Lemma 3.2 Let ξ = {X_1,...,X_m} be a partition of X and let M_1,...,M_m be statistical models such that M_i ⊆ P(X_i). Consider any p ∈ P and corresponding p^{X_i} such that p(x) = p^{X_i}(x) p(X_i) for x ∈ X_i. Let p_i be an rI-projection of p^{X_i} to M_i. Then the rI-projection p_M of p to the mixture M of M_1,...,M_m satisfies p_M(x) = p(X_i) p_i(x) whenever x ∈ X_i. Therefore, D(p‖M) = Σ_i p(X_i) D(p^{X_i}‖M_i), and so D_M = max_{i=1,...,m} D_{M_i}.

Proof Let p ∈ M be as in (2). Then D(q‖p) = Σ_{i=1}^{m} q(X_i) D(q^{X_i}‖p_i) + Σ_{i=1}^{m} q(X_i) log(q(X_i)/λ_i) for all q ∈ P. For fixed q this expression is minimal if and only if each term is minimal, i.e. each p_i is an rI-projection of q^{X_i} and λ_i = q(X_i). □

If each M_i is an exponential family, then the mixture is also an exponential family (this is not true if the supports of the models M_i are not disjoint). In the rest of this section we discuss two examples.

If each M_i equals the set containing just the uniform distribution on X_i, then M is called the partition model of ξ, denoted P_ξ. The partition model P_ξ is given by all distributions with constant value on each block X_i, i.e. those that satisfy p(x) = p(y) for all x, y ∈ X_i. This is the closure of the exponential family with sufficient statistics A_x = (χ_1(x), χ_2(x), ..., χ_d(x))^⊤, where χ_i := χ_{X_i} is 1 for x ∈ X_i and 0 everywhere else. See [22] for interesting properties of partition models. The partition models include the set of finite exchangeable distributions (see e.g. [9]), where the blocks of the partition are the sets of binary vectors which have the same number of entries equal to one. The probability of a vector v depends only on the number of ones, but not on their position.

Corollary 3.3 Let ξ = {X_1,...,X_m} be a partition of X. Then D_{P_ξ} = max_{i=1,...,m} log|X_i|.

Figure 2 (image omitted): Models in P({0,1}²). Left: The blue line represents the partition model P_ξ with partition ξ = {(11),(01)} ∪ {(00),(10)}. The dashed lines represent the set of KL-divergence maximizers for P_ξ. Right: The mixture of the product distributions E_1 and E_2 with disjoint supports on {(11),(01)} and {(00),(10)} corresponding to the same partition ξ equals the whole simplex P.
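Lemma 3.2 makes the rI-projection onto a partition model completely explicit, which is easy to mirror in code. The sketch below is ours (it reuses kl() from the snippet above) and checks Corollary 3.3 on a toy partition:

```python
import numpy as np

def project_to_partition_model(p, blocks):
    """rI-projection of p onto the partition model P_xi: on each block X_i
    the projection is uniform on the block, scaled by the block mass p(X_i)
    (Lemma 3.2 with each M_i = {uniform distribution on X_i})."""
    q = np.zeros_like(np.asarray(p, float))
    for block in blocks:
        q[block] = np.asarray(p, float)[block].sum() / len(block)
    return q

# Corollary 3.3 check on X = {0,...,7}: D_{P_xi} = max_i log|X_i| is
# attained by a delta concentrated inside a largest block.
blocks = [[0, 1, 2, 3], [4, 5], [6, 7]]
p = np.eye(8)[0]                   # delta on x = 0, inside the block of size 4
q = project_to_partition_model(p, blocks)
print(kl(p, q))                    # -> 2.0 = log2(4), with kl() from above
```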
Now assume that X = {0,1}^n is the set of binary vectors of length n. As a subset of R^n it consists of the vertices (extreme points) of the n-dimensional hypercube. The vertices of a k-dimensional face of the n-cube are given by fixing the values of x in n−k positions:

\[ \{ x \in \{0,1\}^n : x_i = \tilde{x}_i \ \forall i \in I \} , \quad \text{for some } I \subseteq \{1,\ldots,n\} \text{ with } |I| = n-k \text{ and some } \tilde{x} \in \{0,1\}^n . \]

We call such a subset Y ⊆ X cubical, or a face of the n-cube. A cubical subset of cardinality 2^k can be naturally identified with {0,1}^k. This identification allows to define independence models and product measures on P(Y) ⊆ P(X). Note that product measures on Y are also product measures on X, and the independence model on Y is a subset of the independence model on X.

Corollary 3.4 Let ξ = {X_1,...,X_m} be a partition of X = {0,1}^n into cubical sets. For any i let E_i be the independence model on X_i, and let M be the mixture of E_1,...,E_m. Then D_M = max_{i=1,...,m} log(|X_i|) − 1.

See Figure 1 for an intuition on the approximation error of partition models, and see Figure 2 for small examples of a partition model and of a mixture of products with disjoint support.

4 Classes of distributions that RBMs can learn
Consider a set ξ = {X_i}_{i=1}^{m} of m disjoint cubical sets X_i in X. Such a ξ is a partition of some subset ∪ξ = ∪_i X_i of X into m disjoint cubical sets. We write G_m for the collection of all such partitions. We have the following result:

Theorem 4.1 RBM_{n,m} contains the following distributions:
• Any mixture of one arbitrary product distribution, m − k product distributions with support on arbitrary but disjoint faces of the n-cube, and k arbitrary distributions with support on any edges of the n-cube, for any 0 ≤ k ≤ m. In particular:
• Any mixture of m + 1 product distributions with disjoint cubical supports.
In consequence, RBM_{n,m} contains the partition model of any partition in G_{m+1}.

Restricting the cubical sets of the second item to edges, i.e. pairs of vectors differing in one entry, we see that the above theorem implies the following previously known result, which was shown in [21].

Corollary 4.2 RBM_{n,m} contains the following distributions:
• Any distribution with a support set that can be covered by m + 1 pairs of vectors differing in one entry. In particular, this includes:
• Any distribution in P with a support of cardinality smaller than or equal to m + 1.

Corollary 4.2 implies that an RBM with m ≥ 2^{n-1} − 1 hidden units is a universal approximator of distributions on {0,1}^n, i.e. can approximate any distribution to an arbitrarily good accuracy.

Assume m + 1 = 2^k and let ξ be a partition of X into m + 1 disjoint cubical sets of equal size. Let us denote by P_{ξ,1} the set of all distributions which can be written as a mixture of m + 1 product distributions with support on the elements of ξ. The dimension of P_{ξ,1} is given by

\[ \dim P_{\xi,1} = (m+1) \log\frac{2^n}{m+1} + m + 1 + n = (m+1)\,n + (m+1) + n - (m+1)\log(m+1) . \]

The dimension of the set of visible distributions represented by an RBM is at most equal to the number of parameters, see [21]; this is mn + m + n. This means that the class given above has roughly the same dimension as the set of distributions that can be represented. In fact,

\[ \dim P_{\xi,1} - \dim \mathrm{RBM}_{n,m} \ \ge\ n + 1 - (m+1)\log(m+1) . \]

This means that the class of distributions P_{ξ,1}, which by Theorem 4.1 can be represented by RBM_{n,m}, is not contained in RBM_{n,m-1} when (m+1)^{m+1} ≤ 2^{n+1}.

Proof of Theorem 4.1 The proof draws on ideas from [15] and [21].
An RBM with no hidden units can represent precisely the independence model, i.e. all product distributions, and in particular any uniform distribution on a face of the n-cube. Consider an RBM with m − 1 hidden units. For any choice of the parameters W ∈ R^{(m-1)×n}, B ∈ R^n, C ∈ R^{m-1}, we can write the resulting distribution on the visible units as:

\[ p(v) = \frac{\sum_{h} z(v,h)}{\sum_{v',h'} z(v',h')} , \qquad (3) \]

where z(v,h) = exp(hWv + Bv + Ch). Appending one additional hidden unit, with connection weights w to the visible units and bias c, produces a new distribution which can be written as follows:

\[ p_{w,c}(v) = \frac{\left(1+\exp(wv+c)\right) \sum_{h} z(v,h)}{\sum_{v',h'} \left(1+\exp(wv'+c)\right) z(v',h')} . \]

Consider now any set I ⊆ [n] := {1,...,n} and an arbitrary visible vector u ∈ X. The values of u in the positions [n]∖I define a face F := {v ∈ X : v_i = u_i, ∀i ∉ I} of the n-cube of dimension |I|. Let 1 := (1,...,1) ∈ R^n, and denote by u^{I,0} the vector with entries u^{I,0}_i = u_i ∀i ∉ I and u^{I,0}_i = 0 ∀i ∈ I; define 1^{I,0} analogously. Let λ^I ∈ R^n with λ^I_i = 0 ∀i ∉ I, and let β_c, a ∈ R. Define the connection weights w and c as follows:

\[ w = a\left(u^{I,0} - \tfrac{1}{2}\, 1^{I,0}\right) + \lambda^I , \qquad c = -a\left(u^{I,0} - \tfrac{1}{2}\, 1^{I,0}\right)^\top u + \beta_c . \]

For this choice, as a → ∞, equation (4) yields:

\[ p_{w,c}(v) \;\to\; \begin{cases} \dfrac{p(v)}{1+\sum_{v' \in F} \exp(\lambda^I \cdot v' + \beta_c)\, p(v')} , & \forall v \notin F \\[2mm] \dfrac{\left(1+\exp(\lambda^I \cdot v + \beta_c)\right) p(v)}{1+\sum_{v' \in F} \exp(\lambda^I \cdot v' + \beta_c)\, p(v')} , & \forall v \in F . \end{cases} \qquad (4) \]

If the initial p from equation (3) is such that its restriction to F is a product distribution, then p(v) = K exp(η^I · v) ∀v ∈ F, where K is a constant and η^I is a vector with η^I_i = 0 ∀i ∉ I. We can choose λ^I = θ^I − η^I and exp(β_c) = λ / ((1−λ) K Σ_{v∈F} exp(θ^I·v)). For this choice, equation (4) yields

\[ p_{w,c} = (1-\lambda)\, p + \lambda\, \hat{p} , \]

where p̂ is a product distribution with support in F and arbitrary natural parameters θ^I, and λ is an arbitrary mixture weight in [0,1]. Finally, the product distributions on edges of the cube are arbitrary, see [19] or [21] for details, and hence the restriction of any p to any edge is a product distribution. □

Figure 3 (image omitted; two panels, "RBMs with 3 visible units" and "RBMs with 4 visible units", plotting D(p_parity‖p_RBM) against the number of hidden units m; the red curves show (n−1) − log(m+1)): This figure demonstrates our results for n = 3 and n = 4 visible units. The red curves represent the bounds from Theorem 5.1. We fixed p_parity as target distribution, the uniform distribution on binary length-n vectors with an even number of ones. The distribution p_parity is not the KL-maximizer from RBM_{n,m}, but it is in general difficult to represent. Qualitatively, samples from p_parity look uniformly distributed, and representing p_parity requires the maximal number of product mixture components [20, 19]. For both values of n and each m = 0,...,2^n/2 we initialized 500 resp. 1000 RBMs at parameter values chosen uniformly at random in the range [−10,10]. The inset of the left figure shows the resulting KL-divergence D(p_parity‖p_RBM^rand) (for n = 4 the resulting KL-divergence was larger). Randomly chosen distributions in RBM_{n,m} are likely to be very far from the target distribution. We trained these randomly initialized RBMs using CD for 500 training epochs, learning rate 1 and a list of even parity vectors as training data. The result after training is given by the blue circles. After training the RBMs the result is often not better than the uniform distribution, for which D(p_parity‖1/|{0,1}^n|) = 1. For each m, the best set of parameters after training was used to initialize a further CD training with a smaller learning rate (green squares, mostly covered), followed by a short maximum likelihood gradient ascent (red filled squares).
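As a sanity check of the construction above, one can verify numerically that a single hidden unit already suffices for p = ½(δ_{(0...0)} + δ_{(1...1)}), as claimed in Figure 1's caption, and tabulate the red-curve bound of Figure 3. The parameter scaling below is a hand-picked instance in the spirit of the proof, not the paper's own choice; it reuses rbm_visible_distribution() from the sketch in Section 1:

```python
import numpy as np

# One hidden unit approximating 0.5*(delta_{0...0} + delta_{1...1}):
# p(v) is proportional to exp(-a|v|)(1 + exp(a(2|v| - n))), which puts
# essentially all mass on |v| = 0 and |v| = n for large a.
n, a = 4, 20.0
W = 2 * a * np.ones((1, n))       # single hidden unit
C = np.array([-a * n])
B = -a * np.ones(n)
states, p = rbm_visible_distribution(W, C, B)
for v, pv in zip(states, p):
    if pv > 1e-6:
        print(v, round(float(pv), 6))  # ~0.5 on (0,0,0,0) and (1,1,1,1)

# Red curves of Figure 3: the Theorem 5.1 bound (n-1) - log2(m+1).
for m in range(8):
    print(m, max(0.0, (n - 1) - np.log2(m + 1)))
```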
5 Maximal Approximation Errors of RBMs
Let m < 2^{n-1} − 1. By Theorem 4.1 all partition models for partitions of {0,1}^n into m+1 cubical sets are contained in RBM_{n,m}. Applying Corollary 3.3 to such a partition where the cardinality of all blocks is at most 2^{n-⌊log(m+1)⌋} yields the bound D_{RBM_{n,m}} ≤ n − ⌊log(m+1)⌋. Similarly, using mixtures of product distributions, Theorem 4.1 and Corollary 3.4 imply the smaller bound D_{RBM_{n,m}} ≤ n − 1 − ⌊log(m+1)⌋. In this section we derive an improved bound which strictly decreases, as m increases, until 0 is reached.

Theorem 5.1 Let m ≤ 2^{n-1} − 1. Then the maximal Kullback-Leibler divergence from any distribution on {0,1}^n to RBM_{n,m} is upper bounded by

\[ \max_{p \in P} D(p\| \mathrm{RBM}_{n,m}) \le (n-1) - \log(m+1) . \]

Conversely, given an error tolerance 0 ≤ ε ≤ 1, the choice m ≥ 2^{(n-1)(1-ε)} − 1 ensures a sufficiently rich RBM model that satisfies D_{RBM_{n,m}} ≤ ε D_{RBM_{n,0}}.

For m = 2^{n-1} − 1 the error vanishes, corresponding to the fact that an RBM with that many hidden units is a universal approximator. In Figure 3 we use computer experiments to illustrate Theorem 5.1. The proof makes use of the following lemma:

Lemma 5.2 Let n_1,...,n_m ≥ 0 be such that 2^{n_1} + ··· + 2^{n_m} = 2^n. Let M be the union of all mixtures of independence models corresponding to all cubical partitions of X into blocks of cardinalities 2^{n_1},...,2^{n_m}. Then

\[ D_M \le \sum_{i : n_i > 1} \frac{n_i - 1}{2^{\,n-n_i}} . \]

Proof of Lemma 5.2 The proof is by induction on n. If n = 1, then m = 1 or m = 2, and in both cases it is easy to see that the inequality holds (both sides vanish). If n > 1, then order the n_i such that n_1 ≥ n_2 ≥ ··· ≥ n_m ≥ 0. Without loss of generality assume m > 1. Let p ∈ P(X), and let Y be a cubical subset of X of cardinality 2^{n-1} such that p(Y) ≤ ½. Since the numbers 2^{n_1} + ··· + 2^{n_i} for i = 1,...,m contain all multiples of 2^{n_1} up to 2^n and 2^n/2^{n_1} is even, there exists k such that 2^{n_1} + ··· + 2^{n_k} = 2^{n-1} = 2^{n_{k+1}} + ··· + 2^{n_m}.

Let M′ be the union of all mixtures of independence models corresponding to all cubical partitions ξ = {X_1,...,X_m} of X into m blocks of cardinalities 2^{n_1},...,2^{n_m} such that X_1 ∪ ··· ∪ X_k = Y. In the following, the symbol Σ′ shall denote summation over all indices i such that n_i > 1. By induction,

\[ D(p\|M) \le D(p\|M') \le p(Y) \sum_{i=1}^{k}{}' \frac{n_i-1}{2^{\,n-1-n_i}} + p(X\setminus Y) \sum_{j=k+1}^{m}{}' \frac{n_j-1}{2^{\,n-1-n_j}} . \qquad (5) \]

There exist j_1 = k+1 < j_2 < ··· < j_k < j_{k+1} = m+1 such that 2^{n_i} = 2^{n_{j_i}} + ··· + 2^{n_{j_{i+1}}-1} for all i ≤ k. Note that

\[ \sum_{j=j_i}^{j_{i+1}-1}{}' \frac{n_j-1}{2^{\,n-1-n_j}} \le \frac{n_i-1}{2^{\,n-1}} \left( 2^{n_{j_i}} + \cdots + 2^{n_{j_{i+1}}-1} \right) = \frac{n_i-1}{2^{\,n-1-n_i}} , \]

and therefore

\[ \Big( \tfrac{1}{2} - p(Y) \Big) \frac{n_i-1}{2^{\,n-1-n_i}} + \Big( \tfrac{1}{2} - p(X\setminus Y) \Big) \sum_{j=j_i}^{j_{i+1}-1}{}' \frac{n_j-1}{2^{\,n-1-n_j}} \ \ge\ 0 . \]

Adding these terms for i = 1,...,k to the right hand side of equation (5) yields

\[ D(p\|M) \le \frac{1}{2} \sum_{i=1}^{k}{}' \frac{n_i-1}{2^{\,n-1-n_i}} + \frac{1}{2} \sum_{j=k+1}^{m}{}' \frac{n_j-1}{2^{\,n-1-n_j}} , \]

from which the assertion follows. □

Proof of Theorem 5.1 From Theorem 4.1 we know that RBM_{n,m} contains the union M of all mixtures of independence models corresponding to all partitions with up to m+1 cubical blocks. Hence, D_{RBM_{n,m}} ≤ D_M. Let k = n − ⌈log(m+1)⌉ and l = 2m + 2 − 2^{n-k+1} ≥ 0; then l·2^{k-1} + (m+1−l)·2^k = 2^n. Lemma 5.2 with n_1 = ··· = n_l = k−1 and n_{l+1} = ··· = n_{m+1} = k implies

\[ D_M \le \frac{l(k-2)}{2^{\,n-k+1}} + \frac{(m+1-l)(k-1)}{2^{\,n-k}} = k - \frac{m+1}{2^{\,n-k}} . \]
The assertion follows from log(m+1) ≤ (n−k) + (m+1)/2^{n-k} − 1, where log(1+x) ≤ x for all x ≤ 0 was used. □

6 Conclusion
We studied the expressive power of the Restricted Boltzmann Machine model with n visible and m hidden units. We presented a hierarchy of explicit classes of probability distributions that an RBM can represent. These classes include large collections of mixtures of m+1 product distributions, in particular any mixture of an arbitrary product distribution and m further product distributions with disjoint supports. The geometry of these submodels is easier to study than that of the RBM models, while these subsets still capture many of the distributions contained in the RBM models. Using these results we derived bounds for the approximation errors of RBMs. We showed that it is always possible to reduce the error to at most (n−1) − log(m+1). That is, given any target distribution, there is a distribution within the RBM model for which the Kullback-Leibler divergence between both is not larger than that number. Our results give a theoretical basis for selecting the size of an RBM which accounts for a desired error tolerance. Computer experiments showed that the bound captures the order of magnitude of the true approximation error, at least for small examples. However, learning may not always find the best approximation, resulting in an error that may well exceed our bound.

Acknowledgments Nihat Ay acknowledges support by the Santa Fe Institute.

References
[1] N. Ay and A. Knauf. Maximizing multi-information. Kybernetika, 42:517-538, 2006.
[2] N. Ay, G. Montúfar, and J. Rauh. Selection criteria for neuromanifolds of stochastic dynamics. International Conference on Cognitive Neurodynamics, 2011.
[3] N. Ay and T. Wennekers. Dynamical properties of strongly interacting Markov chains. Neural Networks, 16:1483-1497, 2003.
[4] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. NIPS, 2007.
[5] L. Brown. Fundamentals of Statistical Exponential Families: With Applications in Statistical Decision Theory. Inst. Math. Statist., Hayworth, CA, USA, 1986.
[6] M. A. Carreira-Perpiñán and G. E. Hinton. On contrastive divergence learning. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics, 2005.
[7] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 2006.
[8] M. A. Cueto, J. Morton, and B. Sturmfels. Geometry of the Restricted Boltzmann Machine. In M. A. G. Viana and H. P. Wynn, editors, Algebraic Methods in Statistics and Probability II, AMS Special Session. AMS, 2010.
[9] P. Diaconis and D. Freedman. Finite exchangeable sequences. Ann. Probab., 8:745-764, 1980.
[10] Y. Freund and D. Haussler. Unsupervised learning of distributions on binary vectors using 2-layer networks. NIPS, pages 912-919, 1992.
[11] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Comput., 14:1771-1800, 2002.
[12] G. E. Hinton. A practical guide to training Restricted Boltzmann Machines, version 1. Technical report, UTML2010-003, University of Toronto, 2010.
[13] G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for Deep Belief Nets. Neural Comput., 18:1527-1554, 2006.
[14] S. Kullback and R. Leibler. On information and sufficiency. Ann. Math. Stat., 22:79-86, 1951.
[15] N. Le Roux and Y. Bengio. Representational power of Restricted Boltzmann Machines and Deep Belief Networks. Neural Comput., 20(6):1631-1649, 2008.
[16] N. Le Roux and Y. Bengio. Deep Belief Networks are compact universal approximators. Neural Comput., 22:2192-2207, 2010.
[17] B. Lindsay. Mixture Models: Theory, Geometry, and Applications. Inst. Math. Statist., 1995.
[18] P. M. Long and R. A. Servedio. Restricted Boltzmann Machines are hard to approximately evaluate or simulate. In Proceedings of the 27th ICML, pages 703-710, 2010.
[19] G. Montúfar. Mixture decompositions using a decomposition of the sample space. ArXiv 1008.0204, 2010.
[20] G. Montúfar. Mixture models and representational power of RBMs, DBNs and DBMs. NIPS Deep Learning and Unsupervised Feature Learning Workshop, 2010.
[21] G. Montúfar and N. Ay. Refinements of universal approximation results for Deep Belief Networks and Restricted Boltzmann Machines. Neural Comput., 23(5):1306-1319, 2011.
[22] J. Rauh. Finding the maximizers of the information divergence from an exponential family. PhD thesis, Universität Leipzig, 2011.
[23] J. Rauh, T. Kahle, and N. Ay. Support sets of exponential families and oriented matroids. Int. J. Approx. Reason., 52(5):613-626, 2011.
[24] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory. In Symposium on Parallel and Distributed Processing, 1986.
[25] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning). MIT Press, March 1998.
[26] K. G. Zahedi, N. Ay, and R. Der. Higher coordination with less control - a result of information maximization in the sensori-motor loop. Adaptive Behavior, 18(3-4):338-355, 2010.
Randomized Algorithms for Comparison-based Search

Dominique Tschopp, AWK Group, Bern, Switzerland
Suhas Diggavi, University of California Los Angeles (UCLA), Los Angeles, CA 90095
Soheil Mohajer, Princeton University, Princeton, NJ 08544
Payam Delgosha, Sharif University of Technology, Tehran, Iran

Abstract
This paper addresses the problem of finding the nearest neighbor (or one of the R-nearest neighbors) of a query object q in a database of n objects, when we can only use a comparison oracle. The comparison oracle, given two reference objects and a query object, returns the reference object most similar to the query object. The main problem we study is how to search the database for the nearest neighbor (NN) of a query, while minimizing the questions. The difficulty of this problem depends on properties of the underlying database. We show the importance of a characterization: combinatorial disorder D, which defines approximate triangle inequalities on ranks. We present a lower bound of Ω(D log(n/D) + D²) on the average number of questions in the search phase for any randomized algorithm, which demonstrates the fundamental role of D for worst case behavior. We develop a randomized scheme for NN retrieval in O(D³ log² n + D log² n log log n^{D³}) questions. The learning requires asking O(nD³ log² n + nD log² n log log n^{D³}) questions and O(n log² n / log(2D)) bits to store.

1 Introduction
Consider the situation where we want to search and navigate a database, but the underlying relationships between the objects are unknown and are accessible only through a comparison oracle. The comparison oracle, given two reference objects and a query object, returns the reference object most similar to the query object. Such an oracle attempts to model the behavior of human users, capable of making statements about similarity, but not of assigning meaningful numerical values to distances between objects. These situations could occur in many tasks, such as recommendation for movies, restaurants etc., or a human-assisted search system for image databases, among other applications. Using such an oracle, the best we can hope for is to obtain, for every object u in the database, a ranking of the other objects according to their similarity to u. However, using the oracle to get complete ranking information could be costly, since invoking the oracle represents human input to the task (preferences in movies, comparison of images etc.). We can pre-process the database by asking questions during a learning phase, and use the resulting answers to facilitate the search process. Therefore, the main question we ask in this paper is how to design an (approximate) nearest neighbor retrieval algorithm while minimizing the number of questions to such an oracle.

Clearly the difficulty of searching using such an oracle depends critically on the properties of the set of objects. We demonstrate the importance of a characterization which determines the performance of comparison-based search algorithms. Combinatorial disorder (introduced by Goyal et al. [1]) defines approximate triangle inequalities on ranks. Roughly speaking, it defines a multiplicative factor D by which the triangle inequality on ranks can be violated. We show our first lower bound of Ω(D log(n/D) + D²) on the average number of questions in the search phase for any randomized algorithm, and therefore demonstrate the fundamental importance of D for worst case behavior.
When the disorder is known, we can use partial rank information to estimate, or infer, the other ranks. This allows us to design a novel hierarchical scheme which considerably improves the existing bounds for nearest neighbor search based on a similarity oracle, and performs provably close to the lower bound. If no characterization of the hidden space can be used as an input, we develop algorithms that can decompose the space such that dissimilar objects are likely to get separated, and similar objects have the tendency to stay together, generalizing the notion of randomized k-d-trees [2]. This is developed in more detail in [3]. Due to space constraints, we give statements of the results along with an outline of proof ideas in the main text. Additionally we provide proof details in the appendix [4] as extra material allowed by NIPS.

Relationship to published works: The nearest neighbor (NN) search problem has been very well studied for metric spaces (see [5]). However, in all these works, it is assumed that one can compute distances between points in the data set. In [6, 7, 8, 9, 10, 11], various approaches to measure similarities between images are presented, which could be used as comparison oracles in our setup. The algorithmic aspects of searching with a comparison oracle were first studied in [1], where a random walk algorithm is presented. The main limitation of this algorithm is the fact that all rank relationships need to be known in advance, which amounts to asking the oracle O(n² log n) questions, in a database of size n. In [12], a data structure similar in spirit to the ε-nets of [13] is introduced. It is shown that a learning phase with complexity O(D⁷ n log² n) questions and a space complexity of O(D⁵ n + Dn log n) allows to retrieve the NN in O(D⁴ log n) questions. The learning phase builds a hierarchical structure based on coverings of exponentially decreasing radii. In this paper, we present what we believe is the first lower bound for search through comparisons. This gives a more fundamental meaning to D as a parameter determining worst case behavior. Based on the insights gained from this worst case analysis, we then improve (see Section 3) the existing upper bounds by a poly(D) factor, if we are willing to accept a negligible (less than 1/n) probability of failure. Our algorithm is based on random sampling, and can be seen as a form of metric skip list (as introduced in [14]), but applied to a combinatorial (non-metric) framework. However, the fact that we do not have access to distances forces us to use new techniques in order to minimize the number of questions (or ranks we need to compute). In particular, we sample the database at different densities, and infer the ranks from the density of the sampling, which we believe is a new technique. We also need to relate samples to each other when building the data structure top down. A natural question to ask is whether one can develop data structures for NN when a characterization of the underlying space is unknown. In [2], when one has access to metric distances, a binary tree decomposition of a dataset that adapts to its "intrinsic dimension" [13] has been designed. We extend the result of [2] to our setup, where we have a comparison oracle but do not have access to metric distances. This can be used in a manner similar to [2] to find the (approximate) NN (see [3] for more details). To the best of our knowledge, the notion of randomized NN search using a similarity oracle is studied for the first time in this paper.
Moreover, the hierarchical search scheme proposed is more efficient than earlier schemes. The lower bound presented appears to be new and demonstrates that our schemes are (almost) efficient.

2 Definitions and Problem Statement
We consider a hidden space K, and a database of objects T ⊆ K, with |T| = n. We can only access this space through a similarity oracle which, for any point q ∈ K and objects u, v ∈ T, returns

\[ O(q,u,v) = \begin{cases} u & \text{if } u \text{ is more similar to } q \text{ than } v \\ v & \text{else.} \end{cases} \qquad (1) \]

The goal is to develop and analyse algorithms which, for any given q ∈ K, can find an object in the database a ∈ T which is the nearest neighbor (NN) to q, using the smallest number of questions of type (1). We also relax this goal to find the approximate NN with "high probability". The algorithm may have a learning phase, in which it explores the structure of the database, and stores it using a certain amount of memory. Note that this phase has to be done prior to knowing the query q ∈ K. Then, once the query is given, the search phase of the algorithm asks a certain number of questions of type (1) and finds the closest object in the database. The performance of the algorithm is measured by three components among which there could be a trade-off: the number of questions asked in the learning phase, the number of questions asked in the searching phase, and the total memory to be stored. The main goal of this work is to design algorithms for NN search and characterize their performance in terms of these parameters. We will present some definitions which are required to state the results of this paper.

Definition 1. The rank of u in a set S with respect to v, r_v(u, S), is equal to c if u is the c-th nearest object to v in S, i.e., |{w ∈ S : d(w,v) < d(u,v)}| = c − 1, where d(w,v) < d(u,v) could be interpreted as a distance function. Also the rank ball B_x(r) is defined to be {y : r_x(y, S) ≤ r}.

Note that we do not need the existence of a distance function in Definition 1. We could replace d(w,v) < d(u,v) with "v is more similar to w than u" by using the oracle in (1). To simplify the notation, we only indicate the set if it is unclear from the context, i.e., we write r_v(u) instead of r_v(u, S) unless there is an ambiguity. Note that rank need not be a symmetric relationship between objects, i.e., r_u(v) ≠ r_v(u) in general. Further, note that we can rank m objects w.r.t. an object o by asking the oracle O(m log m) questions, using standard sort algorithms [15].

Our characterization of the space of objects is through a form of approximate triangle inequalities introduced in [1] and [12]. Instead of defining inequalities between distances, these triangle inequalities are defined over ranks, and depend on a property of the space called the disorder constant.

Definition 2. The combinatorial disorder of a set of objects S is the smallest D such that ∀x, y, z ∈ S we have the following approximate triangle inequalities:
(i) r_x(y, S) ≤ D(r_z(x, S) + r_z(y, S))
(ii) r_x(y, S) ≤ D(r_x(z, S) + r_y(z, S))
(iii) r_x(y, S) ≤ D(r_x(z, S) + r_z(y, S))
(iv) r_x(y, S) ≤ D(r_z(x, S) + r_y(z, S))
In particular, r_x(x, S) = 0 and r_x(y, S) ≤ D r_y(x, S).

3 Contributions
Our contributions are the following: (i) we design a randomized hierarchical data structure with which we can do NN search using the comparison oracle; (ii) we develop the first lower bound for the search complexity in the combinatorial framework of [1, 12], and thereby demonstrate the importance of combinatorial disorder.
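To make these definitions concrete, here is a small self-contained sketch (ours, not from the paper, with an ordinary Euclidean distance standing in for the hidden similarity): a counting comparison oracle of type (1), the O(m log m) oracle-based ranking mentioned after Definition 1, and a brute-force computation of the disorder constant of Definition 2:

```python
import functools
import itertools
import numpy as np

class ComparisonOracle:
    """Oracle of type (1): O(q, u, v) returns whichever of u, v is closer
    to q under a hidden distance, and counts the questions asked.  The
    Euclidean distance is only a stand-in for the hidden similarity."""
    def __init__(self, points):
        self.points, self.questions = points, 0
    def __call__(self, q, u, v):
        self.questions += 1
        d = lambda a, b: np.linalg.norm(self.points[a] - self.points[b])
        return u if d(q, u) < d(q, v) else v

def ranking(oracle, q, objects):
    """Sort `objects` by similarity to q using O(m log m) oracle calls;
    the rank of u w.r.t. q is then 1 + its index in the returned list."""
    cmp = lambda u, v: -1 if oracle(q, u, v) == u else 1
    return sorted(objects, key=functools.cmp_to_key(cmp))

def disorder(points):
    """Smallest D satisfying the four approximate triangle inequalities
    of Definition 2, by brute force over all rank triples (small n only)."""
    n = len(points)
    oracle = ComparisonOracle(points)
    r = np.zeros((n, n), int)                  # r[x, y] = rank of y w.r.t. x
    for x in range(n):
        others = [y for y in range(n) if y != x]
        for c, y in enumerate(ranking(oracle, x, others)):
            r[x, y] = c + 1
    D = 1.0                                    # D >= 1 covers repeated points
    for x, y, z in itertools.permutations(range(n), 3):
        D = max(D,
                r[x, y] / (r[z, x] + r[z, y]), r[x, y] / (r[x, z] + r[y, z]),
                r[x, y] / (r[x, z] + r[z, y]), r[x, y] / (r[z, x] + r[y, z]))
    return D

print(disorder(np.random.default_rng(1).normal(size=(10, 2))))
```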
The performance of the randomized algorithm (see (i)) is shown to be close to this lower bound. We also develop a binary tree decomposition that adapts to the data set in a manner analogous to [2]. More precisely, we prove a lower bound on the average search time to retrieve the nearest neighbor of a query point for randomized algorithms in the combinatorial framework.

Theorem 1. There exist a space and a configuration of a database of n objects in that space such that, for the uniform distribution over placements of the query point q, no randomized search algorithm, even if O(n³) questions can be asked in the learning phase, can find q's nearest neighbor in the database for sure (with a probability of error of 0) by asking less than an expected Ω(D² + D log(n/D)) questions in the worst case, when D < √n.

As a consequence of this theorem, there must exist at least one query point in this configuration which requires asking at least Ω(D log(n/D) + D²) questions, hence setting a lower bound on the search complexity. Based on the insights gained from this worst case analysis, we introduce a conceptually simple randomized hierarchical scheme that allows us to reduce the learning cost compared to the existing algorithm (see [12, 1]) by a factor D⁴, the memory consumption by a factor D⁵/log² n, and the search cost by a factor D/(log n log log n^{D³}).

Theorem 2. We design a randomized algorithm which, for a given query point q, can retrieve its nearest neighbor with high probability in O(D³ log² n + D log² n log log n^{D³}) questions. The learning requires asking O(nD³ log² n + nD log² n log log n^{D³}) questions, and we need to store O(n log² n / log(2D)) bits.

Consequently, our schemes are asymptotically (for n) within Dpolylog(n) questions of the optimal search algorithm.

4 Lower Bounds for NNS
A natural question to ask is whether there are databases and query points for which we need to ask a minimal number of questions, independent of the algorithm used. In this section, we construct a database T of n objects, a universe of queries K∖T, and similarity relationships for which no search algorithm can find the NN of a query point in less than expected Ω(D log(n/D) + D²) questions. We show this even when all possible questions O(u,v,w) related to the n database objects (i.e., u, v, w ∈ T) can be asked during the learning phase. The query is chosen uniformly from the universe of queries and is unknown during the learning phase.

Database Structure: Consider the weighted graph shown in Fig. 1. It consists of a star with α branches ρ_1, ρ_2, ..., ρ_α, each composed of n/α² supernodes (SN). Each of the supernodes in turn contains α database objects (i.e., objects in T). Clearly, in total there are α · α · (n/α²) = n objects. Note that the database T only includes the set of objects inside the supernodes; the supernodes themselves are not elements of T. We indicate the objects in each branch by numbers from 1 to n/α. We define the set of queries, M, as follows: every query point q is attached to one object from T on each branch of the star with an edge; this object is called a direct node (DN) on the corresponding branch. Moreover, we assume that the weights of all query edges, the α edges connecting the query to its DNs, are different. Therefore, the set of all queries, M, can be restricted to α!(n/α)^α elements, since there are n/α choices for choosing the direct node in each branch (i.e., (n/α)^α choices for α branches), and the weights of the query edges can be ordered in α! different ways.
In this example, distance between two nodes is given by the weighted graph distance, and the oracle answers queries based on this distance. All edges connecting the SNs to each other have weight 1, except those α edges emitting from the center of the star and ending at the first SNs, which have weight n/α². Edges connecting the objects in a supernode to its root are called object edges. We assume that all n/α object edges in branch ρ_i have weight i/(4α). It remains to fix the weights of the query edges; we will define them in the following.

Definition 3. For a query q ∈ M, define the α-tuple σ_q ∈ {1, 2, ..., n/α}^α to be the sequence of DNs of q on the α branches, i.e., σ_q(i) denotes the index of the object on ρ_i which is connected to q via a query edge. We also represent the rank of the DNs w.r.t. q by an α-tuple π_q ∈ {1, ..., α}^α, i.e., π_q(i) denotes the rank of the DN on branch ρ_i among all the other DNs w.r.t. q.

Now we can define the weights of the query edges. For a query q ∈ M, the weight of the query edge which connects q to σ_q(i) is given to be 1 + (π_q(i)/α)ε, where ε ≤ 1/(4α) is a constant.

As mentioned before, the disorder constant plays an important role in the performance of the algorithm. The following lemma gives the disorder constant for the database introduced. The proof of this lemma is presented in the appendix [4].

Lemma 1. The star-shaped graph introduced above has disorder constant D = Θ(α).

Figure 1 (image omitted): The star database: a weighted star graph with α branches, each composed of n/α² "supernodes". Each supernode further includes α database objects. Finally, each query point is randomly connected to one object on each branch of the star via a weighted edge. The weights of the edges are chosen so that the disorder constant is D = Θ(α).

The Lower Bound: In the proof of Theorem 1, we will use Yao's minimax principle (see [16]), which states that, for any distribution on the inputs, the expected cost of the best deterministic algorithm provides a lower bound on the worst case running time of any randomized algorithm. In the following, we state two lower bounds for the number of questions in the searching phase of any deterministic algorithm for the database illustrated in Fig. 1.

Proposition 1. The number of questions asked by a deterministic algorithm A, on average w.r.t. the uniform distribution, to solve the NNS problem in the star graph is lower bounded by Ω(α log(n/α)).

To outline the proof of this claim: each question asked by the algorithm involves two database nodes. Note that the weights of the edges emitting from the center of the graph are chosen so that the branches become independent, in the sense that questioning nodes on one branch will not reveal any information about other branches. Therefore, in order to find the nearest node to q, the algorithm has to find the direct node on each branch, and then compare them to find the NN. For any branch ρ_i, there are n/α candidates which can be the DN of q with equal probability. Hence, roughly speaking, the algorithm needs to ask Ω(log(n/α)) questions for each branch. This yields a minimum total of Ω(α log(n/α)) questions for the α independent branches of the graph.

Proposition 2. Any deterministic algorithm A solving the nearest neighbor search problem on the input query set M with uniform distribution should ask on average Ω(α²) questions from the oracle.

To outline the proof of this claim: consider an arbitrary branch ρ_i and assume a genie tells us which supernode on ρ_i contains the DN for q.
However, we do not know which of p_1, p_2, ..., p_α, the nodes inside the revealed supernode, is the DN of q on ρ_i. Since all the edges connecting the supernode to its children have the same weight, questioning just some of them is not sufficient to find the direct node, and effectively all of them should be asked on average. Since each question involves at most two such nodes, Ω(α) questions are required to find the DN on ρ_i. Summing the same number over all α branches, we obtain the Ω(α²) lower bound on the number of questions.

Theorem 1 is a direct consequence of the two propositions above.

Proof of Theorem 1. Let A be an arbitrary deterministic algorithm which solves the NNS problem in the star-shaped graph with uniform distribution. If Q_A denotes the average number of questions A asks, according to Proposition 1 and Proposition 2 we have

\[ Q_A \ge \max\left\{ \Omega\big(\alpha \log(n/\alpha)\big),\ \Omega(\alpha^2) \right\} \ge \frac{1}{2}\left[ \Omega\big(\alpha \log(n/\alpha)\big) + \Omega(\alpha^2) \right] = \Omega\big(\alpha^2 + \alpha \log(n/\alpha)\big) . \qquad (2) \]

By using Yao's minimax principle, we can conclude Theorem 1. □

We can show that this bound is the best one can find for this dataset. Indeed, we present an algorithm in the appendix [4] which finds the query's nearest neighbor by asking Θ(α² + α log(n/α)) questions.

5 Hierarchical Data Structure for Nearest-Neighbor Search
In this section we develop the search algorithm that guarantees the performance stated in Theorem 2. The learning phase is described in Algorithm 1. The algorithm builds a hierarchical decomposition level by level, top-down. At each level, we sample objects from the database. The set of samples at level i is denoted by S_i, and we have |S_i| = m_i = a(2D)^i log n, where a is a constant independent¹ of n and D. At each level i, every object in T is put in the "bin" of the sample in S_i closest to it. To find this sample at level i, for every object p we rank the samples in S_i w.r.t. p (by using the oracle to make pairwise comparisons). However, we show that, given that we know D, we only need to rank those samples that fell in the bins of one of the at most 4aD log n nearest samples to p at level i−1. This is a consequence of the fact that we carefully chose the density of the samples at each level. Further, the fact that we build the hierarchy top-down allows us to use the answers to the questions asked at level i to reduce the number of questions we need to ask at level i+1. This way, the number of questions per object does not increase as we go down the hierarchy, even though the number of samples increases. For an object p, τ_p(i) denotes the nearest neighbor to p in S_i. We want to keep the γ_i = n/(2D)^{i-1} closest objects in S_i to p in the set Γ_p(i), i.e., all objects o ∈ S_i such that r_p(o, T) ≤ γ_i. It can be shown that for an object o to be in Γ_p(i) it is necessary that τ_o(i−1) be in Γ_p(i−1). Therefore, by taking Λ_p(i) = {o ∈ S_i | τ_o(i−1) ∈ Γ_p(i−1)} we have Γ_p(i) ⊆ Λ_p(i). It can be verified that |Γ_p(i)| ≤ 4aD log n, so Γ_p(i) can be constructed by finding the 4aD log n closest objects to p in Λ_p(i). The first object in Γ_p(i) is then τ_p(i). Therefore we can recursively build Λ_p(i), Γ_p(i) and τ_p(i) for 1 ≤ i ≤ log n/log 2D for any object p, as is done in the algorithm. The role of the macros BuildHeap and ExtractMin is to build a heap from unordered data and to extract the minimum element from the heap, respectively. Although they are well-known and standard algorithms, we present them in the appendix [4] for completeness. The search process is described in Algorithm 2.
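Before the algorithm listings and their analysis, the following compressed sketch (ours) may help fix ideas about the two procedures just described. It reuses ComparisonOracle and ranking() from the sketch in Section 2, replaces BuildHeap/ExtractMin with a plain sort, and substitutes a fallback for the "Report Failure" branch, so it illustrates the mechanics rather than the exact complexity guarantees:

```python
import math
import numpy as np

def build_hierarchy(oracle, n, D, a=1.0, seed=0):
    """Sketch of Algorithm 1: random sample levels S_i of size ~ a(2D)^i log n;
    for every object p, Gamma_p(i) keeps the ~4aD log n closest samples among
    the candidate set Lambda_p(i), and tau_p(i) is the closest one."""
    rng = np.random.default_rng(seed)
    L = max(1, math.ceil(math.log(n) / math.log(2 * D)))     # number of levels
    keep = max(1, math.ceil(4 * a * D * math.log(n)))        # 4aD log n
    levels, gamma, tau = [], {}, {}
    for i in range(1, L + 1):
        size = min(n, math.ceil(a * (2 * D) ** i * math.log(n)))
        S_i = [int(x) for x in rng.choice(n, size=size, replace=False)]
        levels.append(S_i)
        for p in range(n):
            if i == 1:
                cand = S_i
            else:  # Lambda_p(i) = {o in S_i : tau_o(i-1) in Gamma_p(i-1)}
                prev = set(gamma[p, i - 1])
                cand = [o for o in S_i if tau[o, i - 1] in prev]
            cand = [o for o in cand if o != p] or S_i  # guard: "Report Failure"
            ordered = ranking(oracle, p, cand)         # sort stands in for heap
            gamma[p, i] = ordered[:keep]
            tau[p, i] = ordered[0]
    return levels, gamma, tau, keep

def nn_search(oracle, q, levels, tau, keep):
    """Sketch of Algorithm 2: descend the hierarchy keeping, at each level,
    the `keep` closest samples to q among the candidates Lambda_q(i); the
    closest sample at the bottom level is the (approximate) NN w.h.p."""
    gamma_q = ranking(oracle, q, levels[0])[:keep]
    for i in range(2, len(levels) + 1):
        prev = set(gamma_q)
        cand = [p for p in levels[i - 1] if tau[p, i - 1] in prev] or levels[i - 1]
        gamma_q = ranking(oracle, q, cand)[:keep]
    return gamma_q[0]

# Toy run: 200 database objects, index 200 serves as the query point.
pts = np.random.default_rng(3).normal(size=(201, 2))
oracle = ComparisonOracle(pts)
levels, gamma, tau, keep = build_hierarchy(oracle, n=200, D=2)
print(nn_search(oracle, 200, levels, tau, keep), "questions:", oracle.questions)
```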
The key idea is that the sample closest to the query point on the lowest level will be its NN. Hence, by repeating the same process used for inserting objects into the database, we can retrieve the NN w.h.p. We first bound the number of questions asked by Algorithm 1 (w.h.p.) in Theorem 3. Having this result, the proof of Theorem 2 is then immediate.

Theorem 3. Algorithm 1 succeeds with probability higher than 1 − 1/n, and it requires asking no more than O(nD³ log² n + nD log² n log log n^{D³}) questions w.h.p.

We first state a technical lemma that we will need to prove Theorem 3. The proof can be found in the appendix [4].

Lemma 2. Let a be a constant and γ_i = n/(2D)^{i-1}. For every object p ∈ T ∪ {q}, where q is the query point, the following five properties of the data structure are true w.h.p.:
1. |S_i ∩ B_p(γ_{i+1})| ≥ 1
2. |S_i ∩ B_p(γ_i)| ≤ 4aD log n
3. |S_{i+1} ∩ B_p(γ_{i-1})| ≤ 16aD³ log n
4. |S_i ∩ B_p(4γ_i)| ≥ 4aD log n
5. |S_{i+1} ∩ B_p(4γ_{i-1})| ≤ 64aD³ log n

Proof of Theorem 3. Let m_i = a(2D)^i log n denote the number of objects we sample at level i, and let S_i be the set of samples at level i, i.e., |S_i| = m_i. Here, a is an appropriately chosen constant, independent of D and n. Further, let γ_i = n/(2D)^{i-1}. From now on, we assume that we are in the situation where Properties (1) to (5) in Lemma 2 are true for all objects (which is the case w.h.p.). Again, fix an object p. For each object p, we need to find τ_p(i), which is the nearest neighbor to p in S_i. In order to be able to continue this procedure at every level, we keep a wider set of objects: those objects in S_i that have rank less than γ_{i+1} with respect to p at level i; we store them in Γ_p(i) (Property 1 tells us that such objects exist); in this way the first object in Γ_p(i) is τ_p(i). In practice our algorithm stores some redundant objects in Γ_p(i), but we claim that in total no more than 4aD log n objects are stored in Γ_p(i+1). To summarize, the properties we want to maintain at each level are: 1) ∀p ∈ T and 1 ≤ i ≤ log n/log 2D, S_i ∩ B_p(γ_i) ⊆ Γ_p(i); and 2) |Γ_p(i)| ≤ 4aD log n.

¹ In fact the value of a depends on the error we can tolerate: the more accurate we want to be, the more sample points we need at each level, and the larger a will be.

Algorithm 1: Learning Algorithm
input: A database with n objects p_1, ..., p_n, and disorder constant D
output: For each object u, a vector τ_u of length log n/log(2D). The list of all samples ∪_i S_i
Def.: S_i: the set of a(2D)^i log n random samples at level i, i = 1,...,log n/log(2D);
τ_o: τ_o(i) = nearest neighbor to object o in S_i; o ∈ T, i = 1,...,log n/log(2D);
Γ_o(i): contains the γ_i closest objects to o in S_i, possibly with redundant objects;
Λ_o(i): the set of p ∈ S_i for which τ_p(i−1) ∈ Γ_o(i−1);

for i ← 1 to L = log n/log 2D do
    for p ← 1 to n do
        if i = 1 then
            Γ_p(1) ← S_1
        else
            Λ_p(i) ← {o ∈ S_i | τ_o(i−1) ∈ Γ_p(i−1)};
            if |Λ_p(i)| = 0 then Report Failure
            else
                H ← BuildHeap(Λ_p(i));
                for k ← 1 to 4aD log n do
                    m ← ExtractMin(H); add m to Γ_p(i)
                end
            end
        end
        τ_p(i) ← first object in Γ_p(i);
    end
end

Algorithm 2: Search Algorithm
input: A database with n objects and disorder D, the list of samples, the vectors τ_u for u ∈ T, a query point q
output: The nearest neighbor of q in the database

Γ_q(1) ← S_1;
for i ← 2 to L = log n/log 2D do
    Λ_q(i) ← {p ∈ S_i | τ_p(i−1) ∈ Γ_q(i−1)};
    H ← BuildHeap(Λ_q(i));
    for k ← 1 to 4aD log n do
        m ← ExtractMin(H); add m to Γ_q(i)
    end
end
return first object in Γ_q(log n/log 2D)

In the first step, for all p, Λ_p(1) = S_1, and since |S_1| = 2aD log n < 4aD log n, all the objects in S_1 are extracted from the heap; therefore Γ_p(1) is S_1 ordered with respect to p, and as a result both properties hold when i = 1. The argument for the maintenance of the properties is as follows. Assume they hold up to level i; we analyze level i+1. In fact we want an object s ∈ S_{i+1} to be in Γ_p(i+1) if r_p(s) ≤ γ_{i+1} (note that Property 1 guarantees that there is at least one such sample). Further, let s* ∈ S_i be the sample at level i closest to s, i.e., s* = argmin_{x∈S_i} r_s(x). Again, by Property 1, we know that r_s(s*) ≤ γ_{i+1}. Hence, by the approximate triangle inequality (iii) (see Section 2), we have:

r_p(s, T) ≤ γ_{i+1} and r_s(s*, T) ≤ γ_{i+1} ⟹ r_p(s*, T) ≤ 2Dγ_{i+1} = γ_i ,

hence s* = τ_s(i) ∈ S_i ∩ B_p(γ_i) ⊆ Γ_p(i), using the first property for step i. Therefore τ_s(i) ∈ Γ_p(i), and consequently s ∈ Λ_p(i+1). Property 2 tells us that |S_{i+1} ∩ B_p(γ_{i+1})| ≤ 4aD log n. Hence, by taking the first 4aD log n closest objects to p in Λ_p(i+1) and storing them in Γ_p(i+1), we can ensure both that every s ∈ S_{i+1} with s ∈ B_p(γ_{i+1}) is in Γ_p(i+1), and that |Γ_p(i+1)| ≤ 4aD log n.
Hence, the total memory requirement² does not exceed $O(n \log^2 n / \log(2D))$ bits.

Finally, Properties 1-5 used in the proof of Theorem 3 also hold for an external query object $q$. Hence, to find the closest object to $q$ at every level, we build the same heap structure; the only difference is that instead of repeating this procedure $n$ times at each level, since there is just one query point, we need to ask at most $O(D^3 \log^2 n + D \log^2 n \log\log n^{D^3})$ questions in total. In particular, the closest object at level $L = \log_{2D}(n)$ will be $q$'s nearest neighbor w.h.p.

Note that this scheme can easily be modified for R-nearest-neighbor search. At the $i$-th level of the hierarchy, the closest sample to $q$ will, w.h.p., be one of its $\frac{n}{(2D)^i}$ nearest neighbors. If we are only interested in this level of precision, we can stop the hierarchy construction at the desired level.

² Making the assumption that every object can be uniquely identified with $\log n$ bits.

6 Discussion

The use of a comparison oracle is motivated by a human user who can make comparisons between objects but cannot assign meaningful numerical values to similarities between them. There are many interesting questions raised by studying such a model, including fundamental characterizations of the complexity of search in terms of the number of oracle questions. We also believe that the idea of searching through comparisons forms a bridge between many well-known search techniques in metric spaces and perceptually important (non-metric) situations, and could lead to innovative practical applications. Analogous to locality-sensitive hashing, one can develop notions of rank-sensitive hashing, where "similar" objects, as measured by ranks, are given the same hash value. Some preliminary ideas were given in [3], but we believe this is an interesting line of inquiry. Also in [3], we have implemented comparison-based search heuristics to navigate an image database.

References
[1] N. Goyal, Y. Lifshits, and H. Schütze, "Disorder inequality: A combinatorial approach to nearest neighbor search," in WSDM, 2008, pp. 25-32.
[2] S. Dasgupta and Y. Freund, "Random projection trees and low dimensional manifolds," in STOC, 2008, pp. 537-546.
[3] D. Tschopp, "Routing and search on large scale networks," Ph.D. dissertation, École Polytechnique Fédérale de Lausanne (EPFL), 2010.
[4] D. Tschopp, S. Diggavi, P. Delgosha, and S. Mohajer, "Randomized algorithms for comparison-based search: Supplementary material," 2011, submitted to NIPS as supplementary material.
[5] K. Clarkson, "Nearest-neighbor searching and metric space dimensions," in Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, G. Shakhnarovich, T. Darrell, and P. Indyk, Eds. MIT Press, 2006, pp. 15-59.
[6] Y. Rubner, C. Tomasi, and L. J. Guibas, "The earth mover's distance as a metric for image retrieval," International Journal of Computer Vision, vol. 40, no. 2, pp. 99-121, 2000.
[7] E. Demidenko, "Kolmogorov-Smirnov test for image comparison," in Computational Science and Its Applications - ICCSA, 2004, pp. 933-939.
[8] M. Nachtegael, S. Schulte, V. De Witte, T. Mélange, and E. Kerre, "Image similarity, from fuzzy sets to color image applications," in Advances in Visual Information Systems, 2007, pp. 26-37.
[9] S. Santini and R. Jain, "Similarity measures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 9, pp. 871-883, 1999.
[10] G. Chechik, V. Sharma, U. Shalit, and S. Bengio, "Large scale online learning of image similarity through ranking,"
Journal of Machine Learning Research, vol. 11, pp. 1109-1135, 2010.
[11] A. Frome, Y. Singer, F. Sha, and J. Malik, "Learning globally-consistent local distance functions for shape-based image retrieval and classification," in ICCV, 2007, pp. 1-8.
[12] Y. Lifshits and S. Zhang, "Combinatorial algorithms for nearest neighbors, near-duplicates and small-world design," in SODA, 2009, pp. 318-326.
[13] R. Krauthgamer and J. R. Lee, "Navigating nets: simple algorithms for proximity search," in SODA, 2004, pp. 798-807.
[14] D. R. Karger and M. Ruhl, "Finding nearest neighbors in growth-restricted metrics," in STOC, 2002, pp. 741-750.
[15] T. Cormen, C. Leiserson, R. Rivest, and C. Stein, Introduction to Algorithms. MIT Press and McGraw-Hill, 2001.
[16] R. Motwani and P. Raghavan, Randomized Algorithms. Cambridge University Press, 1995.
A Collaborative Mechanism for Crowdsourcing Prediction Problems

Rafael M. Frongillo, Division of Computer Science, University of California at Berkeley, [email protected]
Jacob Abernethy, Division of Computer Science, University of California at Berkeley, [email protected]

Abstract

Machine Learning competitions such as the Netflix Prize have proven reasonably successful as a method of "crowdsourcing" prediction tasks. But these competitions have a number of weaknesses, particularly in the incentive structure they create for the participants. We propose a new approach, called a Crowdsourced Learning Mechanism, in which participants collaboratively "learn" a hypothesis for a given prediction task. The approach draws heavily from the concept of a prediction market, where traders bet on the likelihood of a future event. In our framework, the mechanism continues to publish the current hypothesis, and participants can modify this hypothesis by wagering on an update. The critical incentive property is that a participant will profit an amount that scales according to how much her update improves performance on a released test set.

1 Introduction

The last several years have revealed a new trend in Machine Learning: prediction and learning problems rolled into prize-driven competitions. One of the first, and certainly the most well known, was the Netflix Prize, released in the fall of 2006. Netflix, aiming to improve the algorithm used to predict users' preferences on its database of films, released a dataset of 100M ratings to the public and asked competing teams to submit a list of predictions on a test set withheld from the public. Netflix offered $1,000,000 to the first team achieving prediction accuracy exceeding a given threshold, a goal that was eventually met. This competitive model for solving a prediction task has since been used for a range of similar competitions, and there is even a new company (kaggle.com) that creates and hosts such competitions. Such prediction competitions have proven quite valuable for a couple of important reasons: (a) they leverage the abilities and knowledge of the public at large, commonly known as "crowdsourcing", and (b) they provide an incentivized mechanism for an individual or team to apply their own knowledge and techniques, which could be particularly beneficial to the problem at hand. This type of prediction competition provides a nice tool for companies and institutions that need help with a given prediction task yet cannot afford to hire an expert. The potential leverage can be quite high: the Netflix Prize winners apparently spent more than $1,000,000 in effort on their algorithm alone.

Despite the extent of its popularity, is the Netflix competition model the ideal way to "crowdsource" a learning problem? We note several weaknesses:

It is anti-collaborative. Competitors are strongly incentivized to keep their techniques private. This is in stark contrast to many other projects that rely on crowdsourcing, Wikipedia being a prime example, where participants must build on the work of others. Indeed, in the case of the Netflix Prize, not only do leading participants lack incentives to share, but the work of non-winning competitors is effectively wasted.

The incentives are skewed and misaligned. The winner-take-all prize structure means that second place is as good as having not competed at all. This ultimately leads to an equilibrium where only a few teams are actually competing, and where potential new teams never form, since catching up seems so unlikely.
In addition, the fixed achievement benchmark, set by Netflix as a 10% improvement in prediction RMSE over a baseline, leads to misaligned incentives. Effectively, the prize structure implies that an improvement of 9.9% is worth nothing to Netflix, whereas a 20% improvement is still only worth $1,000,000 to Netflix. This is clearly not optimal.

The nature of the competition precludes the use of proprietary methods. By requiring that the winner reveal the winning algorithm, the competition deters potential competitors utilizing non-open software or proprietary techniques. By participating in the competition, a user must effectively give away his intellectual property.

In this paper we describe a new and very general mechanism to crowdsource prediction/learning problems. Our mechanism requires participants to place bets, yet the space they are betting over is the set of hypotheses for the learning task at hand. At any given time the mechanism publishes the current hypothesis $w$, and participants can wager on a modification of $w$ to $w'$, upon which the modified $w'$ is posted. Eventually the wagering period finishes, a set of test data is revealed, and each participant receives a payout according to their bets. The critical property is that every trader's profit scales according to how well their modification improved the solution on the test data.

The framework we propose has many qualities similar to those of an information or prediction market, and many of the ideas derive from recent research on the design of automated market makers [7, 8, 3, 4, 1]. Many information markets already exist; at sites like Intrade.com and Betfair.com, individuals can bet on everything ranging from election outcomes to geopolitical events. There has been a burst of interest in such markets in recent years, not least of which is due to their potential for combining large amounts of information from a range of sources. In the words of Hanson et al. [9]: "Rational expectations theory predicts that, in equilibrium, asset prices will reflect all of the information held by market participants. This theorized information aggregation property of prices has led economists to become increasingly interested in using securities markets to predict future events." In practice, prediction markets have proven impressively accurate as a forecasting tool [11, 2, 12].

The central contribution of the present paper is to take the framework of a prediction market as a tool for information aggregation and to apply this tool to the purpose of "aggregating" a hypothesis (classifier, predictor, etc.) for a given learning problem. The crowd of ML researchers, practitioners, and domain experts represents a highly diverse range of expertise and algorithmic tools. In contrast to the Netflix Prize, which pitted teams of participants against each other, the mechanism we propose allows everyone to contribute whatever knowledge they may have towards the final solution. In a sense, this approach decentralizes the process of solving the task, as individual experts can potentially apply their expertise to a subset of the problem on which they have an advantage. Whereas a market price can be thought of as representing a consensus estimate of the value of an asset, our goal is to construct a consensus hypothesis reflecting all the knowledge and capabilities available for a particular learning problem.¹

Layout: We begin in Section 2.1 by introducing the simple notion of a generalized scoring rule $L(\cdot, \cdot)$
representing the "loss function" of the learning task at hand. In Section 2.2 we describe our proposed Crowdsourced Learning Mechanism (CLM) in detail, and discuss how to structure a CLM for a particular scoring function L, in order that the traders are given incentives to minimize L. In Section 3 we give an example based on the design of Huffman codes. In Section 4 we discuss previous work on the design of prediction markets using an automated prediction market maker (APMM). In Section 5 we finish by considering two learning settings (e.g. linear regression) and we construct a CLM for each. The proofs have been omitted throughout, but they are available in the full version of the present paper.

¹ It is worth noting that Barbu and Lay utilized concepts from prediction markets to design algorithms for classifier aggregation [10], although their approach was unrelated to crowdsourcing.

Notation: Given a smooth strictly convex function $R : \mathbb{R}^d \to \mathbb{R}$ and points $x, y \in \mathrm{dom}(R)$, we define the Bregman divergence $D_R(x, y)$ as the quantity $R(x) - R(y) - \nabla R(y) \cdot (x - y)$. For any convex function $R$, we let $R^*$ denote the convex conjugate of $R$, that is, $R^*(y) := \sup_{x \in \mathrm{dom}(R)} y \cdot x - R(x)$. We shall use $\Delta(S)$ to refer to the set of integrable probability distributions over the set $S$, and $\Delta_n$ to refer to the set of probability vectors $p \in \mathbb{R}^n$. The function $H : \Delta_n \to \mathbb{R}$ shall denote the entropy function, that is, $H(p) := -\sum_{i=1}^n p(i) \log p(i)$. We use the notation $\mathrm{KL}(p; q)$ to describe the relative entropy or Kullback-Leibler divergence between distributions $p, q \in \Delta_n$, that is, $\mathrm{KL}(p; q) := \sum_{i=1}^n p(i) \log \frac{p(i)}{q(i)}$. We will also use $e_i \in \mathbb{R}^n$ to denote the $i$th standard basis vector, having a 1 in the $i$th coordinate and 0's elsewhere.

2 Scoring Rules and Crowdsourced Learning Mechanisms

2.1 Generalized Scoring Rules

For the remainder of this section, we shall let H denote some set of hypotheses, which we will assume is a convex subset of $\mathbb{R}^n$. We let O be some arbitrary set of outcomes. We use the symbol X to refer either to an element of O or to a random variable taking values in O. We recall the notion of a scoring rule, a concept that arises frequently in economics and statistics [6].

Definition 1. Let $\mathcal{P} \subseteq \Delta(O)$ be some convex set of distributions on an outcome space O. A scoring rule is a function $S : \mathcal{P} \times O \to \mathbb{R}$ where, for all $P \in \mathcal{P}$, $P \in \arg\max_{Q \in \mathcal{P}} \mathbb{E}_{X \sim P}\, S(Q, X)$.

In other words, if you are paid $S(P, X)$ upon stating belief $P \in \mathcal{P}$ and outcome X occurring, then you maximize your expected utility by stating your true belief. We offer a much weaker notion:

Definition 2. Given a convex hypothesis space $H \subseteq \mathbb{R}^n$ and an outcome space O, let $L : H \times O \to \mathbb{R}$ be a continuous function. Given any $P \in \Delta(O)$, let $W_L(P) := \arg\min_{w \in H} \mathbb{E}_{X \sim P}[L(w; X)]$. Then we say that L is a Generalized Scoring Rule (GSR) if $W_L(P)$ is a nonempty convex set for every $P \in \Delta(O)$.

The generalized scoring rule shall represent the "loss function" for the learning problem at hand, and in Section 2.2 we will see how L is utilized in the mechanism. The hypothesis w shall represent the advice we receive from the crowd, X shall represent the test data to be revealed at the close of the mechanism, and L(w; X) shall represent the loss of the advised w on the data X. Notice that we do not define L to be convex in its first argument, as this does not hold in many important cases. Instead, we require the weaker condition that $\mathbb{E}_X[L(w; X)]$ is minimized on a convex set for any distribution on X. Our scoring rule differs from traditional scoring rules in an important way. Instead of starting with the desire to know the true value of X and then designing a scoring rule which incentivizes participants to elicit their belief $P \in \mathcal{P}$, our objective is precisely to minimize our scoring rule. In other words, traditional scoring rules were a means to an end (eliciting P), but our generalized scoring rule is the end itself. One can recover the traditional scoring rule definition by setting $H = \mathcal{P}$ and imposing the constraint that $P \in W_L(P)$.
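To make Definition 2 concrete, here is a small numerical illustration (mine, not from the paper): for the squared loss $L(w; X) = (w - X)^2$, the set $W_L(P)$ is the singleton $\{\mathbb{E}[X]\}$, which is trivially convex, so squared loss is a GSR. The grid and belief below are arbitrary choices.

import numpy as np

outcomes = np.array([0.0, 1.0, 2.0])
P = np.array([0.2, 0.5, 0.3])               # an arbitrary belief over outcomes
grid = np.linspace(-1, 3, 4001)             # candidate hypotheses w

expected_loss = ((grid[:, None] - outcomes[None, :]) ** 2 * P).sum(axis=1)
w_star = grid[np.argmin(expected_loss)]
print(w_star, (P * outcomes).sum())         # both ~1.1: W_L(P) = {E[X]}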
A useful class of GSRs are those based on a Bregman divergence.

Definition 3. We say that a GSR $L : H \times O \to \mathbb{R}$ is divergence-based if there exists an alternative hypothesis space $H' \subseteq \mathbb{R}^m$, for some m, where we can write

$L(w; X) \equiv D_R(\rho(X), \psi(w)) + f(X)$   (1)

for arbitrary maps $\rho : O \to H'$, $f : O \to \mathbb{R}$, and $\psi : H \to H'$, and any closed strictly convex $R : H' \to \mathbb{R}$ whose convex conjugate $R^*$ is finite on all of $\mathbb{R}^m$.

This property allows us to think of L(w; X) as a kind of distance between $\rho(X)$ and $\psi(w)$. Clearly then, the minimum value of L for a given X will be attained when $\psi(w) = \rho(X)$, given that $D_R(x, x) = 0$ for any Bregman divergence. In fact, as the following proposition shows, we can even think of the expected value $\mathbb{E}[L(w; X)]$ as a distance between $\mathbb{E}[\rho(X)]$ and $\psi(w)$.

Proposition 1. Given a divergence-based GSR $L(w; X) = D_R(\rho(X), \psi(w)) + f(X)$ and a belief distribution P on O, we have $W_L(P) = \psi^{-1}\!\left(\mathbb{E}_{X \sim P}[\rho(X)]\right)$.

We can now see that the divergence-based property greatly simplifies the task of minimizing L; instead of worrying about $\mathbb{E}[L(\cdot; X)]$, one can simply base the hypothesis directly on the expectation $\mathbb{E}[\rho(X)]$. As we will see in Section 4, this also leads to efficient prediction markets and crowdsourcing mechanisms.

2.2 The Crowdsourced Learning Mechanism

We will now define our mechanism rigorously.

Definition 4. A Crowdsourced Learning Mechanism (CLM) is the procedure in Algorithm 1, as defined by the tuple (H, O, Cost, Payout). The function $\mathrm{Cost} : H \times H \to \mathbb{R}$ sets the cost charged to a participant that makes a modification to the posted hypothesis. The function $\mathrm{Payout} : H \times H \times O \to \mathbb{R}$ determines the amount paid to each participant when the outcome is revealed to be X.

Algorithm 1: Crowdsourced Learning Mechanism for (H, O, Cost, Payout)
1: Mechanism sets initial hypothesis to some $w_0 \in H$
2: for rounds t = 0, 1, 2, ... do
3:   Mechanism posts current hypothesis $w_t \in H$
4:   Some participant places a bid on the update $w_t \mapsto w'$
5:   Mechanism charges participant $\mathrm{Cost}(w_t, w')$
6:   Mechanism updates hypothesis $w_{t+1} \leftarrow w'$
7: end for
8: Market closes after T rounds and the outcome (test data) $X \in O$ is revealed
9: for each t do
10:   Participant responsible for the update $w_t \mapsto w_{t+1}$ receives $\mathrm{Payout}(w_t, w_{t+1}; X)$
11: end for

The above procedure describes the process by which participants can provide advice to the mechanism to select a good w, and the profit they earn by doing so. Of course, this profit will precisely determine the incentives of our mechanism, and hence a key question is: how can we design Cost and Payout so that participants are incentivized to provide good hypotheses? The answer is that we shall structure the incentives around a GSR L(w; X) chosen by the mechanism designer.
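Before formalizing the incentive properties, here is a minimal simulation of this trading loop (my own sketch, not the authors' code). It uses the 1-D squared-loss GSR from the earlier snippet; Cost charges the worst-case loss increase over the outcome space, which keeps every payout nonnegative (the escrow idea formalized below), and the final print checks that the mechanism's total payment telescopes to $L(w_0; X) - L(w_T; X)$.

import numpy as np

xs = np.linspace(0.0, 2.0, 201)                 # the outcome space O (discretized)
L = lambda w, X: (w - X) ** 2                   # the GSR to implement

def cost(w, w2):                                # worst-case increase in L over O
    return float(np.max(np.maximum(L(w2, xs) - L(w, xs), 0.0)))

def payout(w, w2, X):                           # forces Profit = L(w;X) - L(w2;X)
    return L(w, X) - L(w2, X) + cost(w, w2)

hypotheses = [0.0, 1.4, 0.9, 1.1]               # w0 followed by three updates
X = 1.05                                        # the revealed test outcome

profits = [payout(hypotheses[t], hypotheses[t + 1], X) - cost(hypotheses[t], hypotheses[t + 1])
           for t in range(len(hypotheses) - 1)]
assert all(payout(hypotheses[t], hypotheses[t + 1], X) >= 0 for t in range(3))
print(sum(profits), L(hypotheses[0], X) - L(hypotheses[-1], X))   # equal: telescoping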
Definition 5. For a CLM $A = (H, O, \mathrm{Cost}, \mathrm{Payout})$, denote the ex-post profit for the bid $(w \mapsto w')$ when the outcome is $X \in O$ by $\mathrm{Profit}(w, w'; X) := \mathrm{Payout}(w, w'; X) - \mathrm{Cost}(w, w')$. We say that A implements a GSR $L : H' \times O \to \mathbb{R}$ if there exists a surjective map $\psi : H \to H'$ such that for all $w_1, w_2 \in H$ and $X \in O$,

$\mathrm{Profit}(w_1, w_2; X) = L(\psi(w_1); X) - L(\psi(w_2); X)$.   (2)

If additionally $H' = H$ and $\psi = \mathrm{id}_H$, we call A an L-CLM and say that A is L-incentivized.

When a CLM implements a given L, the incentives are structured so that the participants will work to minimize L(w; X). Of course, the input X is unknown to the participants, yet we can assume that the mechanism has provided a public "training set" to use in a learning algorithm. The participants are thus asked not only to propose a "good" hypothesis $w_t$ but to wager on whether the update $w_{t-1} \mapsto w_t$ improves generalization error. It is worth making clear that knowledge of the true distribution on X provides a straightforward optimal strategy.

Proposition 2. Given a GSR $L : H \times O \to \mathbb{R}$ and an L-CLM (Cost, Payout), any participant who knows the true distribution $P \in \mathcal{P}$ over X will maximize expected profit by modifying the hypothesis to any $w \in W_L(P)$.

Cost of operating a CLM. It is clear that the agent operating the mechanism must pay the participants at the close of the competition, and is thus at risk of losing money (in fact, it is possible he may gain). How much money is lost depends on the bets $(w_t \mapsto w_{t+1})$ made by the participants and, of course, the final outcome X. The agent has a clear interest in knowing precisely the potential cost; fortunately, this cost is easy to compute. The loss to the agent is clearly the total ex-post profit earned by the participants, and by construction this sum telescopes:

$\sum_{t=0}^{T-1} \mathrm{Profit}(w_t, w_{t+1}; X) = L(w_0; X) - L(w_T; X)$.

This is a simple yet appealing property of the CLM: the agent pays only as much in reward to the participants as it benefits from the improvement of $w_T$ over the initial $w_0$. It is worth noting that this value could be negative when $w_T$ is actually "worse" than $w_0$; in this case, as we shall see in Section 3, the CLM can act as an insurance policy with respect to the mistakes of the participants. A more typical scenario, of course, is that the participants provide an improved hypothesis, in which case the CLM will run at a cost. We can compute $\mathrm{WorstCaseLoss}(L\text{-CLM}) := \max_{w \in H,\, X \in O} (L(w_0; X) - L(w; X))$. Given a budget of size $B, the mechanism can always rescale L so that WorstCaseLoss(L-CLM) = B. This requires, of course, that the WorstCaseLoss is finite.

Computational efficiency of operating a CLM. We shall say that a CLM has the efficient computation (EC) property if both Cost and Payout are efficiently computable functions. We shall say a CLM has the tractable trading (TT) property if, given a current hypothesis w, a belief $P \in \Delta(O)$, and a budget B, one can efficiently compute an element of the set

$\arg\max_{w' \in H} \left\{ \mathbb{E}_{X \sim P}\left[\mathrm{Profit}(w, w'; X)\right] \;:\; \mathrm{Cost}(w, w') \leq B \right\}$.

The EC property ensures that the mechanism operator can run the CLM efficiently. The TT property says that participants can compute the optimal hypothesis to bet on, given a belief on the outcome and a budget. This is absolutely essential for the CLM to successfully aggregate the knowledge and expertise of the crowd: without it, despite their motivation to lower $L(\cdot; \cdot)$, the participants would not be able to compute the optimal bet.

Suitable collateral requirements. We say that a CLM has the escrow (ES) property if the Cost and Payout functions are structured so that, given any wager $(w \mapsto w')$, we have $\mathrm{Payout}(w, w'; X) \geq 0$ for all $X \in O$.
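As an illustration of the TT property (again a toy with made-up numbers): for the 1-D squared-loss CLM sketched earlier, a budget-constrained trader solves a one-dimensional constrained problem, here by brute-force grid search. Note how the budget limits how far the trader can move the hypothesis toward the optimum $\mathbb{E}[X] = 1.0$.

import numpy as np

K, B, w = 2.0, 0.5, 0.0                         # outcome range, budget, posted hypothesis
P_outcomes = np.array([0.5, 1.0, 1.5])
P_probs = np.array([0.25, 0.5, 0.25])           # the trader's belief P
xs = np.linspace(0, K, 201)

def cost(w1, w2):                               # same escrow-style cost as before
    return max(0.0, float(np.max((w2 - xs) ** 2 - (w1 - xs) ** 2)))

grid = np.linspace(0, K, 2001)
exp_profit = [np.sum(P_probs * ((w - P_outcomes) ** 2 - (g - P_outcomes) ** 2))
              if cost(w, g) <= B else -np.inf for g in grid]
print(grid[int(np.argmax(exp_profit))])         # ~0.707: the best affordable bid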
It is clear that, when designing an L-CLM for a particular L, the Payout function is fully specified once Cost is fixed, since we have the relation $\mathrm{Payout}(w, w'; X) = L(w; X) - L(w'; X) + \mathrm{Cost}(w, w')$ for every $w, w' \in H$ and $X \in O$. A curious reader might ask: why not simply set $\mathrm{Cost}(w, w') \equiv 0$ and $\mathrm{Payout} \equiv \mathrm{Profit}$? The problem with this approach is that potentially $\mathrm{Payout}(w, w'; X) < 0$, which implies that the participant who wagered on $(w \mapsto w')$ can be indebted to the mechanism and could default on this obligation. Thus the Cost function should be set so as to require every participant to deposit at least enough collateral in escrow to cover any possible losses.

Subsidizing with a voucher pool. One practical weakness of a wagering-based mechanism is that individuals may be hesitant to participate when it requires depositing actual money into the system. This can be allayed to a reasonable degree by including a voucher pool, where each of the first m participants may receive a voucher in the amount of $C. These candidates need not pay to participate, yet have the opportunity to win. Of course, these vouchers must be paid for by the agent running the mechanism, and hence a value of mC is added to the total operational cost.

3 A Warm-up: Compressing an Unfamiliar Data Stream

Let us now introduce a particular setting motivated by a well-known problem in information theory. Imagine a firm is looking to do compression on an unfamiliar channel, from which it will receive a stream of m characters from an n-letter alphabet, which we index by [n]. The goal is to select a binary encoding of this alphabet in such a way as to minimize the total number of bits required to store the data, at a cost of $1 per bit. A first-order approach to encoding such a stream is to assign a probability distribution $q \in \Delta_n$ to the alphabet and to encode character i with a binary word of length $\log(1/q(i))$ (we ignore rounding for simplicity). This can be achieved using Huffman codes, for example, and we refer the reader to Cover and Thomas ([5], Chapter 5) for more details. Thus, given a distribution q, the firm pays $L(q; i) = -\log q(i)$ for each character i. It is easy to see that if the characters are sampled from some "true" distribution p, then the expected cost is $L(q; p) := \mathbb{E}_{i \sim p}[L(q; i)] = \mathrm{KL}(p; q) + H(p)$, which is minimized at $q = p$. Not knowing the true distribution p, the firm is thus interested in finding a q with a low expected cost L(q; p). An attractive option available to the firm is to crowdsource the task of lowering this cost by setting up an L-CLM. It is reasonably likely that outside individuals have private information about the behavior of the channel and, in particular, may be able to provide a better estimate q of the true distribution of the characters in the channel. As just discussed, the better the estimate, the cheaper the compression. We set $H = \Delta_n$ and $O = [n]$, where a hypothesis q represents the proposed distribution over the n characters, and X is a character sampled uniformly from the stream after it has been observed. We define Cost and Payout as

$\mathrm{Cost}(q, q') := \max_{i \in [n]} \log\!\left(\frac{q(i)}{q'(i)}\right)$,   $\mathrm{Payout}(q, q'; i) := \log\!\left(\frac{q'(i)}{q(i)}\right) + \mathrm{Cost}(q, q')$,

which is clearly an L-CLM for the loss defined above, since $\mathrm{Profit}(q, q'; i) = \log(q'(i)/q(i)) = L(q; i) - L(q'; i)$. It is worth noting that L is a divergence-based GSR if we take $R(q) = -H(q)$, $\rho(i) = e_i$, $f \equiv 0$, $\psi \equiv \mathrm{id}_{\Delta_n}$, using the convention $0 \log 0 = 0$ (in fact, L is the LMSR).
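A quick numerical sanity check of the incentive and escrow properties of this construction (the distributions below are arbitrary):

import numpy as np

def L(q, i):                       # log loss per character
    return -np.log(q[i])

def cost(q, q2):                   # Cost(q, q') = max_i log(q(i)/q'(i))
    return np.max(np.log(q / q2))

def payout(q, q2, i):              # Payout(q, q'; i) = log(q'(i)/q(i)) + Cost(q, q')
    return np.log(q2[i] / q[i]) + cost(q, q2)

q  = np.array([0.25, 0.25, 0.25, 0.25])     # initial (uniform) guess
q2 = np.array([0.40, 0.30, 0.20, 0.10])     # a participant's update
for i in range(4):
    profit = payout(q, q2, i) - cost(q, q2)
    assert np.isclose(profit, L(q, i) - L(q2, i))   # L-incentivized
    assert payout(q, q2, i) >= -1e-12               # escrow: payout never negative
print("profit equals loss improvement; payouts nonnegative")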
Finally, the firm will initially set $q_0$ to its best guess of p, which we will assume to be uniform (but it need not be).

We have devised this payout scheme according to the selection of a single character i, and it is worth noting that because this character is sampled uniformly at random from the stream (with private randomness), the participants cannot know which character will be released. This forces the participants to wager on the empirical distribution $\hat{p}$ of the characters in the stream. A reasonable alternative, and one which lowers the payment variance, is to pay out according to $L(q; \hat{p})$, which is also equal to the average of L(q; i) when i is chosen uniformly from the stream.

The obvious question to ask is: how does this CLM benefit the firm that wants to design the encoding? More precisely, if the firm uses the final estimate $q_T$ from the mechanism instead of the initial guess $q_0$, what is the trade-off between the money paid to participants and the money gained by using the crowdsourced hypothesis? At first glance, it appears that this trade-off can be arbitrarily bad: the worst-case cost of encoding the stream using the final estimate $q_T$ is $\sup_{i, q_T} -\log(q_T(i)) = \infty$. Amazingly, however, by virtue of the aligned incentives, the firm has very strong control of its total cost (the CLM cost plus the encoding cost). Suppose the firm scales L by a parameter $\lambda$, to separate the scale of the CLM from the scale of the encoding cost (which we assumed to be $1 per bit). Then given any initial estimate $q_0$ and final estimate $q_T$, the total expected cost over p is

Total expected cost = [encoding cost of using $q_T$ given p] + [mechanism's cost of getting advice $q_T$]
  $= H(p) + \mathrm{KL}(p; q_T) + \lambda\left(\mathrm{KL}(p; q_0) - \mathrm{KL}(p; q_T)\right)$
  $= H(p) + (1 - \lambda)\,\mathrm{KL}(p; q_T) + \lambda\,\mathrm{KL}(p; q_0)$.

Let us spend a moment analyzing this expression. Imagine that the firm sets $\lambda = 1$. Then the total cost of the firm would be $H(p) + \mathrm{KL}(p; q_0)$, which is bounded by $\log n$ for $q_0$ uniform. Notice that this expression does not depend on $q_T$; in fact, this cost corresponds precisely to the scenario where the firm had not set up a CLM and instead used the initial estimate $q_0$ to encode. In other words, for $\lambda = 1$, the firm is entirely neutral to the quality of the estimate $q_T$; even if the CLM provided an estimate $q_T$ which performed worse than $q_0$, the cost increase due to the bad choice of q is recouped from the payments of the ill-informed participants. The firm may not want to be neutral to the estimate of the crowd, however, and under the reasonable assumption that the final estimate $q_T$ will improve upon $q_0$, the firm should set $0 < \lambda < 1$ (of course, positivity is needed for nonzero payouts). In this case, the firm strictly gains by using the CLM when $\mathrm{KL}(p; q_T) < \mathrm{KL}(p; q_0)$, but it still has some insurance if the estimate $q_T$ is poor.

4 Prediction Markets as a Special Case

Let us briefly review the literature on the type of prediction markets relevant to the present work. In such a prediction market, we imagine a future event to reveal one of n uncertain outcomes. Hanson [7, 8] proposed a framework in which traders make "reports" to the market about their internal belief in the form of a distribution $p \in \Delta_n$. Each trader receives a reward (or loss) based on a function of their reported belief and the belief of the previous trader, and the function suggested by Hanson was the Logarithmic Market Scoring Rule (LMSR).
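Hanson's scheme is easy to state in code. In this toy sketch (a standard description of the LMSR payments, not specific to this paper's formalism), each trader replaces the posted distribution and is later paid the log-scoring-rule difference against the previous report; note that, just like the CLM profits of Section 2.2, the payments telescope.

import numpy as np

reports = [np.array([1/3, 1/3, 1/3]),        # the market's initial report
           np.array([0.5, 0.3, 0.2]),        # trader 1's report
           np.array([0.6, 0.3, 0.1])]        # trader 2's report
b = 1.0                                       # liquidity parameter (assumed)
X = 0                                         # realized outcome

payments = [b * (np.log(reports[t + 1][X]) - np.log(reports[t][X]))
            for t in range(len(reports) - 1)]
print(payments, sum(payments),
      b * (np.log(reports[-1][X]) - np.log(reports[0][X])))   # sum telescopes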
It was shown later that the LMSR-based market is equivalent to what is known as a cost-function-based automated market maker, proposed by Chen and Pennock [3]. More recently, a much broader equivalence was established by Chen and Wortman Vaughan [4] between markets based on cost functions and those based on scoring rules.

The market framework proposed by Chen and Pennock allows traders to buy and sell Arrow-Debreu securities (equivalently: shares, contracts), where an Arrow-Debreu security corresponding to outcome i pays out $1 if and only if i is realized. All shares are bought and sold through an automated market maker, which is the entity managing the market and setting prices. At any time period, traders can purchase bundles of contracts $r \in \mathbb{R}^n$, where r(i) represents the number of shares purchased on outcome i. The price of a bundle r is set as $C(s + r) - C(s)$, where C is some differentiable convex cost function and $s \in \mathbb{R}^n$ is the "quantity vector" representing the total number of outstanding shares. The LMSR cost function is $C(s) := \eta^{-1} \log\left(\sum_{i=1}^{n} \exp(\eta\, s(i))\right)$.

This cost-function framework was extended by Abernethy et al. [1] to deal with prohibitively large outcome spaces. When the set of potential outcomes O is of exponential size or even infinite, the market designer can offer a restricted number of contracts, say $n$ ($\ll |O|$), rather than offering an Arrow-Debreu contract for each member of O. To determine the payout structure, the market designer chooses a function $\rho : O \to \mathbb{R}^n$, where contract i returns a payout of $\rho_i(X)$ and, thus, a contract bundle r pays $\rho(X) \cdot r$. As in the framework of Chen and Pennock, the contract prices are set according to a cost function C, so that a bundle r has a price of $C(s + r) - C(s)$. The design of the function C is addressed at length in Abernethy et al., to which we refer the reader. For the remainder of this section we shall discuss the prediction market template of Abernethy et al., as it provides the most general model; we shall refer to such a market as an Automated Prediction Market Maker. We now state precisely the ingredients of this framework.

Definition 6. An Automated Prediction Market Maker (APMM) is defined by a tuple $(S, O, \rho, C)$ where S is the share space of the market, which we will assume to be the linear space $\mathbb{R}^n$; O is the set of outcomes; $C : S \to \mathbb{R}$ is a smooth and convex cost function with $\nabla C(S) = \mathrm{relint}(\nabla C(S))$ (here, we use $\nabla C(S) := \{\nabla C(s) \mid s \in S\}$ to denote the derivative space of C); and $\rho : O \to \nabla C(S)$ is a payoff function.²

Fortunately, we need not provide a full description of the procedure of the APMM mechanism: the APMM is precisely a special case of a CLM! Indeed, the APMM framework can be described as a CLM (H, O, Cost, Payout) where

$H = S \;(= \mathbb{R}^n)$,   $\mathrm{Cost}(s, s') = C(s') - C(s)$,   $\mathrm{Payout}(s, s'; X) = \rho(X) \cdot (s' - s)$.   (3)

Hence we can think of APMM prediction markets in terms of our learning mechanism. Markets of this form are an important special class of CLMs; in particular, we can guarantee that they are efficient to work with, as we show in the following proposition.

Proposition 3. An APMM $(S, O, \rho, C)$ with efficiently computable C satisfies EC and TT.

We now ask: what is the learning problem that the participants of an APMM are trying to solve? More precisely, when we think of an APMM as a CLM, does it implement a particular L?

Theorem 1. Given an APMM $A := (S, O, \rho, C)$, A implements the GSR $L : \nabla C(S) \times O \to \mathbb{R}$ defined by

$L(w; X) = D_{C^*}(\rho(X), w)$,   (4)

where $C^*$ is the conjugate dual of the function C.
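Equation (3) and Theorem 1 can be verified numerically for the complete-market LMSR with $\eta = 1$ and $\rho(i) = e_i$: there $C^*$ is the negative entropy restricted to the simplex, so $L(w; i) = D_{C^*}(e_i, w) = -\log w(i)$, where $w = \nabla C(s)$ is the softmax of the quantity vector. This small check is my own illustration, not code from the paper.

import numpy as np

def C(s):                                    # LMSR cost function, eta = 1
    return np.log(np.sum(np.exp(s)))

def w_of(s):                                 # market state -> hypothesis: w = grad C(s)
    e = np.exp(s - s.max())
    return e / e.sum()

def L(w, i):                                 # D_{C*}(e_i, w) for C* = negative entropy
    return -np.log(w[i])

s, s2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, -0.5, 0.2])
for i in range(3):                           # Profit = rho(X).(s'-s) - (C(s')-C(s))
    profit = (s2[i] - s[i]) - (C(s2) - C(s))
    assert np.isclose(profit, L(w_of(s), i) - L(w_of(s2), i))
print("APMM profit matches the divergence-based loss difference (Theorem 1)")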
There is another, more subtle benefit to APMMs (and, in fact, to most prediction market mechanisms in practice), which is that participants make bets via the purchase of shares or share bundles. When a trader makes a bet, she purchases a contract bundle r, is charged $C(s + r) - C(s)$ (when the current quantity vector is s), and will receive payout $\rho(X) \cdot r$ if and when X is realized. But at any point before X is observed, while trading is open, the trader can sell off this bundle, to the APMM or to another trader, and hence neutralize her risk. In this sense, bets made in an APMM are stateless, whereas for an arbitrary CLM this may not be the case: the wager defined by $(w_t \mapsto w_{t+1})$ cannot necessarily be sold back to the mechanism, as the posted hypothesis may no longer remain at $w_{t+1}$.

Given a learning problem defined by the GSR $L : H \times O \to \mathbb{R}$, it is natural to ask whether we can design a CLM which implements this L and has this "share-based" property of APMMs. More precisely, under what conditions is it possible to implement L with an APMM?

Theorem 2. For any divergence-based GSR $L(w; X) = D_R(\rho(X), \psi(w)) + f(X)$, with $\psi : H \to H'$ one-to-one, $H' = \mathrm{relint}(H')$, and $\rho(O) \subseteq \psi(H)$, there exists an APMM which implements L.

We point out, as a corollary, that if an APMM implements some arbitrary L, then we must be able to write L as a divergence function. This fully specifies the class of problems solvable using APMMs.

² The conditions that $\rho(O) \subseteq \nabla C(S)$ and $\nabla C(S) = \mathrm{relint}(\nabla C(S))$ are technical but important, and we do not address these details in the present extended abstract, although they will be considered in the full version. More relevant discussion can also be found in Abernethy et al. [1].

Corollary 1. If an APMM $(S, O, \rho, C)$ implements a GSR $L : H \times O \to \mathbb{R}$, then L is divergence-based.

Theorem 1 establishes a strong connection between prediction markets and a natural class of GSRs. One interpretation of this result is that any GSR based on a Bregman divergence has a "dual" characterization as a share-based market, where participants buy and sell shares rather than directly altering the share prices (the hypothesis). This has many advantages for prediction markets, not least of which is that shares are often easier to think about than the underlying hypothesis space. Our notion of a CLM offers another interpretation. In light of Proposition 3, any machine learning problem whose hypotheses can be evaluated in terms of a divergence leads to a tractable crowdsourcing mechanism, as was the case in Section 3. Moreover, this theorem does not preclude efficient yet non-divergence-based loss functions, as we see in the next section.

5 Example CLMs for Typical Machine Learning Tasks

Regression. We now construct a CLM for a typical regression problem. We let H be the $\ell_2$-norm ball of radius 1 in $\mathbb{R}^d$, and we shall let an outcome be a batch of data, that is, $X := \{(x_1, y_1), \ldots, (x_n, y_n)\}$, where for each i we have $x_i \in \mathbb{R}^d$, $y_i \in [-1, 1]$, and we assume $\|x_i\|_2 \leq 1$. We construct a GSR according to the mean squared error, $L(w; \{(x_i, y_i)\}_{i=1}^n) = \frac{\eta}{2n} \sum_{i=1}^n (w \cdot x_i - y_i)^2$ for some parameter $\eta > 0$. It is worth noting that L is not divergence-based. In order to satisfy the escrow property (ES), we can set $\mathrm{Cost}(w, w') := 2\eta \|w - w'\|_2$, because the function L(w; X) is $2\eta$-Lipschitz with respect to w for any X. To ensure that the CLM is L-incentivized, we must set $\mathrm{Payout}(w, w'; X) := \mathrm{Cost}(w, w') + L(w; X) - L(w'; X)$. If we set the initial hypothesis $w_0 = 0$, it is easy to check that $\mathrm{WorstCaseLoss} = \eta/2$.
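Before turning to tractability, here is a sketch of this regression CLM in code; randomly generated data stands in for the withheld test batch, and the dimensions and sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 5, 50, 1.0
Xb = rng.normal(size=(n, d))
Xb /= np.maximum(1, np.linalg.norm(Xb, axis=1, keepdims=True))   # ||x_i|| <= 1
yb = np.clip(Xb @ rng.normal(size=d), -1, 1)                     # y_i in [-1, 1]

def L(w):                                       # the regression GSR on this batch
    return eta / (2 * n) * np.sum((Xb @ w - yb) ** 2)

def cost(w, w2):                                # 2*eta*||w - w'|| covers the worst case
    return 2 * eta * np.linalg.norm(w2 - w)

def payout(w, w2):
    return cost(w, w2) + L(w) - L(w2)

w, w2 = np.zeros(d), rng.normal(size=d)
w2 /= max(1, np.linalg.norm(w2))                # keep the hypothesis in the unit ball
assert payout(w, w2) >= 0                       # escrow holds, by the Lipschitz bound
print(payout(w, w2) - cost(w, w2), L(w) - L(w2))   # profit = loss improvement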
It remains to check whether this CLM is tractable. It is clear that we can efficiently compute Cost and Payout, so the EC property holds. Given how Cost is defined, the set $\{w' : \mathrm{Cost}(w, w') \leq B\}$ is just an $\ell_2$-norm ball. Also, since L is convex in w for each X, maximizing the expected profit $\mathbb{E}_{X \sim P}[\mathrm{Profit}(w, w'; X)]$ over $w'$ amounts to minimizing the convex function $\mathbb{E}_{X \sim P}[L(w'; X)]$ for every P. A budget-constrained profit-maximizing participant must therefore simply solve a convex optimization problem, and hence the TT property holds.

Betting Directly on the Labels. Let us return our attention to the Netflix Prize model discussed in the Introduction. In this style of competition, a host releases a dataset for a given prediction task. The host then requests participants to provide predictions on a specified set of instances for which it has the correct labels. For every submission, the host computes an error measure, say the MSE, and reports this to the participants. Of course, the correct labels are withheld throughout. Our CLM framework is general enough to apply to this problem framework as well. Define $H = O = K^m$, where $K \subseteq \mathbb{R}$ bounded is the set of valid labels, and m is the number of requested test-set predictions. For some $w \in H$ and $y \in O$, w(k) specifies the kth predicted label, and y(k) specifies the kth true label. A natural scoring function is the total squared loss, $L(w; y) := \sum_{k=1}^m (w(k) - y(k))^2$. Of course, this approach is quite different from the Netflix Prize model in two key respects: (a) the participants have to wager on their predictions, and (b) by participating in the mechanism they reveal their modification to all of the other players. Hence, while we have structured a competitive process, the participants are de facto forced to collaborate on the solution.

A reasonable critique of this collaborative-mechanism approach to a Netflix-style competition is that it does not provide the instant feedback of the "leaderboard", where individuals observe performance improvements in real time. However, we can adjust our mechanism to be online with a very simple modification of the CLM protocol, which we sketch here. Rather than making payouts in one large batch at the end, the competition designer could perform a mini-payout at the end of each of a sequence of time intervals. At each interval, the designer could select a (potentially random) subset S of user/movie pairs in the remaining test set, freeze updates on the predictions w(k) for all $k \in S$, and perform payouts to the participants on only these labels. What makes this possible, of course, is that the generalized scoring rule we chose decomposes as a sum over the individual labels.

Acknowledgments. We gratefully acknowledge the support of the NSF under award DMS-0830410, a Google University Research Award, and the National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a.

References
[1] J. Abernethy, Y. Chen, and J. Wortman Vaughan. An optimization-based framework for automated market-making. In Proceedings of the 12th ACM Conference on Electronic Commerce, 2011.
[2] J. E. Berg, R. Forsythe, F. D. Nelson, and T. A. Rietz. Results from a dozen years of election futures markets research. In C. A. Plott and V. Smith, editors, Handbook of Experimental Economic Results. 2001.
[3] Y. Chen and D. M. Pennock. A utility framework for bounded-loss market makers. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, 2007.
[4] Y. Chen and J. Wortman Vaughan. A new understanding of prediction markets via no-regret learning.
In Proceedings of the 11th ACM Conference on Electronic Commerce, 2010.
[5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
[6] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359-378, 2007.
[7] R. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):105-119, 2003.
[8] R. Hanson. Logarithmic market scoring rules for modular combinatorial information aggregation. Journal of Prediction Markets, 1(1):3-15, 2007.
[9] R. Hanson, R. Oprea, and D. Porter. Information aggregation and manipulation in an experimental market. Journal of Economic Behavior & Organization, 60(4):449-459, 2006.
[10] N. Lay and A. Barbu. Supervised aggregation of classifiers using artificial prediction markets. In ICML, pages 591-598, 2010.
[11] J. Ledyard, R. Hanson, and T. Ishikida. An experimental test of combinatorial information markets. Journal of Economic Behavior and Organization, 69:182-189, 2009.
[12] J. Wolfers and E. Zitzewitz. Prediction markets. Journal of Economic Perspectives, 18(2):107-126, 2004.
Sequence learning with hidden units in spiking neural networks

Johanni Brea, Walter Senn and Jean-Pascal Pfister
Department of Physiology, University of Bern, Bühlplatz 5, CH-3012 Bern, Switzerland
{brea, senn, pfister}@pyl.unibe.ch

Abstract

We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns. Given biologically realistic stochastic neuronal dynamics, we derive a tractable learning rule for the synaptic weights towards hidden and visible neurons that leads to optimal recall of the training sequences. We show that learning synaptic weights towards hidden neurons significantly improves the storage capacity of the network. Furthermore, we derive an approximate online learning rule and show that our learning rule is consistent with Spike-Timing Dependent Plasticity in that if a presynaptic spike shortly precedes a postsynaptic spike, potentiation is induced, and otherwise depression is elicited.

1 Introduction

Learning to produce temporal sequences is a general problem that the brain needs to solve. Movements, songs or speech all require the generation of specific spatio-temporal patterns of neural activity that have to be learned. Early attempts to model sequence learning used a simple asymmetric Hebbian learning rule [10, 20, 6] and succeeded in storing sequences of random patterns, but perform poorly as soon as there are temporal correlations between the patterns [3]. Later work on pattern storage or sequence learning recognized the need for matching the storage rule with the recall dynamics [2, 18, 12] and derived the optimal storage rule for a given recall dynamics [2, 18] or an optimal recall dynamics for a given storage rule [12], but did not consider hidden neurons and therefore restricted the class of patterns that can be learned. Other studies [14] included a reservoir of hidden neurons but assumed the weights towards the hidden neurons to be fixed. Finally, Boltzmann machines [1], which learn to produce a given distribution of patterns with visible and hidden neurons, have been applied to sequence learning [9, 22, 21], but they are trained with Contrastive Divergence [8] and either use an approximation that neglects the influence of the future or a nonlocal and non-causal learning rule.

Here we start by defining a stochastic neuronal dynamics, which can be arbitrarily complicated (e.g. with non-Markovian dependencies). This stochastic dynamics defines the overall probability distribution, which is parametrized by the synaptic weights. The goal of learning is to adapt the synaptic weights such that the model distribution approximates the target distribution of temporal sequences as well as possible. This can be seen as an extension of the maximum likelihood approach of Barber [2], where we add stochastic hidden neurons with plastic weights. In order to learn the weights, we implement a variant of the Expectation-Maximization (EM) algorithm [5], where we use importance sampling in the expectation step in a way that makes the sampling procedure easy.

[Figure 1: Graphical representation of the conditional dependencies of the joint distribution over visible and hidden sequences; each panel shows stochastic hidden neurons $h_{t-1}, h_t$ above stochastic visible neurons $v_{t-1}, v_t$. A: Graphical model used for the derivation of the learning rule in Section 2 and the example in Section 4. B: Markovian model used in the example with binary neurons in Section 3.]
The resulting learning rule is local (but modulated by a global factor), causal, and biologically relevant in the sense that it shares important features with Spike-Timing Dependent Plasticity (STDP). We also derive an online version of the learning rule and show numerically that it performs almost equally well as the exact batch learning rule.

2 Learning a distribution of sequences

Let us consider temporal sequences $v = \{v_{t,i} \mid t = 0 \ldots T,\; i = 1 \ldots N_v\}$ of $N_v$ visible neurons over the interval $[0, T]$. We will use the notation $v_t = \{v_{t,i} \mid i = 1 \ldots N_v\}$ and $v_{t_1:t_2} = \{v_{t,i} \mid t = t_1 \ldots t_2,\; i = 1 \ldots N_v\}$ to denote parts of the sequence. Note that $v = v_{0:T}$ denotes the whole sequence. Those visible sequences $v$ are drawn i.i.d. from a target distribution $P^*(v)$ that must be learned by a model which consists of $N_v$ visible neurons and $N_h$ hidden neurons. The model distribution over those visible sequences is denoted by $P_\theta(v) = \sum_h P_\theta(v, h)$, where $\theta$ denotes the model parameters, $h = \{h_{t,i} \mid t = 0 \ldots T,\; i = 1 \ldots N_h\}$ the hidden temporal sequence and $P_\theta(v, h)$ the joint distribution over the visible and the hidden sequences. The natural way to quantify the mismatch between the target distribution $P^*(v)$ and the model distribution $P_\theta(v)$ is given by the Kullback-Leibler divergence:

$$D_{KL}(P^*(v) \,\|\, P_\theta(v)) = \sum_v P^*(v) \log \frac{P^*(v)}{P_\theta(v)}. \qquad (1)$$

If the joint model distribution $P_\theta(v, h)$ is differentiable with respect to the model parameters $\theta$, then the sequence learning problem can be phrased as gradient descent on the KL divergence in Eq. (1):

$$\Delta\theta = \eta \left\langle \frac{\partial \log P_\theta(v, h)}{\partial \theta} \right\rangle_{P_\theta(h|v)\, P^*(v)}, \qquad (2)$$

where $\eta$ is the learning rate and we used the fact that $\frac{\partial}{\partial\theta} \log P_\theta(v) = \frac{1}{P_\theta(v)} \frac{\partial}{\partial\theta} \sum_h P_\theta(v, h) = \sum_h P_\theta(h|v) \frac{\partial}{\partial\theta} \log P_\theta(v, h)$. Eq. (2) can be seen as a variant of the EM algorithm [5, 16, 3], where the expectation $\langle \cdot \rangle_{P_\theta(h|v) P^*(v)}$ corresponds to the E step and the gradient of $\log P_\theta(v, h)$ is related to the M step.¹ Instead of calculating the true expectation in Eq. (2) analytically, it is possible to approximate it by sampling the visible sequences $v$ from the target distribution $P^*(v)$ and the hidden sequences from the posterior distribution $P_\theta(h|v)$ given the visible ones. Note that the posterior distribution $P_\theta(h|v)$ could be hard to sample from. Indeed, at a time $t$ the posterior distribution over $h_t$ does not only depend on the past visible activity but also on the future visible activity, since it is conditioned on the whole visible activity $v_{0:T}$ from time step 0 to $T$. This renders a true challenge for online algorithms. In the case of Hidden Markov Model training, the forward-backward algorithm [4, 19] combines information from the past (by forward filtering) and from the future (by backward smoothing) to calculate $P_\theta(h|v)$. If the statistical model does not have the Markovian property, the problem of calculating $P_\theta(h|v)$ (or sampling from it) becomes even harder. Here, we propose an alternative solution that does not require sampling from $P_\theta(h|v)$ and does not require the Markovian assumption (see [11, 17] for other approaches on sampling $P_\theta(h|v)$). We exploit the fact that in all neuronal network models of interest, neuronal firing at any time point is conditionally independent given the past activity of the network.

¹ Strictly speaking, the M step of the EM algorithm directly calculates the solution $\theta_{\text{new}}$ for which $\frac{\partial}{\partial\theta} \log P_\theta(v, h) = 0$, whereas in Eq. (2) only one step is taken in the direction of the gradient.
Using the chain rule, this means that we can write the joint distribution $P_\theta(v, h)$ (see Fig. 1A) as

$$P_\theta(v, h) = P_\theta(v_0) \underbrace{\prod_{t=1}^{T} \prod_{i=1}^{N_v} P_\theta(v_{t,i} \mid v_{0:t-1}, h_{0:t-1})}_{R_\theta(v|h)} \; P_\theta(h_0) \underbrace{\prod_{t=1}^{T} \prod_{i=1}^{N_h} P_\theta(h_{t,i} \mid v_{0:t-1}, h_{0:t-1})}_{Q_\theta(h|v)}, \qquad (3)$$

where $R_\theta(v|h)$ is easy to calculate (see below) and $Q_\theta(h|v)$ is easy to sample from. The sampling can be accomplished by clamping the visible neurons to a target sequence $v$ and letting the hidden dynamics run, i.e. at time $t$, $h_t$ is sampled from $P_\theta(h_t \mid v_{0:t-1}, h_{0:t-1})$.²

From Eq. (3), the posterior distribution $P_\theta(h|v)$ can be written as

$$P_\theta(h|v) = \frac{R_\theta(v|h)\, Q_\theta(h|v)}{P_\theta(v)}, \qquad (4)$$

where the marginal distribution over the visible sequences $v$ can also be expressed as $P_\theta(v) = \langle R_\theta(v|h) \rangle_{Q_\theta(h|v)}$. As a consequence, by using Eq. (4), the learning rule in Eq. (2) can be rewritten as

$$\Delta\theta = \eta \sum_{v,h} P^*(v)\, P_\theta(h|v) \frac{\partial \log P_\theta(v, h)}{\partial \theta} = \eta \sum_{v,h} P^*(v)\, Q_\theta(h|v) \frac{R_\theta(v|h)}{P_\theta(v)} \frac{\partial \log P_\theta(v, h)}{\partial \theta} = \eta \left\langle \frac{R_\theta(v|h)}{\langle R_\theta(v|h') \rangle_{Q_\theta(h'|v)}} \frac{\partial \log P_\theta(v, h)}{\partial \theta} \right\rangle_{Q_\theta(h|v)\, P^*(v)}. \qquad (5)$$

Instead of calculating the true expectation, Eq. (5) can be evaluated by using $N$ samples (see Algorithm 1), where the factor $\gamma_\theta(v, h) := R_\theta(v|h) / \langle R_\theta(v|h') \rangle_{Q_\theta(h'|v)}$ acts as the importance weight [15]. Note that in the absence of hidden neurons this factor $\gamma_\theta(v, h)$ is equal to one and the maximum likelihood learning rule [2, 18] is recovered.

² Note that for other conditional dependencies it might be reasonable to split $P_\theta(h|v)$ differently. For example, in models with the structure of Hidden Markov Models one could make use of the fact that $P_\theta(h|v) = \prod_{t=0}^{T-1} P_\theta(h_t \mid v_{0:t}, h_{t+1}) = \prod_{t=0}^{T-1} \frac{P_\theta(h_{t+1} \mid h_t)}{P_\theta(h_{t+1} \mid v_{0:t})} P_\theta(h_t \mid v_{0:t})$ and take the product of filtering distributions $Q_\theta(h|v) = \prod_{t=0}^{T-1} P_\theta(h_t \mid v_{0:t})$ to sample from and use the importance weights $R_\theta(v, h) = \prod_{t=0}^{T-1} \frac{P_\theta(h_{t+1} \mid h_t)}{P_\theta(h_{t+1} \mid v_{0:t})}$. Following the reasoning in the main text one finds an alternative to the forward-backward algorithm [4, 19] that might be interesting to investigate further.

Algorithm 1: Sequence learning (batch mode)
  Set an initial θ
  while θ not converged do
    v ~ P*(v)
    γ(v) = 0, P̂_θ(v) = 0
    for i = 1 . . . N do
      h ~ Q_θ(h|v)
      γ(v) ← γ(v) + R_θ(v|h) ∂ log P_θ(v,h)/∂θ
      P̂_θ(v) ← P̂_θ(v) + N⁻¹ R_θ(v|h)
    end for
    θ ← θ + η γ(v)/P̂_θ(v)
  end while
  return θ
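To make the structure of Algorithm 1 concrete, here is a minimal NumPy sketch of one batch iteration. This is not the authors' implementation: the helper names (sample_target, sample_hidden, log_R, grad_log_P) are our placeholders for the model-specific pieces (binary-neuron instances are sketched in Section 3 below), and the log-sum-exp shift is a standard numerical safeguard we add because the factors R_θ(v|h) are exponentials of sums over the whole sequence.

import numpy as np

def batch_learning_step(sample_target, sample_hidden, log_R, grad_log_P,
                        theta, N=20, eta=0.01):
    """One iteration of Algorithm 1 (batch mode).

    sample_target()          -- draw a visible sequence v ~ P*(v)
    sample_hidden(v, theta)  -- draw h ~ Q_theta(h|v) by clamping the visibles
    log_R(v, h, theta)       -- log R_theta(v|h) of Eq. (3)
    grad_log_P(v, h, theta)  -- d log P_theta(v, h) / d theta
    """
    v = sample_target()
    logRs, grads = [], []
    for _ in range(N):
        h = sample_hidden(v, theta)
        logRs.append(log_R(v, h, theta))
        grads.append(grad_log_P(v, h, theta))
    # Importance weights of Eq. (5); the max-shift cancels in the ratio.
    w = np.exp(np.array(logRs) - max(logRs))
    gamma_acc = sum(wi * g for wi, g in zip(w, grads))  # accumulates gamma(v)
    P_hat = np.mean(w)                                  # estimate of P_theta(v)
    return theta + eta * gamma_acc / P_hat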
Figure 2: Learning a non-Markovian sequence of temporally correlated and linearly dependent states with different learning rules. A: The target distribution contained only this training pattern for 30 visible neurons and 30 time steps. B-F, H-J: Overlay of 20 recalls after learning with 15 000 training pattern presentations: B with only visible neurons and a simple asymmetric Hebb rule (see main text), C with only visible neurons and learning rule Eq. (5), D with static weights towards 30 hidden neurons (Reservoir Computing), E with learning rule Eq. (5), F with the online approximation Eq. (14). G: Learning curves for the training pattern in A for only visible neurons (black line), static weights towards hidden neurons (blue line), the online learning approximation (purple line) and the exact learning rule (red line). The performance was measured as one minus the average Hamming distance per neuron per time step (see main text). H: A training pattern that exhibits a gap of 5 time steps. I: Recall with a network of 30 visible and 10 hidden neurons without learning the weights towards hidden neurons. J: Recall after training the same network with learning rule Eq. (5).

3 Binary neurons

In order to illustrate the learning rule given by Eq. (5), let us consider sequences of binary patterns. Let $x$ denote the activity of the visible and hidden neurons, i.e. $x = (v, h)$. Since the individual neurons are binary, $x_{t,i} \in \{-1, 1\}$, their distribution is given by

$$P_\theta(x_{t,i} \mid x_{0:t-1}) = (\rho_{t,i} \Delta t)^{(1+x_{t,i})/2} \, (1 - \rho_{t,i} \Delta t)^{(1-x_{t,i})/2},$$

where the firing rate $\rho_{t,i}$ of neuron $i$ at time $t$ is given by a monotonically increasing (and non-linear) function $g$ of its membrane potential $u_{t,i}$, i.e.

$$\rho_{t,i} = g(u_{t,i}) \quad \text{with} \quad u_{t,i} = \sum_j w_{ij} x_{t-1,j}. \qquad (6)$$

Note that these assumptions lead to Markovian neuronal dynamics, i.e. $P_\theta(x_{t,i} \mid x_{0:t-1}) = P_\theta(x_{t,i} \mid x_{t-1})$ (see Fig. 1B). Further calculations will be slightly simplified if we assume that the non-linear function $g$ is constrained by the following differential equation: $dg(u)/du = \beta g(u)(1 - g(u)\Delta t)$. Note that in the limit $\Delta t \to 0$ this function is an exponential, i.e. $g(u) = g_0 \exp(\beta u)$, and for finite $\Delta t$ it is sigmoidal and takes the form $g(u) = \Delta t^{-1} \left[ 1 + \left( (g_0 \Delta t)^{-1} - 1 \right) \exp(-\beta u) \right]^{-1}$, where we constrained the solutions such that $g(0) = g_0$ in order to be consistent with the case where $\Delta t \to 0$. For the distribution over the initial conditions $P_\theta(v_0)$ and $P_\theta(h_0)$ we choose delta distributions such that $v_0$ is equal to the first state of the training sequence and $h_0$ is an arbitrary but fixed vector of binary values.

Figure 3: Adding trainable hidden neurons leads to much better recall performance than having static hidden neurons or no hidden neurons at all. A: Comparison of the performance after 20 000 learning cycles between static (blue curve) and dynamic weights (red curve) towards hidden neurons, for a network with 30 visible and different numbers of hidden neurons, in a training task with an uncorrelated random pattern of length 60 time steps. B: We generated random, uncorrelated sequences of different length and compared the performance after 20 000 learning cycles for only visible neurons (black curve), static weights towards hidden neurons (blue curve) and dynamic weights towards hidden neurons (red curve).

If we assume that the weights $w_{ij}$ are the only adaptable parameters in this model, we have

$$\frac{\partial \log P_w(x_{t,i} \mid x_{0:t-1})}{\partial w_{ij}} = \frac{1}{2} \left( (1 + x_{t,i}) \frac{g'(u_{t,i})}{g(u_{t,i})} - (1 - x_{t,i}) \frac{g'(u_{t,i}) \Delta t}{1 - g(u_{t,i}) \Delta t} \right) \frac{\partial u_{t,i}}{\partial w_{ij}}. \qquad (7)$$

With the above assumption on $g(u)$ and Eqs. (3) and (6) we find

$$\frac{\partial \log P_w(x)}{\partial w_{ij}} = \frac{\beta}{2} \sum_{t=1}^{T} \left( x_{t,i} - \langle x_{t,i} \rangle_{P_\theta(x_{t,i} \mid x_{t-1})} \right) x_{t-1,j}, \qquad (8)$$

where $\langle x_{t,i} \rangle_{P_\theta(x_{t,i} \mid x_{t-1})} = g(u_{t,i})\Delta t - (1 - g(u_{t,i})\Delta t)$ and the indices $i$ and $j$ run over all visible and hidden neurons. The factor $R_w(v|h)$ can be expressed as

$$R_w(v|h) = \exp\left( \frac{1}{2} \sum_{t=0}^{T} \sum_{i=1}^{N_v} \left[ (1 + v_{t,i}) \log(\rho_{t,i} \Delta t) + (1 - v_{t,i}) \log(1 - \rho_{t,i} \Delta t) \right] \right). \qquad (9)$$
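Under stated assumptions (±1 units, Δt = 1 so that g(u)Δt lies in (0, 1), and a single weight matrix w whose rows are ordered visible-first), the model-specific pieces needed by the batch sketch above could look as follows; all function names are our own.

import numpy as np

def g(u, g0=0.1, beta=1.0, dt=1.0):
    # Sigmoidal rate function of Section 3, with g(0) = g0 and g(u)*dt in (0,1).
    return (1.0 / dt) / (1.0 + (1.0 / (g0 * dt) - 1.0) * np.exp(-beta * u))

def sample_hidden(v, h0, w, dt=1.0):
    # h ~ Q_w(h|v): clamp the visibles to v and run the hidden dynamics, Eq. (6).
    Nv, Nh, T = v.shape[1], h0.shape[0], v.shape[0] - 1
    h = np.empty((T + 1, Nh)); h[0] = h0
    for t in range(1, T + 1):
        x_prev = np.concatenate([v[t - 1], h[t - 1]])
        p_fire = g(w[Nv:] @ x_prev, dt=dt) * dt        # P(h_{t,i} = +1)
        h[t] = np.where(np.random.rand(Nh) < p_fire, 1.0, -1.0)
    return h

def log_R(v, h, w, dt=1.0):
    # log R_w(v|h), Eq. (9); t = 0 is omitted since v_0 is clamped (delta prior).
    Nv = v.shape[1]
    x = np.concatenate([v, h], axis=1)
    rho_dt = g(x[:-1] @ w[:Nv].T, dt=dt) * dt          # visible rates, t = 1..T
    return 0.5 * np.sum((1 + v[1:]) * np.log(rho_dt)
                        + (1 - v[1:]) * np.log(1 - rho_dt))

def grad_log_P(v, h, w, beta=1.0, dt=1.0):
    # d log P_w(v,h)/dw, Eq. (8), with <x_{t,i}> = g(u)dt - (1 - g(u)dt).
    x = np.concatenate([v, h], axis=1)
    rho_dt = g(x[:-1] @ w.T, dt=dt) * dt
    return 0.5 * beta * (x[1:] - (2.0 * rho_dt - 1.0)).T @ x[:-1]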
Let us now consider a simple case (Fig. 2) where the distribution over sequences is a delta distribution $P^*(v) = \delta(v - v^*)$ around a single pattern $v^*$ (Fig. 2A) which is made of a set of temporally correlated and linearly dependent states $\{v_t^*\}_{t=0}^{T}$, i.e. a non-Markovian pattern, thus making it a difficult pattern to learn with a simple asymmetric Hebb rule $\Delta w_{ij} \propto \sum_{t=0}^{T} v_{t+1,i}^* v_{t,j}^*$ (Fig. 2B) or with only visible neurons (Fig. 2C), which are both Markovian learning rules. The performance was measured by one minus the Hamming distance per visible neuron and time step, $1 - (T N_v)^{-1} \sum_{t,i} |v_{t,i} - v_{t,i}^*| / 2$, between target pattern and recall pattern, averaged over 100 runs.

Adding hidden neurons without learning the weights towards hidden neurons is similar to the idea used in the framework of Reservoir Computing (for a review see [13]): the visible states feed a fixed reservoir of neurons that returns a non-linear transformation of the input. Only the readout from hidden to visible neurons and, in our case, the recurrent connections in the visible layer are trained. To assure a sensible distribution of weights towards hidden units, we used the weights that were obtained after learning with Eq. (5) and reshuffled them. Obviously, without training the reservoir the performance is always worse compared to a system with an equal number of hidden neurons but dynamic weights (Fig. 2E and 2F).

With only a few hidden neurons our rule is also capable of learning patterns where the visible neurons are silent during a few time steps. The training pattern in Fig. 2H exhibits a gap of 5 time steps. After learning the weights towards 10 hidden neurons with learning rule Eq. (5), recall performance is nearly perfect (see Fig. 2J). With only visible neurons (not shown in Fig. 2) or static weights towards hidden neurons the time gap was not learned (see Fig. 2I).

Figure 4: The learning rule Eq. (11) is compatible with Spike-Timing Dependent Plasticity (STDP): the weight gets potentiated if a presynaptic spike is followed by a postsynaptic spike and depressed otherwise ($\Delta w$ in arbitrary units, plotted against $t_{\text{post}} - t_{\text{pre}}$ in ms). The time course of the postsynaptic potential and the refractory kernel is given in the text.

In Fig. 3 we used again delta target distributions $P^*(v) = \delta(v - v^*)$ with random uncorrelated patterns $v^*$ of different length. Each model was trained with 20 000 pattern presentations. For a pattern of length $2N_v = 60$, only $N_v/2 = 15$ trainable hidden neurons are sufficient to reach perfect recall (see Fig. 3A). This is in clear contrast to the case of static hidden weights. Again, the static weights were obtained by reshuffling those that we obtained after learning with Eq. (5). Fig. 3B compares the capacity of our learning rule with $N_h = N_v = 30$ hidden neurons to the case of no hidden neurons or static weights towards hidden neurons. Without learning the weights towards hidden neurons the performance drops to almost chance level for sequences of 45 or more time steps, whereas with our learning rule this decrease of performance occurs only at sequences of 100 or more time steps.

4 Limit to Continuous Time

Starting from the neurons in the last section, we show that in the limit to continuous time we can implement the sequence learning task with stochastic spiking neurons [7]. First note that the state of a neuron at time $t$ in the model described in the previous section is fully defined by $u_{t,i} := \sum_j w_{ij} x_{t-1,j}$ (see Eq. (6)) and its spiking activity $x_{t,i}$. The weighted sum $\sum_j w_{ij} x_{t-1,j}$ is the response of neuron $i$ to the spikes of its presynaptic neurons and its own spikes. The terms in this sum depend on the previous time step only. In a more realistic model the postsynaptic neuron feels the influence of presynaptic spikes through a perturbation of the membrane potential on the order of a few milliseconds, which in the limit to continuous time clearly cannot be modeled by a one-time-step response.
For a more realistic model we replace $u_{t,i}$ in Eq. (6) by

$$u_{t,i} = \underbrace{\sum_{s=1}^{\infty} \eta_s x_{t-s,i}}_{=: \hat{x}_{t,i}} + \sum_{j \neq i} w_{ij} \underbrace{\sum_{s=1}^{\infty} \epsilon_s x_{t-s,j}}_{=: \bar{x}_{t,j}}, \qquad (10)$$

where $x_{t-s,i} \in \{0, 1\}$. The kernel $\epsilon$ models the time course of the response to a presynaptic spike and $\eta$ the refractoriness. Our model holds for any choices of $\epsilon$ and $\eta$, including for example a hard refractory period where the neuron is forced not to spike. In order to take the limit $\Delta t \to 0$ in Eq. (9), we note that we can scale $R_w(v|h)$ without changing the learning rule Eq. (5), since there only the ratio $R_\theta(v|h) / \langle R_\theta(v|h') \rangle_{Q_\theta(h'|v)}$ enters. We use the scaling $R_w(v|h) \to \tilde{R}_w(v|h) := (g_0 \Delta t)^{-S_v} R_w(v|h)$, where $S_v$ denotes the total number of spikes in the visible sequence $v$, i.e. $S_v = \sum_{t=0}^{T} \sum_{i=1}^{N_v} v_{t,i}$. Note that for $(0,1)$-units the expectation in Eq. (8) becomes $\langle x_{t,i} \rangle_{P_\theta(x_{t,i} \mid x_{t-1})} = g(u_{t,i}) \Delta t = \rho_{t,i} \Delta t$. Now we take the limit $\Delta t \to 0$ in Eqs. (8) and (9) and find

$$\frac{\partial \log P_w(x)}{\partial w_{ij}} = \beta \int_0^T dt \, \big( x_i(t) - \rho_i(t) \big) \, \bar{x}_j(t), \qquad (11)$$

$$\tilde{R}_w(v|h) = \exp\left( \int_0^T dt \, \sum_{i=1}^{N_v} \beta v_i(t) u_i(t) - \rho_i(t) \right), \qquad (12)$$

where the training pattern runs from time 0 to $T$, $x_i(t) = \sum_f \delta(t - t_i^{(f)})$ is the sum of delta spikes of neuron $i$ at times $t_i^{(f)}$, and $\bar{x}_j(t) = \int ds \, \epsilon(s) x_j(t-s)$ (and similarly $\hat{x}_i(t)$) is the convolution of presynaptic spike trains with the response kernel $\epsilon(t)$. With neuron $i$'s response to past spiking activity $u_i(t) = \hat{x}_i(t) + \sum_{j \neq i} w_{ij} \bar{x}_j(t)$ and the escape rate function $\rho_i(t) = g_0 \exp(\beta u_i(t))$, we recover the defining equations of a simplified stochastic spike response model [7]. In Fig. 4 we display the weight change after forcing two neurons to fire with a fixed time lag. For the figure we used the kernels $\epsilon_s \propto \exp(-s/\tau_m) - \exp(-s/\tau_s)$ and $\eta_s \propto -\exp(-s/\tau_m)$ with $\tau_m = 10$ ms and $\tau_s = 2$ ms. Our learning rule is consistent with STDP in the sense that a presynaptic spike followed by a postsynaptic spike leads to potentiation and to depression otherwise. Note that this result was also found in [18].
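In a simulation, the traces $\bar{x}_j(t)$ and $\hat{x}_i(t)$ of Eq. (10) need not be computed as explicit convolutions, because exponential kernels can be filtered recursively. A minimal sketch of this bookkeeping, with our own discretization and with unit kernel amplitudes as a simplifying assumption:

import numpy as np

def psp_traces(spikes, dt=1e-3, tau_m=10e-3, tau_s=2e-3):
    """Discretized x_bar_j(t) = int ds eps(s) x_j(t - s) for all neurons j,
    with eps(s) proportional to exp(-s/tau_m) - exp(-s/tau_s) (cf. Fig. 4).

    spikes : (T, N) array of 0/1 spike indicators.
    """
    decay_m, decay_s = np.exp(-dt / tau_m), np.exp(-dt / tau_s)
    a = np.zeros(spikes.shape[1])
    b = np.zeros(spikes.shape[1])
    x_bar = np.zeros(spikes.shape, dtype=float)
    for t in range(spikes.shape[0]):
        # Each exponential is filtered recursively; their difference is eps * x.
        a = a * decay_m + spikes[t]
        b = b * decay_s + spikes[t]
        x_bar[t] = a - b
    return x_bar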
5 Approximate online version

Without hidden neurons, the learning rule found by using Eq. (11) is straightforward to implement in an online way, where the parameters are updated at every moment in time according to $\dot{w}_{ij} \propto (x_i(t) - \rho_i(t)) \bar{x}_j(t)$, instead of waiting with the update until a training batch has finished. Finding an online version of the learning algorithm for networks with hidden neurons turns out to be a challenge, since we need to know the whole sequences $v$ and $h$ in order to evaluate the importance factor $R_\theta(v|h) / \langle R_\theta(v|h') \rangle_{Q_\theta(h'|v)}$. Here we propose to use in each time step an approximation of the importance factor based on the network dynamics during the preceding period of typical sequence length, and to multiply it by the low-pass filtered change of parameters. We write this section with $x_i(t) \in \{0, 1\}$, but similar expressions are easily found for $x_i(t) \in \{-1, 1\}$.

Algorithm 2: Sequence learning (online mode)
  Set initial w_ij, e_ij, a, r̄, t
  while w_ij not converged do
    if t mod N T == 0 then
      v ~ P*(v)
    end if
    s = t mod T
    if s < τ then h(s) ~ P(h(s)) else h(s) ~ P_w(h(s) | past spiking activity) end if
    x(s) = (v(s), h(s))
    e_ij ← (1 − Δt/T) e_ij + β (x_i(s) − ρ_i(s)) x̄_j(s)
    a ← (1 − Δt/T) a + Δt Σ_{i=1..N_v} [β v_i(s) u_i(s) − ρ_i(s)]
    r̄ ← (1 − Δt/(N T)) r̄ + (Δt/(N T)) exp(a)
    w_ij ← w_ij + α (exp(a)/r̄) e_ij
    t ← t + Δt
  end while
  return w_ij

In Eqs. (13a) and (13b) we summarize how to use low-pass filters to approximate the integrals in Eqs. (11) and (12). The time constant of the low-pass filter is chosen to match the sequence length $T$. To find an online estimate of $\langle \tilde{R}_\theta(v, h') \rangle_{Q_\theta(h'|v)}$ we assume that a training pattern $v \sim P^*(v)$ is presented a few times in a row and that after time $N T$, with $N \in \mathbb{N}$, $N \gg 1$, a new training pattern is picked from the training distribution. Under this assumption we can replace the average over hidden sequences by a low-pass filter of $r$ with time constant $N T$, see Eq. (13c). At the beginning of each pattern presentation, i.e. during the time interval $[0, \tau)$ with $\tau$ on the order of the kernel time constant $\tau_m$, the hidden activity $h(s)$ is drawn from a given distribution $P(h(s))$.

$$\dot{e}_{ij}(t) = -\frac{1}{T} e_{ij}(t) + \beta \big( x_i(t) - \rho_i(t) \big) \bar{x}_j(t), \qquad e_{ij}(T) \approx \frac{\partial \log P_w(x)}{\partial w_{ij}}, \qquad (13a)$$

$$\dot{a}(t) = -\frac{1}{T} a(t) + \sum_{i=1}^{N_v} \beta v_i(t) u_i(t) - \rho_i(t), \qquad \exp(a(T)) \approx \tilde{R}_w(v|h), \qquad (13b)$$

$$N T \, \dot{\bar{r}}(t) = -\bar{r}(t) + r(t), \quad r(t) := \exp(a(t)), \qquad \bar{r}(N T) \approx \langle \tilde{R}_\theta(v, h') \rangle_{Q_\theta(h'|v)}. \qquad (13c)$$

Finally, we learn the model parameters in each time step according to

$$\dot{w}_{ij}(t) = \alpha \frac{r(t)}{\bar{r}(t)} e_{ij}(t). \qquad (14)$$

This online algorithm is certainly a rough approximation of the batch algorithm. Nevertheless, when applied to the challenging example (Fig. 2A) in Section 3, the performance of the online rule is close to the one of the batch rule (Fig. 2F, G).
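A compact sketch of one simulation step of Algorithm 2, under our own discretization conventions (0/1 spike indicators, so a delta spike integrates to 1 over a bin while rates integrate with Δt); the state variables correspond to e_ij, a and r̄ of Eqs. (13a)-(13c):

import numpy as np

def online_step(w, e, a, r_bar, x_spk, x_bar, u, rho, Nv,
                beta=1.0, dt=1e-3, T=0.03, N=10, alpha=1e-3):
    """One time step of the online rule, Eqs. (13a)-(13c) and (14).

    x_spk -- 0/1 spikes of all units this bin; x_bar -- PSP-filtered traces;
    u, rho -- membrane potentials and escape rates; indices 0..Nv-1 are visible.
    """
    # Eq. (13a): eligibility trace of the log-likelihood gradient.
    e = (1 - dt / T) * e + beta * np.outer(x_spk - rho * dt, x_bar)
    # Eq. (13b): running estimate of log R_w over the current pattern.
    a = (1 - dt / T) * a + beta * (x_spk[:Nv] @ u[:Nv]) - dt * np.sum(rho[:Nv])
    # Eq. (13c): slow average of r = exp(a) across hidden-sequence samples.
    r = np.exp(a)
    r_bar = (1 - dt / (N * T)) * r_bar + (dt / (N * T)) * r
    # Eq. (14): importance-weighted online weight update.
    w = w + alpha * dt * (r / r_bar) * e
    return w, e, a, r_bar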
6 Discussion

Learning long and temporally correlated sequences with neural networks is a difficult task. In this paper we suggested a statistical model with hidden neurons and derived a learning rule that leads to optimal recall of the learned sequences given the neuronal dynamics. The learning rule is derived by minimizing the Kullback-Leibler divergence from training distribution to model distribution with a variant of the EM algorithm, where we use importance sampling to draw hidden sequences given the visible training sequence. Choosing an appropriate distribution in the importance sampling step, we are able to circumvent inference, which usually makes the training of non-Markovian models hard. The resulting learning algorithm consists of a local term modulated by a global factor. We showed that it is ready to be implemented with biologically realistic neurons and that an approximate online version exists.

Our approach follows the ideas outlined in [2], where sequence learning was considered with visible neurons. Here we extended this model by adding stochastic hidden neurons that help to perform well with sequences of linearly dependent states (including non-Markovian sequences) or long sequences. As in [18], we look at the limit of continuous time and find that the learning rule is consistent with Spike-Timing Dependent Plasticity. In contrast to Reservoir Computing [13], we train the weights towards hidden neurons, which clearly helps to improve performance. Our learning rule does not need a "wake" and a "sleep" phase as we know it from Boltzmann machines [1, 8].

Viewed in a different light, our learning algorithm has a nice interpretation: as in reinforcement learning, the hidden neurons explore different sequences, where each trial leads to a global reward signal that modulates the weight change. However, in contrast to common reinforcement learning, the reward is not provided by an external teacher but depends solely on the internal dynamics, and the visible neurons do not explore but are clamped to the training sequence. To make our model even more biologically relevant, future work should aim for a biological implementation of the global importance factor, which depends on the spike timing and the membrane potential of all the visible neurons (see Eq. (9)). It would also be interesting to study online approximations of the learning algorithm in more detail, or its application to models with the Hidden Markov structure.

Acknowledgments

The authors thank Robert Urbanczik for helpful discussions. This work was supported by the Swiss National Science Foundation (SNF), grant 31-133094, and a grant from the Swiss SystemsX.ch initiative (Neurochoice, evaluated by the SNF).

References
[1] D. Ackley and G. E. Hinton. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147-169, 1985.
[2] D. Barber. Learning in spiking neural assemblies. Advances in Neural Information Processing Systems, 15, 2003.
[3] D. Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2011. In press.
[4] L. Baum, T. Petrie, G. Soules, and N. Weiss. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1):164-171, 1970.
[5] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1-38, 1977.
[6] A. Düring, A. Coolen, and D. Sherrington. Phase diagram and storage capacity of sequence processing neural networks. Journal of Physics A: Mathematical and General, 31:8607, 1998.
[7] W. Gerstner and W. M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[8] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[9] G. E. Hinton and A. Brown. Spiking Boltzmann machines. Advances in Neural Information Processing Systems, 12, 2000.
[10] J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America, 79(8):2554, 1982.
[11] P. Latham and J. W. Pillow. Neural characterization in partially observed populations of spiking neurons. Advances in Neural Information Processing Systems, 20:1161-1168, 2008.
[12] M. Lengyel, J. Kwag, O. Paulsen, and P. Dayan. Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. Nature Neuroscience, 8(12):1677-1683, 2005.
[13] M. Lukoševičius and H. Jaeger. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127-149, 2009.
[14] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
[15] D. J. C. MacKay. Information Theory, Inference & Learning Algorithms. Cambridge University Press, 2002.
[16] G. McLachlan and T. Krishnan. The EM Algorithm and Extensions. John Wiley and Sons, 1997.
[17] Y. Mishchenko and L. Paninski. Efficient methods for sampling spike trains in networks of coupled neurons. The Annals of Applied Statistics, 5(3):1893-1919, 2011.
[18] J.-P. Pfister, T. Toyoizumi, D. Barber, and W. Gerstner. Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6):1318-1348, 2006.
[19] L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989.
[20] H. Sompolinsky and I. Kanter. Temporal association in asymmetric neural networks. Physical Review Letters, 57(22):2861-2864, 1986.
[21] I. Sutskever, G. E. Hinton, and G. Taylor. The recurrent temporal restricted Boltzmann machine. Advances in Neural Information Processing Systems, 21:1601-1608, 2009.
[22] G. Taylor, G. E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. Advances in Neural Information Processing Systems, 19:1345-1352, 2007.
Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

Yan Karklin and Eero P. Simoncelli
Howard Hughes Medical Institute and Center for Neural Science
New York University, New York, NY 10003
{yan.karklin, eero.simoncelli}@nyu.edu

Abstract

Efficient coding provides a powerful principle for explaining early sensory coding. Most attempts to test this principle have been limited to linear, noiseless models, and when applied to natural images, have yielded oriented filters consistent with responses in primary visual cortex. Here we show that an efficient coding model that incorporates biologically realistic ingredients (input and output noise, nonlinear response functions, and a metabolic cost on the firing rate) predicts receptive fields and response nonlinearities similar to those observed in the retina. Specifically, we develop numerical methods for simultaneously learning the linear filters and response nonlinearities of a population of model neurons, so as to maximize information transmission subject to metabolic costs. When applied to an ensemble of natural images, the method yields filters that are center-surround and nonlinearities that are rectifying. The filters are organized into two populations, with On- and Off-centers, which independently tile the visual space. As observed in the primate retina, the Off-center neurons are more numerous and have filters with smaller spatial extent. In the absence of noise, our method reduces to a generalized version of independent components analysis, with an adapted nonlinear "contrast" function; in this case, the optimal filters are localized and oriented.

1 Introduction

Coding efficiency is a well-known objective for the evaluation and design of signal processing systems, and provides a theoretical framework for understanding biological sensory systems. Attneave [1] and Barlow [2] proposed that early sensory systems are optimized, subject to the limitations of their available resources, for representing information contained in naturally occurring stimuli. Although these proposals originated more than 50 years ago, they have proven difficult to test. The optimality of a given sensory representation depends on the family of possible neural transformations to which it is compared, the costs of building, maintaining, and operating the system, the distribution of input signals over which the system is evaluated, and the levels of noise in the input and output.

A substantial body of work has examined coding efficiency of early visual representations. For example, the receptive fields of retinal neurons have been shown to be consistent with efficient coding principles [3, 4, 5, 6]. However, these formulations rely on unrealistic assumptions of linear response and Gaussian noise, and their predictions are not uniquely constrained. For example, the observation that band-pass filtering is optimal [4] is insufficient to explain the rotationally symmetric (center-surround) structure of receptive fields in the retina.

The simplest models that attempt to capture both the receptive field properties and the response nonlinearities are linear-nonlinear (LN) cascades, in which the incoming sensory stimulus is projected onto a linear kernel, and this linear response is then passed through a memoryless scalar nonlinear function whose output is used to generate the spiking response of the neuron.
Such approaches have been used to make predictions about neural coding in general [7, 8] and, when combined with a constraint on the mean response level, to derive oriented receptive fields similar to those found in primary visual cortex [9, 10]. These models do not generally incorporate realistic levels of noise. And while the predictions are intuitively appealing, it is also somewhat of a mystery that they bypass the earlier (e.g., retinal) stages of visual processing, in which receptive fields are center-surround.

A number of authors have studied coding efficiency of scalar nonlinear functions in the presence of noise and compared them to neural responses to variables such as contrast [11, 12, 13, 14, 15]. Others have verified that the distributions of neural responses are in accordance with predictions of coding efficiency [16, 17, 18, 19]. To our knowledge, however, no previous result has attempted to jointly optimize the linear receptive field and the nonlinear response properties in the presence of realistic levels of input and output noise, and realistic constraints on response levels.

Here, we develop methods to optimize a full population of linear-nonlinear (LN) model neurons for transmitting information in natural images. We include a term in the objective function that captures metabolic costs associated with firing spikes [20, 21, 22]. We also include two sources of noise, in both input and output stages. We implement an algorithm for jointly optimizing the population of linear receptive fields and their associated nonlinearities. We find that, in the regime of significant noise, the optimal filters have a center-surround form, and the optimal nonlinearities are rectifying, consistent with response properties of retinal ganglion cells. We also observe asymmetries between the On- and the Off-center types similar to those measured in retinal populations. When both the input and the output noise are sufficiently small, our learning algorithm reduces to a generalized form of independent component analysis (ICA), yielding optimal filters that are localized and oriented, with corresponding smooth nonlinearities.

2 A model for noisy nonlinear efficient coding

We assume a neural model in the form of an LN cascade (Fig. 1a), which has been successfully fit to neural responses in retina, lateral geniculate nucleus, and primary visual cortex of primate visual systems [e.g., 23, 24, 25]. We develop a numerical method to optimize both the linear receptive fields and the corresponding point nonlinearities so as to maximize the information transmitted about natural images in the presence of input and output noise, as well as metabolic constraints on neural processing.

Consider a vector of inputs $\mathbf{x}$ of dimensionality $D$ (e.g. an image with $D$ pixels), and an output vector $\mathbf{r}$ of dimensionality $J$ (the underlying firing rate of $J$ neurons). The response of a neuron $r_j$ is computed by taking an inner product of the (noise-corrupted) input with a linear filter $\mathbf{w}_j$ to obtain a generator signal $y_j$ (e.g. membrane voltage), which is then passed through a neural nonlinearity $f_j$ (corresponding to the spike-generating process) and corrupted with additional neural noise,

$$r_j = f_j(y_j) + n_r, \qquad (1)$$
$$y_j = \mathbf{w}_j^T (\mathbf{x} + \mathbf{n}_x) \qquad (2)$$

(Fig. 1a). Note that we did not constrain the model to be "complete" (the number of neurons can be smaller or larger than the input dimensionality) and that each neuron can have a different nonlinearity.
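As a concrete reading of Eqs. (1)-(2), here is a minimal NumPy sketch of the forward model; the rectifying f and the numerical values are placeholder choices on our part (the noise levels happen to match the settings reported in Section 3.1):

import numpy as np

def ln_population_response(x, W, f, sigma_nx, sigma_nr, rng=np.random):
    """Noisy LN responses, Eqs. (1)-(2): r_j = f_j(w_j^T (x + n_x)) + n_r.

    x : (D,) input image patch; W : (D, J) filter matrix (unit-norm columns);
    f : scalar nonlinearity applied elementwise (could differ per neuron).
    """
    y = W.T @ (x + sigma_nx * rng.standard_normal(x.shape))   # generator signals
    return f(y) + sigma_nr * rng.standard_normal(W.shape[1])  # firing rates

# Example: 100 rectifying neurons on a 16x16 patch (toy values).
D, J = 256, 100
W = np.random.randn(D, J)
W /= np.linalg.norm(W, axis=0)            # enforce ||w_j|| = 1
r = ln_population_response(np.random.randn(D), W,
                           f=lambda y: np.maximum(y, 0.0),
                           sigma_nx=0.4, sigma_nr=2.0)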
We aim to optimize an objective function that includes the mutual information between the input signal and the population responses, denoted $I(X; R)$, as well as an approximate measure of the metabolic operating cost of the system. It has been estimated that most of the energy expended by spiking neurons is associated with the cost of generating (and recovering from) spikes, and that this cost is roughly proportional to the neural firing rate [22]. Thus we incorporate a penalty on the expected output, which gives the following objective function:

$$I(X; R) - \sum_j \lambda_j \langle r_j \rangle. \qquad (3)$$

Figure 1: a. Schematic of the model (see text for description). The goal is to maximize information transfer between images $\mathbf{x}$ and the neural response $\mathbf{r}$, subject to the metabolic cost of firing spikes. b. Information about the stimulus is conveyed both by the arrangement of the filters and the steepness of the neural nonlinearities. Top: two neurons encode two stimulus components (e.g. two pixels of an image, $x_1$ and $x_2$) with linear filters (black lines) whose output is passed through scalar nonlinear functions (thick color lines; thin color lines show isoresponse contours at evenly spaced output levels). The steepness of the nonlinearities specifies the precision with which each projection is represented: regions of steep slope correspond to finer partitioning of the input space, reducing the uncertainty about the input. Bottom: joint encoding leads to binning of the input space according to the isoresponse lines above. Grayscale shading indicates the level of uncertainty (entropy) in regions of the input (lighter shades correspond to higher uncertainty). Efficient codes optimize this binning, subject to the input distribution, noise levels, and metabolic costs on the outputs.

The parameter $\lambda_j$ specifies the trade-off between information gained by firing more spikes and the cost of generating them. It is difficult to obtain a biologically valid estimate for this parameter, and ultimately, the value of sensory information gained depends on the behavioral task and its context [26]. Alternatively, we can use $\lambda_j$ as a Lagrange multiplier to enforce the constraint on the mean output of each neuron. Our goal is to adjust both the filters and the nonlinearities of the neural population so as to maximize the expectation of (3) under the joint distribution of inputs and outputs, $p(\mathbf{x}, \mathbf{r})$. We assume the filters are unit norm ($\|\mathbf{w}_j\| = 1$) to avoid an underdetermined model in which the nonlinearity scales along its input dimension to compensate for filter amplification. The nonlinearities $f_j$ are assumed to be monotonically increasing. We parameterized the slope of the nonlinearity, $g_j = df_j / dy_j$, using a weighted sum of Gaussian kernels,

$$g_j(y_j \mid c_{jk}, \mu_{jk}, \sigma_j) = \sum_{k=1}^{K} c_{jk} \exp\left( -\frac{(y_j - \mu_{jk})^2}{2\sigma_j^2} \right), \qquad (4)$$

with coefficients $c_{jk} \geq 0$. The number of kernels $K$ was chosen for a sufficiently flexible nonlinearity (in our experiments $K = 500$). We spaced the $\mu_{jk}$ evenly over the range of $y_j$ and chose $\sigma_j$ for smooth overlap of adjacent kernels (kernel centers $2\sigma_j$ apart).
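A small sketch of this parameterization, with illustrative grid ranges of our own choosing; the monotone nonlinearity f_j is recovered by numerically integrating its slope, and storing the coefficients in log-space (as described in Section 2.2 below) keeps them non-negative:

import numpy as np

def slope(y, c, mu, sigma):
    # g_j(y) of Eq. (4): a non-negative sum of K Gaussian bumps.
    return np.exp(c) @ np.exp(-(y - mu[:, None]) ** 2 / (2 * sigma ** 2))

def nonlinearity(y_grid, c, mu, sigma):
    # f_j is recovered by integrating its slope g_j (trapezoid rule), so it
    # is monotonically increasing by construction.
    g = slope(y_grid, c, mu, sigma)
    return np.concatenate([[0.0], np.cumsum(0.5 * (g[1:] + g[:-1])
                                            * np.diff(y_grid))])

K = 500
mu = np.linspace(-3.0, 3.0, K)        # kernel centers, evenly spaced
sigma = (mu[1] - mu[0]) / 2.0         # centers 2*sigma apart
c = np.zeros(K)                       # log-coefficients, adapted by learning
y = np.linspace(-3.0, 3.0, 1000)
f = nonlinearity(y, c, mu, sigma)     # tabulated monotone nonlinearity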
2.1 Computing mutual information

How can we compute the information transmitted by the nonlinear network of neurons? Mutual information can be expressed as the difference between two entropies, $I(X; R) = H(X) - H(X|R)$. The first term is the entropy of the data, which is constant (i.e. it does not depend on the model) and can therefore be dropped from the objective function. The second term is the conditional differential entropy and represents the uncertainty in the input after observing the neural response. It is computed by taking the expectation over output values, $H(X|R) = E_r\left[ -\int p(\mathbf{x}|\mathbf{r}) \ln p(\mathbf{x}|\mathbf{r}) \, d\mathbf{x} \right]$. In general, computing the entropy of an arbitrary high-dimensional distribution is not tractable. We make several assumptions that allow us to approximate the posterior, compute its entropy, and maximize mutual information. The posterior is proportional to the product of the likelihood and the prior, $p(\mathbf{x}|\mathbf{r}) \propto p(\mathbf{r}|\mathbf{x}) \, p(\mathbf{x})$; below we describe these two functions in detail.

The likelihood. First, we assume the nonlinearity is smooth enough that, at the level of the noise (both input and output), $f_j$ can be linearized using a first-order Taylor series expansion. This means that locally, for each input $\mathbf{x}^i$ and instance of noise,

$$\mathbf{r}^i \approx \mathbf{G}^i \mathbf{W}^T (\mathbf{x}^i + \mathbf{n}_x^i) + \mathbf{n}_r^i + \mathbf{f}_0^i, \qquad (5)$$

where $\mathbf{W}$ is a matrix collecting the neural filters, $\mathbf{f}_0^i$ is a vector of constants, and $\mathbf{G}^i$ is a diagonal matrix containing the local derivatives of the response functions, $g_j(y_j)$ at $y_j(\mathbf{x}^i)$. Here we have used $i$ to index parameters and random variables that change with each input. (Similar approximations have been used to minimize reconstruction error in neural nonlinearities [27] and to maximize information in networks of interacting genes [28].) If input and output noises are assumed to be constant and Gaussian, with covariances $\mathbf{C}_{n_x}$ and $\mathbf{C}_{n_r}$, respectively, we obtain a Gaussian likelihood $p(\mathbf{r}|\mathbf{x})$ with covariance

$$\mathbf{C}_{r|x}^i = \mathbf{G}^i \mathbf{W}^T \mathbf{C}_{n_x} \mathbf{W} \mathbf{G}^i + \mathbf{C}_{n_r}. \qquad (6)$$

We emphasize that although the likelihood locally takes the form of a Gaussian distribution, its covariance is not fixed but depends on the input, leading to different values for the entropy of the posterior across the input space. Fig. 1b illustrates schematically how the organization of the filters and the nonlinearities affects the entropy and thus determines the precision with which neurons encode the inputs.

The prior. We would like to make as few assumptions as possible about the prior distribution of natural images. As described below, we rely on sampling image patches to approximate this density when computing $H(X|R)$. Nevertheless, to compute local estimates of the entropy we need to combine the prior with the likelihood. For smooth densities, the entropy depends on the curvature of the prior in the region where the likelihood has significant mass. When an analytic form for the prior is available, we can use a second-order expansion of the prior around the maximum of the posterior (known as the "Laplace approximation" to the posterior). Unfortunately, this is difficult to compute reliably in high dimensions when only samples are available. Instead, we use a global curvature estimate in the form of the covariance matrix of the data, $\mathbf{C}_x$.

Putting these ingredients together, we compute the posterior as a product of two Gaussian distributions. This gives a Gaussian with covariance

$$\mathbf{C}_{x|r}^i = \left( \mathbf{C}_x^{-1} + \mathbf{W} \mathbf{G}^i \left( \mathbf{G}^i \mathbf{W}^T \mathbf{C}_{n_x} \mathbf{W} \mathbf{G}^i + \mathbf{C}_{n_r} \right)^{-1} \mathbf{G}^i \mathbf{W}^T \right)^{-1}. \qquad (7)$$

This provides a measure of uncertainty about each input and allows us to express the information conveyed about the input ensemble by taking the expectation over the input and output distributions,

$$-H(X|R) = -E\left[ \tfrac{1}{2} \ln \det\left( 2\pi e \, \mathbf{C}_{x|r}^i \right) \right]. \qquad (8)$$

We obtain Monte Carlo estimates of this conditional entropy by averaging the term in the brackets over a large ensemble of patches drawn from natural images and input/output noise sampled from the assumed noise distributions.
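A minimal sketch of this Monte Carlo estimate for the i.i.d.-noise case used in the experiments below ($\mathbf{C}_{n_x} = \sigma_{n_x}^2 \mathbf{I}_D$, $\mathbf{C}_{n_r} = \sigma_{n_r}^2 \mathbf{I}_J$). The function name and structure are our own; for brevity the slopes are evaluated at the noiseless generator signals, and for 16×16 patches the D×D inversions are unproblematic:

import numpy as np

def conditional_entropy_mc(X, W, slope_fn, Cx, var_nx, var_nr):
    """Monte Carlo estimate of H(X|R) via Eqs. (6)-(8).

    X : (n, D) sample patches; W : (D, J) filters;
    slope_fn(y) -> (J,) local slopes g_j(y_j).
    """
    n = X.shape[0]
    J = W.shape[1]
    Cx_inv = np.linalg.inv(Cx)
    H = 0.0
    for x in X:
        G = np.diag(slope_fn(W.T @ x))                  # local slopes g_j(y_j)
        A = W @ G                                       # D x J
        Cr_x = var_nx * (A.T @ A) + var_nr * np.eye(J)  # Eq. (6)
        Cx_r = np.linalg.inv(Cx_inv
                             + A @ np.linalg.inv(Cr_x) @ A.T)  # Eq. (7)
        sign, logdet = np.linalg.slogdet(2 * np.pi * np.e * Cx_r)
        H += 0.5 * logdet / n                           # Eq. (8)
    return H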
2.2 Numerical optimization

We made updates to the model parameters using online gradient ascent on the objective function, computed on small batches of data. We omit the gradients here, as they are obtained using standard methods but do not yield easily interpretable update rules. One important special case arises when the number of inputs equals the number of outputs and both noise levels approach zero. In this setting, the update rule for the filters reduces to the ICA learning rule [8], with the gradient updates maximizing the entropy of the output distributions. Because our response constraint effectively limits the mean firing rate and not the maximum, the anti-Hebbian term is different from that found in standard ICA, and the optimal (maximum entropy) response distributions are exponential, rather than uniform. Note also that our method is more general than standard ICA: it adaptively adjusts the nonlinearities to match the input distribution, whereas standard ICA relies on a fixed nonlinear "contrast" function.

To ensure all nonlinearities were monotonically increasing, the coefficients $c_{jk}$ were adapted in log-space. After each step of gradient ascent, we normalized the filters so that $\|\mathbf{w}_j\| = 1$. It was also necessary to adjust the sampling of the nonlinearities (the locations of the $\mu_{jk}$) because, as the fixed-norm filters rotate through the input space, the variance of the projections can change drastically. Thus, whenever data fell outside the range, the range was doubled, and when all data fell inside the central 25%, it was halved.

3 Training the model on natural images

3.1 Methods

Natural image data were obtained by sampling 16×16 patches randomly from a collection of grayscale photographs of outdoor scenes [29], whose pixel intensities were linear w.r.t. light luminance levels. Importantly, we did not whiten the images. The only preprocessing steps were to subtract the mean of each large image and rescale the image to attain a variance of 1 for the pixels. We assumed that the input and output noises were i.i.d., so $\mathbf{C}_{n_x} = \sigma_{n_x}^2 \mathbf{I}_D$ and $\mathbf{C}_{n_r} = \sigma_{n_r}^2 \mathbf{I}_J$. We chose 8 dB for the input ($\sigma_{n_x} \approx 0.4$). Although this is large relative to the variance of a pixel, as a result of strong spatial correlations in the input, some projections of the data (low-frequency components) had an SNR over 40 dB. Output noise levels were set to −6 dB (computed as $20 \log_{10}(\langle r_j \rangle / \sigma_{n_r})$; $\sigma_{n_r} = 2$) in order to match the high variability observed in retinal ganglion cells (see below). The parameter $\lambda_j$ was adjusted to attain an average rate of one spike per neuron per input image, $\langle r_j \rangle = 1$.

The model consisted of 100 neurons. We found this number to be sufficient to produce homogeneous sets of receptive fields that spatially tiled the image patch. In the retina, the ratio of inputs (cones) to outputs (retinal ganglion cells) varies greatly, from almost 1:3 in the central fovea to more than 10:1 in the periphery [30]. Our ratio of 256:100 is within the physiological range, but other factors, such as eccentricity-dependent sampling, optical blur, and multiple ganglion cell subtypes, make exact comparisons impossible.

We initialized the filter weights and nonlinearity coefficients to random Gaussian values. The batch size was 100 patches, resampled after each update of the parameters. We trained the model for 100,000 iterations of gradient ascent with a fixed step size. Initial conditions did not affect the learned parameters, with multiple runs yielding similar results.
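Tying these pieces together, the quantity being ascended can be sketched as below, reusing the conditional_entropy_mc helper from the previous sketch (H(X) is constant in the parameters and therefore omitted, and a single λ is assumed for all neurons). The paper derives analytic gradients, which we do not reproduce; a sketch like this could instead be differentiated by finite differences or an autodiff tool.

import numpy as np

def objective(X, W, slope_fn, f_fn, Cx, var_nx, var_nr, lam=0.2):
    # Monte Carlo objective: -H(X|R) minus the metabolic penalty of Eq. (3).
    neg_cond_H = -conditional_entropy_mc(X, W, slope_fn, Cx, var_nx, var_nr)
    mean_rates = np.mean([f_fn(W.T @ x) for x in X], axis=0)  # <r_j> per neuron
    return neg_cond_H - lam * np.sum(mean_rates)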
Unlike algorithms for training generative models, such as PCA or ICA, it is not possible to synthesize data from the LN model to verify convergence to the generating parameters.

3.2 Optimal filters and nonlinearities

We found that, in the presence of significant input and output noise, the optimal filters have center-surround structure, rather than the previously reported oriented shapes (Fig. 2a). Neurons organize into two populations with On-center and Off-center filters, each independently tiling the visual space. The population contains fewer On-center neurons (41 of 100) and their filters are spatially larger (Fig. 2b). These results are consistent with measurements of receptive field structure in retinal ganglion cells [31] (Fig. 3).

Figure 2: In the presence of a biologically realistic level of noise, the optimal filters are center-surround and contain both On-center and Off-center profiles; the optimal nonlinearities are hard-rectifying functions. a. The set of learned filters for 100 model neurons. b. In pixel coordinates, contours of On-center (Off-center) filters at 50% maximum (minimum) levels. c. The learned nonlinearities for the first four model neurons, superimposed on distributions of filter outputs.

Figure 3: a. A characterization of two retinal ganglion cells obtained with a white noise stimulus [31]. We plot the estimated linear filters, horizontal slices through the filters, and the mean output as a function of input (black line, shaded area shows one standard deviation of the response). b. For comparison, we performed the same analysis on two model neurons. Note that the spatial scales of model and data filters are different.

The optimal nonlinear functions show hard rectification, with thresholds near the mode of the input distribution (Fig. 2c). Measured neural nonlinearities are typically softer, but when rectified noise is taken into account, a hard-rectified model has been shown to be a good description of neural variability [32]. The combination of hard-rectifying nonlinearities and On/Off filter organization means that the subspace encoded by the model neurons is approximately half the dimensionality of the output. For substantial levels of noise, we find that even a "complete" network (in which the number of outputs equals the number of inputs) does not span the input space and instead encodes the subspace with the highest signal power.

The metabolic cost parameters $\lambda_j$ that yielded the target output rate were close to 0.2. This means that increasing the firing rate of each neuron by one spike per image leads to an information gain of 20 bits for the entire population. This value is consistent with previous estimates of 40-70 bits per second for the optic nerve [33], and an assumption of 2-5 fixations (and thus unique images seen) per second.

To examine the effect of noise on optimal representations, we trained the model under different regimes of noise (Fig. 4). We found that decreasing input noise leads to smaller filters and a reduction in the number of On-center neurons (bottom left panel). In this case, increasing the number of neurons restored the balance of On- and Off-center filters (not shown). In the case of vanishing input and output noise, we obtain localized oriented filters (top left panel), and the nonlinearities are smoothly accelerating functions that map inputs to an exponential output distribution (not shown).
These results are consistent with previous theoretical work showing that the optimal nonlinearity in the low-noise regime maximizes the entropy of the output subject to response constraints [11, 7, 17].

Figure 4: Each panel shows a subset of filters (20 of 100) obtained under different levels of input noise ($\sigma_{n_x} = 0.10$ (20 dB), $\sigma_{n_x} = 0.18$ (15 dB), $\sigma_{n_x} = 0.40$ (8 dB)) and output noise ($\sigma_{n_r} = 0.10$ (20 dB), $\sigma_{n_r} = 2$ (−6 dB)), as well as the nonlinearity for a typical neuron in each model.

Figure 5: Information transmitted as a function of spike rate, under noisy conditions (8 dB SNR_in, −6 dB SNR_out). We compare the performance of the optimal filters (W1) to filters obtained under low-noise conditions (W2; 20 dB SNR_in, 20 dB SNR_out) and PCA filters, i.e. the first 100 eigenvectors of the data covariance matrix (W3).

How important is the choice of linear filters for efficient information transmission? We compared the performance of different filter sets across a range of firing rates (Fig. 5). For each simulation, we re-optimized the nonlinearities, adjusting the $\lambda_j$ for the desired mean rate, while holding the filters fixed. As a rough estimate of the input entropy $H(X)$, we used an upper bound: a Gaussian distribution with the covariance of natural images. Our results show that when filters are mismatched to the noise levels, performance is significantly degraded. At an equivalent output rate, the "wrong" filters transmit approximately 10 fewer bits; conversely, it takes about 50% more spikes to encode the same amount of information.

We also compared the coding efficiency of networks with a variable number of neurons. First, we fixed the allotted population spike budget to 100 (per input), fixed the absolute output noise, and varied the number of neurons from 1 (very precise) neuron to 150 (fairly noisy) neurons (Fig. 6a). We estimated the transmitted information as described above. In this regime of noise and spiking budget, the optimal population size was around 100 neurons. Next, we repeated the analysis but used neurons with fixed precision, i.e., the spike budget was scaled with the population to give 1 noisy neuron or 150 equally noisy neurons (Fig. 6b). As the population grows, more information is transmitted, but the rate of increase slows. This suggests that incorporating an additional penalty, such as a fixed metabolic cost per neuron, would allow us to predict the optimal number of canonical noisy neurons.

4 Discussion

We have described an efficient coding model that incorporates ingredients essential for computation in sensory systems: non-Gaussian signal distributions, realistic levels of input and output noise, metabolic costs, nonlinear responses, and a large population of neurons. The resulting optimal solution mimics neural behaviors observed in the retina: a combination of On and Off center-surround receptive fields, halfwave-rectified nonlinear responses, and pronounced asymmetries between the On- and the Off-center populations. In the noiseless case, our method provides a generalization of ICA and produces localized, oriented filters.

In order to make the computation of entropy tractable, we made several assumptions. First, we assumed a smooth response nonlinearity, to allow local linearization when computing entropy.
Although some of our results produce non-smooth nonlinearities, we think it unlikely that this systematically affected our findings; nevertheless, it might be possible to obtain better estimates by considering higher-order terms of the local Taylor expansion. Second, we used the global curvature of the prior density to estimate the local posterior in Eq. (7). A better approximation would be obtained from an adaptive second-order expansion of the prior density around the maximum of the posterior. This requires the estimation of local density (or rather, its curvature) from samples, which is a non-trivial problem in a high-dimensional space.

Figure 6: Transmitted information (solid line) and total spike rate (dashed line) as a function of the number of neurons, assuming (a) a fixed total spike budget and (b) a fixed spike budget per neuron.

Our results bear some resemblance to previous attempts to derive retinal properties as optimal solutions. Most notably, optimal linear transforms that optimize information transmission under a constraint on total response power have been shown to be consistent with center-surround [4] and more detailed [34] shapes of retinal receptive fields. But such linear models do not provide a unique solution, nor can they make predictions about nonlinear behaviors. An alternative formulation, using linear basis functions to reconstruct the input signal, has also been shown to exhibit center-surround shapes [35, 6]. However, this approach makes additional assumptions about the sparsity of weights in the linear filters, and it does not explicitly maximize the efficiency of the code.

Our results suggest several directions for future efforts. First, noise in our model is a known constant value. In contrast, neural systems must deal with changing levels of noise and signal, and must estimate them based only on their inputs. An interesting question, unaddressed in the current work, is how to adapt representations (e.g., synaptic weights and nonlinearities) to dynamically regulate coding efficiency. Second, we are interested in extending this model to make predictions about higher visual areas. We do not interpret our results in the noiseless case (oriented, localized filters) as predictions for optimal cortical representations. Instead, we intend to extend this framework to cortical representations that must deal with accumulated nonlinearity and noise arising from previous stages of the processing hierarchy.

References
[1] F. Attneave, "Some informational aspects of visual perception," Psychological Review, vol. 61, no. 3, pp. 183-193, 1954.
[2] H. Barlow, "Possible principles underlying the transformations of sensory messages," in Sensory Communication, pp. 217-234, MIT Press, 1961.
[3] M. V. Srinivasan, S. B. Laughlin, and A. Dubs, "Predictive coding: A fresh view of inhibition in the retina," Proceedings of the Royal Society of London, Series B, Biological Sciences, vol. 216, pp. 427-459, Nov. 1982.
[4] J. J. Atick and A. N. Redlich, "Towards a theory of early visual processing," Neural Computation, vol. 2, no. 3, pp. 308-320, 1990.
[5] J. J. Atick, "Could information theory provide an ecological theory of sensory processing?," Network: Computation in Neural Systems, vol. 3, no. 2, pp. 213-251, 1992.
[6] E. Doi and M. S. Lewicki, "A theory of retinal population coding," in Advances in Neural Information Processing Systems 19 (B. Schölkopf, J. Platt, and T. Hoffman, eds.), pp. 353-360, Cambridge, MA: MIT Press, 2007.
Schölkopf, J. Platt, and T. Hoffman, eds.), pp. 353–360, Cambridge, MA: MIT Press, 2007.
[7] J. Nadal and N. Parga, "Nonlinear neurons in the low-noise limit: a factorial code maximizes information transfer," Network: Computation in Neural Systems, vol. 5, no. 4, pp. 565–581, 1994.
[8] A. J. Bell and T. J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution," Neural Computation, vol. 7, no. 6, pp. 1129–1159, 1995.
[9] B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607–609, 1996.
[10] A. J. Bell and T. J. Sejnowski, "The 'independent components' of natural scenes are edge filters," Vision Research, vol. 37, no. 23, pp. 3327–3338, 1997.
[11] S. Laughlin, "A simple coding procedure enhances a neuron's information capacity," Z. Naturforsch., no. Sep–Oct, 1981.
[12] A. Treves, S. Panzeri, E. T. Rolls, M. Booth, and E. A. Wakeman, "Firing rate distributions and efficiency of information transmission of inferior temporal cortex neurons to natural visual stimuli," Neural Computation, vol. 11, no. 3, pp. 601–631, 1999.
[13] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck, "Adaptive rescaling maximizes information transmission," Neuron, vol. 26, no. 3, pp. 695–702, 2000. PMID: 10896164.
[14] A. L. Fairhall, G. D. Lewen, W. Bialek, and R. R. de Ruyter van Steveninck, "Efficiency and ambiguity in an adaptive neural code," Nature, vol. 412, no. 6849, pp. 787–792, 2001.
[15] M. D. McDonnell and N. G. Stocks, "Maximally informative stimuli and tuning curves for sigmoidal rate-coding neurons and populations," Physical Review Letters, vol. 101, no. 5, p. 058103, 2008.
[16] W. B. Levy and R. A. Baxter, "Energy efficient neural codes," Neural Computation, vol. 8, no. 3, pp. 531–543, 1996.
[17] R. Baddeley, L. F. Abbott, M. C. Booth, F. Sengpiel, T. Freeman, E. A. Wakeman, and E. T. Rolls, "Responses of neurons in primary and inferior temporal visual cortices to natural scenes," Proceedings of the Royal Society B: Biological Sciences, vol. 264, no. 1389, pp. 1775–1783, 1997.
[18] V. Balasubramanian and M. J. Berry, "A test of metabolically efficient coding in the retina," Network: Computation in Neural Systems, vol. 13, no. 4, pp. 531–552, 2002.
[19] L. Franco, E. T. Rolls, N. C. Aggelopoulos, and J. M. Jerez, "Neuronal selectivity, population sparseness, and ergodicity in the inferior temporal visual cortex," Biol. Cybernetics, vol. 96, no. 6, pp. 547–560, 2007.
[20] S. B. Laughlin, R. R. V. Steveninck, and J. C. Anderson, "The metabolic cost of neural information," Nat. Neurosci., vol. 1, no. 1, pp. 36–41, 1998.
[21] P. Lennie, "The cost of cortical computation," Current Biology, vol. 13, pp. 493–497, Mar. 2003.
[22] D. Attwell and S. B. Laughlin, "An energy budget for signaling in the grey matter of the brain," Journal of Cerebral Blood Flow and Metabolism, vol. 21, no. 10, pp. 1133–1145, 2001.
[23] D. K. Warland, P. Reinagel, and M. Meister, "Decoding visual information from a population of retinal ganglion cells," Journal of Neurophysiology, vol. 78, no. 5, pp. 2336–2350, 1997.
[24] J. W. Pillow, L. Paninski, V. J. Uzzell, E. P. Simoncelli, and E. J. Chichilnisky, "Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model," The Journal of Neuroscience, vol. 25, no. 47, pp. 11003–11013, 2005.
[25] D. J. Heeger, "Half-squaring in responses of cat striate cells," Visual Neuroscience, vol. 9, no. 5, pp. 427–443, 1992.
[26] W. Bialek, R. van Steveninck, and N. Tishby, "Efficient representation as a design principle for neural coding and computation," in IEEE International Symposium on Information Theory, pp. 659–663, 2006.
[27] T. von der Twer and D. I. A. MacLeod, "Optimal nonlinear codes for the perception of natural colours," Network: Computation in Neural Systems, vol. 12, no. 3, pp. 395–407, 2001.
[28] A. M. Walczak, G. Tkačik, and W. Bialek, "Optimizing information flow in small genetic networks. II. Feed-forward interactions," Physical Review E, vol. 81, no. 4, p. 041905, 2010.
[29] E. Doi, T. Inui, T.-W. Lee, T. Wachtler, and T. J. Sejnowski, "Spatiochromatic receptive field properties derived from information-theoretic analyses of cone mosaic responses to natural scenes," Neural Computation, vol. 15, pp. 397–417, 2003.
[30] H. Wässle, U. Grünert, J. Röhrenbeck, and B. B. Boycott, "Retinal ganglion cell density and cortical magnification factor in the primate," Vision Research, vol. 30, no. 11, pp. 1897–1911, 1990.
[31] E. J. Chichilnisky and R. S. Kalmar, "Functional asymmetries in ON and OFF ganglion cells of primate retina," The Journal of Neuroscience, vol. 22, no. 7, pp. 2737–2747, 2002.
[32] M. Carandini, "Amplification of trial-to-trial response variability by neurons in visual cortex," PLoS Biol., vol. 2, no. 9, p. e264, 2004.
[33] C. L. Passaglia and J. B. Troy, "Information transmission rates of cat retinal ganglion cells," Journal of Neurophysiology, vol. 91, no. 3, pp. 1217–1229, 2004.
[34] E. Doi, J. L. Gauthier, G. D. Field, J. Shlens, A. Sher, M. Greschner, T. Machado, K. Mathieson, D. Gunning, A. M. Litke, L. Paninski, E. J. Chichilnisky, and E. P. Simoncelli, "Redundant representations in macaque retinal populations are consistent with efficient coding," in Computational and Systems Neuroscience (CoSyNe), February 2011.
[35] B. T. Vincent and R. J. Baddeley, "Synaptic energy efficiency in retinal processing," Vision Research, vol. 43, no. 11, pp. 1285–1292, 2003.
Efficient Offline Communication Policies for Factored Multiagent POMDPs

Matthijs T. J. Spaan, Delft University of Technology, Delft, The Netherlands, [email protected]
João V. Messias, Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal, [email protected]
Pedro U. Lima, Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal, [email protected]

Abstract

Factored Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) form a powerful framework for multiagent planning under uncertainty, but optimal solutions require a rigid history-based policy representation. In this paper we allow inter-agent communication, which turns the problem into a centralized Multiagent POMDP (MPOMDP). We map belief distributions over state factors to an agent's local actions by exploiting structure in the joint MPOMDP policy. The key point is that when sparse dependencies between the agents' decisions exist, the belief over its local state factors is often sufficient for an agent to unequivocally identify the optimal action, and communication can be avoided. We formalize these notions by casting the problem into convex optimization form, and present experimental results illustrating the savings in communication that we can obtain.

1 Introduction

Intelligent decision making in real-world scenarios requires an agent to take into account its limitations in sensing and actuation. These limitations lead to uncertainty about the state of the environment, as well as about how the environment will respond to performing a certain action. When multiple agents interact and cooperate in the same environment, the optimal decision-making problem is particularly challenging. For an agent in isolation, planning under uncertainty has been studied using decision-theoretic models like Partially Observable Markov Decision Processes (POMDPs) [4]. Our focus is on multiagent techniques, building on the factored Multiagent POMDP model. In this paper, we propose a novel method that exploits sparse dependencies in such a model in order to reduce the amount of inter-agent communication.

The major source of intractability for optimal Dec-POMDP solvers is that they typically reason over all possible histories of observations other agents can receive. In this work, we consider factored Dec-POMDPs in which communication between agents is possible, which has already been explored for non-factored models [10, 11, 15, 13] as well as for factored Dec-MDPs [12]. When agents share their observations at each time step, the decentralized problem reduces to a centralized one, known as a Multiagent POMDP (MPOMDP) [10]. In this work, we develop individual policies which map beliefs over state factors to actions or communication decisions.

Maintaining an exact, factorized belief state is typically not possible in cooperative problems. While bounded approximations are possible for probabilistic inference [2], these results do not carry over directly to decision-making settings (but see [5]). Intuitively, even a small difference in belief can lead to a different action being taken. However, when sparse dependencies between the agents' decisions exist, the belief over its local state factors is often sufficient for an agent to identify the action that it should take, and communication can be avoided. We formalize these notions as convex optimization problems, extracting those situations in which communication is superfluous.
We present experimental results showing the savings in communication that we can obtain, and the overall impact on decision quality. The rest of the paper is organized as follows. First, Section 2 presents the necessary background material. Section 3 presents the formalization of our method to associate belief points over state factors with actions. Next, Section 4 illustrates the concepts with experimental results, and Section 5 provides conclusions and discusses future work.

2 Background

In this section we provide background on factored Dec-POMDPs and Multiagent POMDPs. A factored Dec-POMDP is defined as the following tuple [8]: D = {1, ..., n} is the set of agents (D_i will be used to refer to agent i); S = ×_i X_i, i = 1, ..., n_f, is the state space, decomposable into n_f factors X_i ∈ {1, ..., m_i} which lie inside a finite range of integer values, with X = {X_1, ..., X_{n_f}} the set of all state factors; A = ×_i A_i, i = 1, ..., n, is the joint action space (at each step, every agent i takes an individual action a_i ∈ A_i, resulting in the joint action a = ⟨a_1, ..., a_n⟩ ∈ A); O = ×_i O_i, i = 1, ..., n, is the space of joint observations o = ⟨o_1, ..., o_n⟩, where o_i ∈ O_i are the individual observations (an agent receives only its own observation); T : S × S × A → [0, 1] specifies the transition probabilities Pr(s′ | s, a); O : O × S × A → [0, 1] specifies the joint observation probabilities Pr(o | s′, a); R : S × A → R specifies the reward for performing action a ∈ A in state s ∈ S; b^0 ∈ B is the initial state distribution, where the set B is the space of all possible distributions over S; and h is the planning horizon.

The main advantage of factored (Dec-)POMDP models over their standard formulation lies in their more efficient representation. Existing methods for factored Dec-POMDPs can partition the decision problem across local subsets of agents, due to the possible independence between their actions and observations [8]. A natural state-space decomposition is to perform an agent-wise factorization, in which a state in the environment corresponds to a unique assignment over the states of individual agents. Note that this does not preclude the existence of state factors which are common to multiple agents.

The possibility of exchanging information between agents greatly influences the overall complexity of solving a Dec-POMDP. In a fully communicative Dec-POMDP, the decentralized model can be reduced to a centralized one, the so-called Multiagent POMDP (MPOMDP) [10]. An MPOMDP is a regular single-agent POMDP but defined over the joint models of all agents. In a Dec-POMDP, at each step t an agent i knows only a_i and o_i, while in an MPOMDP it is assumed to know a and o. In the latter case, inter-agent communication is necessary to share the local observations. Solving an MPOMDP is of a lower complexity class than solving a Dec-POMDP (PSPACE-complete vs. NEXP-complete) [1].

It is well known that, for a given decision step t, the value function V^t of a POMDP is a piecewise linear, convex function [4], which can be represented as

    V^t(b^t) = max_{α ∈ Γ^t} α^⊤ b^t,    (1)

where Γ^t is a set of vectors (traditionally referred to as α-vectors). Every α ∈ Γ^t has a particular joint action a associated with it, which we will denote as φ(α). The transpose operator is here denoted as (·)^⊤. In this work, we assume that a value function is given for the Multiagent POMDP. However, this value function need not be optimal, nor stationary. Our techniques preserve the quality of the supplied value function, even if it is an approximation.
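Evaluating (1) amounts to one inner product per α-vector followed by a maximization. The sketch below is our own illustration (not the authors' implementation), assuming the α-vectors are stacked as rows of a matrix Gamma and that actions[i] stores the joint action φ(α_i):

```python
import numpy as np

def value_and_action(b, Gamma, actions):
    """PWLC value function of Eq. (1): V(b) = max_alpha alpha . b,
    together with the joint action phi(alpha) of the maximizing vector."""
    scores = Gamma @ b              # one inner product per alpha-vector
    best = int(np.argmax(scores))
    return scores[best], actions[best]
```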
A joint belief state is a probability distribution over the set of states S, and encodes all of the information gathered by all agents in the Dec-POMDP up to a given time t:

    b^t(s) = Pr(s^t | o^{t−1}, a^{t−1}, o^{t−2}, a^{t−2}, ..., o^1, a^1, b^0) = Pr(X_1^t, ..., X_{n_f}^t | ·).    (2)

A factored belief state is a representation of this very same joint belief as the product of n_F assumed independent belief states over the state factors X_i, which we will refer to as belief factors:

    b^t = ∏_{i=1}^{n_F} b^t_{F_i}.    (3)

Every factor b^t_{F_i} is defined over a subset F_i ⊆ X of state factors, so that

    b^t(s) ≈ Pr(F_1^t | ·) Pr(F_2^t | ·) ··· Pr(F_{n_F}^t | ·),    (4)

with F_i ∩ F_j = ∅ for all i ≠ j. A belief point over factors L which are locally available to the agent will be denoted b_L. The marginalization of b onto b_F is

    b^t_F(F^t) = Pr(F^t | a^{1,...,t−1}, o^{1,...,t−1}) = Σ_{X^t \ F^t} Pr(X_1^t, X_2^t, ..., X_{n_f}^t | ·) = Σ_{X^t \ F^t} b^t(s^t),    (5)

which can be viewed as a projection of b onto the smaller subspace B_F:

    b_F = M_F^X b,    (6)

where M_F^X is a matrix with M_F^X(u, v) = 1 if the assignments to all state factors contained in state u ∈ F are the same as in state v ∈ X, and 0 otherwise. This intuitively carries out the marginalization of points in B onto B_F.
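As an illustration of (5)–(6), the following sketch builds the 0/1 marginalization matrix and projects a joint belief onto a factor subset. It assumes joint states are enumerated lexicographically over the factors (an assumption of this sketch, not of the paper):

```python
import numpy as np
from itertools import product

def marginalization_matrix(sizes, F):
    """M_F^X of Eq. (6): M[u, v] = 1 iff joint state v agrees with local
    state u on every factor in F. sizes[i] = |X_i|; F is a list of factor
    indices. Joint states are enumerated in lexicographic order."""
    joint = list(product(*(range(m) for m in sizes)))
    local = {loc: u for u, loc in
             enumerate(product(*(range(sizes[i]) for i in F)))}
    M = np.zeros((len(local), len(joint)))
    for v, s in enumerate(joint):
        M[local[tuple(s[i] for i in F)], v] = 1.0
    return M

# b_F = marginalization_matrix(sizes, F) @ b projects a joint belief b
# onto the factors in F, as in Eq. (6).
```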
3 Exploiting Sparse Dependencies in Multiagent POMDPs

In the implementation of Multiagent POMDPs, an important practical issue is raised: since the joint policy arising from the value function maps joint beliefs to joint actions, all agents must maintain and update the joint belief equivalently for their decisions to remain consistent. The amount of communication required to make this possible can then become problematically large. Here, we will deal with a fully communicative team of agents, but we will be interested in minimizing the necessary amount of communication. Even if agents can communicate with each other freely, they might not need to always do so in order to act independently, or even cooperatively. The problem of when and what to communicate has been studied before for Dec-MDPs [12], where factors can be directly observed with no associated uncertainty, by reasoning over the possible local alternative actions to a particular assignment of observable state features. For MPOMDPs, this had been approximated at runtime, but implied keeping track of, and reasoning over, a rapidly growing number of possible joint belief points [11]. We will describe a method to map a belief factor (or several factors) directly to a local action, or to a communication decision, when applicable. Our approach is the first to exploit, offline, the structure of the value function itself in order to identify regions of belief space where an agent may act independently. This raises the possibility of developing more flexible forms for joint policies which can be efficiently decoupled whenever this is advantageous in terms of communication. Furthermore, since our method runs offline, it is not mutually exclusive with online communication-reduction techniques: it can be used as a basis for further computations at runtime, thereby increasing their efficiency.

3.1 Decision-making with factored beliefs

Note that, as fully described in [2], the factorization (4) typically results in an approximation of the true joint belief, since it is seldom possible to decouple the dynamics of an MDP into strictly independent subprocesses. The dependencies between factors, induced by the transition and observation model of the joint process, quickly develop correlations when the horizon of the decision problem is increased, even if these dependencies are sparse. Still, it was proven in [2] that, if some of these dependencies are broken, the resulting error (measured as the KL-divergence) of the factored belief state, with respect to the true joint belief, is bounded. Unfortunately, even a small error in the belief state can lead to different actions being selected, which may significantly affect the decision quality of the multiagent team in some settings [5, 9]. However, in rapidly mixing processes (i.e., models with transition functions which quickly propagate uncertainty), the overall negative effect of using this approximation is minimized.

Each belief factor's dynamics can be described using a two-stage Dynamic Bayesian Network (DBN). For an agent to maintain, at each time step, a set of belief factors, it must have access to the state factors contained in a particular time slice of the respective DBNs. This can be accomplished either through direct observation, when possible, or by requesting this information from other agents. In the latter case, it may be necessary to perform additional communication in order to keep belief factors consistent. The amount of data to be communicated in this case, as well as its frequency, depends largely on the factorization scheme which is selected for a particular problem. We will not be concerned here with the problem of obtaining a suitable partition scheme of the joint belief onto its factors. Such a partitioning is typically simple to identify for multi-agent teams which exhibit sparsity of interaction. Instead we will focus on the amount of communication which is necessary for the joint decision-making of the multi-agent team.

3.2 Formal model

We will hereafter focus on the value function, and its associated quantities, at a given decision step t, and, for simplicity, we shall omit this dependency. However, we restate that the value function does not need to be stationary: for a finite-horizon problem, the following methods can simply be applied for every t = 1, ..., h.

3.2.1 Value Bounds Over Local Belief Space

Recall that, for a given α-vector, V_α(b) = α · b represents the expected reward for selecting the action associated with α. Ideally, if this quantity could be mapped from a local belief point b_L, then it would be possible to select the best action for an agent based only on its local information. This is typically not possible, since the projection (6) is non-invertible. However, as we will show, it is possible to obtain bounds on the achievable value of any given vector in local belief space. The available information regarding V_α(b) in local space can be expressed in the linear forms

    V_α(b) = α · b,    1_n^⊤ b = 1,    M_L^X b = b_L,    (7)

where 1_n = [1 1 ... 1]^⊤ ∈ R^n. Let m be the size of the local belief factor which contains b_L. Reducing this system, we can associate V_α(b) with b and b_L, having at least n − m free variables in the leading row, induced by the locally unavailable dimensions of b. The resulting equation can be rewritten as

    V_α(b) = γ^⊤ b + δ^⊤ b_L + c,    (8)

with γ ∈ R^n, δ ∈ R^m and c ∈ R. By maximizing (or minimizing) the terms associated with the potentially free variables, we can use this form to establish the maximum (and minimum) value that can be attained at b_L.
, m and ? ? Rm : ?i = minj?Ii ?j , i = 1, . . . , m. The maximum achievable value for a local belief point, bL , according to ?, is:  V? (bL ) = ? + ? ? bL + ? . (9) 4 Analogously, the minimum achievable value is  V? (bL ) = ? + ? ? bL + ? , (10) Proof. First, we shall establish that V? (bL ) is an upper bound on V? (b). The set Ii contains the indexes of the elements of b which marginalize onto (bL )i . From the definition of ? it follows that, ?b ? B: X X ? i bj ? ?j bj , i = 1, . . . , m ? j?Ii ? ? i (bL )i ? j?Ii X ?j b j , i = 1, . . . , m , j?Ii where we used the fact that P bj = (bL )i . Summing over all i, this implies that ? ? bL ? ? ? b. j?Ii Using (8) and (9), ? ? bL + ? ? bL + ? ? ? ? b + ? ? bL + ? ? V? (bL ) ? V? (b) Next, we need to show that ?b ? B : V? (bL ) = V? (b). Since 1Tn b = 1 and bi ? 0 ?i, ? ? b is a convex combination of the elements in ?. Consequently, max ? ? b = max ? ? MLX b = max ?i b?B Therefore, for bm = arg max ? ? b, we have that b?B i b?B V? (MLX bm ) = V? (bm ). The proof for the minimum achievable value V? (bL ) is analogous. By obtaining the bounds (9) and (10), we have taken a step towards identifying the correct action for an agent to take, based on the local information contained in bL . From their evaluation, the following remarks can be made: if ? and ?? are such that V?? (bL ) ? V? (bL ), then ?? is surely not the maximizing vector at b; if this property holds for all ?? such that (?(?? ))i 6= (?(?))i , then by following the action associated with ?, agent i will accrue at least as much value as with any other vector for all possible b subject to (6). That action can be safely selected without needing to communicate. The complexity of obtaining the local value bounds for a given value function is basically that of reducing the system (7) for each vector. This is typically achieved through Gaussian Elimination, with an associated complexity of O(n(m + 2)2 ) [3]. Note that the dominant term corresponds to the size of the local belief factor, which is usually exponentially smaller than n. This is repeated for all vectors, and if pruning is then done over the resulting set (the respective cost is O(|?|2 )), the total complexity is O(|?|n(m + 2)2 + |?|2 ). The pruning process used here is the same as what is typically done by POMDP solvers [14]. 3.2.2 Dealing With Locally Ambiguous Actions The definition of the value bounds (9) and (10) only allows an agent to act in atypical situations in which an action is clearly dominant in terms of expected value. However, this is often not the case, particularly when considering a large decision horizon, since the present effects of any given action on the overall expected reward are typically not pronounced enough for these considerations to be practical. In a situation where multiple value bounds are conflicting (i.e. V? (bL ) > V?? (bL ) and V? (bL ) < V?? (bL )), an agent is forced to further reason about which of those actions is best. In order to tackle this problem, let us assume that two actions a and a? have conflicting bounds at ? bL . Given ?a = {? ? ? : (?(?))i = a} and similarly defined ?a , we will define the matrices ? ? A = [?ai ]k?n , i = 1, . . . , |?a | and A? = [?ai ]k? ?n , i = 1, . . . , |?a |. Then, the vectors v = Ab ? and v? = A? b (in Rk and Rk respectively) contain all possible values attainable at b through the ? vectors in ?a and ?a . Naturally, we will be interested in the maximum of these values for each action. 
3.2.2 Dealing With Locally Ambiguous Actions

The definition of the value bounds (9) and (10) only allows an agent to act in atypical situations in which an action is clearly dominant in terms of expected value. However, this is often not the case, particularly when considering a large decision horizon, since the present effects of any given action on the overall expected reward are typically not pronounced enough for these considerations to be practical. In a situation where the value bounds are conflicting (i.e., V_α^max(b_L) > V_{α′}^min(b_L) and V_α^min(b_L) < V_{α′}^max(b_L)), an agent is forced to further reason about which of those actions is best.

In order to tackle this problem, let us assume that two actions a and a′ have conflicting bounds at b_L. Given Γ_a = {α ∈ Γ : (φ(α))_i = a} and the similarly defined Γ_{a′}, we define the matrices A = [α_j^⊤]_{k×n}, with α_j ∈ Γ_a, j = 1, ..., |Γ_a|, and A′ = [α_j^⊤]_{k′×n}, with α_j ∈ Γ_{a′}, j = 1, ..., |Γ_{a′}|. Then the vectors v = A b and v′ = A′ b (in R^k and R^{k′}, respectively) contain all possible values attainable at b through the vectors in Γ_a and Γ_{a′}. Naturally, we will be interested in the maximum of these values for each action. In particular, we want to determine whether max_i v_i is greater than max_j v′_j for all possible b such that b_L = M_L^X b. If this is the case, then a should be selected as the best action, since it is guaranteed to provide a higher value at b_L than a′.

The problem of determining the minimum value of v − v′ at b_L can be expressed as the following set of Linear Programs (LPs) [6]. Note that x ⪰ y is here taken to mean that x_i ≥ y_i for all i:

    for i = 1, ..., |Γ_{a′}|:
        maximize   α′_i · b − s
        subject to A b ⪯ 1_k s,   M_L^X b = b_L,   b ⪰ 0_n,   1_n^⊤ b = 1.    (11)

If the solution b_opt to each of these LPs is such that max_i (A b_opt)_i ≥ max_j (A′ b_opt)_j, then action a can be safely selected based on b_L. If this is not the case for any of the solutions, then it is not possible to map the agent's best action solely through b_L. In order to disambiguate every possible action, this optimization needs to be carried out for all conflicting pairs of actions. However, a less computationally expensive alternative is to approximate the optimization (11) by a single LP (refer to [6] for more details):

    maximize   1_{k′}^⊤ ψ
    subject to A b ⪯ 1_k s,   A′ b = 1_{k′} s + ψ,   M_L^X b = b_L,   b ⪰ 0_n,   1_n^⊤ b = 1.    (12)
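For reference, the single-LP check (12) maps directly onto a generic LP solver. The sketch below, using SciPy's linprog over the decision variables x = [b, s, ψ], is an illustrative reconstruction under our reading of (12), not the authors' code:

```python
import numpy as np
from scipy.optimize import linprog

def ambiguity_check(A, Ap, M, b_L):
    """Approximate test in the spirit of LP (12): can a joint belief
    consistent with the local belief b_L give action a' an advantage
    over action a? A and Ap stack the alpha-vectors of a and a' as rows;
    M is the marginalization matrix with M b = b_L."""
    k, n = A.shape
    kp = Ap.shape[0]
    # decision variables: x = [b (n), s (1), psi (kp)]
    c = np.concatenate([np.zeros(n + 1), -np.ones(kp)])     # maximize sum(psi)
    A_ub = np.hstack([A, -np.ones((k, 1)), np.zeros((k, kp))])   # A b <= s
    b_ub = np.zeros(k)
    A_eq = np.vstack([
        np.hstack([Ap, -np.ones((kp, 1)), -np.eye(kp)]),         # A' b = s + psi
        np.hstack([M, np.zeros((M.shape[0], 1 + kp))]),          # M b = b_L
        np.hstack([np.ones((1, n)), np.zeros((1, 1 + kp))]),     # sum(b) = 1
    ])
    b_eq = np.concatenate([np.zeros(kp), b_L, [1.0]])
    bounds = [(0, None)] * n + [(None, None)] * (1 + kp)         # b >= 0; s, psi free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    # some psi_j > 0 at the optimum: b_L alone may not suffice to prefer a
    return bool(res.success) and res.x[n + 1:].max() > 0
```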
3.2.3 Mapping Local Belief Points to Communication Decisions

For an environment with only two belief factors, the method described so far could already incorporate an explicit communication policy: given the local belief b_L of an agent, if it is possible to unequivocally identify any action as being maximal, then that action can be safely executed without any loss of expected value. Otherwise, the remaining belief factor should be requested from the other agents, in order to reconstruct b through (4), and map that agent's action through the joint policy. However, in most scenarios it is not sufficient to know whether or not to communicate: equally important are the issues of what to communicate, and with whom. Let us consider the general problem with n_F belief factors contained in the set F. In this case there are 2^{|F|−1} combinations of non-local factors which the agent can request. Our goal is to identify one such combination which contains enough information to disambiguate the agent's actions. Central to this process is the ability to quickly determine, for a given set of belief factors G ⊆ F, whether there are no points in b_G with non-decidable actions. The exact solution to this problem would require, in the worst case, the solution of |Γ_a| · |Γ_{a′}| LPs of the form (11) for every pair of actions with conflicting value bounds. However, a modification of the approximate LP (12) allows us to tackle this problem efficiently:

    maximize   1_{k′}^⊤ ψ + 1_k^⊤ ψ′
    subject to A b ⪯ 1_k s,     A′ b = 1_{k′} s + ψ,     M_L^X b = b_L,
               A′ b′ ⪯ 1_{k′} s′,   A b′ = 1_k s′ + ψ′,   M_L^X b′ = b_L,
               b ⪰ 0_n,   b′ ⪰ 0_n,   M_G^X b = M_G^X b′.    (13)

The rationale behind this formulation is that any solution to the LP in which max_i ψ_i > 0 and max_j ψ′_j > 0 simultaneously identifies two different points b and b′ which map to the same point b_G in G, but have different maximizing actions, a′ and a respectively. This implies that, in order to select an action unambiguously from the belief over G, no such solution may be possible. Equipped with this result, we can now formulate a general procedure that, for a set of belief points in local space, returns the corresponding belief factors which must be communicated in order for an agent to act unambiguously. We refer to this as obtaining the communication map for the problem.

This procedure is as follows (a more detailed version is included in [6]): we begin by computing the value bounds of V over the local factors L, and sampling N reachable local belief points b_L; for each of these points, if the value bounds of the best action are not conflicting (see Section 3.2.1), or any conflicting bounds are resolved by LP (12), we can mark b_L as safe, add it to the communication map, and continue on to the next point; otherwise, using LP (13), we search for the minimum set of non-local factors G which resolves all conflicts; we then associate b_L with G and add it to the map. During execution, an agent updates its local information b_L, finds the nearest-neighbor point in the communication map, and requests the corresponding factors from the other agents. The agent then selects the action which exhibits the highest maximum value bound given the resulting information.
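At runtime, the map lookup reduces to a nearest-neighbor query followed by the bound comparison. An illustrative sketch, in which the callbacks request_factors and upper_bounds stand in for the communication and value-bound machinery (our names, not the paper's):

```python
import numpy as np

def act_with_communication_map(b_L, comm_map, request_factors, upper_bounds):
    """comm_map: list of (sampled local belief point, factors to request;
    an empty set means the point is 'safe'). Picks the nearest stored
    point, requests its factors, and returns the action with the highest
    maximum value bound given the gathered information."""
    pts = np.array([p for p, _ in comm_map])
    nearest = int(np.argmin(np.linalg.norm(pts - b_L, axis=1)))
    _, factors = comm_map[nearest]
    info = request_factors(factors)      # communication step (may be a no-op)
    bounds = upper_bounds(b_L, info)     # dict: action -> max value bound
    return max(bounds, key=bounds.get)
```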
For shorter-horizon solutions, agents may not be able to reach their goal, and they communicate so as to minimize negative reward (collisions). For the infinitehorizon case, however, typically only one of the agents communicates, while waiting for its partner to clear the passage. Note that this relationship between the problem?s horizon and the amount of communication savings does not hold for all of the problems. The proposed method exploits the invariance of local policies over subsets of the joint belief space, and this may arbitrarily change with the problem?s horizon. A larger example is displayed in Figure 1b. This is an adaptation of the Relay-Small problem (aptly named Relay-Large) to a setting in which each room has four different states, and each agent may be carrying a package at a given time. Agent D1 may retrieve new packages from position L1, and D2 h. Relay-Small Full Comm. Red. Comm. OneDoor Full Comm. Red. Comm. 6 10 ? 15.4, 100% 39.8, 100% 77.5, 100% 0.35, 100% 1.47, 100% 2.31, 100% 14.8, 56.9% 38.7, 68.2% 73.9, 46.1% 0.30, 89.0% 1.38, 76.2% 2.02, 61.3% Relay-Large Full Comm. Red. Comm. 27.4, 100% -19.7, 100% 134.0, 100% 25.8, 44.1% -21.6, 62,5% 129.7, 58.9% Table 1: Results of the proposed method for various environments. For settings assuming full and reduced communication, we show empirical control quality, online communication usage. 7 Relay-Small 6 10 ? 1.1 4.3 0.1 5.9 21.4 7.4 h Perseus Comm. Map OneDoor 6 10 ? 7.3 33.3 5.3 12.4 57.7 5.9 Relay-Large 6 10 ? 239.5 643.0 31.5 368.7 859.5 138.1 Table 2: Running time (in seconds) of the proposed method in comparison to the Perseus point-based POMDP solver. Pruned Value Bounds (Relay) 160 140 140 120 120 100 100 80 Exchange Sense 80 60 60 40 40 20 Shuffle V V Value Bounds (Relay) 160 0 0.2 0.4 0.6 0.8 1 20 0 (bX1 )1 0.2 0.4 0.6 0.8 1 (bX1 )1 Figure 2: Value bounds for the Relay-Small problem. The dashed lines indicate the minimum value bounds, and the filled lines represent the maximum value bounds, for each action. can deliver them to L2, receiving for that a positive reward. There are a total of 64 possible states for the environment. Here, since the agents can act independently for a longer time, the communication savings are more pronounced, as shown in Table 1. Finally, we argue that the running time of the proposed algorithm is comparable to that of general POMDP solvers for these same environments. Even though both the solver and the mapper algorithms must be executed in sequence, the results in Table 2 show that they are typically both in the same order of magnitude. 5 Conclusions and Future Work Traditional multiagent planning on partially observable environments mostly deals with fullycommunicative or non-communicative situations. For a more realistic scenario where communication should be used only when necessary, state-of-the-art methods are only capable of approximating the optimal policy at run-time [11, 15]. Here, we have analyzed the properties of MPOMDP models which can be exploited in order to increase the efficiency of communication between agents. We have shown that these properties hold, for various MPOMDP scenarios, and that the decision quality can be maintained while significantly reducing the amount of communication, as long as the dependencies within the model are sparse. Although one of the main features of these techniques is that they may be applied to any given MPOMDP value function, in some situations this value function may be costly to obtain. 
As future work, we will investigate methods for obtaining MPOMDP value functions that are easy to partition using our techniques.

Acknowledgments

This work was funded in part by Fundação para a Ciência e a Tecnologia (ISR/IST pluriannual funding) through the PIDDAC Program funds and was supported by project CMU-PT/SIA/0023/2009 under the Carnegie Mellon-Portugal Program. J.M. was supported by a PhD Student Scholarship, SFRH/BD/44661/2008, from the Portuguese FCT POCTI programme. M.S. is funded by the FP7 Marie Curie Actions Individual Fellowship #275217 (FP7-PEOPLE-2010-IEF).

References

[1] Daniel S. Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4):819–840, 2002.
[2] Xavier Boyen and Daphne Koller. Tractable inference for complex stochastic processes. In Proc. of Uncertainty in Artificial Intelligence, 1998.
[3] X. G. Fang and G. Havas. On the worst-case complexity of integer Gaussian elimination. In Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, pages 28–31. ACM, 1997.
[4] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134, 1998.
[5] David A. McAllester and Satinder Singh. Approximate planning for factored POMDPs using belief state simplification. In Proc. of Uncertainty in Artificial Intelligence, 1999.
[6] J. V. Messias, M. T. J. Spaan, and P. U. Lima. Supplementary material for "Efficient Offline Communication Policies for Factored Multiagent POMDPs". ISR/IST, 2011.
[7] Frans A. Oliehoek, Matthijs T. J. Spaan, and Nikos Vlassis. Dec-POMDPs with delayed communication. In Multi-agent Sequential Decision Making in Uncertain Domains, 2007. Workshop at AAMAS07.
[8] Frans A. Oliehoek, Matthijs T. J. Spaan, Shimon Whiteson, and Nikos Vlassis. Exploiting locality of interaction in factored Dec-POMDPs. In Proc. of Int. Conference on Autonomous Agents and Multi Agent Systems, 2008.
[9] P. Poupart and C. Boutilier. Value-directed belief state approximation for POMDPs. In Proc. of Uncertainty in Artificial Intelligence, volume 130, 2000.
[10] David V. Pynadath and Milind Tambe. The communicative multiagent team decision problem: Analyzing teamwork theories and models. Journal of Artificial Intelligence Research, 16:389–423, 2002.
[11] M. Roth, R. Simmons, and M. Veloso. Decentralized communication strategies for coordinated multi-agent policies. In Multi-Robot Systems: From Swarms to Intelligent Automata, volume IV. Kluwer Academic Publishers, 2005.
[12] Maayan Roth, Reid Simmons, and Manuela Veloso. Exploiting factored representations for decentralized execution in multi-agent teams. In Proc. of Int. Conference on Autonomous Agents and Multi Agent Systems, 2007.
[13] Matthijs T. J. Spaan, Frans A. Oliehoek, and Nikos Vlassis. Multiagent planning under uncertainty with stochastic communication delays. In Proc. of Int. Conf. on Automated Planning and Scheduling, pages 338–345, 2008.
[14] Chelsea C. White. Partially observed Markov decision processes: a survey. Annals of Operations Research, 32, 1991.
[15] Feng Wu, Shlomo Zilberstein, and Xiaoping Chen. Multi-agent online planning with communication. In Int. Conf. on Automated Planning and Scheduling, 2009.
Robust Lasso with missing and grossly corrupted observations

Nam H. Nguyen, Johns Hopkins University, [email protected]
Nasser M. Nasrabadi, U.S. Army Research Lab, [email protected]
Trac D. Tran, Johns Hopkins University, [email protected]

Abstract

This paper studies the problem of accurately recovering a sparse vector β* from highly corrupted linear measurements y = Xβ* + e* + w, where e* is a sparse error vector whose nonzero entries may be unbounded and w is a bounded noise. We propose a so-called extended Lasso optimization which takes into consideration sparse prior information on both β* and e*. Our first result shows that the extended Lasso can faithfully recover both the regression and the corruption vectors. Our analysis relies on a notion of extended restricted eigenvalue for the design matrix X. Our second set of results applies to a general class of Gaussian design matrices X with i.i.d. rows N(0, Σ), for which we establish a surprising phenomenon: the extended Lasso can recover the exact signed supports of both β* and e* from only Ω(k log p log n) observations, even when the fraction of corruption is arbitrarily close to one. Our analysis also shows that this number of observations is optimal for achieving exact signed support recovery.

1 Introduction

One of the central problems in statistics is linear regression, in which the goal is to accurately estimate a regression vector β* ∈ R^p from the noisy observations

    y = Xβ* + w,    (1)

where X ∈ R^{n×p} is the measurement or design matrix, and w ∈ R^n is the stochastic observation noise vector. A particular situation that has recently attracted much attention from the research community concerns the model in which the number of regression variables p is larger than the number of observations n (p ≫ n). In such circumstances, without imposing some additional assumptions on this model, it is well known that the problem is ill-posed, and thus the linear regression is not consistent. Accordingly, there have been various lines of work on high-dimensional inference based on imposing different types of structure constraints, such as sparsity and group sparsity [15] [5] [21]. Among them, the most popular model focuses on the sparsity assumption for the regression vector. To estimate β, a standard method, namely the Lasso [15], was proposed, using the ℓ1-penalty as a surrogate function to enforce the sparsity constraint:

    min_β (1/2) ‖y − Xβ‖²₂ + λ ‖β‖₁,    (2)

where λ is the positive regularization parameter and the ℓ1-norm is defined by ‖β‖₁ = Σ_{i=1}^p |β_i|. During the past few years, there have been numerous studies of ℓ1-regularization for sparse regression models [23] [11] [10] [17] [4] [2] [22]. These works are mainly characterized by
We notice that all the previous work relies on the assumption that the observation noise has bounded energy. Without this assumption, it is very likely that the estimated regressor is either not reliable or unable to identify the correct support. With this observation in mind, in this paper, we extend the linear model (1) by considering the noise with unbounded energy. It is clear that if all the entries of y is corrupted by large error, then it is impossible to faithfully recover the regression vector ? ? . However, in many practical applications such as face and acoustic recognition, only a portion of the observation vector is contaminated by gross error. Formally, we have the mathematical model y = X? ? + e? + w, (3) where e? ? Rn is the sparse error whose locations of nonzero entries are unknown and magnitudes can be arbitrarily large and w is another noise vector with bounded entries. In this paper, we assume that w has a multivariate Gaussian N (0, ? 2 In?n ) distribution. This model also includes as a particular case the missing data problem in which all the entries of y is not fully observed, but some are missing. This problem is particularly important in computer vision and biology applications. If some entries of y are missing, the nonzero entries of e? whose locations are associated with the missing entries of the observation vector y have the same values as entries of y but with inverse signs. The problems of recovering the data under gross error has gained increasing attentions recently with many interesting practical applications [18] [6] [7] as well as theoretical consideration [9] [13] [8]. Another recent line of research on recovering the data from grossly corrupted measurements has been also studied in the context of robust principal component analysis (RPCA) [3] [20] [1]. Let us consider some examples to illustrate: ? Face recognition. The model (3) has been originally proposed by Wright et al. [19] in the context of face recognition. In this problem, a face test sample y is assumed to be represented as a linear combination of training faces in the dictionary X, y = X? where ? is the coefficient vector used for classification. However, it is often the case that the face is occluded by unwanted objects such as glasses, hats etc. These occlusions, which occupy a portion of the test face, can be considered as the sparse error e? in the model (3). ? Subspace clustering. One of the important problem on high dimensional analysis is to cluster the data points into multiple subspaces. A recent work of Elhamifar and Vidal [6] showed that this problem can be solved by expressing each data point as a sparse linear combination of all other data points. Coefficient vectors recovered from solving the Lasso problems are then employed for clustering. If the data points are represented as a matrix X, then we wish to find a sparse coefficient matrix B such that X = XB and diag(B) = 0. When the data is missing or contaminated with outliers, [6] formulates the problem as X = XB + E and minimize a sum of two `1 -norms with respect to both B and E. ? Sensor network. In this model, sensors collect measurements of a signal ? ? independently by simply projecting ? ? onto row vectors of a sensing matrix X, yi = hXi , ? ? i. The measurements yi are then sent to the center hub for analysis. However, it is highly likely that some sensors might fail to send the measurements correctly and sometimes report totally irrelevant measurements. 
Therefore, it is more accurate to employ the observation model (3) than model (1).

It is worth noticing that in the aforementioned applications, $e^\star$ plays the role of a sparse (undesired) error. However, in many other applications, $e^\star$ can contain meaningful information and thus needs to be recovered. An example of this kind is signal separation, in which $\beta^\star$ and $e^\star$ are two distinct signal components (video or audio). Furthermore, in applications such as classification and clustering, the assumption that the test sample $y$ is a linear combination of a few training samples in the dictionary (design matrix) $X$ might be violated. The sparse component $e^\star$ can then be seen as compensation for the linear regression model mismatch.

Given the observation model (3) and the sparsity assumptions on both the regression vector $\beta^\star$ and the error $e^\star$, we propose the following convex minimization to estimate the unknown parameter $\beta^\star$ as well as the error $e^\star$:

\min_{\beta, e} \frac{1}{2}\|y - X\beta - e\|_2^2 + \lambda_\beta\|\beta\|_1 + \lambda_e\|e\|_1,   (4)

where $\lambda_\beta$ and $\lambda_e$ are positive regularization parameters. This optimization, which we call the extended Lasso, can be seen as a generalization of the Lasso program. Indeed, by setting $\lambda_e = \infty$, which forces $e = 0$, (4) reduces to the standard Lasso. The additional regularization associated with $e$ encourages sparsity of the error, with the parameter $\lambda_e$ controlling the sparsity level. In this paper, we focus on the following questions: what are necessary and sufficient conditions on the ambient dimension $p$, the number of observations $n$, the sparsity index $k$ of the regression vector $\beta^\star$, and the fraction of corruption so that (i) the extended Lasso is able (or unable) to recover the exact support sets of both $\beta^\star$ and $e^\star$? (ii) the extended Lasso is able to recover $\beta^\star$ and $e^\star$ with small prediction error and parameter error? We are particularly interested in the asymptotic situation where the fraction of corrupted observations is arbitrarily close to 100%.

Previous work. The problem of recovering the estimation vector $\beta^\star$ and error $e^\star$ was originally proposed and analyzed by Wright and Ma [18]. In the absence of the stochastic noise $w$ in the observation model (3), the authors proposed to estimate $(\beta^\star, e^\star)$ by solving the linear program

\min_{\beta, e} \|\beta\|_1 + \|e\|_1 \quad \text{s.t.} \quad y = X\beta + e.   (5)

The result of [18] is asymptotic in nature. They showed that for a class of Gaussian design matrices with i.i.d. entries, the optimization (5) can recover $(\beta^\star, e^\star)$ precisely with high probability even when the fraction of corruption is arbitrarily close to one. However, the result holds under rather stringent conditions. In particular, they require the number of observations $n$ to grow proportionally with the ambient dimension $p$, and the sparsity index $k$ to be a very small fraction of $n$. These conditions are of course far from the optimal bounds in the compressed sensing (CS) and statistics literature (recall that $k \le O(n/\log p)$ is sufficient in conventional analysis [17]).

Another line of work has also focused on the optimization (5). In both the paper of Laska et al. [7] and that of Li et al. [9], the authors establish that for a Gaussian design matrix $X$, if $n \ge C(k + s)\log p$, where $s$ is the sparsity level of $e^\star$, then the recovery is exact. This follows from the fact that the combined matrix $[X, I]$ obeys the restricted isometry property, a well-known property used to guarantee exact recovery of sparse vectors via $\ell_1$-minimization. These results, however, do not allow the fraction of corruption to be close to one.
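As a concreteness check, the extended Lasso (4) can be solved by the same proximal-gradient scheme sketched earlier, applied to the stacked design $Z = [X, I]$ with a separate threshold for each block of the unknown. The sketch below makes the usual illustrative assumptions (NumPy, a fixed iteration count); it is not the authors' implementation.

```python
import numpy as np

def extended_lasso(X, y, lam_b, lam_e, n_iter=1000):
    """Minimize 0.5*||y - X b - e||_2^2 + lam_b*||b||_1 + lam_e*||e||_1.

    Equivalent to ISTA on the stacked design Z = [X, I] with a
    coordinate-wise penalty, so each block gets its own threshold.
    """
    n, p = X.shape
    Z = np.hstack([X, np.eye(n)])
    L = np.linalg.norm(Z, 2) ** 2            # step size 1/L is safe for ISTA
    theta = np.zeros(p + n)                  # theta = [b; e]
    thr = np.concatenate([np.full(p, lam_b), np.full(n, lam_e)]) / L
    for _ in range(n_iter):
        grad = Z.T @ (Z @ theta - y)
        v = theta - grad / L
        theta = np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)
    return theta[:p], theta[p:]
```

Setting lam_e very large drives the recovered e to zero, recovering the standard Lasso as a special case.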
Among the previous work, the most closely related to the current paper are the recent results by Li [8] and Nguyen et al. [13], in which a positive regularization parameter $\lambda$ is employed to control the sparsity of $e^\star$. Using different methods, both sets of authors show that if $\lambda$ is deterministically selected to be $1/\log p$ and $X$ is a sub-orthogonal matrix, then the solution of the following optimization is exact even when a constant fraction of the observations is corrupted. Moreover, [8] establishes a similar result for a Gaussian design matrix in which the number of observations is only of order $k \log p$, an amount known to be optimal in CS and statistics:

\min_{\beta, e} \|\beta\|_1 + \lambda\|e\|_1 \quad \text{s.t.} \quad y = X\beta + e.   (6)

Our contribution. This paper considers a general setting in which the observations are contaminated by both sparse and dense errors. We allow the corruptions to grow linearly with the number of observations and to have arbitrarily large magnitudes. We establish a general scaling of the quadruplet $(n, p, k, s)$ such that the extended Lasso stably recovers both the regression and corruption vectors. Of particular interest to us are the following questions:

(a) First, under what scalings of $(n, p, k, s)$ does the extended Lasso obtain a unique solution with small estimation error?

(b) Second, under what scalings of $(n, p, k)$ does the extended Lasso obtain exact signed support recovery even when almost all the observations are corrupted?

(c) Third, under what scalings of $(n, p, k, s)$ does no solution of the extended Lasso specify the correct signed support?

To answer the first question, we introduce a notion of extended restricted eigenvalue for the matrix $[X, I]$, where $I$ is an identity matrix. We show that this property is satisfied by a general class of random Gaussian design matrices. The answers to the last two questions require stricter conditions on the design matrix. In particular, for a random Gaussian design matrix with i.i.d. rows $N(0, \Sigma)$, we rely on two standard assumptions: invertibility and mutual incoherence.

If we denote $Z = [X, I]$, where $I$ is an identity matrix, and $\theta = [\beta^{\star T}, e^{\star T}]^T$, then the observation vector $y$ can be rewritten as $y = Z\theta + w$, which has the same form as the standard Lasso model. However, previous results [2] [17] for random Gaussian design matrices do not apply to this setting, since $Z$ no longer behaves like a Gaussian matrix. To establish the theoretical analysis, we need to study the interaction between the Gaussian and identity matrices. By exploiting the fact that the matrix $Z$ consists of two components, one of which has special structure, our analysis reveals an interesting phenomenon: the extended Lasso can accurately recover both the regressor $\beta^\star$ and the corruption $e^\star$ even when the fraction of corruption is up to 100%. We measure the recoverability of these variables under two criteria: parameter accuracy and feature selection accuracy. Moreover, our analysis can be extended to the situation in which the identity matrix is replaced by a tight frame $D$, as well as to other models such as the group Lasso or the matrix Lasso with sparse error.

Notation. We summarize here some standard notation used throughout the paper. We reserve $T$ and $S$ for the sparse supports of $\beta^\star$ and $e^\star$, respectively. Given a design matrix $X \in \mathbb{R}^{n \times p}$ and subsets $S$ and $T$, we use $X_{ST}$ to denote the $|S| \times |T|$ submatrix obtained by extracting the rows indexed by $S$ and the columns indexed by $T$.
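In NumPy terms, the submatrix notation above corresponds to a one-line indexing idiom; the sizes in this sketch are illustrative.

```python
import numpy as np

# Extract the |S| x |T| submatrix X_ST, where S indexes rows and
# T indexes columns (index sets given as integer arrays).
X = np.arange(20.0).reshape(4, 5)
S, T = np.array([0, 2]), np.array([1, 3, 4])
X_ST = X[np.ix_(S, T)]        # shape (2, 3)
```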
We use the notation $C_1, C_2, c_1, c_2$, etc., to refer to positive constants whose values may change from line to line. Given two functions $f$ and $g$, the notation $f(n) = O(g(n))$ means that there exists a constant $c < +\infty$ such that $f(n) \le c\,g(n)$; the notation $f(n) = \Omega(g(n))$ means that $f(n) \ge c\,g(n)$; and the notation $f(n) = \Theta(g(n))$ means that $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$. The symbol $f(n) = o(g(n))$ means that $f(n)/g(n) \to 0$.

2 Main results

In this section, we provide precise statements of the main results of this paper. In the first subsection, we address parameter estimation and provide a deterministic result based on the notion of extended restricted eigenvalue. We further show that random Gaussian design matrices satisfy this property with high probability. The next subsection considers feature estimation. We establish conditions on the design matrix such that the solution of the extended Lasso has the exact signed supports.

2.1 Parameter estimation

As with the conventional Lasso, to obtain a low parameter estimation bound it is necessary to impose conditions on the design matrix $X$. In this paper, we introduce a notion of extended restricted eigenvalue (extended RE) condition. Let $\mathcal{C}$ be a restricted set; we say that the matrix $X$ satisfies the extended RE assumption over the set $\mathcal{C}$ if there exists some $\kappa_l > 0$ such that

\|Xh + f\|_2 \ge \kappa_l (\|h\|_2 + \|f\|_2) \quad \text{for all } (h, f) \in \mathcal{C},   (7)

where the restricted set $\mathcal{C}$ of interest is defined, with $\lambda_n := \lambda_e/\lambda_\beta$, as

\mathcal{C} := \{(h, f) \in \mathbb{R}^p \times \mathbb{R}^n \mid \|h_{T^c}\|_1 + \lambda_n\|f_{S^c}\|_1 \le 3\|h_T\|_1 + 3\lambda_n\|f_S\|_1\}.   (8)

This assumption is a natural extension of the restricted eigenvalue condition and restricted strong convexity considered in [2] [14] and [12]. In the absence of the vector $f$ in equation (7) and in the set $\mathcal{C}$, this condition reduces to the restricted eigenvalue defined in [2]. As explained at more length in [2] and [16], the restricted eigenvalue is among the weakest assumptions on the design matrix under which the solution of the Lasso is consistent. With this assumption at hand, we now state the first theorem.

Theorem 1. Consider the optimal solution $(\hat\beta, \hat e)$ of the optimization problem (4) with regularization parameters chosen as

\lambda_\beta \ge \frac{2}{\epsilon}\|X^T w\|_\infty \quad \text{and} \quad \lambda_n := \frac{\lambda_e}{\lambda_\beta} = \epsilon\,\frac{\|w\|_\infty}{\|X^T w\|_\infty},   (9)

where $\epsilon \in (0, 1]$. Assuming that the design matrix $X$ obeys the extended RE condition, the error pair $(h, f) = (\hat\beta - \beta^\star, \hat e - e^\star)$ is bounded by

\|h\|_2 + \|f\|_2 \le \frac{3\lambda_\beta}{\kappa_l^2}\left(\sqrt{k} + \lambda_n\sqrt{s}\right).   (10)

There are several interesting observations from this theorem:

1) The error bound naturally splits into two components related to the sparsity indices of $\beta^\star$ and $e^\star$. In addition, the bound involves three quantities: the sparsity indices, the regularization parameters, and the extended RE constant. If the terms related to the corruption $e^\star$ are omitted, then we obtain a parameter estimation bound similar to that of the standard Lasso [2] [12].

2) The choice of regularization parameters $\lambda_\beta$ and $\lambda_e$ can be made explicit: assuming $w$ is a Gaussian random vector whose entries are $N(0, \sigma^2)$ and the design matrix has unit-normed columns, it is clear that with high probability $\|X^T w\|_\infty \le 2\sqrt{\sigma^2\log p}$ and $\|w\|_\infty \le 2\sqrt{\sigma^2\log n}$. Thus, it is sufficient to select $\lambda_\beta \ge \frac{4}{\epsilon}\sqrt{\sigma^2\log p}$ and $\lambda_e \ge 4\sqrt{\sigma^2\log n}$.

3) At first glance, the parameter $\epsilon$ does not seem to have any meaningful interpretation, and $\epsilon = 1$ seems to be the best selection due to the smaller estimation error it produces.
However, this parameter actually controls the sparsity level of the regression vector relative to the fraction of corruption. This relation is made via the restricted set $\mathcal{C}$. In the following lemma, we show that the extended RE condition actually holds for a large class of random Gaussian design matrices whose rows are i.i.d. zero-mean with covariance $\Sigma$. Before stating the lemma, let us define some quantities operating on the covariance matrix $\Sigma$: $C_{\min} := \lambda_{\min}(\Sigma)$ is the smallest eigenvalue of $\Sigma$, $C_{\max} := \lambda_{\max}(\Sigma)$ is the biggest eigenvalue of $\Sigma$, and $\rho(\Sigma) := \max_i \Sigma_{ii}$ is the maximal entry on the diagonal of $\Sigma$.

Lemma 1. Consider a random Gaussian design matrix whose rows are i.i.d. $N(0, \Sigma)$ and assume $n^2 C_{\max}\rho(\Sigma) = \Omega(1)$. Select

\lambda_n := \sqrt{\frac{\rho(\Sigma)\, n \log n}{\log p}};   (11)

then with probability greater than $1 - c_1\exp(-c_2 n)$, the matrix $X$ satisfies the extended RE condition with parameter $\kappa_l = \frac{1}{4}\sqrt{\frac{\rho(\Sigma)}{2}}$, provided that $n \ge C\frac{\rho(\Sigma)}{C_{\min}} k\log p$ and $s \le \min\{C_1\frac{n}{\epsilon^2\log n}, C_2 n\}$ for some small constants $C_1, C_2$.

We would like to make some remarks:

1) The choice of the parameter $\lambda_n$ is nothing special here. When the design matrix is Gaussian and independent of the Gaussian stochastic noise $w$, we can easily show that $\|X^T w\|_\infty \le 2\sqrt{\rho(\Sigma)\, n\,\sigma^2\log p}$ with probability at least $1 - 2\exp(-\log p)$. Therefore, the selection of $\lambda_n$ follows from Theorem 1.

2) The proof of this lemma, shown in the Appendix, boils down to controlling two terms:

• Restricted eigenvalue with $X$: $\|Xh\|_2^2 + \|f\|_2^2 \ge \kappa_r(\|h\|_2^2 + \|f\|_2^2)$ for all $(h, f) \in \mathcal{C}$.

• Mutual incoherence: the column space of the matrix $X$ is incoherent with the column space of the identity matrix. That is, there exists some $\kappa_m > 0$ such that $|\langle Xh, f\rangle| \le \kappa_m(\|h\|_2 + \|f\|_2)^2$ for all $(h, f) \in \mathcal{C}$.

If the incoherence between these two column spaces is sufficiently small, namely $4\kappa_m < \kappa_r$, then we can conclude that $\|Xh + f\|_2^2 \ge (\kappa_r - 2\kappa_m)(\|h\|_2 + \|f\|_2)^2$. The small mutual incoherence property is especially important since it quantifies how the regression separates from the sparse error.

3) To simplify our result, we consider the special case of the uniform Gaussian design, in which $\Sigma = \frac{1}{n}I_{p\times p}$. In this situation, $C_{\min} = C_{\max} = \rho(\Sigma) = 1/n$. We have the following result, which is a corollary of Theorem 1 and Lemma 1.

Corollary 1 (Standard Gaussian design). Let $X$ be a standard Gaussian design matrix. Consider the optimal solution $(\hat\beta, \hat e)$ of the optimization problem (4) with regularization parameters chosen as

\lambda_\beta \ge \frac{4}{\epsilon}\sqrt{\frac{\sigma^2\log p}{n}} \quad \text{and} \quad \lambda_e \ge 4\sqrt{\sigma^2\log n},   (12)

where $\epsilon \in (0, 1]$. Assume also that $n \ge Ck\log p$ and $s \le \min\{C_1\frac{n}{\epsilon^2\log n}, C_2 n\}$ for some small constants $C_1, C_2$. Then with probability greater than $1 - c_1\exp(-c_2 n)$, the error pair $(h, f) = (\hat\beta - \beta^\star, \hat e - e^\star)$ is bounded by

\|h\|_2 + \|f\|_2 \le 384\left(\frac{1}{\epsilon}\sqrt{\sigma^2 k\log p} + \epsilon\sqrt{\sigma^2 s\log n}\right).   (13)

Corollary 1 reveals an interesting phenomenon: by setting $\epsilon = 1/\sqrt{\log n}$, even when the fraction of corruption is linearly proportional to the number of samples $n$, the extended Lasso (4) is still able to recover both the coefficient vector $\beta^\star$ and the corruption (missing data) vector $e^\star$ within the bounded error (13). Without the dense noise $w$ in the observation model (3) ($\sigma = 0$), the extended Lasso recovers the exact solution. This result is impossible to achieve with the standard Lasso. Furthermore, if we know in advance that the number of corrupted observations is of order $O(n/\log p)$, then selecting $\epsilon = 1$ instead of $1/\sqrt{\log n}$ will minimize the estimation error (see equation (13)) of Theorem 1.
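As a small illustration, the parameter choices of Corollary 1 translate directly into code. The function below is a hypothetical helper, with constants taken from (12) and the choice $\epsilon = 1/\sqrt{\log n}$ suggested by the discussion; its output can be fed to the extended_lasso sketch given earlier.

```python
import numpy as np

def corollary1_params(n, p, sigma):
    """Regularization parameters suggested by Corollary 1 (constants illustrative)."""
    eps = 1.0 / np.sqrt(np.log(n))        # handles corruption fractions near one
    lam_b = (4.0 / eps) * np.sqrt(sigma**2 * np.log(p) / n)
    lam_e = 4.0 * np.sqrt(sigma**2 * np.log(n))
    return lam_b, lam_e
```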
2.2 Feature selection with random Gaussian design

In many applications, the feature selection criterion is preferred [17] [23]. Feature selection refers to the property that the recovered parameter has the same signed support as the true regressor. In general, good feature selection implies good parameter estimation, but the reverse direction does not usually hold. In this part, we investigate conditions on the design matrix and the scaling of $(n, p, k, s)$ under which both the regression and sparse error vectors achieve this criterion.

Consider the linear model (3) where $X$ is a Gaussian random design matrix whose rows are i.i.d. zero-mean with covariance matrix $\Sigma$. It is well known for the Lasso that, in order to obtain feature selection accuracy, the covariance matrix $\Sigma$ must obey two properties: invertibility and small mutual incoherence restricted to the set $T$. The first property guarantees that (4) is strictly convex, leading to a unique solution of the convex program, while the second property requires that the interaction between the two components of $\Sigma$, one related to the set $T$ and the other to the set $T^c$, be sufficiently small.

1. Invertibility. To guarantee uniqueness, we require $\Sigma_{TT}$ to be invertible. In particular, letting $C_{\min} := \lambda_{\min}(\Sigma_{TT})$, we require $C_{\min} > 0$.

2. Mutual incoherence. For some $\gamma \in (0, 1)$,

\left\|\Sigma_{T^c T}(\Sigma_{TT})^{-1}\right\|_\infty \le \frac{1}{2}(1 - \gamma),   (14)

where $\|\cdot\|_\infty$ refers to the $\ell_\infty/\ell_\infty$ operator norm. It is worth noting that in the standard Lasso the factor $\frac{1}{2}$ is omitted. Our condition is therefore tighter than the condition used to establish feature estimation for the Lasso by a constant factor. In fact, the quantity $1/2$ is nothing special here, and we can set any value close to one at the cost of an increase in the number of samples $n$. Thus, we use $1/2$ for simplicity of the proof.

In what follows, we will also need three other quantities operating on the restricted covariance matrix $\Sigma_{TT}$: $C_{\max}$, defined as the maximum eigenvalue of $\Sigma_{TT}$, $C_{\max} := \lambda_{\max}(\Sigma_{TT})$; and $D_{\max}$ and $D_{\max}^+$, denoting the $\ell_\infty$-norms of the matrices $(\Sigma_{TT})^{-1}$ and $\Sigma_{TT}$: $D_{\max} := \|(\Sigma_{TT})^{-1}\|_\infty$ and $D_{\max}^+ := \|\Sigma_{TT}\|_\infty$.

Our result also involves two other quantities operating on the conditional covariance matrix of $(X_{T^c} \mid X_T)$, defined as

\Sigma_{T^c|T} := \Sigma_{T^c T^c} - \Sigma_{T^c T}\Sigma_{TT}^{-1}\Sigma_{T T^c}.   (15)

We then define $\rho_u(\Sigma_{T^c|T}) := \max_i(\Sigma_{T^c|T})_{ii}$ and $\rho_l(\Sigma_{T^c|T}) := \frac{1}{2}\min_{i\ne j}[(\Sigma_{T^c|T})_{ii} + (\Sigma_{T^c|T})_{jj} - 2(\Sigma_{T^c|T})_{ij}]$, and use the shorthand $\rho_u$ and $\rho_l$. We establish the following result for Gaussian random designs whose covariance matrix $\Sigma$ obeys the two assumptions above.

Theorem 2 (Achievability). Given the linear model (3) with random Gaussian design and covariance matrix $\Sigma$ satisfying the invertibility and incoherence properties for some $\gamma \in (0, 1)$, suppose we solve the extended Lasso (4) with regularization parameters obeying

\lambda_\beta = \frac{4}{\gamma}\sqrt{\max\{\rho_u, D_{\max}^+\}\, n\,\sigma^2\log p} \quad \text{and} \quad \lambda_e = 8\sqrt{\sigma^2\log n}.   (16)

Also, let $\epsilon = \frac{1}{32\sigma^2\log n}$, and let the sequence $(n, p, k, s)$ and regularization parameters $\lambda_\beta, \lambda_e$ satisfy $s \le \epsilon n$ and

n \ge \max\left\{C_1\frac{\max\{\rho_u, D_{\max}^+\}}{(1-\gamma)\,C_{\min}}\, k\log(p-k),\; C_2\frac{\rho_u}{(1-\gamma)^2 C_{\min}}\, k\log(p-k)\log n\right\},   (17)

where $C_1$ and $C_2$ are numerical constants. In addition, suppose that $\min_{i\in T}|\beta^\star_i| > f_\beta(\lambda_\beta)$ and $\min_{i\in S}|e^\star_i| > f_e(\lambda_\beta, \lambda_e)$, where

f_\beta := c_1\frac{\lambda_\beta}{n-s}\left\|(\Sigma_{TT})^{-1/2}\right\|_\infty^2\sqrt{\frac{k\log(p-k)}{n}} + 20\sqrt{\frac{\sigma^2\log k}{C_{\min}(n-s)}}   (18)

and

f_e := c_2\frac{\lambda_\beta}{n-s}\left(C_{\max}(k\sqrt{s} + s\sqrt{k})\right)^{1/2}\left\|(\Sigma_{TT})^{-1/2}\right\|_\infty\sqrt{\frac{k\log(p-k)}{n}} + c_3\lambda_e.   (19)
Then the following properties hold with probability greater than $1 - c\exp(-c_0\max\{\log n, \log pk\})$:

1. The solution pair $(\hat\beta, \hat e)$ of the extended Lasso (4) is unique and has the exact signed support.

2. $\ell_\infty$-norm bounds: $\|\hat\beta - \beta^\star\|_\infty \le f_\beta(\lambda_\beta)$ and $\|\hat e - e^\star\|_\infty \le f_e(\lambda_\beta, \lambda_e)$.

There are several interesting observations from the theorem:

1) The first and most important observation is that the extended Lasso is robust to arbitrarily large, sparse observation errors. In that sense, the extended Lasso can be viewed as a generalization of the Lasso. Under the same invertibility and mutual incoherence assumptions on the covariance matrix $\Sigma$ as the standard Lasso, the extended Lasso program can recover both the regression vector and the error with exact signed supports, even when almost all the observations are contaminated by arbitrarily large errors with unknown support. What we sacrifice for this robustness to corruption is an additional logarithmic factor in the number of samples. We note that when the number of corrupted observations is $O(n/\log n)$, only $O(k\log(p-k))$ samples are sufficient to recover the exact signed supports of both the regression and sparse error vectors.

2) Consider the special case of Gaussian random design with covariance matrix $\Sigma = \frac{1}{n}I_{p\times p}$. In this case, the entries of $X$ are i.i.d. $N(0, 1/n)$ and we have $C_{\min} = C_{\max} = D_{\max} = D_{\max}^+ = \rho_u = \rho_l = 1$. In addition, the invertibility and mutual incoherence properties are automatically satisfied. The theorem implies that when the number of errors $s$ is close to $n$, the number of samples $n$ needed to recover exact signed supports satisfies $\frac{n}{\log n} = \Omega(k\log(p-k))$. Furthermore, Theorem 2 guarantees consistency in element-wise $\ell_\infty$-norm of the estimated regression at the rate

\left\|\hat\beta - \beta^\star\right\|_\infty = O\left(\sqrt{\frac{\sigma^2 k\log(p-k)}{\epsilon^2 n}}\sqrt{\log p}\right).

As $\epsilon$ is chosen to be $1/\sqrt{32\log n}$ (equivalent to allowing $s$ close to $n$), the $\ell_\infty$ error rate is of order $O(\sigma\sqrt{\log p})$, which is known to be the same as that of the standard Lasso.

3) Corollary 1, though interesting, is not able to guarantee stable recovery when the fraction of corruption converges to one. We show in Theorem 2 that this fraction can come arbitrarily close to one at the cost of a factor of $\log n$ in the number of samples. Theorem 2 also implies that there is a significant difference between recovery with small parameter estimation error and recovery with correct variable selection. When the number of corrupted observations is linearly proportional to $n$, recovering the exact signed supports requires an increase from $\Omega(k\log p)$ (in Corollary 1) to $\Omega(k\log p\log n)$ samples (in Theorem 2). This behavior is exhibited similarly by the standard Lasso, as pointed out in [17], Corollary 2.

Our next theorem shows that this number of samples needed to recover the exact signed support is optimal. That is, whenever the rescaled sample size satisfies (20), then for whatever regularization parameters $\lambda_\beta$ and $\lambda_e$ are selected, no solution of the extended Lasso correctly identifies the signed supports with high probability.

Theorem 3 (Inachievability). Given the linear model (3) with random Gaussian design and covariance matrix $\Sigma$ satisfying the invertibility and incoherence properties for some $\gamma \in (0, 1)$, let $\epsilon = \frac{1}{32\sigma^2\log(n-s)}$ and let the sequence $(n, p, k, s)$ satisfy $s \le \epsilon n$ and

n \le \min\left\{C_3\frac{\min\{\rho_l, D_{\max}^+\}}{(1-\gamma)\,C_{\min}}\, k\log(p-k),\; C_4\frac{\rho_l}{(1-\gamma)^2 C_{\max}}\, k\log(p-k)\log\big((1-\epsilon)n\big)\left(1 + \frac{\sqrt{2}\,\sigma}{\lambda_e\epsilon}\right)^{-1}\right\},   (20)

where $C_3$ and $C_4$ are some small universal constants.
Then, with probability tending to one, no solution pair of the extended Lasso (4) has the correct signed support.

3 Illustrative simulations

In this section, we provide simulations to illustrate the ability of the extended Lasso to recover the exact regression signed support when a significant fraction of the observations is corrupted by large errors. Simulations are performed for a range of parameters $(n, p, k, s)$, where the design matrix $X$ is uniform Gaussian random with i.i.d. rows $N(0, I_{p\times p})$. For each fixed set of $(n, p, k, s)$, we generate sparse vectors $\beta^\star$ and $e^\star$ whose nonzero entries have uniformly random locations and Gaussian-distributed magnitudes. In our experiments, we consider varying problem sizes $p = \{128, 256, 512\}$ and three types of regression sparsity indices: sublinear sparsity ($k = 0.2p/\log(0.2p)$), linear sparsity ($k = 0.1p$), and fractional power sparsity ($k = 0.5p^{0.75}$). In all cases, we fix the error support size $s = n/2$; this means half of the observations are corrupted. With this selection, Theorem 2 suggests a sample size of $n \ge 2Ck\log(p-k)\log n$ to guarantee exact signed support recovery. We choose $\frac{n}{\log n} = 4\theta k\log(p-k)$, where the parameter $\theta$ is the rescaled sample size; this parameter controls the success or failure of the extended Lasso. In the algorithm, we select $\lambda_\beta = 2\sqrt{\sigma^2\log p\log n}$ and $\lambda_e = 2\sqrt{\sigma^2\log n}$ as suggested by Theorem 2, where the noise level $\sigma = 0.1$ is fixed. The algorithm reports a success if the solution pair has the same signed support as $(\beta^\star, e^\star)$. In Fig. 1, each point on a curve represents the average over 100 trials. As demonstrated by the simulations, the extended Lasso is able to recover the exact signed support of both $\beta^\star$ and $e^\star$ even when 50% of the observations are contaminated. Furthermore, up to unknown constants, our Theorems 2 and 3 match the simulation results. As the sample size drops below $\frac{n}{\log n} \approx 2k\log(p-k)$, the probability of success starts going to zero, implying the failure of the extended Lasso.

Acknowledgments

We acknowledge support from the Army Research Office (ARO) under Grant 60291-MA and the National Science Foundation (NSF) under Grant CCF-1117545.

References

[1] A. Agarwal, S. Negahban, and M. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Proc. 28th Inter. Conf. Mach. Learn. (ICML-11), pages 1129–1136, 2011.

Figure 1: Probability of success in recovering the signed supports (three panels: sublinear, linear, and fractional power sparsity; each plots probability of success against the rescaled sample size $\theta$ for $p = 128, 256, 512$).

[2] P. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4):1705–1732, 2009.
[3] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Submitted for publication, 2009.
[4] E. J. Candès and Y. Plan. Near-ideal model selection by l1 minimization. Annals of Statistics, 37:2145–2177, 2009.
[5] E. J. Candès and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313–2351, 2007.
[6] E. Elhamifar and R. Vidal. Sparse subspace clustering. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2790–2797, 2009.
[7] J. N. Laska, M. A. Davenport, and R. G. Baraniuk. Exact signal recovery from sparsely corrupted measurements through the pursuit of justice. In Asilomar Conference on Signals, Systems and Computers, pages 1556–1560, 2009.
[8] X. Li. Compressed sensing and matrix completion with constant proportion of corruptions. Preprint, 2011.
[9] Z. Li, F. Wu, and J. Wright. On the systematic measurement matrix for compressed sensing in the presence of gross error. In Data Compression Conference (DCC), pages 356–365, 2010.
[10] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3):1436–1462, 2008.
[11] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1):2246–2270, 2009.
[12] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Preprint, 2010.
[13] N. H. Nguyen and T. D. Tran. Exact recoverability from dense corrupted observations via l1 minimization. Preprint, 2010.
[14] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241–2259, 2010.
[15] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[16] S. A. van de Geer and P. Bühlmann. On the conditions used to prove oracle results for the lasso. Electronic Journal of Statistics, 3:1360–1392, 2009.
[17] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55(5):2183–2202, 2009.
[18] J. Wright and Y. Ma. Dense error correction via l1 minimization. IEEE Transactions on Information Theory, 56(7):3540–3560, 2010.
[19] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, 2009.
[20] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. Advances in Neural Information Processing Systems (NIPS), pages 2496–2504, 2010.
[21] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49–67, 2006.
[22] T. Zhang. Some sharp performance bounds for least squares regression with l1 regularization. Annals of Statistics, 37(5):2109–2144, 2009.
[23] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
Variational Learning for Recurrent Spiking Networks

Danilo Jimenez Rezende, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland, [email protected]
Daan Wierstra, School of Computer and Communication Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland, [email protected]
Wulfram Gerstner, School of Computer and Communication Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland, [email protected]

Abstract

We derive a plausible learning rule for feedforward, feedback and lateral connections in a recurrent network of spiking neurons. Operating in the context of a generative model for distributions of spike sequences, the learning mechanism is derived from variational inference principles. The synaptic plasticity rules found are interesting in that they are strongly reminiscent of experimental Spike-Time Dependent Plasticity, and in that they differ for excitatory and inhibitory neurons. A simulation confirms the method's applicability to learning both stationary and temporal spike patterns.

1 Introduction

This study considers whether recurrent networks of spiking neurons can be seen as a generative model not only of stationary patterns but also of temporal sequences. More precisely, we derive a model that learns to adapt its spontaneous spike sequences to conform as closely as possible to the empirical distribution of actual spike sequences caused by inputs impinging upon the sensory layer of the network.

A generative model is a model of the joint distribution of percepts and hidden causes in the world. Since the world has complex temporal relationships, we need a model that is able to both recognize and predict temporal patterns. Behavioural studies (e.g., [1]) support the assumption that the brain performs approximate Bayesian inference. More recently, evidence for this hypothesis has been found in electrophysiological work as well [2]. Various abstract Bayesian models have been proposed to account for this phenomenon [3, 4, 5, 6, 7]. However, it remains an open question whether optimization in abstract Bayesian models can be translated into plausible learning rules for synapses in networks of spiking neurons.

In this paper, we show that the derivation of spike-based plasticity rules from statistical learning principles yields learning dynamics for a generative spiking network model which are akin to those of Spike-Time Dependent Plasticity (STDP) [8]. Our learning rule is derived from a variational optimization process. Typically, optimization in recurrent Bayesian networks involves both forward and backward propagation steps. We propose a plasticity rule that approximates the backward steps by the introduction of delayed updates in the synaptic weights and dynamics. The theory is supported by simulations in which we demonstrate that the learning mechanism is able to capture the hidden causes behind observed spiking patterns.

Figure 1: A network of spiking neurons, divided into observed and latent pools of neurons.

We use the Spike Response Model (SRM) [9, 10], in which spikes are generated stochastically depending on the neuronal membrane potential. The SRM is an example of a generalized linear model (GLM). It is closely related to the integrate-and-fire model, and has been successfully used to explain neuronal spike trains [11, 12].
In this model, the membrane potential $u_i(t)$ of a neuron $i$ at time $t$ is given by

\tau\dot{u}_i(t) = -u_i(t) + b_i + \sum_j W_{i,j} X_j(t),   (1)

where $b_i$ is a bias representing a constant external input to the neuron, and $X_j(t)$ is the spike train of the $j$th neuron, defined by $X_j(t) = \sum_{t_j^f \in \{t_j^1, \ldots, t_j^N\}} \delta(t - t_j^f)$, where $\{t_j^1, \ldots, t_j^N\}$ is the set of spike timings. The diagonal elements of the synaptic matrix are kept fixed at the negative value $W_{i,i} = -1.0$, which implements a reset of the membrane potential after each spike and is a simple way to take neuronal refractoriness into account [9, 13]. The time constant is taken to be $\tau = 10$ ms as in [13]. The spike generation process is stochastic with a time-dependent firing intensity $\lambda_i(t)$ which depends on the membrane potential $u_i(t)$:

\lambda_i(t) = \lambda_0\exp(u_i(t)).   (2)

An exponential dependence of the firing intensity upon the membrane potential agrees with experimental results [12]. The set of equations (2) and (1) captures the simplified dynamics of a spiking neuron with stochastic spike timing. In the following sections, we introduce the theoretical framework and the approximations used in this paper. The basic learning mechanism is then derived, followed by a simulation illustrating that our proposed learning rule is able to learn spatio-temporal features in the input spike trains and reproduce them in its spontaneous activity.

2 Principled Framework

We consider a network consisting of two distinct sets of neurons: observed neurons (also called visible neurons, V) and latent neurons (also called hidden neurons, H), as illustrated in Figure 1. The activities of the observed neurons represent the quantity of interest to be modelled, while the latent neurons play a mediating role, representing the hidden causes of the observed spike trains. Learning in this neuronal network consists of changing the synaptic strengths between neurons. We postulate that the underlying principle behind learning relies on learning distributions of spike trains evoked by either sensory inputs or more complicated sequences of cognitive events.

In statistics, learning distributions involves minimizing a measure of distance between the model (that is, our neuronal network) and a target distribution (e.g. observations). A principled measure of distance between two distributions $p$ and $p_{\text{empirical}}$ is the Kullback-Leibler divergence [14], defined as

KL(p_{\text{empirical}} \| p) = \int \mathcal{D}X\, p_{\text{empirical}}(X)\log\frac{p_{\text{empirical}}(X)}{p(X)},   (3)

where each $X$ represents an entire spike train and $\mathcal{D}X$ is a measure of integration over spike trains. Our learning mechanism minimizes the KL divergence between the distribution $p(X)$ defined by our network and the distribution $p_{\text{empirical}}$ of observed spike timings evoked by an unknown external process. Note that minimizing the KL divergence amounts to maximizing the likelihood that the observed spike trains $X_V$ could have been generated by the model. In order to derive the learning dynamics of our model in the next section, we need to evaluate the gradient of the likelihood (3) with respect to the free parameters of our model, i.e. the synaptic efficacies $W_{i,j}$ and biases $b_i$.
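Before turning to the likelihood, here is a minimal discrete-time (Euler) sketch of the SRM dynamics (1)-(2). The value of lam0, the time grid, and the spike-probability approximation $\lambda_i(t)\,dt$ are illustrative assumptions; a reset as described in the text is obtained by setting the diagonal of W to -1.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_srm(W, b, T=1000, dt=1.0, tau=10.0, lam0=0.01):
    """Euler simulation of the stochastic SRM, eqs. (1)-(2)."""
    n = len(b)
    u = np.zeros(n)                           # membrane potentials
    spikes = np.zeros((T, n))
    for t in range(T):
        lam = lam0 * np.exp(u)                # firing intensity, eq. (2)
        spikes[t] = rng.random(n) < lam * dt  # spike with probability ~ lam*dt
        # leaky integration, eq. (1); a delta spike adds W/tau to u
        u += (dt / tau) * (-u + b) + (W @ spikes[t]) / tau
    return spikes
```

For instance, a two-neuron network with mutual excitation and self-reset would use W = np.array([[-1.0, 0.5], [0.5, -1.0]]) and b = np.zeros(2).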
The joint likelihood of a particular spike train of both the observed neurons $X_V$ and the latent neurons $X_H$ under our neuronal model can be written as [13]

\log p(X_V, X_H) = \sum_{i\in V\cup H}\int_0^T d\tau\,[\log\lambda_i(\tau)X_i(\tau) - \lambda_i(\tau)].   (4)

Since we have a neuronal network including latent units (that is, neurons not receiving external inputs), the actual observation likelihood is an effective quantity obtained by integrating over all possible latent spike trains $X_H$:

p(X_V) = \int\mathcal{D}X_H\, p(X_V, X_H).   (5)

The gradient of (5) is given by an expectation conditioned on the observed neurons' history:

\nabla\log p(X_V) = \nabla\log\int\mathcal{D}X_H\, p(X) = \langle\nabla\log p(X)\rangle_{p(X_H|X_V)},

where $\langle f(X)\rangle_p = \int\mathcal{D}X\, f(X)p(X)$. This is difficult to evaluate, since it conditions an entire latent spike train on an entire observed spike train. In other words, the posterior distribution of spike timings of the latent neurons depends on both the past and the future of the observed neurons' spike trains.

2.1 Weak Coupling Approximation

In order to render the model more tractable, we introduce an approximation of the dynamics based on the weak coupling approximation [15], which amounts to replacing (1) by

\tau\dot{u}_i(t) = -u_i(t) + b_i + \sum_j W_{i,j}\lambda_j(t) + z_i(t),   (6)

where $z_i(t)$ is a Gaussian process with mean zero and inverse variance $\gamma_i(t)$ given by

\frac{1}{\gamma_i(t)} = \frac{1}{\gamma_0} + \frac{1}{\tau^2}\sum_j W_{i,j}^2\lambda_j(t),   (7)

where $\gamma_0$ is an intrinsic noise term which we have added to regularize the simulations (we assume $\gamma_0 = 0.1$).¹ Note that $\gamma_i(t)$ is a function of both the network state and the synaptic efficacies. Our network model defines a joint distribution over observed spike trains and membrane potentials given by

\log p(X_V, u) = \sum_{i\in V}\int dt\,[X_i(t)u_i(t) - \lambda_0\exp(u_i(t))] - \sum_{i\in V\cup H}\int dt\,\frac{\gamma_i(t)}{2}(\dot{u}_i(t) - f_i(t))^2,   (8)

where terms not depending on the model parameters and latent states have been dropped, as they do not contribute to the gradients we are interested in, and $f_i(t)$ is the drift of the Gaussian process of the membrane potentials, which can be read from equation (6):

f_i(t) = \frac{1}{\tau}\left(-u_i(t) + b_i + \sum_j W_{i,j}\lambda_j(t)\right).   (9)

¹ The variance of $\dot{u}$ due to the external input can be obtained by noting that $u_i(t+dt) = u_i(t)e^{-dt/\tau} + \int_t^{t+dt}ds\, e^{(s-t-dt)/\tau}(b_i + \sum_j W_{i,j}X_j(s))/\tau$. Thus, in the weak coupling regime, $\mathrm{Var}(u(t+dt)\mid u(t)) = \sum_j W_{i,j}^2\int_t^{t+dt}ds\, e^{2(s-t-dt)/\tau}\lambda_j(t)/\tau^2 \approx \frac{dt}{\tau^2}\sum_j W_{i,j}^2\lambda_j(t)$.

The weak coupling approximation amounts to replacing the spikes of the latent neurons by intensities plus Gaussian noise. Note that in this approximated model the latent variables are no longer the latent spike trains, but the membrane potentials. However, we emphasize that in the end the intensities can be substituted by spikes, as we will see below.

2.2 Variational Approximation of the Posterior Membrane Potential $p(u \mid X_V)$

The variational approach in statistics is a method for approximating a complex distribution $p$ by a family of simpler distributions $q$. Variational methods have been applied to spiking neural networks in many different contexts, such as connectivity or external source inference [20, 21]. In the following, we attempt to interpret the neural activity and plasticity together as an approximate form of variational learning. We approximate the posterior $p(u \mid X_V)$ by the Gaussian process

\log q(u) = -\sum_i\int dt\,\frac{\gamma_i(t)}{2}(\dot{u}_i(t) - h_i(t))^2 + c,   (10)

where the $h_i(t)$ are variational parameters representing the drift of the $i$th membrane potential at time $t$ under the posterior process and $c$ is a normalization constant. Note that the parameters $\gamma_i(t)$ of the posterior process are taken to be the same as the noise of the network dynamics in (6). This is necessary in order to have a finite KL divergence between the prior and posterior processes [22].

Finding a good approximation for the variational parameters $h_i(t)$ amounts to minimizing the quantity $KL(q(u)\,\|\,p(X_V, u))$, which is given by

KL(q\|p) = \left\langle -\sum_{i\in V}\int dt\,[X_i(t)u_i(t) - \lambda_0\exp(u_i(t))] + \sum_{i\in V\cup H}\frac{\gamma_i(t)}{2}(\dot{u}_i(t) - f_i(t))^2 - \sum_{i\in V\cup H}\frac{\gamma_i(t)}{2}(\dot{u}_i(t) - h_i(t))^2\right\rangle_{q(u)}.   (11)

Although (11) can be written analytically in terms of the instantaneous mean and covariance of the posterior process, we adopt a simpler mean-field approximation, i.e. $\langle F(u_i(t))\rangle \approx F(\langle u_i(t)\rangle)$. We write the mean $\langle u_i(t)\rangle = \bar{u}_i(t)$ as

\bar{u}_i(t) = \bar{u}_i(0) + \int_0^t ds\, h_i(s),   (12)

where $h_i(t)$ plays the role of the 'drift', i.e. the derivative of $\bar{u}_i$. Note that $\frac{\delta\bar{u}_i(t)}{\delta h_j(t_0)} = \Theta(t - t_0)\delta_{i,j}$, where $\Theta(x)$ is the Heaviside step function. As a result, the KL divergence becomes

KL(q\|p) \approx -\sum_{i\in V}\int dt\,[X_i(t)\bar{u}_i(t) - \lambda_0\exp(\bar{u}_i(t))] + \sum_i\int dt\,\frac{\gamma_i(t)}{2}(h_i(t) - f_i(t))^2.   (13)

The drifts $h_i(t)$ of the variational approximation can be updated by gradient descent on

\frac{\delta KL}{\delta h_k(t_0)} = -\int dt\,[X_k(t) - \lambda_0\exp(\bar{u}_k(t))]\,\Theta(t - t_0)\,\mathbb{1}_{k\in V} + \gamma_k(t_0)(h_k(t_0) - f_k(t_0)) - \sum_i\int dt\,\gamma_i(t)(h_i(t) - f_i(t))\frac{\delta f_i(t)}{\delta h_k(t_0)} + \frac{1}{2}\sum_i\int dt\,\frac{\delta\gamma_i(t)}{\delta h_k(t_0)}(h_i(t) - f_i(t))^2,   (14)

where

\frac{\delta f_i(t)}{\delta h_k(t_0)} = \frac{1}{\tau}\left(-\delta_{i,k} + W_{i,k}\lambda_k(t)\right)\Theta(t - t_0)   (15)

and

\frac{\delta\gamma_i(t)}{\delta h_k(t_0)} = -\frac{1}{\tau^2}\gamma_i(t)^2 W_{i,k}^2\lambda_k(t)\,\Theta(t - t_0).   (16)

Figure 2: Posterior firing intensity for two simple networks. (a) A network with 4 neurons, simulated with the mean-field approximation. (b) From top to bottom: the observed spike train, the firing intensities of the three latent neurons, and the posterior inverse variance. The green neuron has a direct connection to the observed neuron, and as such has a much stronger modulation of its firing rate than the other two latent neurons. (c) A network with two pools of 20 neurons, the observed and the latent pools. (d) Simulation results. From top to bottom: observed spike trains, spike trains in the latent pool, and mean firing intensities of the latent neurons over different realizations of the network. The rate of the latent pool increases just before the spikes of the observed neurons. Note that the spiking implementation of the model has the same rates as the mathematical rate model.

There are a few key points to note regarding (14). First, in the absence of observations, the best approximating $h_i(t)$ is simply given by $f_i(t)$; that is, the posterior and the prior processes become equal. Second, the first, third and fourth terms in (14) are backward terms: they correspond to corrections to the 'belief' about past states generated by new inputs. This implies that in order to estimate the drift $h_i(t)$ of the posterior membrane potential of neuron $i$ at time $t$, we need to know the observations $X(t')$ at times $t' > t$. Third, the fourth term in (14) is a contribution to the gradient that arises because the inverse variance $\gamma_i(t)$ defined in equation (7) is itself a function of the network state. This is an important feature of the model, since it implies that the amount of noise in the dynamics is also adapted to better explain the observed spike trains.

2.3 Towards Spike-Time Dependent Plasticity

We learn the parameters of our network, that is, the synaptic weights and the neural 'biases', by gradient descent with learning rate $\eta$:

\Delta b_i = -\eta\frac{\delta KL}{\delta b_i} = \eta\int dt\,\frac{\gamma_i(t)}{\tau}(h_i(t) - f_i(t)),   (17)

\Delta W_{k,l} = -\eta\frac{\delta KL}{\delta W_{k,l}} = \eta\int dt\,\frac{\gamma_k(t)}{\tau}(h_k(t) - f_k(t))\lambda_l(t) + \frac{\eta}{2}\sum_i\int dt\,\frac{\delta\gamma_i(t)}{\delta W_{k,l}}(h_i(t) - f_i(t))^2,   (18)

where $\frac{\delta\gamma_i(t)}{\delta W_{k,l}} = -\frac{2}{\tau^2}\gamma_i(t)^2 W_{i,l}\lambda_l(t)\,\delta_{k,i}$. Note that once the posterior drift $h_i(t)$ is known, the computation of $\Delta b$ and $\Delta W$ can be done purely locally.

A long 'backward window' would, of course, be biologically implausible. However, on-line approximations of the backward terms provide a reasonable approximation, using small backward filters of up to 50 ms. Mechanistically, applications of $\Delta W$ can operate with a small delay, which is required to calculate the backward correction term. In biology such delays indeed exist, as the weights switch to a new value only some time after the stimulation that induced the change [23, 24]. More precisely, using a small backward window amounts to approximating the gradient of the posterior drift $h_i(t)$ by cutting off the time integrals at a finite horizon, i.e., in equation (14) we replace the integral $\int dt$ by $\int_{t_0}^{t_0+T} dt$, where $T$ is the size of the 'backward window' used to approximate the gradient. The expression (14) can then be written as a delayed update equation:

\Delta h_k(t-T) \propto \int_{t-T}^{t} ds\,[X_k(s) - \lambda_0\exp(\bar{u}_k(s))]\,\mathbb{1}_{k\in V} - \gamma_k(t-T)(h_k(t-T) - f_k(t-T)) + \sum_i\int_{t-T}^{t} ds\,\gamma_i(s)(h_i(s) - f_i(s))\frac{\delta f_i(s)}{\delta h_k(t-T)} - \frac{1}{2}\sum_i\int_{t-T}^{t} ds\,\frac{\delta\gamma_i(s)}{\delta h_k(t-T)}(h_i(s) - f_i(s))^2.   (19)

The resulting update of the variable $h_k$ is used in the learning equation (18). The simulation shown in Figure 2 provides a conceptual illustration of how the posterior firing intensity $\lambda_l(t)$ propagates information backward from observed into latent neurons, a process that is essential for learning temporal patterns. Note that $\lambda_l$ is the firing rate of the presynaptic neuron $l$, and as such it is information that is not directly available at the site of the synapse, which has access only to spike arrivals (not the underlying firing rate). However, spike arrivals do provide a reasonable estimate of the rate. Indeed, Figures 2c and 2d show that a simulation of a network of pools of spiking neurons in which updates are based only on spike times (rather than rates) gives qualitatively the same information as the rate formulation derived above. In equations (20) and (15) we could therefore replace the presynaptic firing intensity $\lambda_j(t)$ by temporally filtered spike trains, which constitute a good approximation to $\lambda_j(t)$.

2.4 STDP Window

From our learning equation for the synaptic weights (18), we can extract an STDP-like learning window by rewriting the plasticity rule as $\Delta W_{i,j} = \int dt\,\Delta W_{i,j}(t)$, where

\Delta W_{i,j}(t) = \frac{\gamma_i(t)}{\tau}(h_i(t) - f_i(t))\lambda_j(t) + \frac{1}{2}\sum_k\frac{\delta\gamma_k(t)}{\delta W_{i,j}}(h_k(t) - f_k(t))^2.   (20)

$\Delta W_{i,j}(t)$ is the expected change in $W_{i,j}$ at time $t$ under the posterior. As before, we replace the firing intensity $\lambda_j$ in a given trial by the spikes. Assuming a spike of the observed neuron at $t = 0$, we have evaluated $h(t)$ and $f(t)$ and plotted the weight change $\gamma_k(t')(h_k(t') - f_k(t'))$ that would occur if the latent neuron fires at $t'$, cf. equation (18). We show the resulting Spike-Time Dependent Plasticity for a simple network of two neurons in Figure 3. Note that the shape of $\Delta W_{i,j}(t)$ is remarkably reminiscent of experimentally measured STDP curves [8]. In particular, the shape of the STDP curve depends on the type of neuron and differs for connections from excitatory to excitatory neurons compared to connections from excitatory to inhibitory or inhibitory to inhibitory neurons (Figure 3).

3 Simulations

In order to demonstrate the method's ability to capture both stationary and temporal patterns, we performed simulations on two tasks. The first involves the formation of a temporal chain, while the second involves a stationary pattern generator. Both simulations were done using a discrete-time (Euler method) version of equations (14), (17), (18) and (19), with $dt = 1$ ms. The backward window size was taken to be $T = 50$ ms, and a learning rate of 0.02 was used.
In our simulations we show that the plasticity rules are capable of learning both a temporal and a stationary pattern generator. Future work will attempt to further elucidate the possible biological plausibility of the approach, and its connection to Spike-Time Dependent Plasticity. Acknowledgments Support was provided by the SNF grant (CRSIK0 122697), the ERC grant (268689) and the SystemsX IPhD grant. 7 Figure 4: Simulation results. Sequence task a?d, i: a 20ms-periodic sequence with a network of 30 observed neurons and 15 latent neurons having 50% of inhibitory neurons (chosen randomly). The connections between the observed neurons have been set to zero in order to illustrate the use of latent-to-latent recurrent connections. (a) A sample of the periodic input pattern. Note the long waiting time after each sequence 1 2 3 (1 2 3 wait 1 2 3 . . . ). (b) Simulations from the network with the first 20ms clamped to the data. (c) Latent neurons sample. (d) Sample simulation of the network with the same parameters but with less noise, in order to better show the underlying dynamics. This is achieved by the transformation ?i (t) ! ?i (t) with = 2. Random jump task e?h: learning to produce one of three patterns (4ms long) every 10ms. (e) A sample input pattern (f) One realization from the network with first the 20ms clamped to the data. (g) Sample latent pattern. (h) Sample simulation of the network with the same parameters but with less noise. Note that decreasing the level of noise is actually an impairment in performance for this task. (i) The learned synaptic matrix for the first task; the latent neurons have been re-ordered in order show the role of the latent-to-latent synapses in the dynamics as well as the role of the latent-to-observed synapses which represent the pattern features. References [1] Konrad P K?ording and Daniel M Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(6971):244?7, January 2004. [2] P. Berkes, G. Orban, M. Lengyel, and J. Fiser. Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment. Science, 331(6013):83?87, January 2011. [3] Wei Ji Ma, Jeffrey M Beck, and Alexandre Pouget. Spiking networks for Bayesian inference and choice. Current opinion in neurobiology, 18(2):217?22, April 2008. [4] Joshua B Tenenbaum, Thomas L Griffiths, and Charles Kemp. Theory-based Bayesian models of inductive learning and reasoning. Trends in cognitive sciences, 10(7):309?18, 2006. [5] Konrad P K?ording and Daniel M Wolpert. Bayesian decision theory in sensorimotor control. Trends in cognitive sciences, 10(7):319?26, July 2006. [6] D. Acuna and P. Schrater. Bayesian modeling of human sequential decision-making on the multi-armed bandit problem. In Proceedings of the 30th Annual Conference of the Cognitive Science Society. Washington, DC: Cognitive Science Society, 2008. [7] Michael D. Lee. A Hierarchical Bayesian Model of Human Decision-Making on an Optimal Stopping Problem. Cognitive Science, 30(3):1?26, May 2006. 8 [8] G. Bi and M. Poo. Synaptic modification by correlated activity: Hebb?s postulate revisited. Annual review of neuroscience, 24(1):139?166, 2001. [9] W. Gerstner and W. K. Kistler. Mathematical Formulations of Hebbian Learning. Biological Cybernetics, 87(5-6):404?415, 2002. article. [10] W. Gerstner. Spike-response model. Scholarpedia, 3(12):1343, 2008. [11] J W Pillow, J Shlens, L Paninski, A Sher, A M Litke, E J Chichilnisky, and E P Simoncelli. 
Spatiotemporal correlations and visual signaling in a complete neuronal population. Nature, 454(7206):995–999, August 2008.
[12] Renaud Jolivet, Alexander Rauch, Hans R. Lüscher, and Wulfram Gerstner. Predicting spike timing of neocortical pyramidal neurons by simple threshold models. Journal of Computational Neuroscience, 21(1):35–49, August 2006.
[13] J. P. Pfister, Taro Toyoizumi, D. Barber, and W. Gerstner. Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6):1318–1348, 2006.
[14] S. Kullback and R. A. Leibler. On Information and Sufficiency. The Annals of Mathematical Statistics, 22(1):79–86, March 1951.
[15] Taro Toyoizumi, Kamiar Rahnama Rad, and Liam Paninski. Mean-field approximations for coupled populations of generalized linear model spiking neurons with Markov refractoriness. Neural Computation, 21(5):1203–43, May 2009.
[16] Brendan J. Frey and Geoffrey E. Hinton. Variational Learning in Nonlinear Gaussian Belief Networks. Neural Computation, 11(1):193–213, January 1999.
[17] Karl Friston, Jérémie Mattout, Nelson Trujillo-Barreto, John Ashburner, and Will Penny. Variational free energy and the Laplace approximation. NeuroImage, 34(1):220–34, January 2007.
[18] Matthew J. Beal and Zoubin Ghahramani. Variational Bayesian Learning of Directed Graphical Models with Hidden Variables. Bayesian Analysis, 1(4):793–832, 2006.
[19] T. S. Jaakkola and M. I. Jordan. Bayesian parameter estimation via variational methods. Statistics and Computing, 10(1):25–37, 2000.
[20] Jayant E. Kulkarni and Liam Paninski. Common-input models for multiple neural spike-train data. Network (Bristol, England), 18(4):375–407, December 2007.
[21] Ian H. Stevenson, James M. Rebesco, Nicholas G. Hatsopoulos, Zach Haga, Lee E. Miller, and Konrad P. Körding. Bayesian inference of functional connectivity and network structure from spikes. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 17(3):203–13, June 2009.
[22] C. Archambeau, Dan Cornford, Manfred Opper, and J. Shawe-Taylor. Gaussian process approximations of stochastic differential equations. In Journal of Machine Learning Research Workshop and Conference Proceedings, volume 1, pages 1–16. Citeseer, 2007.
[23] Daniel H. O'Connor, Gayle M. Wittenberg, and Samuel S.-H. Wang. Graded bidirectional synaptic plasticity is composed of switch-like unitary events. Proceedings of the National Academy of Sciences of the United States of America, 102(27):9679–84, July 2005.
[24] C. C. Petersen, R. C. Malenka, R. A. Nicoll, and J. J. Hopfield. All-or-none potentiation at CA3-CA1 synapses. Proceedings of the National Academy of Sciences of the United States of America, 95(8):4732–7, April 1998.
[25] Rajesh P. N. Rao. Bayesian computation in recurrent neural circuits. Neural Computation, 16(1):1–38, January 2004.
[26] Bernhard Nessler, Michael Pfeiffer, and Wolfgang Maass. STDP enables spiking neurons to detect hidden causes of their inputs. Advances in Neural Information Processing Systems (NIPS09), pages 1357–1365, 2009.
Prediction strategies without loss

Rina Panigrahy
Microsoft Research Silicon Valley
Mountain View, CA
rina@microsoft.com

Michael Kapralov
Stanford University
Stanford, CA
kapralov@stanford.edu

Abstract

Consider a sequence of bits where we are trying to predict the next bit from the previous bits. Assume we are allowed to say "predict 0" or "predict 1", and our payoff is +1 if the prediction is correct and −1 otherwise. We will say that at each point in time the loss of an algorithm is the number of wrong predictions minus the number of right predictions so far. In this paper we are interested in algorithms that have essentially zero (expected) loss over any string at any point in time and yet have small regret with respect to always predicting 0 or always predicting 1. For a sequence of length T our algorithm has regret $14\epsilon T$ and loss $2\sqrt{T}\,e^{-\epsilon^2 T}$ in expectation for all strings. We show that the tradeoff between loss and regret is optimal up to constant factors. Our techniques extend to the general setting of N experts, where the related problem of trading off regret to the best expert for regret to the "special" expert has been studied by Even-Dar et al. (COLT'07). We obtain essentially zero loss with respect to the special expert and an optimal loss/regret tradeoff, improving upon the results of Even-Dar et al. and settling the main question left open in their paper. The strong loss bounds of the algorithm have some surprising consequences. First, we obtain a parameter-free algorithm for the experts problem that has optimal regret bounds with respect to k-shifting optima, i.e. bounds with respect to the optimum that is allowed to change arms multiple times. Moreover, for any window of size n the regret of our algorithm to any expert never exceeds $O(\sqrt{n(\log N + \log T)})$, where N is the number of experts and T is the time horizon, while maintaining the essentially zero loss property.

1 Introduction

Consider a gambler who is trying to predict the next bit in a sequence of bits. One could think of the bits as indications of whether a stock price goes up or down on a given day, where we assume that the stock always goes up or down by 1 (this is, of course, a very simplified model of the stock market). If the gambler predicts 1 (i.e. that the stock will go up), she buys one stock to sell it the next day, and short sells one stock if her prediction is 0. We will also allow the gambler to bet fractionally by letting her specify a confidence c, where 0 ≤ c ≤ 1, in her prediction. If the prediction is right the gambler gets a payoff of c, otherwise −c. While the gambler is tempted to make predictions with the prospect of making money, there is also the risk of ending up with a loss. Is there a way to never end up with a loss? Clearly there is the strategy of never predicting (by setting confidence 0) all the time, which never has a loss but also never has a positive payoff. However, if the sequence is very imbalanced and has many more 0's than 1's, then this never-predict strategy has a high regret with respect to the strategy that predicts the majority bit. Thus, one is interested in a strategy that has a small regret with respect to predicting the majority bit and incurs no loss at the same time. Our main result is that while one cannot always avoid a loss and still have a small regret, this is possible if we allow for an exponentially small loss. More precisely, we show that for any $\epsilon > 1/\sqrt{T}$ there exists an algorithm that achieves regret at most $14\epsilon T$ and loss at most $2e^{-\epsilon^2 T}\sqrt{T}$, where T is the time horizon.
Thus, the loss is exponentially small in the length of the sequence. The bit prediction problem can be cast as the experts problem with two experts: $S_+$, which always predicts 1, and $S_-$, which always predicts 0. This problem has been studied extensively, and very efficient algorithms are known. The weighted majority algorithm of [12] is known to give optimal regret guarantees. However, it can be seen that weighted majority may result in a loss of $\Omega(\sqrt{T})$. The best known result on bounding loss is the work of Even-Dar et al. [7] on the problem of trading off regret to the best expert for regret to the average expert, which is equivalent to our problem. Stated as a result on bounding loss, they were able to obtain a constant loss and regret $O(\sqrt{T}\log T)$. Their work left open the question of whether it is possible to even get a regret of $O(\sqrt{T\log T})$ and constant loss. In this paper we give an optimal regret/loss tradeoff, in particular showing that this regret can be achieved even with subconstant loss.

Our results extend to the general setting of prediction with expert advice when there are multiple experts. In this problem the decision maker iteratively chooses among N available alternatives without knowledge of their payoffs, and gets a payoff based on the chosen alternative. The payoffs of all alternatives are revealed after the decision is made. This process is repeated over T rounds, and the goal of the decision maker is to maximize her cumulative payoff over all time steps t = 1, ..., T. This problem and its variations have been studied extensively, and efficient algorithms have been obtained (e.g. [5, 12, 6, 2, 1]). The most widely used measure of performance of an online decision making algorithm is regret, which is defined as the difference between the payoff of the best fixed alternative and the payoff of the algorithm. The well-known weighted majority algorithm of [12] obtains regret $O(\sqrt{T\log N})$ even when no assumptions are made on the process generating the payoffs. Regret to the best fixed alternative in hindsight is a very natural notion when the payoffs are sampled from an unknown distribution, and in fact such scenarios show that the bound of $O(\sqrt{T\log N})$ on regret achieved by the weighted majority algorithm is optimal. Even-Dar et al. [7] gave an algorithm that has constant regret to any fixed distribution on the experts, at the expense of regret $O(\sqrt{T\log N}(\log T + \log\log N))$ with respect to all other experts. (In fact, [7] provide several algorithms, of which the most relevant for comparison are Phased Aggression, yielding $O(\sqrt{T\log N}(\log T + \log\log N))$ regret to the best, and D-Prod, yielding $O(\sqrt{T/\log N}\,\log T)$ regret to the best. For the bit prediction problem one would set N = 2 and use the uniform distribution over the "predict 0" and "predict 1" strategies as the special distribution. Our algorithm improves on both of them, yielding an optimal tradeoff.) We obtain an optimal tradeoff between the two, getting an algorithm with regret $O(\sqrt{T(\log N + \log T)})$ to the best and $O((NT)^{-\Omega(1)})$ to the average as a special case. We also note, similarly to [7], that our regret/loss tradeoff cannot be obtained by using standard regret minimization algorithms with a prior that is concentrated on the "special" expert, since the prior would have to put a significant weight on the "special" expert, resulting in $\Omega(T)$ regret to the best expert.

The extension to the case of N experts uses the idea of improving one expert's predictions by those of another. The strong loss bounds of our algorithm allow us to achieve lossless boosting, i.e. we use an available expert to continuously improve upon the performance of the base expert whenever possible, while essentially never hurting its performance.
When comparing two experts, we track the difference in their payoffs, discounted geometrically over time, and apply a transform g(x) to this difference to obtain a weighting that gives a linear combination of the two experts, with a higher weight applied to the expert with a higher discounted payoff. The shape of g(x) is given by $\mathrm{erf}\!\left(\frac{x}{4\sqrt{T}}\right)e^{x^2/(16T)}$, capped at ±1. The weighted majority algorithm, on the other hand, uses a transform with the shape of the $\tanh(x/\sqrt{T})$ function and ignores geometric discounting (see Figure 1). An important property of our algorithm is that it does not need a high imbalance between the number of ones and the number of zeros in the whole sequence to have a gain: it is sufficient for the imbalance to be large enough in at least one contiguous time window, the size of which is a parameter of the algorithm. (More precisely, we use an infinite window with geometrically decreasing weighting, so that most of the weight is contained in a window of size O(n), where n is a parameter of the algorithm.) This property allows us to easily obtain optimal adaptive regret bounds, i.e. we show that the payoff of our algorithm in any geometric window of size n is at most $O(\sqrt{n\log(NT)})$ worse than the payoff of the strategy that is best in that window (see Theorem 11). In the full version of the paper ([11]) we also obtain bounds against the class of strategies that are allowed to change experts multiple times, while maintaining the essentially zero loss property. We note that even though similar bounds (without the essentially zero loss property) have been obtained before ([3, 9, 14] and, more recently, [10]), our approach is very different and arguably simpler. In the full version of the paper, we also show how our algorithm yields regret bounds that depend on the $\ell_p$ norm of the costs, regret bounds dependent on Kolmogorov complexity, as well as applications of our framework to multi-armed bandits with partial information and to online convex optimization.

1.1 Related work

The question of what can be achieved if one would like to have a significantly better guarantee with respect to a fixed arm or a distribution on arms was asked before in [7], as discussed in the introduction. Tradeoffs between regret and loss were also examined in [13], where the author studied the set of values a, b for which an algorithm can have payoff $a\,\mathrm{OPT} + b\log N$, where OPT is the payoff of the best arm and a, b are constants. The problem of bit prediction was also considered in [8], where several loss functions are considered. None of them, however, corresponds to our setting, making the results incomparable. In recent work on the NormalHedge algorithm [4] the authors use a potential function which is very similar to our function g(x) (see (2) below), getting strong regret guarantees to the ε-quantile of best experts. However, their use of the function g(x) seems to be quite different from ours, as is the focus of the paper [4].

1.2 Preliminaries

We start by defining the bit prediction problem formally.
Let $b_t$, $t = 1,\dots,T$, be an adversarial sequence of bits. It will be convenient to adopt the convention that $b_t \in \{-1,+1\}$ instead of $b_t \in \{0,1\}$, since it simplifies the formula for the payoff. In fact, in what follows we will only assume that $-1 \le b_t \le 1$, allowing the $b_t$ to be real numbers. At each time step $t = 1,\dots,T$ the algorithm is required to output a confidence level $f_t \in [-1,1]$, and then the value of $b_t$ is revealed to it. The payoff of the algorithm by time $t_0$ is
$$A_{t_0} = \sum_{t=1}^{t_0} f_t b_t. \qquad (1)$$
For example, if $b_t \in \{-1,+1\}$, then this setup is analogous to a prediction process in which a player observes a sequence of bits and at each point in time predicts that the value of the next bit will be $\mathrm{sign}(f_t)$ with confidence $|f_t|$. Predicting $f_t \equiv 0$ amounts to not playing the game, and incurs no loss, while not bringing any profit. We define the loss of the algorithm on a string b as $\mathrm{loss} = \big|\min_t \min(A_t, 0)\big|$, i.e. the absolute value of the smallest negative payoff over all time steps. It is easy to see that any algorithm that has a positive expected payoff on some sequence necessarily loses on another sequence. Thus, we are concerned with finding a prediction strategy that has exponentially small loss bounds but also has low regret against a number of given prediction strategies. In the simplest setting we would like to design an algorithm that has low regret against two basic strategies: $S_+$, which always predicts +1, and $S_-$, which always predicts −1. Note that the maximum of the payoffs of $S_+$ and $S_-$ is always equal to $\big|\sum_{t=1}^{T} b_t\big|$. We denote the base random strategy, which predicts with confidence 0, by $S_0$. In what follows we will use the notation $A_T$ for the cumulative payoff of the algorithm by time T, as defined above. As we will show in Section 3, our techniques extend easily to give an algorithm that has low regret with respect to the best of any N bit prediction strategies and exponentially small loss. Our techniques work for the general experts problem, where loss corresponds to regret with respect to the "special" expert $S_0$, and hence we give the proof in this setting. This provides the connection to the work of [7]. In Section 2 we give an algorithm for the case of two prediction strategies $S_+$ and $S_-$, and in Section 3 we extend it to the general experts problem, additionally giving the claimed adaptive regret bounds.

2 Main algorithm

The main result of this section is

Theorem 1. For any $\epsilon \geq \frac{1}{\sqrt{T}}$ there exists an algorithm A for which
$$A_T \geq \max\left\{\Big|\sum_{j=1}^{T} b_j\Big| - 14\epsilon T,\ 0\right\} - 2\sqrt{T}\,e^{-\epsilon^2 T},$$
i.e. the algorithm has at most $14\epsilon T$ regret against $S_+$ and $S_-$, as well as exponentially small loss.

By setting $\epsilon$ so that the loss bound is $2Z\sqrt{T}$, we get a regret bound of $\sqrt{T\log(1/Z)}$. We note that the algorithm is a strict generalization of weighted majority, which can be seen by letting $Z = \Theta(1)$ (this property will also hold for the generalization to N experts in Section 3).

Our algorithm will have the following form. For a chosen discount factor $\beta = 1 - 1/n$, $0 \leq \beta \leq 1$, the algorithm maintains a discounted deviation $x_t = \sum_{j=1}^{t-1}\beta^{t-1-j} b_j$ at each time $t = 1,\dots,T$. The value of the prediction at time t is then given by $g(x_t)$ for a function $g(\cdot)$ to be defined (note that $x_t$ depends only on $b_{t'}$ for $t' < t$, so this is an online algorithm). The function g, as well as the discount factor β, depend on the desired bound on expected loss and regret against $S_+$ and $S_-$. In particular, we will set $\beta = 1 - 1/T$ for our main result on the regret/loss tradeoff, and will use the freedom to choose different values of β to obtain adaptive regret guarantees in Section 3.
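Before stating the algorithm, the bookkeeping of (1) and of the loss just defined can be made concrete in a few lines (a minimal sketch; function names are ours):

def payoff_and_loss(b, f):
    # b: outcomes in [-1, 1]; f: confidences in [-1, 1].
    # Returns the final payoff A_T from Eq. (1) and the loss, i.e. the
    # absolute value of the most negative intermediate payoff.
    assert len(b) == len(f)
    a, loss = 0.0, 0.0
    for bt, ft in zip(b, f):
        a += ft * bt
        loss = max(loss, -a)
    return a, loss

# Always predicting +1 on the string -1, +1, -1, +1 has final payoff 0
# but loss 1, since the payoff dips to -1 after the first step.
print(payoff_and_loss([-1, 1, -1, 1], [1, 1, 1, 1]))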
The algorithm is given by

Algorithm 1: Bounded loss prediction
1: $x_1 \leftarrow 0$
2: for t = 1 to T do
3:   Predict $\mathrm{sign}(g(x_t))$ with confidence $|g(x_t)|$.
4:   Set $x_{t+1} \leftarrow \beta x_t + b_t$.
5: end for

We start with an informal sketch of the proof, which will be made precise in Lemma 2 and Lemma 3 below. The proof is based on a potential function argument. In particular, we will choose the confidence function g(x) so that
$$\Phi_t = \int_0^{x_t} g(s)\,ds$$
is a potential function, which serves as a repository for guarding our loss (we will choose g(x) to be an odd function, and hence will always have $\Phi_t \geq 0$). In particular, we will choose g(x) so that the change of $\Phi_t$ lower bounds the payoff of the algorithm. If we let $\Phi_t = G(x_t)$ (assuming for the sake of clarity that $x_t > 0$), where $G(x) = \int_0^x g(s)\,ds$, we have
$$\Phi_{t+1} - \Phi_t = G(x_{t+1}) - G(x_t) \approx G'(x)\,\Delta x + G''(x)\,\Delta x^2/2 \approx g(x)\,[(\beta - 1)x + b_t] + g'(x)/2.$$
Since the payoff of the algorithm at time step t is $g(x_t)b_t$, we have
$$\Delta\Phi_t - g(x_t)b_t \approx -g(x_t)(1-\beta)x_t + g'(x_t)/2,$$
so the condition becomes $-g(x_t)(1-\beta)x_t + g'(x_t)/2 \leq Z$, where Z is the desired bound on the per-step loss of the algorithm. Solving this equation yields a function of the form
$$g(x) = \big(2Z\sqrt{T}\big)\,\mathrm{erf}\!\left(\frac{x}{4\sqrt{T}}\right)e^{x^2/(16T)},$$
where $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-s^2}\,ds$ is the error function (see Figure 1 for the shape of g(x)).
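A runnable sketch of Algorithm 1 with this erf-shaped confidence function follows. The constants reflect our reading of the formula above (the extraction is garbled, so they may differ from the published version):

import math

def g(x, T, Z):
    # Confidence function: erf-shaped and capped at +/-1 (cf. Eq. (2) with L = T).
    # Constants are our assumption from the informal derivation above.
    if x == 0:
        return 0.0
    expo = x * x / (16 * T)
    if expo > 50:  # the cap at 1 is reached long before the exponent gets here
        return math.copysign(1.0, x)
    mag = 2 * Z * math.sqrt(T) * math.erf(abs(x) / (4 * math.sqrt(T))) * math.exp(expo)
    return math.copysign(min(mag, 1.0), x)

def bounded_loss_prediction(bits, Z=0.01):
    # Algorithm 1: predict sign(g(x_t)) with confidence |g(x_t)|, where x_t
    # is the geometrically discounted deviation, beta = 1 - 1/T.
    T = len(bits)
    beta = 1.0 - 1.0 / T
    x, payoff = 0.0, 0.0
    for b in bits:                 # b in [-1, +1]
        f = g(x, T, Z)
        payoff += f * b
        x = beta * x + b
    return payoff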
U for some U ? 7L log(1/Z). A plot of the function g(x) is given in Figure 1. We choose  h(x) = 1, |x| < U . 0 o.w. The following lemma shows that the function g(x) satisfies the properties stated in Lemma 2: 5 (3) +1 tanh x U  g(x) ?U +U x ?1 Figure 1: The shape of the confidence function g(x) (solid line) and the tanh(x) function used by weighted majority (dotted line). Lemma 3 Let L > 0 be such that ?? = 1/n ? 1/L2 . Then for n ? 80 log(1/Z) the function g(x) defined by (2) satisfies 1 (? ?x + 1)2 ? max |g 0 (s)| ? ??xg(x)h(x)/2 + 2? ?LZ, 2 s?[?x?1,?x+1] where h(x) is the step function defined above. Proof: The intuition behind the Lemma is very simple. Note that s ? [?x ? 1, ?x + 1] is not much x ?1 Z. Since ? further than 1 away from x, so g 0 (s) is very close to g 0 (x) = ( 2L ? ? 1/L2 , 2 )g(x) + ?L 1 0 we have g (x) ? ??xg(x)/2 + ?? ??LZ. We defer the details of the proof to the full version. We can now lower bound the payoff of Algorithm 1. We will use the notation  0, |x| <  + |x| = |x| ? , |x| >  Theorem 4 Let n be the window size parameter of Algorithm 1. Then one has AT ? T X ? + ??|xt |+ U + |xT +1 |U ? 2ZT / n. t=1 Proof: By Lemma 3 we have that the function g(x) satisfies the conditions of Lemma 2, and so from the bounds stated in Lemma 2 the payoff of the algorithm is at least T X ? ??|xt |+ U + ?T +1 ? 2ZT / n. t=1 By definition of ?t , since |g(x)| = 1 for |x| ? U , one has ?T +1 ? |xT +1 |+ U , which gives the desired statement. Now, setting n = T , we obtain Theorem 5 ? ? T ? ? X p ? AT ? max bj ? 14 T log(1/Z), 0 ? 2Z T . ? ? j=1 Proof: In light of Theorem 4 it remains to bound ?? T X t=1 xt + xT +1 = ?? T ?1 X t X PT ?t?j bj + xT +1 = t=1 j=1 ?xt t=1 ? T ?1 X + xT +1 . We have bt (1 ? ?T ?t ) + t=1 T X t=1 ?T ?t bt = T X bt . (4) t=1 p Thus, since U ? 2 T log(1/Z), and we chose ? = 1 ? 1/n = 1 ? 1/T , we get the result by combining Theorem 4 and equation (4). Proof of Theorem 1: Follows by setting log(1/Z) = 2 T . Our loss/regret tradeoff is optimal up to constant factors (proof deferred to the full version): 6 p ? Theorem 6 Any algorithm that has regret O( T log(1/Z)) incurs loss ?(Z T ) on at least one sequence of bits bt , t = 1, . . . , T . Note that if Z = o(1/T ), then the payoff ? of the algorithm is positive whenever the absolute value of the deviation xt is larger than, say 8 n log T in at least one window of size n. 3 Combining strategies (lossless boosting) In the previous section we derived an algorithm for the bit prediction problem with low regret to the S+ and S? strategies and exponentially small loss. We now show how our techniques yield an algorithm that has low regret to the best of N bit prediction strategies S1 , . . . , SN and exponentially small loss. However, since the proof works for the general experts problem, where loss corresponds to regret to a ?special? expert S0 , we state it in the general experts setting. In what follows we will refer to regret to S0 as loss. We will also prove optimal bounds on regret that hold in every window of length n at the end of the section. We start by proving Theorem 7 For any Z < 1/e there exists an algorithm for combining N strategies that has regret p ? O( T log(N/Z)) against the best of N strategies and loss at most O(ZN T ) with respect to any strategy S0 fixed a priori. These bounds are optimal up to constant factors. We first fix notation. A prediction strategy S given a bit string bt , produces a sequence of weights PN wjt on the set of experts j = 1, . . . 
We first fix notation. A prediction strategy S, given a bit string $b_t$, produces a sequence of weights $w_{jt}$ on the set of experts $j = 1,\dots,N$, such that $w_{jt}$ depends only on $b_{t'}$, $t' < t$, and $\sum_{j=1}^{N} w_{jt} = 1$, $w_{jt} \geq 0$ for all t. Thus, using strategy S amounts to using expert j with probability $w_{j,t}$ at time t, for all $t = 1,\dots,T$. For two strategies $S_1, S_2$ we write $\alpha_t S_1 + (1-\alpha_t)S_2$ to denote the strategy whose weights are a convex combination of the weights of $S_1$ and $S_2$ given by coefficients $\alpha_t \in [0,1]$. For a strategy S we denote its payoff at time t by $s_t$.

We start with the case of two strategies $S_1, S_2$. Our algorithm will consider $S_1$ as the base strategy (corresponding to the null strategy $S_0$ in the previous section) and will use $S_2$ to improve on $S_1$ whenever possible, without introducing significant loss over $S_1$ in the process. We define
$$\tilde g(x) = \begin{cases} g\!\left(\tfrac{1}{2}x\right), & x > 0\\ 0 & \text{otherwise,}\end{cases}$$
i.e. we are using a one-sided version of g(x). It is easy to see that $\tilde g(x)$ satisfies the conditions of Lemma 2 with h(x) as defined in (3). The intuition behind the algorithm is that, since the difference in payoff obtained by using $S_2$ instead of $S_1$ is given by $(s_{2,t} - s_{1,t})$, it is sufficient to emulate Algorithm 1 on this sequence. In particular, we set $x_t = \sum_{j=1}^{t-1}\beta^{t-1-j}(s_{2,j} - s_{1,j})$ and predict $\tilde g(x_t)$ (note that since $|s_{1,t} - s_{2,t}| \leq 2$, we need to use $g(\frac{1}{2}x)$ in the definition of $\tilde g$ to scale the payoffs). Predicting 0 corresponds to using $S_1$, predicting 1 corresponds to using $S_2$, and fractional values correspond to a convex combination of $S_1$ and $S_2$. Formally, the algorithm COMBINE($S_1, S_2, \beta$) takes the following form:

Algorithm 2: COMBINE($S_1, S_2, \beta$)
1: Input: strategies $S_1$, $S_2$
2: Output: strategy $S^*$
3: $x_1 \leftarrow 0$
4: for t = 1 to T do
5:   Set $S^*_t \leftarrow S_{1,t}(1 - \tilde g(x_t)) + S_{2,t}\,\tilde g(x_t)$.
6:   Set $x_{t+1} \leftarrow \beta x_t + (s_{2,t} - s_{1,t})$.
7: end for
8: return $S^*$

Note that COMBINE($S_1, S_2, \beta$) is an online algorithm, since $S^*_t$ only depends on $s_{1,t'}, s_{2,t'}$ for $t' < t$.

Lemma 8. There exists an algorithm that, given two strategies $S_1$ and $S_2$, gets payoff at least
$$\sum_{t=1}^{T} s_{1,t} + \max\left\{\sum_{t=1}^{T} (s_{2,t} - s_{1,t}) - O\big(\sqrt{T\log(1/Z)}\big),\ 0\right\} - O\big(Z\sqrt{T}\big).$$

Proof outline: Use Algorithm 2 with $\beta = 1 - 1/T$. This amounts to applying Algorithm 1 to the sequence $(s_{2,t} - s_{1,t})$, so the guarantees follow by Theorem 4.

We emphasize the property that Algorithm 2 combines two strategies $S_1$ and $S_2$, improving on the performance of $S_1$ using $S_2$ whenever possible, essentially without introducing any loss with respect to $S_1$. Thus, this amounts to lossless boosting of one strategy's performance using another. Thus, we have

Proof of Theorem 7: Use Algorithm 2 repeatedly to combine the N strategies $S_1,\dots,S_N$ by initializing $S^0 \leftarrow S_0$ and setting $S^j \leftarrow \mathrm{COMBINE}(S^{j-1}, S_j, 1 - 1/T)$, $j = 1,\dots,N$, where $S_0$ is the null strategy. The regret and loss guarantees follow by Lemma 8.

Corollary 9. Setting $Z = (NT)^{-1-\delta}$ for δ > 0, we get regret $O(\sqrt{\delta T(\log N + \log T)})$ to the best of the N strategies and loss at most $O((NT)^{-\delta})$ with respect to the strategy $S_0$ fixed a priori. These bounds are optimal and improve on the work of [7].

So far we have used $\beta = 1 - 1/T$ for all results. One can obtain optimal adaptive guarantees by performing boosting over a range of decay parameters β. In particular, choose $\beta_j = 1 - 1/n_j$, where $n_j$, $j = 1,\dots,W$, are powers of two between $80\log(NT)$ and T. Then let

Algorithm 3: Boosting over different time scales
1: $S^{0,W} \leftarrow S_0$
2: for j = W downto 1 do
3:   for k = 1 to N do
4:     $S^{k,j} \leftarrow \mathrm{COMBINE}(S^{k-1,j}, S_k, 1 - 1/n_j)$
5:   end for
6:   $S^{0,j-1} \leftarrow S^{N,j}$
7: end for
8: $S^* \leftarrow S^{0,0}$
9: return $S^*$
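A sketch of Algorithm 2, and of the way Algorithm 3 would call it, reusing g from the earlier snippet. The per-step payoff-stream interface is our assumption; the exact constants inherit the caveat above:

def g_tilde(x, n, Z):
    # One-sided version of g from Eq. (2): zero for x <= 0, else g(x/2);
    # the halving accounts for |s_{1,t} - s_{2,t}| <= 2.
    return g(x / 2.0, n, Z) if x > 0 else 0.0

def combine(s1, s2, beta, Z=0.01):
    # Algorithm 2 on payoff streams: returns the per-step payoffs of
    # S* = (1 - w) S1 + w S2, with w driven by the discounted difference.
    n = 1.0 / (1.0 - beta)            # effective window size
    x, out = 0.0, []
    for a, b in zip(s1, s2):
        w = g_tilde(x, n, Z)
        out.append((1 - w) * a + w * b)
        x = beta * x + (b - a)
    return out

def boost_over_scales(strategies, scales, Z=0.01):
    # Algorithm 3 on payoff streams: base strategy is the all-zero stream;
    # the outer loop runs from the largest window down to the smallest.
    T = len(strategies[0])
    s = [0.0] * T
    for n in sorted(scales, reverse=True):
        for sk in strategies:
            s = combine(s, sk, 1.0 - 1.0 / n, Z)
    return s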
We note that it is important that the outer loop in Algorithm 3 goes from large windows down to small windows.

Finally, we show another adaptive regret property of Algorithm 3. First, for a sequence $w_t$, $t = 1,\dots,T$ of real numbers and for a parameter $\beta = 1 - 1/n \in (0,1)$, define
$$\tilde w^\beta_t = \sum_{j=1}^{t} \beta^{t-j} w_j.$$
We will need the following definition:

Definition 10. A sequence $w_j$ is Z-uniform at scale $\beta = 1 - 1/n$ if one has $\tilde w^\beta_t \leq c\sqrt{n\log(1/Z)}$ for all $1 \leq t \leq T$, for some constant c > 0.

Note that if the input sequence is i.i.d. uniform on {−1, +1}, then it is Z-uniform at any scale with probability at least 1 − Z, for any Z > 0. We now prove that the difference between the payoff of our algorithm and the payoff of any expert is Z-uniform, i.e. does not exceed the standard deviation of a uniformly random variable in any sufficiently large window, when the loss is bounded by Z. More precisely,

Theorem 11. The sequences $s_{j,t} - s^*_t$ are Z-uniform for any $1 \leq j \leq N$ at any scale $\beta \geq 1 - 1/(80\log(1/Z))$ when $Z = o((NT)^{-2})$. Moreover, the loss of the algorithm with respect to the base strategy is at most $2ZN\sqrt{T}$.

The proof of Theorem 11 is given in the full version.

References

[1] J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. COLT, 2009.
[2] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The nonstochastic multi-armed bandit problem. SIAM J. Comput., 32:48–77, 2002.
[3] A. Blum and Y. Mansour. From external to internal regret. Journal of Machine Learning Research, pages 1307–1324, 2007.
[4] K. Chaudhuri, Y. Freund, and D. Hsu. A parameter free hedging algorithm. NIPS, 2009.
[5] T. Cover. Behaviour of sequential predictors of binary sequences. Transactions of the Fourth Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, 1965.
[6] T. Cover. Universal portfolios. Mathematical Finance, 1991.
[7] E. Even-Dar, M. Kearns, Y. Mansour, and J. Wortman. Regret to the best vs. regret to the average. Machine Learning, 72:21–37, 2008.
[8] Y. Freund. Predicting a binary sequence almost as well as the optimal biased coin. COLT, 1996.
[9] Y. Freund, R. E. Schapire, Y. Singer, and M. K. Warmuth. Using and combining predictors that specialize. STOC, pages 334–343, 1997.
[10] E. Hazan and C. Seshadhri. Efficient learning algorithms for changing environments (full version available at http://eccc.hpi-web.de/eccc-reports/2007/tr07-088/index.html). ICML, pages 393–400, 2009.
[11] M. Kapralov and R. Panigrahy. Prediction without loss in multi-armed bandit problems. http://arxiv.org/abs/1008.3672, 2010.
[12] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. FOCS, 1989.
[13] V. Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences, 1998.
[14] V. Vovk. Derandomizing stochastic prediction strategies. Machine Learning, pages 247–282, 1999.
An Exact Algorithm for F-Measure Maximization

Krzysztof Dembczyński
Institute of Computing Science
Poznań University of Technology
Poznań, 60-695 Poland

Willem Waegeman
Mathematical Modelling, Statistics and Bioinformatics, Ghent University
Ghent, 9000 Belgium

Eyke Hüllermeier
Mathematics and Computer Science
Philipps-Universität Marburg
Marburg, 35032 Germany

Weiwei Cheng
Mathematics and Computer Science
Philipps-Universität Marburg
Marburg, 35032 Germany

Abstract

The F-measure, originally introduced in information retrieval, is nowadays routinely used as a performance metric for problems such as binary classification, multi-label classification, and structured output prediction. Optimizing this measure remains a statistically and computationally challenging problem, since no closed-form maximizer exists. Current algorithms are approximate and typically rely on additional assumptions regarding the statistical distribution of the binary response variables. In this paper, we present an algorithm which is not only computationally efficient but also exact, regardless of the underlying distribution. The algorithm requires only a quadratic number of parameters of the joint distribution (with respect to the number of binary responses). We illustrate its practical performance by means of experimental results for multi-label classification.

1 Introduction

While being rooted in information retrieval [1], the so-called F-measure is nowadays routinely used as a performance metric for different types of prediction problems, including binary classification, multi-label classification (MLC), and certain applications of structured output prediction, like text chunking and named entity recognition. Compared to measures like error rate in binary classification and Hamming loss in MLC, it enforces a better balance between performance on the minority and the majority class, respectively, and, therefore, it is more suitable in the case of imbalanced data. Given a prediction $h = (h_1,\dots,h_m) \in \{0,1\}^m$ of an m-dimensional binary label vector $y = (y_1,\dots,y_m)$ (e.g., the class labels of a test set of size m in binary classification or the label vector associated with a single instance in MLC), the F-measure is defined as follows:
$$F(y,h) = \frac{2\sum_{i=1}^{m} y_i h_i}{\sum_{i=1}^{m} y_i + \sum_{i=1}^{m} h_i} \in [0,1], \qquad (1)$$
where 0/0 = 1 by definition. This measure essentially corresponds to the harmonic mean of precision (prec) and recall (rec):
$$\mathrm{prec}(y,h) = \frac{\sum_{i=1}^{m} y_i h_i}{\sum_{i=1}^{m} h_i}, \qquad \mathrm{rec}(y,h) = \frac{\sum_{i=1}^{m} y_i h_i}{\sum_{i=1}^{m} y_i}.$$
One can generalize the F-measure to a weighted harmonic average of these two values, but for the sake of simplicity, we stick to the unweighted mean, which is often referred to as the F1-score or the F1-measure.
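For concreteness, (1) and the harmonic-mean relation can be checked in a few lines (a minimal sketch; function names are ours):

def f_measure(y, h):
    # Instance-wise F-measure of Eq. (1); y, h are 0/1 lists. By convention
    # 0/0 = 1 (both vectors all zeros).
    tp = sum(yi * hi for yi, hi in zip(y, h))
    denom = sum(y) + sum(h)
    return 1.0 if denom == 0 else 2.0 * tp / denom

def precision(y, h):
    return 1.0 if sum(h) == 0 else sum(yi * hi for yi, hi in zip(y, h)) / sum(h)

def recall(y, h):
    return 1.0 if sum(y) == 0 else sum(yi * hi for yi, hi in zip(y, h)) / sum(y)

# F is the harmonic mean of precision and recall:
y, h = [1, 0, 1, 1], [1, 1, 0, 1]
assert abs(f_measure(y, h) - 2 / (1 / precision(y, h) + 1 / recall(y, h))) < 1e-12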
Despite its popularity in experimental settings, only a few methods for training classifiers that directly optimize the F-measure have been proposed so far. In binary classification, the existing algorithms are extensions of support vector machines [2, 3] or logistic regression [4]. However, the most popular methods, including [5], rely on explicit threshold adjustment. Some algorithms have also been proposed for structured output prediction [6, 7, 8] and MLC [9, 10, 11]. In these two application domains, three different aggregation schemes of the F-measure can be distinguished, namely the instance-wise, the micro-, and the macro-averaging. One should carefully distinguish these versions, as algorithms optimized with a given objective usually perform suboptimally for other (target) evaluation measures.

All the above algorithms intend to optimize the F-measure during the training phase. Conversely, in this article we rather investigate the orthogonal problem of inference from a probabilistic model. Modeling the ground truth as a random variable Y, i.e., assuming an underlying probability distribution p(Y) on $\{0,1\}^m$, the prediction $h^*_F$ that maximizes the expected F-measure is given by
$$h^*_F = \arg\max_{h\in\{0,1\}^m} E_{y\sim p(Y)}[F(y,h)] = \arg\max_{h\in\{0,1\}^m} \sum_{y\in\{0,1\}^m} p(Y=y)\,F(y,h). \qquad (2)$$
As discussed in Section 2, this setting was mainly examined before by [12], under the assumption of independence of the $Y_i$, i.e., $p(Y=y) = \prod_{i=1}^{m} p_i^{y_i}(1-p_i)^{1-y_i}$ with $p_i = p(Y_i = 1)$. Indeed, finding the maximizer (2) is in general a difficult problem. Apparently, there is no closed-form expression, and a brute-force search is infeasible (it would require checking all $2^m$ combinations of the prediction vector h). At first sight, it also seems that information about the entire joint distribution p(Y) is needed to maximize the F-measure. Yet, as will be shown in this paper, the problem can be solved more efficiently. In Section 3, we present a general algorithm for maximizing the F-measure that requires only $m^2+1$ parameters of the joint distribution. If these parameters are given, the exact solution can be obtained in time $o(m^3)$. This result holds regardless of the underlying distribution. In particular, unlike algorithms such as [12], we do not require independence of the binary response variables (labels). While being natural for problems like binary classification, this assumption is indeed not tenable in domains like MLC and structured output prediction. A discussion of existing methods for F-measure maximization, along with results indicating their shortcomings, is provided in Section 2. An experimental comparison in the context of MLC is presented in Section 4.

2 Existing Algorithms for F-Measure Maximization

Current algorithms for solving (2) make different assumptions to simplify the problem. First of all, the algorithms operate on a constrained hypothesis space, sometimes justified by theoretical arguments. Secondly, they guarantee optimality only for specific distributions p(Y).

2.1 Algorithms Based on Label Independence

By assuming independence of the random variables $Y_1,\dots,Y_m$, the optimization problem (2) can be substantially simplified. It has been shown independently in [13] and [12] that the optimal solution always contains the labels with the highest marginal probabilities $p_i$, or no labels at all. As a consequence, only a few hypotheses h (m+1 instead of $2^m$) need to be examined, and the computation of the expected F-measure can be performed in an efficient way. Lewis [13] showed that the expected F-measure can be approximated by the following expression under the assumption of independence (in the following, we denote by 0 and 1 the vectors containing all zeros and all ones, respectively):
$$E_{y\sim p(Y)}[F(y,h)] \simeq \begin{cases} \prod_{i=1}^{m} (1-p_i) & \text{if } h = 0,\\[6pt] \dfrac{2\sum_{i=1}^{m} p_i h_i}{\sum_{i=1}^{m} p_i + \sum_{i=1}^{m} h_i} & \text{if } h \neq 0.\end{cases}$$
This approximation is exact for h = 0, while for h ≠ 0, an upper bound on the error can easily be determined [13].
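To make this concrete, a small brute-force check of the expectation in (2) under independence against Lewis's approximation (a sketch for small m only, reusing f_measure from the earlier snippet):

from itertools import product
import math

def expected_f_exact(h, p):
    # Exact E[F(Y,h)] under label independence, by enumeration (small m only).
    total = 0.0
    for y in product([0, 1], repeat=len(p)):
        prob = 1.0
        for yi, pi in zip(y, p):
            prob *= pi if yi else (1.0 - pi)
        total += prob * f_measure(list(y), h)
    return total

def expected_f_lewis(h, p):
    # Lewis's closed-form approximation under independence.
    if sum(h) == 0:
        return math.prod(1.0 - pi for pi in p)
    return 2.0 * sum(pi * hi for pi, hi in zip(p, h)) / (sum(p) + sum(h))

p, h = [0.9, 0.6, 0.1], [1, 1, 0]
print(expected_f_exact(h, p), expected_f_lewis(h, p))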
He noticed that (2) can be solved via outer and inner maximization. Namely, (2) can be transformed into an inner maximization ? h(k) = arg max Ey?p(Y ) [F (y, h)] , (3) h?Hk where Hk = {h ? {0, 1}m | Pm i=1 hi = k}, followed by an outer maximization h?F = arg max ? ? Ey?p(Y ) [F (y, h)] . (4) h?{h(0) ,...,h(m) } The outer maximization (4) can be done by simply checking all m + 1 possibilities. The main effort is then devoted for solving the inner maximization (3). According to Theorem 2.1, to solve (3) for a given k, we need to check only one vector h in which hi = 1 for the k labels with highest marginal probabilities pi . The remaining problem is the computation of the expected F-measure in (3). This expectation cannot be computed naively, as the sum is over exponentially many terms. But the F-measure is a function of integer counts that are bounded, so it can normally only assume a much smaller number of distinct values. The cardinality of its domain is indeed exponential in m, but the cardinality of its range is polynomial in m, so the expectation can be computed in polynomial time. As a result, Jansche [12] obtains a procedure that is cubic in m for computing (3). He also presents approximate variants of this procedure, reducing its complexity from cubic to quadratic or even to linear. The results of the quadratic-time approximation, according to [12], are almost indistinguishable in practice from the exact algorithm; but still the overall complexity of the approach is O(m3 ). If the independence assumption is violated, the above methods may produce predictions being far away from the optimal one. The following result shows this concretely for the method of Jansche.2 Proposition 2.1. Let hJ be a vector of predictions obtained by MEUF, then the worst-case regret converges to one in the limit of m, i.e.,   lim sup (EY F (Y, h?F ) ? F (Y, hJ ) ) = 1, m?? p where the supremum is taken over all possible distributions p(Y ). Additionally, one can easily construct families of probability distributions that obtain a relatively fast convergence rate as a function of m. 2.2 Algorithms Based on the Multinomial Distribution Solving (2) becomes straightforward in the case of a specific distribution in whichPthe probabilm ity mass is distributed over vectors y containing only a single positive label, i.e., i=1 yi = 1, corresponding to the multinomial distribution. This was studied in [14] in the setting of so-called non-deterministic classification. Theorem 2.2 (Del Coz et al. [14]). Denote by y(i) a vector for which yi = 1 and all the other entries are zeros. Assume that p(Y ) is a joint distribution such that p(Y = y(i)) = pi . The maximizer h?F of (2) consists of the k labels with the highest marginal probabilities, where k is the first integer for which k X pj ? (1 + k)pk+1 ; j=1 if there is no such integer, then h = 1. 2.3 Algorithms Based on Thresholding on Ordered Marginal Probabilities Since all the methods so far rely on the fact that the optimal solution contains ones for the labels with the highest marginal probabilities (or consists of a vector of zeros), one may expect that thresholding on the marginal probabilities (hi = 1 for pi ? ?, and hi = 0 otherwise) will provide a solution to 2 Some of the proofs have been attached to the paper as supplementary material and will also be provided later with the extended version of the paper. 3 (2) in general. Obviously, to find an optimal threshold ?, access to the entire joint distribution is needed. 
2.3 Algorithms Based on Thresholding on Ordered Marginal Probabilities

Since all the methods so far rely on the fact that the optimal solution contains ones for the labels with the highest marginal probabilities (or consists of a vector of zeros), one may expect that thresholding on the marginal probabilities ($h_i = 1$ for $p_i \geq \tau$, and $h_i = 0$ otherwise) will provide a solution to (2) in general. However, this is not the main problem here, since in the next section we will show that only a polynomial number of parameters of the joint distribution is needed. What is more interesting is the observation that the F-maximizer is in general not consistent with the order of the marginal label probabilities. In fact, the regret can be substantial, as shown by the following result.

Proposition 2.3. Let $h_T$ be a vector of predictions obtained by putting a threshold on the sorted marginal probabilities in the optimal way. Then the worst-case regret is lower bounded by
$$\sup_p \big( E_Y\big[F(Y, h^*_F) - F(Y, h_T)\big] \big) \geq \max\!\left(0,\ \frac{1}{6} - \frac{2}{m+4}\right),$$
where the supremum is taken over all possible distributions p(Y). (Finding the exact value of the supremum is an interesting open question.)

This is a rather surprising result in light of the existence of many algorithms that rely on finding a threshold for maximizing the F-measure [5, 9, 10]. While being justified by Theorems 2.1 and 2.2 for specific applications, this approach does not yield optimal predictions in general.

3 An Exact Algorithm for F-Measure Maximization

We now introduce an exact and efficient algorithm for computing the F-maximizer without using any additional assumption on the probability distribution p(Y). While adopting the idea of decomposing the problem into an outer and an inner maximization, our algorithm differs from Jansche's in the way the inner maximization is solved. As a key element, we consider equivalence classes for the labels in terms of the number of ones in the vectors h and y. The optimization of the F-measure can be substantially simplified by using these equivalence classes, since h and y then only appear in the numerator of the objective function. First, we show that only $m^2+1$ parameters of the joint distribution p(Y) are needed to compute the F-maximizer.

Theorem 3.1. Let $s_y = \sum_{i=1}^{m} y_i$. The solution of (2) can be computed by solely using $p(Y=0)$ and the values
$$p_{is} = p(Y_i = 1,\ s_y = s), \qquad i, s \in \{1,\dots,m\},$$
which constitute an $m\times m$ matrix P.

Proof. The inner optimization problem (3) can be formulated as follows:
$$h^{(k)} = \arg\max_{h\in H_k} E_{y\sim p(Y)}[F(y,h)] = \arg\max_{h\in H_k} \sum_{y\in\{0,1\}^m} p(y)\,\frac{2\sum_{i=1}^{m} y_i h_i}{s_y + k}.$$
The sums can be swapped, resulting in
$$h^{(k)} = \arg\max_{h\in H_k} 2\sum_{i=1}^{m} h_i \sum_{y\in\{0,1\}^m} \frac{p(y)\,y_i}{s_y + k}. \qquad (5)$$
Furthermore, one can sum up the probabilities p(y) for all y with an equal value of $s_y$. By using
$$p_{is} = \sum_{y\in\{0,1\}^m:\; s_y = s} y_i\, p(y),$$
one can transform (5) into the following expression:
$$h^{(k)} = \arg\max_{h\in H_k} 2\sum_{i=1}^{m} h_i \sum_{s=1}^{m} \frac{p_{is}}{s+k}. \qquad (6)$$
As a result, one does not need the whole distribution to solve (3), but only the values of $p_{is}$, which can be given in the form of an $m\times m$ matrix P with entries $p_{is}$. For the special case of k = 0, we have $h^{(k)} = 0$ and $E_{y\sim p(Y)}[F(y,0)] = p(Y=0)$.

Algorithm 1: General F-measure Maximizer
INPUT: matrix P and probability p(Y = 0)
define matrix W with elements given by Eq. (7); compute F = PW
for k = 1 to m do
  solve the inner optimization problem (3), which can be reformulated as
  $h^{(k)} = \arg\max_{h\in H_k} 2\sum_{i=1}^{m} h_i f_{ik}$,
  by setting $h_i = 1$ for the top k elements in the k-th column of matrix F, and $h_i = 0$ for the rest;
  store the value of $E_{y\sim p(Y)}[F(y, h^{(k)})] = 2\sum_{i=1}^{m} h^{(k)}_i f_{ik}$;
end for
for k = 0, take $h^{(k)} = 0$ and $E_{y\sim p(Y)}[F(y,0)] = p(Y=0)$;
solve the outer optimization problem (4): $h^*_F = \arg\max_{h\in\{h^{(0)},\dots,h^{(m)}\}} E_{y\sim p(Y)}[F(y,h)]$;
return $h^*_F$ and $E_{y\sim p(Y)}[F(y,h^*_F)]$

If the matrix P is given, the solution of (2) is straightforward.
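Theorem 3.1 also suggests where P might come from when one can only sample from p(Y) (as in Section 4 below): estimate the $p_{is}$ and p(Y = 0) empirically. A minimal NumPy sketch (names ours):

import numpy as np

def estimate_gfm_params(samples):
    # samples: (n, m) array of 0/1 label vectors drawn from p(Y).
    # Returns the empirical matrix P with P[i, s-1] ~ p(Y_i = 1, s_y = s)
    # and the empirical p(Y = 0).
    Y = np.asarray(samples)
    n, m = Y.shape
    s = Y.sum(axis=1)                              # s_y for each sample
    P = np.zeros((m, m))
    for k in range(1, m + 1):
        mask = (s == k)
        if mask.any():
            P[:, k - 1] = Y[mask].sum(axis=0) / n  # joint, not conditional
    return P, float((s == 0).mean())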
To simplify the notation, let us introduce an $m\times m$ matrix W with elements
$$w_{sk} = \frac{1}{s+k}, \qquad s, k \in \{1,\dots,m\}. \qquad (7)$$
The resulting algorithm, referred to as the General F-measure Maximizer (GFM), is summarized in Algorithm 1, and its time complexity is analyzed in the following theorem.

Theorem 3.2. Algorithm 1 solves problem (2) in time $o(m^3)$, assuming that the matrix P of $m^2$ parameters and $p(Y=0)$ are given.

Proof. We can notice in (6) that, for a given s, the sum s+k assumes at most m+1 values. By introducing the matrix W with elements (7), we can simplify (6) to
$$h^{(k)} = \arg\max_{h\in H_k} 2\sum_{i=1}^{m} h_i f_{ik}, \qquad (8)$$
where the $f_{ik}$ are the elements of the matrix F = PW. To solve (8), it is enough to find the top k elements (i.e., the elements with the highest values) in the k-th column of matrix F, which can be carried out in linear time [15]. The solution of the outer optimization problem (4) is then straightforward. Consequently, the complexity of the algorithm is dominated by a matrix multiplication, which is solved naively in $O(m^3)$, but faster algorithms working in $O(m^{2.376})$ are known [16]. (The complexity of the Coppersmith–Winograd algorithm [16] is more of theoretical significance, since in practice this algorithm outperforms the naive method only for huge matrices.)

Let us briefly discuss the properties of our algorithm in comparison to the other algorithms discussed in Section 2. First of all, MEUF is characterized by a much higher time complexity, being $O(m^4)$ for the exact version. The recommended approximate variant reduces this complexity to $O(m^3)$. In turn, the GFM algorithm has a complexity of $o(m^3)$. In addition, let us also remark that this complexity can be further decreased if the number of distinct values of $s_y$ with non-zero probability mass is smaller than m.

On the other hand, the MEUF framework will not deliver an exact F-maximizer if the assumption of independence is violated, but it relies on a smaller number of parameters (m values representing the marginal probabilities). Our approach needs $m^2+1$ parameters, but then computes the maximizer exactly. Since estimating a larger number of parameters is statistically more difficult, it is a priori unclear which method performs better in practice.

Our algorithm can also be tailored for finding an optimal threshold. It is then simplified, due to the constrained number of hypotheses. Instead of finding the top k elements in the k-th column, it is enough to rely on the order of the marginal probabilities $p_i = \sum_{s=1}^{m} p_{is}$. As a result, there is no need to compute the entire matrix F; instead, only the elements that correspond to the k highest marginal probabilities for each column k are needed. Of course, the thresholding can be further simplified by verifying only a small number t < m of thresholds.
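For concreteness, a sketch of Algorithm 1 itself, following Eqs. (7) and (8); it assumes P and p(Y = 0) as in Theorem 3.1 (e.g., from the estimator sketched earlier):

import numpy as np

def gfm(P, p0):
    # General F-measure Maximizer: P[i, s-1] = p(Y_i = 1, s_y = s), p0 = p(Y = 0).
    # Returns the maximizer h and its expected F-measure.
    m = P.shape[0]
    S = np.arange(1, m + 1)
    W = 1.0 / (S[:, None] + S[None, :])     # w_sk = 1/(s+k), Eq. (7)
    F = P @ W                               # f_ik, Eq. (8)
    best_h, best_val = np.zeros(m, dtype=int), p0   # the k = 0 case
    for k in range(1, m + 1):
        top = np.argsort(-F[:, k - 1])[:k]  # top k entries of column k
        val = 2.0 * F[top, k - 1].sum()
        if val > best_val:
            best_val = val
            best_h = np.zeros(m, dtype=int)
            best_h[top] = 1
    return best_h, best_val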
Thus, we optimize the performance for each instance individually (instance-wise F-measure), in contrast to macro- and micro-averaging of the F-measure. We follow an approach similar to Conditional Random Fields (CRFs) [17, 18], which estimates the joint conditional distribution p(Y | x). This approach has the additional advantage that one can easily sample from the estimated distribution. The underlying idea is to repeatedly apply the product rule of probability to the joint distribution of the labels Y = (Y_1, …, Y_m):

  p(Y = y | x) = Π_{k=1}^m p(Y_k = y_k | x, y_1, …, y_{k−1}).    (9)

This approach, referred to as Probabilistic Classifier Chains (PCC), has proved to yield state-of-the-art performance in MLC [19]. Learning in this framework can be considered as a procedure that relies on constructing probabilistic classifiers for estimating p(Y_k = y_k | x, y_1, …, y_{k−1}), independently for each k = 1, …, m. To sample from the conditional joint distribution p(Y | x), one follows the chain and picks the value of label y_k by tossing a biased coin with probabilities given by the k-th classifier. Based on a sample of observations generated in this way, our GFM algorithm can be used to perform the optimal inference under the F-measure.

In the experiments, we train PCC by using linear regularized logistic regression. By plugging the log-linear model into (9), it can be shown that pairwise dependencies between labels y_i and y_j can be modeled. We tune the regularization parameter using 3-fold cross-validation. To perform inference, we draw for each test example a sample of 200 observations from the estimated conditional distribution. We then apply five inference methods. The first one (H) estimates marginal probabilities p_i(x) and predicts 1 for labels with p̂_i(x) ≥ 0.5; this is an optimal strategy for the Hamming loss. The second method (MEUF) uses the estimates p̂_i(x) for computing the F-measure by applying the MEUF method. If the labels are independent, this method computes the F-maximizer exactly. As a third method, we use the approximate cubic-time variant of MEUF with the parameters suggested in the original paper [12]. Finally, we use GFM and its variant that finds the optimal threshold (GFM-T).

Before showing the results of PCC on benchmark datasets, let us discuss results for two synthetic models, one with independent and another one with dependent labels. Plots and a description of the models are given in Fig. 1. As can be observed, MEUF performs best for independent labels, while GFM approaches its performance as the sample size increases. This is coherent with our theoretical analysis, since GFM needs to estimate more parameters. However, in the case of dependent labels, MEUF performs poorly, even for a larger sample size, since the underlying assumption is not satisfied. Interestingly, both approximate variants perform very similarly to the original algorithms. We also see that GFM has a huge advantage over MEUF regarding the time complexity.⁵

⁵All the computations are performed on a typical desktop machine.

[Figure 1 spans three panels: the left and center panels plot the F-measure (F1) against sample size (20 to 100) for GFM, GFM-T, MEUF, and MEUF Approx; the right panel plots running time in seconds against the number of labels (10 to 50).]

Figure 1: The plots show the performance under the F-measure of the inference methods: GFM, its thresholding variant GFM-T, MEUF, and its approximate version MEUF Approx.
Left: the performance as a function of sample size generated from independent distribution with pi = 0.12 and m = 25 labels. Center: similarly as above, but the distribution is defined according to (9), where all p(Yi = yi | y1 , . . . , yi?1 ) are defined by P logistic models with a linear part ? 12 (i?1)+ i?1 j=1 yj . Right: running times as a function of the number of labels with a sample size of 200. All the results are averaged over 50 trials. Table 1: Experimental results on four benchmark datasets. For each dataset, we give the number of labels (m) and the size of training and test sets (in parentheses: training/test set). A ?-? symbol indicates that an algorithm did not complete the computations in a reasonable amount of time (several days). In bold: the best results for a given dataset and performance measure. M ETHOD H AMMING MACRO -F MICRO -F F LOSS I NFERENCE TIME [ S ] S CENE : m = 6 (1211/1169) PCC H PCC GFM PCC GFM-T PCC MEUF APPROX . PCC MEUF BR BR MEUF APPROX . BR MEUF 0.1030 0.1341 0.1343 0.1323 0.1323 0.1023 0.1140 0.1140 PCC H PCC GFM PCC GFM-T PCC MEUF APPROX . PCC MEUF BR BR MEUF APPROX . BR MEUF 0.0471 0.0521 0.0521 0.0523 0.0523 0.0468 0.0513 0.0513 0.6673 0.7159 0.7154 0.7131 0.7131 0.6591 0.7048 0.7048 0.6675 0.6915 0.6908 0.6910 0.6910 0.6602 0.6948 0.6948 0.5185 0.5943 0.5948 0.5932 0.5932 0.5223 0.5969 0.5969 0.5779 0.7101 0.7094 0.6977 0.6977 0.5542 0.6468 0.6468 0.4892 0.6006 0.6011 0.6007 0.6007 0.4821 0.5947 0.5947 F LOSS I NFERENCE TIME [ S ] Y EAST: m = 14 (1500/917) 0.969 0.985 1.031 1.406 1.297 1.125 1.579 2.094 E NRON : m = 53 (1123/579) 0.1141 0.1618 0.1619 0.1612 0.1612 0.1049 0.1554 0.1554 H AMMING MACRO -F MICRO -F 0.2046 0.2322 0.2324 0.2295 0.2292 0.1987 0.2248 0.2263 0.3633 0.4034 0.4039 0.4030 0.4034 0.3349 0.4098 0.4096 0.6391 0.6554 0.6553 0.6551 0.6557 0.6299 0.6601 0.6591 0.6160 0.6479 0.6476 0.6469 0.6477 0.6039 0.6527 0.6523 3.704 3.796 3.907 10.000 11.453 0.640 7.110 10.031 M EDIAMILL : m = 101 (30999/12914) 195.061 194.889 196.030 1081.837 6676.145 8.594 850.494 7014.453 0.0304 0.0348 0.0348 0.0350 0.0304 0.3508 - 0.0931 0.1491 0.1499 0.1504 0.1429 0.1917 - 0.5577 0.5849 0.5854 0.5871 0.5623 0.5889 - 0.5429 1405.772 0.5734 1420.663 0.5737 1464.147 0.5740 308582.019 0.5462 207.655 0.5744 258431.125 - The results on four commonly used benchmark datasets6 with known training and test sets are presented in Table 1, which also includes some basic statistics of these datasets. We additionally present results of the binary relevance (BR) approach which trains an independent classifier for each label (we used the same base learner as in PCC). We also apply the MEUF method on marginals delivered by BR. This is the best we can do if only marginals are known. From the results of the F-measure, we can clearly state that all approaches tailored for this measure obtain better results. However, there is no clear winner among them. It seems that in practical applications, the theoretical results concerning the worst-case scenario do not directly apply. Also, the number of parameters to be estimated does not play an important role. However, GFM drastically outperforms MEUF in terms of computational complexity. For the Mediamill dataset, the MEUF algorithm in its exact version did not complete the computations in a reasonable amount of time. The running times for the approximate version are already unacceptably high for this dataset. We also report results for the Hamming loss, macro- and micro-averaging F-measure. 
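Before turning to the discussion of these results, the sampling-based pipeline behind the PCC rows in Table 1 can be sketched end-to-end. The predict_proba interface below is an assumption for illustration, and estimate_P/gfm refer to the sketches given earlier:

import numpy as np

def pcc_f_inference(chain, x, n_samples=200, rng=None):
    """Sample label vectors from a Probabilistic Classifier Chain, Eq. (9),
    then run GFM on the sample to get the F-measure maximizer.
    chain[k].predict_proba(feats) is assumed to return
    p(Y_k = 1 | x, y_1, ..., y_{k-1}); feats concatenates x with the
    previously sampled labels."""
    rng = rng or np.random.default_rng()
    m = len(chain)
    Y = np.zeros((n_samples, m), dtype=int)
    for n in range(n_samples):
        y_prev = []
        for k in range(m):
            feats = np.concatenate([x, y_prev]).reshape(1, -1)
            p1 = float(chain[k].predict_proba(feats))   # k-th biased coin
            y_prev.append(int(rng.random() < p1))
        Y[n] = y_prev
    P, p_zero = estimate_P(Y)          # counting estimator sketched above
    return gfm(P, p_zero)              # optimal prediction under F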
We can see, for example, that approaches appropriate for the Hamming loss obtain the best results regarding this measure. The macro and micro F-measures are presented mainly as a reference. The former is computed by averaging the F-measure label-wise, while the latter concatenates all test examples and computes a single value over all predictions. These two variants of the F-measure are not directly optimized by the algorithms used in the experiment.

⁶These datasets are taken from the MULAN (http://mulan.sourceforge.net/datasets.html) and LibSVM (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html) repositories.

5 Discussion

The GFM algorithm can be considered for maximizing the macro F-measure, for example, in a similar setting as in [10], where a specific Bayesian on-line model is used. In order to maximize the macro F-measure, the authors sample from the graphical model to find an optimal threshold. The GFM algorithm may solve this problem optimally, since, as stated by the authors, the independence of labels is lost after integrating out the model parameters. Theoretically, one may also consider a direct maximization of the micro F-measure with GFM, but the computational burden is rather high in this case.

Interestingly, there are no other MLC algorithms that maximize the F-measure in an instance-wise manner. We also cannot refer to other results already published in the literature, since usually only the micro- and macro-averaged F-measures are reported [20, 11]. This is rather surprising, especially since some closely related measures are often computed in the instance-wise manner in empirical studies. For example, the Jaccard distance (sometimes referred to as accuracy [21]), which differs from the F-measure in an additional term in the denominator, is commonly used in such a way.

The situation is slightly different in structured output prediction, where algorithms for instance-wise maximization of the F-measure do exist. These include, for example, struct SVM [6], SEARN [8], and a specific variant of CRFs [7]. Usually, these algorithms are based on additional assumptions, like label independence in struct SVM. The GFM algorithm can also be easily tailored for maximizing the instance-wise F-measure in structured output prediction, in a similar way as presented above. If the structured output classifier is able to model the joint distribution from which we can easily sample observations, then the use of the algorithm is straightforward. An application of this kind is planned as future work.

Surprisingly, in both papers [8] and [6], experimental results are reported in terms of the micro F-measure, although the algorithms maximize the instance-wise F-measure on the training set. Needless to say, one should not expect such an approach to result in optimal performance for the micro-averaged F-measure. Despite being related to each other, these two measures coincide only in the specific case where Σ_{i=1}^m (y_i + h_i) is constant for all test examples. The discrepancy between these measures strongly depends on the nature of the data and the classifier used. For high variability in Σ_{i=1}^m (y_i + h_i), a significant difference between the values of these two measures is to be expected.

The use of the GFM algorithm in binary classification seems to be superfluous, since in this case the assumption of label independence is rather reasonable. MEUF seems to be the right choice for probabilistic classifiers, unless its application is prevented due to its computational complexity.
Thresholding methods [5] or learning algorithms optimizing the F-measure directly [2, 3, 4] are probably the most appropriate solutions here. 6 Conclusions In contrast to other performance measures commonly used in experimental studies, such as misclassification error rate, squared loss, and AUC, the F-measure has been investigated less thoroughly from a theoretical point of view so far. In this paper, we analyzed the problem of optimal predictive inference from the joint distribution under the F-measure. While partial results were already known from the literature, we completed the picture by presenting the solution for the general case without any distributional assumptions. Our GFM algorithm requires only a polynomial number of parameters of the joint distribution and delivers the exact solution in polynomial time. From a theoretical perspective, GFM should be preferred to existing approaches, which typically perform threshold maximization on marginal probabilities, often relying on the assumption of (conditional) independence of labels. Acknowledgments. Krzysztof Dembczy?nski has started this work during his post-doctoral stay at Philipps-Universit?at Marburg supported by German Research Foundation (DFG) and finalized it at Pozna?n University of Technology under the grant 91-515/DS of the Polish Ministry of Science and Higher Education. Willem Waegeman is supported as a postdoc by the Research Foundation of Flanders (FWO-Vlaanderen). The part of this work has been done during his visit at PhilippsUniversit?at Marburg. Weiwei Cheng and Eyke H?ullermeier are supported by DFG. We also thank the anonymous reviewers for their valuable comments. 8 References [1] C. J. van Rijsbergen. Foundation of evaluation. Journal of Documentation, 30(4):365?373, 1974. [2] David R. Musicant, Vipin Kumar, and Aysel Ozgur. Optimizing F-measure with support vector machines. In FLAIRS-16, 2003, pages 356?360, 2003. [3] Thorsten Joachims. A support vector method for multivariate performance measures. In ICML 2005, pages 377?384, 2005. [4] Martin Jansche. Maximum expected F-measure training of logistic regression models. In HLT/EMNLP 2005, pages 736?743, 2005. [5] Sathiya Keerthi, Vikas Sindhwani, and Olivier Chapelle. An efficient method for gradientbased adaptation of hyperparameters in SVM models. In Advances in Neural Information Processing Systems 19, 2007. [6] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. J. Mach. Learn. Res., 6:1453?1484, 2005. [7] Jun Suzuki, Erik McDermott, and Hideki Isozaki. Training conditional random fields with multivariate evaluation measures. In ACL, pages 217?224, 2006. [8] Hal Daum?e III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 75:297?325, 2009. [9] Rong-En Fan and Chih-Jen Lin. A study on threshold selection for multi-label classification. Technical report, Department of Computer Science, National Taiwan University, 2007. [10] Xinhua Zhang, Thore Graepel, and Ralf Herbrich. Bayesian online learning for multi-label and multi-variate performance measures. In AISTATS 2010, pages 956?963, 2010. [11] James Petterson and Tiberio Caetano. Reverse multi-label learning. In Advances in Neural Information Processing Systems 23, pages 1912?1920, 2010. [12] Martin Jansche. A maximum expected utility framework for binary sequence labeling. In ACL 2007, pages 736?743, 2007. [13] David Lewis. 
Evaluating and optimizing autonomous text classification systems. In SIGIR 1995, pages 246?254, 1995. [14] Juan Jose del Coz, Jorge Diez, and Antonio Bahamonde. Learning nondeterministic classifiers. J. Mach. Learn. Res., 10:2273?2293, 2009. [15] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, 2nd edition. MIT Press, 2001. [16] Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation, 3(9):251?280, 1990. [17] John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML 2001, pages 282?289, 2001. [18] Nadia Ghamrawi and Andrew McCallum. Collective multi-label classification. In CIKM 2005, pages 195?200, 2005. [19] Krzysztof Dembczy?nski, Weiwei Cheng, and Eyke H?ullermeier. Bayes optimal multilabel classification via probabilistic classifier chains. In ICML 2010, pages 279?286, 2010. [20] Piyush Rai and Hal Daum?e III. Multi-label prediction via sparse infinite CCA. In Advances in Neural Information Processing Systems 22, pages 1518?1526, 2009. [21] Matthew R. Boutell, Jiebo Luo, Xipeng Shen, and Christopher M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757?1771, 2004. 9
The Devil and the Network: What Sparsity Implies to Robustness and Memory Sanjay Biswas and Santosh S. Venkatesh Department of Electrical Engineering University of Pennsylvania Philadelphia, PA 19104 Abstract Robustness is a commonly bruited property of neural networks; in particular, a folk theorem in neural computation asserts that neural networks-in contexts with large interconnectivity-continue to function efficiently, albeit with some degradation, in the presence of component damage or loss. A second folk theorem in such contexts asserts that dense interconnectivity between neural elements is a sine qua non for the efficient usage of resources. These premises are formally examined in this communication in a setting that invokes the notion of the "devil" 1 in the network as an agent that produces sparsity by snipping connections. 1 ON REMOVING THE FOLK FROM THE THEOREM Robustness in the presence of component damage is a property that is commonly attributed to neural networks. The content of the following statement embodies this sentiment. Folk Theorem 1: Computation in neural networks is not substantially affected by damage to network components. While such a statement is manifestly not true in general-witness networks with "grandmother cells" where damage to the critical cells fatally impairs the computational ability of the network-there is anecdotal evidence in support of it in 1 Well, maybe an imp. 883 884 Biswas and Venkatesh situations where the network has a more "distributed" flavour with relatively dense interconnectivity of elements and a distributed format for the storage of information. Qualitatively, the phenomenon is akin to holographic modes of storing information where the distributed, non-localised format of information storage carries with it a measure of security against component damage. The flip side to the robust folk theorem is the following observation, robustness notwithstanding: Folk Theorem 2: Dense interconnectivity is a sine qua non for efficient usage of resources; in particular, sparser structures exhibit a degradation in compu tationalcapability. Again, disclaimers have to be thrown in on the applicability of such a statement . In recurrent network architectures, however, this might seem to have some merit. In particular, in associative memory applications, while structural robustness might guarantee that the loss in memory storage capacity with increased interconnection sparsity may not be catastrophic , nonetheless intuitively a drop in capacity with increased sparsity may be expected. This communication represents an effort to mathematically codify these tenets. In the setting we examine we formally introduce sparse network inter connectivity by invoking the notion of a (puckish) devil in the network which severs interconnection links between neurons. Our results here involve some surprising consequencesviewed in the light of the two folk theorems-of sparse interconnectivity to robustness and to memory storage capability. Only the main results are stated here; for extensions and details of proofs we refer the interested reader to Venkatesh (1990) and Biswas and Venkatesh (1990). We denote by IB the set {-1, 1}. For every integer k we denote the set of integers {1, 2, . . . ,k} by [k]. By ordered multiset we mean an ordered collection of elements with repetition of elements allowed, and by k-set we mean an ordered multiset of k elements. All logarithms in the exposition are to base e. 
2 RECURRENT NETWORKS

2.1 INTERCONNECTION GRAPHS

We consider a recurrent network of n formal neurons. The allowed pattern of neural interconnectivity is specified by the edges of a (bipartite) interconnectivity graph, G_n, on vertices [n] × [n]. In particular, the existence of an edge {i, j} in G_n is indicative that the state of neuron j is input to neuron i.² The network is characterised by an n × n matrix of weights, W = [w_ij], where w_ij denotes the (real) weight modulating the state of neuron j at the input of neuron i. If u ∈ IB^n is the current state of the system, an update, u_i → u_i', of the state of neuron i is specified by the linear threshold rule

  u_i' = sgn( Σ_{j : {i,j} ∈ G_n} w_ij u_j ).

²Equivalently, imagine a devil loose with a pair of scissors snipping those interconnections for which {i, j} ∉ G_n. For a complementary discussion of sparse interconnectivity see Komlós and Paturi (1988).

The network dynamics describe trajectories in a state space comprised of the vertices of the n-cube.³ We are interested in an associative memory application where we wish to store a desired set of states (the memories) as fixed points of the network, and with the property that errors in an input representation of a memory are corrected and the memory retrieved.

³As usual, there are Liapunov functions for the system under suitable conditions on the interconnectivity graph and the corresponding weights.

2.2 DOMINATORS

Let u ∈ IB^n be a memory and 0 ≤ ρ < 1 a parameter. Corresponding to the memory u we generate a probe ũ ∈ IB^n by independently specifying the components, ũ_j, of the probe as follows:

  ũ_j = u_j with probability 1 − ρ;  ũ_j = −u_j with probability ρ.    (1)

We call ũ a random probe with parameter ρ.

Definition 2.1 We say that a memory, u, dominates over a radius ρn if, with probability approaching one as n → ∞, the network corrects all errors in a random probe with parameter ρ in one synchronous step. We call ρ the (fractional) dominance radius. We also say that u is stable if it is a 0-dominator.

REMARKS: Note that stable memories are just fixed points of the network. Also, the expected number of errors in a probe is ρn.

2.3 CODES

For given integers m ≥ 1, n ≥ 1, a code, K^n_m, is a collection of ordered multisets of size m from IB^n. We say that an m-set of memories is admissible iff it is in K^n_m.⁴ Thus, a code just specifies which m-sets are allowable as memories. Examples of codes include: the set of all multisets of size m from IB^n; a single multiset of size m from IB^n; all collections of m mutually orthogonal vectors in IB^n; all m-sets of vectors in IB^n in general position. Define two ordered multisets of memories to be equivalent if they are permutations of one another. We define the size of a code, K^n_m, to be the number of distinct equivalence classes of m-sets of memories. We will be interested in codes of relatively large size: log|K^n_m|/n → ∞ as n → ∞. In particular, we require at least an exponential number of choices of (equivalence classes of) admissible m-sets of memories.

⁴We define admissible m-sets of memories in terms of ordered multisets rather than sets so as to obviate certain technical nuisances.

2.4 CAPACITY

For each fixed n and interconnectivity graph, G_n, an algorithm, X, is a prescription which, given an m-set of memories, produces a corresponding set of interconnection weights, w_ij, i ∈ [n], {i, j} ∈ G_n. For m ≥ 1 let A(u¹, …, u^m) be some attribute of m-sets of memories.
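Before listing examples of attributes, the pieces introduced so far, probe generation (1), the threshold dynamics, and the outer-product weights of Eq. (2) below, fit in a few lines of NumPy. This is an illustrative toy with our own parameter choices, not the authors' construction:

import numpy as np

rng = np.random.default_rng(0)
n, m, p_edge, rho = 400, 8, 0.25, 0.1

# m memories in {-1,+1}^n; the devil keeps each link w.p. p_edge
U = rng.choice([-1, 1], size=(m, n))
G = (rng.random((n, n)) < p_edge).astype(int)

# Outer-product weights w_ij = sum_beta u_i^beta u_j^beta, snipped to G_n
W = (U.T @ U) * G

def random_probe(u, rho):
    """Random probe with parameter rho, as in (1): flip each bit w.p. rho."""
    return np.where(rng.random(n) < rho, -u, u)

def update(u):
    """One synchronous step of u_i' = sgn(sum_{j:{i,j} in G_n} w_ij u_j);
    ties sgn(0) broken toward +1 as a convention."""
    return np.where(W @ u >= 0, 1, -1)

recovered = update(random_probe(U[0], rho))   # one-step error correction
print(np.mean(recovered == U[0]))             # fraction of correct bits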
(The following, for instance, are examples of attributes of admissible sets of memories: all the memories are stable in the network generated by X; almost all the memories dominate over a radius ρn.) For given n and m, we choose a random m-set of memories, u¹, …, u^m, from the uniform distribution on K^n_m.

Definition 2.2 Given interconnectivity graphs G_n, codes K^n_m, and algorithm X, a sequence, {C_n}_{n=1}^∞, is a capacity function for the attribute A (or A-capacity for short) if for λ > 0 arbitrarily small:
a) P{A(u¹, …, u^m)} → 1 as n → ∞ whenever m ≤ (1 − λ)C_n;
b) P{A(u¹, …, u^m)} → 0 as n → ∞ whenever m ≥ (1 + λ)C_n.
We also say that C_n is a lower A-capacity if property (a) holds, and that C_n is an upper A-capacity if property (b) holds.

For m ≥ 1 let u¹, …, u^m ∈ IB^n be an m-set of memories chosen from a code K^n_m. The outer-product algorithm specifies the interconnection weights, w_ij, according to the following rule: for i ∈ [n], {i, j} ∈ G_n,

  w_ij = Σ_{β=1}^m u_i^β u_j^β.    (2)

In general, if the interconnectivity graph, G_n, is symmetric then, under a suitable mode of operation, there is a Liapunov function for the network specified by the outer-product algorithm. Given graphs G_n, codes K^n_m, and the outer-product algorithm, for fixed 0 ≤ ρ < 1/2 we are interested in the attribute D_ρ that each of the m memories dominates over a radius ρn.

3 RANDOM GRAPHS

We investigate the effect of a random loss of neural interconnections in a recurrent network of n neurons by considering a random bipartite interconnectivity graph RG_n on vertices [n] × [n] with P{{i, j} ∈ RG_n} = p for all i ∈ [n], j ∈ [n], and with these probabilities being mutually independent. The interconnection probability p is called the sparsity parameter and may depend on n. The system described above is formally equivalent to beginning with a fully-interconnected network of neurons with specified interconnection weights w_ij, and then invoking a devil which randomly severs interconnection links, independently retaining each interconnection weight w_ij with probability p, and severing it (replacing it with a zero weight) with probability q = 1 − p.

Let CK^n_m denote the complete code of all choices of ordered multisets of size m from IB^n.

Theorem 3.1 Let 0 ≤ ρ < 1/2 be a fixed dominance radius, and let the sparsity parameter p satisfy pn² → ∞ as n → ∞. Then (1 − 2ρ)² pn / (2 log pn²) is a D_ρ-capacity for random interconnectivity graphs RG_n, complete codes CK^n_m, and the outer-product algorithm.

REMARKS: The above result graphically validates Folk Theorem 1 on the fault-tolerant nature of the network; specifically, the network exhibits a graceful degradation in storage capacity as the loss in interconnections increases. Catastrophic failure occurs only when p is smaller than log n/n: each neuron need retain only on the order of log n links of a total of n possible links with other neurons for useful associative properties to emerge.

4 BLOCK GRAPHS

One of the simplest (and most regular) forms of sparsity that a favourably disposed devil might enjoin is block sparsity, where the neurons are partitioned into disjoint subsets of neurons with full interconnectivity within each subset and no neural interconnections between subsets. The weight matrix in this case takes on a block-diagonal form, and the interconnectivity graph is composed of a set of disjoint, complete bipartite sub-graphs. More formally, let 1 ≤ b ≤ n be a positive integer, and let {I₁, …
,!n/b} partition [n] such that each subset of indices, lTc, k = 1, ... , nib, has size IITcI = b. 5 We call each ITc a block and b the block size. We specify the edges of the (bipartite) block interconnectivity graph BG n by {i, j} E BG n iff i and j lie in a common block. = Theorem 4.1 Let the block size b be such that b O(n) as n -+ 00, and let be a fixed dominance radius. Then (1- 2p)2b/210gbn is a Vp-capacity for block interconnectivity graphs BGn , complete codes CK'::, and the outer-product algorithm. o ~ p < 1/2 Corollary 4.2 Under the conditions of theorem is b/210g bn. 4.1 the fixed point memory capacity Corollary 4.3 For a fully-interconnected graph, complete codes CK'::, and the outer-product algorithm, the fixed point memory capacity is n/410g n. Corollary 4.3 is the main result shown by McEliece, Posner, Rodemich, and Venkatesh (1987). Theorem 4.1 extends the result and shows (formally validating the intuition espoused in Folk Theorem 2) that increased sparsity causes a loss in capacity if the code is complete, i.e., all choices of memories are considered admissible. It is possible, however, to design codes to take advantage of the sparse interconnectivity structure, rather at odds with the Folk Theorem. SHere, as in the rest of the paper, we ignore details with regard to integer rounding. 887 888 Biswas and Venkatesh Without loss of generality let us assume that block h consists of the first b indices, [b], block 12 the next b indices, [2b] - [b], and so on, with the last block Inlb consisting of the last b indices, [n] - [n - b). We can then partition any vector u E rnn as u =( :~ ) (3) Unlb = where for k 1, ... , nib, Uk is the vector of components corresponding to block Ik. Mn./b For M ~ 1 we form the block code BK,n as follows: to each ordered multiset of M vectors, u 1 , ... , u M from rn n , we associate a unique ordered multiset in BK:r;;n/b by lexicographically ordering all Mnlb vectors of the form 01 n./b u nlb Thus, we obtain an admissible set of Mnl b memories from any ordered multiset of M vectors in rnn by "mixing" the blocks of the vectors. We call each M-set of vectors, u 1 , ... , u M E rn n , the generating vectors for the corresponding admissible .. Mn./b set of memones m BK,n . EXAMPLE: Consider a case with n = 4, block size b = 2, and M = 2 generating vectors. To any 2-set of generating vectors there corresponds a unique 4(=Mn l b)_set in the block code as follows : u 21 u 11 u 21 u 12 ul ul 1 2 u 21 u 22 u 21 u2 u2 u 22 1---+ u31 u 41 u32 u 24 u 31 u 41 u 32 u 42 u 31 u1 4 u 32 u 42 ~ p < 1/2 be a fixed dominance radius. Then we have the following capacity estimates for block interconnectivity graphs BGn , block codes BKr;: , and the outer-product algorithm: Theorem 4.4 Let 0 a) If the block size b satisfies n log log bnlb log bn is 1) p-capacity [ b) Define for any v (1 - 2P)2 b] nib 2 log bn --+ 0 as n --+ 00 then the The Devil and the Network If the block size b satisfies b/logn -- 00 and blogbn/loglogbn n -- 00, then Cn(v) is a lower 1J p -capacity for any choice of v Cn(v) is an upper 1Jp-capacity for any v> 3/2. 
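Before the corollaries, the block-code construction above, mixing the blocks of M generating vectors into M^{n/b} admissible memories, is easy to make concrete. A sketch with our own helper name:

import numpy as np
from itertools import product

def block_code(G, b):
    """Form the M**(n/b) block-code memories from M generating vectors.
    G: (M, n) array of +/-1 generating vectors; b: block size dividing n.
    Each memory takes one generating vector's block per block position."""
    M, n = G.shape
    blocks = [G[:, k*b:(k+1)*b] for k in range(n // b)]
    return np.array([
        np.concatenate([blocks[k][beta] for k, beta in enumerate(choice)])
        for choice in product(range(M), repeat=n // b)
    ])

G = np.array([[+1, -1, +1, +1],
              [-1, -1, +1, -1]])   # the n = 4, b = 2, M = 2 example
print(block_code(G, b=2))          # 4 memories: all block mixtures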
Corollary 4.5 If, for fixed t ~ 1, we have b theorem 4.4, the 1J p -capacity is = nit, = O(n) as < 3/2 and then, under the conditions of t t(~)t (1- 2p) 2t t- 4- logn Corollary 4.6 For any fixed dominance radius 0 ~ p < 1/2, and for any T < 1, a constant c > 0 and a code of size n cn2 - r ) can be found such that it is possible (2 to achieve lower 1J p -capacities which are n (2nr) in recurrent neural networks with interconnectivity graphs of degree (n 1 - T ) . e If the number of blocks is kept fixed as n grows (i.e., the block size grows linearly with n) then capacities polynomial in n are attained. If the number of blocks increases with n (i.e., the block size grows sub-linearly with n) then super-polynomial capacities are attained. Furthermore, we have the surprising result rather at odds with Folk Theorem 2 that very large storage capacities can be obtained at the expense of code size (while still retaining large code sizes) in increasingly sparse networks. REMARKS : Acknow ledgements The support of research grants from E. I. Dupont de Nemours, Inc . and the Air Force Office of Scientific Research (grant number AFOSR 89-0523) is gratefully acknowledged. References Biswas, S. and S. S. Venkatesh (1990), "Codes, sparsity, and capacity in neural associative memory," submitted for publication. Koml6s, J. and R. Paturi (1988), "Effects of connectivity in associative memory models," Technical Report CS88-131, University of California, San Diego, 1988. McEliece, R. J., E. C. Posner, E. R. Rodemich, and S. S. Venkatesh (1987), "The capacity of the Hopfield associative memory," IEEE Trans. Inform. Theory, vol. IT33, pp. 461-482. Venkatesh, S. S. (1990), "Robustness in neural computation: random graphs and sparsity," to appear IEEE Trans. Inform. Theory. 889
H OGWILD !: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent Feng Niu [email protected] Benjamin Recht [email protected] Christopher R?e [email protected] Stephen J. Wright [email protected] Computer Sciences Department University of Wisconsin-Madison Madison, WI 53706 Abstract Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve stateof-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performancedestroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called H OGWILD ! which allows processors access to shared memory with the possibility of overwriting each other?s work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then H OGWILD ! achieves a nearly optimal rate of convergence. We demonstrate experimentally that H OGWILD ! outperforms alternative schemes that use locking by an order of magnitude. 1 Introduction With its small memory footprint, robustness against noise, and rapid learning rates, Stochastic Gradient Descent (SGD) has proved to be well suited to data-intensive machine learning tasks [3, 5, 24]. However, SGD?s scalability is limited by its inherently sequential nature; it is difficult to parallelize. Nevertheless, the recent emergence of inexpensive multicore processors and mammoth, web-scale data sets has motivated researchers to develop several clever parallelization schemes for SGD [4, 10, 12, 16, 27]. As many large data sets are currently pre-processed in a MapReduce-like parallel-processing framework, much of the recent work on parallel SGD has focused naturally on MapReduce implementations. MapReduce is a powerful tool developed at Google for extracting information from huge logs (e.g., ?find all the urls from a 100TB of Web data?) that was designed to ensure fault tolerance and to simplify the maintenance and programming of large clusters of machines [9]. But MapReduce is not ideally suited for online, numerically intensive data analysis. Iterative computation is difficult to express in MapReduce, and the overhead to ensure fault tolerance can result in dismal throughput. Indeed, even Google researchers themselves suggest that other systems, for example Dremel, are more appropriate than MapReduce for data analysis tasks [20]. For some data sets, the sheer size of the data dictates that one use a cluster of machines. However, there are a host of problems in which, after appropriate preprocessing, the data necessary for statistical analysis may consist of a few terabytes or less. For such problems, one can use a single inexpensive work station as opposed to a hundred thousand dollar cluster. Multicore systems have significant performance advantages, including (1) low latency and high throughput shared main memory (a processor in such a system can write and read the shared physical memory at over 12GB/s with latency in the tens of nanoseconds); and (2) high bandwidth off multiple disks (a thousand-dollar RAID 1 can pump data into main memory at over 1GB/s). In contrast, a typical MapReduce setup will read incoming data at rates less than tens of MB/s due to frequent checkpointing for fault tolerance. 
The high rates achievable by multicore systems move the bottlenecks in parallel computation to synchronization (or locking) amongst the processors [2, 13]. Thus, to enable scalable data analysis on a multicore machine, any performant solution must minimize the overhead of locking. In this work, we propose a simple strategy for eliminating the overhead associated with locking: run SGD in parallel without locks, a strategy that we call HOGWILD!. In HOGWILD!, processors are allowed equal access to shared memory and are able to update individual components of memory at will. Such a lock-free scheme might appear doomed to fail as processors could overwrite each other's progress. However, when the data access is sparse, meaning that individual SGD steps only modify a small part of the decision variable, we show that memory overwrites are rare and that they introduce barely any error into the computation when they do occur. We demonstrate both theoretically and experimentally a near-linear speedup with the number of processors on commonly occurring sparse learning problems.

In Section 2, we formalize a notion of sparsity that is sufficient to guarantee such a speedup and provide canonical examples of sparse machine learning problems in classification, collaborative filtering, and graph cuts. Our notion of sparsity allows us to provide theoretical guarantees of linear speedups in Section 4. As a by-product of our analysis, we also derive rates of convergence for algorithms with constant stepsizes. We demonstrate that robust 1/k convergence rates are possible with constant-stepsize schemes that implement an exponential back-off in the constant over time. This result is interesting in and of itself and shows that one need not settle for 1/√k rates to ensure robustness in SGD algorithms.

In practice, we find that the computational performance of a lock-free procedure exceeds even our theoretical guarantees. We experimentally compare lock-free SGD to several recently proposed methods. We show that all methods that propose memory locking are significantly slower than their respective lock-free counterparts on a variety of machine learning applications.

2 Sparse Separable Cost Functions

Our goal throughout is to minimize a function f : X ⊆ R^n → R of the form

  f(x) = Σ_{e ∈ E} f_e(x_e).    (1)

Here e denotes a small subset of {1, …, n} and x_e denotes the values of the vector x on the coordinates indexed by e. The key observation that underlies our lock-free approach is that the natural cost functions associated with many machine learning problems of interest are sparse in the sense that |E| and n are both very large but each individual f_e acts only on a very small number of components of x. That is, each subvector x_e contains just a few components of x.

The cost function (1) induces a hypergraph G = (V, E) whose nodes are the individual components of x. Each subvector x_e induces an edge e ∈ E in the graph consisting of some subset of nodes. A few examples illustrate this concept.

Sparse SVM. Suppose our goal is to fit a support vector machine to some data pairs E = {(z_1, y_1), …, (z_|E|, y_|E|)} where z ∈ R^n and y is a label for each (z, y) ∈ E:

  minimize_x Σ_{α ∈ E} max(1 − y_α x^T z_α, 0) + λ‖x‖²₂,    (2)

and we know a priori that the examples z_α are very sparse (see for example [14]). To write this cost function in the form of (1), let e_α denote the components which are non-zero in z_α and let d_u denote the number of training examples which are non-zero in component u (u = 1, 2, …, n).
Then we can rewrite (2) as

  minimize_x Σ_{α ∈ E} ( max(1 − y_α x^T z_α, 0) + λ Σ_{u ∈ e_α} x_u² / d_u ).    (3)

Each term in the sum (3) depends only on the components of x indexed by the set e_α.

Matrix Completion. In the matrix completion problem, we are provided entries of a low-rank, n_r × n_c matrix Z from the index set E. Such problems arise in collaborative filtering, Euclidean distance estimation, and clustering [8, 17, 23]. Our goal is to reconstruct Z from this sparse sampling of data. A popular heuristic recovers the estimate of Z as a product LR^T of factors obtained from the following minimization:

  minimize_(L,R) Σ_{(u,v) ∈ E} (L_u R_v^T − Z_uv)² + (μ/2)‖L‖²_F + (μ/2)‖R‖²_F,    (4)

where L is n_r × r, R is n_c × r, and L_u (resp. R_v) denotes the u-th (resp. v-th) row of L (resp. R) [17, 23, 25]. To put this problem in sparse form, i.e., as (1), we write (4) as

  minimize_(L,R) Σ_{(u,v) ∈ E} { (L_u R_v^T − Z_uv)² + μ/(2|E_u|) ‖L_u‖²_F + μ/(2|E^v|) ‖R_v‖²_F },

where E_u = {v : (u, v) ∈ E} and E^v = {u : (u, v) ∈ E}.
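A sketch of the per-term subgradient for the sparse SVM in the separable form (3); it touches only the non-zero coordinates e_α of the sampled example, which is the access pattern the rest of the paper exploits. The function name and array layout are our own:

import numpy as np

def svm_term_subgrad(x, z, y, e, d, lam):
    """Subgradient of one term of (3): hinge loss on a sparse example plus
    the d_u-scaled regularizer, restricted to the support e of z.
    x: dense iterate; z: values of the example on e; e: index array;
    d: d[u] counts examples touching feature u; lam: lambda in (3)."""
    margin = 1.0 - y * (x[e] @ z)
    g = 2.0 * lam * x[e] / d[e]            # gradient of the regularizer part
    if margin > 0:                         # hinge is active: add -y * z
        g -= y * z
    return g                               # same shape as e; updates x[e]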
Each processor can read x, and can 3 Algorithm 1 H OGWILD ! update for individual processors 1: loop 2: Sample e uniformly at random from E 3: Read current state xe and evaluate Ge (xe ) 4: for v 2 e do xv xv Gev (xe ) 5: end loop contribute an update vector to x. The vector x is stored in shared memory, and we assume that the componentwise addition operation is atomic, that is xv xv + a can be performed atomically by any processor for a scalar a and v 2 {1, . . . , n}. This operation does not require a separate locking structure on most modern hardware: such an operation is a single atomic instruction on GPUs and DSPs, and it can be implemented via a compare-and-exchange operation on a general purpose multicore processor like the Intel Nehalem. In contrast, the operation of updating many components at once requires an auxiliary locking structure. Each processor then follows the procedure in Algorithm 1. Let Ge (xe ) denote a gradient or subgradient of the function fe multiplied by |E|. That is, |E| 1 Ge (xe ) 2 @fe (xe ). Since it is clear by notation, we often write Ge (x), dropping the notation that identifies the affected indices of x. Note that as a consequence of the uniform random sampling of e from E, we have E[Ge (xe )] 2 @f (x) . In Algorithm 1, each processor samples an term e 2 E uniformly at random, computes the gradient of fe at xe , and then writes xv xv Gev (xe ), for each v 2 e. (7) We assume that the stepsize is a fixed constant. Note that the processor modifies only the variables indexed by e, leaving all of the components in ?e (i.e., not in e) alone. Even though the processors have no knowledge as to whether any of the other processors have modified x, we define xj to be the state of the decision variable x after j updates have been performed1 . Since two processors can write to x at the same time, we need to be a bit careful with this definition, but we simply break ties at random. Note that xj is generally updated with a stale gradient, which is based on a value of x read many clock cycles earlier. We use xk(j) to denote the value of the decision variable used to compute the gradient or subgradient that yields the state xj . In what follows, we provide conditions under which this asynchronous, incremental gradient algorithm converges. Moreover, we show that if the hypergraph induced by f is isotropic and sparse, then this algorithm converges in nearly the same number of gradient steps as its serial counterpart. Since we are running in parallel and without locks, this means that we get a nearly linear speedup in terms of the number of processors. 4 Fast Rates for Lock-Free Parallelism To state our theoretical results, we must describe several quantities that important in the analysis of our parallel stochastic gradient descent scheme. We follow the notation and assumptions of Nemirovski et al [21]. To simplify the analysis, we will assume that each fe in (1) is a convex function. We assume Lipschitz continuous differentiability of f with Lipschitz constant L: krf (x0 ) rf (x)k ? Lkx0 xk, 8 x0 , x 2 X. We also assume f is strongly convex with modulus c. By this we mean that c f (x0 ) f (x) + (x0 x)T rf (x) + kx0 xk2 , for all x0 , x 2 X. 2 (8) (9) 1 Our notation overloads subscripts of x. For clarity throughout, subscripts i, j, and k refer to iteration counts, and v and e refer to components or subsets of components. 4 When f is strongly convex, there exists a unique minimizer x? and we denote f? = f (x? ). 
We additionally assume that there exists a constant M such that kGe (xe )k2 ? M almost surely for all x 2 X . (10) We assume throughout that c < 1. (Indeed, when c > 1, even the ordinary gradient descent algorithms will diverge.) Our main results are summarized by the following Proposition 4.1 Suppose in Algorithm 1 that the lag between when a gradient is computed and when it is used in step j ? namely, j k(j) ? is always less than or equal to ? , and is defined to be #?c = . (11) 2LM 2 1 + 6?? + 4? 2 ? 1/2 for some ? > 0 and # 2 (0, 1). Define D0 := kx0 k x? k2 and let k be an integer satisfying 2LM 2 1 + 6? ? + 6? 2 ? c2 #? Then after k updates of x, we have E[f (xk ) 1/2 log(LD0 /?) . (12) f? ] ? ?. A proof of Proposition 4.1 is provided in the full version of this paper [22]. In the case that ? = 0, this reduces to precisely the rate achieved by the serial SGD protocol. A similar rate is achieved if ? = o(n1/4 ) as ? and are typically both o(1/n). In our setting, ? is proportional to the number of processors, and hence as long as the number of processors is less n1/4 , we get nearly the same recursion as in the linear rate. Note that up to the log(1/?) term in (12), our analysis nearly provides a 1/k rate of convergence for a constant stepsize SGD scheme, both in the serial and parallel cases. Moreover, note that our rate of convergence is fairly robust to error in the value of c; we pay linearly for our underestimate of the curvature of f . In contrast, Nemirovski et al demonstrate that when the stepsize is inversely proportional to the iteration counter, an overestimate of c can result in exponential slow-down [21]! Robust 1/k rates. We note that a 1/k can be achieved by a slightly more complicated protocol where the stepsize is slowly decreased after a large number of iterations. Suppose we run Algorithm 1 for a fixed number of gradient updates K with stepsize < 1/c. Then, we wait for the 1 threads to coalesce, reduce by a constant factor 2 (0, 1), and run for K iterations. This scheme results in a 1/k rate of convergence with the only synchronization overhead occurring at the end of each ?round? or ?epoch? of iteration. In some sense, this piecewise constant stepsize protocol approximates a 1/k diminishing stepsize. The main difference with our approach from previous analysis is that our stepsizes are always less than 1/c in contrast to beginning with very large stepsizes. Always working with small stepsizes allows us to avoid the possible exponential slow-downs that occur with standard diminishing stepsize schemes. 5 Related Work Most schemes for parallelizing stochastic gradient descent are variants of ideas presented in the seminal text by Bertsekas and Tsitsiklis [4]. For instance, in this text, they describe using stale gradient updates computed across many computers in a master-worker setting and describe settings where different processors control access to particular components of the decision variable. They prove global convergence of these approaches, but do not provide rates of convergence (This is one way in which our work extends this prior research). These authors also show that SGD convergence is robust to a variety of models of delay in computation and communication in [26]. Recently, a variety of parallel schemes have been proposed in a variety of contexts. In MapReduce settings, Zinkevich et al proposed running many instances of stochastic gradient descent on different machines and averaging their output [27]. 
Though the authors claim this method can reduce both the variance of their estimate and the overall bias, we show in our experiments that for the sorts of problems we are concerned with, this method does not outperform a serial scheme.

Schemes involving the averaging of gradients via a distributed protocol have also been proposed by several authors [10, 12]. While these methods do achieve linear speedups, they are difficult to implement efficiently on multicore machines because they require massive communication overhead. Distributed averaging of gradients requires message passing between the cores, and the cores need to synchronize frequently in order to compute reasonable gradient averages.

type | data set | size (GB) | ρ      | Δ      | HOGWILD! time (s) | train error | test error | RR time (s) | train error | test error
SVM  | RCV1     | 0.9       | 0.44   | 1.0    | 9.5               | 0.297       | 0.339      | 61.8        | 0.297       | 0.339
MC   | Netflix  | 1.5       | 2.5e-3 | 2.3e-3 | 301.0             | 0.754       | 0.928      | 2569.1      | 0.754       | 0.927
MC   | KDD      | 3.9       | 3.0e-3 | 1.8e-3 | 877.5             | 19.5        | 22.6       | 7139.0      | 19.5        | 22.6
MC   | Jumbo    | 30        | 2.6e-7 | 1.4e-7 | 9453.5            | 0.031       | 0.013      | N/A         | N/A         | N/A
Cuts | DBLife   | 3e-3      | 8.6e-3 | 4.3e-3 | 230.0             | 10.6        | N/A        | 413.5       | 10.5        | N/A
Cuts | Abdomen  | 18        | 9.2e-4 | 9.2e-4 | 1181.4            | 3.99        | N/A        | 7467.25     | 3.99        | N/A

Figure 1: Comparison of wall clock time across HOGWILD! and RR. Each algorithm is run for 20 epochs and parallelized over 10 cores.

The work most closely related to our own is a round-robin scheme proposed by Langford et al. [16]. In this scheme, the processors are ordered and each updates the decision variable in turn. When the time required to lock memory for writing is dwarfed by the gradient computation time, this method results in a linear speedup, as the errors induced by the lag in the gradients are not too severe. However, we note that in many applications of interest in machine learning, gradient computation time is incredibly fast, and we now demonstrate that in a variety of applications HOGWILD! outperforms such a round-robin approach by an order of magnitude.

6 Experiments

We ran numerical experiments on a variety of machine learning tasks and compared against the round-robin approach proposed in [16] and implemented in Vowpal Wabbit [15]. We refer to this approach as RR. To be as fair as possible to prior art, we hand-coded RR to be nearly identical to the HOGWILD! approach, with the only difference being the schedule for how the gradients are updated. One notable change in RR from the Vowpal Wabbit software release is that we optimized RR's locking and signaling mechanisms to use spinlocks and busy waits (there is no need for generic signaling to implement round-robin). We verified that this optimization yields nearly an order of magnitude improvement in wall clock time for all problems that we discuss.

We also compare against a model which we call AIG, which can be seen as a middle ground between RR and HOGWILD!. AIG runs a protocol identical to HOGWILD! except that it locks all of the variables in e before and after the for loop on line 4 of Algorithm 1 (see the sketch below). Our experiments demonstrate that even this fine-grained locking induces undesirable slow-downs.
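A hedged sketch of the AIG variant, for contrast with the lock-free update above: the write is identical to HOGWILD!'s, but every coordinate of e is locked around the inner loop. Acquiring the locks in index order is one plausible way to avoid deadlock; the actual C++ implementation may differ in detail.

```python
import threading

def make_locks(n):
    """One lock per coordinate of the decision variable."""
    return [threading.Lock() for _ in range(n)]

def aig_update(x, e, g, gamma, locks):
    """AIG: lock all variables in e before the for loop on line 4 of
    Algorithm 1, apply the same update as HOGWILD!, then release.
    e is assumed to be a sorted tuple of coordinate indices."""
    for v in e:
        locks[v].acquire()
    try:
        for i, v in enumerate(e):
            x[v] -= gamma * g[i]
    finally:
        for v in reversed(e):
            locks[v].release()
```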
All of the experiments were coded in C++ and run on an identical configuration: a machine with dual Xeon X650 CPUs (6 cores each, with 2-way hyperthreading), 24GB of RAM, and a software RAID-0 over seven 2TB Seagate Constellation 7200RPM disks. The kernel is Linux 2.6.18-128. We never use more than 2GB of memory. All training data is stored on the seven-disk RAID-0. We implemented a custom file scanner to demonstrate the speed of reading data sets off disk into a small shared memory. This allows us to read data from the RAID at a rate of nearly 1GB/s.

All of the experiments use a constant stepsize γ which is diminished by a factor β at the end of each pass over the training set. We run all experiments for 20 such passes, even though fewer epochs are often sufficient for convergence. We show results for the largest value of the learning rate γ which converges, and we use β = 0.9 throughout. We note that the results look the same across a large range of (γ, β) pairs and that all three parallelization schemes achieve train and test errors within a few percent of one another. We present experiments on the classes of problems described in Section 2.

Sparse SVM. We tested our sparse SVM implementation on the Reuters RCV1 data set on the binary text classification task CCAT [19]. There are 804,414 examples split into 23,149 training and 781,265 test examples, and there are 47,236 features. We swapped the training set and the test set for our experiments to demonstrate the scalability of the parallel multicore algorithms.

Figure 2: Total CPU time versus number of threads for (a) RCV1, (b) Abdomen, and (c) DBLife. (Each panel plots the speedup of HOGWILD!, AIG, and RR against the number of splits.)

In this example, ρ = 0.44 and Δ = 1.0, large values that suggest a bad case for HOGWILD!. Nevertheless, in Figure 2(a), we see that HOGWILD! is able to achieve a factor of 3 speedup while RR gets worse as more threads are added. Indeed, for fast gradients, RR is worse than a serial implementation.

For this data set, we also implemented the approach in [27], which runs multiple SGD instances in parallel and averages their output. In Figure 3(b), we display the train error of the ensemble average across parallel threads at the end of each pass over the data. We note that the threads only communicate at the very end of the computation, but we want to demonstrate the effect of parallelization on train error. Each of the parallel threads touches every data example in each pass; thus, the 10-thread run does 10x more gradient computations than the serial version. Here, the error is the same whether we run in serial or with ten instances. We conclude that on this problem there is no advantage to running in parallel with this averaging scheme.

Matrix Completion. We ran HOGWILD! on three very large matrix completion problems. The Netflix Prize data set has 17,770 rows, 480,189 columns, and 100,198,805 revealed entries. The KDD Cup 2011 (task 2) data set has 624,961 rows, 1,000,990 columns, and 252,800,275 revealed entries. We also synthesized a low-rank matrix with rank 10, 1e7 rows and columns, and 2e9 revealed entries; we refer to this instance as "Jumbo." In this synthetic example, ρ and Δ are both around 1e-7. These values contrast sharply with the real data sets, where ρ and Δ are both on the order of 1e-3.

Figure 3(a) shows the speedups for these three data sets using HOGWILD!. Note that the Jumbo and KDD examples do not fit in our allotted memory, but even when reading data off disk, HOGWILD! attains a near linear speedup. The Jumbo problem takes just over two and a half hours to complete. Speedup graphs like those in Figure 2 comparing HOGWILD! to AIG and RR on the three matrix completion experiments are provided in the full version of this paper. Similar to the other experiments with quickly computable gradients, RR does not show any improvement over a serial approach. In fact, with 10 threads, RR is 12% slower than serial on KDD Cup and 62% slower on Netflix. We did not allow RR to run to completion on Jumbo because it would have taken several hours.
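To see why matrix completion fits the sparse, lock-free framework, consider one stochastic gradient step for a squared-error low-rank model: a revealed entry (u, v) touches only row u of the left factor and row v of the right factor, so two processors collide only when their sampled entries share a row or column. This sketch omits the regularization terms of the objective in Section 2, so treat it as illustrative rather than as our exact implementation.

```python
import numpy as np

def mc_sgd_step(L, R, u, v, z_uv, gamma):
    """One HOGWILD!-style step for matrix completion with factors
    L (num_rows x rank) and R (num_cols x rank), fitting L[u] @ R[v]
    to the revealed entry z_uv. Only rows L[u] and R[v] are written."""
    err = L[u] @ R[v] - z_uv      # residual of the rank-k model at (u, v)
    grad_Lu = err * R[v]          # compute both gradients before writing
    grad_Rv = err * L[u]
    L[u] -= gamma * grad_Lu
    R[v] -= gamma * grad_Rv
```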
Graph Cuts. Our first cut problem was a standard image segmentation by graph cuts, a problem popular in computer vision. We computed a two-way cut of the abdomen data set [1]. This data set consists of a volumetric scan of a human abdomen, and the goal is to segment the image into organs. The image has 512 × 512 × 551 voxels, and the associated graph is 6-connected with maximum capacity 10. Both ρ and Δ are equal to 9.2e-4. We see that HOGWILD! speeds up the cut problem by more than a factor of 4 with 10 threads, while RR is twice as slow as the serial version.

Our second graph cut problem sought a multi-way cut to determine entity recognition in a large database of web data. We created a data set of clean entity lists from the DBLife website and of entity mentions from the DBLife Web Crawl [11]. The data set consists of 18,167 entities and 180,110 mentions, with similarities given by string similarity. In this problem each stochastic gradient step must compute a Euclidean projection onto a simplex of dimension 18,167 (a sketch of this projection appears at the end of this section). As a result, the individual stochastic gradient steps are quite slow. Nonetheless, the problem is still very sparse, with ρ = 8.6e-3 and Δ = 4.2e-3. Consequently, in Figure 2, we see that HOGWILD! achieves a ninefold speedup with 10 cores. Since the gradients are slow, RR is able to achieve a parallel speedup for this problem; however, the speedup with ten processors is only by a factor of 5. That is, even in this case where the gradient computations are very slow, HOGWILD! outperforms a round-robin scheme.

Figure 3: (a) Speedup for the three matrix completion problems (Jumbo, Netflix, and KDD) with HOGWILD!. In all three cases, massive speedup is achieved via parallelism. (b) The training error at the end of each epoch of SVM training on RCV1 for the averaging algorithm [27], run with 1, 3, and 10 threads. (c) Speedup achieved over the serial method for various levels of gradient delay (measured in nanoseconds).

What if the gradients are slow? As we saw with the DBLife data set, the RR method does achieve a nearly linear speedup when the gradient computation is slow. This raises the question of whether RR ever outperforms HOGWILD! for slow gradients. To answer this question, we ran the RCV1 experiment again and introduced an artificial delay at the end of each gradient computation to simulate a slow gradient. In Figure 3(c), we plot the wall clock time required to solve the SVM problem as we vary the delay for both the RR and HOGWILD! approaches. Notice that HOGWILD! achieves a greater decrease in computation time across the board. The speedups for both methods are the same when the delay is a few milliseconds. That is, if a gradient takes longer than one millisecond to compute, RR is on par with HOGWILD! (but not better). At this rate, one can only compute about a million stochastic gradients per hour, so the gradient computations must be very labor intensive in order for the RR method to be competitive.
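The projection step mentioned for the DBLife problem can be computed with the standard sort-based algorithm for Euclidean projection onto the probability simplex. This is a generic textbook routine, sketched here for concreteness; it is not necessarily the implementation used in the experiments.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto {x : x >= 0, sum(x) = 1}.
    Sorting dominates the cost, O(d log d); for the DBLife problem
    d = 18,167, which is why these gradient steps are slow."""
    d = y.size
    u = np.sort(y)[::-1]                       # sort in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, d + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)       # shift that makes the result sum to 1
    return np.maximum(y + theta, 0.0)
```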
7 Conclusions

Our proposed HOGWILD! algorithm takes advantage of sparsity in machine learning problems to enable near linear speedups on a variety of applications. Empirically, our implementations outperform our theoretical analysis. For instance, ρ is quite large in the RCV1 SVM problem, yet we still obtain significant speedups. Moreover, our algorithms allow parallel speedup even when the gradients are computationally intensive.

Our HOGWILD! schemes can be generalized to problems where some of the variables occur quite frequently as well. We could choose to not update certain variables that would be in particularly high contention. For instance, we might want to add a bias term to our Support Vector Machine, and we could still run a HOGWILD! scheme, updating the bias only every thousand iterations or so.

For future work, it would be of interest to enumerate structures that allow for parallel gradient computations with no collisions at all. That is, it may be possible to bias the SGD iterations to completely avoid memory contention between processors. An investigation into such biased orderings would enable even faster computation of machine learning problems.

Acknowledgements

BR is generously supported by ONR award N00014-11-1-0723 and NSF award CCF-1139953. CR is generously supported by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181, the NSF CAREER award under IIS-1054009, ONR award N000141210041, and gifts or research awards from Google, LogicBlox, and Johnson Controls, Inc. SJW is generously supported by NSF awards DMS-0914524 and DMS-0906818 and DOE award DE-SC0002283. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the authors and do not necessarily reflect the views of any of the above sponsors, including DARPA, AFRL, or the US government.

References

[1] Max-flow problem instances in vision. From http://vision.csd.uwo.ca/data/maxflow/.
[2] K. Asanovic et al. The landscape of parallel computing research: A view from Berkeley. Technical Report UCB/EECS-2006-183, Electrical Engineering and Computer Sciences, University of California at Berkeley, 2006.
[3] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 2nd edition, 1999.
[4] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont, MA, 1997.
[5] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, 2008.
[6] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(9):1124-1137, 2004.
[7] G. Călinescu, H. Karloff, and Y. Rabani. An improved approximation algorithm for multiway cut. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 48-52, 1998.
[8] E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[9] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107-113, 2008.
[10] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Technical report, Microsoft Research, 2011.
[11] A. Doan. http://dblife.cs.wisc.edu.
[12] J. Duchi, A. Agarwal, and M. J. Wainwright. Distributed dual averaging in networks. In Advances in Neural Information Processing Systems, 2010.
[13] S. H. Fuller and L. I. Millett, editors. The Future of Computing Performance: Game Over or Next Level?
Committee on Sustaining Growth in Computing Performance. The National Academies Press, Washington, D.C., 2011.
[14] T. Joachims. Training linear SVMs in linear time. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), 2006.
[15] J. Langford. https://github.com/JohnLangford/vowpal_wabbit/wiki.
[16] J. Langford, A. J. Smola, and M. Zinkevich. Slow learners are fast. In Advances in Neural Information Processing Systems, 2009.
[17] J. Lee, B. Recht, N. Srebro, R. R. Salakhutdinov, and J. A. Tropp. Practical large-scale optimization for max-norm regularization. In Advances in Neural Information Processing Systems, 2010.
[18] T. Lee, Z. Wang, H. Wang, and S. Hwang. Web scale entity resolution using relational evidence. Technical report, Microsoft Research, 2011. Available at http://research.microsoft.com/apps/pubs/default.aspx?id=145839.
[19] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397, 2004.
[20] S. Melnik, A. Gubarev, J. J. Long, G. Romer, S. Shivakumar, M. Tolton, and T. Vassilakis. Dremel: Interactive analysis of web-scale datasets. In Proceedings of VLDB, 2010.
[21] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[22] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. Technical report, 2011. arxiv.org/abs/1106.5730.
[23] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. SIAM Review, 52(3):471-501, 2010.
[24] S. Shalev-Shwartz and N. Srebro. SVM optimization: Inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[25] N. Srebro, J. Rennie, and T. Jaakkola. Maximum margin matrix factorization. In Advances in Neural Information Processing Systems, 2004.
[26] J. Tsitsiklis, D. P. Bertsekas, and M. Athans. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Transactions on Automatic Control, 31(9):803-812, 1986.
[27] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, 2010.
Learning unbelievable probabilities

Xaq Pitkow, Department of Brain and Cognitive Science, University of Rochester, Rochester, NY 14607, xpitkow@bcs.rochester.edu
Yashar Ahmadian, Center for Theoretical Neuroscience, Columbia University, New York, NY 10032, ya2005@columbia.edu
Ken D. Miller, Center for Theoretical Neuroscience, Columbia University, New York, NY 10032, ken@neurotheory.columbia.edu

Abstract

Loopy belief propagation performs approximate inference on graphical models with loops. One might hope to compensate for the approximation by adjusting model parameters. Learning algorithms for this purpose have been explored previously, and the claim has been made that every set of locally consistent marginals can arise from belief propagation run on a graphical model. On the contrary, here we show that many probability distributions have marginals that cannot be reached by belief propagation using any set of model parameters or any learning algorithm. We call such marginals "unbelievable." This problem occurs whenever the Hessian of the Bethe free energy is not positive-definite at the target marginals. All learning algorithms for belief propagation necessarily fail in these cases, producing beliefs or sets of beliefs that may even be worse than the pre-learning approximation. We then show that averaging inaccurate beliefs, each obtained from belief propagation using model parameters perturbed about some learned mean values, can achieve the unbelievable marginals.

1 Introduction

Calculating marginal probabilities for a graphical model generally requires summing over exponentially many states, and is NP-hard in general [1]. A variety of approximate methods have been used to circumvent this problem. One popular technique is belief propagation (BP), in particular the sum-product rule, which is a message-passing algorithm for performing inference on a graphical model [2]. Though exact and efficient on trees, it is merely an approximation when applied to graphical models with loops. A natural question is whether one can compensate for the shortcomings of the approximation by setting the model parameters appropriately. In this paper, we prove that some sets of marginals simply cannot be achieved by belief propagation. For these cases we provide a new algorithm that can achieve much better results by using an ensemble of parameters rather than a single instance.

We are given a set of variables x with a given probability distribution P(x) of some data. We would like to construct a model that reproduces certain of its marginal probabilities, in particular those over individual variables, $p_i(x_i) = \sum_{x \setminus x_i} P(x)$ for nodes $i \in V$, and those over some relevant clusters of variables, $p_\alpha(x_\alpha) = \sum_{x \setminus x_\alpha} P(x)$ for $\alpha = \{i_1, \ldots, i_{d_\alpha}\}$. We will write the collection of all these marginals as a vector p.

We assume a model distribution $Q_0(x)$ in the exponential family, taking the form

$$Q_0(x) = e^{-E(x)}/Z \qquad (1)$$

with normalization constant $Z = \sum_x e^{-E(x)}$ and energy function

$$E(x) = -\sum_\alpha \theta_\alpha \cdot \phi_\alpha(x_\alpha) \qquad (2)$$

Here, α indexes sets of interacting variables (factors in the factor graph [3]), and $x_\alpha$ is a subset of variables whose interaction is characterized by a vector of sufficient statistics $\phi_\alpha(x_\alpha)$ and corresponding natural parameters $\theta_\alpha$. We assume without loss of generality that each $\phi_\alpha(x_\alpha)$ is irreducible, meaning that its elements are linearly independent functions of $x_\alpha$. We collect all these sufficient statistics and natural parameters in the vectors φ and θ.
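For small models, the target marginals p can be computed directly by enumerating all states of $Q_0$. The sketch below does this for the binary pairwise (Ising) case used in Section 3; it is illustrative only, since exact enumeration costs $2^n$ and is exactly what belief propagation is meant to avoid.

```python
import itertools
import numpy as np

def exact_marginals(h, J):
    """Brute-force node and pairwise marginals of the Ising-type model
    Q0(x) proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j)
    over x in {-1,+1}^n. Feasible only for small n."""
    n = len(h)
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    score = states @ h + np.einsum('si,ij,sj->s', states, np.triu(J, 1), states)
    w = np.exp(score - score.max())                 # -E(x), stabilized
    Q = w / w.sum()                                 # normalized Q0(x)
    plus = (states > 0).astype(float)               # indicator of x_i = +1
    p_i = Q @ plus                                  # P(x_i = +1)
    p_ij = np.einsum('s,si,sj->ij', Q, plus, plus)  # P(x_i = +1, x_j = +1)
    return p_i, p_ij
```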
Normally when learning a graphical model, one would fit its parameters so the marginal probabilities match the target. Here, however, we will not use exact inference to compute the marginals. Instead we will use approximate inference via loopy belief propagation to match the target.

2 Learning in Belief Propagation

2.1 Belief propagation

The sum-product algorithm for belief propagation on a graphical model with energy function (2) uses the following equations [4]:

$$m_{i\to\alpha}(x_i) \propto \prod_{\beta \in N_i \setminus \alpha} m_{\beta\to i}(x_i) \qquad m_{\alpha\to i}(x_i) \propto \sum_{x_\alpha \setminus x_i} e^{\theta_\alpha \cdot \phi_\alpha(x_\alpha)} \prod_{j \in N_\alpha \setminus i} m_{j\to\alpha}(x_j) \qquad (3)$$

where $N_i$ and $N_\alpha$ are the neighbors of node i or factor α in the factor graph. Once these messages converge, the single-node and factor beliefs are given by

$$b_i(x_i) \propto \prod_{\alpha \in N_i} m_{\alpha\to i}(x_i) \qquad b_\alpha(x_\alpha) \propto e^{\theta_\alpha \cdot \phi_\alpha(x_\alpha)} \prod_{i \in N_\alpha} m_{i\to\alpha}(x_i) \qquad (4)$$

where the beliefs must each be normalized to one. For tree graphs, these beliefs exactly equal the marginals of the graphical model $Q_0(x)$. For loopy graphs, the beliefs at stable fixed points are often good approximations of the marginals. While they are guaranteed to be locally consistent, $\sum_{x_\alpha \setminus x_i} b_\alpha(x_\alpha) = b_i(x_i)$, they are not necessarily globally consistent: There may not exist a single joint distribution B(x) of which the beliefs are the marginals [5]. This is why the resultant beliefs are called pseudomarginals, rather than simply marginals. We use a vector b to refer to the set of both node and factor beliefs produced by belief propagation.

2.2 Bethe free energy

Despite its limitations, BP is found empirically to work well in many circumstances. Some theoretical justification for loopy belief propagation emerged with proofs that its stable fixed points are local minima of the Bethe free energy [6, 7]. Free energies are important quantities in machine learning because the Kullback-Leibler divergence between the data and model distributions can be expressed in terms of free energies, so models can be optimized by minimizing free energies appropriately. Given an energy function E(x) from (2), the Gibbs free energy of a distribution Q(x) is

$$F[Q] = U[Q] - S[Q] \qquad (5)$$

where U is the average energy of the distribution,

$$U[Q] = \sum_x E(x)\,Q(x) = -\sum_\alpha \sum_{x_\alpha} \theta_\alpha \cdot \phi_\alpha(x_\alpha)\, q_\alpha(x_\alpha) \qquad (6)$$

which depends on the marginals $q_\alpha(x_\alpha)$ of Q(x), and S is the entropy

$$S[Q] = -\sum_x Q(x) \log Q(x) \qquad (7)$$
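A minimal sketch of equations (3) and (4) for binary pairwise factors follows. Message scheduling and convergence testing are simplified, the function names are ours, and the damping used in Section 3 appears as the `damp` argument.

```python
import numpy as np

def loopy_bp(h, J, num_iters=200, damp=0.0):
    """Sum-product BP, eqs. (3)-(4), for a binary pairwise model with
    fields h and symmetric couplings J. Messages m[(i, j)] are length-2
    vectors over x_j in {-1, +1}; returns the beliefs b_i(x_i = +1)."""
    n = len(h)
    x = np.array([-1.0, 1.0])
    psi_i = np.exp(np.outer(h, x))                       # node factors e^{h_i x_i}
    m = {(i, j): np.ones(2) / 2
         for i in range(n) for j in range(n) if i != j and J[i, j] != 0}
    for _ in range(num_iters):
        new = {}
        for (i, j) in m:
            pre = psi_i[i].copy()                        # node factor times incoming
            for k in range(n):                           # messages from all but j
                if (k, i) in m and k != j:
                    pre = pre * m[(k, i)]
            psi_ij = np.exp(J[i, j] * np.outer(x, x))    # pairwise factor e^{J_ij x_i x_j}
            msg = psi_ij.T @ pre                         # sum over x_i
            new[(i, j)] = (1 - damp) * msg / msg.sum() + damp * m[(i, j)]
        m = new
    b = psi_i.copy()
    for (k, i) in m:
        b[i] = b[i] * m[(k, i)]
    b /= b.sum(axis=1, keepdims=True)
    return b[:, 1]                                       # belief that x_i = +1
```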
Minimizing the Gibbs free energy F[Q] recovers the distribution $Q_0(x)$ for the graphical model (1). The Bethe free energy $F_\beta$ is an approximation to the Gibbs free energy,

$$F_\beta[Q] = U[Q] - S_\beta[Q] \qquad (8)$$

in which the average energy U is exact, but the true entropy S is replaced by an approximation, the Bethe entropy $S_\beta$, which is a sum over the factor and node entropies [6]:

$$S_\beta[Q] = \sum_\alpha S_\alpha[q_\alpha] + \sum_i (1 - d_i)\, S_i[q_i] \qquad (9)$$

$$S_\alpha[q_\alpha] = -\sum_{x_\alpha} q_\alpha(x_\alpha) \log q_\alpha(x_\alpha) \qquad S_i[q_i] = -\sum_{x_i} q_i(x_i) \log q_i(x_i) \qquad (10)$$

The coefficients $d_i = |N_i|$ are the number of factors neighboring node i, and compensate for the overcounting of single-node marginals due to overlapping factor marginals. For tree-structured graphical models, which factorize as $Q(x) = \prod_\alpha q_\alpha(x_\alpha) \prod_i q_i(x_i)^{1-d_i}$, the Bethe entropy is exact, and hence so is the Bethe free energy. On loopy graphs, the Bethe entropy $S_\beta$ isn't really even an entropy (e.g. it may be negative) because it neglects all statistical dependencies other than those present in the factor marginals. Nonetheless, the Bethe free energy is often close enough to the Gibbs free energy that its minima approximate the true marginals [8]. Since stable fixed points of BP are minima of the Bethe free energy [6, 7], this helped explain why belief propagation is often so successful. To emphasize that the Bethe free energy directly depends only on the marginals and not the joint distribution, we will write $F_\beta[q]$, where q is a vector of pseudomarginals $q_\alpha(x_\alpha)$ for all α and all $x_\alpha$. Pseudomarginal space is the convex set [5] of all q that satisfy the positivity and local consistency constraints,

$$0 \le q_\alpha(x_\alpha) \le 1 \qquad \sum_{x_\alpha \setminus x_i} q_\alpha(x_\alpha) = q_i(x_i) \qquad \sum_{x_i} q_i(x_i) = 1 \qquad (11)$$

2.3 Pseudo-moment matching

We now wish to correct for the deficiencies of belief propagation by identifying the parameters θ so that BP produces beliefs b matching the true marginals p of the target distribution P(x). Since the fixed points of BP are stationary points of $F_\beta$ [6], one may simply try to find parameters θ that produce a stationary point in pseudomarginal space at p, which is a necessary condition for BP to reach a stable fixed point there. Simply evaluate the gradient at p, set it to zero, and solve for θ. Note that in principle this gradient could be used to directly minimize the Bethe free energy, but $F_\beta[q]$ is a complicated function of q that usually cannot be minimized analytically [8]. In contrast, here we are using it to solve for the parameters needed to move beliefs to a target location. This is much easier, since the Bethe free energy is linear in θ. This approach to learning parameters has been described as "pseudo-moment matching" [9, 10, 11].

The $L_q$-element vector q is an overcomplete representation of the pseudomarginals because it must obey the local consistency constraints (11). It is convenient to express the pseudomarginals in terms of a minimal set of parameters η with the smaller dimensionality $L_\eta$, using an affine transform

$$q = W\eta + k \qquad (12)$$

where W is an $L_q \times L_\eta$ rectangular matrix. One example is the expectation parameters $\eta_\alpha = \sum_{x_\alpha} q_\alpha(x_\alpha)\,\phi_\alpha(x_\alpha)$ [5], giving the energy simply as $U = -\eta \cdot \theta$. The gradient with respect to those minimal parameters is

$$\frac{\partial F_\beta}{\partial \eta} = \frac{\partial U}{\partial \eta} - \frac{\partial S_\beta}{\partial q}\frac{\partial q}{\partial \eta} = -\theta - \frac{\partial S_\beta}{\partial q}\, W \qquad (13)$$

The Bethe entropy gradient is simplest in the overcomplete representation q,

$$\frac{\partial S_\beta}{\partial q_\alpha(x_\alpha)} = -1 - \log q_\alpha(x_\alpha) \qquad \frac{\partial S_\beta}{\partial q_i(x_i)} = \left(-1 - \log q_i(x_i)\right)(1 - d_i) \qquad (14)$$

Setting the gradient (13) to zero, we have a simple linear equation for the parameters θ that tilt the Bethe free energy surface (Figure 1A) enough to place a stationary point at the desired marginals p:

$$\theta = -\left.\frac{\partial S_\beta}{\partial q}\right|_{p} W \qquad (15)$$

Figure 1: Landscape of the Bethe free energy for the binary graphical model with pairwise interactions. (A) A slice through the Bethe free energy (solid lines) along one axis $v_1$ of pseudomarginal space, for three different values of parameters θ. The energy U is linear in the pseudomarginals (dotted lines), so varying the parameters only changes the tilt of the free energy. This can add or remove local minima. (B) The second derivatives of the free energies in (A) are all identical. Where the second derivative is positive, a local minimum can exist (cyan); where it is negative (yellow), no parameters can produce a local minimum. (C) A two-dimensional slice of the Bethe free energy, colored according to the minimum eigenvalue $\lambda_{\min}$ of the Bethe Hessian. During a run of Bethe wake-sleep learning, the beliefs (blue dots) proceed along $v_2$ toward the target marginals p. Stable fixed points of BP can exist only in the believable region (cyan), but the target p resides in an unbelievable region (yellow). As learning equilibrates, the stable fixed points jump between believable regions on either side of the unbelievable zone.
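For a binary pairwise model, one standard way to satisfy (15) is the familiar reparameterization that sets each node potential to the target node marginal and each pairwise potential to the ratio $p_{ij}/(p_i p_j)$; the natural parameters are the logarithms of these potentials. The sketch below uses our own notation and handles only this special case, not the general solution of (15).

```python
import numpy as np

def pseudo_moment_potentials(p_i, p_ij):
    """Pseudo-moment matching for a pairwise model: psi_i = p_i and
    psi_ij = p_ij / (p_i p_j) place a BP fixed point (not necessarily a
    stable one) at the target marginals. p_i[i] is a length-2 marginal
    and p_ij[(i, j)] a 2x2 joint over the pair."""
    psi_i = {i: q.copy() for i, q in p_i.items()}
    psi_ij = {pair: q / np.outer(p_i[pair[0]], p_i[pair[1]])
              for pair, q in p_ij.items()}
    return psi_i, psi_ij
```

Whether BP actually converges to that fixed point is exactly the question addressed next.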
2.4 Unbelievable marginals

It is well known that BP may converge on stable fixed points that cannot be realized as marginals of any joint distribution. In this section we show that the converse is also true: There are some distributions whose marginals cannot be realized as beliefs for any set of couplings. In these cases, existing methods for learning often yield poor results, sometimes even worse than performing no learning at all. This is surprising in view of claims to the contrary: [9, 5] state that belief propagation run after pseudo-moment matching can always reach a fixed point that reproduces the target marginals. While BP does technically have such fixed points, they are not always stable and thus may not be reachable by running belief propagation.

Definition 1. A set of marginals is "unbelievable" if belief propagation cannot converge to it for any set of parameters.

For belief propagation to converge to the target, namely the marginals p, a zero gradient is not sufficient: The Bethe free energy must also be a local minimum [7].¹ This requires a positive-definite Hessian of $F_\beta$ (the "Bethe Hessian" H) in the subspace of pseudomarginals that satisfy the local consistency constraints. Since the energy U is linear in the pseudomarginals, the Hessian is given by the second derivative of the Bethe entropy,

$$H = \frac{\partial^2 F_\beta}{\partial \eta^2} = -W^\top \frac{\partial^2 S_\beta}{\partial q^2}\, W \qquad (16)$$

where projection by W constrains the derivatives to the subspace spanned by the minimal parameters η. If this Hessian is positive definite when evaluated at p, then the parameters θ given by (15) give $F_\beta$ a minimum at the target p. If not, then the target cannot be a stable fixed point of loopy belief propagation. In Section 3, we calculate the Bethe Hessian explicitly for a binary model with pairwise interactions.

Theorem 1. Unbelievable marginal probabilities exist.

Proof. Proof by example. The simplest unbelievable example is a binary graphical model with pairwise interactions between four nodes, $x \in \{-1,+1\}^4$, and the energy $E(x) = -J\sum_{(ij)} x_i x_j$. By symmetry and (1), marginals of this target P(x) are the same for all nodes and pairs: $p_i(x_i) = \frac{1}{2}$ and $p_{ij}(x_i = x_j) = \rho = \left(2 + 4/(1 + e^{2J} - e^{4J} + e^{6J})\right)^{-1}$. Substituting these marginals into the appropriate Bethe Hessian (22) gives a matrix that has a negative eigenvalue for all $\rho > \frac{3}{8}$, or J > 0.316. The associated eigenvector u has the same symmetry as the marginals, with single-node components $u_i = \frac{1}{2}\left(-2 + 7\rho - 8\rho^2 + \sqrt{10 - 28\rho + 81\rho^2 - 112\rho^3 + 64\rho^4}\right)$ and pairwise components $u_{ij} = 1$. Thus the Bethe free energy does not have a minimum at the marginals of these P(x). Stable fixed points of BP occur only at local minima of the Bethe free energy [7], and so BP cannot reproduce the marginals p for any parameters. Hence these marginals are unbelievable.

¹Even this is not sufficient, but it is necessary.

Not only do unbelievable marginals exist, but they are actually quite common, as we will see in Section 3. Graphical models with multinomial or gaussian variables and at least two loops always have some pseudomarginals for which the Hessian is not positive definite [12]. On the other hand, all marginals with sufficiently small correlations are believable because they are guaranteed to have a positive-definite Bethe Hessian [12]. Stronger conditions have not yet been described.
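Believability can be checked numerically: since U is linear in the pseudomarginals, the Bethe Hessian (16) equals minus the Hessian of the Bethe entropy in the minimal parameters, which can be estimated by finite differences. The sketch below applies this to the four-node example of Theorem 1; treat it as a sanity check of the claim that the minimum eigenvalue turns negative for J > 0.316, under the expression for ρ given above.

```python
import numpy as np
from itertools import combinations

def bethe_entropy(eta, n, pairs, d):
    """Bethe entropy S_beta, eqs. (9)-(10), for a binary pairwise model,
    with minimal parameters eta = (q_i^+ for all i, q_ij^{++} for all pairs)."""
    def H(p):
        p = np.clip(p, 1e-12, 1.0)
        return -np.sum(p * np.log(p))
    qn, qp = eta[:n], eta[n:]
    S = sum((1 - d[i]) * H(np.array([qn[i], 1 - qn[i]])) for i in range(n))
    for k, (i, j) in enumerate(pairs):
        pp = qp[k]
        S += H(np.array([pp, qn[i] - pp, qn[j] - pp, 1 - qn[i] - qn[j] + pp]))
    return S

n, J = 4, 0.4                                   # J > 0.316, so rho > 3/8
pairs = list(combinations(range(n), 2))
d = [n - 1] * n                                 # each node neighbors 3 pairwise factors
rho = 1.0 / (2 + 4 / (1 + np.exp(2 * J) - np.exp(4 * J) + np.exp(6 * J)))
eta0 = np.concatenate([np.full(n, 0.5), np.full(len(pairs), rho / 2)])

# Finite-difference Hessian of S_beta; the Bethe Hessian (16) is its negative.
eps, m, I = 1e-4, eta0.size, np.eye(eta0.size)
Hess = np.zeros((m, m))
S = lambda eta: bethe_entropy(eta, n, pairs, d)
for a in range(m):
    for b in range(m):
        Hess[a, b] = (S(eta0 + eps * (I[a] + I[b])) - S(eta0 + eps * (I[a] - I[b]))
                      - S(eta0 - eps * (I[a] - I[b])) + S(eta0 - eps * (I[a] + I[b]))) / (4 * eps**2)
print("min eigenvalue of the Bethe Hessian:", np.linalg.eigvalsh(-Hess).min())
```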
2.5 Bethe wake-sleep algorithm

When pseudo-moment matching fails to reproduce unbelievable marginals, an alternative is to use a gradient descent procedure for learning, analogous to the wake-sleep algorithm used to train Boltzmann machines [13]. That original rule can be derived as gradient descent of the Kullback-Leibler divergence $D_{KL}$ between the target P(x) and the Boltzmann distribution $Q_0(x)$ (1),

$$D_{KL}[P\|Q_0] = \sum_x P(x) \log\frac{P(x)}{Q_0(x)} = F[P] - F[Q_0] \ \ge\ 0 \qquad (17)$$

where F is the Gibbs free energy (5). Note that this free energy depends on the same energy function E (2) that defines the Boltzmann distribution $Q_0$ (1), and achieves its minimal value of $-\log Z$ for that distribution. The Kullback-Leibler divergence is therefore bounded below by zero, with equality if and only if $P = Q_0$. By changing the energy E, and thus $Q_0$, to decrease this divergence, the graphical model moves closer to the target distribution.

Here we use a new cost function, the "Bethe divergence" $D_\beta[p\|b]$, obtained by replacing these free energies by Bethe free energies [14] evaluated at the true marginals p and at the beliefs b obtained from BP stable fixed points,

$$D_\beta[p\|b] = F_\beta[p] - F_\beta[b] \qquad (18)$$

We use gradient descent to optimize this cost, with gradient

$$\frac{dD_\beta}{d\theta} = \frac{\partial D_\beta}{\partial \theta} + \frac{\partial D_\beta}{\partial b}\frac{\partial b}{\partial \theta} \qquad (19)$$

The data's free energy does not depend on the beliefs, so $\partial F_\beta[p]/\partial b = 0$, and fixed points of belief propagation are stationary points of the Bethe free energy, so $\partial F_\beta[b]/\partial b = 0$. Consequently $\partial D_\beta/\partial b = 0$. Furthermore, the entropy terms of the free energies do not depend explicitly on θ, so

$$\frac{dD_\beta}{d\theta} = \frac{\partial U(p)}{\partial \theta} - \frac{\partial U(b)}{\partial \theta} = -\eta(p) + \eta(b) \qquad (20)$$

where $\eta(q) = \sum_x q(x)\,\phi(x)$ are the expectations of the sufficient statistics φ(x) under the pseudomarginals q. This gradient forms the basis of a simple learning algorithm. At each step in learning, belief propagation is run, obtaining beliefs b for the current parameters θ. The parameters are then changed in the opposite direction of the gradient,

$$\delta\theta = -\epsilon\frac{dD_\beta}{d\theta} = \epsilon\left(\eta(p) - \eta(b)\right) \qquad (21)$$

where ϵ is a learning rate. This generally increases the Bethe free energy for the beliefs while decreasing that of the data, hopefully allowing BP to draw closer to the data marginals. We call this learning rule the Bethe wake-sleep algorithm.

Within this algorithm, there is still the freedom of how to choose initial messages for BP at each learning iteration. The result depends on these initial conditions because BP can have several stable fixed points. One might re-initialize the messages to a fixed starting point for each run of BP, choose random initial messages for each run, or restart the messages where they stopped on the previous learning step. In our experiments we use the first approach, initializing to constant messages at the beginning of each BP run.

The Bethe wake-sleep learning rule sometimes places a minimum of $F_\beta$ at the true data distribution, such that belief propagation can give the true marginals as one of its (possibly multiple) stable fixed points. However, for the reasons provided above, this cannot occur where the Bethe Hessian is not positive definite.
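The resulting learning loop is short. In the sketch below, `bp_moments(theta)` is a hypothetical stand-in that runs BP at the given parameters (re-initializing messages to constants, as in our experiments) and returns the expected sufficient statistics η(b) of the resulting beliefs; `p_moments` is η(p) computed from the target marginals.

```python
import numpy as np

def bethe_wake_sleep(p_moments, bp_moments, theta0, eps=0.05, num_steps=500):
    """Bethe wake-sleep learning, eq. (21): run BP at the current theta,
    then move theta by eps * (eta(p) - eta(b)). The returned trajectory
    of parameters is what ensemble BP (Section 2.6) averages over."""
    theta = theta0.copy()
    trajectory = []
    for _ in range(num_steps):
        eta_b = bp_moments(theta)                  # eta(b) at a BP stable fixed point
        theta = theta + eps * (p_moments - eta_b)  # delta theta from eq. (21)
        trajectory.append(theta.copy())
    return theta, np.array(trajectory)
```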
2.6 Ensemble belief propagation

When the Bethe wake-sleep algorithm attempts to learn unbelievable marginals, the parameters and beliefs do not reach a fixed point but instead continue to vary over time (Figure 2A,B). Still, if learning reaches equilibrium, then the temporal average of the beliefs is equal to the unbelievable marginals.

Theorem 2. If the Bethe wake-sleep algorithm reaches equilibrium, then unbelievable marginals are matched by the belief propagation stable fixed points averaged over the equilibrium ensemble of parameters.

Proof. At equilibrium, the time average of the parameter changes is zero by definition, $\langle\delta\theta\rangle_t = 0$. Substitution of the Bethe wake-sleep equation, $\delta\theta = \epsilon(\eta(p) - \eta(b(t)))$ (21), directly implies that $\langle\eta(b(t))\rangle_t = \eta(p)$. The deterministic mapping (12) from the minimal representation to the pseudomarginals gives $\langle b(t)\rangle_t = p$.

After learning has equilibrated, stable fixed points of belief propagation occur with just the right frequency so that they can be averaged together to reproduce the target distribution exactly (Figure 2C). Note that none of the individual stable fixed points may be close to the true marginals. We call this inference algorithm ensemble belief propagation (eBP).

Ensemble BP produces perfect marginals by exploiting a constant, small-amplitude learning, and thus assumes that the correct marginals are perpetually available. Yet it also works well when learning is turned off, if parameters are drawn randomly from a gaussian distribution with mean and covariance matched to the equilibrium distribution, $\theta \sim N(\bar\theta, \Sigma_\theta)$. In the simulations below (Figures 2C-D, 3B-C), $\Sigma_\theta$ was always low-rank, and only one or two principal components were needed for good performance. The gaussian ensemble is not quite as accurate as continued learning (Figure 3B,C), but the performance is still markedly better than any of the available stable fixed points.

If the target is not within the convex hull of believable pseudomarginals, then learning cannot reach equilibrium: Eventually BP gets as close as it can, but there remains a consistent difference $\eta(p) - \eta(b)$, so θ must increase without bound. Though possible in principle, we did not observe this effect in any of our experiments. There may also be no equilibrium if belief propagation at each learning iteration fails to converge.

3 Experiments

The experiments in this section concentrate on the Ising model: N binary variables, $x \in \{-1,+1\}^N$, with factors comprising individual variables $x_i$ and pairs $x_i, x_j$. The energy function is $E(x) = -\sum_i h_i x_i - \sum_{(ij)} J_{ij} x_i x_j$. The sufficient statistics are then the various first and second moments, $x_i$ and $x_i x_j$, and the natural parameters are $h_i$ and $J_{ij}$. We use this model both for the target distributions and for the model. We parameterize pseudomarginals as $\{q_i^+, q_{ij}^{++}\}$, where $q_i^+ = q_i(x_i = +1)$ and $q_{ij}^{++} = q_{ij}(x_i = x_j = +1)$ [8]. The remaining probabilities are linear functions of these values. Positivity and local consistency constraints then appear as $0 \le q_i^+ \le 1$ and $\max(0,\, q_i^+ + q_j^+ - 1) \le q_{ij}^{++} \le \min(q_i^+, q_j^+)$. If all the interactions are finite, then the inequality constraints are not active [15].
during the cycle. (C) The corresponding beliefs b during the limit cycle (blue circles), projected onto the first two principal components v 1 and v 2 of the trajectory through pseudomarginal space. Believable regions of pseudomarginal space are colored with cyan and the unbelievable regions with yellow, and inconsistent ? (blue ?) are precisely equal pseudomarginals are black. Over the limit cycle, the average beliefs b ? (red +) over many stable fixed points of BP to the target marginals p (black ?). The average b ? + ? still produces a better approxima(red dots) generated from randomly perturbed parameters ? tion of the target marginals than any of the individual believable stable fixed points. (D) Even the best amongst several BP stable fixed points cannot match unbelievable marginals (black and grey). Ensemble BP leads to much improved performance (red and pink). @2S = ++ @qi+ @qjk i,j i,k @2S = ++ ++ @qij @qk` ij,k` ? (qi+ ? ? (qi+ ++ (qij ) ? ++ qik ) 1 + (1 qi+ ++ qk+ + qik ) 1 ++ qij ) 1 + (1 qi+ ++ qj+ + qij ) 1 1 + (qi+ 1 ++ qij ) + (qj+ ++ qij ) ? 1 + (1 qi+ ++ qj+ + qij ) 1 ? Figure 3A shows the fraction of marginals that are unbelievable for 8-node, fully-connected Ising models with random coupling parameters hi ? N (0, 13 ) and Jij ? N (0, J ). For J & 14 , most marginals cannot be reproduced by belief propagation with any parameters, because the Bethe Hessian (22) has a negative eigenvalue. fraction unbelievable A B 1 i BP ii ii iii iii iv iv eBP 0 0 coupling standard deviation J 1 C i v v 10?5 10?4 .001 .01 .1 1 Bethe divergence D [p||b] 10 10?3 10?2 10?1 Euclidean distance |p 1 b| Figure 3: Performance in learning unbelievable marginals. (A) Fraction of marginals that are unbelievable. Marginals were generated from fully connected, 8-node binary models with random biases and pairwise couplings, hi ? N (0, 13 ) and Jij ? N (0, J ). (B,C) Performance of five models on 370 unbelievable random target marginals (Section 3), measured with Bethe divergence D [p||b] (B) and Euclidean distance |p b| (C). Target were generated as in (A) with J = 13 , and selected for unbelievability. Bars represent central quartiles, and white line indicates the median. The five models are: (i) BP on the graphical model that generated the target distribution, (ii) BP after parameters are set by pseudomoment matching, (iii) the beliefs with the best performance encountered during Bethe wake-sleep learning, (iv) eBP using exact parameters from the last 100 iterations of learning, and (v) eBP with gaussian-distributed parameters with the same first- and second-order statistics as iv. 7 We generated 500 Ising model targets using J = 13 , selected the unbelievable ones, and evaluated the performance of BP and ensemble BP for various methods of choosing parameters ?. Each run of BP used exponential temporal message damping of 5 time steps [16], mt+1 = amt + (1 a)mundamped with a = e 1/5 . Fixed points were declared when messages changed by less than 10 9 on a single time step. We evaluated BP performance for the actual parameters that generated the target (1), pseudomoment matching (15), and at best-matching beliefs obtained at any time during Bethe wake-sleep learning. We also measured eBP performance for two parameter ensembles: the last 100 iterations of Bethe wake-sleep learning, and parameters sampled from a ? ?? ) with the same mean and covariance as that ensemble. gaussian N (?, Belief propagation gave a poor approximation of the target marginals, as expected for a model with many strong loops. 
Belief propagation gave a poor approximation of the target marginals, as expected for a model with many strong loops. Even with learning, BP could never get the correct marginals, which was guaranteed by our selection of unbelievable targets. Yet ensemble belief propagation gave excellent results. Using the exact parameter ensemble gave orders-of-magnitude improvement, limited by the number of beliefs being averaged. The gaussian parameter ensemble also did much better than even the best results of BP.

4 Discussion

Other studies have also made use of the Bethe Hessian to draw conclusions about belief propagation. For instance, the Hessian reveals that the Ising model's paramagnetic state becomes unstable in BP for large enough couplings [17]. For another example, when the Hessian is positive definite throughout pseudomarginal space, then the Bethe free energy is convex and thus BP has a unique stable fixed point [18]. Yet the stronger interpretation appears to be underappreciated: When the Hessian is not positive definite for some pseudomarginals, then BP can never have a stable fixed point there, for any parameters.

One might hope that by adjusting the parameters of belief propagation in some systematic way, $\theta \to \theta_{BP}$, one could fix the approximation and so perform exact inference. In this paper we proved that this is a futile hope, because belief propagation simply can never converge to certain marginals. However, we also provided an algorithm that does work: Ensemble belief propagation uses BP on several different parameters with different stable fixed points and averages the results. This approach preserves the locality and scalability which make BP so popular, but corrects for some of its defects at the cost of running the algorithm a few times. Additionally, it raises the possibility that a systematic compensation for the flaws of BP might exist, but only as a mapping from individual parameters to an ensemble of parameters, $\theta \to \{\theta_{eBP}\}$, that could be used in eBP.

An especially clear application of eBP is to discriminative models like Conditional Random Fields [19]. These models are trained so that known inputs produce known inferences, and then generalize to draw novel inferences from novel inputs. When belief propagation is used during learning, the model will fail even on known training examples if they happen to be unbelievable. Overall performance will suffer. Ensemble BP can remedy those training failures and thus allow better performance and more reliable generalization.

This paper addressed learning in fully observed models only, where marginals for all variables were available during training. Yet unbelievable marginals exist for models with hidden variables as well. Ensemble BP should work as in the fully observed case, but training will require inference over the hidden variables during both wake and sleep phases.

One important inference engine is the brain. When inference is hard, neural computations may resort to approximations, perhaps including belief propagation [20, 21, 22, 23, 24]. It would be undesirable for neural circuits to have big blind spots, i.e. reasonable inferences they cannot draw, yet that is precisely what occurs in BP. By averaging over models with eBP, this blind spot can be eliminated. In the brain, synaptic weights fluctuate due to a variety of mechanisms. Perhaps such fluctuations allow averaging over models and thereby reaching conclusions unattainable by a deterministic mechanism.

Note added in proof: After submission of this work, [25] presented partially overlapping results showing that some marginals cannot be achieved by belief propagation.
Acknowledgments

The authors thank Greg Wayne for helpful conversations.

References

[1] Cooper G (1990) The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence 42: 393-405.
[2] Pearl J (1988) Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Mateo, CA.
[3] Kschischang F, Frey B, Loeliger H (2001) Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory 47: 498-519.
[4] Bishop C (2006) Pattern Recognition and Machine Learning. Springer, New York.
[5] Wainwright M, Jordan M (2008) Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning 1: 1-305.
[6] Yedidia JS, Freeman WT, Weiss Y (2000) Generalized belief propagation. In: Advances in Neural Information Processing Systems 13. MIT Press, pp. 689-695.
[7] Heskes T (2003) Stable fixed points of loopy belief propagation are minima of the Bethe free energy. Advances in Neural Information Processing Systems 15: 343-350.
[8] Welling M, Teh Y (2001) Belief optimization for binary networks: A stable alternative to loopy belief propagation. In: Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., pp. 554-561.
[9] Wainwright MJ, Jaakkola TS, Willsky AS (2003) Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. In: Artificial Intelligence and Statistics.
[10] Welling M, Teh Y (2003) Approximate inference in Boltzmann machines. Artificial Intelligence 143: 19-50.
[11] Parise S, Welling M (2005) Learning in Markov random fields: An empirical study. In: Joint Statistical Meeting. Volume 4.
[12] Watanabe Y, Fukumizu K (2011) Loopy belief propagation, Bethe free energy and graph zeta function. arXiv cs.AI: 1103.0605v1.
[13] Hinton G, Sejnowski T (1983) Analyzing cooperative computation. Proceedings of the Fifth Annual Cognitive Science Society, Rochester, NY.
[14] Welling M, Sutton C (2005) Learning in Markov random fields with contrastive free energies. In: Cowell RG, Ghahramani Z, editors, Artificial Intelligence and Statistics. pp. 397-404.
[15] Yedidia J, Freeman W, Weiss Y (2005) Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory 51: 2282-2312.
[16] Mooij J, Kappen H (2005) On the properties of the Bethe approximation and loopy belief propagation on binary networks. Journal of Statistical Mechanics: Theory and Experiment 11: P11012.
[17] Mooij J, Kappen H (2005) Validity estimates for loopy belief propagation on binary real-world networks. In: Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, pp. 945-952.
[18] Heskes T (2004) On the uniqueness of loopy belief propagation fixed points. Neural Computation 16: 2379-2413.
[19] Lafferty J, McCallum A, Pereira F (2001) Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Proceedings of the 18th International Conference on Machine Learning: 282-289.
[20] Litvak S, Ullman S (2009) Cortical circuitry implementing graphical models. Neural Computation 21: 3010-3056.
[21] Steimer A, Maass W, Douglas R (2009) Belief propagation in networks of spiking neurons. Neural Computation 21: 2502-2523.
[22] Ott T, Stoop R (2007) The neurodynamics of belief propagation on binary Markov random fields. In: Advances in Neural Information Processing Systems 19, Cambridge, MA: MIT Press. pp. 1057-1064.
[23] Shon A, Rao R (2005) Implementing belief propagation in neural circuits. Neurocomputing 65-66: 393-399.
[24] George D, Hawkins J (2009) Towards a mathematical theory of cortical micro-circuits. PLoS Computational Biology 5: 1-26.
[25] Heinemann U, Globerson A (2011) What cannot be learned with Bethe approximations. In: Uncertainty in Artificial Intelligence. Corvallis, Oregon: AUAI Press, pp. 319-326.
Learning a Distance Metric from a Network

Blake Shaw* (Computer Science Dept., Columbia University, [email protected])
Bert Huang* (Computer Science Dept., Columbia University, [email protected])
Tony Jebara (Computer Science Dept., Columbia University, [email protected])

* Blake Shaw is currently at Foursquare, and Bert Huang is currently at the University of Maryland.

Abstract

Many real-world networks are described by both connectivity information and features for every node. To better model and understand these networks, we present structure preserving metric learning (SPML), an algorithm for learning a Mahalanobis distance metric from a network such that the learned distances are tied to the inherent connectivity structure of the network. Like the graph embedding algorithm structure preserving embedding, SPML learns a metric which is structure preserving, meaning a connectivity algorithm such as k-nearest neighbors will yield the correct connectivity when applied using the distances from the learned metric. We show a variety of synthetic and real-world experiments where SPML predicts link patterns from node features more accurately than standard techniques. We further demonstrate a method for optimizing SPML based on stochastic gradient descent which removes the running-time dependency on the size of the network and allows the method to easily scale to networks of thousands of nodes and millions of edges.

1 Introduction

The proliferation of social networks on the web has spurred many significant advances in modeling networks [1, 2, 4, 12, 13, 15, 16, 26]. However, while many efforts have been focused on modeling networks as weighted or unweighted graphs [17], or on constructing features from links to describe the nodes in a network [14, 25], few techniques have focused on real-world network data which consists of both node features and connectivity information. Many social networks are of this form; on services such as Facebook, Twitter, or LinkedIn, there are profiles which describe each person, as well as the connections they make. The relationship between a node's features and connections is often not explicit. For example, people "friend" each other on Facebook for a variety of reasons: perhaps they share similar parts of their profile such as their school or major, or perhaps they have completely different profiles. We want to learn the relationship between profiles and links from massive social networks such that we can better predict who is likely to connect. To model this relationship, one could simply model each link independently, learning which characteristics of two profiles imply a possible link. However, this approach completely ignores the structural characteristics of the links in the network. We posit that modeling independent links is insufficient, and that in order to better model these networks one must account for the inherent topology of the network as well as the interactions between the features of nodes. We thus propose structure preserving metric learning (SPML), a method for learning a distance metric between nodes that preserves the structural network behavior seen in data.

1.1 Background

Metric learning algorithms have been successfully applied to many supervised learning tasks such as classification [3, 23, 24]. These methods first build a k-nearest neighbors (kNN) graph from
training data with a fixed k, and then learn a Mahalanobis distance metric which tries to keep connected points with similar labels close while pushing away class impostors, pairs of points which are connected but of different classes. Fundamentally, these supervised methods aim to learn a distance metric such that applying a connectivity algorithm (for instance, k-nearest neighbors) under the metric will produce a graph where no point is connected to others with different class labels. In practice, these constraints are enforced with slack. Once the metric is learned, the class label for an unseen datapoint can be predicted by the majority vote of nearby points under the learned metric.

Unfortunately, these metric learning algorithms are not easily applied when we are given a network as input instead of class labels for each point. Under this new regime, we want to learn a metric such that points connected in the network are close and points which are unconnected are more distant. Intuitively, certain features or groups of features should influence how nodes connect, and thus it should be possible to learn a mapping from features to connectivity such that the mapping respects the underlying topological structure of the network. Like previous metric learning methods, SPML learns a metric which reconciles the input features with some auxiliary information; here that information is the network itself. Instead of pushing away class impostors, SPML pushes away graph impostors, points which are close in terms of distance but which should remain unconnected in order to preserve the topology of the network. Thus SPML learns a metric where the learned distances are inherently tied to the original input connectivity. Preserving graph topology is possible by enforcing simple linear constraints on distances between nodes [21]. By adapting the constraints from the graph embedding technique structure preserving embedding, we formulate simple linear structure preserving constraints for metric learning which enforce that the neighbors of each node are closer than all other nodes. Furthermore, we adapt these constraints to an online setting similar to PEGASOS [20] and OASIS [3], such that we can apply SPML to large networks by optimizing with stochastic gradient descent (SGD).

2 Structure preserving metric learning

Given as input an adjacency matrix A ∈ B^{n×n} and node features X ∈ R^{d×n}, structure preserving metric learning (SPML) learns a Mahalanobis distance metric parameterized by a positive semidefinite (PSD) matrix M ∈ R^{d×d}, where M ⪰ 0. The distance between two points under the metric is defined as D_M(x_i, x_j) = (x_i − x_j)^T M (x_i − x_j). When the metric is the identity, M = I_d, D_M(x_i, x_j) is the squared Euclidean distance between the i'th and j'th points. Learning M is equivalent to learning a linear scaling LX of the input features, where M = L^T L and L ∈ R^{d×d}. SPML learns an M which is structure preserving, as defined in Definition 1: given a connectivity algorithm G, SPML learns a metric such that applying G to the input data using the learned metric produces the input adjacency matrix exactly.¹ Possible choices for G include maximum weight b-matching, k-nearest neighbors, ε-neighborhoods, and maximum weight spanning tree.

Definition 1 Given a graph with adjacency matrix A, a distance metric parametrized by M ∈ R^{d×d} is structure preserving with respect to a connectivity algorithm G if G(X, M) = A.

¹ In the remainder of the paper, we interchangeably use G to denote the set of feasible graphs and the algorithm used to find the optimal connectivity within the set of feasible graphs.
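To make the definitions concrete, the following is a minimal NumPy sketch (ours, not from the paper's MATLAB implementation) of D_M and a structure-preservation check with G = k-nearest neighbors; function names are illustrative, and M is assumed symmetric PSD.

import numpy as np

def mahalanobis_sq(X, M):
    """All pairwise D_M(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j).
    X is d x n, M is d x d symmetric PSD; returns an n x n matrix."""
    G = X.T @ M @ X                       # G[i, j] = x_i^T M x_j
    d = np.diag(G)
    return d[:, None] + d[None, :] - 2 * G

def knn_adjacency(D, k):
    """Connect each node to its k nearest neighbors under distances D
    (directed per-node connections, as in the k-nn scheme of Sec. 2.1)."""
    n = D.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        order = np.argsort(D[i])
        order = order[order != i][:k]     # skip the self-distance
        A[i, order] = 1
    return A

def is_structure_preserving(X, M, A, k):
    """Definition 1 with G = k-nn: does k-nn under M reproduce A exactly?"""
    return np.array_equal(knn_adjacency(mahalanobis_sq(X, M), k), A)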
2.1 Preserving graph topology with linear constraints

To preserve graph topology, we use the same linear constraints as structure preserving embedding (SPE) [21], but apply them to M, which parameterizes the distances between points. A useful tool for defining distances as linear constraints on M is the transformation

D_M(x_i, x_j) = x_i^T M x_i + x_j^T M x_j − x_i^T M x_j − x_j^T M x_i,   (1)

which allows linear constraints on the distances to be written as linear constraints on the matrix M. For the different connectivity schemes below, we present linear constraints which enforce that the graph structure is preserved.

Nearest neighbor graphs. The k-nearest neighbor algorithm (k-nn) connects each node to the k neighbors to which the node has shortest distance, where k is an input parameter; therefore, setting k to the true degree of each node, the distances to all disconnected nodes must be larger than the distance to the farthest connected neighbor:

D_M(x_i, x_j) > (1 − A_ij) max_l (A_il D_M(x_i, x_l)), ∀i, j.

Similarly, preserving an ε-neighborhood graph obeys linear constraints on M: D_M(x_i, x_j) ≤ ε, ∀{i, j | A_ij = 1}, and D_M(x_i, x_j) ≥ ε, ∀{i, j | A_ij = 0}. If for each node the connected distances are less than the unconnected distances (or some ε), i.e., the metric obeys the above linear constraints, Definition 1 is satisfied, and thus the connectivity computed under the learned metric M is exactly A.

Maximum weight subgraphs. Unlike nearest neighbor algorithms, which select edges greedily for each node, maximum weight subgraph algorithms select edges from a weighted graph to produce a subgraph which has maximal total weight [6]. Given a metric parametrized by M, let the weight between two points (i, j) be the negated pairwise distance between them: Z_ij = −D_M(x_i, x_j) = −(x_i − x_j)^T M (x_i − x_j). For example, maximum weight b-matching finds the maximum weight subgraph while also enforcing that every i'th node has a fixed degree b_i. The formulation for maximum weight spanning tree is similar. Unfortunately, preserving structure for these algorithms requires enforcing many linear constraints of the form tr(Z^T A) ≥ tr(Z^T Ã), ∀Ã ∈ G. This reveals one critical difference between the structure preserving constraints of these algorithms and those of nearest-neighbor graphs: there are exponentially many linear constraints. To avoid an exponential enumeration, the most violated inequalities can be introduced sequentially using a cutting-plane approach, as shown in the next section.

2.2 Algorithm derivation

By combining the linear constraints from the previous section with a Frobenius norm (denoted ||·||_F) regularizer on M and regularization parameter λ, we have a simple semidefinite program (SDP) which learns an M that is structure preserving and has minimal complexity. Algorithm 1 summarizes the naive implementation of SPML when the connectivity algorithm is k-nearest neighbors, which is optimized by a standard SDP solver. For maximum weight subgraph connectivity (e.g., b-matching), we use a cutting-plane method [10], iteratively finding the worst violated constraint and adding it to a working set. We can find the most violated constraint at each iteration by computing the adjacency matrix Ã that maximizes tr(Z̃^T Ã) s.t. Ã ∈ G, which can be done using various methods [6, 7, 8].
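As a concrete check of the Sec. 2.1 nearest-neighbor constraints, a few lines suffice to count how many are violated by a candidate metric; this is a hedged sketch of ours reusing mahalanobis_sq from the earlier sketch, checking the unmargined constraints (the SDP below adds a unit margin and slack).

import numpy as np

def nn_constraint_violations(X, M, A):
    """Count pairs (i, j) with A_ij = 0 whose distance does not exceed i's
    farthest connected neighbor, i.e. violations of
    D_M(x_i, x_j) > (1 - A_ij) max_l (A_il D_M(x_i, x_l))."""
    D = mahalanobis_sq(X, M)
    farthest = (A * D).max(axis=1)        # farthest connected neighbor per node
    viol = 0
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if i != j and A[i, j] == 0 and D[i, j] <= farthest[i]:
                viol += 1
    return viol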
Each added constraint enforces that the total weight along the edges of the true graph is greater than the total weight of any other graph by some margin. Algorithm 2 shows the steps for SPML with cutting-plane constraints.

Algorithm 1 Structure preserving metric learning with nearest neighbor constraints
Input: A ∈ B^{n×n}, X ∈ R^{d×n}, and parameter λ
1: K ← {M ⪰ 0, D_M(x_i, x_j) ≥ (1 − A_ij) max_l (A_il D_M(x_i, x_l)) + 1 − ξ ∀i, j}
2: M̃ ← argmin_{M∈K} (λ/2)||M||²_F + ξ   {found via SDP}
3: return M̃

Algorithm 2 Structure preserving metric learning with cutting-plane constraints
Input: A ∈ B^{n×n}, X ∈ R^{d×n}, connectivity algorithm G, and parameters λ, ε
1: K ← {M ⪰ 0}
2: repeat
3:   M̃ ← argmin_{M∈K} (λ/2)||M||²_F + ξ   {found via SDP}
4:   Z̃ ← 2X^T M̃ X − diag(X^T M̃ X) 1^T − 1 diag(X^T M̃ X)^T
5:   Ã ← argmax_A tr(Z̃^T A) s.t. A ∈ G   {find worst violator}
6:   if |tr(Z̃^T Ã) − tr(Z̃^T A)| ≥ ε then
7:     add constraint to K: tr(Z^T A) − tr(Z^T Ã) ≥ 1 − ξ
8:   end if
9: until |tr(Z̃^T Ã) − tr(Z̃^T A)| ≤ ε
10: return M̃

Unfortunately, for networks larger than a few hundred nodes or for high-dimensional features, these SDPs do not scale adequately. The complexity of the SDP scales with the number of variables and constraints, yielding a worst-case time of O(d³ + C³) where C = O(n²). By temporarily omitting the PSD requirement on M, Algorithm 2 becomes equivalent to a one-class structural support vector machine (structural SVM). Stochastic SVM algorithms have recently been developed whose convergence time has no dependence on input size [19]. Therefore, we develop a large-scale algorithm based on projected stochastic subgradient descent. The proposed adaptation removes the dependence on n; each iteration of the algorithm is O(d²), sampling one random constraint at a time. We can rewrite the optimization as unconstrained over an objective function with a hinge loss on the structure preserving constraints:

f(M) = (λ/2)||M||²_F + (1/|S|) Σ_{(i,j,k)∈S} max(D_M(x_i, x_j) − D_M(x_i, x_k) + 1, 0).

Here the constraints have been written in terms of hinge losses over triplets, each consisting of a node, one of its neighbors, and one of its non-neighbors. The set of all such triplets is S = {(i, j, k) | A_ij = 1, A_ik = 0}. Using the distance transformation in Equation 1, each of the |S| constraints can be written using a sparse matrix C^{(i,j,k)}, where

C^{(i,j,k)}_{jj} = 1, C^{(i,j,k)}_{ik} = 1, C^{(i,j,k)}_{ki} = 1, C^{(i,j,k)}_{ij} = −1, C^{(i,j,k)}_{ji} = −1, C^{(i,j,k)}_{kk} = −1,

and whose other entries are zero. By construction, sparse matrix multiplication by C^{(i,j,k)} indexes the proper elements related to nodes i, j, and k, such that tr(C^{(i,j,k)} X^T M X) is equal to D_M(x_i, x_j) − D_M(x_i, x_k). The subgradient of f at M is then

∇f = λM + (1/|S|) Σ_{(i,j,k)∈S₊} X C^{(i,j,k)} X^T,

where S₊ = {(i, j, k) | D_M(x_i, x_j) − D_M(x_i, x_k) + 1 > 0}. If for all triplets this quantity is negative, there exists no unconnected neighbor of a point which is closer than the point's farthest connected neighbor: precisely the structure preserving criterion for nearest neighbor algorithms. In practice, we optimize this objective function via stochastic subgradient descent. We sample a batch of triplets, replacing S in the objective function with a random subset of S of size B. If a true metric is necessary, we intermittently project M onto the PSD cone. Full details about constructing the constraint matrices and minimizing the objective are shown in Algorithm 3.
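The trace identity above is easy to verify numerically; a small sketch of ours, reusing mahalanobis_sq from the earlier sketch, builds C^{(i,j,k)} from the listed entries and checks it on random data.

import numpy as np

def triplet_C(n, i, j, k):
    """Sparse constraint matrix with the six nonzero entries listed above."""
    C = np.zeros((n, n))
    C[j, j], C[i, k], C[k, i] = 1, 1, 1
    C[i, j], C[j, i], C[k, k] = -1, -1, -1
    return C

rng = np.random.default_rng(0)
d, n = 5, 8
X = rng.normal(size=(d, n))
L = rng.normal(size=(d, d))
M = L.T @ L                                # a random symmetric PSD metric
D = mahalanobis_sq(X, M)
i, j, k = 0, 3, 6
lhs = np.trace(triplet_C(n, i, j, k) @ X.T @ M @ X)
assert np.isclose(lhs, D[i, j] - D[i, k])  # tr(C X^T M X) = D(i,j) - D(i,k)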
Algorithm 3 Structure preserving metric learning with nearest neighbor constraints and optimization with projected stochastic subgradient descent
Input: A ∈ B^{n×n}, X ∈ R^{d×n}, and parameters λ, T, B
1: M_1 ← I_d
2: for t from 1 to T − 1 do
3:   η_t ← 1/(λt)
4:   C ← 0_{n,n}
5:   for b from 1 to B do
6:     (i, j, k) ← sample random triplet from S = {(i, j, k) | A_ij = 1, A_ik = 0}
7:     if D_{M_t}(x_i, x_j) − D_{M_t}(x_i, x_k) + 1 > 0 then
8:       C_jj ← C_jj + 1, C_ik ← C_ik + 1, C_ki ← C_ki + 1
9:       C_ij ← C_ij − 1, C_ji ← C_ji − 1, C_kk ← C_kk − 1
10:    end if
11:  end for
12:  ∇_t ← X C X^T + λM_t
13:  M_{t+1} ← M_t − η_t ∇_t
14:  Optional: M_{t+1} ← [M_{t+1}]_+   {project onto the PSD cone}
15: end for
16: return M_T

2.3 Analysis

In this section, we provide analysis of the scaling behavior of SPML using SGD. A primary insight is that, since Algorithm 3 regularizes with the L2 norm and penalizes with hinge loss, omitting the positive semidefinite requirement on M and vectorizing M makes the algorithm equivalent to a one-class, linear support vector machine with O(n³) input vectors. Thus, the stochastic optimization is an instance of the PEGASOS algorithm [19], albeit a cleverly constructed one. The running time of PEGASOS does not depend on the input size, and instead scales only with the dimensionality, the desired optimization error ε on the objective function, and the regularization parameter λ. The optimization error ε is defined as the difference between the found objective value and the true optimal objective value, f(M̃) − min_M f(M).

Theorem 2 Assume that the data is bounded such that max_{(i,j,k)∈S} ||X C^{(i,j,k)} X^T||²_F ≤ R, and R ≥ 1. During Algorithm 3 at iteration T, with λ ≤ 1/4 and batch size B = 1, let M̄ = (1/T) Σ_{t=1}^T M_t be the average M so far. Then, with probability at least 1 − δ,

f(M̄) − min_M f(M) ≤ 84R ln(T/δ) / (λT).

Consequently, the number of iterations necessary to reach an optimization error of ε is O(1/ε).

Proof. The theorem is proven by realizing that Algorithm 3 is an instance of PEGASOS without a projection step on one-class data, since Corollary 2 in [20] proves this same bound for traditional SVM input, also without a projection step. The input to the SVM is the set of all d × d matrices X C^{(i,j,k)} X^T for each triplet (i, j, k) ∈ S. Note that the large size of the set S plays no role in the running time; each iteration requires O(d²) work. Assuming the node feature vectors are of bounded norm, the radius R of the input data is constant with respect to n, since each input is constructed using the feature vectors of three nodes.

In practice, as in the PEGASOS algorithm, we propose using M_T as the output instead of the average, as doing so performs better on real data; an averaging version is easily implemented by storing a running sum of M matrices and dividing by T before returning. Figure 2(b) shows the training and testing prediction performance on one of the experiments described in detail in Section 3 as stochastic SPML converges. The area under the receiver operating characteristic (ROC) curve is measured, which is related to the structure preserving hinge loss, and the plot clearly shows fast convergence and quickly diminishing returns at higher iteration counts.

2.4 Variations

While stochastic SPML does not scale with the size of the input graph, evaluating distances using a full M matrix requires O(d²) work. Thus, for high-dimensional data, one approach is to use principal component analysis or random projections to first reduce dimensionality.
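For readers who prefer runnable code to pseudocode, here is a minimal NumPy sketch of Algorithm 3. It follows the listing above line by line; the eigendecomposition used for the optional PSD projection is one standard choice and an assumption of ours, as are the default hyperparameter values.

import numpy as np

def spml_sgd(A, X, lam=0.01, T=1000, B=10, project=False, seed=0):
    """Projected stochastic subgradient SPML (sketch of Algorithm 3)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    M = np.eye(d)
    pos = [np.flatnonzero(A[i]) for i in range(n)]                # neighbors
    neg = [np.flatnonzero((A[i] == 0) & (np.arange(n) != i)) for i in range(n)]
    for t in range(1, T):
        eta = 1.0 / (lam * t)
        C = np.zeros((n, n))
        for _ in range(B):
            i = rng.integers(n)
            if len(pos[i]) == 0 or len(neg[i]) == 0:
                continue
            j, k = rng.choice(pos[i]), rng.choice(neg[i])
            dij = (X[:, i] - X[:, j]) @ M @ (X[:, i] - X[:, j])
            dik = (X[:, i] - X[:, k]) @ M @ (X[:, i] - X[:, k])
            if dij - dik + 1 > 0:                                 # hinge active
                C[j, j] += 1; C[i, k] += 1; C[k, i] += 1
                C[i, j] -= 1; C[j, i] -= 1; C[k, k] -= 1
        grad = X @ C @ X.T + lam * M
        M = M - eta * grad
        if project:                                               # optional PSD step
            w, V = np.linalg.eigh((M + M.T) / 2)
            M = (V * np.maximum(w, 0)) @ V.T
    return M

Each iteration touches only the three sampled feature vectors and the d × d metric, which is what makes the running time independent of the graph size.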
It has been shown that n points can be mapped into a space of dimensionality O(log n / ε²) such that distances are distorted by no more than a factor of (1 ± ε) [5, 11]. Another approach is to limit M to be nonzero only along the diagonal. Diagonalizing M reduces the amount of work to O(d). If modeling cross-feature interactions is necessary, another option for reducing the computational cost is to perform SPML using a low-rank factorization of M. In this case, all references to M can be replaced with L^T L, thus inducing a true metric without projection. The updated gradient with respect to L is simply ∇_t ← 2XCX^T L^T + λL_t. Using a factorization also allows replacing the regularizer with the Frobenius norm of the L matrix, which is equivalent to the nuclear norm of M [18]. This formulation causes the objective to no longer be convex, but it seems to work well in practice. Finally, when predicting links of new nodes, SPML does not know how many connections to predict. To address this uncertainty, we propose a variant of SPML called degree distributional metric learning (DDML), which simultaneously learns the metric as well as parameters for the connectivity algorithm. Details on DDML and low-rank SPML are provided in the Appendix.

3 Experiments

We present a variety of synthetic and real-world experiments that elucidate the behavior of SPML. First we show how SPML performs on a simple synthetic dataset that is easily visualized in two dimensions and which we believe mimics many traditional network datasets. We then demonstrate favorable performance for SPML in predicting links of the Wikipedia document network and the Facebook social network.

3.1 Synthetic example

To better understand the behavior of SPML, consider the following synthetic experiment. First, n points are sampled from a d-dimensional uniform distribution. These vectors represent the true features for the n nodes, X ∈ R^{d×n}. We then compute an adjacency matrix by performing a minimum-distance b-matching on X. Next, the true features are scrambled by applying a random linear transformation RX, where R ∈ R^{d×d}. Given RX and A, the goal of SPML is to learn a metric M that undoes the linear scrambling, so that when b-matching is applied to RX using the learned distance metric, it produces the input adjacency matrix. Figure 1 illustrates the results of the above experiment for d = 2, n = 50, and b = 4. In Figure 1(a), we see an embedding of the graph using the true features for each node as coordinates, with connectivity generated by b-matching. In Figure 1(b), the random linear transformation has been applied. We posit that many real-world datasets resemble plot 1(b), with seemingly incongruous feature and connectivity information. Applying b-matching to the scrambled data produces the connections shown in Figure 1(c). Finally, by learning M via SPML (Algorithm 2) and computing L by Cholesky decomposition of M, we can recover features LRX (Figure 1(d)) that respect the structure in the target adjacency matrix and thus more closely resemble the true features used to generate the data.

[Figure 1 panels: (a) True network; (b) Scrambled features & true connectivity; (c) Scrambled features & implied connectivity; (d) Recovered features & true connectivity.]

Figure 1: In this synthetic experiment, SPML finds a metric that inverts the random transformation applied to the features (b), such that under the learned metric (d) the implied connectivity is identical to the original connectivity (a), as opposed to inducing a different connectivity (c).
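A hedged sketch of this synthetic experiment follows, reusing the helpers from the earlier sketches (mahalanobis_sq, knn_adjacency, spml_sgd). One substitution is flagged plainly: k-nearest-neighbor connectivity stands in for the paper's minimum-distance b-matching, since no b-matching solver is assumed available here, so the result will not match the paper's figures exactly.

import numpy as np

rng = np.random.default_rng(1)
d, n, k = 2, 50, 4
X_true = rng.uniform(size=(d, n))                  # true node features
A = knn_adjacency(mahalanobis_sq(X_true, np.eye(d)), k)   # "true" connectivity
R = rng.normal(size=(d, d))                        # random linear scrambling
X_obs = R @ X_true                                 # what the learner sees
M = spml_sgd(A, X_obs, lam=0.01, T=2000, B=10)     # learn the metric
A_rec = knn_adjacency(mahalanobis_sq(X_obs, M), k) # connectivity under M
print("edges recovered:", (A_rec * A).sum(), "of", A.sum())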
3.2 Link prediction

We compare SPML to a variety of methods for predicting links from node features: Euclidean distances, relational topic models (RTM), and traditional support vector machines (SVM). A simple baseline for comparison is how well the Euclidean distance metric performs at ranking possible connections. Relational topic models learn a link probability function in addition to latent topic mixtures describing each node [2]. For the SVM, we construct training examples consisting of the pairwise differences between node features. Training examples are labeled positive if there exists an edge between the corresponding pair of nodes, and negative if there is no edge. Because there are potentially O(n²) possible examples and the graphs are sparse, we subsample the negative examples, randomly choosing as many negative examples as there are positive edges. Without subsampling, the SVM is unable to run our experiments in a reasonable time. We use the SVMPerf implementation for our SVM [9], and the authors' code for RTM [2]. Interestingly, an SVM with these inputs can be interpreted as an instance of SPML using a diagonal M and the ε-neighborhood connectivity algorithm, which connects points based on their distance, completely independently of the rest of the graph structure. We thus expect to see better performance using SPML in cases where the structure is important. The RTM approach is appropriate for data that consists of counts, and is a generative model which recovers a set of topics in addition to link predictions. Despite the generality of the model, RTM does not seem to perform as well as discriminative methods in our experiments, especially in the Facebook experiment where the data is quite different from bag-of-words features. For SPML, we run the stochastic algorithm with batch size 10. We skip the PSD projection step, since these experiments are only concerned with prediction, and obtaining a true metric is not necessary. SPML is implemented in MATLAB and requires only a few minutes to converge for each of the experiments below.

[Figure 2 panels: (a) Average ROC curve for the Wikipedia "graph theory topics" experiment (true positive rate vs. false positive rate; curves for SPML, Euclidean, RTM, SVM, Random); (b) Convergence behavior of SPML optimized via SGD on Facebook data (training and testing AUC vs. iteration).]

Figure 2: Average ROC performance for the "graph theory topics" Wikipedia experiment (left) shows a strong lift for SPML over competing methods. We see that SPML converges quickly, with diminishing returns after many iterations (right).

Wikipedia articles. We apply SPML to predicting links on Wikipedia pages. Imagine the scenario where an author writes a new Wikipedia entry and then, by analyzing the word counts on the newly written page, an algorithm is able to suggest which other Wikipedia pages it should link to. We first create a few subnetworks consisting of all the pages in a given category, their bag-of-words features, and their connections. We choose three categories: "graph theory topics", "philosophy concepts", and "search engines". We use a word dictionary of common words with stop-words removed. For each network, we split the data 80/20 for training and testing, where 20% of the nodes are held out for evaluation. On the remaining 80% we cross-validate (five folds) over the parameters for each algorithm (RTM, SVM, SPML), and train a model using the best-scoring regularization parameter.
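Since all the comparisons below are scored by ranking held-out candidate edges, a small self-contained sketch of that evaluation may help. The rank-sum identity for AUC used here is standard (ties are ignored for brevity); the function names are ours, and mahalanobis_sq is reused from the earlier sketch.

import numpy as np

def auc_from_scores(scores_pos, scores_neg):
    """AUC = probability that a random positive outscores a random negative."""
    all_scores = np.concatenate([scores_pos, scores_neg])
    ranks = np.argsort(np.argsort(all_scores)) + 1     # 1-based ascending ranks
    r_pos = ranks[: len(scores_pos)].sum()
    n_p, n_n = len(scores_pos), len(scores_neg)
    return (r_pos - n_p * (n_p + 1) / 2) / (n_p * n_n)

def link_auc(A_test, X, M, pairs):
    """pairs: list of held-out (i, j) candidates; score = -D_M(x_i, x_j)."""
    D = mahalanobis_sq(X, M)
    scores = np.array([-D[i, j] for i, j in pairs])
    labels = np.array([A_test[i, j] for i, j in pairs])
    return auc_from_scores(scores[labels == 1], scores[labels == 0])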
For SPML, we use the diagonal variant of Algorithm 3, since the high dimensionality of the input features reduces the benefit of cross-feature weights. On the held-out nodes, we task each algorithm to rank the unknown edges according to distance (or another measure of link likelihood), and compare the accuracy of the rankings using receiver operating characteristic (ROC) curves. Table 1 lists the statistics of each category and the average area under the curve (AUC) over three train/test splits for each algorithm. A ROC curve for the "graph theory" category is shown in Figure 2(a). For "graph theory" and "search engines", SPML provides a distinct advantage over the other methods, while no method has a particular advantage on "philosophy concepts". One possible explanation for why the SVM is unable to gain performance over Euclidean distance is that the wide range of degrees for nodes in these graphs makes it difficult to find a single threshold that separates edges from non-edges. In particular, the "search engines" category had an extremely skewed degree distribution, and is where SPML shows the greatest improvement.

We also apply SPML to a larger subset of the Wikipedia network, collecting the word counts and connections of 100,000 articles in a breadth-first search rooted at the article "Philosophy". The experimental setup is the same as in the previous experiments, but we use a 0.5% sample of the nodes for testing. The final training algorithm ran for 50,000 iterations, taking approximately ten minutes on a desktop computer. The resulting AUC on the edges of the held-out nodes is listed in Table 1 as the "Philosophy Crawl" dataset. The SVM and RTM do not scale to data of this size, whereas SPML offers a clear advantage over using Euclidean distance for predicting links.

Facebook social networks. Applying SPML to social network data allows us to more accurately predict who will become friends based on the profile information for those users. We use Facebook data [22], where we have a small subset of anonymized profile information for each student of a university, as well as friendship information. The profile information consists of gender, status (meaning student, staff, or faculty), dorm, major, and class year. As in the Wikipedia experiments, we compare SPML to Euclidean, RTM, and SVM. For SPML, we learn a full M via Algorithm 3. For each person, we construct a sparse feature vector with one feature corresponding to every possible dorm, major, etc. for each feature type. We select only people who have indicated all five feature types on their profiles. Table 1 shows details of the Facebook networks for the four schools we consider: Harvard, MIT, Stanford, and Columbia. We perform a separate experiment for each school, randomly splitting the data 80/20 for training and testing.

Table 1: Wikipedia (top), Facebook (bottom) dataset and experiment information. Shown below: number of nodes n, number of edges m, dimensionality d, and AUC performance.

                         n          m        d    Euclidean   RTM     SVM    SPML
Graph Theory           223        917     6695      0.624    0.591   0.610  0.722
Philosophy Concepts    303        921     6695      0.705    0.571   0.708  0.707
Search Engines         269        332     6695      0.662    0.487   0.611  0.742
Philosophy Crawl   100,000  4,489,166     7702      0.547      -       -    0.601
Harvard               1937     48,980      193      0.764    0.562   0.839  0.854
MIT                   2128     95,322      173      0.702    0.494   0.784  0.801
Stanford              3014    147,516      270      0.718    0.532   0.784  0.808
Columbia              3050    118,838      251      0.717    0.519   0.796  0.818
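The feature-importance readout used for Figure 3 below can be sketched in a few lines: take the diagonal of the learned M, normalize by the total weight, and sum within feature types. The grouping dictionary in the usage line is a hypothetical example, not the paper's actual feature layout.

import numpy as np

def feature_importance(M, groups):
    """groups: dict mapping a feature-type name to its column indices in X."""
    w = np.abs(np.diag(M))
    w = w / w.sum()                      # normalize by the total diagonal weight
    return {name: w[idx].sum() for name, idx in groups.items()}

# e.g. feature_importance(M, {"status": [0, 1, 2], "gender": [3, 4]})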
We use the training data to select parameters via five-fold cross validation, and train a model. The AUC performance on the held-out edges is also listed in Table 1. It is clear from the quantitative results that structural information is contributing to higher performance for SPML as compared to the other methods.

[Figure 3: bar plot of the relative importance of the feature types (status, gender, major, dorm, year) for Harvard, MIT, Stanford, and Columbia.]

Figure 3: Comparison of Facebook social networks from four schools in terms of feature importance computed from the learned structure preserving metric.

By looking at the weight of the diagonal values in M normalized by the total weight, we can determine which feature differences are most important for determining connectivity. Figure 3 shows the normalized weights averaged by feature type for the Facebook data, comparing the feature types across the four schools. For all schools except MIT, the graduating year is most important for determining distance between people. For MIT, dorms are the most important features. A possible explanation for this difference is that MIT is the only school in the list that makes it easy for students to stay in one residence for all four years of their undergraduate program, and therefore the dorm one lives in may more strongly affect the people one connects to.

4 Discussion

We have demonstrated a fast convex optimization for learning a distance metric from a network such that the distances are tied to the network's inherent topological structure. The structure preserving distance metrics introduced in this article allow us to better model and predict the behavior of large real-world networks. Furthermore, these metrics are as lightweight as independent pairwise models, but capture structural dependency from features, making them easy to use in practice for link prediction. In future work, we plan to exploit SPML's lack of dependence on graph size to learn a structure preserving metric on massive-scale graphs, e.g., the entire Wikipedia site. Since each iteration requires only sampling a random node, following a link to a neighbor, and sampling a non-neighbor, this can all be done in an online fashion as the algorithm crawls a network such as the worldwide web, learning a metric that may gradually change over time.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. 1117631, by a Google Research Award, and by the Department of Homeland Security under Grant No. N66001-09-C-0080.

References
[1] E. Airoldi, D. Blei, S. Fienberg, and E. Xing. Mixed membership stochastic blockmodels. JMLR, 9:1981-2014, 2008.
[2] J. Chang and D. Blei. Hierarchical relational models for document networks. Annals of Applied Statistics, 4:124-150, 2010.
[3] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online learning of image similarity through ranking. J. Mach. Learn. Res., 11:1109-1135, March 2010.
[4] J. Chen, W. Geyer, C. Dugan, M. Muller, and I. Guy. Make new friends, but keep the old: recommending people on social networking sites. In CHI, pages 201-210. ACM, 2009.
[5] S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Struct. Algorithms, 22:60-65, January 2003.
[6] C. Fremuth-Paeger and D. Jungnickel. Balanced network flows, a unifying framework for design and analysis of matching algorithms. Networks, 33(1):1-28, 1999.
[7] B. Huang and T. Jebara. Loopy belief propagation for bipartite maximum weight b-matching.
In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, volume 2 of JMLR: W&CP, pages 195-202, 2007.
[8] B. Huang and T. Jebara. Fast b-matching via sufficient selection belief propagation. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.
[9] T. Joachims. Training linear SVMs in linear time. In ACM SIG International Conference on Knowledge Discovery and Data Mining (KDD), pages 217-226, 2006.
[10] T. Joachims, T. Finley, and C. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27-59, 2009.
[11] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz maps into a Hilbert space. Contemporary Mathematics, (26):189-206, 1984.
[12] J. Leskovec and E. Horvitz. Planetary-scale views on a large instant-messaging network. ACM WWW, 2008.
[13] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graphs over time: densification laws, shrinking diameters and possible explanations. In Proc. of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, 2005.
[14] M. Middendorf, E. Ziv, C. Adams, J. Hom, R. Koytcheff, C. Levovitz, and G. Woods. Discriminative topological features reveal biological network mechanisms. BMC Bioinformatics, 5:1471-2105, 2004.
[15] G. Namata, H. Sharara, and L. Getoor. A survey of link mining tasks for analyzing noisy and incomplete networks. In Link Mining: Models, Algorithms, and Applications. Springer, 2010.
[16] M. Newman. The structure and function of complex networks. SIAM Review, 45:167-256, 2003.
[17] M. Newman. Analysis of weighted networks. Phys. Rev. E, 70(5):056131, Nov 2004.
[18] J. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the Twenty-Second International Conference, volume 119 of ACM International Conference Proceeding Series, pages 713-719. ACM, 2005.
[19] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pages 807-814, New York, NY, USA, 2007. ACM.
[20] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, to appear.
[21] B. Shaw and T. Jebara. Structure preserving embedding. In Proc. of the 26th International Conference on Machine Learning, 2009.
[22] A. Traud, P. Mucha, and M. Porter. Social structure of Facebook networks. CoRR, abs/1102.2166, 2011.
[23] K. Weinberger and L. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207-244, 2009.
[24] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In S. Becker, S. Thrun, and K. Obermayer, editors, NIPS, pages 505-512. MIT Press, 2002.
[25] J. Xu and Y. Li. Discovering disease-genes by topological features in human protein-protein interaction network. Bioinformatics, 22(22):2800-2805, 2006.
[26] T. Yang, R. Jin, Y. Chi, and S. Zhu. Combining link and content for community detection: a discriminative approach. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '09, pages 927-936, New York, NY, USA, 2009. ACM.
Predicting response time and error rates in visual search

Bo Chen (Caltech, [email protected])
Vidhya Navalpakkam (Yahoo! Research, [email protected])
Pietro Perona (Caltech, [email protected])

Abstract

A model of human visual search is proposed. It predicts both response time (RT) and error rates (ER) as a function of image parameters such as target contrast and clutter. The model is an ideal observer, in that it optimizes the Bayes ratio of target present vs. target absent. The ratio is computed on the firing pattern of V1/V2 neurons, modeled by Poisson distributions. The optimal mechanism for integrating information over time is shown to be a "soft max" of diffusions, computed over the visual field by "hypercolumns" of neurons that share the same receptive field and have different response properties to image features. An approximation of the optimal Bayesian observer, based on integrating local decisions rather than diffusions, is also derived; it is shown experimentally to produce very similar predictions to the optimal observer in common psychophysics conditions. A psychophysics experiment is proposed that may discriminate between which mechanism is used in the human brain.

Figure 1: Visual search. (A) Clutter and camouflage make visual search difficult. (B, C) Psychologists and neuroscientists build synthetic displays to study visual search. In (B) the target "pops out" (Δθ = 45°), while in (C) the target requires more time to be detected (Δθ = 10°) [1].

1 Introduction

Animals and humans often use vision to find things: mushrooms in the woods, keys on a desk, a predator hiding in tall grass. Visual search is challenging because the location of the object that one is looking for is not known in advance, and surrounding clutter may generate false alarms. The three ecologically relevant performance parameters of visual search are the two error rates (ER), false alarms (FA) and false rejects (FR), and the response time (RT). The design of a visual system is crucial in obtaining low ER and RT. These parameters may be traded off by manipulating suitable thresholds [2, 3, 4]. Psychologists and physiologists have long been interested in understanding the performance and the mechanisms of visual search. In order to approach this difficult problem they present human subjects with synthetic stimuli composed of a variable number of "items" which may include a "target" and multiple "distractors" (see Fig. 1). By varying the number of items one may vary the amount of clutter; by designing different target-distractor pairs one may probe different visual cues (contrast, orientation, color, motion); and by varying the visual distinctiveness of the target vis-a-vis the distractors one may study the effect of the signal-to-noise ratio (SNR). Several studies since the 1980s have investigated how RT and ER are affected by the complexity of the stimulus (number of distractors) and by target-distractor discriminability with different visual cues. One early observation is that when the target and distractor features are widely separated in feature space (e.g., a red target among green distractors), the target "pops out". In these situations the ER is nearly zero and the slope of RT vs. set size is flat, i.e., the RT to find the target is independent of the number of items in the display [1]. Decreasing the discriminability between the target and the distractors increases error rates and increases the slope of RT vs. set size [5].
Moreover, it was found that the RT for displays with no target is longer than when the target is present (see review in [6]). Recent studies investigated the shape of RT distributions in visual search [7, 8]. Neurophysiologically plausible models have recently been proposed to predict RTs in visual discrimination tasks [9] and various other 2AFC tasks [10] at a single spatial location in the visual field. They are based on sequential tests of statistical hypotheses (target present vs. target absent) [11] computed on the response of stimulus-tuned neurons [2, 3]. We do not yet have satisfactory models for explaining RTs in visual search, which is harder as it involves integrating information across several locations of the visual field, as well as over time. Existing models predicting RT in visual search are either qualitative (e.g. [12]) or descriptive (e.g., the drift-diffusion model [13, 14, 15]), and do not attempt to predict experimental results with new set sizes or new target and distractor settings.

We propose a Bayesian model of visual search that predicts both ER and RT. Our study makes a number of contributions. First, while visual search has been modeled using signal-detection theory to predict ER [16], our model is based on neuron-like mechanisms and predicts both ER and RT. Second, our model is an optimal observer, given a physiologically plausible front end of the visual system. Third, our model shows that in visual search the optimal computation is not a diffusion, as one might believe by analogy with single-location discrimination models [17, 18]; rather, it is a "soft-max" nonlinear combination of locally computed diffusions. Fourth, we study a physiologically parsimonious approximation to the optimal observer; we show that it is almost optimal when the characteristics of the task are known in advance and held constant, and we explore whether there are psychophysical experiments that could discriminate between the two models.

Our model is based on a number of simplifying assumptions. First, we assume that stimulus items are centered on cortical hypercolumns [19] and that at locations where there is no item neuronal firing is negligible. Second, retinal and cortical magnification [19] are ignored, since psychophysicists have developed displays that sidestep this issue (by placing items on a constant-eccentricity ring, as shown in Fig. 1). Third, we do not account for overt and covert attentional shifts. Overt attentional shifts are manifested by saccades (eye motions), which happen every 200 ms or so. Since the post-decision motor response to a stimulus by pressing a button takes about 250-300 ms, one does not need to worry about eye motions when response times are shorter than 500 ms. For longer RTs, one may enforce eye fixation at the center of the display so as to prevent overt attentional shifts. Furthermore, our model explains serial search without the need to invoke covert attentional shifts [20], which are difficult to prove neurophysiologically.

2 Target discrimination at a single location with Poisson neurons

We first consider probabilistic reasoning at one location, where two possible stimuli may appear. The stimuli differ in one respect, e.g. they have different orientations θ^(1) and θ^(2). We will call them distractor (D) and target (T), also labeled C = 1 and C = 2 (call c ∈ {1, 2} the generic value of C). Based on the response of N neurons (a hypercolumn) we will decide whether the stimulus was a target or a distractor.
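To preview the derivation that follows, here is a small simulation sketch of the one-location decision: a hypercolumn of Poisson neurons drives a log-likelihood-ratio diffusion that is compared with two thresholds. The tuning-curve shape, the rates, the thresholds, and the discrete-time approximation of the point process are all illustrative assumptions, not the paper's fitted values.

import numpy as np

rng = np.random.default_rng(0)
prefs = np.linspace(0, 180, 32, endpoint=False)       # preferred orientations
# bell-shaped tuning with circular wrap-around; 1 Hz baseline, 10 Hz peak
f = lambda theta: 1 + 9 * np.exp(-(((prefs - theta + 90) % 180 - 90) ** 2) / (2 * 15 ** 2))
lam_D, lam_T = f(90.0), f(105.0)                      # rates for D and T (Hz)

def diffusion(stim_rates, dt=0.001, T1=2.0, T0=-2.0, t_max=2.0):
    """Accumulate log10 R(T); report T when it crosses T1, D when below T0.
    Spikes are drawn as Poisson counts in bins of width dt, an approximation
    of the exact event times; timeouts default to the distractor report."""
    logR, t = 0.0, 0.0
    drift = np.sum(lam_D - lam_T) * dt / np.log(10)   # inter-spike drift term
    while T0 < logR < T1 and t < t_max:
        spikes = rng.poisson(stim_rates * dt)         # one count per neuron
        logR += drift + np.sum(spikes * np.log10(lam_T / lam_D))
        t += dt
    return ("target" if logR >= T1 else "distractor"), t

print(diffusion(lam_T))   # the stimulus shown is the target

The drift and jump terms in the loop correspond, respectively, to the interval and spike contributions derived below.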
Crucially, a decision should be reached as soon as possible, i.e. as soon as there is sufficient evidence for T or D [11]. Given the evidence T (defined further below in terms of the neurons' activity) we wish to decide whether the stimulus was of type 1 or 2. We may do so when the probability P(C = 1|T) of the stimulus being of type 1 given the observations in T exceeds a given threshold T₁ (T₁ = 0.99). We may instead decide in favor of C = 2, e.g. when P(C = 1|T) < T₂ (e.g. T₂ = 0.01).

Figure 2: (Left three panels) Model of a hypercolumn in V1/V2 cortex composed of four orientation-tuned neurons (our simulations use 32). The left panel shows the neurons' tuning curve λ(θ), representing the expected Poisson firing rate when the stimulus has orientation θ. The middle plot shows the expected firing rate of the population of neurons for two stimuli whose orientations are indicated with a red (distractor) and a green (target) vertical line. The third plot shows the step change in the value of the diffusion when an action potential is registered from a given neuron. (Right panel) Diagram of the decision models. (A) One-location Bayesian observer. The action potentials of a hypercolumn of neurons (top) are integrated in time to produce a diffusion. When the diffusion reaches either an upper bound T₁ or a lower bound T₀ the decision is taken that either the target is present (1) or the target is absent (0). (B-D) Multi-location ideal Bayesian observer. (B) While not a diffusion, it may be seen as a "soft maximum" combination of local diffusions: the local diffusions are first exponentiated, then averaged; the log of the result is compared to two thresholds to reach a decision. (C) The "Max approximation" is a simplified approximation of the ideal observer, where the maximum of local diffusions replaces a soft maximum. (D) Equivalently, in the Max approximation decisions are reached locally and combined by logical operators. The white AND in a dark field indicates an inverted AND of multiple inverted inputs.

If P(C = 1|T) ∈ (T₂, T₁) we will wait for more evidence. Thus, we need to compute P(C = 1|T):

$$\Pr(C=1\,|\,T) = \frac{1}{1+\frac{P(C=2|T)}{P(C=1|T)}} = \frac{1}{1+R(T)\,\frac{P(C=2)}{P(C=1)}}, \quad \text{where} \quad R(T) = \frac{P(T|C=2)}{P(T|C=1)} = \frac{P(C=2|T)}{P(C=1|T)}\cdot\frac{P(C=1)}{P(C=2)} \qquad (1)$$

where P(C = 1) = 1 − P(C = 2) is the prior probability of C = 1. Thus, it is equivalent to take decisions by thresholding log R(T)¹; we will elaborate on this in Sec. 3. We will model the firing rate of the neurons with a Poisson pdf: the number n of action potentials that will be observed during one second is distributed as P(n|λ) = λⁿe^(−λ)/n!. The constant λ is the expectation of the number of action potentials per second.

¹ We use base 10 for all our logarithms and exponentials, i.e. log(x) ≡ log₁₀(x) and exp(x) ≡ 10ˣ.
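The decision rule above is straightforward to exercise numerically. Below is a minimal sketch (our illustration, not the authors' code) that computes P(C = 1|T) from Poisson spike counts via Eq. (1); the tuning values lam1, lam2 and the observation window t are assumed, illustrative quantities.

```python
import numpy as np

# Minimal sketch (ours): posterior P(C=1 | counts) for one hypercolumn of N
# independent Poisson neurons observed for t seconds. lam1/lam2 are the
# assumed expected rates under distractor (C=1) and target (C=2).

def log10_bayes_ratio(counts, t, lam1, lam2):
    """log10 R(T); the log(n!) terms cancel between the two hypotheses."""
    ll2 = np.sum(counts * np.log10(lam2 * t)) - np.sum(lam2) * t / np.log(10)
    ll1 = np.sum(counts * np.log10(lam1 * t)) - np.sum(lam1) * t / np.log(10)
    return ll2 - ll1

def posterior_c1(counts, t, lam1, lam2, prior_c1=0.5):
    r = 10.0 ** log10_bayes_ratio(counts, t, lam1, lam2)   # Eq. (1)
    return 1.0 / (1.0 + r * (1.0 - prior_c1) / prior_c1)

lam1 = np.full(4, 5.0)                    # toy 4-neuron hypercolumn
lam2 = np.array([2.0, 4.0, 8.0, 10.0])
counts = np.random.poisson(lam2 * 0.5)    # spikes in a 0.5 s window, target shown
print(posterior_c1(counts, 0.5, lam1, lam2))
```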
Each neuron i ∈ {1, …, N} is tuned to a different orientation θᵢ; for the sake of simplicity we will assume that the width of the tuning curve is the same for all neurons; i.e. each neuron i will respond to stimulus c with expectation λᵢᶜ = f(|θ⁽ᶜ⁾ − θᵢ|) (in spikes per second), which is determined by the distance between the neuron's preferred orientation θᵢ and the stimulus orientation θ⁽ᶜ⁾. Let Tᵢ = {tᵢₖ} be the set of action potentials from neuron i produced starting at t = 0 and until the end of the observation period t = T. Indicate with T = {tₖ} = ∪ᵢ Tᵢ the complete set of action potentials from all neurons (where the tₖ are sorted). We will indicate with i(k) the index of the neuron who fired the action potential at time tₖ. Call Iₖ = (tₖ, tₖ₊₁) the intervals of time in between action potentials, where I₀ = (0, t₁). These intervals are open, i.e. they do not contain the boundaries, hence they do not contain the action potentials. The signal coming from the neurons is thus a concatenation of "spikes" and "intervals", and the interval (0, T) may be viewed as the union of the instants tₖ and the open intervals (tₖ, tₖ₊₁), i.e. (0, T) = I₀ ∪ {t₁} ∪ I₁ ∪ {t₂} ∪ ⋯

Since the spike trains Tᵢ and T are Poisson processes, once we condition on the class of the stimulus the spike times are independent. This implies that P(T|C = c) = ∏ₖ P(Iₖ|C = c) P(tₖ|C = c). This may be proven by dividing up (0, T) into smaller and smaller intervals and taking the limit for the size of the intervals going to zero. The intervals containing action potentials converge to the tᵢ, and the intervals not containing action potentials may be merged into the intervals Iᵢ. Let's analyze separately the log likelihood ratio for the intervals and for the spikes.

Diffusion drift during the intervals. During the intervals no neuron spiked. The ratio therefore is computed as a function of the Poissons P(n = 0|λ) when the spike count n is zero. The Poisson expectation has to be multiplied by the time-length of the interval; call Δtₖ = tₖ₊₁ − tₖ the length of the interval Iₖ. Assuming that the neurons i = 1, …, N are independent we obtain:

$$\log R(I_k) = \log \frac{P(n=0\,|\,C=2,\ t\in I_k)}{P(n=0\,|\,C=1,\ t\in I_k)} = \log \frac{\prod_{i=1}^{N} P(n=0\,|\,\lambda_i^2 \Delta t_k)}{\prod_{i=1}^{N} P(n=0\,|\,\lambda_i^1 \Delta t_k)} = \sum_{i=1}^{N} \Delta t_k\,(\lambda_i^1 - \lambda_i^2) \qquad (2)$$

Thus, during the time-intervals where no action potential is observed, the diffusion drifts linearly with a slope equal to the sum over all neurons of the difference between the expected firing rate with stimulus 1 and the expected firing rate with stimulus 2. Notice that if there are neurons that fire equally well to targets and distractors, and if the population of neurons is large and made of neurons whose tuning curves' shape is identical and whose preferred orientations θᵢ are regularly spaced, then Σᵢ λᵢ¹ ≈ Σᵢ λᵢ²; thus the diffusion has drift with slope close to zero and the drift term may be ignored. In this case intervals carry no information.
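To make the drift term of Eq. (2) concrete, here is a small sketch (ours). The Gaussian-shaped tuning curve is an assumption for illustration (the text only requires equal widths); the floor/peak rates and width match the defaults stated later in Sec. 5.

```python
import numpy as np

# Sketch (ours): a hypercolumn of orientation-tuned Poisson neurons and the
# drift slope of Eq. (2). The Gaussian bump shape of f is our assumption.

def tuning_rates(theta_stim, theta_pref, rate_min=1.0, rate_max=10.0, width=np.pi / 8):
    d = np.abs(theta_stim - theta_pref)
    d = np.minimum(d, np.pi - d)          # orientation is periodic with period pi
    return rate_min + (rate_max - rate_min) * np.exp(-0.5 * (d / width) ** 2)

theta_pref = np.linspace(0, np.pi, 32, endpoint=False)   # 32 preferred orientations
lam1 = tuning_rates(np.pi / 2, theta_pref)               # distractor rates (C = 1)
lam2 = tuning_rates(np.pi / 2 + np.pi / 6, theta_pref)   # target rates (C = 2)

print(np.sum(lam1 - lam2))   # Eq. (2) drift slope; near zero for this population
```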
Diffusion jump at the action potentials. If the neurons are uncorrelated, then the probability of two or more action potentials happening at the same time is zero. Thus, at any time tₖ there is only one action potential from one neuron. We can compute the likelihood ratio by taking a limit for the length Δt of the interval t ∈ (tₖ − Δt/2, tₖ + Δt/2) going to zero. As seen in the previous section, the contribution from the neurons who did not register a spike is Δt(λᵢ¹ − λᵢ²) and goes to zero as Δt → 0. Thus we are only left with the contribution of the neuron i(k) whose spike happened at time tₖ:

$$\log R(t_k) = \lim_{\Delta t \to 0} \log \frac{P(n=1\,|\,\lambda^2_{i(k)} \Delta t)}{P(n=1\,|\,\lambda^1_{i(k)} \Delta t)} = \lim_{\Delta t \to 0} \log \frac{(\lambda^2_{i(k)}\Delta t)^1\, e^{-\lambda^2_{i(k)}\Delta t}}{(\lambda^1_{i(k)}\Delta t)^1\, e^{-\lambda^1_{i(k)}\Delta t}} = \log \frac{\lambda^2_{i(k)}}{\lambda^1_{i(k)}} \qquad (3)$$

As a result, at each action potential tₖ the diffusion jumps by an amount that is the log of the ratio of the expected firing rates of neuron i(k)'s response to target vs. distractor. Thus:
1. Neurons that are equally tuned to target and distractor, whether they respond much or not, will not contribute to the diffusion, while neurons whose response is very different for target and distractor will contribute substantially to the diffusion.
2. A larger number of neurons will produce more action potentials and thus a faster action-potential-driven drift in the diffusion.

Diffusion overall. Given the analysis presented above:

$$\log R(T) = \sum_k \Delta t_k \sum_i (\lambda_i^1 - \lambda_i^2) + \sum_k \log\frac{\lambda^2_{i(k)}}{\lambda^1_{i(k)}} = |T| \sum_i (\lambda_i^1 - \lambda_i^2) + \sum_k \log\frac{\lambda^2_{i(k)}}{\lambda^1_{i(k)}} \qquad (4)$$

Ignoring diffusion during the intervals, the diffusion at a single location where the stimulus is of type c can be described as:

$$\log R(T) \approx \sum_{i=1}^{N} \Big(\log \frac{\lambda_i^2}{\lambda_i^1}\Big)\, \mathrm{Poiss}(\lambda_i^c\,|T|) \qquad (5)$$

$$\mathbb{E}[\log R(T)] = a_c\,|T|, \qquad \mathbb{V}[\log R(T)] = b_c^2\,|T| \qquad (6)$$

where Poiss(λ) denotes a Poisson distributed variable with mean λ, a_c ≡ Σᵢ₌₁ᴺ (log λᵢ²/λᵢ¹) λᵢᶜ and b_c² ≡ Σᵢ₌₁ᴺ (log λᵢ²/λᵢ¹)² λᵢᶜ. The mean and variance of the diffusion grow linearly with time.

3 Visual search: detection across M locations with Poisson neurons

We now consider the case with M locations with N Poisson neurons each. At each location we may either have a target T or a distractor D. In the whole display we have two hypotheses: no target (C = 1) or one target at a location l (C = 2). The second hypothesis may be broken up into the target being at any of M locations l. Because of this, the numerator of the likelihood ratio is the sum of M terms. The Bayesian observer must integrate the action potentials from each unit to a central unit that computes the posterior beliefs. The multi-location Bayesian observer may be computed by observing that the target-present event is the union of the target-present events in each one of the locations, while the target-absent event implies that each location has no target. Thus, the likelihood can be computed as the weighted sum of local likelihoods at each location in the display. We assume that:
1. The likelihood at each location is independent from the rest when the stimulus type at that location is known; i.e. P(T|Cˡ, ∀l) = ∏ₗ P(Tˡ|Cˡ).
2. The target, if present, is equally likely to occur at any location in the display; i.e. ∀l, P(Cˡ = 2, C^l̄ = 1|C = 2) = 1/M.

Calling l a location and l̄ the complement of that location (i.e. all locations but l) we have:

$$P(T|C=1) = \prod_{l=1}^{M} P(T^l\,|\,C^l=1)$$

$$P(T|C=2) = \sum_{l=1}^{M} P(T\,|\,C^l=2, C^{\bar l}=1)\,P(C^l=2, C^{\bar l}=1\,|\,C=2) = \frac{1}{M}\Big(\prod_{l=1}^{M} P(T^l|C^l=1)\Big)\sum_{l=1}^{M} R^l(T^l)$$

$$\log R_{tot}(T) = \log\frac{P(T|C=2)}{P(T|C=1)} = \log \frac{1}{M}\sum_{l=1}^{M} R^l(T^l) = \log \frac{1}{M}\sum_{l=1}^{M} \exp\big(\log R^l(T^l)\big) \qquad (7)$$

Figure 3: (A) Diffusions realized at 10 spatial locations when the target is absent (black). The ideal observer Bayes ratio is shown in green, the max-model approximation is shown in red. Thresholds τ₁ = −2, τ₂ = 2 are shown, which produce 1% error rates in the ideal observer. (B) Target present case. Notice that the decision takes longer when the target is absent (see also Fig. 4). (C) Error rates vs. number of items and (D) vs. target contrast when decision thresholds are held constant. Decision thresholds were chosen to obtain 5% error rates in the condition M = 10, Δθ = π/6. As we change target contrast and the number of targets the optimal observer has constant error rates, while the Max approximation produces variable error rates. Testing human subjects with a mix of stimuli with different values of M and Δθ may prevent them from adjusting decision thresholds between stimuli; thus, one would expect constant error rates if the visual system uses the ideal observer and variable error rates if it uses the Max approximation.
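Eq. (7) is a log-sum-exp of the local diffusions. A sketch (ours) of the central combination step; since the paper's logs are base 10 and scipy's logsumexp works in base e, we convert:

```python
import numpy as np
from scipy.special import logsumexp

# Sketch (ours): global log-likelihood ratio of Eq. (7) from M local diffusions.

def log10_r_tot(log10_r_local):
    ln_r = np.asarray(log10_r_local) * np.log(10.0)       # base-10 -> natural log
    M = ln_r.size
    return (logsumexp(ln_r) - np.log(M)) / np.log(10.0)   # log10 of (1/M) sum exp

log10_r_local = np.array([-1.2, -0.8, 2.5, -1.0])  # one diffusion per location
print(log10_r_tot(log10_r_local))   # near max - log10(M) when one term dominates
```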
Eqn. 7 tells us two things:
1. The process log R_tot is not a diffusion, in that log R_tot at time t + 1 cannot be computed by incrementing its value at time t by a term that depends only on the interval (t, t + 1).
2. The process log R_tot may be computed easily from the local diffusions log Rˡ(Tˡ) (in Sec. 4 we find an approximation that has a natural neural implementation).

Now that we know how to compute log R(T) for the single- and multi-location Bayesian observers, we may take our decision by thresholding log R(T) (Eqn. 1). Specifically, we choose separate thresholds for making the target-absent and the target-present decisions, and adjust the thresholds based on tolerance levels for the false positive rate (FPR) and the false negative rate (FNR). We keep accumulating evidence until either decision can be made. The relationship between FPR, FNR and the two thresholds can be derived using analysis similar to [11]. When log R_tot(T) reaches the target-present threshold (τ₂), with probability P(C = 2|T) the target is present and with probability P(C = 1|T) the target is absent, i.e. FPR = P(C = 1|T) and 1 − FNR = P(C = 2|T). We have:

$$\tau_2 = \log R_{tot}(T) = \log\frac{P(C=2|T)}{P(C=1|T)} = \log\frac{1-\mathrm{FNR}}{\mathrm{FPR}} \qquad (8)$$

Similarly, when log R(T) reaches the target-absent threshold (τ₁), we have:

$$\tau_1 = \log R_{tot}(T) = \log\frac{P(C=2|T)}{P(C=1|T)} = \log\frac{\mathrm{FPR}}{1-\mathrm{FNR}} \qquad (9)$$

Therefore, given desired FPR and FNR, we can analytically compute the upper and lower thresholds for the Full Bayesian model using Eqn. 8 and 9.
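Eqs. (8)-(9) are one-liners; a small sketch (ours), again with the paper's base-10 convention:

```python
import numpy as np

# Sketch (ours): decision thresholds from desired error rates, Eqs. (8)-(9).

def thresholds(fpr, fnr):
    tau2 = np.log10((1.0 - fnr) / fpr)   # target-present (upper) threshold
    tau1 = np.log10(fpr / (1.0 - fnr))   # target-absent (lower) threshold
    return tau1, tau2

print(thresholds(0.01, 0.01))   # roughly (-2.0, 2.0), cf. the thresholds in Fig. 3
```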
4 Max approximation

An alternative, more economical approach to the full Bayesian decision is to approximate the global belief using the largest local diffusion and suppress the rest. This is because, in the limit where |T| is large, the diffusion at the location where the target is present will dominate over the other diffusions and thus it is a good approximation of the sum in Eq. 7. We will call this approach the "max approximation" and also the "Max model". In this scheme, at each location a diffusion based on the local Bayesian observer is computed. If any location "detects" a target, then a target is declared. If all locations detect a distractor, then the "no target" condition is declared. This may not be the optimal method, but it has the advantage of requiring only two low-frequency communication lines between each location and the central decision unit.

Equivalently, the max approximation can be implemented by computing the maximum of the local diffusions and comparing it to an adjusted high and a low threshold for the target present/absent decision (see Fig. 2). More specifically, let l* denote the location of maximum diffusion in the display, and log R^{l*} denote the maximum diffusion (i.e., log R^{l*} = max_{l=1..M} log Rˡ(Tˡ)). From Eqn. 7 we know that the global likelihood ratio is the average of the local likelihood ratios, and equivalently, the log likelihood ratio is the soft-max of the local diffusions:

$$\log R_{tot}(T) = \log\Big(\frac{1}{M}\sum_{l=1}^{M}\exp\big(\log R^l(T^l)\big)\Big) = \log R^{l^*} + \log\Big(\frac{1}{M}\Big(1 + \sum_{l\neq l^*}\exp\big(\log R^l - \log R^{l^*}\big)\Big)\Big) \qquad (10)$$

Target present: when the target is present in the display, if the target is different from the distractor, the diffusion at the target's location will frequently become much higher than at the other locations, and the terms corresponding to Rˡ/R^{l*} may be safely ignored. Thus, the total log likelihood ratio may be approximated by the maximum local diffusion minus a constant:

$$\log R_{tot} \approx \log R^{l^*} - \log M \qquad \text{if } R^l \ll R^{l^*} \qquad (11)$$

Figure 4: (A) Histogram of response times (RT) when the target is present (green) and when the target is absent (red) for M = 10 for different values of target contrast (Δθ). Response times are longer when the contrast is smaller (see Fig. 1). Also, they are longer when the target is absent (see Fig. 3). Notice that the response times have a Gaussian-like distribution when time is plotted on a log scale, and the width of the distribution does not change significantly as the difficulty of the task changes; thus, the mean and median response time are equivalently informative statistics of RT. (B) Mean RT as a function of the number M of items for different values of target contrast; the curves appear linear as a function of log M [21]. Notice that the RT slope is almost zero ("parallel search") when the target has high contrast, while when target contrast is low RT increases significantly with M ("serial search") [1]. The response times observed using the Max approximation are almost identical to those obtained with the ideal observer. (C) Error vs. RT tradeoff curves obtained by changing systematically the value of the decision threshold. The mean RT (±σ) is shown. Ideal Bayesian observer (blue) and Max approximation (cyan) are almost identical, indicating that the Max approximation's performance is almost as good as that of the optimal observer.

From Eqn. 5 and 6 we know that the difference in diffusion value between the target location and the distractor locations grows linearly in time. Thus, the longer the process lasts, the better the approximation. Conversely, when t = |T| is small, the approximation is unreliable, and a different approximation term must be introduced (see supplementary material² for derivation):
$$\log R_{tot} \approx \log R^{l^*} - a_2 t + \log\Big(\frac{1}{M} + \frac{M-1}{M}\exp\Big(\big(a_1 - a_2 + \tfrac{b_1^2 + b_2^2}{2}\big)\,t\Big)\Big) \qquad \text{if } R^l \approx R^{l^*} \qquad (12)$$

Target absent: when the target is absent in the display, the values of all the local diffusions at time t will be distributed according to the same density. According to Eqn. 6, the standard deviation grows as √t, hence the expected value of log R^{l*} − log Rˡ is monotonically increasing. When this expected difference is large enough, we can make the same approximation as Eqn. 11:

$$\log R_{tot} \approx \log R^{l^*} - \log M \qquad \text{if } R^l \ll R^{l^*} \qquad (13)$$

On the other hand, when |T| is small, we resort to another approximation (see supplementary material for derivation):

$$\log R_{tot} \approx \log R^{l^*} - \gamma_M\, b_1 \sqrt{t} + \frac{b_1^2 t}{2} - \log\Big(\frac{\exp(b_1^2 t) + M - 1}{M}\Big) \qquad \text{if } R^l \approx R^{l^*} \qquad (14)$$

where γ_M ≡ M ∫ z Φ^{M−1}(z) N(z) dz, and N(z) and Φ(z) denote the pdf and cdf of the normal distribution. Since the max diffusion does not represent the global log likelihood ratio, its thresholds cannot be computed directly from the error rates. Nonetheless we can first compute analytically the thresholds for the Bayesian observer (Eqn. 8 and 9), and adjust them based on the approximations stated above (Eqn. 11, 12, 13 and 14). Finally, we threshold the maximum local diffusion log R^{l*} with respect to the adjusted upper and lower thresholds to make our decision.

5 Experiments

Experiment 1. - Overall model predictions. In this experiment we explore the model's prediction of response time over a series of interesting conditions. The default parameters are the number of neurons per location N = 32, the tuning width of each neuron = π/8, the maximum expected firing rate (λ = 10 action potentials per second) and minimum expected firing rate (λ = 1 a.p./s) of a neuron, which reflect the signal-to-noise ratio of the neuron's tuning curves, the number of items (locations) in the display M = 10 and the stimulus contrast Δθ = π/6. Both M and Δθ refer to the display, while the other parameters refer to the brain. We will focus on how predictions change when the display parameters are changed over a set of discrete settings: M ∈ {3, 10, 30} and Δθ ∈ {π/18, π/6, π/2}. For each setting of the parameters, we simulate the Bayesian and the Max model for 1000 runs. The length of simulation is set to a large value (4 seconds) to make sure that all decisions are made before the simulation terminates. We are also interested in the trade-off between RT and ER; for each ε ∈ {1%, 5%, 10%} we search for the best pair of upper and lower thresholds that achieve FNR ≈ FPR ≈ ε. We search over the interval [0, 3.5] for the optimal upper threshold and over [−3.5, 0] for the optimal lower threshold (an upper threshold of 3.5 corresponds to a FPR of 0.03%). The search is conducted exhaustively over an [80 × 80] discretization of the joint space of the thresholds. We record the response time distributions for all parameter settings and for all values of ε (Fig. 4).

² http://vision.caltech.edu/~bchen3/nips2011/supplementary.pdf
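A compact Monte-Carlo sketch of one trial of this kind of simulation is given below. It is our simplified illustration, not the authors' implementation: local diffusions are accumulated in discrete time bins from Poisson spikes (jumps from Eq. 3, drift from Eq. 2), combined with the soft-max of Eq. (7), and thresholded; the Gaussian tuning shape and the fixed thresholds are assumptions.

```python
import numpy as np
from scipy.special import logsumexp

# Monte-Carlo sketch (ours, simplified) of one visual-search trial.
rng = np.random.default_rng(0)
N, M, dt, t_max = 32, 10, 0.01, 4.0
theta = np.linspace(0, np.pi, N, endpoint=False)

def rates(stim):                      # Gaussian tuning bump, an assumption
    d = np.minimum(np.abs(stim - theta), np.pi - np.abs(stim - theta))
    return 1.0 + 9.0 * np.exp(-0.5 * (d / (np.pi / 8)) ** 2)

lam1, lam2 = rates(np.pi / 2), rates(np.pi / 2 + np.pi / 6)
jump = np.log10(lam2 / lam1)                 # per-spike jumps, Eq. (3)
drift = np.sum(lam1 - lam2) / np.log(10)     # drift per second, Eq. (2), base 10

def trial(target_present, tau1=-2.0, tau2=2.0):
    lam = np.tile(lam1, (M, 1))
    if target_present:
        lam[0] = lam2                        # target at location 0
    logr = np.zeros(M)                       # one diffusion per location
    for step in range(int(t_max / dt)):
        spikes = rng.poisson(lam * dt)       # (M, N) spike counts in this bin
        logr += spikes @ jump + drift * dt
        tot = (logsumexp(logr * np.log(10)) - np.log(M)) / np.log(10)  # Eq. (7)
        if tot > tau2: return step * dt, True    # decide "target present"
        if tot < tau1: return step * dt, False   # decide "target absent"
    return t_max, False                          # no decision before t_max

print(trial(True), trial(False))   # (RT, decision) for each condition
```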
Experiment 2. - Conditions where Bayesian and Max models differ maximally. In this experiment we test the robustness of the Bayesian and Max models with respect to a fixed threshold. For a Bayesian observer, the thresholds yielding a given error rate can be computed exactly, independent of the display (Eqn. 9 and 8). On the contrary, in order for the Max model to achieve equivalent performance, its threshold must be adjusted differently depending on the number of items M and the target contrast Δθ (Eqn. 11-14). As a result, if a constant threshold is used for all conditions, we would expect the Bayesian observer ER to be roughly constant, whereas the Max model would have considerable ER variability. The error rates are shown in Fig. 3 as we vary M and Δθ. The threshold is set as the optimal threshold that produces 5% error for the Bayesian observer at a single location M = 1 and with Δθ = π/18.

6 Discussion and conclusions

We presented a Bayesian ideal observer model of visual search. To the best of our knowledge, this is the first model that can predict the statistics of both response times (RT) and error rates (ER) purely from physiologically relevant constants (number, tuning width, signal-to-noise ratio of cortical mechanisms) and from image parameters (target contrast and number of distractors). Neurons are modeled as Poisson units and the model has only four free parameters: the number of neurons per hypercolumn, the tuning width of their response curve, and the maximum and minimum firing rates of each neuron. The model predicts qualitatively the main phenomena that are observed in visual search: serial vs. parallel search [1], the Gaussian-like shape of the response time histograms in log time [7] and the faster response times when the target is present [3]. The model is easily adaptable to predictions involving multiple targets, different image features and conjunctions of features. Unlike the case of binary detection/decision, the ideal observer may not be implemented by a diffusion. However, it may be implemented using a precisely defined "soft-max" combination of diffusions, each one of which is computed at a different location across the visual field. We discuss an approximation of the ideal observer, the Max model, which has two natural and simple implementations in neural hardware. The Max model is found experimentally to have a performance that is very close to that of the ideal observer when the task parameters do not change. We explored whether any combination of target contrast and number of distractors would produce significantly different predictions of the ideal observer vs. the Max model approximation and found none in the case where the visual system can estimate decision thresholds in advance. However, our simulations predict different error rates when interleaving images containing diverse contrast levels and distractor numbers.

Acknowledgements: We thank the three anonymous referees for many insightful comments and suggestions; thanks to M. Shadlen for a tutorial discussion on the history of discrimination models. This research was supported by the California Institute of Technology.

References
[1] A.M. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive Psychology, 12(1):97-136, 1980.
[2] W.T. Newsome, K.H. Britten, and J.A. Movshon. Neuronal correlates of a perceptual decision. Nature, 341(6237):52-54, 1989.
[3] P. Verghese. Visual search and attention: A signal detection theory approach. Neuron, 31(4):523-535, 2001.
[4] Vidhya Navalpakkam and Laurent Itti. Search goal tunes visual features optimally. Neuron, 53(4):605-17, Feb 2007.
[5] J. Duncan and G.W. Humphreys. Visual search and stimulus similarity. Psychological Review, 96(3):433, 1989.
[6] J.M. Wolfe. Attention (Ed. H. Pashler), chapter Visual Search, pages 13-73. University College London Press, London, U.K., 1998.
[7] J.M. Wolfe, E.M. Palmer, and T.S. Horowitz. Reaction time distributions constrain models of visual search. Vision Research, 50(14):1304-1311, 2010.
[8] E.M. Palmer, T.S. Horowitz, A. Torralba, and J.M. Wolfe. What are the shapes of response time distributions in visual search? Journal of Experimental Psychology: Human Perception and Performance, 37(1):58, 2011.
[9] Jeffrey M. Beck, Wei Ji Ma, Roozbeh Kiani, Tim Hanks, Anne K. Churchland, Jamie Roitman, Michael N. Shadlen, Peter E. Latham, and Alexandre Pouget. Probabilistic population codes for Bayesian decision making. Neuron, 60(6):1142-52, Dec 2008.
[10] R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J.D. Cohen. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113(4):700, 2006.
[11] A. Wald. Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics, 16(2):117-186, 1945.
[12] M.M. Chun and J.M. Wolfe. Just say no: How are visual searches terminated when there is no target present? Cognitive Psychology, 30(1):39-78, 1996.
[13] R. Ratcliff. A theory of memory retrieval. Psychological Review, 85(2):59-108, 1978.
[14] Philip L. Smith and Roger Ratcliff. Psychology and neurobiology of simple decisions. Trends in Neurosciences, 27(3):161-8, Mar 2004.
[15] Roger Ratcliff and Gail McKoon. The diffusion decision model: theory and data for two-choice decision tasks. Neural Computation, 20(4):873-922, Apr 2008.
[16] D.G. Pelli. Uncertainty explains many aspects of visual contrast detection and discrimination. JOSA A, 2(9):1508-1531, 1985.
[17] R. Ratcliff. A theory of order relations in perceptual matching. Psychological Review, 88(6):552, 1981.
[18] Joshua I. Gold and Michael N. Shadlen. The neural basis of decision making. Annual Review of Neuroscience, 30:535-74, 2007.
[19] R.L. De Valois and K.K. De Valois. Spatial Vision. Oxford University Press, USA, 1990.
[20] M.I. Posner, Y. Cohen, and R.D. Rafal. Neural systems control of spatial orienting. Philosophical Transactions of the Royal Society of London. B, Biological Sciences, 298(1089):187, 1982.
[21] W.E. Hick. On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1):11-46, 1952.
Kernel Embeddings of Latent Tree Graphical Models

Le Song, College of Computing, Georgia Institute of Technology, [email protected]
Ankur P. Parikh, School of Computer Science, Carnegie Mellon University, [email protected]
Eric P. Xing, School of Computer Science, Carnegie Mellon University, [email protected]

Abstract

Latent tree graphical models are natural tools for expressing long range and hierarchical dependencies among many variables which are common in computer vision, bioinformatics and natural language processing problems. However, existing models are largely restricted to discrete and Gaussian variables due to computational constraints; furthermore, algorithms for estimating the latent tree structure and learning the model parameters are largely restricted to heuristic local search. We present a method based on kernel embeddings of distributions for latent tree graphical models with continuous and non-Gaussian variables. Our method can recover the latent tree structures with provable guarantees and perform local-minimum free parameter learning and efficient inference. Experiments on simulated and real data show the advantage of our proposed approach.

1 Introduction

Real world problems often produce high dimensional features with sophisticated statistical dependency structures. One way to compactly model these statistical structures is to use probabilistic graphical models that relate the observed features to a set of latent or hidden variables. By defining a joint probabilistic model over observed and latent variables, the marginal distribution of the observed variables is obtained by integrating out the latent ones. This allows complex distributions over observed variables (e.g., clique models) to be expressed in terms of more tractable joint models (e.g., tree models) over the augmented variable space. Probabilistic models with latent variables have been deployed successfully to a diverse range of problems such as in document analysis [3], social network modeling [10], speech recognition [18] and bioinformatics [5]. In this paper, we will focus on latent variable models where the latent structures are trees (we call it a "latent tree" for short). In these tree-shaped graphical models, the leaves are the set of observed variables (e.g., taxa, pixels, words) while the internal nodes are hidden and intuitively "represent" the common properties of their descendants (e.g., distinct ancestral species, objects in an image, latent semantics). This class of models strikes a nice balance between representation power (e.g., the ability to model cliques) and the complexity of learning and inference processes on these structures (e.g., message passing is exact on trees). In particular, we will study the problems of estimating the latent tree structures, learning the model parameters and performing inference on these models for continuous and non-Gaussian variables where it is not easy to specify a parametric family.

In previous works, the challenging problem of estimating the structure of latent trees has largely been tackled by heuristics since the search space of structures is intractable. For instance, Zhang et al. [28] proposed a search heuristic for hierarchical latent class models by defining a series of local search operations and using EM to compute the likelihood of candidate structures.
Harmeling and Williams [8] proposed a greedy algorithm to learn binary trees by joining two nodes with a high mutual information and iteratively performing EM to compute the mutual information among newly added hidden nodes. Alternatively, Bayesian hierarchical clustering [9] is an agglomerative clustering technique that merges clusters based on a statistical hypothesis test. Many other local search heuristics based on maximum parsimony and maximum likelihood methods can also be found in the phylogenetic community [21]. However, none of these methods extend easily to the nonparametric case since they require the data to be discrete or to have a parametric form such that statistical tests or likelihoods/EM can be easily computed.

Given the structures of the latent trees, learning the model parameters has predominantly relied on likelihood maximization and local search heuristics such as expectation maximization (EM) [6]. Besides the problem of local minima, non-Gaussian statistical features such as multimodality and skewness may pose additional challenges for EM. For instance, parametric models such as mixtures of Gaussians may lead to an exponential blowup in terms of representation during the inference stage of EM, so further approximations may be needed to make these cases tractable. Furthermore, EM can require many iterations to reach a prescribed training precision.

In this paper, we propose a method for latent tree models with continuous and non-Gaussian observations based on the concept of kernel embedding of distributions [23]. The problems we try to address are: how to estimate the structures of latent trees with provable guarantees, and how to perform local-minimum-free parameter learning and efficient inference given the tree structures, all in a nonparametric fashion. The main flavor of our method is to exploit the spectral properties of the joint embedding (or covariance operators) in both the structure recovery and learning/inference stages. For the former, we define a distance measure between variables based on the singular value decomposition of covariance operators. This allows us to generalize some of the distance based latent tree learning procedures such as neighbor joining [20] and the recursive grouping methods [4] to the nonparametric setting. These distance based methods come with strong statistical guarantees which carry over to our nonparametric generalization. After the structure is recovered, we further use the covariance operator and its principal singular vectors to design surrogates for parameters of the latent variables (called a "spectral algorithm"). One advantage of our spectral algorithm is that it is local-minimum-free and hence amenable for further statistical analysis (see [11, 25, 16] for previous work on spectral algorithms). Last, we will demonstrate the advantage of our method over existing approaches in both simulation and real data experiments.

2 Latent Tree Graphical Models

We will focus on latent variable models where the observed variables are continuous and non-Gaussian and the conditional independence structures are specified by trees. We will use uppercase letters to denote random variables (e.g., Xᵢ) and lowercase letters their instantiations (e.g., xᵢ). A latent tree model defines a joint distribution over a set, O = {X₁, …, X_O}, of O observed variables and a set, H = {X_{O+1}, …, X_{O+H}}, of H hidden variables. The complete set of variables is denoted by X = O ∪ H.
For simplicity, we will assume that all observed variables have the same domain 𝒳_O, and that all hidden variables take values from 𝒳_H and have finite dimension d. The joint distribution of X in a latent tree model is fully characterized by a set of conditional distributions (CDs). More specifically, we can select an arbitrary latent node in the tree as the root, and reorient all edges away from the root. Then the set of CDs between nodes and their parents, P(Xᵢ|X_{πᵢ}), is sufficient to characterize the joint distribution (for the root node X_r, we set P(X_r|X_{π_r}) = P(X_r); and we use P to refer to a density in the continuous case): P(X) = ∏ᵢ₌₁^{O+H} P(Xᵢ|X_{πᵢ}). Compared to tree models which are defined solely on observed variables, latent tree models encompass a much larger class of models, allowing more flexibility in modeling observed variables. This is evident if we sum out the latent variables in the joint distribution,

$$P(\mathcal{O}) = \sum_{\mathcal{H}} \prod_{i=1}^{O+H} P(X_i \,|\, X_{\pi_i}). \qquad (1)$$

This expression leads to complicated conditional independence structures between observed variables depending on the tree topology. In other words, latent tree models allow complex distributions over observed variables (e.g., clique models) to be expressed in terms of more tractable joint models over the augmented variable space. This can lead to a significant saving in model parametrization.

For simplicity of explanation, we will focus on latent tree structures where each internal node has exactly 3 neighbors. We can reroot the tree and redirect all the edges away from the root. For a variable X_s, we use ς_s to denote its sibling, π_s to denote its parent, α_s to denote its left child and β_s to denote its right child; the root node will have 3 children, and we use ω_s to denote the extra child. All the observed variables are leaves in the tree, and we will use α*_s, β*_s, π*_s to denote an observed variable which is found by tracing in the direction from node s to its left child α_s, right child β_s, and its parent π_s, respectively. s* denotes any observed variable in the subtree rooted at node s.

3 Kernel Density Estimator and Hilbert Space Embedding

Kernel density estimation (KDE) is a nonparametric way of fitting the density of continuous random variables with non-Gaussian statistical features such as multi-modality and skewness [22]. However, traditional KDE cannot model the latent tree structure. In this paper, we will show that the kernel density estimator can be augmented to deal with latent tree structures using a recent concept called Hilbert space embedding of distributions [23]. Next, we will first explain the basic idea of KDE and distribution embeddings, and show how they are related.

Kernel density estimator. Given a set of i.i.d. samples S = {(x₁ⁱ, …, x_Oⁱ)}ᵢ₌₁ⁿ from P(X₁, …, X_O), KDE estimates the density via

$$\widehat{P}(x_1, \ldots, x_O) = \frac{1}{n} \sum_{i=1}^{n} \prod_{j=1}^{O} k(x_j, x_j^i), \qquad (2)$$

where k(x, x′) is a kernel function. A commonly used kernel function, which we will focus on, is the Gaussian RBF kernel k(x, x′) = (1/(√(2π) σ)) exp(−‖x − x′‖²/2σ²). For the Gaussian RBF kernel, there exists a feature map φ : ℝ ↦ F such that k(x, x′) = ⟨φ(x), φ(x′)⟩_F, and the feature space has the reproducing property, i.e. for all f ∈ F, f(x) = ⟨f, φ(x)⟩_F. Products of kernels are also kernels, which allows us to write ∏ⱼ₌₁^O k(xⱼ, x′ⱼ) as a single inner product ⟨⊗ⱼ₌₁^O φ(xⱼ), ⊗ⱼ₌₁^O φ(x′ⱼ)⟩_{F^O}. Here ⊗ⱼ₌₁^O denotes the tensor product of O feature vectors, which results in a rank-1 tensor of order O. This inner product can be understood by analogy to the finite dimensional case: given x, y, z, x′, y′, z′ ∈ ℝᵈ, (x⊤x′)(y⊤y′)(z⊤z′) = ⟨x ⊗ y ⊗ z, x′ ⊗ y′ ⊗ z′⟩_{ℝ^{d³}}.
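Eq. (2) is simple to implement directly; a minimal sketch (ours), assuming a fixed bandwidth σ chosen by hand:

```python
import numpy as np

# Minimal KDE sketch following Eq. (2): a product of Gaussian RBF kernels
# over the O observed dimensions. The bandwidth sigma is an assumed value.

def gaussian_kernel(x, xi, sigma=0.5):
    return np.exp(-(x - xi) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def kde(query, samples, sigma=0.5):
    """query: (O,) evaluation point; samples: (n, O) i.i.d. training sample."""
    k = gaussian_kernel(query[None, :], samples, sigma)   # (n, O) kernel values
    return np.mean(np.prod(k, axis=1))                    # average of products

rng = np.random.default_rng(0)
samples = rng.normal(size=(500, 3))      # n = 500 samples of O = 3 variables
print(kde(np.zeros(3), samples))         # density estimate at the origin
```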
Hilbert space embedding. C_O := E_O[⊗ⱼ₌₁^O φ(Xⱼ)] is called the Hilbert space embedding of the distribution P(O) with tensor features ⊗ⱼ₌₁^O φ(Xⱼ). In other words, the embedding of a distribution is simply the expected feature of that distribution. The essence of Hilbert space embedding is to represent distributions as elements in Hilbert spaces, and then subsequent manipulation of the distributions can be carried out via Hilbert space operations such as inner product and distance. We next show how to represent a KDE using distribution embeddings. Taking the expected value of a KDE with respect to the random sample S,

$$\mathbb{E}_S\big[\widehat{P}(x_1,\ldots,x_O)\big] = \mathbb{E}_O\Big[\prod_{j=1}^{O} k(x_j, X_j)\Big] = \Big\langle \mathbb{E}_O\big[\otimes_{j=1}^{O}\varphi(X_j)\big],\ \otimes_{j=1}^{O}\varphi(x_j)\Big\rangle_{F^O}, \qquad (3)$$

we see that this expected value is the inner product between the embedding C_O and the tensor features ⊗ⱼ₌₁^O φ(xⱼ). If we replace the embedding C_O by its finite sample estimate Ĉ_O := (1/n) Σᵢ₌₁ⁿ ⊗ⱼ₌₁^O φ(xⱼⁱ), we recover the density estimator in (2). Alternatively, using tensor notation (described in the supplemental), we can rewrite equation (3) as

$$\Big\langle \mathbb{E}_O\big[\otimes_{j=1}^{O}\varphi(X_j)\big],\ \otimes_{j=1}^{O}\varphi(x_j)\Big\rangle_{F^O} = \mathcal{C}_O \times_O \varphi(x_O) \cdots \times_2 \varphi(x_2) \times_1 \varphi(x_1) \qquad (4)$$

where C_O is a big tensor of order O which can be difficult to store and maintain. While traditional KDE cannot make use of the fact that the embedding C_O originates from a distribution with latent tree structure, the embedding view actually allows us to exploit this special structure and further decompose C_O into simpler tensors of much lower orders.

4 Kernel Embedding of Latent Tree Graphical Models

In this section, we assume that the structures of the latent tree graphical models are given, and we will deal with structure learning in the next section. We will show that the tensor expression of KDE in (4) can be computed recursively using a collection of lower order tensors. Essentially, these lower order tensors correspond to the conditional densities in the latent tree graphical models; and the recursive computations try to integrate out the latent variables in the model, and they correspond to the steps in the message passing algorithm for graphical model inference. The challenge is that the message passing algorithm becomes nontrivial to represent and implement in continuous and nonparametric settings. Previous methods may lead to exponential blowup in their message representation and hence various approximations are needed, such as expectation propagation [15], mixture of Gaussians simplification [27], and sampling [12]. In contrast, the distribution embedding view allows us to represent and implement the message passing algorithm efficiently without resorting to approximations. Furthermore, it also allows us to develop a local-minimum-free algorithm for learning the parameters of latent tree graphical models.

4.1 Covariance Operator and Conditional Embedding Operator

We will first explain the concept of conditional embedding operators, which are the nonparametric counterparts of conditional probability tables in the discrete case. Conditional embedding operators will be the key building blocks of a nonparametric message passing algorithm, as much as conditional probability tables are to the ordinary message passing algorithm.
Following [7], we first define the covariance operator C_{X_s X_t}, which allows us to compute the expectation of the product of functions f(X_s) and g(X_t), i.e., E_{X_s X_t}[f(X_s) g(X_t)], using linear operations in the RKHS. More formally, let C_{X_s X_t} : F ↦ F be such that for all f, g ∈ F,

$$\mathbb{E}_{X_s X_t}[f(X_s)\, g(X_t)] = \big\langle f,\ \mathbb{E}_{X_s X_t}[\varphi(X_s)\otimes\varphi(X_t)]\, g\big\rangle_F = \langle f, \mathcal{C}_{st}\, g\rangle_F = \mathcal{C}_{st} \times_2 g \times_1 f \qquad (5)$$

where we abbreviate the notation C_{X_s X_t} as C_{st}, and will follow such abbreviation in the rest of the paper (e.g. C_{s²} is an abbreviation for C_{X_s X_s}). This can be understood by analogy with the finite dimensional case: if x, y, z, v ∈ ℝᵈ, then x⊤(yz⊤)v = (yz⊤) ×₂ v ×₁ x, where we use the tensor-vector multiplication notation from [13] (see supplemental for details). In other words, the covariance operator is also the embedding of the joint distribution P(X_s, X_t). Then the conditional embedding operator can be defined via covariance operators according to Song et al. [26]. A conditional embedding operator allows us to compute conditional expectations E_{X_t|x_s}[f(X_t)] as linear operations in the RKHS. Let C_{t|s} := C_{ts} C_{ss}⁻¹, such that for all f ∈ F,

$$\mathbb{E}_{X_t|x_s}[f(X_t)] = \big\langle f,\ \mathbb{E}_{X_t|x_s}[\varphi(X_t)]\big\rangle_F = \big\langle f,\ \mathcal{C}_{t|s}\,\varphi(x_s)\big\rangle_F = \mathcal{C}_{t|s} \times_2 \varphi(x_s) \times_1 f. \qquad (6)$$

In other words, the operator C_{t|s} takes the feature map φ(x_s) of the point on which we condition, and outputs the conditional expectation of the feature φ(X_t) with respect to P(X_t|x_s). Although the formula looks similar to the Gaussian case, it is important to note that the conditional embedding operator allows us to compute the conditional expectation of any f ∈ F, regardless of the distribution of the random variable in feature space (aside from the condition that h(·) := E_{X_t|X_s=·}[f(X_t)] is in the RKHS on X_s, as noted by Song et al.). In particular, we do not need to assume the random variables have a Gaussian distribution in feature space.

4.2 Representation for Message Passing Algorithm

For simplicity, we will focus on latent trees where all latent variables have degree 3 (but our method can be generalized to higher degrees). We first introduce latent variables into equation (3), E_{O∪H}[∏ⱼ₌₁^O k(xⱼ, Xⱼ)]; then we integrate out the latent variables according to the latent tree structure using a message passing algorithm [17]:

* At a leaf node (always an observed variable) we pass the following message to its parent: m_s(X_{π_s}) = E_{X_s|X_{π_s}}[k(x_s, X_s)].
** An internal latent variable aggregates incoming messages from its two children and then sends an outgoing message to its own parent: m_s(X_{π_s}) = E_{X_s|X_{π_s}}[m_{α_s}(X_s) m_{β_s}(X_s)].
*** Finally, at the root node, all incoming messages are multiplied together and the root variable is integrated out: b_r := E_O[∏ⱼ₌₁^O k(xⱼ, Xⱼ)] = E_{X_r}[m_{α_r}(X_r) m_{β_r}(X_r) m_{ω_r}(X_r)].

The challenge is that message passing becomes nontrivial to represent and implement in continuous and nonparametric settings. Previous methods may lead to exponential blowup in their message representation and hence various approximations are needed, such as expectation propagation [15], mixture of Gaussians simplification [27], and sampling [12]. Song et al. [24] show that the above 3 message update operations can be expressed using Hilbert space embeddings [26], and no further approximation is needed in the message computation.
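To make Eqs. (5)-(6) tangible, here is a sketch (ours, not the paper's construction, which works with Gram matrices of the exact RBF kernel): we use random Fourier features as a finite-dimensional stand-in for φ, so the operators become ordinary matrices. The feature dimension and the ridge regularizer lam are assumptions added for numerical stability.

```python
import numpy as np

# Sketch (ours): conditional embedding operator C_{t|s} = C_{ts} C_{ss}^{-1}
# estimated with random Fourier features approximating the Gaussian RBF kernel.

rng = np.random.default_rng(0)
n, D, sigma, lam = 2000, 100, 0.5, 1e-3
xs = rng.normal(size=n)
xt = np.sin(xs) + 0.1 * rng.normal(size=n)       # toy dependence of X_t on X_s

w = rng.normal(scale=1.0 / sigma, size=D)
b = rng.uniform(0, 2 * np.pi, size=D)
def phi(x):                                      # phi(x)^T phi(x') ~ k(x, x')
    return np.sqrt(2.0 / D) * np.cos(np.outer(x, w) + b)

Ps, Pt = phi(xs), phi(xt)                        # (n, D) feature matrices
C_ts = Pt.T @ Ps / n                             # empirical C_{ts}
C_ss = Ps.T @ Ps / n + lam * np.eye(D)           # regularized C_{ss}
C_t_given_s = C_ts @ np.linalg.inv(C_ss)         # conditional embedding operator

# E[f(X_t) | X_s = 0.5] for f(.) = k(., 0.0), via Eq. (6):
mu = C_t_given_s @ phi(np.array([0.5]))[0]       # conditional mean embedding
print(phi(np.array([0.0]))[0] @ mu)
```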
Basically, the embedding approach assumes that messages are functions in the reproducing kernel Hilbert space, and a message update is an operator that takes several functions as inputs and outputs another function in the reproducing kernel Hilbert space. More specifically, message updates are linear (or multi-linear) operations in feature space:

* At leaf nodes, we have m_s(·) = E_{X_s|X_{π_s}=·}[k(x_s, X_s)] = C_{s|π_s}^⊤ φ(x_s) = C_{s|π_s} ×₁ φ(x_s).
** At internal nodes, we define a tensor product reproducing kernel Hilbert space H := F ⊗ F, under which the product of incoming messages can be written as a single inner product, m_{α_s}(X_s) m_{β_s}(X_s) = ⟨m_{α_s}, φ(X_s)⟩ ⟨m_{β_s}, φ(X_s)⟩ = ⟨m_{α_s} ⊗ m_{β_s}, φ(X_s) ⊗ φ(X_s)⟩_H. Then the message update becomes

$$m_s(\cdot) = \big\langle m_{\alpha_s} \otimes m_{\beta_s},\ \mathbb{E}_{X_s|X_{\pi_s}=\cdot}[\varphi(X_s)\otimes\varphi(X_s)]\big\rangle_H = \mathcal{C}_{s^2|\pi_s} \times_2 m_{\beta_s} \times_1 m_{\alpha_s} \qquad (7)$$
More specifically, these transformations T can be constructed from cross covariance operators of ?1 certain pairs of observed variables and their singular vectors U . We set Ts = (Us> Cs? |?s ) and let Us be the top d right eigenvectors of C?s? s? . Consider the simple case for the leaf node (?). In this case, we can set s? = s and get that Ts?1 = Us> Cs|?s . Consider the following expansion: ?1 > > T m e> s (Us Cs?s? ) = ? (xs )Cs|?s (Us Cs|?s ) ? (Us> Cs|?s )(C?s2 C?>s? |?s ) = ?(xs )> Cs?s? (9) ? m e s = (C?s? s Us ) C?s? s ?(xs ) (10) Here ? denotes pseudo-inverse. The general pattern is that we can relate the transformed latent quantity to observed quantities in two different ways such that we can solve for the transformed latent quantity. A similar strategy can be applied to Ces2 |?s := Cs2 |?s ?1 T??1 ?2 T??1 ?3 Ts> in the s s internal message update, and the Cer3 := Cr3 ?1 T??1 ?2 T??1 ?3 T??1 at the root. We summarize the s s r results on how to compute the transformed quantities below (see supplemental for details). ? * At leaf nodes, m e s = (C?s? s Us ) C?s? s ?(xs ). ** At internal nodes, Ces2 |?s = C??s ??s ?s? ?1 U?>s ?2 U?>s ?3 (C?s? ??s Us )? . *** At the root, Cer3 = C?? ?? ?? ?1 U?> ?2 U?> ?3 U?> . r r r r r r The above results give us an efficient algorithm for computing the expected kernel density br which can take into account the latent tree structures while at the same time avoiding the local minimum problems associated with explicitly recovering latent parameters. The main computation only involves tensor-matrix and tensor-vector multiplications, and a sequence of singular value decompositions of pairwise cross covariance operators. After we obtain the transformed quantities, we can then use them in the message passing algorithm to obtain the final belief br .  n Given a sample S = (xi1 , . . . , xiO ) i=1 drawn i.i.d. from a P(O), the spectral algorithm for latent trees proceeds by replacing all population quantities by their empirical counterpart. For instance, 5 the SVD of covariance operators between Xs and Xt can be estimated by first forming matrices ? = (?(x1s ), . . . , ?(xns )) and ? = (?(x1t ), . . . , ?(xnt )), and estimate Cbts = n1 ??> . Then a singular b (See [25] for more details). value decomposition of Cb can be carried out to obtain an estimate for U 5 Structure Learning of Latent Tree Graphical Models The last section focused on density estimation where the structure of the latent tree is known. In this section, we focus on learning the structure of the latent tree. Structure learning of latent trees is a challenging problem that has largely been tackled by heuristics since the search space of structures is intractable. The additional challenge in our case is that the observed variables are continuous and non-Gaussian, which we are not aware of any existing methods for this problem. Structure learning algorithm We develop a distance based method for constructing latent trees of continuous, non-Gaussian variables. The idea is that if we have a tree metric (distance) between distributions on observed nodes, we can use the property of the tree metric to reconstruct the latent tree structure using algorithms such as neighbor joining [20] and the recursive grouping algorithm [4]. These methods take a distance matrix among all pairs of observed variables as input and output a tree by iteratively adding hidden nodes. 
5 Structure Learning of Latent Tree Graphical Models

The last section focused on density estimation where the structure of the latent tree is known. In this section, we focus on learning the structure of the latent tree. Structure learning of latent trees is a challenging problem that has largely been tackled by heuristics, since the search space of structures is intractable. The additional challenge in our case is that the observed variables are continuous and non-Gaussian, and we are not aware of any existing methods for this problem.

Structure learning algorithm. We develop a distance-based method for constructing latent trees of continuous, non-Gaussian variables. The idea is that if we have a tree metric (distance) between the distributions of observed nodes, we can use the properties of the tree metric to reconstruct the latent tree structure using algorithms such as neighbor joining [20] and the recursive grouping algorithm [4]. These methods take a distance matrix among all pairs of observed variables as input and output a tree by iteratively adding hidden nodes. While these methods are iterative, they have strong theoretical guarantees on structure recovery when the true distance matrix forms an additive tree metric. However, most previously known tree metrics are defined for discrete and Gaussian variables, while in our case the observed variables are continuous and non-Gaussian. We propose a tree metric below which works for the continuous non-Gaussian case.

Tree metric and pseudo-determinant. We first explain some basic concepts of a tree metric. If the joint probability distribution $\mathbb{P}(\mathcal{X})$ has a latent tree structure, then a distance measure $d_{st}$ between an arbitrary pair of variables $X_s$ and $X_t$ is called a tree metric if it satisfies the following path additive condition: $d_{st} = \sum_{(u,v) \in \mathrm{Path}(s,t)} d_{uv}$. For discrete and Gaussian variables, a tree metric can be defined via the determinant $|\cdot|$ [4]:

$d_{st} = -\tfrac{1}{2}\log|\mathcal{C}_{st}\mathcal{C}_{st}^{\top}| + \tfrac{1}{4}\log|\mathcal{C}_{ss}\mathcal{C}_{ss}^{\top}| + \tfrac{1}{4}\log|\mathcal{C}_{tt}\mathcal{C}_{tt}^{\top}|, \qquad (11)$

where $\mathcal{C}_{st}$ denotes the joint probability matrix in the discrete case and the covariance in the Gaussian case; $\mathcal{C}_{ss}$ is the diagonalized marginal probability vector in the discrete case and the variance in the Gaussian case. However, this definition of a tree metric is restricted in the sense that it requires all discrete variables to have the same number of states and all Gaussian variables to have the same dimension. This is because the determinant is only defined (and non-zero) for square, non-singular matrices. For our more general scenario, where the observed variables are continuous and non-Gaussian but the hidden variables have dimension $d$, we define a tree metric based on the pseudo-determinant, which works for our operators.

Nonparametric tree metric. The pseudo-determinant is defined as the product of the non-zero singular values of an operator, $|\mathcal{C}|_* = \prod_{i=1}^{d} \sigma_i(\mathcal{C})$. In our case, since we assume that the dimension of the hidden variables is $d$, the pseudo-determinant is simply the product of the top $d$ singular values. We then define the distance metric between two continuous non-Gaussian variables $X_s$ and $X_t$ as

$d_{st} = -\tfrac{1}{2}\log\big|\mathcal{C}_{st}\mathcal{C}_{st}^{\top}\big|_* + \tfrac{1}{4}\log\big|\mathcal{C}_{ss}\mathcal{C}_{ss}^{\top}\big|_* + \tfrac{1}{4}\log\big|\mathcal{C}_{tt}\mathcal{C}_{tt}^{\top}\big|_*. \qquad (12)$

One can prove that (12) defines a tree metric by inducting on the path length. Here we only show the additive property for the simplest path $X_s - X_u - X_t$, involving only a single hidden variable $X_u$. In this case, we first factorize $|\mathcal{C}_{st}\mathcal{C}_{st}^{\top}|_*$ into $|\mathcal{C}_{s|u}\mathcal{C}_{uu}\mathcal{C}_{t|u}^{\top}\mathcal{C}_{t|u}\mathcal{C}_{uu}\mathcal{C}_{s|u}^{\top}|_*$ according to the Markov property. Then, using Sylvester's determinant theorem, the latter is also equal to $|\mathcal{C}_{s|u}^{\top}\mathcal{C}_{s|u}\mathcal{C}_{uu}\mathcal{C}_{t|u}^{\top}\mathcal{C}_{t|u}\mathcal{C}_{uu}|_*$ by flipping $\mathcal{C}_{s|u}$ to the front. Next, introducing two copies of $|\mathcal{C}_{uu}|_*$ and rearranging terms, we have

$|\mathcal{C}_{st}\mathcal{C}_{st}^{\top}|_* = \frac{|\mathcal{C}_{s|u}\mathcal{C}_{uu}\mathcal{C}_{uu}\mathcal{C}_{s|u}^{\top}|_*}{|\mathcal{C}_{uu}|_*} \cdot \frac{|\mathcal{C}_{t|u}\mathcal{C}_{uu}\mathcal{C}_{uu}\mathcal{C}_{t|u}^{\top}|_*}{|\mathcal{C}_{uu}|_*} = \frac{|\mathcal{C}_{su}\mathcal{C}_{su}^{\top}|_*\, |\mathcal{C}_{tu}\mathcal{C}_{tu}^{\top}|_*}{|\mathcal{C}_{uu}\mathcal{C}_{uu}^{\top}|_*}. \qquad (13)$

Last, we plug this into (12) and obtain the desired path additive property:

$d_{st} = -\tfrac{1}{2}\log|\mathcal{C}_{su}\mathcal{C}_{su}^{\top}|_* - \tfrac{1}{2}\log|\mathcal{C}_{tu}\mathcal{C}_{tu}^{\top}|_* + \tfrac{1}{2}\log|\mathcal{C}_{uu}\mathcal{C}_{uu}^{\top}|_* + \tfrac{1}{4}\log|\mathcal{C}_{ss}\mathcal{C}_{ss}^{\top}|_* + \tfrac{1}{4}\log|\mathcal{C}_{tt}\mathcal{C}_{tt}^{\top}|_* = d_{su} + d_{ut}.$
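As a concrete instance of (12), the sketch below computes the nonparametric tree distance from finite-dimensional empirical covariance operators (assumed to be estimated as in Section 4.3), with the pseudo-determinant taken as the product of the top-$d$ singular values. The function names are ours, not from the paper.

import numpy as np

def log_pseudo_det(C, d):
    # log of the product of the top-d singular values of C
    s = np.linalg.svd(C, compute_uv=False)
    return float(np.sum(np.log(s[:d])))

def tree_distance(C_st, C_ss, C_tt, d):
    # Nonparametric tree metric, equation (12)
    return (-0.5 * log_pseudo_det(C_st @ C_st.T, d)
            + 0.25 * log_pseudo_det(C_ss @ C_ss.T, d)
            + 0.25 * log_pseudo_det(C_tt @ C_tt.T, d))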
6 Experiments

We evaluate our method on synthetic data as well as a real-world crime/communities dataset [1, 19]. For all experiments we compare to two existing approaches. The first is to assume the data are multivariate Gaussian and use the tree metric defined in [4] (which is essentially a function of the correlation coefficient). The second existing approach we compare to is the Nonparanormal (NPN) [14], which assumes that there exist marginal transformations $f_1, \dots, f_p$ such that $f_1(X_1), \dots, f_p(X_p) \sim \mathcal{N}(\mu, \Sigma)$. If the data come from a Nonparanormal distribution, then the transformed data are multivariate Gaussian, and the same tree metric as in the Gaussian case can be used on the transformed data. Our approach makes far fewer assumptions about the data than either of these two methods, which can be more favorable in practice. To perform learning and inference in our approach, we use the spectral algorithm and message passing algorithm described earlier in the paper. For inference in the Gaussian (and Nonparanormal) cases, we use the technique in [4] to learn the model parameters (covariance matrix). Once the covariance matrix has been estimated, computing the marginal of one variable given a set of evidence reduces to solving a linear equation in one variable [2].

[Figure 1 plots Error against Training Sample Size (x10^3) for the Gaussian, NPN, and Kernel-2/5/8 methods on (a) a balanced binary tree, (b) a skewed HMM-like tree, and (c) random trees.]
Figure 1: Comparison of our kernel structure learning method to the Gaussian and Nonparanormal methods on different tree structures.

[Figure 2 shows, for each true number of hidden states (2 through 5), a histogram of the difference between the estimated number of hidden states and the true number.]
Figure 2: Histogram of the differences between the estimated number of hidden states and the true number of states.

Synthetic data: structure recovery. The first experiment demonstrates how our method compares to the Gaussian and Nonparanormal methods in terms of structure recovery. We experiment with 3 different tree types (each with 64 leaves, i.e., observed variables): a balanced binary tree, a completely skewed binary tree (like an HMM), and randomly generated binary trees. For all trees, we use the following generative process to generate the $n$-th sample at a node $s$ (denoted $x_s^{(n)}$): if $s$ is the root, sample from a mixture of 2 Gaussians; otherwise, with probability $\tfrac{1}{2}$ sample from a Gaussian with mean $-x_{\pi_s}^{(n)}$, and with probability $\tfrac{1}{2}$ sample from a Gaussian with mean $x_{\pi_s}^{(n)}$. We vary the training sample size from 200 to 100,000. Once we have computed the empirical tree distance matrix for each algorithm, we use the neighbor joining algorithm [20] to learn the trees. For evaluation, we compare the number of hops between each pair of leaves in the true tree to the number in the estimated tree. For a pair of leaves $i, j$ the error is defined as

$\mathrm{error}(i,j) = \frac{|\mathrm{hops}^*(i,j) - \widehat{\mathrm{hops}}(i,j)|}{\widehat{\mathrm{hops}}(i,j)} + \frac{|\mathrm{hops}^*(i,j) - \widehat{\mathrm{hops}}(i,j)|}{\mathrm{hops}^*(i,j)},$

where $\mathrm{hops}^*$ is the true number of hops and $\widehat{\mathrm{hops}}$ is the estimated number of hops. The total error is then computed by adding the error over all pairs of leaves.
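The evaluation measure above translates directly into code. The sketch below assumes the leaf-to-leaf hop-count matrices for the true and estimated trees are already available (e.g., from an all-pairs shortest path computation over each tree); the helper names are ours.

import numpy as np

def pair_error(hops_true, hops_est):
    diff = np.abs(hops_true - hops_est)
    return diff / hops_est + diff / hops_true

def total_error(H_true, H_est):
    # Sum the pairwise error over all unordered leaf pairs (i < j).
    iu = np.triu_indices_from(H_true, k=1)
    return float(pair_error(H_true[iu].astype(float),
                            H_est[iu].astype(float)).sum())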
The performance of our method depends on the number of singular values chosen, and we experimented with 2, 5, and 8 singular values. Furthermore, we choose the bandwidth $\sigma$ of the Gaussian RBF kernel needed for the covariance operators using the median distance between pairs of training points. For all these choices, our method performs better than the Gaussian and Nonparanormal methods. This is to be expected, since the data we generated are neither Gaussian nor Nonparanormal, yet our method is able to learn the structure correctly. We also note that balanced binary trees are the easiest to learn, while the skewed trees are the hardest (Figure 1).
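The median heuristic just mentioned is a short computation; a minimal sketch (the helper names are ours, not from the paper):

import numpy as np

def median_bandwidth(X):
    # X: (n, p) array; sigma = median pairwise Euclidean distance
    diff = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)
    return float(np.median(dists[iu]))

def rbf_kernel(X, Y, sigma):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))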
Synthetic data: model selection. Next we evaluate the ability of our model to select the correct number of singular values via held-out likelihood. For this experiment we use a balanced binary tree with 16 leaves (31 nodes in total) and 100,000 samples. A different generative process is used, so that it is clear what the correct number of singular values should be (when the hidden state space is continuous, as in our first synthetic experiment, this is unclear). Each internal node is discrete and takes on $d$ values. Each leaf is a mixture of $d$ Gaussians, where which Gaussian to sample from is dictated by the discrete value of the parent. We vary $d$ from 2 through 5 and then run our method for a range of 2 through 8 singular values. We select the model that has the highest likelihood, computed using our spectral algorithm, on a hold-out set of 500 examples. We then take the difference between the number of singular values chosen and the true number of singular values, and plot histograms of this difference (ideally, all trials should fall in the zero bin). The experiment is run for 20 trials. As we can see in Figure 2, when $d$ is low, the held-out likelihood computed by our method does a fairly good job of recovering the correct number. However, as the true number of eigenvalues rises, our method underestimates the true number (although it remains fairly close).

[Figure 3(a) shows the learned latent tree with highlighted groups labeled race, elderly, Education/job, Urban/rural, and Divorce/crime/poverty; Figure 3(b) plots Error against query size (5 to 40) for the Gaussian, NPN, and Kernel methods.]
Figure 3: (a) Visualization of the kernel latent tree learned from the crime data. (b) Comparison of our method to Gaussian and NPN in the predictive task.

Crime data. Finally, we explore the performance of our method on a communities and crime dataset from the UCI repository [1, 19]. In this dataset, several real-valued attributes are collected for several communities, such as ethnicity proportions, income, poverty rate, and divorce rate, and the goal is to predict the number of violent crimes (proportional to the size of the community) that occur based on these attributes. In general these attributes are highly skewed and therefore not well characterized by a Gaussian model. We divide the data into 1,400 samples for training, 300 samples for model selection (held-out likelihood), and 300 samples for testing. We pick the first 50 of these attributes, plus the violent crime variable, and construct a latent tree using our tree metric and the neighbor joining algorithm [20]. We depict the tree in Figure 3 and highlight a few coherent groupings. For example, the "elderly" group attributes are those related to retirement and social security (and thus correlated). The large cluster in the center is where the class variable (violent crimes) is located, next to the poverty rate and the divorce rate, among other relevant variables. Other groupings include type of occupation and education level, as well as ethnic proportions. Thus, overall our method captures sensible relationships.

For a more quantitative evaluation, we condition on a set $E$ of evidence variables and predict the violent crimes class label. We experiment with evidence sets of sizes varying from 5 to 40, and repeat for 40 randomly chosen evidence sets of each fixed size. Since the crime variable is a number between 0 and 1, our error measure is simply $\mathrm{err}(\hat c) = |\hat c - c^*|$ (where $\hat c$ is the predicted value and $c^*$ is the true value). As one can see in Figure 3, our method outperforms both the Gaussian and the Nonparanormal over the whole range of query sizes. Thus, in this case our method is better able to capture the skewed distributions of the variables than the other methods.

Acknowledgments

This work was partially done while LS was at Carnegie Mellon University and Google Research. This work is also supported by an NSF Graduate Research Fellowship (under Grant No. 0750271) to APP, NIH 1R01GM093156, NIH 1RC2HL101487, NSF DBI-0546594, and an Alfred P. Sloan Fellowship to EPX.

References

[1] A. Asuncion and D.J. Newman. UCI Machine Learning Repository, 2007.
[2] D. Bickson. Gaussian belief propagation: Theory and application. ArXiv preprint arXiv:0811.2518, 2008.
[3] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. In NIPS, 2002.
[4] M.J. Choi, V.Y.F. Tan, A. Anandkumar, and A.S. Willsky. Learning latent tree graphical models. ArXiv preprint arXiv:1009.2722, 2010.
[5] A. Clark. Inference of haplotypes from PCR-amplified samples of diploid populations. Molecular Biology and Evolution, 7(2):111-122, 1990.
[6] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39(1):1-22, 1977.
[7] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Mach. Learn. Res., 5:73-99, 2004.
[8] S. Harmeling and C.K.I. Williams. Greedy learning of binary latent trees. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
[9] K.A. Heller and Z. Ghahramani. Bayesian hierarchical clustering. In Proceedings of the 22nd International Conference on Machine Learning, pages 297-304. ACM, 2005.
[10] Peter D. Hoff, Adrian E. Raftery, and Mark S. Handcock. Latent space approaches to social network analysis. JASA, 97(460):1090-1098, 2002.
[11] D. Hsu, S. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. In COLT, 2009.
[12] A. Ihler and D. McAllester. Particle belief propagation. In AISTATS, pages 256-263, 2009.
[13] Tamara Kolda and Brett Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
[14] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. The Journal of Machine Learning Research, 10:2295-2328, 2009.
[15] T. Minka. Expectation Propagation for approximative Bayesian inference. PhD thesis, MIT Media Lab, Cambridge, USA, 2001.
[16] A. Parikh, L. Song, and E. Xing. A spectral algorithm for latent tree graphical models. In ICML, 2011.
[17] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2001.
[18] L. R. Rabiner and B. H. Juang. An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1):4-16, January 1986.
[19] M. Redmond and A. Baveja. A data-driven software tool for enabling cooperative information sharing among police departments. European Journal of Operational Research, 141(3):660-678, 2002.
[20] N. Saitou, M. Nei, et al. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol, 4(4):406-425, 1987.
[21] C. Semple and M.A. Steel. Phylogenetics, volume 24. Oxford University Press, USA, 2003.
[22] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. Chapman and Hall, London, 1986.
[23] A.J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In E. Takimoto, editor, Algorithmic Learning Theory, Lecture Notes in Computer Science. Springer, 2007.
[24] L. Song, A. Gretton, and C. Guestrin. Nonparametric tree graphical models. In 13th Workshop on Artificial Intelligence and Statistics, volume 9 of JMLR Workshop and Conference Proceedings, pages 765-772, 2010.
[25] Le Song, Byron Boots, Sajid Siddiqi, Geoffrey Gordon, and Alex Smola. Hilbert space embeddings of hidden Markov models. In International Conference on Machine Learning, 2010.
[26] Le Song, Jonathan Huang, Alex Smola, and Kenji Fukumizu. Hilbert space embeddings of conditional distributions. In ICML, 2009.
[27] E. Sudderth, A. Ihler, W. Freeman, and A. Willsky. Nonparametric belief propagation. In CVPR, 2003.
[28] N.L. Zhang. Hierarchical latent class models for cluster analysis. The Journal of Machine Learning Research, 5:697-723, 2004.
A Model for Temporal Dependencies in Event Streams

Asela Gunawardana
Microsoft Research
One Microsoft Way
Redmond, WA 98052
[email protected]

Christopher Meek
Microsoft Research
One Microsoft Way
Redmond, WA 98052
[email protected]

Puyang Xu
ECE Dept. & CLSP
Johns Hopkins University
Baltimore, MD 21218
[email protected]

Abstract

We introduce the Piecewise-Constant Conditional Intensity Model, a model for learning temporal dependencies in event streams. We describe a closed-form Bayesian approach to learning these models, and describe an importance sampling algorithm for forecasting future events using these models, using a proposal distribution based on Poisson superposition. We then use synthetic data, supercomputer event logs, and web search query logs to illustrate that our learning algorithm can efficiently learn nonlinear temporal dependencies, and that our importance sampling algorithm can effectively forecast future events.

1 Introduction

The problem of modeling temporal dependencies in temporal streams of discrete events arises in a wide variety of applications. For example, system error logs [14], web search query logs, the firing patterns of neurons [18], and gene expression data [8] can all be viewed as streams of events over time. Events carry information about both their timing and their type (e.g., the web query issued or the type of error logged), and the dependencies between events can be due to both their timing and their types. Modeling these dependencies is valuable for forecasting future events in applications such as system failure prediction for preemptive maintenance or forecasting web users' future interests for targeted advertising.

We introduce the Piecewise-Constant Conditional Intensity Model (PCIM), which is a class of marked point processes [4] that can model the types and timing of events. This model captures the dependencies of each type of event on events in the past through a set of piecewise-constant conditional intensity functions. We use decision trees to represent these dependencies and give a conjugate prior for this model, allowing for closed-form computation of the marginal likelihood and parameter posteriors. Model selection then becomes a problem of choosing a decision tree. Decision tree induction can be done efficiently because of the closed form for the marginal likelihood. Forecasting can be carried out using forward sampling for arbitrary finite-duration queries. For episodic sequence queries, that is, queries that specify particular sequences of events in given future time intervals, we develop a novel approach for estimating the probability of rare queries, which we call the Poisson Superposition Importance Sampler (PSIS).

We validate our learning and inference procedures empirically. Using synthetic data we show that PCIMs can correctly learn the underlying dependency structure of event streams, and that the PSIS leads to effective forecasting. We then use real supercomputer event log data to show that PCIMs can be learned more than an order of magnitude faster than Poisson Networks [15, 18], and that they have better test set likelihood. Finally, we show that PCIMs and the PSIS are useful in forecasting the future interests of real web search users.

2 Related Work

While graphical models such as Bayesian networks [2] and dependency networks [10] are widely used to model the dependencies between variables, they do not model temporal dependencies (see e.g., [8]).
Dynamic Bayesian Networks (DBNs) [5, 9] allow modeling of temporal dependencies in discrete time. It is not clear how the timestamps in our data should be discretized in order to apply the DBN approach. At a minimum, too slow a sampling rate results in a poor representation of the data, and too fast a sampling rate increases the number of samples, making learning and inference more costly. In addition, allowing long-term dependencies requires conditioning on multiple steps into the past, and choosing too fast a sampling rate increases the number of such steps that need to be conditioned on.

Recent progress in modeling continuous time processes includes Continuous Time Bayesian Networks (CTBNs) [12, 13], Continuous Time Noisy-Or (CT-NOR) [16], Poisson Cascades [17], and Poisson Networks [15, 18]. CTBNs are homogeneous Markov models of the joint trajectories of discrete finite variables, rather than models of event streams in continuous time [15]. In contrast, CT-NOR and Poisson Cascades model event streams, but require the modeler to choose a parametric form for the temporal dependencies. Simma et al. [16, 17] describe how this choice significantly impacts model performance and depends strongly on the domain. In particular, the problem of model selection for CT-NOR and Poisson Cascades is unaddressed. PCIMs, in contrast to CT-NOR and Poisson Cascades, perform structure learning to learn how different events in the past affect future events. Poisson Networks, described in more detail below, are closely related to PCIMs, but PCIMs are over an order of magnitude faster to learn and can model nonlinear temporal dependencies.

3 Conditional Intensity Models

In this section, we define Conditional Intensity Models, introduce the class of Piecewise-Constant Conditional Intensity Models, and describe Poisson Networks. We assume that events of different types are distinguished by labels $l$ drawn from a finite set $\mathcal{L}$. An event is then composed of a non-negative time-stamp $t$ and a label $l$. An event sequence is $x = \{(t_i, l_i)\}_{i=1}^{n}$, where $0 < t_1 < \cdots < t_n$. The history at time $t$ of an event sequence $x$ is the sub-sequence $h(t, x) = \{(t_i, l_i) \mid (t_i, l_i) \in x,\ t_i \le t\}$. We write $h_i$ for $h(t_{i-1}, x)$ when it is clear from context which $x$ is meant. By convention $t_0 = 0$. We define the ending time $t(x)$ of an event sequence $x$ as the time of the last event in $x$: $t(x) = \max(\{t : (t, l) \in x\})$, so that $t(h_i) = t_{i-1}$.

A Conditional Intensity Model (CIM) is a set of non-negative conditional intensity functions indexed by label, $\{\lambda_l(t|x;\theta)\}_{l \in \mathcal{L}}$. The data likelihood for this model is

$p(x|\theta) = \prod_{l \in \mathcal{L}} \prod_{i=1}^{n} \lambda_l(t_i|h_i;\theta)^{\mathbb{1}_l(l_i)}\, e^{-\Lambda_l(t_i|h_i;\theta)} \qquad (1)$

where $\Lambda_l(t|x;\theta) = \int_{-\infty}^{t} \lambda_l(\tau|x;\theta)\, d\tau$ for each event sequence $x$, and the indicator function $\mathbb{1}_l(l')$ is one if $l' = l$ and zero otherwise. The conditional intensities are assumed to satisfy $\lambda_l(t|x;\theta) = 0$ for $t \le t(x)$ to ensure that $t_i > t_{i-1} = t(h_i)$. These modeling assumptions are quite weak. In fact, any distribution for $x$ in which the timestamps are continuous random variables can be written in this form. For more details see [4, 6]. Despite the fact that the modeling assumptions are weak, these models offer a powerful approach for decomposing the dependencies of different event types on the past. In particular, this per-label conditional factorization allows one to model detailed label-specific dependence on past events.
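For intuition about (1), the sketch below evaluates the log likelihood in the special case where every $\lambda_l(t|h)$ is a constant, i.e., a homogeneous Poisson process per label; the rates and events are toy values, and this is not the full PCIM:

import math

# Log of the likelihood (1) when each lambda_l(t|h) is a constant rates[l]:
# each inter-event interval contributes -lambda_l * dt for every label
# (the -Lambda_l terms), and the observed label contributes log(lambda_l).
def log_likelihood_constant(x, rates):
    ll, t_prev = 0.0, 0.0
    for t, l in x:
        dt = t - t_prev
        ll -= sum(rates.values()) * dt
        ll += math.log(rates[l])
        t_prev = t
    return ll

print(log_likelihood_constant([(0.7, 'A'), (1.3, 'B'), (2.0, 'A')],
                              {'A': 1.5, 'B': 0.5}))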
3.1 Piecewise-Constant Conditional Intensity Models

Piecewise-Constant Conditional Intensity Models (PCIMs) are Conditional Intensity Models where the conditional intensity functions are assumed to be piecewise-constant. As described below, this assumption allows efficient learning and inference. PCIMs are defined in terms of local structures $S_l$ for each label $l$, which specify regions in time where the corresponding conditional intensity function is constant, and local parameters $\theta_l$ for each label, which specify the values taken in those regions. PCIMs are defined by local structures $S_l = (\Sigma_l, \sigma_l(t, x))$ and local parameters $\theta_l = \{\lambda_{ls}\}_{s \in \Sigma_l}$, where $\Sigma_l$ denotes a set of discrete states, the $\lambda_{ls}$ are non-negative constants, and $\sigma_l$ denotes a state function that maps a time and an event sequence to $\Sigma_l$ and is piecewise-constant in time for every event sequence. The conditional intensity functions are defined as $\lambda_l(t|x) = \lambda_{ls}$ with $s = \sigma_l(t, x)$, and thus are piecewise-constant. The resulting data likelihood can be written as

$p(x|S, \theta) = \prod_{l \in \mathcal{L}} \prod_{s \in \Sigma_l} \lambda_{ls}^{c_{ls}(x)}\, e^{-\lambda_{ls} d_{ls}(x)} \qquad (2)$

where $S = \{S_l\}_{l \in \mathcal{L}}$, $\theta = \{\theta_l\}_{l \in \mathcal{L}}$, $c_{ls}(x)$ is the number of times label $l$ occurs in $x$ when the state function for $l$ maps to state $s$ (i.e., $\sum_i \mathbb{1}_l(l_i)\, \mathbb{1}_s(\sigma_l(t_i, h_i))$), and $d_{ls}(x)$ is the total duration during which the state function for $l$ maps to state $s$ in the data $x$ (i.e., $\int_0^{t(x)} \mathbb{1}_s(\sigma_l(\tau, h(\tau, x)))\, d\tau$).

3.2 Poisson Networks

Poisson networks [15, 18] are closely related to PCIMs. Given a basis set $B$ of piecewise-constant real-valued feature functions $f(t, x)$, a feature vector $\phi_l(t, x)$ is defined for each $l$ by selecting component feature functions from $B$. The resulting $\phi_l(t, x)$ are piecewise-constant in time. The conditional intensity for $l$ is given by the regression $\lambda_l(t|x, \theta) = e^{w_l \cdot \phi_l(t, x)}$ with parameter $w_l$. By convention, the component $\phi_{l,0}(t, x) = 1$, so that $w_{l,0}$ is a bias parameter. The resulting likelihood does not have a conjugate prior, and in our experiments we use iterative MAP parameter estimates under a Gaussian prior, and use a Laplace approximation of the marginal likelihood for structure learning (i.e., feature selection) [15]. In our experiments, each $f \in B$ is specified by a label $l$ and a pair of time offsets $0 \le d_1 < d_2$, and takes on the value $\log\!\left(1 + \frac{c_{l,d_1,d_2}(t,x)}{d_2 - d_1}\right)$, where $c_{l,d_1,d_2}(t, x)$ is the number of times $l$ occurs in $x$ in the interval $[t - d_2, t - d_1)$.

4 Learning PCIMs

In this section, we present an efficient learning algorithm for PCIMs. We give a conjugate prior for the parameters $\theta$ which yields closed-form formulas for the parameter posteriors and the marginal likelihood of the data given a structure $S$. We then give a decision tree based learning algorithm that uses the closed-form marginal likelihood formula to learn the local structure $S_l$ for each label.

4.1 Closed-Form Parameter Posterior and Marginal Likelihood

In general, computing parameter posteriors for likelihoods of the form of equation (1) is complicated. However, in the case of PCIMs, the Gamma distribution is a conjugate prior for $\lambda_{ls}$, despite the fact that the data likelihood of equation (2) is not a product of exponential densities (i.e., when $c_{ls}(x) \ne 1$). The corresponding prior and posterior densities are given by

$p(\lambda_{ls}|\alpha_{ls}, \beta_{ls}) = \frac{\beta_{ls}^{\alpha_{ls}}}{\Gamma(\alpha_{ls})}\, \lambda_{ls}^{\alpha_{ls}-1}\, e^{-\beta_{ls}\lambda_{ls}}; \qquad p(\lambda_{ls}|\alpha_{ls}, \beta_{ls}, x) = p(\lambda_{ls}\,|\,\alpha_{ls} + c_{ls}(x),\ \beta_{ls} + d_{ls}(x))$
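A minimal sketch of this conjugate update: accumulate the sufficient statistics $c_{ls}(x)$ and $d_{ls}(x)$ and add them to the Gamma prior parameters. The sketch assumes, for simplicity, that the state function changes value only at event times (so each inter-event interval maps to a single state); the state function and data are placeholders, not the paper's.

def posterior_rate(x, t_end, sigma_l, label, alpha, beta):
    # c[s] counts events with `label` landing in state s; d[s] accumulates
    # the duration spent in state s (one state per interval, by assumption).
    c, d, hist, t_prev = {}, {}, [], 0.0
    for t, l in x + [(t_end, None)]:
        s = sigma_l(t_prev, hist)
        d[s] = d.get(s, 0.0) + (t - t_prev)
        if l == label:
            c[s] = c.get(s, 0) + 1
        hist = hist + [(t, l)]
        t_prev = t
    # Gamma posterior mean rate for each state: (alpha + c) / (beta + d)
    return {s: (alpha + c.get(s, 0)) / (beta + d[s]) for s in d}

# e.g. with a trivial one-state state function:
rates = posterior_rate([(0.7, 'A'), (1.3, 'B')], 2.0,
                       lambda t, h: 0, 'A', alpha=0.1, beta=1.0)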
Assuming the prior over $\theta$ is a product of such $p(\lambda_{ls}|\alpha_{ls}, \beta_{ls})$, the marginal likelihood is

$p(x|S) = \prod_{l \in \mathcal{L}} \prod_{s \in \Sigma_l} \gamma_{ls}(x); \qquad \gamma_{ls}(x) = \frac{\beta_{ls}^{\alpha_{ls}}}{\Gamma(\alpha_{ls})}\, \frac{\Gamma(\alpha_{ls} + c_{ls}(x))}{(\beta_{ls} + d_{ls}(x))^{\alpha_{ls} + c_{ls}(x)}}$

In our experiments, we use the point estimate $\hat\lambda_{ls} = \frac{\alpha_{ls} + c_{ls}(x)}{\beta_{ls} + d_{ls}(x)}$, which is $\mathbb{E}[\lambda_{ls}\,|\,x]$.

4.2 Structure Learning with Decision Trees

In this section, we specify the set of possible structures in terms of a set of basis state functions, a set of decision trees built from them, and a greedy Bayesian model selection procedure for learning a structure. Finally, we describe the particular set of basis state functions we use in our experiments. We use $B$ to denote the set of basis state functions $f(t, x)$, each taking values in a basis state set $\Sigma_f$. Given $B$, we specify $S_l$ through a decision tree whose interior nodes each have an associated $f \in B$ and a child corresponding to each value in $\Sigma_f$. The per-label state set $\Sigma_l$ is then the set of leaves in the tree. The state function $\sigma_l(t, x)$ is computed by recursively applying the basis state functions in the tree until a leaf is reached. Note that the resulting mapping is a valid state function by construction.

In order to carry out Bayesian model selection, we use a factored structural prior $p(S) \propto \prod_{l \in \mathcal{L}} \prod_{s \in \Sigma_l} \kappa_{ls}$. Since the prior and the marginal likelihood both factor over $l$, the local structures $S_l$ can be chosen independently. We search for each $S_l$ as follows. We begin with $S_l$ being the trivial decision tree that maps all event sequences and times to the root. In this case, $\lambda_l(t|x) = \lambda_l$. Given the current $S_l$, we consider $S_l'$ specified by choosing a leaf $s \in \Sigma_l$ and a basis state function $f \in B$, and assigning $f$ to $s$ to get a set of new child leaves $\{s_1, \dots, s_m\}$, where $m = |\Sigma_f|$. Because the marginal likelihood factors over states, the gain in the posterior of the structure due to this split is

$\frac{p(S_l'|x)}{p(S_l|x)} = \frac{\kappa_{ls_1}\gamma_{ls_1}(x) \cdots \kappa_{ls_m}\gamma_{ls_m}(x)}{\kappa_{ls}\, \gamma_{ls}(x)}.$

The next structure $S_l'$ is chosen by selecting the $s$ and $f$ with the largest gain. The search terminates if there is no gain larger than one. We note that the local structure representation and search can be extended from decision trees to decision graphs in a manner analogous to [3].

In our experiments, we wish to learn how events depend on the timing and type of prior events. We therefore use a set of time- and label-specific basis state functions. In particular, we use binary basis state functions $f_{l',d_1,d_2,\tau}$ indexed by a label $l' \in \mathcal{L}$, two time offsets $0 \le d_1 < d_2$, and a threshold $\tau > 0$. Such an $f$ encodes whether or not the event sequence $x$ contains at least $\tau$ events with label $l'$ with timestamps in the window $[t - d_2, t - d_1)$. Examples of decision trees that use such basis state functions are shown in Figure 1.

5 Forecasting

In this section, we describe how to use PCIMs to forecast whether a sequence of target labels will occur in a given order and in given time intervals. For example, we may wish to know the probability that a computer system will experience a system failure in the next week and again in the following week, or that an internet user will be shown a particular display ad and then visit the advertising merchant's website in the next month. We call such a sequence and set of associated intervals an episodic sequence and denote it by $e = \{(l_j^*, [a_j^*, b_j^*))\}_{j=1}^{k}$. We call $(l_j^*, [a_j^*, b_j^*))$ the $j$-th episode. We say that the episodic sequence $e$ occurs in an event sequence $x$ if $\exists\, i_1 < \cdots < i_k : (t_{i_j}, l_{i_j}) \in x,\ l_{i_j} = l_j^*,\ t_{i_j} \in [a_j^*, b_j^*)$.
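The occurrence condition just defined can be checked with a greedy left-to-right scan, matching each episode to the earliest eligible event after the previous match; a minimal sketch with toy times and labels:

def episode_occurs(x, e):
    # x: list of (time, label) with increasing times
    # e: list of (label, (a, b)) episodes, in order
    j = 0
    for t, l in x:
        if j == len(e):
            break
        lab, (a, b) = e[j]
        if l == lab and a <= t < b:
            j += 1          # episode j matched; move on to episode j+1
    return j == len(e)

# episode_occurs([(0.2, 'C'), (0.6, 'B'), (0.9, 'A')],
#                [('C', (0, 1)), ('B', (0, 1)), ('A', (0, 1))]) -> True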
The set of event sequences $x$ in which $e$ occurs is denoted $X_e$. Given an event sequence $h$ and a time $t^* \ge t(h)$, we term any event sequence $x$ whose history up to $t^*$ agrees with $h$ (i.e., $h(t^*, x) = h$) an extension of $h$ from $t^*$. Our forecasting problem is, given an observed sequence $h$ at time $t^* \ge t(h)$, to compute the probability that $e$ occurs in extensions of $h$ from $t^*$. This probability is $p(X \in X_e \mid h(t^*, X) = h)$ and will be denoted using the shorthand $p(X_e|h, t^*)$. Computing $p(X_e|h, t^*)$ is hard in general because the probability of the episodes of interest can depend on arbitrary numbers of intervening events. We therefore give Monte Carlo estimates for $p(X_e|h, t^*)$, first describing a forward sampling procedure for forecasting episodic sequences (also applicable to other forecasting problems), and then introducing an importance sampling scheme specifically designed for forecasting episodic sequences.

5.1 Forward Sampling

The probability of an episodic sequence can be estimated using a forward sampling approach by sampling $M$ extensions $\{x^{(m)}\}_{m=1}^{M}$ of $h$ from $t^*$ and using the estimate $\hat p_{\mathrm{Fwd}}(X_e|h, t^*; M) = \frac{1}{M}\sum_{m=1}^{M} \mathbb{1}_{X_e}(x^{(m)})$. By Hoeffding's inequality, $P(|\hat p_{\mathrm{Fwd}}(X_e|h, t^*; M) - p(X_e|h, t^*)| > \epsilon) \le 2e^{-2\epsilon^2 M}$. Thus, the error in $\hat p_{\mathrm{Fwd}}(X_e|h, t^*; M)$ falls as $O(1/\sqrt{M})$. It is important to note that $\mathbb{1}_{X_e}(x)$ only depends on $x$ up to $b_k^*$, and thus we need only sample finite extensions $x$ such that $t(x) < b_k^*$ from $p(x \mid h(t^*, x) = h,\ t_{|x|+1} \ge b_k^*)$.

The forward sampling algorithm for Poisson Networks [15] can be easily adapted to PCIMs. Here we outline how to forward sample an extension $x$ of $h$ from $t^*$ to $b_k^*$ given a general CIM. Forward sampling consists of iteratively obtaining a sample sequence $x_i$ of length $i$ by sampling $(t_i, l_i)$ and appending it to a previously sampled sequence $x_{i-1}$ of length $i-1$. The CIM likelihood (equation (1)) of an arbitrary event sequence $x$ can be written as $\prod_{i=1}^{n} p(t_i, l_i|h_i; \theta)$. Thus, we begin with $x_{|h|} = h$, and iteratively sample $(t_i, l_i)$ from $p(t_i, l_i|h_i = x_{i-1}; \theta)$ and append it to $x_{i-1}$ to obtain $x_i$. Note that one needs to use rejection sampling during the first iteration to ensure $t_{|h|+1} > t^*$. The finite extension up to $b_k^*$ is obtained by terminating when $t_i > b_k^*$ and rejecting $t_i$. To sample $(t_i, l_i)$, we note that $p(t_i, l_i|h_i; \theta) = \lambda_{l_i}(t_i|h_i, \theta)\, e^{-\Lambda_{l_i}(t_i|h_i;\theta)} \prod_{l \ne l_i} e^{-\Lambda_l(t_i|h_i;\theta)}$ has a competing risks form [1, 11], so that we can sample $|\mathcal{L}|$ candidate times $t_i^l$ independently from the non-homogeneous exponential densities $\lambda_l(t_i^l|h_i, \theta)\, e^{-\Lambda_l(t_i^l|h_i;\theta)}$, and then let $t_i$ be the smallest of these candidate times and $l_i$ be the corresponding $l$. A more detailed description of sampling $t_i^l$ from piecewise-constant conditional intensities is given in [15]. Finally, we note that the basic sampling procedure can be made more efficient using the techniques described in [15] and [7].
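A sketch of one competing-risks step, in the simplified case where each label's intensity is constant over the step (which is how a PCIM looks within a single piece of its piecewise-constant intensity; crossing a piece boundary would require redrawing from the new rates). The rates are toy values:

import math, random

def sample_next_event(t_prev, rates, rng=random):
    # Draw one candidate time per label from its exponential density and
    # keep the earliest candidate (the competing risks construction).
    best_t, best_l = math.inf, None
    for l, lam in rates.items():
        if lam > 0:
            t = t_prev + rng.expovariate(lam)
            if t < best_t:
                best_t, best_l = t, l
    return best_t, best_l

t1, l1 = sample_next_event(0.0, {'A': 1.5, 'B': 0.5})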
5.2 Importance Sampling

When using a forward sampling approach to forecast unlikely episodic sequences, the episodes of interest will not occur in most of the sampled extensions, and our estimate of $p(X_e|h, t^*)$ will be noisy. In fact, because the absolute error in $\hat p_{\mathrm{Fwd}}$ falls only as the inverse square root of the number of sequences sampled, we would need $O(1/p(X_e|h, t^*)^2)$ sample sequences to get non-trivial lower bounds on $p(X_e|h, t^*)$ using a forward sampling approach. To mitigate this problem, we develop an importance sampling approach, where sequences are drawn from a proposal distribution $q(\cdot)$ that has an increased likelihood of generating extensions in which $X_e$ occurs, and a weighted empirical estimate is then used. In particular, we sample extensions $x^{(m)}$ of $h$ from $t^*$ from $q(x \mid h(t^*, x) = h,\ t_{|x|+1} \ge b_k^*)$ instead of $p(x \mid h(t^*, x) = h,\ t_{|x|+1} \ge b_k^*)$, and estimate $p(X_e|h, t^*)$ through

$\hat p_{\mathrm{Imp}}(X_e|h, t^*; M) = \frac{1}{\sum_{m=1}^{M} w(x^{(m)})} \sum_{m=1}^{M} w(x^{(m)})\, \mathbb{1}_{X_e}(x^{(m)}), \qquad w(x) = \frac{p(x \mid h(t^*, x) = h,\ t_{|x|+1} \ge b_k^*)}{q(x \mid h(t^*, x) = h,\ t_{|x|+1} \ge b_k^*)}$

The Poisson Superposition Importance Sampler (PSIS) is an importance sampler whose proposal distribution $q$ is based on Poisson superposition. This proposal distribution is defined to be a CIM whose conditional intensity functions are given by $\lambda_l(t|x; \theta) + \lambda_l^{+}(t|x)$, where $\lambda_l(t|x; \theta)$ is the conditional intensity function of $l$ under the model and $\lambda_l^{+}(t|x)$ is given by

$\lambda_l^{+}(t|x) = \begin{cases} \frac{1}{b_{j(x)}^* - a_{j(x)}(x)} & \text{for } l = l_{j(x)}^*,\ t \in [a_{j(x)}(x),\, b_{j(x)}^*),\ \text{and } j(x) \ne 0, \\ 0 & \text{otherwise,} \end{cases}$

where the active episode $j(x)$ is $0$ if $t(x) \ge b_j(x)$ for $j = 1, \dots, k$, and is $\min(\{j : b_j(x) > t(x)\})$ otherwise. The time $b_j(x)$ at which the $j$-th episode ceases to be active is the time at which the $j$-th episode occurs in $x$, or $b_j^*$ if it does not occur. If the episodic intervals $[a_j^*, b_j^*)$ do not overlap, $a_j(x) = a_j^*$. In general, $a_j(x)$ and $b_j(x)$ are given by the recursion

$a_j(x) = \max(\{a_j^*,\ b_{j-1}(x)\}), \qquad b_j(x) = \min(\{b_j^*\} \cup \{t_i : (t_i, l_i) \in x,\ l_i = l_j^*,\ t_i \in [a_j(x), b_j^*)\}).$

This choice of $q$ makes it likely that the $j$-th episode will occur after the $(j-1)$-th episode. As the proposal distribution is also a CIM, importance sampling can be done using the forward sampling procedure above. If the model is a PCIM, the proposal distribution is also a PCIM, since $\lambda_l^{+}(t|x)$ is piecewise-constant in $t$. In practice, the computation of $j(x)$, $a_j(x)$, and $b_j(x)$ can be done during forward sampling. The importance weight corresponding to our proposal distribution is

$w(x) = \prod_{j=1}^{k} \left[ \exp\!\left(\frac{b_j(x) - a_j(x)}{b_j^* - a_j(x)}\right) \prod_{(t_i, l_i) \in x:\, t_i = b_j(x),\, l_i = l_j^*} \frac{\lambda_{l_j^*}(t_i|x_i)}{\lambda_{l_j^*}(t_i|x_i) + \frac{1}{b_j^* - a_j(x)}} \right].$

In many problems, the importance weight $w(x)$ of a sequence $x$ of length $n$ is a product of $n$ small terms. When $n$ is large, this can cause the importance weights to become degenerate, a problem that is often solved using particle filtering [7]. Note that the second product in $w(x)$ above has at most one term for each $j$, so that $w(x)$ has $k$ terms corresponding to the $k$ episodes, which is independent of $n$. Thus, we do not experience the problem of degenerate weights when $k$ is small, regardless of the number of events sampled.
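The self-normalized estimate $\hat p_{\mathrm{Imp}}$ above is a short computation once the sampler has produced weights and episode indicators; a minimal sketch:

import numpy as np

def p_imp(weights, hits):
    # weights: w(x^(m)); hits: 1_{X_e}(x^(m)) for each sampled extension
    w = np.asarray(weights, dtype=float)
    z = np.asarray(hits, dtype=float)
    return float((w * z).sum() / w.sum())

print(p_imp([0.8, 1.2, 0.5], [1, 0, 1]))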
6 Experimental Results

We first validate that PCIMs can learn temporal dependencies and that the PSIS gives faster forecasting than forward sampling, using a synthetic data set. We then show that PCIMs are more than an order of magnitude faster to train than Poisson Networks and better model unseen test data, using real supercomputer log data. Finally, we show that PCIMs and the PSIS allow forecasting the future interests of web search users, using real log data from a major commercial search engine.

6.1 Validation on Synthetic Data

In order to evaluate the ability of PCIMs to learn nonlinear temporal dependencies, we sampled data from a known model and verified that the dependencies learned were correct. Data was sampled from a PCIM with $\mathcal{L} = \{A, B, C\}$. The known model is shown in Figure 1.

[Figure 1 depicts the three learned structures as decision trees whose internal nodes test for the presence of a label in a window, e.g. "A in [t-1,t)", "A in [t-2,t-1)", and "A in [t-5,t)", and whose leaves hold rates $\lambda$ taking values such as 0.0, 0.002, 0.1, 0.2, and 10.0; panels (a), (b), and (c) show the trees for event types A, B, and C.]
Figure 1: Decision trees representing $S$ and $\theta$ for events of type A, B, and C.

We sampled 100 time units of data, observing 97 instances of A, 58 instances of B, and 71 instances of C. We then learned a PCIM from the sampled data. We used basis state functions that tested for the presence of each label in windows with boundaries at $t - 0, 1, 2, \dots, 10$, and $+\infty$ time units. We used a common prior with a mean rate of 0.1 and an equivalent sample size of one time unit for all $\lambda_{ls}$, and the structural prior described above with $\kappa_{ls} = 0.1$ for all $s$. The learned PCIM perfectly recovered the correct model structure. We repeated the experiment by sampling data from a model with fifteen labels, consisting of five independent copies of the model above. That is, $\mathcal{L} = \{A_1, B_1, C_1, \dots, A_5, B_5, C_5\}$, with each triple $A_i, B_i, C_i$ independent of the other labels, and dependent on each other as specified by Figure 1. Once again, the model structure was recovered perfectly.

We evaluated the PSIS in forecasting event sequences with the model shown in Figure 1. The convergence of importance sampling is compared with that of forward sampling in Figure 2. We give results for forecasting three different episodic sequences, consisting of the label sequences {C}, {C, B}, and {C, B, A}, all in the interval [0, 1], given an empty history. The three queries are given in order of decreasing probability, so that inference becomes harder. We show how estimates of the probabilities of the given episodic sequences vary as a function of the number of sequences sampled, giving the mean and variance of the trajectories of the estimates computed over ten runs. For all three queries, importance sampling converges faster and has lower variance. Since exact inference is infeasible for this model, we forward sample 4,000,000 event sequences and display this estimate. Note that despite the large sample size, the Hoeffding bound gives a 95% confidence interval of ±0.0006 for this estimate, which is large relative to the probabilities estimated. This further suggests the need for importance sampling for rare label sequences.

[Figure 2 shows panels (a) Label C in [0, 1], (b) Labels C, B in [0, 1], and (c) Labels C, B, A in [0, 1].]
Figure 2: Trajectories of $\hat p_{\mathrm{Imp}}$ and $\hat p_{\mathrm{Fwd}}$ vs. the number of sequences sampled for three different queries. The dashed and dotted lines show the empirical mean and standard deviation over ten runs of $\hat p_{\mathrm{Imp}}$ and $\hat p_{\mathrm{Fwd}}$. The solid line shows $\hat p_{\mathrm{Fwd}}$ based on 4 million event sequences.

6.2 Modeling Supercomputer Event Logs

We compared PCIM and Poisson Nets on the task of modeling system event logs from the BlueGene/L supercomputer at Lawrence Livermore National Laboratory [14], available at the USENIX Computer Failure Data Repository. We filtered out informational (non-alert) messages from the logs, and randomly split the events by node into a training set with 311,060 alerts from 21,962 nodes and a test set with 68,502 alerts from 9,412 nodes. We learned dependencies between the 38 alert types in the data. We treat the events from each node as separate sequences, and use a product of the per-sequence likelihoods given in equation (1). For both models, we used window boundaries at $t - 1/60$, 1, 60, 3600, and $\infty$ seconds.
The PCIM used count threshold basis state functions with thresholds of 1, 4, 16, and 64, while the Poisson Net used log count feature vectors as described above. Both models used priors with a mean rate of one event every 100 days, no dependencies, and an equivalent sample size of one second. Both used a structural prior with $\kappa_{ls} = 0.1$. Table 1 shows the test set likelihood and the run time for the two approaches. PCIM achieves better test set likelihood and is more than an order of magnitude faster.

              Test Log Likelihood   Training Time
PCIM          -85.3                 11 min
Poisson Net   -88.8                 3 hr 33 min

Table 1: A comparison of the PCIM and Poisson Net in modeling supercomputer event logs. The test set log likelihood reported has been divided by the number of test nodes (9,412). The training times for the PCIM and Poisson Net are also shown.

6.3 Forecasting Future Interests of Web Search Users

We used the query logs of a major internet search engine to investigate the use of PCIMs in forecasting the future interests of web search users. All queries are mapped to one of 36 different interest categories using an automatic classifier. Thus, $\mathcal{L}$ contains 36 labels, such as "Travel" or "Health & Wellness." Our training set contains event sequences for approximately 23k users, consisting of about 385k timestamped labels recorded over a two-month period. The test set contains event sequences for approximately 11k users, with about 160k timestamped labels recorded over the next month. We trained a PCIM on the training data using window boundaries at $t - 1$ hour, $t - 1$ day, and $t - 1$ week, and basis state functions that tested for the presence of one or more instances of each label in each window, treating users as i.i.d. The prior had a mean rate of one event per year and an equivalent sample size of one day. The structural prior had $\kappa_{ls} = 0.1$. The model took 1 day and 18 hours to train on a 3 GHz workstation. We did not compare to a Poisson network on this data since, as shown above, Poisson networks take an order of magnitude longer to learn.
Results similar to Figure 3 were obtained for other target labels. 7 Discussion We presented the Piecewise-Constant Conditional Intensity Model, which is a model of temporal dependencies in continuous time event streams. We gave a conjugate prior and a greedy tree building procedure that allow for efficient learning of these models. Dependencies on the history are represented through automatically learned combinations of a given set of basis state functions. One of the key benefits of PCIMs is that they allow domain knowledge to be encoded in these basis state functions. This domain knowledge is incorporated into the model during structure search in situations where it is supported by the data. The fact that we use decision trees allows us to easily interpret the learned dependencies. In this paper, we focused on basis state functions indexed by a fixed set of time windows and labels. Exploring alternative types of basis state functions is an area for future research. For example, basis state functions could encode the most recent events that have occurred in the history rather than the events that occurred in windows of interest. The capacity of the resulting model class depends on the set of basis state functions chosen. Understanding how to choose the basis state functions and how to adapt our learning procedure to control the resulting capacity is another open topic. We also presented the Poisson Superposition Importance Sampler for forecasting episodic sequences with PCIMs. Developing forecasting algorithms for more general queries is of interest. Finally, we demonstrated the value of PCIMs in modeling the temporal behavior of web search users and of supercomputer nodes. In many applications, we have access to richer event streams such as spatio-temporal event streams and event streams with structured labels. It would be interesting to extend PCIMs to handle such rich event streams. 8 References [1] Simeon M. Berman. Note on extreme values, competing risks and semi-Markov processes. Ann. Math. Stat., 34(3):1104?1106, 1963. [2] W. Buntine. Theory refinement on Bayesian networks. In UAI, 1991. [3] David Maxwell Chickering, David Heckerman, and Christopher Meek. A Bayesian approach to learning Bayesian networks with local structure. In UAI, 1997. [4] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Elementary Theory and Methods, volume I. Springer, 2 edition, 2003. [5] Thomas Dean and Keiji Kanazawa. Probabilistic temporal reasoning. In AAAI, 1988. [6] Vanessa Didelez. Graphical models for marked point processes based on local independence. J. Roy. Stat. Soc., Ser. B, 70(1):245?264, 2008. [7] Yu Fan and Christian R. Shelton. Sampling for approximate inference in continuous time Bayesian networks. In AI & M, 2008. [8] N. Friedman, I. Nachman, and D. Pe?er. Using Bayesian networks to analyze expression data. J. Comp. Bio., 7:601?620, 2000. [9] Nir Friedman, Kevin Murphy, and Stuart Russell. Learning the structure of dynamic probabilistic networks. In UAI, 1998. [10] David Heckerman, David Maxwell Chickering, Christopher Meek, Robert Rounthwaite, and Carl Kadie. Dependency networks for inference, collaborative filtering, and data visualization. JMLR, 1:49?75, October 2000. [11] A. A. J. Marley and Hans Colonius. The ?horse race? random utility model for choice probabilities and reaction times, and its competing risks interpretation. J. Math. Psych., 36:1?20, 1992. [12] Uri Nodelman, Christian R. Shelton, and Daphne Koller. Continuous time Bayesian networks. 
[12] Uri Nodelman, Christian R. Shelton, and Daphne Koller. Continuous time Bayesian networks. In UAI, 2002.
[13] Uri Nodelman, Christian R. Shelton, and Daphne Koller. Expectation maximization and complex duration distributions for continuous time Bayesian networks. In UAI, 2005.
[14] Adam Oliner and Jon Stearley. What supercomputers say: an analysis of five system logs. In IEEE/IFIP Conf. Dep. Sys. Net., 2007.
[15] Shyamsundar Rajaram, Thore Graepel, and Ralf Herbrich. Poisson-networks: A model for structured point processes. In AISTATS, 2005.
[16] Aleksandr Simma, Moises Goldszmidt, John MacCormick, Paul Barham, Richard Brock, Rebecca Isaacs, and Richard Mortier. CT-NOR: Representing and reasoning about events in continuous time. In UAI, 2008.
[17] Aleksandr Simma and Michael I. Jordan. Modeling events with cascades of Poisson processes. In UAI, 2010.
[18] Wilson Truccolo, Uri T. Eden, Matthew R. Fellows, John P. Donoghue, and Emery N. Brown. A point process framework relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J. Neurophysiol., 93:1074-1089, 2005.
Iterative Learning for Reliable Crowdsourcing Systems

David R. Karger, Sewoong Oh, Devavrat Shah
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology

Abstract

Crowdsourcing systems, in which tasks are electronically distributed to numerous "information piece-workers", have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks, and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm significantly outperforms majority voting and, in fact, is asymptotically optimal through comparison to an oracle that knows the reliability of every worker.

1 Introduction

Background. Crowdsourcing systems have emerged as an effective paradigm for human-powered problem solving and are now in widespread use for large-scale data-processing tasks such as image classification, video annotation, form data entry, optical character recognition, translation, recommendation, and proofreading. Crowdsourcing systems such as Amazon Mechanical Turk provide a market where a "taskmaster" can submit batches of small tasks to be completed for a small fee by any worker choosing to pick them up. For example, a worker may be able to earn a few cents by indicating which images from a set of 30 are suitable for children (one of the benefits of crowdsourcing is its applicability to such highly subjective questions). Since typical crowdsourced tasks are tedious and the reward is small, errors are common even among workers who make an effort. At the extreme, some workers are "spammers", submitting arbitrary answers independent of the question in order to collect their fee. Thus, all crowdsourcers need strategies to ensure the reliability of answers. Because the worker crowd is large, anonymous, and transient, it is generally difficult to build up a trust relationship with particular workers.(1) It is also difficult to condition payment on correct answers, as the correct answer may never truly be known, and delaying payment can annoy workers and make it harder to recruit them for your future tasks. Instead, most crowdsourcers resort to redundancy, giving each task to multiple workers, paying them all irrespective of their answers, and aggregating the results by some method such as majority voting. For such systems there is a natural core optimization problem to be solved.

E-mail: {karger,swoh,devavrat}@mit.edu. This work was supported in parts by the AFOSR complex networks project, MURI on network tomography, and the National Science Foundation.
(1) For certain high-value tasks, crowdsourcers can use entrance exams to "prequalify" workers and block spammers, but this increases the cost and still provides no guarantee that the prequalified workers will try hard.
Assuming the taskmaster wishes to achieve a certain reliability in their answers, how can they do so at minimum cost (which is equivalent to asking how they can do so while asking the fewest possible questions)?

Several characteristics of crowdsourcing systems make this problem interesting. Workers are neither persistent nor identifiable; each batch of tasks will be solved by a worker who may be completely new and whom you may never see again. Thus one cannot identify and reuse particularly reliable workers. Nonetheless, by comparing one worker's answer to others' on the same question, it is possible to draw conclusions about a worker's reliability, which can be used to weight their answers to other questions in their batch. However, batches must be of manageable size, obeying limits on the number of tasks that can be given to a single worker. Another interesting aspect of this problem is the choice of task assignments. Unlike many inference problems which make inferences based on a fixed set of signals, our algorithm can choose which signals to measure by deciding which questions to ask which workers. In the following, we first define a formal model that captures these aspects of the problem. We will then describe a scheme for deciding which tasks to assign to which workers and introduce a novel iterative algorithm to infer the correct answers from the workers' responses.

Setup. We model a set of $m$ tasks $\{t_i\}_{i \in [m]}$ as each being associated with an unobserved "correct" answer $s_i \in \{\pm 1\}$. Here and after, we use $[N]$ to denote the set of first $N$ integers. In the earlier image categorization example, each task corresponds to labeling an image as suitable for children ($+1$) or not ($-1$). We assign these tasks to $n$ workers from the crowd, which we denote by $\{w_j\}_{j \in [n]}$. When a task is assigned to a worker, we get a possibly inaccurate answer from the worker. We use $A_{ij} \in \{\pm 1\}$ to denote the answer if task $t_i$ is assigned to worker $w_j$. Some workers are more diligent or have more expertise than others, while some other workers might be spammers. We choose a simple model to capture this diversity in workers' reliability: we assume that each worker $w_j$ is characterized by a reliability $p_j \in [0,1]$, and that they make errors randomly on each question they answer. Precisely, if task $t_i$ is assigned to worker $w_j$ then
$$A_{ij} \;=\; \begin{cases} \;\;\,s_i & \text{with probability } p_j, \\ -s_i & \text{with probability } 1 - p_j, \end{cases}$$
and $A_{ij} = 0$ if $t_i$ is not assigned to $w_j$. The random variable $A_{ij}$ is independent of any other event given $p_j$. (Throughout this paper, we use boldface characters to denote random variables and random matrices unless it is clear from the context.) The underlying assumption here is that the error probability of a worker does not depend on the particular task and all the tasks share an equal level of difficulty. Hence, each worker's performance is consistent across different tasks. We further assume that the reliabilities of workers $\{p_j\}_{j \in [n]}$ are independent and identically distributed random variables with a given distribution on $[0,1]$. One example is the spammer-hammer model, where each worker is either a "hammer" with probability $q$ or a "spammer" with probability $1-q$. A hammer answers all questions correctly, in which case $p_j = 1$, and a spammer gives random answers, in which case $p_j = 1/2$. Given this random variable $p_j$, we define an important parameter $q \in [0,1]$, which captures the "average quality" of the crowd:
$$q \;\equiv\; \mathbb{E}\big[(2p_j - 1)^2\big]\,.$$
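To make the setup concrete, here is a minimal simulator for this model under the spammer-hammer prior on a random $(l,r)$-regular assignment, assuming numpy. The function name and the simplified configuration-model matching (which may occasionally repeat an $(i,j)$ pair, silently overwritten here) are our own illustration, not part of the paper. Note that under the spammer-hammer model the quality parameter $q$ above equals the hammer probability.

```python
import numpy as np

def simulate_crowdsourcing(m, l, r, q_hammer, rng):
    """Sample the model above under the spammer-hammer prior on a random
    (l, r)-regular assignment; names here are illustrative."""
    n = m * l // r                       # number of workers, since ml = nr
    s = rng.choice([-1, 1], size=m)      # latent correct answers s_i
    # hammer (p_j = 1) with probability q_hammer; spammer (p_j = 1/2) otherwise
    p = np.where(rng.random(n) < q_hammer, 1.0, 0.5)
    # configuration model: match l stubs per task to r stubs per worker;
    # occasional repeated (i, j) pairs are simply overwritten here
    task_stubs = np.repeat(np.arange(m), l)
    worker_stubs = np.repeat(np.arange(n), r)
    rng.shuffle(worker_stubs)
    A = np.zeros((m, n))
    for i, j in zip(task_stubs, worker_stubs):
        A[i, j] = s[i] if rng.random() < p[j] else -s[i]
    return A, s, p

rng = np.random.default_rng(0)
A, s, p = simulate_crowdsourcing(m=1000, l=10, r=10, q_hammer=0.3, rng=rng)
```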
A value of $q$ close to one indicates that a large proportion of the workers are diligent, whereas $q$ close to zero indicates that there are many spammers in the crowd. The definition of $q$ is consistent with the use of $q$ in the spammer-hammer model. We will see later that our bound on the error rate of our inference algorithm holds for any distribution of $p_j$ but depends on the distribution only through this parameter $q$. It is quite realistic to assume the existence of a prior distribution for $p_j$. The model is therefore quite general: in particular, it is met if we simply randomize the order in which we upload our task batches, since this will have the effect of randomizing which workers perform which batches, yielding a distribution that meets our requirements. On the other hand, it is not realistic to assume that we know what the prior is. To execute our inference algorithm for a given number of iterations, we do not require any knowledge of the distribution of the reliability. However, $q$ is necessary in order to determine how many times a task should be replicated and how many iterations we need to run to achieve a certain reliability.

Under this crowdsourcing model, a taskmaster first decides which tasks should be assigned to which workers, and then estimates the correct solutions $\{s_i\}_{i\in[m]}$ once all the answers $\{A_{ij}\}$ are submitted. We assume a one-shot scenario in which all questions are asked simultaneously and then an estimation is performed after all the answers are obtained. In particular, we do not allow allocating tasks adaptively based on the answers received thus far. Then, assigning tasks to nodes amounts to designing a bipartite graph $G(\{t_i\}_{i\in[m]} \cup \{w_j\}_{j\in[n]}, E)$ with $m$ task and $n$ worker nodes. Each edge $(i,j) \in E$ indicates that task $t_i$ was assigned to worker $w_j$.

Prior Work. A naive approach to identify the correct answer from multiple workers' responses is to use majority voting. Majority voting simply chooses what the majority of workers agree on. When there are many spammers, majority voting is error-prone since it weights all the workers equally. We will show that majority voting is provably sub-optimal and can be significantly improved upon. To infer the answers of the tasks and also the reliability of workers, Dawid and Skene [1, 2] proposed an algorithm based on expectation maximization (EM) [3]. This approach has also been applied in classification problems where the training data is annotated by low-cost noisy "labelers" [4, 5]. In [6] and [7], this EM approach has been applied to more complicated probabilistic models for image labeling tasks. However, the performance of these approaches is only empirically evaluated, and there is no analysis that proves performance guarantees. In particular, EM algorithms require an initial starting point, which is typically randomly guessed. The algorithm is highly sensitive to this initialization, making it difficult to predict the quality of the resulting estimate. The advantage of using low-cost noisy "labelers" has been studied in the context of supervised learning, where a set of labels on a training set is used to find a good classifier. Given a fixed budget, there is a tradeoff between acquiring a larger training dataset or acquiring a smaller dataset but with more labels per data point. Through extensive experiments, Sheng, Provost and Ipeirotis [8] show that getting repeated labeling can give considerable advantage.

Contributions.
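As a baseline for the comparisons below, majority voting is a one-liner; the tie-breaking rule in this sketch is an arbitrary choice of ours.

```python
import numpy as np

def majority_vote(A, rng):
    """Baseline estimate s_i = sign(sum_j A_ij), with ties broken at random."""
    s_hat = np.sign(A.sum(axis=1))
    ties = s_hat == 0
    s_hat[ties] = rng.choice([-1, 1], size=ties.sum())
    return s_hat
```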
In this work, we provide a rigorous treatment of designing a crowdsourcing system with the aim of minimizing the budget to achieve completion of a set of tasks with a certain reliability. We provide both an asymptotically optimal graph construction (random regular bipartite graph) and an asymptotically optimal algorithm for inference (iterative algorithm) on that graph. As the main result, we show that our algorithm performs as well as the best possible algorithm. The surprise lies in the fact that the optimality of our algorithm is established by comparing it with the best algorithm, one that is free to choose any graph, regular or irregular, and performs optimal estimation based on the information provided by an oracle about the reliability of the workers. Previous approaches focus on developing inference algorithms assuming that a graph is already given. None of the prior work on crowdsourcing provides any systematic treatment of the graph construction. To the best of our knowledge, we are the first to study both aspects of crowdsourcing together and, more importantly, establish optimality.

Another novel contribution of our work is the analysis technique. The iterative algorithm we introduce operates on real-valued messages whose distribution is a priori difficult to analyze. To overcome this challenge, we develop a novel technique of establishing that these messages are sub-Gaussian (see Section 3 for a definition) using recursion, and compute the parameters in a closed form. This allows us to prove the sharp result on the error rate, and this technique could be of independent interest for analyzing a more general class of algorithms.

2 Main result

Under the crowdsourcing model introduced, we want to design algorithms to assign tasks and estimate the answers. In what follows, we explain how to assign tasks using a random regular graph and introduce a novel iterative algorithm to infer the correct answers. We state the performance guarantees for our algorithm and provide comparisons to majority voting and an oracle estimator.

Task allocation. Assigning tasks amounts to designing a bipartite graph $G(\{t_i\}_{i\in[m]} \cup \{w_j\}_{j\in[n]}, E)$, where each edge corresponds to a task-worker assignment. The taskmaster makes a choice of how many workers to assign to each task (the left degree $l$) and how many tasks to assign to each worker (the right degree $r$). Since the total number of edges has to be consistent, the number of workers $n$ directly follows from $ml = nr$. To generate an $(l,r)$-regular bipartite graph we use a random graph generation scheme known as the configuration model in the random graph literature [9, 10]. In principle, one could use an arbitrary bipartite graph $G$ for task allocation. However, as we show later in this paper, random regular graphs are sufficient to achieve order-optimal performance.

Inference algorithm. We introduce a novel iterative algorithm which operates on real-valued task messages $\{x_{i\to j}\}_{(i,j)\in E}$ and worker messages $\{y_{j\to i}\}_{(i,j)\in E}$. The worker messages are initialized as independent Gaussian random variables. At each iteration, the messages are updated according to the update rule below, where $\partial i$ denotes the neighborhood of $t_i$. Intuitively, a worker message $y_{j\to i}$ represents our belief on how "reliable" the worker $j$ is, such that our final estimate is a weighted sum of the answers weighted by each worker's reliability: $\hat{s}_i = \mathrm{sign}\big(\sum_{j\in\partial i} A_{ij}\, y_{j\to i}\big)$.

Iterative Algorithm
Input: $E$, $\{A_{ij}\}_{(i,j)\in E}$, $k_{\max}$
Output: Estimate vector $\hat{s}(\{A_{ij}\})$
1: For all $(i,j) \in E$ do: initialize $y_{j\to i}^{(0)}$ with random $Z_{ij} \sim \mathcal{N}(1,1)$;
2: For $k = 1, \ldots, k_{\max}$ do:
     For all $(i,j) \in E$ do: $x_{i\to j}^{(k)} \leftarrow \sum_{j' \in \partial i \setminus j} A_{ij'}\, y_{j'\to i}^{(k-1)}$;
     For all $(i,j) \in E$ do: $y_{j\to i}^{(k)} \leftarrow \sum_{i' \in \partial j \setminus i} A_{i'j}\, x_{i'\to j}^{(k)}$;
3: For all $i \in [m]$ do: $x_i \leftarrow \sum_{j \in \partial i} A_{ij}\, y_{j\to i}^{(k_{\max}-1)}$;
4: Output estimate vector $\hat{s}(\{A_{ij}\}) = [\mathrm{sign}(x_i)]$.
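The following is a sketch of the algorithm above on a dense answer matrix (a sparse edge list would be the natural representation at scale). The vectorized extrinsic sums and the tie-break on the final sign are implementation choices of ours.

```python
import numpy as np

def iterative_estimate(A, k_max=20, rng=None):
    """Message passing as in the algorithm above, on a dense matrix A with
    A[i, j] in {-1, 0, +1} (0 means task i was not assigned to worker j).
    y[i, j] stores the message y_{j->i}; x[i, j] stores x_{i->j}."""
    rng = rng or np.random.default_rng()
    m, n = A.shape
    mask = (A != 0)
    y = rng.normal(1.0, 1.0, size=(m, n)) * mask     # y^(0) ~ N(1, 1) on edges
    for k in range(1, k_max + 1):
        # extrinsic sum: x_{i->j} = sum_{j' in di \ j} A_{ij'} y_{j'->i}
        row = (A * y).sum(axis=1, keepdims=True)
        x = (row - A * y) * mask                     # x^(k) from y^(k-1)
        if k < k_max:
            col = (A * x).sum(axis=0, keepdims=True)
            y = (col - A * x) * mask                 # y^(k) from x^(k)
    s_hat = np.sign((A * y).sum(axis=1))             # uses y^(k_max - 1)
    s_hat[s_hat == 0] = 1                            # arbitrary tie-break
    return s_hat

# with the simulator shown earlier:
# err = np.mean(iterative_estimate(A, 20) != s)
```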
While our algorithm is inspired by the standard Belief Propagation (BP) algorithm for approximating max-marginals [11, 12], our algorithm is original and overcomes a few critical limitations of the standard BP. First, the iterative algorithm does not require any knowledge of the prior distribution of $p_j$, whereas the standard BP requires knowledge of the distribution. Second, there is no efficient way to implement standard BP, since we need to pass sufficient statistics (or messages) which under our general model are distributions over the reals. On the other hand, the iterative algorithm only passes messages that are real numbers regardless of the prior distribution of $p_j$, which is easy to implement. Third, the iterative algorithm is provably asymptotically order-optimal. Density evolution, explained in detail in Section 3, is a standard technique to analyze the performance of BP. Although we can write down the density evolution for the standard BP, we cannot analyze the densities, analytically or numerically. It is also very simple to write down the density evolution equations for the iterative algorithm, but it is not trivial to analyze the densities in this case either. We develop a novel technique to analyze the densities and prove optimality of our algorithm.

2.1 Performance guarantee

We state the main analytical result of this paper: for random $(l,r)$-regular bipartite graph based task assignments with our iterative inference algorithm, the probability of error decays exponentially in $lq$, up to a universal constant and for a broad range of the parameters $l$, $r$ and $q$. With a reasonable choice of $l = r$ and both scaling like $(1/q)\log(1/\epsilon)$, the proposed algorithm is guaranteed to achieve error less than $\epsilon$ for any $\epsilon \in (0, 1/2)$. Further, an algorithm-independent lower bound that we establish suggests that such an error dependence on $lq$ is unavoidable. Hence, in terms of the task allocation budget, our algorithm is order-optimal. The precise statements follow next.

Let $\mu = \mathbb{E}[2p_j - 1]$ and recall $q = \mathbb{E}[(2p_j - 1)^2]$. To lighten the notation, let $\hat{l} \equiv l - 1$ and $\hat{r} \equiv r - 1$. Define
$$\sigma_k^2 \;\equiv\; \frac{2q}{\mu^2 (q^2 \hat{l}\hat{r})^{k-1}} \;+\; \Big(3 + \frac{1}{q\hat{r}}\Big)\, \frac{1 - \big(1/q^2\hat{l}\hat{r}\big)^{k-1}}{1 - \big(1/q^2\hat{l}\hat{r}\big)}\,.$$
For $q^2\hat{l}\hat{r} > 1$, let $\sigma_\infty^2 \equiv \lim_{k\to\infty}\sigma_k^2$, such that
$$\sigma_\infty^2 \;=\; \Big(3 + \frac{1}{q\hat{r}}\Big)\, \frac{q^2\hat{l}\hat{r}}{q^2\hat{l}\hat{r} - 1}\,.$$
Then we can show the following bound on the probability of making an error.

Theorem 2.1. For fixed $l > 1$ and $r > 1$, assume that $m$ tasks are assigned to $n = ml/r$ workers according to a random $(l,r)$-regular graph drawn from the configuration model. If the distribution of the worker reliability satisfies $\mu \equiv \mathbb{E}[2p_j - 1] > 0$ and $q^2 > 1/(\hat{l}\hat{r})$, then for any $s \in \{\pm 1\}^m$, the estimates from $k$ iterations of the iterative algorithm achieve
$$\limsup_{m\to\infty}\; \frac{1}{m}\sum_{i=1}^m \mathbb{P}\big(s_i \neq \hat{s}_i(\{A_{ij}\}_{(i,j)\in E})\big) \;\leq\; e^{-lq/(2\sigma_k^2)}\,. \tag{1}$$

As we increase $k$, the above bound converges to a non-trivial limit.

Corollary 2.2. Under the hypotheses of Theorem 2.1,
$$\limsup_{k\to\infty}\;\limsup_{m\to\infty}\; \frac{1}{m}\sum_{i=1}^m \mathbb{P}\big(s_i \neq \hat{s}_i(\{A_{ij}\}_{(i,j)\in E})\big) \;\leq\; e^{-lq/(2\sigma_\infty^2)}\,. \tag{2}$$

One implication of this corollary is that, under the mild assumption that $r \geq l$, the probability of error is upper bounded by $e^{-(1/8)(lq-1)}$.
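The bound of Theorem 2.1 and its limit in Corollary 2.2 are easy to evaluate numerically. The snippet below is a direct transcription of the definitions above, with parameter values chosen purely for illustration.

```python
import numpy as np

def theorem_bound(l, r, q, mu, k):
    """Evaluate e^{-lq/(2 sigma_k^2)} from Theorem 2.1 and its k -> infinity
    limit from Corollary 2.2."""
    lh, rh = l - 1, r - 1
    rho = q * q * lh * rh
    assert rho > 1, "Theorem 2.1 requires q^2 (l-1)(r-1) > 1"
    sigma2_k = 2 * q / (mu**2 * rho**(k - 1)) \
        + (3 + 1 / (q * rh)) * (1 - (1 / rho)**(k - 1)) / (1 - 1 / rho)
    sigma2_inf = (3 + 1 / (q * rh)) * rho / (rho - 1)
    return np.exp(-l * q / (2 * sigma2_k)), np.exp(-l * q / (2 * sigma2_inf))

# spammer-hammer with q = 0.3 has mu = q:
print(theorem_bound(l=25, r=25, q=0.3, mu=0.3, k=20))
```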
Even if we fix the value of $q = \mathbb{E}[(2p_j-1)^2]$, different distributions of $p_j$ can have different values of $\mu$ in the range $[q, \sqrt{q}]$. Surprisingly, the asymptotic bound on the error rate does not depend on $\mu$. Instead, as long as $q$ is fixed, $\mu$ only affects how fast the algorithm converges (cf. Lemma 2.3).

Notice that the bound in (2) is only meaningful when it is less than a half, whence $\hat{l}\hat{r}q^2 > 1$ and $lq > 6\log(2) > 4$. While as a taskmaster the case of $\hat{l}\hat{r}q^2 < 1$ may not be of interest, for the purpose of completeness we comment on the performance of our algorithm in this regime. Specifically, we empirically observe that the error rate increases as the number of iterations $k$ increases. Therefore, it makes sense to use $k = 1$, in which case the algorithm essentially boils down to the majority rule. We can prove the following error bound; the proof is omitted due to a space constraint.
$$\limsup_{m\to\infty}\; \frac{1}{m}\sum_{i=1}^m \mathbb{P}\big(s_i \neq \hat{s}_i(\{A_{ij}\}_{(i,j)\in E})\big) \;\leq\; e^{-l\mu^2/4}\,. \tag{3}$$

2.2 Discussion

Here we make a few comments relating to the execution of the algorithm and the interpretation of the main results. First, the iterative algorithm is efficient, with runtime comparable to the simple majority voting, which requires $O(ml)$ operations.

Lemma 2.3. Under the hypotheses of Theorem 2.1, the total computational cost sufficient to achieve the bound in Corollary 2.2, up to any constant factor in the exponent, is $O\big(ml\,\log(q/\mu^2)/\log(q^2\hat{l}\hat{r})\big)$.

By definition, we have $q \leq \mu \leq \sqrt{q}$. The runtime is worst when $\mu = q$, which happens under the spammer-hammer model, and it is best when $\mu = \sqrt{q}$, which happens if $p_j = (1+\sqrt{q})/2$ deterministically. There exists a (non-iterative) polynomial time algorithm with runtime independent of $q$ for computing the estimate which achieves (2), but in practice we expect that the number of iterations needed is small enough that the iterative algorithm will outperform this non-iterative algorithm. A detailed proof of Lemma 2.3 is skipped here due to a space constraint.

Second, the assumption that $\mu > 0$ is necessary. If there is no assumption on $\mu$, then we cannot distinguish whether the responses came from tasks with $\{s_i\}_{i\in[m]}$ and workers with $\{p_j\}_{j\in[n]}$ or from tasks with $\{-s_i\}_{i\in[m]}$ and workers with $\{1-p_j\}_{j\in[n]}$. Statistically, both of them give the same output. In the case when we know that $\mu < 0$, we can use the same algorithm, changing the sign of the final output, and get the same performance guarantee.

Third, our algorithm does not require any information on the distribution of $p_j$. Further, unlike other EM-based algorithms, the iterative algorithm is not sensitive to initialization and with random initialization converges to a unique estimate with high probability. This follows from the fact that the algorithm is essentially computing a leading eigenvector of a particular linear operator.

2.3 Relation to singular value decomposition

The leading singular vectors are often used to capture the important aspects of datasets in matrix form. In our case, the leading left singular vector of $A$ can be used to estimate the correct answers, where $A \in \{0, \pm 1\}^{m\times n}$ is the $m \times n$ adjacency matrix of the graph $G$ weighted by the submitted answers. We can compute it using power iteration: for $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$, starting with a randomly initialized $v$, power iteration iteratively updates $u$ and $v$ according to
$$u_i \;=\; \sum_{j\in\partial i} A_{ij}\, v_j \;\;\text{for all } i\,, \qquad v_j \;=\; \sum_{i\in\partial j} A_{ij}\, u_i \;\;\text{for all } j\,.$$
It is known that normalized $u$ converges exponentially to the leading left singular vector. This update rule is very similar to that of our iterative algorithm.
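For contrast with the extrinsic updates of our algorithm, a plain power iteration of this update rule looks as follows. The normalization step is a standard stabilization we add, and the sign of the output vector is only determined up to a global flip.

```python
import numpy as np

def leading_left_singular_vector(A, n_iter=100, rng=None):
    """Plain power iteration of the update rule above (no extrinsic
    exclusion of the incoming message)."""
    rng = rng or np.random.default_rng()
    v = rng.normal(size=A.shape[1])
    for _ in range(n_iter):
        u = A @ v                       # u_i = sum_j A_ij v_j
        v = A.T @ u                     # v_j = sum_i A_ij u_i
        v /= np.linalg.norm(v)          # rescale for numerical stability
    u = A @ v
    return u / np.linalg.norm(u)

# an SVD-based estimate (up to a global sign): np.sign(leading_left_singular_vector(A))
```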
But there is one difference that is crucial in the analysis: in our algorithm we follow the framework of the celebrated belief propagation algorithm [11, 12] and exclude the incoming message from node $j$ when computing an outgoing message to $j$. This extrinsic nature of our algorithm and the locally tree-like structure of sparse random graphs [9, 13] allow us to perform asymptotic analysis on the average error rate. In particular, if we use the leading singular vector of $A$ to estimate $s$, such that $\hat{s}_i = \mathrm{sign}(u_i)$, then existing analysis techniques from random matrix theory do not give the strong performance guarantee we have. These techniques typically focus on understanding how the subspace spanned by the top singular vector behaves. To get a sharp bound, we need to analyze how each entry of the leading singular vector is distributed. We introduce the iterative algorithm in order to precisely characterize how each of the decision variables $x_i$ is distributed. Since the iterative algorithm introduced in this paper is quite similar to the power iteration used to compute the leading singular vectors, this suggests that our analysis may shed light on how to analyze the top singular vectors of a sparse random matrix.

2.4 Optimality of our algorithm

As a taskmaster, the natural core optimization problem of our concern is how to achieve a certain reliability in our answers with minimum cost. Since we pay an equal amount for all the task assignments, the cost is proportional to the total number of edges of the graph $G$. Here we compute the total budget sufficient to achieve a target error rate using our algorithm and show that this is within a constant factor of the budget necessary to achieve the given target error rate using any graph and the best possible inference algorithm. The order-optimality is established with respect to all algorithms that operate in one shot, i.e., where all task assignments are done simultaneously and an estimation is performed after all the answers are obtained. The proofs of the claims in this section are skipped here due to space limitations.

Formally, consider a scenario where there are $m$ tasks to complete and a target accuracy $\epsilon \in (0,1/2)$. To measure accuracy, we use the average probability of error per task, denoted by $d_m(s,\hat{s}) \equiv (1/m)\sum_{i\in[m]} \mathbb{P}(s_i \neq \hat{s}_i)$. We will show that $\Theta\big((1/q)\log(1/\epsilon)\big)$ assignments per task are necessary and sufficient to achieve the target error rate $d_m(s,\hat{s}) \leq \epsilon$. To establish this fundamental limit, we use the following minimax bound on the error rate. Consider the case where nature chooses a set of correct answers $s \in \{\pm 1\}^m$ and a distribution of the worker reliability $p_j \sim f$. The distribution $f$ is chosen from the set of all distributions on $[0,1]$ which satisfy $\mathbb{E}_f[(2p_j-1)^2] = q$. We use $\mathcal{F}(q)$ to denote this set of distributions. Let $\mathcal{G}(m,l)$ denote the set of all bipartite graphs, including irregular graphs, that have $m$ task nodes and $ml$ total edges. Then the minimax error rate achieved by the best possible graph $G \in \mathcal{G}(m,l)$ using the best possible inference algorithm is at least
$$\inf_{\mathrm{ALGO},\, G \in \mathcal{G}(m,l)}\;\; \sup_{s,\, f \in \mathcal{F}(q)}\; d_m\big(s, \hat{s}_{G,\mathrm{ALGO}}\big) \;\geq\; (1/2)\, e^{-(lq + O(lq^2))}\,, \tag{4}$$
where $\hat{s}_{G,\mathrm{ALGO}}$ denotes the estimate we get using graph $G$ for task allocation and algorithm ALGO for inference. This minimax bound is established by computing the error rate of an oracle estimator, which makes an optimal decision based on the information provided by an oracle who knows how reliable each worker is.
Next, we show that the error rate of majority voting decays significantly more slowly: the leading term in the error exponent scales like $-lq^2$. Let $\hat{s}_{\mathrm{MV}}$ be the estimate produced by majority voting. Then, for $q \in (0,1)$, there exists a numerical constant $C_1$ such that
$$\inf_{G \in \mathcal{G}(m,l)}\;\; \sup_{s,\, f \in \mathcal{F}(q)}\; d_m\big(s, \hat{s}_{\mathrm{MV}}\big) \;=\; e^{-\big(C_1 lq^2 + O(lq^4 + 1)\big)}\,. \tag{5}$$

The lower bound in (4) does not depend on how many tasks are assigned to each worker. However, our main result depends on the value of $r$. We show that for a broad range of parameters $l$, $r$, and $q$ our algorithm achieves optimality. Let $\hat{s}_{\mathrm{Iter}}$ be the estimate given by random regular graphs and the iterative algorithm. For $\hat{l}q \geq C_2$, $\hat{r}q \geq C_3$, and $C_2 C_3 > 1$, Corollary 2.2 gives
$$\limsup_{m\to\infty}\;\; \sup_{s,\, f \in \mathcal{F}(q)}\; d_m\big(s, \hat{s}_{\mathrm{Iter}}\big) \;\leq\; e^{-C_4\, lq}\,. \tag{6}$$
This is also illustrated in Figure 1. We ran numerical experiments with 1000 tasks and 1000 workers from the spammer-hammer model, assigned according to random graphs with $l = r$ from the configuration model. For the left figure we fixed $q = 0.3$, and for the right figure we fixed $l = 25$.

[Figure 1: The iterative algorithm improves over majority voting and the EM algorithm [8]. Two panels plot the probability of error on a logarithmic scale for majority voting, the EM algorithm, the iterative algorithm, and the lower bound: the left panel against $l$ (from 0 to 30) with $q = 0.3$ fixed, and the right panel against $q$ (from 0 to 0.4) with $l = 25$ fixed.]

Now, let $\Delta_{\mathrm{LB}}$ be the minimum cost per task necessary to achieve a target accuracy $\epsilon \in (0,1/2)$ using any graph and any possible algorithm. Then (4) implies $\Delta_{\mathrm{LB}} \gtrsim (1/q)\log(1/\epsilon)$, where $x \gtrsim y$ indicates that $x$ scales at least as $y$. Let $\Delta_{\mathrm{Iter}}$ be the minimum cost per task sufficient to achieve a target accuracy $\epsilon$ using our proposed algorithm. Then from (6) we get $\Delta_{\mathrm{Iter}} \lesssim (1/q)\log(1/\epsilon)$. This establishes the order-optimality of our algorithm. It is indeed surprising that regular graphs are sufficient to achieve this optimality. Further, let $\Delta_{\mathrm{Majority}}$ be the minimum cost per task necessary to achieve a target accuracy $\epsilon$ using majority voting. Then $\Delta_{\mathrm{Majority}} \gtrsim (1/q^2)\log(1/\epsilon)$, which is significantly more costly than the optimal scaling $(1/q)\log(1/\epsilon)$ of our algorithm.

3 Proof of Theorem 2.1

By symmetry, we can assume all $s_i$'s are $+1$. If $I$ is a random integer drawn uniformly in $[m]$, then $(1/m)\sum_{i\in[m]} \mathbb{P}(s_i \neq \hat{s}_i) = \mathbb{P}(x_I^{(k)} \leq 0)$, where $x_i^{(k)}$ denotes the decision variable for task $i$ after $k$ iterations of the iterative algorithm. Asymptotically, for fixed $k$, $l$ and $r$, the local neighborhood of $x_I^{(k)}$ converges to a regular tree. To analyze $\lim_{m\to\infty}\mathbb{P}(x_I^{(k)} \leq 0)$, we use a standard probabilistic analysis technique known as "density evolution" in coding theory or "recursive distributional equations" in probabilistic combinatorics [9, 13]. Precisely, we use the following equality:
$$\lim_{m\to\infty}\; \mathbb{P}\big(x_I^{(k)} \leq 0\big) \;=\; \mathbb{P}\big(\hat{x}^{(k)} \leq 0\big)\,, \tag{7}$$
where $\hat{x}^{(k)}$ is defined through the density evolution equations (8) and (9) in the following.

Density evolution. In the large system limit as $m \to \infty$, the $(l,r)$-regular random graph locally converges in distribution to an $(l,r)$-regular tree. Therefore, for a randomly chosen edge $(i,j)$, the messages $x_{i\to j}$ and $y_{j\to i}$ converge in distribution to $x$ and $y_p$ defined by the following density evolution equations (8). Here and after, we drop the superscript $k$ denoting the iteration number whenever it is clear from the context. We initialize $y_p$ with a Gaussian distribution independent of $p$: $y_p^{(0)} \sim \mathcal{N}(1,1)$. Let $\stackrel{d}{=}$ denote equality in distribution. Then, for $k \in \{1, 2, \ldots\}$,
$$x^{(k)} \;\stackrel{d}{=}\; \sum_{i\in[l-1]} z_{p_i,i}\, y_{p_i,i}^{(k-1)}\,, \qquad y_p^{(k)} \;\stackrel{d}{=}\; \sum_{j\in[r-1]} z_{p,j}\, x_j^{(k)}\,, \tag{8}$$
where the $x_j$'s, $p_i$'s, and $y_{p,i}$'s are independent copies of $x$, $p$, and $y_p$, respectively. Also, the $z_{p,i}$'s and $z_{p,j}$'s are independent copies of $z_p$. Here $p \in [0,1]$ is a random variable distributed according to the distribution of the worker's quality. The $z_{p,j}$'s and $x_j$'s are independent, and the $z_{p_i,i}$'s and $y_{p_i,i}$'s are conditionally independent given $p_i$. Finally, $z_p$ is a random variable which is $+1$ with probability $p$ and $-1$ with probability $1-p$. Then, for a randomly chosen $I$, the decision variable $x_I^{(k)}$ converges in distribution to
$$\hat{x}^{(k)} \;=\; \sum_{i\in[l]} z_{p_i,i}\, y_{p_i,i}^{(k-1)}\,. \tag{9}$$
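The density evolution equations (8)-(9) can also be sampled by Monte Carlo. The sketch below is restricted to the spammer-hammer prior, where the conditional laws of $y_p$ can be tracked exactly as two sample pools ($p = 1$ for hammers, $p = 1/2$ for spammers); the pool size and parameters are illustrative.

```python
import numpy as np

def density_evolution_error(l, r, q_hammer, k, n, rng):
    """Monte Carlo version of (8)-(9) under the spammer-hammer prior."""
    pools = {1.0: rng.normal(1, 1, n), 0.5: rng.normal(1, 1, n)}  # y^(0)

    def draw_x(width):                           # sum of i.i.d. z_{p_i} y_{p_i}
        x = np.zeros(n)
        for _ in range(width):
            p = np.where(rng.random(n) < q_hammer, 1.0, 0.5)
            z = np.where(rng.random(n) < p, 1.0, -1.0)
            y = np.where(p == 1.0, rng.choice(pools[1.0], n),
                                   rng.choice(pools[0.5], n))
            x += z * y
        return x

    for _ in range(k - 1):
        x = draw_x(l - 1)                        # x^(k) from (8)
        for p_val in (1.0, 0.5):                 # refresh the y_p pools
            z = np.where(rng.random((n, r - 1)) < p_val, 1.0, -1.0)
            pools[p_val] = (z * rng.choice(x, size=(n, r - 1))).sum(axis=1)
    xhat = draw_x(l)                             # decision variable (9)
    return (xhat <= 0).mean()                    # estimate of P(xhat <= 0)

rng = np.random.default_rng(0)
print(density_evolution_error(l=25, r=25, q_hammer=0.3, k=10, n=100_000, rng=rng))
```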
Analyzing the density. Our strategy for providing an upper bound on $\mathbb{P}(\hat{x}^{(k)} \leq 0)$ is to show that $x^{(k)}$ is sub-Gaussian with appropriate parameters and use the Chernoff bound. A random variable $z$ with mean $m$ is said to be sub-Gaussian with parameter $\sigma$ if for all $\lambda \in \mathbb{R}$ the following inequality holds: $\mathbb{E}[e^{\lambda z}] \leq e^{m\lambda + (1/2)\sigma^2\lambda^2}$. Define
$$\tilde{\sigma}_k^2 \;\equiv\; 2\hat{l}\,(\hat{l}\hat{r})^{k-1} \;+\; \mu^2\hat{l}^3\hat{r}\,(3q\hat{r}+1)\,(q\hat{l}\hat{r})^{2k-4}\; \frac{1 - \big(1/q^2\hat{l}\hat{r}\big)^{k-1}}{1 - \big(1/q^2\hat{l}\hat{r}\big)}$$
and $m_k \equiv \mu\hat{l}\,(q\hat{l}\hat{r})^{k-1}$ for $k \in \mathbb{Z}$. (We write $\tilde{\sigma}_k^2$ here to distinguish this unnormalized quantity from the $\sigma_k^2$ of Theorem 2.1.) We will first show that $x^{(k)}$ is sub-Gaussian with mean $m_k$ and parameter $\tilde{\sigma}_k^2$ in the regime of $\lambda$ we are interested in. Precisely, we will show that for $|\lambda| \leq 1/(2m_{k-1}\hat{r})$,
$$\mathbb{E}\big[e^{\lambda x^{(k)}}\big] \;\leq\; e^{m_k\lambda + (1/2)\tilde{\sigma}_k^2\lambda^2}\,. \tag{10}$$
By definition, due to distributional independence, we have $\mathbb{E}[e^{\lambda\hat{x}^{(k)}}] = \mathbb{E}[e^{\lambda x^{(k)}}]^{l/\hat{l}}$. Therefore, it follows from (10) that $\hat{x}^{(k)}$ satisfies $\mathbb{E}[e^{\lambda\hat{x}^{(k)}}] \leq e^{(l/\hat{l})m_k\lambda + (l/2\hat{l})\tilde{\sigma}_k^2\lambda^2}$. Applying the Chernoff bound with $\lambda = -m_k/\tilde{\sigma}_k^2$, we get
$$\mathbb{P}\big(\hat{x}^{(k)} \leq 0\big) \;\leq\; \mathbb{E}\big[e^{\lambda\hat{x}^{(k)}}\big] \;\leq\; e^{-l\,m_k^2/(2\hat{l}\tilde{\sigma}_k^2)}\,. \tag{11}$$
Since $m_k m_{k-1}/\tilde{\sigma}_k^2 \leq \mu^2\hat{l}^2(q\hat{l}\hat{r})^{2k-3}\big/\big(3\mu^2 q\hat{l}^3\hat{r}^2(q\hat{l}\hat{r})^{2k-4}\big) = 1/(3\hat{r})$, it is easy to check that $|\lambda| \leq 1/(2m_{k-1}\hat{r})$. Substituting (11) in (7) finishes the proof of Theorem 2.1 (note that $l\,m_k^2/(2\hat{l}\tilde{\sigma}_k^2) = lq/(2\sigma_k^2)$, matching the exponent in (1)).

Now we are left to prove that $x^{(k)}$ is sub-Gaussian with the appropriate parameters. We can write down a recursive formula for the evolution of the moment generating functions of $x$ and $y_p$ as
$$\mathbb{E}\big[e^{\lambda x^{(k)}}\big] \;=\; \mathbb{E}_p\Big[\, p\,\mathbb{E}\big[e^{\lambda y_p^{(k-1)}} \,\big|\, p\big] + \bar{p}\,\mathbb{E}\big[e^{-\lambda y_p^{(k-1)}} \,\big|\, p\big]\,\Big]^{\hat{l}}\,, \tag{12}$$
$$\mathbb{E}\big[e^{\lambda y_p^{(k)}}\big] \;=\; \Big(\, p\,\mathbb{E}\big[e^{\lambda x^{(k)}}\big] + \bar{p}\,\mathbb{E}\big[e^{-\lambda x^{(k)}}\big]\,\Big)^{\hat{r}}\,, \tag{13}$$
where $\bar{p} = 1 - p$. We can prove that these are sub-Gaussian using induction. First, for $k=1$, we show that $x^{(1)}$ is sub-Gaussian with mean $m_1 = \mu\hat{l}$ and parameter $\tilde{\sigma}_1^2 = 2\hat{l}$, where $\mu \equiv \mathbb{E}[2p-1]$. Since $y_p$ is initialized as Gaussian with unit mean and variance, we have $\mathbb{E}[e^{\lambda y_p^{(0)}}] = e^{\lambda + (1/2)\lambda^2}$ regardless of $p$. Substituting this into (12), we get, for any $\lambda$, $\mathbb{E}[e^{\lambda x^{(1)}}] = \big(\mathbb{E}[p]e^{\lambda} + (1-\mathbb{E}[p])e^{-\lambda}\big)^{\hat{l}}\, e^{(1/2)\hat{l}\lambda^2} \leq e^{\hat{l}\mu\lambda + \hat{l}\lambda^2}$, where the inequality follows from the fact that $a e^{z} + (1-a)e^{-z} \leq e^{(2a-1)z + (1/2)z^2}$ for any $z \in \mathbb{R}$ and $a \in [0,1]$ (cf. [14, Lemma A.1.5]).

Next, assuming $\mathbb{E}[e^{\lambda x^{(k)}}] \leq e^{m_k\lambda + (1/2)\tilde{\sigma}_k^2\lambda^2}$ for $|\lambda| \leq 1/(2m_{k-1}\hat{r})$, we show that $\mathbb{E}[e^{\lambda x^{(k+1)}}] \leq e^{m_{k+1}\lambda + (1/2)\tilde{\sigma}_{k+1}^2\lambda^2}$ for $|\lambda| \leq 1/(2m_k\hat{r})$, and compute the appropriate $m_{k+1}$ and $\tilde{\sigma}_{k+1}^2$. Substituting the bound $\mathbb{E}[e^{\lambda x^{(k)}}] \leq e^{m_k\lambda + (1/2)\tilde{\sigma}_k^2\lambda^2}$ in (13), we get $\mathbb{E}[e^{\lambda y_p^{(k)}}] \leq \big(p e^{m_k\lambda} + \bar{p} e^{-m_k\lambda}\big)^{\hat{r}}\, e^{(1/2)\hat{r}\tilde{\sigma}_k^2\lambda^2}$. Further applying this bound in (12), we get
$$\mathbb{E}\big[e^{\lambda x^{(k+1)}}\big] \;\leq\; \mathbb{E}_p\Big[\, p\big(p e^{m_k\lambda} + \bar{p} e^{-m_k\lambda}\big)^{\hat{r}} + \bar{p}\big(p e^{-m_k\lambda} + \bar{p} e^{m_k\lambda}\big)^{\hat{r}}\,\Big]^{\hat{l}}\, e^{(1/2)\hat{l}\hat{r}\tilde{\sigma}_k^2\lambda^2}\,. \tag{14}$$
To bound the first term on the right-hand side, we use the next key lemma.

Lemma 3.1. For any $|z| \leq 1/(2\hat{r})$ and $p \in [0,1]$ such that $q = \mathbb{E}[(2p-1)^2]$, we have
$$\mathbb{E}_p\Big[\, p\big(p e^{z} + \bar{p} e^{-z}\big)^{\hat{r}} + \bar{p}\big(\bar{p} e^{z} + p e^{-z}\big)^{\hat{r}}\,\Big] \;\leq\; e^{q\hat{r}z + (1/2)(3q\hat{r}^2 + \hat{r})z^2}\,.$$
For the proof, we refer to the journal version of this paper.

Applying this inequality to (14) gives $\mathbb{E}[e^{\lambda x^{(k+1)}}] \leq e^{q\hat{l}\hat{r}m_k\lambda + (1/2)\big((3q\hat{l}\hat{r}^2 + \hat{l}\hat{r})m_k^2 + \hat{l}\hat{r}\tilde{\sigma}_k^2\big)\lambda^2}$ for $|\lambda| \leq 1/(2m_k\hat{r})$. In the regime where $q\hat{l}\hat{r} \geq 1$, as per our assumption, $m_k$ is non-decreasing in $k$. At iteration $k$, the above recursion holds for $|\lambda| \leq \min\{1/(2m_1\hat{r}), \ldots, 1/(2m_{k-1}\hat{r})\} = 1/(2m_{k-1}\hat{r})$. Hence, we get a recursion for $m_k$ and $\tilde{\sigma}_k$ such that (10) holds for $|\lambda| \leq 1/(2m_{k-1}\hat{r})$:
$$m_{k+1} \;=\; q\hat{l}\hat{r}\, m_k\,, \qquad \tilde{\sigma}_{k+1}^2 \;=\; \big(3q\hat{l}\hat{r}^2 + \hat{l}\hat{r}\big)\, m_k^2 + \hat{l}\hat{r}\,\tilde{\sigma}_k^2\,.$$
With the initialization $m_1 = \mu\hat{l}$ and $\tilde{\sigma}_1^2 = 2\hat{l}$, we have $m_k = \mu\hat{l}(q\hat{l}\hat{r})^{k-1}$ for $k \in \{1,2,\ldots\}$ and $\tilde{\sigma}_k^2 = a\tilde{\sigma}_{k-1}^2 + bc^{k-2}$ for $k \in \{2,3,\ldots\}$, with $a = \hat{l}\hat{r}$, $b = \mu^2\hat{l}^2(3q\hat{l}\hat{r}^2 + \hat{l}\hat{r})$, and $c = (q\hat{l}\hat{r})^2$. After some algebra, it follows that $\tilde{\sigma}_k^2 = \tilde{\sigma}_1^2 a^{k-1} + bc^{k-2}\sum_{\ell=0}^{k-2}(a/c)^{\ell}$. For $\hat{l}\hat{r}q^2 \neq 1$, we have $a/c \neq 1$, whence $\tilde{\sigma}_k^2 = \tilde{\sigma}_1^2 a^{k-1} + bc^{k-2}\big(1 - (a/c)^{k-1}\big)/(1 - a/c)$. This finishes the proof of (10).
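The recursion for $m_k$ and the sub-Gaussian parameter derived in this proof is easy to iterate numerically; the snippet below reports the resulting Chernoff bound (11) at each $k$. For large $k$ one would normalize $m$ and the parameter jointly to avoid floating-point overflow.

```python
import numpy as np

def proof_recursion(l, r, q, mu, k_max):
    """Iterate m_{k+1} = q*lh*rh*m_k and
    s2_{k+1} = (3*q*lh*rh**2 + lh*rh)*m_k**2 + lh*rh*s2_k from the proof,
    reporting exp(-l*m_k**2 / (2*lh*s2_k)) at each k (which equals the
    exponent exp(-l*q / (2*sigma_k**2)) of Theorem 2.1)."""
    lh, rh = l - 1, r - 1
    m, s2 = mu * lh, 2.0 * lh                    # base case k = 1
    bounds = []
    for _ in range(k_max):
        bounds.append(np.exp(-l * m * m / (2 * lh * s2)))
        m, s2 = q * lh * rh * m, (3*q*lh*rh**2 + lh*rh) * m*m + lh*rh * s2
    return bounds

print(proof_recursion(l=25, r=25, q=0.3, mu=0.3, k_max=10))
```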
References
[1] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1):20-28, 1979.
[2] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labelling of Venus images. In Advances in Neural Information Processing Systems, pages 1085-1092, 1995.
[3] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1-38, 1977.
[4] R. Jin and Z. Ghahramani. Learning with multiple labels. In Advances in Neural Information Processing Systems, pages 921-928, 2003.
[5] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of Machine Learning Research, 99:1297-1322, August 2010.
[6] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems, volume 22, pages 2035-2043, 2009.
[7] P. Welinder, S. Branson, S. Belongie, and P. Perona. The multidimensional wisdom of crowds. In Advances in Neural Information Processing Systems, pages 2424-2432, 2010.
[8] V. S. Sheng, F. Provost, and P. G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, pages 614-622. ACM, 2008.
[9] T. Richardson and R. Urbanke. Modern Coding Theory. Cambridge University Press, March 2008.
[10] B. Bollobás. Random Graphs. Cambridge University Press, January 2001.
[11] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, California, 1988.
[12] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations, pages 239-269. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2003.
[13] M. Mezard and A. Montanari. Information, Physics, and Computation. Oxford University Press, Inc., New York, NY, USA, 2009.
[14] N. Alon and J. H. Spencer. The Probabilistic Method. John Wiley, 2008.
A Global Structural EM Algorithm for a Model of Cancer Progression

Erik Sjölund
Stockholm Bioinformatics Center
Stockholm University, Sweden
[email protected]

Ali Tofigh
School of Computer Science, McGill Centre for Bioinformatics
McGill University, Canada
[email protected]

Mattias Höglund
Department of Oncology
Lund University, Sweden
[email protected]

Jens Lagergren
Science for Life Lab, Swedish e-Science Research Center
School of Computer Science and Communication
KTH Royal Institute of Technology, Sweden

Abstract

Cancer has complex patterns of progression that include converging as well as diverging progressional pathways. Vogelstein's path model of colon cancer was a pioneering contribution to cancer research. Since then, several attempts have been made at obtaining mathematical models of cancer progression, devising learning algorithms, and applying these to cross-sectional data. Beerenwinkel et al. provided what they coined "EM-like" algorithms for Oncogenetic Trees (OTs) and mixtures of such. Given the small size of current and future data sets, it is important to minimize the number of parameters of a model. For this reason, we too focus on tree-based models and introduce Hidden-variable Oncogenetic Trees (HOTs). In contrast to OTs, HOTs allow for errors in the data and thereby provide more realistic modeling. We also design global structural EM algorithms for learning HOTs and mixtures of HOTs (HOT-mixtures). The algorithms are global in the sense that, during the M-step, they find a structure that yields a global maximum of the expected complete log-likelihood rather than merely one that improves it. The algorithm for single HOTs performs very well on reasonably sized data sets, while that for HOT-mixtures requires data sets of sizes obtainable only with tomorrow's more cost-efficient technologies.

1 Introduction

In the learning literature, there are several previous results on learning probabilistic tree models, including various Expectation Maximization-based inference algorithms. In [1], trees were considered where the vertices were associated with observable variables, and an efficient algorithm for finding a globally optimal Maximum Likelihood (ML) solution was described. Subsequently, [2] presented a structural Expectation Maximization (EM) algorithm for finding the ML mixture of trees as well as MAP solutions with respect to several priors. There are three axes along which it is natural to compare these as well as other results. The first axis is the type of dependency structure allowed. The second axis is the type of variables used (observable only, or hidden and observable) and the type of relations they can have. The third axis is the type of inference algorithms that are known for the model. It is interesting in relation to the present result to ask in what respect the structural EM algorithm of [3] constitutes an improvement when compared with Friedman's earlier structural EM algorithm [4]. In fact, it may seem like the former constitutes no improvement at all, since the latter is concerned with more general dependency structures. Notice, however, that it is customary to distinguish between EM algorithms and generalized EM algorithms for inferring numerical parameters, the difference being that in the M-step of the former, parameters are found that maximize the expected complete log-likelihood, whereas in the latter, parameters are found that merely improve it.
As Friedman points out in his article on the Bayesian Structural EM algorithm [4], the same distinction can be made regarding the maximization over structures. Clearly, it would be convenient to use the same terminology for structural EM algorithms as for ordinary EM algorithms. However, the distinction is often not made for structural EM algorithms, and even researchers who consider themselves experts in the field seem to be unaware of it. For this reason, we define global structural EM algorithms to be EM algorithms that in the M-step find a structure yielding a global maximum of the expected complete log-likelihood (as opposed to a structure that merely improves it). Equipped with this definition, we note that the phylogeny algorithm of [3] is a global structural EM algorithm, in contrast to the earlier algorithm [4]. Another example of a global structural EM algorithm is the learning algorithm for trees with hidden variables presented in [5].

In an effort to provide mathematical models of cancer progression, Desper et al. introduced the Oncogenetic Tree model, where observable variables corresponding to aberrations are associated with vertices of a tree [6]. They then proceeded to show that an algorithm based on Edmonds's optimum branching algorithm will, with high probability, correctly reconstruct an Oncogenetic Tree T from sufficiently long series of data generated from T. The Oncogenetic Tree model suffers from two problems: monotonicity (an aberration associated with a child cannot occur unless the aberration associated with its parent has occurred) and limited structure (compared to a network, the tree structure severely limits the sets of progressional paths that can be modeled). In an attempt to remedy these problems, the Network Aberration Model was proposed [7, 8]. However, the computational problems associated with these network models are hard; for instance, no efficient EM algorithm for training is yet known. In another attempt, Beerenwinkel et al. used mixtures of Oncogenetic Trees to overcome the problem of limited structure, but without removing the monotonicity, and only obtaining an algorithm with an EM-like structure that has not been proved to deliver a locally optimal maximum-likelihood solution [9, 10, 11]. Beerenwinkel and coworkers used Conjunctive Bayesian Networks (CBNs) to model cancer progression [12, 13]. In order to overcome the limited ability of CBNs to model noisy biological data, [14] introduced the hidden CBN model. A hidden CBN can be obtained from a CBN by considering each variable in the CBN to be hidden and associating an observable variable with each hidden variable. The hidden CBN also has a common error parameter specifying the probability that any individual observable variable differs from its associated hidden variable. In a hidden CBN, values are first generated for the hidden variables, and then the observable variables obtain values based both on the hidden variables and the error parameter.

We present the Hidden-variable Oncogenetic Tree (HOT) model, where a hidden and an observable variable are associated with each vertex of a rooted directed tree. The value of the hidden variable indicates whether or not the tumor progression has reached the vertex (a value of one means that cancer progression has reached the vertex and zero that it has not), while the value of the observable variable indicates whether a specific aberration has been detected (a value of one represents detection and zero the opposite).
This interpretation provides several relations between the variables in a HOT. An asymmetric relation is required between the hidden variables associated with the two endpoints of an arc of the directed tree. Because of this asymmetry, the global structural EM algorithm that we derive for the HOT ML problem cannot, in contrast to many of the above-mentioned algorithms, be based on a maximum spanning tree algorithm and is instead based on the optimal branching algorithm [15, 16, 17]. Having so rectified the monotonicity problem, we proceed to obtain a model allowing for a higher degree of structural variation by introducing mixtures of HOTs (HOT-mixtures) and, in contrast to Beerenwinkel et al., we derive a proper structural EM algorithm for training these.

In the near future, multiple types of high throughput (HTP) data will be available for large collections of tumors, providing great opportunities as well as computational challenges for progression model inference. One of the main motivations for our models and inference methods is that they enable analysis of future HTP data, which most likely will require the ability to handle large numbers of mutational events. In this paper, however, we apply our methods to cytogenetic data for colon and kidney cancer, mostly due to the availability of cytogenetic data for large numbers of tumors provided by the Mitelman database [18].

2 HOTs and the novel global structural EM algorithm

2.1 Hidden-variable Oncogenetic Trees

We will denote the set of observed data points D and an individual data point X. In Section 3, we will apply our methods to CNAs, i.e., a data point will be a set of observed copy number aberrations, but in general, more complex events can be used. A rooted directed tree T consists of a set of vertices, denoted V(T), and a set of arcs, denoted A(T). An arc <u,v> is directed from the vertex u, called its tail, towards the vertex v, called its head. If there is an arc with tail p and head u in a directed tree T, then p is called the parent of u in T and denoted p(u) (the tree T will be clear from context). An OT is a rooted directed tree where an aberration is associated with each vertex and a probability with each arc. One can view an OT as generating a set of aberrations by first visiting the root and then continuing towards the leaves (preorder), visiting each vertex with the probability of its incoming arc if the parent has been visited, and with probability zero if the parent has not been visited. The result of the progression is the set of aberrations associated with the visited vertices.

Figure 1: (a) A rooted directed tree with the root at the top. All arcs are directed downwards, i.e., away from the root. (b) An OT with probabilities associated with arcs and CNAs associated with vertices. (c) A HOT with probabilities associated with arcs (indicating the probability that the hidden variable associated with the head of the arc receives the value 1 conditioned on the hidden variable associated with the tail having this value), and CNAs as well as probabilities associated with vertices (indicating the probability that the observable variable associated with the vertex receives the value 1 conditioned on the hidden variable associated with the vertex having received this value). (d) A HOT-mixture consisting of two HOTs. The mixing probability for T1 is 0.7 and that for T2 is 0.3. So with probability 0.7 a synthetic tumor is generated from T1 and otherwise one is generated from T2.

In Figure 1(b), an OT for CNAs is depicted (aberrations are written in the standard notation for CNAs in cytogenetic data, i.e., each represents a duplication (+) or deletion (-) of a specific chromosomal region). Notice that an aberration associated with a vertex cannot occur unless the aberration associated with its parent has occurred. For instance, the set {+Xp, +17q} cannot be generated by the OT in Figure 1(b). In a data-modeling context, this is highly undesirable, as data is typically noisy and is bound to contain both false positives and negatives. Our HOT model does not suffer from this problem.

A Hidden-variable Oncogenetic Tree (HOT) is a directed tree where, just like in OTs, each vertex represents a specific aberration. Unlike OTs, however, the progression of cancer is modeled with hidden variables associated with vertices and conditional probabilities associated with the arcs. The observation of the aberrations (the data) is modeled with a different set of random variables whose values are conditioned on the hidden variables.
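The generative process of an OT described above takes only a few lines. In the sketch below, the tree encoding and the numeric values are a hypothetical toy example, not a faithful transcription of Figure 1(b).

```python
import numpy as np

def sample_ot(children, arc_prob, aberration, rng):
    """Draw one synthetic aberration set from an OT: a vertex can only be
    visited via its (already visited) parent, with its arc probability."""
    result = set()
    stack = [0]                                  # the root is always visited
    while stack:
        u = stack.pop()
        for v in children.get(u, []):
            if rng.random() < arc_prob[(u, v)]:
                result.add(aberration[v])
                stack.append(v)
    return result

children = {0: [1, 2], 1: [3], 2: [4]}          # assumed toy topology
arc_prob = {(0, 1): 0.5, (0, 2): 0.25, (1, 3): 0.25, (2, 4): 0.5}
aberration = {1: "+17q", 2: "-3p", 3: "-4p", 4: "+Xp"}
rng = np.random.default_rng(1)
print(sample_ot(children, arc_prob, aberration, rng))
```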
(d) A HOT-mixture consisting of two HOTs. The mixing probability for T1 is 0.7 and that for T2 is 0.3. So with probability 0.7 a synthetic tumor is generated from T1 and otherwise one is generated from T2 . In Figure 1(b), an OT for CNA is depicted (aberrations are written in the standard notation for CNAs in cytogenetic data, i.e., each represents a duplication (+) or deletion (-) of a specific chromosomal region). Notice that an aberration associated with a vertex cannot occur unless the aberration associated with its parent has occurred. For instance, the set {+Xp, +17q} cannot be generated by the OT in Figure 1(b). In a data-modeling context, this is highly undesirable as data is typically noisy and is bound to contain both false positives and negatives. Our HOT model does not suffer from this problem. A Hidden-variable Oncogenetic Tree (HOT) is a directed tree where, just like OTs, each vertex represents a specific aberration. Unlike OTs however, the progression of cancer is modeled with hidden variables associated with vertices and conditional probabilities associated with the arcs. The observation of the aberrations (the data) are modeled with a different set of random variables whose values are conditioned on the hidden variables. 3 Formally, a Hidden-variable Oncogenetic Tree (HOT) is a pair T = (T, ?) where: 1. T is a rooted directed tree and ? consists of two conditional probability distributions, ?X (u) and ?Z (u), for each vertex u; 2. two random variables are associated with each vertex: an observable variable X(u) and a hidden variable Z(u), each assuming the values 0 or 1; 3. the hidden variable associated with the root, Z(r), is defined to have a value of one; 4. for each non-root vertex u, ?Z (u) is a conditional probability distribution on Z(u) conditioned by Z(p(u)) satisfying Pr[Z(u) = 1|Z(p(u)) = 0] = Z (u); and 5. for each non-root vertex u, ?X (u) is a conditional probability distribution on X(u) conditioned by Z(u) satisfying Pr[X(u) = 1|Z(u) = 0] = X (u). With respect to (4), one might argue that Pr[Z(u) = 1|Z(p(u)) = 0] should be zero, since if the progression has not reached p(u) it should not be able to proceed to u. However, the derivation and implementation of the EM algorithm depends on the non-zero value of this probability for much the same reasons that people use pseudo-counts [19], namely, once a parameter receives the value 0 in an EM algorithm for training, it will subsequently not be changed. Moreover, Z has a natural interpretation: it corresponds to a small probability of spontaneous mutations occurring independently from the overall progressional path that the disease is following. Similar arguments apply to (5) where we interpret X as the small probability of falsely detecting an aberration that is not actually present (corresponding to a false positive test). We note here that it is possible to have CPDs where X(u) and Z(u) depend on both X(p(u)) and Z(p(u)), and even to let X(u) depend on all three of Z(u), X(p(u)), and Z(p(u)). We note here that our arguments can easily be extended to cover these cases, although we will not consider them further in the following text. Figure 1(c) shows an example of a HOT where Z and X have been omitted for clarity. 2.2 The novel global structural EM algorithm for HOTs We have derived a global structural Expectation Maximization (EM) algorithm for inferring HOTs from data. 
2.2 The novel global structural EM algorithm for HOTs

We have derived a global structural Expectation Maximization (EM) algorithm for inferring HOTs from data. According to standard EM theory [20], such an algorithm is obtained if there is a procedure that, given a HOT T, finds a HOT T' that maximizes the so-called complete log-likelihood (also known as the Q-term):

$$Q(T'; T) = \sum_{X \in D} \sum_{Z} \Pr[Z \mid X, T] \log \Pr[Z, X \mid T'].$$

The likelihood of T' is guaranteed to be at least as high as that of T, which immediately leads to an iterative procedure. In standard EM, the Q-term is maximized only over the parameters of a model, in our case the conditional probabilities, leaving the structure, i.e., the directed tree, unchanged. Friedman et al. [3] extended the use of EM algorithms from standard parameter estimation to also finding an optimal structure. In their case, the probabilistic model was reversible and the tree that maximized the expected complete log-likelihood could be obtained using a maximum spanning tree algorithm. In our case, the pair-wise relations between hidden variables are asymmetric and a maximum spanning tree algorithm cannot be used. However, as we show below, the Q-term can be maximized by instead using Edmonds's optimal branching algorithm. When dealing with mixtures of HOTs in later sections, we will need to maximize the weighted version of the Q-term, which we introduce already here:

$$Q_f(T'; T) = \sum_{X \in D} \sum_{Z} f(X) \Pr[Z \mid X, T] \log \Pr[Z, X \mid T'], \qquad (1)$$

where f is a weight function on the data points in D that can be computed in constant time. By expanding and rearranging the terms in (1) (see the appendix), it can be shown that $Q_f(T'; T)$ equals

$$\sum_{\langle u,v \rangle \in A(T')} \sum_{a,b \in \{0,1\}} \sum_{X \in D} f(X) \Pr[Z(v)=a, Z(u)=b \mid X, T] \log \Pr[Z(v)=a \mid Z(u)=b, \theta'_Z(v)]$$
$$+ \sum_{\langle u,v \rangle \in A(T')} \sum_{\sigma,a \in \{0,1\}} \sum_{X \in D:\, X(v)=\sigma} f(X) \Pr[Z(v)=a \mid X, T] \log \Pr[X(v)=\sigma \mid Z(v)=a, \theta'_X(v)].$$

As long as the directed tree T' is fixed, the standard EM methodology (see for instance [19]) can be used to find the $\theta'$ that maximizes $Q_f(T', \theta'; T)$ as follows. First, let

$$A_u(a, b) = \sum_{X \in D} f(X) \Pr[Z(u)=a, Z(p'(u))=b \mid X, T] \qquad (2)$$

and

$$B_u(\sigma, a) = \sum_{X \in D:\, X(u)=\sigma} f(X) \Pr[Z(u)=a \mid X, T]. \qquad (3)$$

Then the $\theta'$ that, for a fixed T', maximizes $Q_f(T'; T)$ (i.e., $Q_f(T', \theta'; T)$) is given by

$$\Pr[Z(u)=a \mid Z(p'(u))=b, \theta'_Z(u)] = A_u(a, b) \Big/ \sum_{a \in \{0,1\}} A_u(a, b)$$

and

$$\Pr[X(u)=\sigma \mid Z(u)=a, \theta'_X(u)] = B_u(\sigma, a) \Big/ \sum_{\sigma \in \{0,1\}} B_u(\sigma, a).$$

The time required for computing the right hand sides of (2) and (3) is O(n²), where n is the number of aberrations (the probabilities Pr[Z(u) = a, Z(v) = b | X, T] can be computed using techniques analogous to those appearing in [3]). For each arc ⟨p, u⟩ of T', using the CPDs defined above, we define the weight of the arc, specific to this tree, to be

$$\sum_{a,b \in \{0,1\}} \sum_{X \in D} f(X) \Pr[Z(u)=a, Z(p'(u))=b \mid X, T] \log \Pr[Z(u)=a \mid Z(p'(u))=b, \theta'_Z(u)]$$
$$+ \sum_{a \in \{0,1\}} \sum_{X \in D} f(X) \Pr[Z(u)=a \mid X, T] \log \Pr[X(u) \mid Z(u)=a, \theta'_X(u)].$$

We now make two important observations from which it follows how to maximize the weighted expected complete log-likelihood over all directed trees. First, notice that if two directed trees T' and T'' have a common arc ⟨p, u⟩, then this arc has the same weight in both trees (since the weight of an arc does not depend on any other arc in the tree). Let G be the directed, complete, arc-weighted graph with the same vertex set as the tree T, and with arc weights given by the above expression.
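Putting the pieces together, one structural M-step computes, for every candidate arc, the sufficient statistics of Eqs. (2) and (3), plugs in the closed-form CPD updates to obtain the arc weight, and then finds the best tree over those weights. The sketch below is only illustrative: the posterior marginals are assumed to come from a black-box `posterior(...)` routine, data points are assumed to be dicts mapping aberrations to 0/1, and the optimal branching step is borrowed from networkx; none of these names are from the paper.

```python
# A sketch of one structural M-step (Eqs. 2-3 and the arc weights); assumptions
# as stated in the text above.
import math
from collections import defaultdict
import networkx as nx

def arc_weight(u, v, data, f, posterior):
    """Weight of candidate arc <u, v>: its Q_f contribution at the optimal theta'."""
    A = defaultdict(float)   # A[(a, b)] as in Eq. (2), with p'(v) = u
    B = defaultdict(float)   # B[(sigma, a)] as in Eq. (3)
    for X in data:
        for a in (0, 1):
            for b in (0, 1):
                A[(a, b)] += f(X) * posterior(v, a, u, b, X)  # Pr[Z(v)=a, Z(u)=b | X, T]
            B[(X[v], a)] += f(X) * posterior(v, a, None, None, X)  # Pr[Z(v)=a | X, T]
    w = 0.0
    for a in (0, 1):
        for b in (0, 1):
            if A[(a, b)] > 0:  # A * log of the closed-form CPD update
                w += A[(a, b)] * math.log(A[(a, b)] / (A[(0, b)] + A[(1, b)]))
        for s in (0, 1):
            if B[(s, a)] > 0:
                w += B[(s, a)] * math.log(B[(s, a)] / (B[(0, a)] + B[(1, a)]))
    return w

def best_tree(vertices, root, data, f, posterior):
    G = nx.DiGraph()
    for u in vertices:
        for v in vertices:
            if v != root and u != v:   # no arcs into the root
                G.add_edge(u, v, weight=arc_weight(u, v, data, f, posterior))
    # Edmonds' optimal branching: the maximum-weight spanning arborescence of G
    return nx.maximum_spanning_arborescence(G)
```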
An optimal arborescence of a directed graph is a rooted directed tree on the same set of vertices as the directed graph, i.e., a subgraph that has exactly one directed path from one specified vertex, called the root, to any other vertex, and has maximum arc weight sum among all such rooted directed trees. For any arborescence T' of G, the sum of the arc weights equals, by the construction of G, the maximum value of $Q_f(T', \theta'; T)$ over all $\theta'$. It follows that a (spanning) directed tree T' is an optimal arborescence of G if and only if T' maximizes the $Q_f$ term. And so, applying Edmonds's algorithm to G gives the desired directed tree. Tarjan's implementation of Edmonds's algorithm runs in quadratic time [15, 16, 17]. Hence, the total running time for the algorithm is O(|D| · n²).

2.3 HOT-mixtures

In this section we extend our model to HOT-mixtures by including an initial random choice of one of several HOTs and letting the final outcome be generated by the chosen HOT. We will also obtain an EM-based model-training algorithm for HOT-mixtures by showing how to optimize the expected complete log-likelihoods for HOT-mixtures. Formally, we will use k HOTs T_1, ..., T_k and a random mixing variable I that takes on values in 1, ..., k. The probability that I = i is denoted π_i, and π = (π_1, ..., π_k) is a vector of parameters of the model in addition to those of the HOTs (π_1, ..., π_k are constrained to sum to 1). The following notation is convenient:

$$\gamma_i(X) = \Pr[I = i \mid X, M] = \frac{\pi_i \Pr[X \mid T_i]}{\sum_{j \in [k]} \pi_j \Pr[X \mid T_j]}.$$

For a HOT-mixture, the expected complete log-likelihood can be expressed as follows:

$$\sum_{X \in D} \sum_{Z, I} \Pr[Z, I \mid X, M] \log \Pr[Z, I, X \mid M']. \qquad (4)$$

Using standard EM methodology, it is possible to show that (4) can be maximized by independently maximizing

$$\sum_{i \in [k]} \sum_{X \in D} \gamma_i(X) \log(\pi'_i) \qquad (5)$$

and, for each i = 1, ..., k, maximizing

$$\sum_{X \in D} \sum_{Z} \Pr[Z \mid X, T_i]\, \gamma_i(X) \log(\Pr[Z, X \mid T'_i]). \qquad (6)$$

Finding a π' = (π'_1, ..., π'_k) maximizing (5) is straightforward (see for instance [19]) and, for each i = 1, ..., k, finding a T'_i that maximizes the weighted Q-term in (6) can be done as described in the previous subsections (with γ_i(X) weighting the data points).
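For completeness, a minimal sketch of the mixture E-step (the responsibilities γ_i) and the closed-form update of π maximizing (5); a routine `loglik(T, X)` computing log Pr[X | T] is assumed to be given, and the names are ours.

```python
# Mixture E-step and pi update, under the assumptions stated above.
import math

def responsibilities(hots, pi, X, loglik):
    logs = [math.log(p) + loglik(T, X) for T, p in zip(hots, pi)]
    m = max(logs)                                   # log-sum-exp for stability
    den = sum(math.exp(s - m) for s in logs)
    return [math.exp(s - m) / den for s in logs]    # gamma_i(X)

def update_pi(hots, pi, data, loglik):
    gamma = [responsibilities(hots, pi, X, loglik) for X in data]
    # pi'_i = (1/|D|) * sum_X gamma_i(X) maximizes Eq. (5) on the simplex
    return [sum(g[i] for g in gamma) / len(data) for i in range(len(hots))]
```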
3 Results

In this section, we report results obtained by applying our algorithms to synthetic and cytogenetic cancer data. In the standard version of the EM algorithm, there are four parameters per edge of a HOT. The number of parameters can be reduced by letting some parameters be global, e.g., by letting ε_X(u) = ε_X(u') for all vertices u and u'. There are three parameters whose global estimation is desirable: ε_X, ε_Z, and Pr[X(u) = 0 | Z(u) = 1]. However, for technical reasons, requiring that ε_Z be global makes it impossible to derive an EM algorithm. Therefore, we will distinguish between two different versions of the algorithm: one with free parameters and one with global parameters. The free parameter version then corresponds to the standard EM algorithm, while the global parameter version corresponds to letting ε_X and Pr[X(u) = 0 | Z(u) = 1] be global. When evaluating the global parameter version of the algorithm using synthetic data, we follow the convention of letting all three error parameters be global when generating data. Other conventions used for all the tests described here include the following. We enforce an upper limit of 0.5 on ε_Z and ε_X. Also, for each data set, we first run the algorithm on a set of randomly generated start HOTs or start HOT-mixtures for 10 iterations. The HOT or HOT-mixture that results in the best likelihood is then run until convergence. Unless stated otherwise, the number of start trees and mixtures is 100.

3.1 Tests on Synthetic Data Sets

3.1.1 Single HOTs

We generated random HOTs with 10, 25, and 40 vertices, with parameters on the edges chosen uniformly in the intervals

$$\Pr[Z(u) = 1 \mid Z(p(u)) = 1] \in [0.1, 1.0], \qquad (7)$$
$$\Pr[X(u) = 0 \mid Z(u) = 1],\ \epsilon_X,\ \epsilon_Z \in [0.01, q], \qquad (8)$$

where q ∈ {0.05, 0.10, 0.25, 0.50}. For each combination, we generated 100 HOTs for a total of 3 · 4 · 100 = 1200 HOTs. Figure 2 shows the result of our experiments on synthetic data. An edge of the generated HOT connecting one specific aberration to another is considered to have been correctly recovered if the HOT obtained from the algorithm connects the same two aberrations in the same direction. We also compared the performance of our algorithms with that of Mtreemix by Beerenwinkel et al. [11]. The data generated from our single HOTs were passed to Mtreemix and the same criterion as above was used to detect correctly recovered edges (no special options were set when running Mtreemix on data generated with global parameters, since no distinction between global and free parameters can be made on oncogenetic trees). Mtreemix outperforms our methods when the HOTs and the error parameters are small, while our algorithms outperform Mtreemix significantly as the size of the HOTs or the error parameters become larger.

[Figure 2 consists of bar charts, panels (a) and (b) for the free and global parameter cases respectively, comparing the EM algorithm and Mtreemix over 10, 25, and 40 vertices, q ∈ {0.05, 0.1, 0.25, 0.5}, and increasing numbers of data points; the plots are omitted from this text version.]

Figure 2: Histograms showing the mean percentage of edges that were correctly recovered by the algorithm for the free parameter case, together with error bars showing one standard deviation.

[Figure 3 consists of bar charts for q ∈ {0.05, 0.1, 0.25} and mixture probabilities 0.5 vs 0.5, 0.3 vs 0.7, and 0.1 vs 0.9 over increasing numbers of data points; the plots are omitted from this text version.]

Figure 3: Histograms showing the proportion of edges correctly recovered by the EM algorithm for HOT-mixtures with global parameters on two HOTs with 25 vertices each. Each bar represents 100 mixtures. Error bars show one standard deviation.

3.1.2 HOT Mixtures

We also tested the ability of the algorithm to recover a mixture of two HOTs. The results are shown in Figure 3. When measuring the number of correctly recovered edges, the following procedure was used. Each HOT produced from the algorithm was compared to each HOT from which the data was generated, and the number of correctly recovered edges was noted. The best way of matching the two HOTs produced from the algorithm with the two original HOTs was then determined. Two features can clearly be distinguished: the results improve as the size of the data increases, and the algorithm performs better when the HOTs have equal probability in the mixture.
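A sketch of this evaluation, under the simplifying assumption (ours, not the paper's) that each tree is represented as a set of directed (tail, head) aberration pairs:

```python
# Edge-recovery score: an arc counts as correctly recovered if the inferred
# tree connects the same two aberrations in the same direction.
def recovered_fraction(true_arcs, inferred_arcs):
    """Both arguments are sets of (tail, head) aberration pairs."""
    return len(true_arcs & inferred_arcs) / len(true_arcs)

def best_matching_score(true_pair, inferred_pair):
    """For a two-HOT mixture, score both pairings and keep the better one."""
    a = (recovered_fraction(true_pair[0], inferred_pair[0])
         + recovered_fraction(true_pair[1], inferred_pair[1]))
    b = (recovered_fraction(true_pair[0], inferred_pair[1])
         + recovered_fraction(true_pair[1], inferred_pair[0]))
    return max(a, b) / 2
```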
[Figure 4 shows four tree diagrams over CNAs such as 3p-, 5q-, +17, and 8p-; the diagrams are omitted from this text version.]

Figure 4: HOTs obtained from RCC data. (a) shows an adapted version of the pathways for CC data published in [21]. (b) is a figure adapted from [22] showing the pathways obtained from statistical analysis of RCC data. (c) and (d) are the HOTs we obtained from the RCC data using only aberrations on the left and right pathways in (b), respectively. Notice the high level of agreement between the root-to-leaf paths in the recovered HOTs and those in (b).

3.2 Tests on Cancer Data

Our cytogenetic data for colon (CC) and kidney (RCC) cancer consist of 512 and 998 tumors, respectively. The data consist of measurements on 41 common aberrations (18 gains, 23 losses) for CC and 28 (13 gains, 15 losses) for RCC. The data have previously been analyzed in [21] and [22], resulting in suggested pathways of progression. These analyses were based on Principal Component Analysis (PCA) performed on correlations between aberrations and a statistical measure called time of occurrence (TO), which measures how early or late an aberration occurs during progression. The aberrations were then clustered based on the PCA, and each cluster was manually formed into a pathway (based on PCA and TO). One advantage of our approach is that we are able to replace the manual curation by automated computational steps. Another advantage is that our models assign probabilities to data, so different models can be compared objectively.

We expect ε_Z and ε_X to be small in the type of data that we are using. We obtained the n most correlated aberrations in our CC data, for n ∈ {4, ..., 11}, and tested different upper limits on ε_Z and ε_X. The best correspondence to previously published analyses of the data, measured by counting the number of bad edges, was found when ε_Z ≤ 0.25 and ε_X ≤ 0.01. A bad edge is one that contradicts the partial ordering given by the pathways described in [21], of which the relevant part is shown in Figure 4(a). Having found upper limits that work well on the CC data, we applied the algorithm with these upper bounds to the RCC data. The earlier analyses in [22] strongly suggest that two HOTs are required to model the RCC data. Given that our mixture model appears, from our tests on synthetic data, to require substantially more data points to recover the underlying HOTs in a satisfactory manner, we used the results of the analysis in [22] to divide the aberrations into two (overlapping) clusters, for which we created HOTs separately. These HOTs can be seen in Figures 4(c) and 4(d), and they show very good agreement with the pathways from [22] shown in Figure 4(b). For instance, each root-to-leaf path in the HOT of Figure 4(c) agrees perfectly with the pathway shown in Figure 4(b).

References

[1] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans Inform Theor, 14(3):462-467, 1968.
[2] M. Meila and M.I. Jordan. Learning with mixtures of trees. J Mach Learn Res, 1(1):1-48, 2000.
[3] N. Friedman, M. Ninio, I. Pe'er, and T. Pupko. A structural EM algorithm for phylogenetic inference. J Comput Biol, 9(2):331-353, 2002.
[4] N. Friedman. The Bayesian structural EM algorithm. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 129-138. Morgan Kaufmann, 1998.
[5] P. Leray and O. François. Bayesian network structural learning and incomplete data. In
Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR 2005), pages 33-40, 2005.
[6] R. Desper, F. Jiang, O.P. Kallioniemi, H. Moch, C.H. Papadimitriou, and A.A. Schäffer. Inferring tree models for oncogenesis from comparative genome hybridization data. J Comput Biol, 6(1):37-51, 1999.
[7] M. Hjelm, M. Höglund, and J. Lagergren. New probabilistic network models and algorithms for oncogenesis. J Comput Biol, 13(4):853-865, May 2006.
[8] M.D. Radmacher, R. Simon, R. Desper, R. Taetle, A.A. Schäffer, and M.A. Nelson. Graph models of oncogenesis with an application to melanoma. J Theor Biol, 212(4):535-548, Oct 2001.
[9] N. Beerenwinkel, J. Rahnenführer, M. Däumer, D. Hoffmann, R. Kaiser, J. Selbig, and T. Lengauer. Learning multiple evolutionary pathways from cross-sectional data. J Comput Biol, 12(6):584-598, Jul 2005.
[10] J. Rahnenführer, N. Beerenwinkel, W.A. Schulz, C. Hartmann, A. von Deimling, B. Wullich, and T. Lengauer. Estimating cancer survival and clinical outcome based on genetic tumor progression scores. Bioinformatics, 21(10):2438-2446, May 2005.
[11] N. Beerenwinkel, J. Rahnenführer, R. Kaiser, D. Hoffmann, J. Selbig, and T. Lengauer. Mtreemix: a software package for learning and using mixture models of mutagenetic trees. Bioinformatics, 21(9):2106-2107, 2005.
[12] N. Beerenwinkel, N. Eriksson, and B. Sturmfels. Conjunctive Bayesian networks. Bernoulli, 13(4):893-909, Jan 2007.
[13] N. Beerenwinkel, N. Eriksson, and B. Sturmfels. Evolution on distributive lattices. J Theor Biol, 242(2):409-420, Sep 2006.
[14] M. Gerstung, M. Baudis, H. Moch, and N. Beerenwinkel. Quantifying cancer progression with conjunctive Bayesian networks. Bioinformatics, 25(21):2809-2815, Nov 2009.
[15] R.E. Tarjan. Finding optimum branchings. Networks, 7(1):25-36, 1977.
[16] R.M. Karp. A simple derivation of Edmonds' algorithm for optimum branching. Networks, 1(265-272):5, 1971.
[17] P. Camerini, L. Fratta, and F. Maffioli. The k best spanning arborescences of a network. Networks, 10(2):91-110, 1980.
[18] F. Mitelman, B. Johansson, and F. Mertens (Eds.). Mitelman database of chromosome aberrations and gene fusions in cancer, 2010. http://cgap.nci.nih.gov/Chromosomes/Mitelman.
[19] R. Durbin, S.R. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis. Cambridge University Press, Cambridge, 1998.
[20] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc B, 39(1):1-38, 1977.
[21] M. Höglund, D. Gisselsson, G.B. Hansen, T. Säll, F. Mitelman, and M. Nilbert. Dissecting karyotypic patterns in colorectal tumors: Two distinct but overlapping pathways in the adenoma-carcinoma transition. Canc Res, 62:5939-5946, 2002.
[22] M. Höglund, D. Gisselsson, M. Soller, G.B. Hansen, P. Elfving, and F. Mitelman. Dissecting karyotypic patterns in renal cell carcinoma: an analysis of the accumulated cytogenetic data. Canc Genet Cytogenet, 153(1):1-9, 2004.
Semi-supervised Regression via Parallel Field Regularization

Binbin Lin, Chiyuan Zhang, Xiaofei He
State Key Lab of CAD&CG, College of Computer Science, Zhejiang University
Hangzhou 310058, China
{binbinlinzju, chiyuan.zhang.zju, xiaofeihe}@gmail.com

Abstract

This paper studies the problem of semi-supervised learning from the vector field perspective. Many of the existing works use the graph Laplacian to ensure the smoothness of the prediction function on the data manifold. However, beyond smoothness, it is suggested by recent theoretical work that we should ensure second order smoothness for achieving faster rates of convergence for semi-supervised regression problems. To achieve this goal, we show that the second order smoothness measures the linearity of the function, and that the gradient field of a linear function has to be a parallel vector field. Consequently, we propose to find a function which minimizes the empirical error and simultaneously requires its gradient field to be as parallel as possible. We give a continuous objective function on the manifold and discuss how to discretize it by using random points. The discretized optimization problem turns out to be a sparse linear system which can be solved very efficiently. The experimental results have demonstrated the effectiveness of our proposed approach.

1 Introduction

In many machine learning problems, one is often confronted with very high dimensional data. There is a strong intuition that the data may have a lower dimensional intrinsic representation. Various researchers have considered the case when the data is sampled from a submanifold embedded in the ambient Euclidean space. Consequently, learning with the low dimensional manifold structure, or specifically the intrinsic topological and geometrical properties of the data manifold, becomes a crucial problem. In the past decade, many geometrically motivated approaches have been developed. The early work mainly considers the problem of dimensionality reduction. One hopes that the manifold structure could be preserved in the much lower dimensional Euclidean space. For example, ISOMAP [1] is a global approach which tries to preserve the pairwise geodesic distances on the manifold. Different from ISOMAP, Hessian Eigenmaps (HLLE, [2]) is a local approach for a similar purpose. Locally Linear Embedding (LLE, [3]) and Laplacian Eigenmaps (LE, [4]) can be viewed as Laplacian operator based methods which mainly consider the local neighborhood structure of the manifold. Besides dimensionality reduction, Laplacian based regularization has also been widely employed in semi-supervised learning. These methods construct a nearest neighbor graph over the labeled and unlabeled data to model the underlying manifold structure, and use the graph Laplacian [5] to measure the smoothness of the learned function on the manifold. A variety of semi-supervised learning approaches using the graph Laplacian have been proposed [6, 7, 8]. In semi-supervised regression, some recent theoretical analysis [9] shows that using the graph Laplacian regularizer does not lead to faster minimax rates of convergence. [9] also states that the Laplacian regularizer is way too general for measuring the smoothness of the function. It is further suggested that we should ensure second order smoothness to achieve faster rates of convergence for semi-supervised regression problems. The Laplacian regularizer is the integral of the norm of the gradient of the function, i.e., it involves only the first order derivative of the function.
In this paper, we design regularization terms that penalize the second order smoothness of the function, i.e., the linearity of the function. Estimating the second order covariant derivative of the function is a very challenging problem. We try to address this problem from the vector field perspective. We show that the gradient field of a linear function has to be a parallel vector field (or parallel field in short). Consequently, we propose a novel approach called Parallel Field Regularization (PFR) to simultaneously find the function and its gradient field, while requiring the gradient field to be as parallel as possible. Specifically, we propose to compute a function and a vector field which satisfy three conditions simultaneously: 1) the function minimizes the empirical error on the labeled data, 2) the vector field is close to the gradient field of the function, 3) the vector field should be as parallel as possible. A novel regularization framework from the vector field perspective is developed. We give both the continuous and discrete forms of the objective function, and develop an efficient optimization scheme to solve this problem.

2 Regularization on the Vector Field

We first briefly introduce semi-supervised learning methods with regularization on the function. Let M be a d-dimensional submanifold in $\mathbb{R}^m$. Given l labeled data points $(x_i, y_i)_{i=1}^l$ on M, we aim to learn a function $f: M \to \mathbb{R}$ based on the manifold M and the labeled points $(x_i, y_i)_{i=1}^l$. A framework of semi-supervised learning based on differential operators can be formulated as follows:

$$\arg\min_{f \in C^\infty(M)} E(f) = \frac{1}{l} \sum_{i=1}^{l} R_0(f(x_i), y_i) + \lambda_1 R_1(f)$$

where $C^\infty(M)$ denotes smooth functions on M, $R_0: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is the loss function, and $R_1(f): C^\infty(M) \to \mathbb{R}$ is a regularization functional. $R_1$ is often written as a functional norm associated with a differential operator D, i.e., $R_1(f) = \int_M \|Df\|^2$. If D is the covariant derivative $\nabla$ on the manifold, then $R_1(f) = \int_M \|\nabla f\|^2 = \int_M f L(f)$ becomes the Laplacian regularizer. If D is the Hessian operator on the manifold, then $R_1(f) = \int_M \|\mathrm{Hess}\, f\|^2$ becomes the Hessian regularizer.

2.1 Parallel Fields and Linear Functions

We first show the relationship between a parallel field and a linear function on the manifold.

Definition 2.1 (Parallel Field [10]). A vector field X on a manifold M is a parallel field if $\nabla X \equiv 0$, where $\nabla$ is the covariant derivative on M.

Definition 2.2 (Linear Function [10]). A continuous function $f: M \to \mathbb{R}$ is said to be linear if

$$(f \circ \gamma)(t) = f(\gamma(0)) + ct \qquad (1)$$

for each geodesic $\gamma$.

That a function f is linear means that it varies linearly along the geodesics of the manifold. It is a natural extension of linear functions on Euclidean space.

Proposition 2.1. [10] Let V be a parallel field on the manifold. If it is also the gradient field of a function f, $V = \nabla f$, then f is a linear function on the manifold.

This proposition tells us the relationship between a parallel field and a linear function on the manifold.

Figure 1: Covariant derivative demonstration. Let V, Y be two vector fields on the manifold M. Given a point $x \in M$, we show how to compute the vector $\nabla_Y V|_x$. Let $\gamma(t)$ be a curve on M, $\gamma: I \to M$, which satisfies $\gamma(0) = x$ and $\gamma'(0) = Y_x$.
Then the covariant derivative along the direction $\gamma'(0)$ can be computed by projecting $\frac{dV}{dt}\big|_{t=0}$ onto the tangent space $T_x M$ at x. In other words, $\nabla_{\gamma'(0)} V|_x = P_x\big(\frac{dV}{dt}\big|_{t=0}\big)$, where $P_x: v \in \mathbb{R}^m \mapsto P_x(v) \in T_x M$ is the projection matrix. It is not difficult to check that the computation of $\nabla_Y V|_x$ is independent of the choice of the curve $\gamma$.

2.2 Objective Function

We aim to design regularization terms that penalize the second order smoothness of the function. Following the above analysis, we first approximate the gradient field of the prediction function by a vector field; then we require the vector field to be as parallel as possible. Therefore, we try to learn the function f and its gradient field $\nabla f$ simultaneously. Formally, we propose to learn a function f and a vector field V on the manifold with two constraints:

- The vector field V should be close to the gradient field $\nabla f$ of f, which can be formalized as follows:

$$\min_{f \in C^\infty, V} R_1(f, V) = \int_M \|\nabla f - V\|^2 \qquad (2)$$

- The vector field V should be as parallel as possible:

$$\min_{V} R_2(V) = \int_M \|\nabla V\|_F^2 \qquad (3)$$

where $\nabla$ is the covariant derivative on the manifold and $\|\cdot\|_F$ denotes the Frobenius norm.

In the following, we provide some detailed explanation of $R_2(V)$. $\nabla V$ measures the change of the vector field V. If $\nabla V$ vanishes, then V is a parallel field. For a given direction $Y_x$ at $x \in M$, the geometrical meaning of $\nabla_Y V|_x$ is demonstrated in Fig. 1. For a fixed point $x \in M$, $\nabla V|_x$ is a linear map on the tangent space $T_x M$. According to the definition of the Frobenius norm, we have

$$\|\nabla V\|_F^2 = \sum_{i,j=1}^{d} \big(g(\nabla_{\partial_i} V, \partial_j)\big)^2 = \sum_{i=1}^{d} g(\nabla_{\partial_i} V, \nabla_{\partial_i} V) \qquad (4)$$

where g is the Riemannian metric on M and $\partial_1, \ldots, \partial_d$ is an orthonormal basis of $T_x M$.

Naturally, we propose the following objective function based on vector field regularization:

$$\arg\min_{f \in C^\infty(M),\, V} E(f, V) = \frac{1}{l} \sum_{i=1}^{l} R_0(x_i, y_i, f) + \lambda_1 R_1(f, V) + \lambda_2 R_2(V) \qquad (5)$$

For the loss function $R_0$, we use the squared loss $R_0(f(x_i), y_i) = (f(x_i) - y_i)^2$ for simplicity.

3 Implementation

Since the manifold M is unknown, the function f which minimizes (5) cannot be solved directly. In this section, we discuss how to discretize the continuous objective function (5).

3.1 Vector Field Representation

Given l labeled data points $(x_i, y_i)_{i=1}^l$ and $n - l$ unlabeled points $x_{l+1}, \ldots, x_n$ in $\mathbb{R}^m$, let $f_i = f(x_i)$, $i = 1, \ldots, n$; our goal is to learn a vector $\mathbf{f} = (f_1, \ldots, f_n)^T$. We first construct a nearest neighbor graph by either $\epsilon$-neighborhood or k nearest neighbors. Let $x_i \sim x_j$ denote that $x_i$ and $x_j$ are neighbors. For each point $x_i$, we estimate its tangent space $T_{x_i} M$ by performing PCA on its neighborhood. We choose the largest d eigenvectors as the basis since $T_{x_i} M$ is d-dimensional. Let $T_i \in \mathbb{R}^{m \times d}$ be the matrix whose columns constitute an orthonormal basis for $T_{x_i} M$. It is easy to show that $P_i = T_i T_i^T$ is the unique orthogonal projection from $\mathbb{R}^m$ onto the tangent space $T_{x_i} M$ [11]. That is, for any vector $a \in \mathbb{R}^m$, we have $P_i a \in T_{x_i} M$ and $(a - P_i a) \perp P_i a$.

Let V be a vector field on the manifold. For each point $x_i$, let $V_{x_i}$ denote the value of the vector field V at $x_i$, and $\nabla V|_{x_i}$ the value of $\nabla V$ at $x_i$. According to the definition of a vector field, $V_{x_i}$ should be a vector in the tangent space $T_{x_i} M$. Therefore, it can be represented in the local coordinates of the tangent space as $V_{x_i} = T_i v_i$, where $v_i \in \mathbb{R}^d$. We define $\mathbf{V} = (v_1^T, \ldots, v_n^T)^T \in \mathbb{R}^{dn}$. That is, $\mathbf{V}$ is a dn-dimensional big column vector which concatenates all the $v_i$'s. In the following, we first discretize our objective function E(f, V), and then minimize it to obtain $\mathbf{f}$ and $\mathbf{V}$.
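A small numpy sketch of this tangent-space estimation step; the helper names and the precomputed k-NN index lists are our assumptions, not the paper's.

```python
# Local PCA per neighborhood: the top-d principal directions span T_xi M.
import numpy as np

def tangent_bases(X, neighbors, d):
    """X: (n, m) data; neighbors: list of index arrays (each with >= d points);
    returns a list of (m, d) matrices T_i with orthonormal columns."""
    bases = []
    for idx in neighbors:
        local = X[idx] - X[idx].mean(axis=0)        # center the neighborhood
        # top-d right singular vectors = top-d PCA directions
        _, _, Vt = np.linalg.svd(local, full_matrices=False)
        bases.append(Vt[:d].T)                      # columns: basis of T_xi M
    return bases

# The orthogonal projection onto T_xi M is then P_i = T_i @ T_i.T
```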
3.2 Gradient Field Computation

In order to discretize $R_1(f, V)$, we first discuss the Taylor expansion of f on the manifold. Let $\exp_x$ denote the exponential map at x. The exponential map $\exp_x: T_x M \to M$ maps the tangent space $T_x M$ to the manifold M. Let $a \in T_x M$ be a tangent vector. Then there is a unique geodesic $\gamma_a$ satisfying $\gamma_a(0) = x$ with the initial tangent vector $\gamma_a'(0) = a$. The corresponding exponential map is defined as $\exp_x(ta) = \gamma_a(t)$, $t \in [0, 1]$. Locally, the exponential map is a diffeomorphism. Note that $f \circ \exp_x: T_x M \to \mathbb{R}$ is a smooth function on $T_x M$. Then the following Taylor expansion of f holds:

$$f(\exp_x(a)) \approx f(x) + \langle \nabla f(x), a \rangle, \qquad (6)$$

where $a \in T_x M$ is a sufficiently small tangent vector. In the discrete case, let $\exp_{x_i}$ denote the exponential map at $x_i$. Since $\exp_{x_i}$ is a diffeomorphism, there exists a tangent vector $a_{ij} \in T_{x_i} M$ such that $\exp_{x_i}(a_{ij}) = x_j$. According to the definition of the exponential map, $\|a_{ij}\|$ equals the geodesic distance between $x_i$ and $x_j$, which we denote $d_{ij}$. Let $e_{ij}$ be the unit vector in the direction of $a_{ij}$, i.e., $e_{ij} = a_{ij}/d_{ij}$. We approximate $a_{ij}$ by projecting the vector $x_j - x_i$ onto the tangent space, i.e., $a_{ij} = P_i(x_j - x_i)$. Therefore, Eq. (6) can be rewritten as follows:

$$f(x_j) = f(x_i) + \langle \nabla f(x_i), P_i(x_j - x_i) \rangle \qquad (7)$$

Since f is unknown, $\nabla f$ is also unknown. In the following, we discuss how to compute $\|\nabla f(x_i) - V_{x_i}\|^2$ discretely. We first show that the vector norm can be computed by an integral on a unit sphere, where the unit sphere can be discretely approximated by a neighborhood. Let u be a unit vector in the tangent space $T_x M$; then we have (see exercise 1.12 in [12])

$$\frac{1}{\omega_d} \int_{S^{d-1}} \langle X, u \rangle^2 \, d\sigma(X) = 1 \qquad (8)$$

where $S^{d-1}$ is the unit (d-1)-sphere, $d\,\omega_d$ is its volume, and $d\sigma$ is its volume form. Let $\partial_i$, $i = 1, \ldots, d$, be an orthonormal basis of $T_x M$. Then any vector $b \in T_x M$ can be written as $b = \sum_{i=1}^d b^i \partial_i$. Furthermore, we have

$$\|b\|^2 = \sum_{i=1}^{d} (b^i)^2 = \sum_{i=1}^{d} (b^i)^2 \frac{1}{\omega_d} \int_{S^{d-1}} \langle X, \partial_i \rangle^2 d\sigma(X) = \frac{1}{\omega_d} \int_{S^{d-1}} \langle X, b \rangle^2 d\sigma(X)$$

From Eq. (7), we can see that $\langle \nabla f(x_i), P_i(x_j - x_i) \rangle = f(x_j) - f(x_i)$. Thus, we have

$$\|\nabla f(x_i) - V_{x_i}\|^2 = \frac{1}{\omega_d} \int_{S^{d-1}} \langle X, \nabla f(x_i) - V_{x_i} \rangle^2 d\sigma(X)$$
$$\approx \sum_{j \sim i} \langle e_{ij}, \nabla f(x_i) - V_{x_i} \rangle^2 = \sum_{j \sim i} w_{ij} \langle a_{ij}, \nabla f(x_i) - V_{x_i} \rangle^2$$
$$= \sum_{j \sim i} w_{ij} \langle P_i(x_j - x_i), \nabla f(x_i) - V_{x_i} \rangle^2$$
$$= \sum_{j \sim i} w_{ij} \big( (P_i(x_j - x_i))^T V_{x_i} - f(x_j) + f(x_i) \big)^2, \qquad (9)$$

where $w_{ij} = d_{ij}^{-2}$. The weight $w_{ij}$ can be approximated either by the heat kernel weight $\exp(-\|x_i - x_j\|^2/\sigma)$ or simply by the 0-1 weight. Then $R_1$ reduces to the following:

$$R_1(\mathbf{f}, \mathbf{V}) = \sum_i \sum_{j \sim i} w_{ij} \big( (x_j - x_i)^T T_i v_i - f_j + f_i \big)^2 \qquad (10)$$
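A direct numpy transcription of Eq. (10), reusing the tangent bases from the sketch above; the edge-list representation and names are our assumptions.

```python
# Discrete gradient-field consistency term R1 of Eq. (10).
import numpy as np

def R1_discrete(f, v, X, bases, edges):
    """f: (n,) function values; v: list of (d,) tangent coordinates;
    X: (n, m) data; edges: iterable of (i, j, w_ij) with j a neighbor of i."""
    total = 0.0
    for i, j, w in edges:
        # (x_j - x_i)^T T_i v_i - f_j + f_i
        r = (X[j] - X[i]) @ (bases[i] @ v[i]) - f[j] + f[i]
        total += w * r * r
    return total
```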
3.3 Parallel Field Computation

As discussed before, we hope the vector field is as parallel as possible on the manifold. In the discrete case, $R_2$ becomes

$$R_2(\mathbf{V}) = \sum_{i=1}^{n} \|\nabla V|_{x_i}\|_F^2 \qquad (11)$$

In the following, we discuss how to approximate $\|\nabla V|_{x_i}\|_F^2$ for a given point $x_i$. Since we do not know $\nabla_{\partial_i} V$ for a given basis $\partial_i$, $\|\nabla V|_{x_i}\|_F^2$ cannot be computed according to Eq. (4). We define a (0, 2) symmetric tensor $\Theta$ as $\Theta(X, Y) = g(\nabla_X V, \nabla_Y V)$, where X and Y are vector fields on the manifold. We have $\mathrm{Trace}(\Theta) = \sum_{i=1}^{d} g(\nabla_{\partial_i} V, \nabla_{\partial_i} V) = \|\nabla V\|_F^2$, where $\partial_1, \ldots, \partial_d$ is an orthonormal basis of the tangent space. For the trace of $\Theta$, we have the following geometric interpretation (see exercise 1.12 in [12]):

$$\mathrm{Trace}(\Theta) = \frac{1}{\omega_d} \int_{S^{d-1}} \Theta(X, X) \, d\sigma(X) \qquad (12)$$

where $S^{d-1}$ is the unit (d-1)-sphere, $d\,\omega_d$ its volume, and $d\sigma$ its volume form. So for a given point $x_i$, we can approximate $\|\nabla V|_{x_i}\|_F^2$ by the following:

$$\|\nabla V|_{x_i}\|_F^2 = \mathrm{Trace}(\Theta)|_{x_i} = \frac{1}{\omega_d} \int_{S^{d-1}} \Theta(X, X)|_{x_i} \, d\sigma(X) \approx \sum_{j \sim i} \Theta(e_{ij}, e_{ij}) = \sum_{j \sim i} \|\nabla_{e_{ij}} V\|^2 \qquad (13)$$

Then we discuss how to discretize $\nabla_{e_{ij}} V$. Given $e_{ij} \in T_{x_i} M$, there exists a unique geodesic $\gamma(t)$ which satisfies $\gamma(0) = x_i$ and $\gamma'(0) = e_{ij}$. Then the covariant derivative of the vector field V along $e_{ij}$ is given by (please see Fig. 1)

$$\nabla_{e_{ij}} V = P_i\Big(\frac{dV}{dt}\Big|_{t=0}\Big) = P_i \lim_{t \to 0} \frac{V(\gamma(t)) - V(\gamma(0))}{t} \approx P_i \frac{V_{x_j} - V_{x_i}}{d_{ij}} = \sqrt{w_{ij}}\,(P_i V_{x_j} - V_{x_i})$$

Combining Eq. (13), $R_2$ becomes:

$$R_2(\mathbf{V}) = \sum_i \sum_{j \sim i} w_{ij} \|P_i T_j v_j - T_i v_i\|^2 \qquad (14)$$

3.4 Objective Function in the Discrete Form

Let I denote an $n \times n$ diagonal matrix where $I_{ii} = 1$ if $x_i$ is labeled and $I_{ii} = 0$ otherwise. And let $\mathbf{y} \in \mathbb{R}^n$ be a column vector whose i-th element is $y_i$ if $x_i$ is labeled and 0 otherwise. Then

$$R_0(\mathbf{f}) = \frac{1}{l} (\mathbf{f} - \mathbf{y})^T I (\mathbf{f} - \mathbf{y}) \qquad (15)$$

Combining $R_1$ in Eq. (10) and $R_2$ in Eq. (14), the final objective function in the discrete form can be written as follows:

$$E(\mathbf{f}, \mathbf{V}) = \frac{1}{l} (\mathbf{f} - \mathbf{y})^T I (\mathbf{f} - \mathbf{y}) + \lambda_1 \sum_i \sum_{j \sim i} w_{ij} \big( (x_j - x_i)^T T_i v_i - f_j + f_i \big)^2 + \lambda_2 \sum_i \sum_{j \sim i} w_{ij} \| P_i T_j v_j - T_i v_i \|^2 \qquad (16)$$

3.5 Optimization

In this subsection, we discuss how to solve the optimization problem (16). Let L denote the Laplacian matrix of the graph with weights $w_{ij}$. Then we can rewrite $R_1$ as follows:

$$R_1(\mathbf{f}, \mathbf{V}) = 2\mathbf{f}^T L \mathbf{f} + \sum_i \sum_{j \sim i} w_{ij} \big( (x_j - x_i)^T T_i v_i \big)^2 - 2 \sum_i \sum_{j \sim i} w_{ij} (x_j - x_i)^T T_i v_i \, s_{ij}^T \mathbf{f}$$

where $s_{ij} \in \mathbb{R}^n$ is a selection vector of all zero elements except for the i-th element being -1 and the j-th element being 1. Then the partial derivative of $R_1$ with respect to the variable $v_i$ is

$$\frac{\partial R_1(\mathbf{f}, \mathbf{V})}{\partial v_i} = 2 \sum_{j \sim i} w_{ij} T_i^T (x_j - x_i)(x_j - x_i)^T T_i v_i - 2 \sum_{j \sim i} w_{ij} T_i^T (x_j - x_i) s_{ij}^T \mathbf{f}$$

Thus we get

$$\frac{\partial R_1(\mathbf{f}, \mathbf{V})}{\partial \mathbf{V}} = 2 G \mathbf{V} - 2 C \mathbf{f} \qquad (17)$$

where G is a $dn \times dn$ block diagonal matrix, and $C = (C_1^T, \ldots, C_n^T)^T$ is a $dn \times n$ block matrix. Denoting the i-th $d \times d$ diagonal block of G by $G_{ii}$ and the i-th $d \times n$ block of C by $C_i$, we have

$$G_{ii} = \sum_{j \sim i} w_{ij} T_i^T (x_j - x_i)(x_j - x_i)^T T_i \qquad (18)$$

$$C_i = \sum_{j \sim i} w_{ij} T_i^T (x_j - x_i) s_{ij}^T \qquad (19)$$

The partial derivative of $R_1$ with respect to the variable $\mathbf{f}$ is

$$\frac{\partial R_1(\mathbf{f}, \mathbf{V})}{\partial \mathbf{f}} = 4 L \mathbf{f} - 2 C^T \mathbf{V} \qquad (20)$$

Similarly, we can compute the partial derivative of $R_2$ with respect to the variable $v_i$:

$$\frac{\partial R_2(\mathbf{V})}{\partial v_i} = 2 \sum_{j \sim i} w_{ij} \big( (T_i^T T_j T_j^T T_i + I) v_i - 2 T_i^T T_j v_j \big) = 2 \sum_{j \sim i} w_{ij} \big( (Q_{ij} Q_{ij}^T + I) v_i - 2 Q_{ij} v_j \big)$$

where $Q_{ij} = T_i^T T_j$. Thus we obtain

$$\frac{\partial R_2(\mathbf{V})}{\partial \mathbf{V}} = 2 B \mathbf{V} \qquad (21)$$

where B is a $dn \times dn$ sparse block matrix. If we index each $d \times d$ block by $B_{ij}$, then for $i, j = 1, \ldots, n$, we have

$$B_{ii} = \sum_{j \sim i} w_{ij} (Q_{ij} Q_{ij}^T + I) \qquad (22)$$

$$B_{ij} = \begin{cases} -2 w_{ij} Q_{ij}, & \text{if } x_i \sim x_j \\ 0, & \text{otherwise} \end{cases} \qquad (23)$$

Notice that $\frac{\partial R_0}{\partial \mathbf{f}} = \frac{2}{l} I(\mathbf{f} - \mathbf{y})$. Combining Eq. (17), Eq. (20), and Eq. (21), we have

$$\frac{\partial E(\mathbf{f}, \mathbf{V})}{\partial \mathbf{f}} = \frac{\partial R_0}{\partial \mathbf{f}} + \lambda_1 \frac{\partial R_1}{\partial \mathbf{f}} + \lambda_2 \frac{\partial R_2}{\partial \mathbf{f}} = 2\Big(\frac{1}{l} I + 2\lambda_1 L\Big)\mathbf{f} - 2\lambda_1 C^T \mathbf{V} - \frac{2}{l}\mathbf{y} \qquad (24)$$

$$\frac{\partial E(\mathbf{f}, \mathbf{V})}{\partial \mathbf{V}} = \frac{\partial R_0}{\partial \mathbf{V}} + \lambda_1 \frac{\partial R_1}{\partial \mathbf{V}} + \lambda_2 \frac{\partial R_2}{\partial \mathbf{V}} = -2\lambda_1 C \mathbf{f} + 2(\lambda_1 G + \lambda_2 B)\mathbf{V} \qquad (25)$$

Requiring that the derivatives vanish, we finally get the following linear system:

$$\begin{pmatrix} \frac{1}{l} I + 2\lambda_1 L & -\lambda_1 C^T \\ -\lambda_1 C & \lambda_1 G + \lambda_2 B \end{pmatrix} \begin{pmatrix} \mathbf{f} \\ \mathbf{V} \end{pmatrix} = \begin{pmatrix} \frac{1}{l}\mathbf{y} \\ 0 \end{pmatrix} \qquad (26)$$

[Figure 2 shows four temperature maps: (a) ground truth, (b) Laplacian (3.65), (c) Hessian (1.35), (d) PFR (1.14); the images are omitted from this text version.]

Figure 2: Global temperature prediction. Regression on the satellite measurement of temperatures in the middle troposphere. 1% of the samples are randomly selected as training data. The ground truth is shown in (a). The colors indicate temperature values (in Kelvin). The regression results are visualized in (b)-(d). The numbers in the captions are the mean absolute prediction errors.
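In code, once the sparse blocks are assembled, Eq. (26) reduces to a single sparse solve. A scipy sketch, assuming L, C, G, and B have been built per Eqs. (18), (19), (22), and (23); all names are ours.

```python
# Solving the sparse linear system of Eq. (26).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_pfr(L, C, G, B, I_lab, y, l, lam1, lam2):
    """L: (n, n) graph Laplacian; C: (dn, n); G, B: (dn, dn);
    I_lab: (n, n) diagonal label-indicator matrix; y: (n,) targets."""
    top = sp.hstack([I_lab / l + 2 * lam1 * L, -lam1 * C.T])
    bot = sp.hstack([-lam1 * C, lam1 * G + lam2 * B])
    A = sp.vstack([top, bot]).tocsc()
    rhs = np.concatenate([y / l, np.zeros(C.shape[0])])
    sol = spsolve(A, rhs)
    n = y.shape[0]
    return sol[:n], sol[n:]    # f, and the stacked tangent coordinates V
```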
4 Related Work and Discussion

The approximation of the Laplacian operator using the graph Laplacian [5] has enjoyed great success in the last decade. Some theoretical results [13, 14] also show the consistency of the approximation. One of the most important features of the graph Laplacian is that it is coordinate free. That is, it does not depend on any special coordinate system. The estimation of the Hessian is very difficult, and there is little work on it. Previous approaches [2, 15] first estimate normal coordinates in the tangent space, and then estimate the first order derivative at each point, which is a matrix pseudo-inversion problem. One major limitation of this is that when the number of nearest neighbors k is larger than $d + \frac{d(d+1)}{2}$, where d is the dimension of the manifold, the estimation will be inaccurate and unstable [15]. This is contradictory to the asymptotic case, since it is not desirable that k be bounded by a finite number when the data is sufficiently dense. In contrast, our method is coordinate free. Also, we directly estimate the norm of the second order derivative instead of trying to estimate its coefficients, which turns out to be an integral problem over the neighboring points. We only need to perform simple matrix multiplications to approximate the integral at each point, and do not have to solve matrix inversion problems. Therefore, asymptotically, we would expect our method to be much more accurate and robust for the approximation of the norm of the second order derivative.

5 Experiments

In this section, we compare our proposed Parallel Field Regularization (PFR) algorithm with two state-of-the-art semi-supervised regression methods: Laplacian regularized transduction (Laplacian) [8] and Hessian regularized transduction (Hessian)¹ [15], respectively. Our experiments are carried out on two real-world data sets. Regularization parameters for all algorithms are chosen via cross-validation.

5.1 Global Temperature

In this test, we perform regression on the earth's surface, which is a 2D sphere manifold. We try to predict the satellite measurement of temperatures in the middle troposphere in Dec. 2004², which contains 9504 valid temperature measurements. The coordinates (latitude, longitude) of the measurements are used as features and the corresponding temperature values are the responses. The dimension of the manifold is set to 2 and the number of nearest neighbors is set to 6 in graph construction. We randomly select 1% of the samples as labeled data, and compare the predicted temperature values with the ground truth on the rest of the data.

The regression results are shown in Fig. 2. The numbers in the captions indicate the mean absolute prediction errors generated by the different algorithms. It can be seen from the visualization result that Hessian and PFR perform better than Laplacian. Furthermore, from the prediction error, we can see that PFR outperforms Hessian.

¹ We use the code from the authors, downloadable from http://www.ml.uni-saarland.de/code/HessianSSR/HessianSSR.html.
² http://www.remss.com/msu/.

[Figure 3 is a line plot of MAE (roughly 5-20) against the number of labels (20-100) for Laplacian, Hessian, and PFR; Figure 4 shows example frames (frame 016 and frame 300) for each method. The plot and images are omitted from this text version.]

Figure 3: Results on the moving hand dataset.

Figure 4: Examples of regression results on the moving hand data set. 60 labeled samples are used for training. Each row shows the results obtained via the three algorithms for a frame. In each image, the red dots indicate the ground truth positions we labeled manually, and the blue arrows show the positions predicted by the different algorithms.

5.2 Positions of Moving Hands

In this subsection, we perform experiments using a video of a subject sitting on a sofa and waving his arms¹. Our goal is to predict the positions of the (left and right) elbows and wrists.
We extract the first 500 frames of the video and manually label the positions of the elbows and wrists. We scale each frame to a size of 120 × 90 and use the raw pixels (10800-dimensional vectors) as the features. The response for each frame is an 8-dimensional vector whose elements are the 2D coordinates of the elbows and wrists. Since there are 8 free parameters, we set the dimension of the manifold to 8. We use 18 nearest neighbors in graph construction.

We run the experiments with different numbers of labeled frames. For each given number of labeled frames, we perform 10 tests with randomly selected labeled sets. The average of the mean absolute error (MAE) for each test is calculated. The final result is shown in Fig. 3. As can be seen, PFR consistently outperforms the other two algorithms. Laplacian yields high MAE. Hessian is very unstable on this dataset, and the results vary drastically with different numbers of labels. We also show some example frames in Fig. 4. The red dots in the figures indicate the ground truth positions and the blue arrows are drawn by connecting the positions of elbows and wrists predicted by the different algorithms. Again we can verify that PFR performs better than the other two algorithms.

6 Conclusion

In this paper, we propose a novel semi-supervised learning algorithm from the vector field perspective. We show the relationship between vector fields and functions on the manifold. The parallelism of the vector field is used to measure the linearity of the target prediction function. Parallel fields are one kind of special vector fields on the manifold, which have very nice properties. It is interesting to explore other kinds of vector fields to facilitate learning on manifolds. Moreover, vector fields can also be used to study the geometry and topology of the manifold. For example, the Poincaré-Hopf theorem tells us that the sum of the indices over all the isolated zeroes of a vector field equals the Euler characteristic of the manifold, which is a very important topological invariant.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61125203, the National Basic Research Program of China (973 Program) under Grant 2012CB316404, and the National Natural Science Foundation of China under Grants 90920303 and 60875044.

¹ The video is obtained from http://www.csail.mit.edu/~rahimi/manif.

References

[1] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, 2000.
[2] D. L. Donoho and C. E. Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Sciences of the United States of America, 100(10):5591-5596, 2003.
[3] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, 2000.
[4] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems 14, pages 585-591, 2001.
[5] Fan R. K. Chung. Spectral Graph Theory, volume 92 of Regional Conference Series in Mathematics. AMS, 1997.
[6] X. Zhu and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proc. of the 20th International Conference on Machine Learning, 2003.
[7] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, 2003.
[8] Mikhail Belkin, Irina Matveeva, and Partha Niyogi. Regularization and semi-supervised learning on large graphs. In Conference on Learning Theory, pages 624-638, 2004.
[9] John Lafferty and Larry Wasserman. Statistical analysis of semi-supervised regression. In Advances in Neural Information Processing Systems 20, pages 801-808, 2007.
[10] P. Petersen. Riemannian Geometry. Springer, New York, 1998.
[11] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, 3rd edition, 1996.
[12] B. Chow, P. Lu, and L. Ni. Hamilton's Ricci Flow. AMS, Providence, Rhode Island, 2006.
[13] Mikhail Belkin and Partha Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. In Conference on Learning Theory, pages 486-500, 2005.
[14] Matthias Hein, Jean-Yves Audibert, and Ulrike von Luxburg. From graphs to manifolds - weak and strong pointwise consistency of graph Laplacians. In Conference on Learning Theory, pages 470-485, 2005.
[15] K. I. Kim, F. Steinke, and M. Hein. Semi-supervised regression using Hessian energy with an application to semi-supervised dimensionality reduction. In Advances in Neural Information Processing Systems 22, pages 979-987, 2009.
Metric Learning with Multiple Kernels

Jun Wang, Huyen Do, Adam Woznica, Alexandros Kalousis
AI Lab, Department of Informatics, University of Geneva, Switzerland
{Jun.Wang, Huyen.Do, Adam.Woznica, Alexandros.Kalousis}@unige.ch

Abstract

Metric learning has become a very active research field. The most popular representative, Mahalanobis metric learning, can be seen as learning a linear transformation and then computing the Euclidean metric in the transformed space. Since a linear transformation might not always be appropriate for a given learning problem, kernelized versions of various metric learning algorithms exist. However, the problem then becomes finding the appropriate kernel function. Multiple kernel learning addresses this limitation by learning a linear combination of a number of predefined kernels; this approach can also be readily used in the context of multiple-source learning to fuse different data sources. Surprisingly, and despite the extensive work on multiple kernel learning for SVMs, there has been no work in the area of metric learning with multiple kernel learning. In this paper we fill this gap and present a general approach for metric learning with multiple kernel learning. Our approach can be instantiated with different metric learning algorithms, provided that they satisfy some constraints. Experimental evidence suggests that our approach outperforms metric learning with an unweighted kernel combination and metric learning with cross-validation based kernel selection.

1 Introduction

Metric learning (ML), which aims at learning dissimilarities by determining the importance of different input features and their correlations, has become a very active research field over the last years [23, 5, 3, 14, 22, 7, 12]. The most prominent form of ML is learning the Mahalanobis metric. Its computation can be seen as a two-step process: in the first step we perform a linear projection of the instances, and in the second step we compute their Euclidean metric in the projected space. Very often a linear projection cannot adequately represent the inherent complexities of a problem at hand. To address this limitation, various works proposed kernelized versions of ML methods in order to implicitly compute a linear transformation and Euclidean metric in some non-linear feature space; this computation results in a non-linear projection and distance computation in the original input space [23, 5, 3, 14, 22]. However, we are now faced with a new problem, namely that of finding the appropriate kernel function and the associated feature space matching the requirements of the learning problem. The simplest approach to address this problem is to select the best kernel from a predefined kernel set using internal cross-validation. The main drawback of this approach is that only one kernel is selected, which limits the expressiveness of the resulting method. Additionally, this approach is limited to a small number of kernels (due to computational constraints) and requires the use of extra data. Multiple Kernel Learning (MKL) [10, 17] lifts the above limitations by learning a linear combination of a number of predefined kernels. The MKL approach can also naturally handle multiple-source learning scenarios where, instead of combining kernels defined on a single input data set (which, depending on the selected kernels, could give rise to feature spaces with redundant features), we combine different and complementary data sources.
In [11, 13] the authors propose a method that learns a distance metric for multiple-source problems within a multiple-kernel scenario. The proposed method defines the distance of two instances as the sum of their distances in the feature spaces induced by the different kernels. During learning, a set of Mahalanobis metrics, one for each source, are learned together. However, this approach ignores the potential correlations between the different kernels. To the best of our knowledge most of the work on MKL has been confined to the framework of SVMs, and despite the recent popularity of ML there exists so far no work that performs MKL in the ML framework by learning a distance metric in the weighted linear combination of feature spaces. In this paper we show how to perform Mahalanobis ML with MKL. We first propose a general framework of ML with MKL which can be instantiated with virtually any Mahalanobis ML algorithm h provided that the latter satisfies some stated conditions. We examine two parametrizations of the learning problem that give rise to two alternative formulations, denoted by $\mathrm{ML}_h$-MKL$_\mu$ and $\mathrm{ML}_h$-MKL$_P$. Our approach can be seen as the counterpart of MKL with SVMs [10, 20, 17] for ML. Since the learned metric matrix has a regularized form (i.e. it has internal structure), we also propose a straightforward non-regularized version of ML with MKL, denoted by NR-$\mathrm{ML}_h$-MKL; however, due to the number of free parameters the non-regularized version can only scale to a very small number of kernels and requires ML methods that are able to cope with large dimensionalities. We performed a number of experiments on ML with MKL in which, for the needs of this paper, we have chosen the well-known Large Margin Nearest Neighbor [22] (LMNN) algorithm as the ML method h. The experimental results suggest that LMNN-MKL$_P$ outperforms LMNN with an unweighted kernel combination and the single best kernel selected by internal cross-validation.

2 Preliminaries

In the different flavors of metric learning we are given a matrix of learning instances $X: n \times d$, the $i$-th row of which is the instance $x_i^T \in \mathbb{R}^d$, and a vector of class labels $y = (y_1, \dots, y_n)^T$, $y_i \in \{1, \dots, c\}$. Consider a mapping $\phi_l(x)$ of instances $x$ to some feature space $\mathcal{H}_l$, i.e. $x \rightarrow \phi_l(x) \in \mathcal{H}_l$. The corresponding kernel function $k_l(x_i, x_j)$ computes the inner product of two instances in the $\mathcal{H}_l$ feature space, i.e. $k_l(x_i, x_j) = \langle \phi_l(x_i), \phi_l(x_j) \rangle$. We denote the (possibly infinite) dimensionality of $\mathcal{H}_l$ by $d_l$. The squared Mahalanobis distance of two instances in the $\mathcal{H}_l$ space is given by
$$d^2_{M_l}(\phi_l(x_i), \phi_l(x_j)) = (\phi_l(x_i) - \phi_l(x_j))^T M_l (\phi_l(x_i) - \phi_l(x_j))$$
where $M_l$ is a Positive Semi-Definite (PSD) metric matrix in the $\mathcal{H}_l$ space ($M_l \succeq 0$). For some given ML method h we optimize (most often minimize) some cost function $F_h$ with respect to the metric matrix $M_l$,¹ under the PSD constraint on $M_l$ and an additional set of pairwise distance constraints $C_h(\{d^2_{M_l}(\phi_l(x_i), \phi_l(x_j)) \mid i, j = 1, \dots, n\})$ that depend on the choice of h, e.g. similarity and dissimilarity pairwise constraints [3] or relative comparison constraints [22]. In the remainder of this paper, for simplicity, we denote this set of constraints by $C_h(d^2_{M_l}(\phi_l(x_i), \phi_l(x_j)))$. The kernelized ML optimization problem can now be written as:
$$\min_{M_l} F_h(M_l) \quad \text{s.t.} \quad C_h(d^2_{M_l}(\phi_l(x_i), \phi_l(x_j))), \; M_l \succeq 0 \qquad (1)$$
Kernelized ML methods do not require learning the explicit form of the Mahalanobis metric $M_l$.
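As a hedged illustration of the kernel functions just introduced, the following sketch verifies, for a homogeneous degree-2 polynomial kernel, that $k(x_i, x_j)$ equals the inner product of explicit feature maps; the particular feature map shown is one standard choice, not one prescribed by the paper:

```python
import numpy as np

def k_poly2(x, y):
    # Homogeneous degree-2 polynomial kernel: k(x, y) = (x . y)^2.
    return float(np.dot(x, y)) ** 2

def phi(x):
    # An explicit feature map for k_poly2 in two dimensions:
    # phi(x) = (x1^2, x2^2, sqrt(2) * x1 * x2).
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x, y = np.array([1.0, 2.0]), np.array([-0.5, 3.0])
assert np.isclose(k_poly2(x, y), np.dot(phi(x), phi(y)))
```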
It was shown in [9] that the optimal solution for the Mahalanobis metric $M_l$ has the form $M_l = \gamma_h I + \phi_l(X)^T A_l \phi_l(X)$, where $I$ is the identity matrix of dimensionality $d_l \times d_l$, $A_l$ is an $n \times n$ PSD matrix, $\phi_l(X)$ is the matrix of learning instances in the $\mathcal{H}_l$ space (with instances in rows), and $\gamma_h$ is a constant that depends on the ML method h. Since in the vast majority of the existing ML methods [19, 8, 18, 23, 5, 14, 22] the value of the constant $\gamma_h$ is zero, in this paper we only consider the optimal form of $M_l$ with $\gamma_h = 0$. Under the optimal parametrization $M_l = \phi_l(X)^T A_l \phi_l(X)$ the squared Mahalanobis distance becomes:
$$d^2_{M_l}(\phi_l(x_i), \phi_l(x_j)) = (K^l_i - K^l_j)^T A_l (K^l_i - K^l_j) = d^2_{A_l}(\phi_l(x_i), \phi_l(x_j)) \qquad (2)$$
where $K^l_i$ is the $i$-th column of the kernel matrix $K_l$, the $(i,j)$ element of which is $K_l^{ij} = k_l(x_i, x_j)$. As a result, (1) can be rewritten as:
$$\min_{A_l} F_h(\phi_l(X)^T A_l \phi_l(X)) \quad \text{s.t.} \quad C_h(d^2_{A_l}(\phi_l(x_i), \phi_l(x_j))), \; A_l \succeq 0 \qquad (3)$$

¹The optimization could also be done with respect to other variables of the cost function and not only $M_l$. However, to keep the notation uncluttered we parametrize the optimization problem only over $M_l$.

In MKL we are given a set of kernel functions $Z = \{k_l(x_i, x_j) \mid l = 1, \dots, m\}$ and the goal is to learn an appropriate kernel function $k_\mu(x_i, x_j)$ parametrized by $\mu$ under a cost function $Q$. The cost function $Q$ is determined by the cost function of the learning method that is coupled with multiple kernel learning; e.g., it can be the SVM cost function if one is using an SVM as the learning approach. As in [10, 17] we parametrize $k_\mu(x_i, x_j)$ by a linear combination of the form:
$$k_\mu(x_i, x_j) = \sum_{l=1}^m \mu_l k_l(x_i, x_j), \quad \mu_l \geq 0, \quad \sum_l \mu_l = 1 \qquad (4)$$
We denote by $\mathcal{H}_\mu$ the feature space induced by the $k_\mu$ kernel, a feature space given by the mapping $x \rightarrow \phi_\mu(x) = (\sqrt{\mu_1}\,\phi_1(x)^T, \dots, \sqrt{\mu_m}\,\phi_m(x)^T)^T \in \mathcal{H}_\mu$. We denote the dimensionality of $\mathcal{H}_\mu$ by $d$; it can be infinite. Finally, we denote by $\mathcal{H}$ the feature space that we get by the unweighted concatenation of the $m$ feature spaces, i.e. $\mu_i = 1$ for all $i$, whose representation is given by $x \rightarrow \phi(x) = (\phi_1(x)^T, \dots, \phi_m(x)^T)^T$.

3 Metric Learning with Multiple Kernel Learning

The goal is to learn a metric matrix $M$ in the feature space $\mathcal{H}_\mu$ induced by the mapping $\phi_\mu$, as well as the kernel weight vector $\mu$; we denote this metric by $d^2_{M,\mu}$. Based on the optimal form of the Mahalanobis metric $M$ for a metric learning method learning with a single kernel function [9], we have the following lemma:

Lemma 1. Assume that for a metric learning method h the optimal parametrization of its Mahalanobis metric $M^*$ is $\phi_l(X)^T A^* \phi_l(X)$, for some $A^*$, when learning with a single kernel function $k_l(x, x')$. Then, for h with multiple kernel learning, the optimal parametrization of its Mahalanobis metric $M^*_\mu$ is given by $\phi_\mu(X)^T A^*_\mu \phi_\mu(X)$, for some $A^*_\mu$.

The proof of the above lemma is similar to the proof of Theorem 1 in [9] (it is not presented here due to the lack of space). Following Lemma 1, we have:
$$d^2_{M,\mu}(\phi_\mu(x_i), \phi_\mu(x_j)) = (\phi_\mu(x_i) - \phi_\mu(x_j))^T \phi_\mu(X)^T A\, \phi_\mu(X) (\phi_\mu(x_i) - \phi_\mu(x_j)) = \sum_l \sum_{l'} \mu_l \mu_{l'} (K^l_i - K^l_j)^T A (K^{l'}_i - K^{l'}_j) = d^2_{A,\mu}(\phi_\mu(x_i), \phi_\mu(x_j)) \qquad (5)$$
Based on (5) and the constraints from (4), the ML optimization problem with MKL can be written as:
$$\min_{A,\mu} F_h(\phi_\mu(X)^T A \phi_\mu(X)) \quad \text{s.t.} \quad C_h(d^2_{A,\mu}(\phi_\mu(x_i), \phi_\mu(x_j))), \; A \succeq 0, \; \mu_l \geq 0, \; \sum_l \mu_l = 1 \qquad (6)$$
We denote the resulting optimization problem and learning method by $\mathrm{ML}_h$-MKL$_\mu$; clearly this is not fully specified until we choose a specific ML method h.
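A small numerical sketch (ours) of (4) and (5): the squared distance computed from the columns of the combined kernel $K_\mu = \sum_l \mu_l K_l$ agrees with the double sum over base-kernel columns; all matrices here are randomly generated stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
# Random PSD base kernel matrices K_l and a random PSD matrix A.
Ks = []
for _ in range(m):
    G = rng.standard_normal((n, n))
    Ks.append(G @ G.T)
R = rng.standard_normal((n, n))
A = R @ R.T

mu = rng.random(m)
mu /= mu.sum()                      # simplex constraint from (4)

i, j = 0, 4
# Distance via the combined kernel K_mu = sum_l mu_l K_l.
K_mu = sum(w * K for w, K in zip(mu, Ks))
diff = K_mu[:, i] - K_mu[:, j]
d2_combined = diff @ A @ diff

# Distance via the double sum in (5).
d2_sum = sum(
    mu[l] * mu[lp]
    * (Ks[l][:, i] - Ks[l][:, j]) @ A @ (Ks[lp][:, i] - Ks[lp][:, j])
    for l in range(m) for lp in range(m)
)
assert np.isclose(d2_combined, d2_sum)
```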
Let
$$B = \begin{pmatrix} (K^1_i - K^1_j)^T \\ \vdots \\ (K^m_i - K^m_j)^T \end{pmatrix}.$$
We note that $d^2_{A,\mu}(\phi_\mu(x_i), \phi_\mu(x_j))$ from (5) can then also be written as:
$$d^2_{A,\mu}(\phi_\mu(x_i), \phi_\mu(x_j)) = \mu^T B A B^T \mu = \mathrm{tr}(P B A B^T) = d^2_{A,P}(\phi_P(x_i), \phi_P(x_j)) \qquad (7)$$
where $P = \mu\mu^T$ and $\mathrm{tr}(\cdot)$ is the trace of a matrix. We use $\phi_P(X)$ to emphasize the explicit dependence of $\phi_\mu(X)$ on $P = \mu\mu^T$. As a result, instead of optimizing over $\mu$ we can also use the parametrization over $P$; the new optimization problem can be written as:
$$\min_{A,P} F_h(\phi_P(X)^T A \phi_P(X)) \quad \text{s.t.} \quad C_h(d^2_{A,P}(\phi_P(x_i), \phi_P(x_j))), \; A \succeq 0, \; \sum_{ij} P_{ij} = 1, \; P_{ij} \geq 0, \; \mathrm{Rank}(P) = 1, \; P = P^T \qquad (8)$$
where the constraints $\sum_{ij} P_{ij} = 1$, $P_{ij} \geq 0$, $\mathrm{Rank}(P) = 1$, and $P = P^T$ are added so that $P = \mu\mu^T$. We call the optimization problem and learning method (8) $\mathrm{ML}_h$-MKL$_P$; as before, in order to fully instantiate it we need to choose a specific metric learning method h.

Now, we derive an alternative parametrization of (5). We need two additional matrices: $C_{\mu_i\mu_j} = \mu_i\mu_j I$, where the dimensionality of $I$ is $n \times n$, and $\phi'(X)$, an $mn \times d$ dimensional block-diagonal matrix:
$$\phi'(X) = \begin{pmatrix} \phi_1(X) & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \phi_m(X) \end{pmatrix}$$
We have:
$$d^2_{A,\mu}(\phi_\mu(x_i), \phi_\mu(x_j)) = (\phi(x_i) - \phi(x_j))^T M' (\phi(x_i) - \phi(x_j)) \qquad (9)$$
where:
$$M' = \phi'(X)^T A' \phi'(X) \qquad (10)$$
and $A'$ is an $mn \times mn$ matrix:
$$A' = \begin{pmatrix} C_{\mu_1\mu_1} A & \dots & C_{\mu_1\mu_m} A \\ \vdots & \ddots & \vdots \\ C_{\mu_m\mu_1} A & \dots & C_{\mu_m\mu_m} A \end{pmatrix} \qquad (11)$$
From (9) we see that the Mahalanobis metric in the feature space $\mathcal{H}_\mu$ induced by the kernel $k_\mu$, parametrized by the $M$ or $A$ matrix, is equivalent to the Mahalanobis metric in the feature space $\mathcal{H}$ parametrized by $M'$ or $A'$. As we can see from (11), $\mathrm{ML}_h$-MKL$_\mu$ and $\mathrm{ML}_h$-MKL$_P$ learn a regularized matrix $A'$ (i.e. a matrix with internal structure) that corresponds to a parametrization of the Mahalanobis metric $M'$ in the feature space $\mathcal{H}$.

3.1 Non-Regularized Metric Learning with Multiple Kernel Learning

We present here a more general formulation of the optimization problem (6) in which we lift the regularization of the matrix $A'$ from (11) and learn instead a full PSD matrix $A''$:
$$A'' = \begin{pmatrix} A_{11} & \dots & A_{1m} \\ \vdots & \ddots & \vdots \\ A_{m1} & \dots & A_{mm} \end{pmatrix} \qquad (12)$$
where $A_{kl}$ is an $n \times n$ matrix. The respective Mahalanobis matrix, which we denote by $M''$, still has the same parametrization form as in (10), i.e. $M'' = \phi'(X)^T A'' \phi'(X)$. As a result, by using $A''$ instead of $A'$ the squared Mahalanobis distance can now be written as:
$$d^2_{A''}(\phi(x_i), \phi(x_j)) = (\phi(x_i) - \phi(x_j))^T M'' (\phi(x_i) - \phi(x_j)) = [(K^1_i - K^1_j)^T, \dots, (K^m_i - K^m_j)^T]\, A''\, [(K^1_i - K^1_j)^T, \dots, (K^m_i - K^m_j)^T]^T \qquad (13)$$
where $\phi_Z(x_i) = ((K^1_i)^T, \dots, (K^m_i)^T)^T \in \mathcal{H}_Z$. What we see here is that, under the $M''$ parametrization, computing the Mahalanobis metric in $\mathcal{H}$ is equivalent to computing the Mahalanobis metric in the $\mathcal{H}_Z$ space. Under the parametrization of the Mahalanobis distance given by (13), the optimization problem of metric learning with multiple kernel learning is the following:
$$\min_{A''} F_h(\phi'(X)^T A'' \phi'(X)) \quad \text{s.t.} \quad C_h(d^2_{A''}(\phi(x_i), \phi(x_j))), \; A'' \succeq 0 \qquad (14)$$
We call this optimization problem NR-$\mathrm{ML}_h$-MKL. We should note that this formulation has scaling problems since it has $O(m^2 n^2)$ parameters that need to be estimated, and it clearly requires a very efficient ML method h in order to be practical.
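The reparametrization identity in (7) is easy to check numerically; the sketch below (ours, with random stand-in matrices) verifies $\mu^T B A B^T \mu = \mathrm{tr}(P B A B^T)$ for $P = \mu\mu^T$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 3
B = rng.standard_normal((m, n))    # rows play the role of (K_i^l - K_j^l)^T
R = rng.standard_normal((n, n))
A = R @ R.T                        # PSD matrix A

mu = rng.random(m)
mu /= mu.sum()
P = np.outer(mu, mu)               # rank-one P = mu mu^T

lhs = mu @ B @ A @ B.T @ mu
rhs = np.trace(P @ B @ A @ B.T)
assert np.isclose(lhs, rhs)
```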
4 Optimization

4.1 Analysis

The NR-$\mathrm{ML}_h$-MKL optimization problem obviously has the same convexity properties as the metric learning algorithm h that will be used, since the parametrization $M'' = \phi'(X)^T A'' \phi'(X)$ used in NR-$\mathrm{ML}_h$-MKL is linear in $A''$, and the composition of a function with an affine mapping preserves the convexity property of the original function [1]. This is also valid for the subproblems of learning the matrix $A$ in $\mathrm{ML}_h$-MKL$_\mu$ and $\mathrm{ML}_h$-MKL$_P$ given the weight vector $\mu$. Given the PSD matrix $A$, we have the following two lemmas for the optimization problems $\mathrm{ML}_h$-MKL$_{\{\mu|P\}}$:

Lemma 2. Given the PSD matrix $A$, the $\mathrm{ML}_h$-MKL$_\mu$ optimization problem is convex in $\mu$ if the metric learning algorithm h is convex in $\mu$.

Proof. The last two constraints on $\mu$ of the optimization problem (6) are linear, thus this problem is convex if the metric learning algorithm h is convex in $\mu$.

Since $d^2_{A,\mu}(\phi_\mu(x_i), \phi_\mu(x_j))$ is a convex quadratic of $\mu$, which can easily be proved from the PSD property of the matrix $BAB^T$ in (7), many of the well-known metric learning algorithms, such as Pairwise SVM [21], POLA [19] and Xing's method [23], satisfy the conditions of Lemma 2. The $\mathrm{ML}_h$-MKL$_P$ optimization problem (8) is not convex given a PSD matrix $A$, because the rank constraint is not convex. However, when the number of kernels m is small, e.g. a few tens of kernels, there is an equivalent convex formulation.

Lemma 3. Given the PSD matrix $A$, the $\mathrm{ML}_h$-MKL$_P$ optimization problem (8) can be formulated as an equivalent convex problem with respect to $P$ if the ML algorithm h is linear in $P$ and the number of kernels m is small.

Proof. Given the PSD matrix $A$, if h is linear in $P$, we can formulate the rank-constrained problem with the help of the two following convex problems [2]:
$$\min_P F_h(\phi_P(X)^T A \phi_P(X)) + w \cdot \mathrm{tr}(P^T W) \quad \text{s.t.} \quad C_h(d^2_{A,P}(\phi_P(x_i), \phi_P(x_j))), \; A \succeq 0, \; P \succeq 0, \; \sum_{ij} P_{ij} = 1, \; P_{ij} \geq 0, \; P = P^T \qquad (15)$$
where $w$ is a positive scalar just large enough to make $\mathrm{tr}(P^T W)$ vanish, i.e. to achieve the global convergence defined in (17), and the direction matrix $W$ is an optimal solution of the following problem:
$$\min_W \mathrm{tr}(P^{*T} W) \quad \text{s.t.} \quad 0 \preceq W \preceq I, \; \mathrm{tr}(W) = m - 1 \qquad (16)$$
where $P^*$ is an optimal solution of (15) given $A$ and $W$, and m is the number of kernels. Problem (16) has a closed-form solution $W = UU^T$, where $U \in \mathbb{R}^{m \times (m-1)}$ is the eigenvector matrix of $P^*$ whose columns are the eigenvectors corresponding to the $m - 1$ smallest eigenvalues of $P^*$. The two convex problems are iteratively solved until global convergence, defined as:
$$\sum_{i=2}^m \lambda(P^*)_i = \mathrm{tr}(P^{*T} W^*) = \lambda(P^*)^T \lambda(W^*) \rightarrow 0 \qquad (17)$$
where $\lambda(P^*)_i$ is the $i$-th largest eigenvalue of $P^*$. This formulation is not a projection method: at global convergence the convex problem (15) is not a relaxation of the original problem; instead it is an equivalent convex problem [2]. We now prove the convergence of problem (15). Suppose the objective value of (15) is $f_i$ at iteration i. Since both (15) and (16) minimize the objective value of (15), we have $f_j < f_i$ for any iteration $j > i$. The infimum $f^*$ of the objective value of (15) corresponds to the optimal objective value of (15) when the second term is removed; thus the nonincreasing sequence of objective values is bounded below, and it converges because any bounded monotonic sequence in $\mathbb{R}$ is convergent. This establishes the local convergence of (15). Only local convergence can be established for problem (15), because the objective $\mathrm{tr}(P^T W)$ is generally multimodal [2]. However, as indicated in Section 7.2 of [2], when m is small the global optimum of problem (15) can often be achieved. This can simply be verified by comparing the difference between the infimum $f^*$ and the optimal objective value $f$ of problem (15).
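A sketch of the closed-form direction matrix of (16) and the convergence measure of (17), under our reading of the proof; for a rank-one $P$ the measure vanishes:

```python
import numpy as np

def direction_matrix(P_star):
    """Closed-form solution W = U U^T of (16): U holds the eigenvectors
    of P* associated with its m-1 smallest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(P_star)   # ascending eigenvalues
    U = eigvecs[:, :-1]                          # drop the largest one
    return U @ U.T

def convergence_measure(P_star, W_star):
    # tr(P*^T W*) from (17); it vanishes iff P* is rank one.
    return np.trace(P_star.T @ W_star)

# A rank-one P = mu mu^T drives the measure to (numerical) zero.
mu = np.array([0.2, 0.3, 0.5])
P = np.outer(mu, mu)
W = direction_matrix(P)
assert convergence_measure(P, W) < 1e-10
```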
For a number of known metric learning algorithms, such as LMNN [22], POLA [19], MLSVM [14] and Xing's method [23], linearity with respect to $P$ holds given $A \succeq 0$.

Algorithm 1: $\mathrm{ML}_h$-MKL$_\mu$, $\mathrm{ML}_h$-MKL$_P$
Input: $X$, $Y$, $A^{(0)}$, $\mu^{(0)}$, and matrices $K_1, \dots, K_m$
Output: $A$ and $\mu$
repeat
    $\mu^{(i)} = \text{WeightLearning}(A^{(i-1)})$
    $K_{\mu^{(i)}} = \sum_k \mu^{(i)}_k K_k$
    $A^{(i)} = \text{MetricLearning}_h(A^{(i-1)}, X, K_{\mu^{(i)}})$
    $i := i + 1$
until convergence

4.2 Optimization Algorithms

The NR-$\mathrm{ML}_h$-MKL optimization problem can be directly solved by any metric learning algorithm h on the space $\mathcal{H}_Z$ when the optimization problem of the latter only involves the squared pairwise Mahalanobis distance, e.g. LMNN [22] and MCML [5]. When the metric learning algorithm h has a regularization term on $M$, e.g. a trace norm [8] or Frobenius norm [14, 19], the NR-$\mathrm{ML}_h$-MKL optimization problem can most often be solved by a slight modification of the original algorithm. We now describe how we solve the optimization problems of $\mathrm{ML}_h$-MKL$_\mu$ and $\mathrm{ML}_h$-MKL$_P$. Based on Lemmas 2 and 3, we propose for both methods a two-step iterative algorithm, Algorithm 1, in the first step of which we learn the kernel weighting, and in the second the metric under the kernel weighting learned in the first step. In the first step of the $i$-th iteration we learn the kernel weight vector $\mu^{(i)}$ under the fixed PSD matrix $A^{(i-1)}$ learned at the preceding iteration $(i-1)$. For $\mathrm{ML}_h$-MKL$_\mu$ we solve the weight learning problem using Lemma 2, and for $\mathrm{ML}_h$-MKL$_P$ using Lemma 3. In the second step we apply the metric learning algorithm h and learn the PSD matrix $A^{(i)}$ with the kernel matrix $K_{\mu^{(i)}} = \sum_l \mu^{(i)}_l K_l$, using $A^{(i-1)}$ as the initial metric matrix. We should make clear that the optimization problem we are solving is only individually convex with respect to $\mu$ given the PSD matrix $A$, and vice versa. As a result, the convergence of the two-step algorithm (possibly to a local optimum) is guaranteed [6]; it is checked through the variation of $\mu$ and the objective value of the metric learning method h. In our experiments (Section 6) we observed that it most often converges in fewer than ten iterations.

5 LMNN-Based Instantiation

We have presented two basic approaches to metric learning with multiple kernel learning: $\mathrm{ML}_h$-MKL$_\mu$ ($\mathrm{ML}_h$-MKL$_P$) and NR-$\mathrm{ML}_h$-MKL. In order for the approaches to be fully instantiated we have to specify the ML algorithm h. In this paper we focus on the state-of-the-art LMNN method [22]. Due to its relative comparison constraints, LMNN does not satisfy the condition of Lemma 2. However, as already mentioned, LMNN satisfies the condition of Lemma 3, so we get the $\mathrm{ML}_h$-MKL$_P$ variant of the optimization problem for LMNN, which we denote by LMNN-MKL$_P$. The resulting optimization problem is:
$$\min_{A,P,\xi} \sum_{ij} S_{ij} \Big\{ (1 - \gamma)\, d^2_{A,P}(\phi_P(x_i), \phi_P(x_j)) + \gamma \sum_k (1 - Y_{ik})\, \xi_{ijk} \Big\} \qquad (18)$$
$$\text{s.t.} \quad d^2_{A,P}(\phi_P(x_i), \phi_P(x_k)) - d^2_{A,P}(\phi_P(x_i), \phi_P(x_j)) \geq 1 - \xi_{ijk}, \; \xi_{ijk} \geq 0, \; A \succeq 0, \; \sum_{kl} P_{kl} = 1, \; P_{kl} \geq 0, \; \mathrm{Rank}(P) = 1, \; P = P^T$$
where the matrix $Y$, $Y_{ij} \in \{0, 1\}$, indicates whether the class labels $y_i$ and $y_j$ are the same ($Y_{ij} = 1$) or different ($Y_{ij} = 0$). The matrix $S$ is a binary matrix whose $S_{ij}$ entry is non-zero if instance $x_j$ is one of the k same-class nearest neighbors of instance $x_i$. The objective is to minimize the sum of the distances of all instances to their k same-class nearest neighbors while allowing for some errors, a trade-off controlled by the $\gamma$ parameter.
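Returning to Algorithm 1 from Section 4.2, a minimal Python skeleton of its alternating scheme; weight_learning and metric_learning are placeholders for the subproblems of Lemmas 2-3 and the base ML method h, not implementations from the paper:

```python
import numpy as np

def ml_mkl(Ks, weight_learning, metric_learning, tol=1e-4, max_iter=50):
    """Alternate between kernel-weight learning and metric learning
    until the weight vector mu stops changing (cf. Algorithm 1)."""
    m = len(Ks)
    n = Ks[0].shape[0]
    A = np.eye(n)                      # initial PSD metric matrix
    mu = np.full(m, 1.0 / m)           # equal initial weighting
    for _ in range(max_iter):
        mu_new = weight_learning(A, Ks)            # step 1: fix A, learn mu
        K_mu = sum(w * K for w, K in zip(mu_new, Ks))
        A = metric_learning(A, K_mu)               # step 2: fix mu, learn A
        if np.linalg.norm(mu_new - mu) < tol:      # convergence check on mu
            return A, mu_new
        mu = mu_new
    return A, mu
```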
As the objective function of LMNN only involves the squared pairwise Mahalanobis distances, the instantiation of NR-$\mathrm{ML}_h$-MKL is straightforward: it consists simply of the application of LMNN on the space $\mathcal{H}_Z$ in order to learn the metric. We denote this instantiation by NR-LMNN-MKL.

Table 1: Accuracy results. The superscripts +, −, = next to the accuracies of NR-LMNN-MKL and LMNN-MKL$_P$ indicate the result of McNemar's statistical test comparing them to the accuracies of LMNN$_\mathcal{H}$ and LMNN-MKL$_{CV}$, and denote respectively a significant win, loss or no difference. The number in parentheses is the score of the respective algorithm for the given dataset, based on the pairwise comparisons of McNemar's statistical test.

Datasets        NR-LMNN-MKL     LMNN-MKLP       LMNNH        LMNN-MKLCV   1-NN
Sonar           88.46+= (3.0)   85.58== (2.0)   82.21 (1.0)  88.46 (3.0)  82.21 (1.0)
Wine            98.88== (2.0)   98.88== (2.0)   98.31 (2.0)  96.07 (2.0)  97.19 (2.0)
Iris            93.33== (2.0)   95.33== (2.0)   94.67 (2.0)  94.00 (2.0)  95.33 (2.0)
Ionosphere      93.73== (2.5)   94.87=+ (3.0)   92.59 (2.5)  90.88 (2.0)  86.89 (0.0)
Wdbc            94.90−= (1.0)   97.36=+ (3.5)   97.36 (3.0)  95.96 (1.5)  95.43 (1.0)
CentralNervous  55.00== (2.0)   63.33== (2.0)   65.00 (2.0)  65.00 (2.0)  58.33 (2.0)
Colon           80.65== (2.0)   85.48+= (2.5)   66.13 (1.5)  79.03 (2.0)  74.19 (2.0)
Leukemia        95.83+= (2.5)   94.44+= (2.5)   70.83 (0.0)  95.83 (2.5)  88.89 (2.5)
MaleFemale      86.57== (2.5)   88.81+= (3.0)   80.60 (1.5)  89.55 (3.0)  58.96 (0.0)
Ovarian         95.26+= (3.0)   94.47+= (3.0)   90.51 (0.5)  94.47 (3.0)  87.35 (0.5)
Prostate        79.50== (2.0)   80.43== (2.5)   79.19 (2.0)  78.88 (2.0)  76.71 (1.5)
Stroke          69.71== (2.0)   72.12== (2.0)   71.15 (2.0)  70.19 (2.0)  65.38 (2.0)
Total Score     26.5            30.0            20.0         27.0         16.5

6 Experiments

In this section we perform a number of experiments on real-world datasets in order to compare two of the LMNN-based instantiations of our framework, i.e. LMNN-MKL$_P$ and NR-LMNN-MKL. We compare these methods against two baselines: LMNN-MKL$_{CV}$, in which a kernel is selected from a set of kernels using 2-fold inner cross-validation (CV), and LMNN with the unweighted sum of kernels, which induces the $\mathcal{H}$ feature space, denoted by LMNN$_\mathcal{H}$. Additionally, we report the performance of 1-Nearest-Neighbor with no metric learning, denoted 1-NN. The PSD matrix $A$ and weight vector $\mu$ in LMNN-MKL$_P$ were initialized by $I$ and equal weighting (1 divided by the number of kernels), respectively. The parameter $w$ in the weight learning subproblem of LMNN-MKL$_P$ was selected from $\{10^i \mid i = 0, 1, \dots, 8\}$ as the smallest value large enough to achieve global convergence. Its direction matrix $W$ was initialized by $0$. The number of k same-class nearest neighbors required by LMNN was set to 5 and its $\gamma$ parameter to 0.5. After learning the metric and the multiple kernel combination we used 1-NN for classification.

6.1 Benchmark Datasets

We first experimented with 12 different datasets: five from the UCI machine learning repository, i.e. Sonar, Ionosphere, Wine, Iris, and Wdbc; three microarray datasets, i.e. CentralNervous, Colon, and Leukemia; and four proteomics datasets, i.e. MaleFemale, Stroke, Prostate and Ovarian. The attributes of all the datasets are standardized in the preprocessing step. The set Z of kernels that we use consists of the following 20 kernels: ten polynomial with degree from one to ten, and ten Gaussians with bandwidth $\sigma \in \{0.5, 1, 2, 5, 7, 10, 12, 15, 17, 20\}$ (the same set of kernels was used in [4]). Each basic kernel $K_k$ was normalized by the average of its $\mathrm{diag}(K_k)$. LMNN-MKL$_P$, LMNN$_\mathcal{H}$ and LMNN-MKL$_{CV}$ were tested using the complete Z set.
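A sketch of the construction of the kernel set Z described above; the Gaussian parametrization $\exp(-\|x - y\|^2 / (2\sigma^2))$ is our assumption, since the text only specifies the bandwidths:

```python
import numpy as np
from scipy.spatial.distance import cdist

def build_kernel_set(X):
    """Build the 20 base kernels of Section 6.1: ten polynomial kernels
    (degrees 1..10), ten Gaussian kernels, each normalized by the
    average of its diagonal."""
    kernels = []
    G = X @ X.T
    for degree in range(1, 11):                          # polynomial kernels
        kernels.append(G ** degree)
    D2 = cdist(X, X, 'sqeuclidean')
    for sigma in (0.5, 1, 2, 5, 7, 10, 12, 15, 17, 20):  # Gaussian widths
        kernels.append(np.exp(-D2 / (2.0 * sigma ** 2)))
    # Normalize each kernel by the average of its diagonal.
    return [K / np.mean(np.diag(K)) for K in kernels]
```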
For NR-LMNN-MKL, due to its scaling limitations, we could only use a small subset of Z consisting of the linear kernel, the second-order polynomial kernel, and the Gaussian kernel with kernel width 0.5. We use 10-fold CV to estimate the predictive performance of the different methods. To test the statistical significance of the differences we used McNemar's test with the p-value set to 0.05. To get a better understanding of the relative performance of the different methods on a given dataset, we used a ranking schema in which a method A was assigned one point if its accuracy was significantly better than that of another method B, 0.5 points if the two methods did not have a significantly different performance, and zero points if A was found to be significantly worse than B. The results are reported in Table 1. First, we observe that by learning the kernel inside LMNN-MKL$_P$ we improve performance over LMNN$_\mathcal{H}$, which uses the unweighted kernel combination. More precisely, LMNN-MKL$_P$ is significantly better than LMNN$_\mathcal{H}$ in four out of the twelve datasets. If we now compare LMNN-MKL$_P$ with LMNN-MKL$_{CV}$, the other baseline method, where we select the best kernel with CV, we can see that LMNN-MKL$_P$ also performs better, being statistically significantly better in two datasets. If we now examine NR-LMNN-MKL and LMNN$_\mathcal{H}$, we see that the former method, even though it learns with only three kernels, is significantly better in two datasets, while it is significantly worse in one dataset. Comparing NR-LMNN-MKL and LMNN-MKL$_{CV}$, we observe that the two methods achieve comparable predictive performance. We should stress here that NR-LMNN-MKL is at a disadvantage, since it only uses three kernels as opposed to the other methods that use 20 kernels; the scalability of NR-LMNN-MKL is left as future work. In terms of the total score that the different methods obtain, the best one is LMNN-MKL$_P$, followed by LMNN-MKL$_{CV}$ and NR-LMNN-MKL.

6.2 Multiple Source Datasets

To evaluate the proposed method on problems with multiple sources of information, we also perform experiments on the Multiple Features and the Oxford flowers datasets [16]. Multiple Features, from UCI, has six different feature representations for 2,000 handwritten digits (0-9); each class has 200 instances. In the preprocessing step all the features are standardized in all the data sources. The Oxford flowers dataset has 17 categories of flower images; each class has 80 instances. In this experiment seven distance matrices from the website² are used; these matrices are precomputed respectively from seven features, the details of which are described in [16, 15]. For both datasets, Gaussian kernels are constructed from the different feature representations of the instances with kernel width $\sigma_0$, where $\sigma_0$ is the mean of all pairwise distances. We experiment with 10 random splits where half of the data is used for training and the other half for testing. We do not experiment with NR-LMNN-MKL here due to its scaling limitations. The accuracy results are reported in Table 2.

Table 2: Accuracy results on the multiple-source datasets.

Datasets           LMNN-MKLP       LMNNH        LMNN-MKLCV   1-NN
Multiple Features  98.79++ (3.0)   98.44 (1.5)  98.44 (1.5)  97.86 (0.0)
Oxford Flowers     86.01++ (3.0)   85.74 (2.0)  65.46 (0.0)  67.38 (1.0)

We can see that by learning a linear combination of different feature representations, LMNN-MKL$_P$ achieves the best predictive performance on both datasets, being significantly better than the two baselines, LMNN$_\mathcal{H}$ and LMNN-MKL$_{CV}$.
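A sketch of the multiple-source kernel construction just described, with $\sigma_0$ set to the mean pairwise distance; as above, the exact Gaussian parametrization is our assumption:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def gaussian_kernel_sigma0(X):
    """Gaussian kernel with width sigma_0 equal to the mean of all
    pairwise distances of the feature representation X (Section 6.2)."""
    dists = pdist(X, 'euclidean')
    sigma0 = dists.mean()
    D2 = squareform(dists) ** 2
    return np.exp(-D2 / (2.0 * sigma0 ** 2))
```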
The bad performance of LMNN-MKL$_{CV}$ on the Oxford flowers dataset could be explained by the fact that the different Gaussian kernels are complementary for the given problem, whereas in LMNN-MKL$_{CV}$ only one kernel is selected.

7 Conclusions

In this paper we combine two recent developments in the field of machine learning, namely metric learning and multiple kernel learning, and propose a general framework for learning a metric in a feature space induced by a weighted combination of a number of individual kernels. This is in contrast with the existing kernelized metric learning techniques, which consider only one kernel function (or possibly an unweighted combination of a number of kernels) and hence are sensitive to the selection of the associated feature space. The proposed framework is general, as it can be coupled with many existing metric learning techniques. In this work, to practically demonstrate the effectiveness of the proposed approach, we instantiate it with the well-known LMNN metric learning method. The experimental results confirm that the adaptively induced feature space does bring an advantage in terms of predictive performance with respect to feature spaces induced by an unweighted combination of kernels and by the single best kernel selected with internal CV.

Acknowledgments
This work was funded by the Swiss NSF (Grant 200021-122283/1). The support of the European Commission through EU projects DebugIT (FP7-217139) and e-LICO (FP7-231519) is also gratefully acknowledged.

² http://www.robots.ox.ac.uk/~vgg/data/flowers/index.html

References
[1] S.P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[2] J. Dattorro. Convex Optimization & Euclidean Distance Geometry. Meboo Publishing USA, 2005.
[3] J.V. Davis, B. Kulis, P. Jain, S. Sra, and I.S. Dhillon. Information-theoretic metric learning. In ICML, 2007.
[4] K. Gai, G. Chen, and C. Zhang. Learning kernels with radiuses of minimum enclosing balls. In NIPS, 2010.
[5] A. Globerson and S. Roweis. Metric learning by collapsing classes. In NIPS, 2006.
[6] L. Grippo and M. Sciandrone. On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Operations Research Letters, 26(3):127-136, 2000.
[7] M. Guillaumin, J. Verbeek, and C. Schmid. Is that you? Metric learning approaches for face identification. In ICCV, pages 498-505, 2009.
[8] K. Huang, Y. Ying, and C. Campbell. GSML: A unified framework for sparse metric learning. In Data Mining, 2009. ICDM'09. Ninth IEEE International Conference on, pages 189-198. IEEE, 2009.
[9] P. Jain, B. Kulis, and I. Dhillon. Inductive regularized learning of kernel functions. In NIPS, 2010.
[10] G.R.G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M.I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27-72, 2004.
[11] B. McFee and G. Lanckriet. Partial order embedding with multiple kernels. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 721-728. ACM, 2009.
[12] B. McFee and G. Lanckriet. Metric learning to rank. In ICML. ACM, New York, NY, USA, 2010.
[13] B. McFee and G. Lanckriet. Learning multi-modal similarity. The Journal of Machine Learning Research, 12:491-523, 2011.
[14] N. Nguyen and Y. Guo. Metric learning: A support vector approach. In ECML/PKDD, 2008.
[15] M.E. Nilsback and A. Zisserman. A visual vocabulary for flower classification. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 1447-1454. IEEE, 2006.
[16] M.E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Computer Vision, Graphics & Image Processing, 2008. ICVGIP'08. Sixth Indian Conference on, pages 722-729. IEEE, 2008.
[17] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491-2521, 2008.
[18] M. Schultz and T. Joachims. Learning a distance metric from relative comparisons. In NIPS, 2003.
[19] S. Shalev-Shwartz, Y. Singer, and A.Y. Ng. Online and batch learning of pseudo-metrics. In Proceedings of the Twenty-First International Conference on Machine Learning. ACM, 2004.
[20] S. Sonnenburg, G. Ratsch, and C. Schafer. A general and efficient multiple kernel learning algorithm. In NIPS, 2006.
[21] J.P. Vert, J. Qiu, and W. Noble. A new pairwise kernel for biological network inference with support vector machines. BMC Bioinformatics, 8(Suppl 10):S8, 2007.
[22] K.Q. Weinberger and L.K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10:207-244, 2009.
[23] E.P. Xing, A.Y. Ng, M.I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In NIPS, 2003.
PARTITIONING OF SENSORY DATA BY A CORTICAL NETWORK¹

Richard Granger, Jose Ambros-Ingerson, Howard Henry, Gary Lynch
Center for the Neurobiology of Learning and Memory
University of California, Irvine, CA. 91717

SUMMARY

To process sensory data, sensory brain areas must preserve information about both the similarities and differences among learned cues: without the latter, acuity would be lost, whereas without the former, degraded versions of a cue would be erroneously thought to be distinct cues, and would not be recognized. We have constructed a model of piriform cortex incorporating a large number of biophysical, anatomical and physiological parameters, such as two-step excitatory firing thresholds, necessary and sufficient conditions for long-term potentiation (LTP) of synapses, three distinct types of inhibitory currents (short IPSPs, long hyperpolarizing currents (LHP) and long cell-specific afterhyperpolarization (AHP)), sparse connectivity between bulb and layer-II cortex, caudally-flowing excitatory collateral fibers, nonlinear dendritic summation, etc. We have tested the model for its ability to learn similarity- and difference-preserving encodings of incoming sensory cues; the biological characteristics of the model enable it to produce multiple encodings of each input cue in such a way that different readouts of the cell firing activity of the model preserve both similarity and difference information. In particular, probabilistic quantal transmitter-release properties of piriform synapses give rise to probabilistic postsynaptic voltage levels which, in combination with the activity of local patches of inhibitory interneurons in layer II, differentially select bursting vs. single-pulsing layer-II cells. Time-locked firing to the theta rhythm (Larson and Lynch, 1986) enables distinct spatial patterns to be read out against a relatively quiescent background firing rate. Training trials using the physiological rules for induction of LTP yield stable layer-II-cell spatial firing patterns for learned cues. Multiple similar olfactory input patterns (i.e., those that share many chemical features) will give rise to strongly-overlapping bulb firing patterns, activating many shared lateral olfactory tract (LOT) axons innervating layer Ia of piriform cortex, which in turn yields highly overlapping layer-II-cell excitatory potentials, enabling this spatial layer-II-cell encoding to preserve the overlap (similarity) among similar inputs. At the same time, those synapses that are enhanced by the learning process cause stronger cell firing, yielding strong, cell-specific afterhyperpolarizing (AHP) currents. Local inhibitory interneurons effectively select alternate cells to fire once strongly-firing cells have undergone AHP. These alternate cells then activate their caudally-flowing recurrent collaterals, activating distinct populations of synapses in caudal layer Ib. Potentiation of these synapses, in combination with those of still-active LOT axons, selectively enhances the response of caudal cells that tend to accentuate the differences among even very-similar cues.
Empirical tests of the computer simulation have shown that, after training, the initial spatial layer-II cell firing responses to similar cues enhance the similarity of the cues, such that the overlap in response is equal to or greater than the overlap in input cell firing (in the bulb): e.g., two cues that overlap by 65% give rise to response patterns that overlap by 80% or more. Reciprocally, later cell firing patterns (after AHP) increasingly enhance the differences among even very-similar patterns, so that cues with 90% input overlap give rise to output responses that overlap by less than 10%. This difference-enhancing response can be measured with respect to its acuity; since 90% input overlaps are reduced to near-zero response overlaps, it enables the structure to distinguish between even very-similar cues. On the other hand, the similarity-enhancing response is properly viewed as a partitioning mechanism, mapping quite-distinct input cues onto nearly-identical response patterns (or category indicators). We therefore use a statistical metric for the information value of categorizations to measure the value of partitionings produced by the piriform simulation network.

¹ This research was supported in part by the Office of Naval Research under grants N00014-84-K-0391 and N00014-87-K-0838 and by the National Science Foundation under grant IST-85-12419. © American Institute of Physics 1988

INTRODUCTION

The three primary dimensions along which network processing models vary are their learning rules, their performance rules and their architectural structures. In practice, performance rules are much the same across different models, usually being some variant of a 'weighted-sum' rule (in which a unit's output is calculated as some function of the sum of its inputs multiplied by their 'synaptic' weights). Performance rules are usually either 'static' rules (calculating unit outputs and halting) or 'settling' rules (iteratively calculating outputs until a convergent solution is reached). Most learning rules are either variants of a 'correlation' rule, loosely based on Hebb's (1949) postulate, or a 'delta' rule, e.g., the perceptron rule (Rosenblatt, 1962), the adaline rule (Widrow and Hoff, 1960) or the generalized delta or 'backpropagation' rule (Parker, 1985; Rumelhart et al., 1986). Finally, architectures vary by and large with learning rules: e.g., multilayered feedforward nets require a generalized delta rule for convergence; bidirectional connections usually imply a variant of a Hebbian or correlation rule, etc. Architectures and learning and performance rules are typically arrived at for reasons of their convenient computational properties and analytical tractability. These rules are sometimes based in part on results borrowed from neurobiology: e.g., 'units' in some network models are intended to correspond loosely to neurons, and 'weights' loosely to synapses; the notions of parallelism and distributed processing are based on metaphors derived from neural processes. An open question is how much of the rich literature of neurobiological results should or could profitably be incorporated into a network model. From the point of view of constructing mechanisms to perform certain pre-specified computational functions (e.g., correlation, optimization), there are varying answers to this question. However, the goal of understanding brain circuit function introduces a fundamental problem: there are no known, pre-specified functions of any given cortical structures.
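To fix the terminology, a minimal sketch (ours, not the authors') of a weighted-sum unit with a correlation-style and a delta-style update:

```python
import numpy as np

def unit_output(w, x):
    # 'Weighted-sum' performance rule: threshold the sum of inputs
    # multiplied by their 'synaptic' weights.
    return 1.0 if np.dot(w, x) > 0 else 0.0

def hebbian_update(w, x, lr=0.1):
    # Correlation (Hebb-like) rule: strengthen weights on lines that
    # are active together with the unit.
    return w + lr * unit_output(w, x) * x

def delta_update(w, x, target, lr=0.1):
    # Delta (error-correcting) rule: move the output toward a target.
    return w + lr * (target - unit_output(w, x)) * x
```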
We have constructed and studied a physiologically- and anatomically-accurate model of a particular brain structure, olfactory cortex, that is strictly based on biological data, with the goal of elucidating the local function of this circuit from its performance in a 'bottom-up' fashion. We measure our progress by the accuracy with which the model corresponds to known data and predicts novel physiological results (see, e.g., Lynch and Granger, 1988; Lynch et al., 1988). Our initial analysis of the circuit reveals a mechanism consisting of a learning rule that is notably simple and restricted compared to most network models, a relatively novel architecture with some unusual properties, and a performance rule that is extraordinarily complex compared to typical network-model performance rules. Taken together, these rules, derived directly from the known biology of the olfactory cortex, generate a coherent mechanism that has interesting computational properties. This paper describes the learning and performance rules and the architecture of the model; the relevant physiology and anatomy underlying these rules and structures, respectively; and an analysis of the coherent mechanism that results.

LEARNING RULES DERIVED FROM LONG-TERM POTENTIATION

Long-term potentiation (LTP) of synapses is a phenomenon in which a brief series of biochemical events gives rise to an enhancement of synaptic efficacy that is extraordinarily long-lasting (Bliss and Lømo, 1973; Lynch and Baudry, 1984; Staubli and Lynch, 1987); it is therefore a candidate mechanism underlying certain forms of learning in which few training trials are required for long-lasting memory. The physiological characteristics of LTP form the basis for a straightforward network learning rule. It is known that simultaneous pre- and post-synaptic activity (i.e., intense depolarization) results in LTP (e.g., Wigström et al., 1986). Since excitatory cells are embedded in a meshwork of inhibitory interneurons, the requisite induction of adequate levels of pre- and postsynaptic activity is achieved by stimulation of large numbers of afferents for prolonged periods, by voltage clamping the postsynaptic cell, or by chemically blocking the activity of inhibitory interneurons. In the intact animal, however, the question of how simultaneous pre- and postsynaptic activity might be induced has been an open question. Recent work (Larson and Lynch, 1986) has shown that when hippocampal afferents are subjected to patterned stimulation with particular temporal and frequency parameters, inhibition is naturally eliminated within a specific time window, and LTP can arise as a result. Figure 1 shows that LTP naturally occurs using short (3-4 pulse) bursts of high-frequency (100Hz) stimulation with a 200ms interburst interval; only the second of a pair of two such bursts causes potentiation. This occurs because the normal short inhibitory currents (IPSPs), which prevent the first burst from depolarizing the postsynaptic cell sufficiently to produce LTP, are maximally refractory 200ms after being stimulated; therefore, although the second burst arrives against a hyperpolarized background resulting from the long hyperpolarizing currents (LHP) initiated by the first burst, the second burst does not initiate its own IPSPs, since they are then refractory.
The studies leading to these conclusions were performed in in vitro hippocampal slices; LTP induced by this patterned stimulation technique in intact animals shows no measurable decrement prior to the time at which recording arrangements deteriorate: more than a month in some cases (see Staubli and Lynch, 1987).

PERFORMANCE RULES DERIVED FROM OLFACTORY PHYSIOLOGY AND BEHAVIOR

From the above data we may infer that LTP itself depends on simultaneous pre- and postsynaptic activity, as Hebb postulated, but that a sufficient degree of the latter occurs only under particular conditions. Those conditions (patterned stimulation) suggest the beginnings of a performance rule for the network. Drawing this out requires a review of the inhibitory currents active in hippocampus and in piriform cortex. Three classes of such currents are known to be present: short IPSPs, long LHPs, and extremely long, cell-specific afterhyperpolarization, or AHP (see Figure 2). Short IPSPs arise from both feedforward and feedback activation of inhibitory interneurons, which in turn synapse on excitatory cells (e.g., layer II cells, which are the primary excitatory cells in piriform). IPSPs develop more slowly than excitatory postsynaptic potentials (EPSPs) but quickly shunt the EPSP, thus reversing the depolarization that arises from EPSPs and bringing the cell voltage down below its original resting potential. IPSPs last approximately 50-100ms, and then enter a refractory period during which they cannot be reactivated, from about 100-300ms after they have once been activated. Longer hyperpolarization (LHP) is presumably dependent on a distinct type of inhibitory interneuron or inhibitory receptor, and arises in much the same way; however, these cells are apparently not refractory once activated. LHP lasts for 300-500ms. Taken together, IPSPs and LHP constitute a form of high-pass frequency filter: 200ms after an input burst, a subsequent input will arrive against a background of hyperpolarization due to LHP, yet this input will not initiate its own IPSP due to the refractory period. If the input is a single pulse, its EPSP will fail to trigger the postsynaptic cell, since it will not be able to overcome the LHP-induced hyperpolarized potential of the cell. Yet if the input is a high-frequency burst, the pulses comprising the burst will give rise to different behavior. Ordinarily, the first EPSP would have been driven back to resting potential by its accompanying IPSP before the second pulse in the burst could arrive. But when the IPSP is absent, the first EPSP is not driven rapidly down to resting potential, and the second pulse sums with it, raising the voltage of the postsynaptic cell and allowing voltage-dependent channels to open, thereby further depolarizing the cell and causing it to spike (Figure 3). Hence these high-frequency bursts fire the cell, while single pulses or lower-frequency bursts would not do so. When these cells fire, active synapses can then be potentiated. The third inhibitory mechanism, AHP, is a current that causes an excitatory cell to become refractory after it has fired strongly or rapidly. This mechanism is therefore specific to those cells that have fired, unlike the first two mechanisms. AHP can prevent a cell from firing again for as long as 1000ms (1 second). It has long been observed that EEG waves in the hippocampi of learning animals are dominated by the theta rhythm, i.e., activity occurring at about 4-8Hz.
This is now seen to correspond to the optimal rate for firing postsynaptic cells and for enhancing synapses via LTP; i.e., this rhythmic aspect of the performance rules of these networks is suggested by the physiology of LTP. The resulting activation patterns may take the following form: relatively synchronized cell firing occurring approximately once every 200ms, i.e., spatial patterns of induced activity occurring at the rate of one new spatial cell-firing pattern every 200ms. The cells most strongly participating in any one firing pattern will not participate in subsequent patterns (at least the next 4-5 patterns, i.e., 800-1000ms), due to AHP. This raises the interesting possibility that different spatial patterns (at different times) may be conveying different information about their inputs. In summary, postsynaptic cells fire in pulses or bursts depending on the synaptically-weighted sums of their active axonal inputs; this firing is synchronized across the cells in a structure, giving rise to a spatial pattern of activity across these cells; once cells fire they will not fire again in subsequent patterns; each pattern (occurring at the theta rhythm, i.e., approximately once every 200ms) will therefore consist of extremely different spatial patterns of cell activity. Hence the 'output' of such a network is a sequence of spatial patterns. In an animal engaged in an olfactory discrimination learning task, the theta rhythm dominates the animal's behavior: the animals literally sniff at theta. We have been able to substitute direct stimulation (in theta-burst mode) of the lateral olfactory tract (LOT), which is the input to the olfactory cortex, for odors: these 'electrical odors' are learned and discriminated by the animals, either from other electrical odors (via different stimulating electrodes) or from real odors. Furthermore, behavioral learning in this paradigm is accompanied by LTP of piriform synapses (Roman et al., 1987). This experimental paradigm thus provides us with a known set of behaviorally-relevant inputs to the olfactory cortex that give rise to synaptic potentiation that apparently underlies the learning of the stimuli.

ARCHITECTURE OF OLFACTORY CORTEX

Nasal receptor cells respond differentially to different chemicals; these cells topographically innervate the olfactory bulb, which is arranged such that combinations of specific spatial 'patches' of bulb characteristically respond to specific odors. Bulb also receives a number of centrifugal afferents from brain, most of which terminate on the inhibitory granule cells. The excitatory mitral cells in bulb send out axons that form the lateral olfactory tract (LOT), which constitutes the only major input to olfactory (piriform) cortex. This cortex in turn has some feedback connections to bulb via the anterior olfactory nucleus. Figure 4 illustrates the anatomy of the superficial layers of olfactory cortex: the LOT axons flow across layer Ia, synapsing with the dendrites of piriform layer-II cells. Those cells in turn give rise to collateral axon outputs which flow, in layer Ib, parallel and subjacent to the LOT, in a predominantly rostral-to-caudal direction, eventually terminating in entorhinal cortex. Layer Ia is very sparsely connected; the probability of synapses between LOT axons and layer-II cell dendrites is less than 0.10 (Lynch, 1986), and decreases caudally.
Layer Ib (where collaterals synapse with dendrites) is also sparse, but its density increases caudally as the number of collaterals increases; the overall connectivity density on layer-II-cell dendrites is approximately constant throughout most of piriform. Layer II also contains, in addition to the principal excitatory cells (modified stellates), inhibitory interneurons which synapse on excitatory cells within a specified radius, forming a 'patchwork' of cells affected by a particular inhibitory cell; the spheres of influence of inhibitory cells almost certainly overlap somewhat. There are approximately 50,000 LOT axons, 500,000 piriform layer II cells, and a much smaller number of inhibitory cells that divide layer II roughly into functional patches. (See Price, 1973; Luskin and Price, 1983; Krettek and Price, 1977; Price and Slotnick, 1983; Haberly and Price, 1977, 1978a, 1978b). The layer II cell collateral axons flow through layer III for a distance before rising up to layer Ib (Haberly, 1985); taken in combination with the predominantly caudal directionality of these collaterals, this means that rostral piriform will be dominated by LOT inputs. Extreme caudal piriform (and all of lateral entorhinal cortex) is dominated by collaterals from more rostral cells; moving from rostral to caudal piriform, cells increasingly can be thought of as 'hybrid cells': cells receiving inputs from both the bulb (via the LOT) and from rostral piriform (via collateral axons). The architectural characteristics of rostral piriform are therefore quite different from those of caudal piriform, and differential analyses must be performed of rostral cells vs. hybrid cells, as will be seen later in the paper.

SIMULATION AND FORMAL ANALYSIS: INTRODUCTION

We have conducted several simulations of olfactory cortex incorporating many of the physiological features discussed earlier. Two hundred layer II cells are used, with 100 input (LOT) lines and 200 collateral axons; both the LOT and collateral axons flow caudally. LOT axons connect with rostral dendrites with a probability of 0.2, which decreases linearly to 0.05 by the caudal end of the model. The connectivity is arranged randomly, subject to the constraint that the number of contacts for axons and dendrites is fixed within certain narrow boundaries (in the most severe case, each axon forms 20 synapses and each dendrite receives 20 contacts). The resulting matrix is thus hypergeometric in both dimensions. There are 20 simulated inhibitory interneurons, such that the layer II cells are arranged in 20 overlapping patches, each within the influence of one such inhibitory cell. Inhibition rules are approximately as discussed above; i.e., the short IPSP is longer than an EPSP but only one fifth the length of the LHP; cell-specific AHP in turn is twice as long as LHP. Synaptic activity in the model is probabilistic and quantal: for any presynaptic activation, there is a fixed probability that the synapse will allow a certain amount of conductance to be contributed to the postsynaptic cell. Long-term potentiation was represented by a 40% increase in contact strength, as well as an increase in the probability of conductance being transmitted. These effects would be expected to arise, in situ, from modifying existing synapses as well as adding new ones (Lynch, 1986), two results obtained in electron microscopic studies (Lee et al., 1980). Only excitatory cell synapses are subject to LTP.
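A sketch (ours) of the layer Ia connectivity just described: 100 LOT lines onto 200 layer-II cells, with connection probability falling linearly from 0.2 rostrally to 0.05 caudally; the fixed fan-in/fan-out bounds of the full model are omitted for brevity:

```python
import numpy as np

def build_lot_connectivity(n_lot=100, n_cells=200, p_rostral=0.2,
                           p_caudal=0.05, seed=0):
    """Random LOT -> layer II connection matrix with a rostro-caudal
    gradient in connection probability. Cells are indexed rostral to
    caudal along the first axis."""
    rng = np.random.default_rng(seed)
    # Linearly decreasing connection probability per cell.
    p = np.linspace(p_rostral, p_caudal, n_cells)
    return rng.random((n_cells, n_lot)) < p[:, None]

C = build_lot_connectivity()
# Rostral cells receive denser LOT input than caudal cells.
print(C[:20].mean(), C[-20:].mean())   # roughly 0.2 vs. 0.05
```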
LTP occurred when a cell was activated twice at a simulated 200ms interval: the first input 'primes' the synapse so that a subsequent burst input can drive it past a threshold value; following from the physiological results, previously potentiated synapses were much less different from "naive" synapses when driven at high frequency (see Lynch et al., 1988). The simulation used theta-burst activation (i.e., bursts of pulses with the bursts occurring at 5Hz) of inputs during learning, and operated according to these synchronized fixed time steps, as discussed above. The network was trained on sets of "odors", each of which was represented as a group of active LOT lines, as in the "electric odor" experiments already described. Usually three or four "components" were used in an odor, with each component consisting of a group of contiguous LOT lines. We assumed that the bulb normalized the output signal to about 20% of all LOT fibers. In some cases, more specific bulb rules were used, and in particular inhibition was assumed to be greatest in areas surrounding an active bulb "patch". The network exhibited several interesting behaviors. Learning, as expected, increased the robustness of the response to specific vectors; thus adding or subtracting LOT lines from a previously learned input did not, within limits, greatly change the response. The model, like most network simulations, dealt reasonably well with degraded or noisy known signals. An unexpected result developed after the network had learned a succession of cues. In experiments of this type, the simulation would begin to generate two quite distinct output signals within a given sampling episode; that is, a single previously learned cue would generate two successive responses in successive 'sniffs' presented to an "experienced" network. The first of these response patterns proved to be common to several signals, while the second was specific to each learned signal. The common signal was found to occur when the network had learned 3-5 inputs which had substantial overlap in their components (e.g., four odors that shared roughly 70% of their components). It appeared then that the network had begun to produce "category" or "clustering" responses on the first sniff of a simulated odor, and "individual" or "differentiation" responses on subsequent sniffs of that same odor. When presented with a novel cue which contained elements shared with other, previously learned signals, the network produced the cluster response but no subsequent individual or specific output signal. Four to five cluster response patterns and 20-25 individual responses were produced in the network without distortion. In retrospect, it was clear that the model accomplished two necessary and in some senses opposing operations: 1) it detected similarities in the members of a cue category or cluster, and 2) it nonetheless distinguished between cues that were quite similar. Its first response was to the similarity-based category and its second to the specific signal.
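A sketch (ours) of the probabilistic quantal synapse and the LTP increment described above; only the 40% strength increase comes from the text, while the baseline release probability and its increment dp are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsp(strengths, release_p, active):
    """Quantal transmission: each active contact contributes its strength
    with probability release_p, giving a probabilistic EPSP amplitude."""
    transmitted = active & (rng.random(strengths.size) < release_p)
    return strengths[transmitted].sum()

def potentiate(strengths, active, release_p, dp=0.1):
    """LTP rule from the simulation: a 40% increase in contact strength
    for the active synapses, plus an increase in transmission
    probability (the size of the increase, dp, is illustrative)."""
    strengths = np.where(active, strengths * 1.4, strengths)
    return strengths, min(1.0, release_p + dp)

# Example: 20 contacts, half active on this cue.
s = np.ones(20)
mask = np.arange(20) < 10
print(epsp(s, 0.5, mask))       # response before training
s, p = potentiate(s, mask, 0.5)
print(epsp(s, p, mask))         # typically larger response after LTP
```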
ANALYSIS OF CATEGORIZATION IN ROSTRAL PIRIFORM

Assume that a set of input cues (or 'simulated odors') X^α, X^β, ... differ from each other in the firing of d_X LOT input lines; similarly, inputs Y^α, Y^β, ... differ in d_Y lines, but inputs from the sets X and Y differ from each other in D_{X,Y} >> d lines, such that the Xs and the Ys form distinct natural categories. Then the performance of the network should give rise to output (layer II cell) firing patterns that are very similar among members of either category, but different for members of different categories; i.e., there should be a single spatial pattern of response for members of X, with little variation in response across members, and there should be a distinct spatial pattern of response for members of Y.

Considering a matrix constructed by uniform selection of neurons, each with a hypergeometric distribution for its synapses, as an approximation of the bidimensional hypergeometric matrix described above, the following results can be derived. The expected value of δ, the Hamming distance between responses for two input cues differing by 2d LOT lines (input Hamming distance of d), is:

E[δ] = N_0 [ Σ_{i≥θ} S_i Σ_{j<θ} I(i,j) + Σ_{i<θ} S_i Σ_{j≥θ} I(i,j) ]

where N_0 is the number of postsynaptic cells, each S_i is the probability that a cell will have precisely i active contacts from one of the two cues, and I(i,j) is the probability that the number of contacts on the cell will increase (or decrease) from i to j with the change in d LOT lines, i.e., changing from the first cue to the second. Hence, the first term denotes the probability of a cell decreasing its number of active contacts from above to below some threshold θ, such that the cell fired in response to one cue but not the other (and therefore is one of the cells that will contribute to the difference between responses to the two cues). Reciprocally, the second term is the probability that the cell increases its number of active synapses such that it is now over the threshold; this cell also will contribute to the difference in response. We restrict our analysis for now to rostral piriform, in which there are assumed to be few if any collateral axons. We will return to this issue in the next subsection.

The value of each S_a, the probability of a active contacts on a cell, is a hypergeometric function, since there are a fixed number of contacts anatomically between LOT and (rostral) piriform cells:

S_a = C(A, a) C(N − A, n − a) / C(N, n)

where N is the number of LOT lines, A is the number of active (firing) LOT lines, n is the number of synapses per dendrite formed by the LOT, and a is the number of active such synapses. The formula can be read by noting that the first binomial coefficient indicates the number of ways of choosing a active synapses on the dendrite from the A active incoming LOT lines; for each of these, the next expression counts the number of ways in which the remaining n − a (inactive) synapses on the dendrite are chosen from the N − A inactive incoming LOT lines; the probability of active synapses on a dendrite depends on the sparseness of the matrix (i.e., the probability of connection between any given LOT line and dendrite); the result must be normalized by the number of ways in which n synapses on a dendrite can be chosen from N incoming LOT lines.

The probability of a cell changing its number of active contacts from a to α is:

I(a, α) = Σ_{l, g : g − l = α − a} P_loss(l) P_gain(g)

where N, n, A, and a are as above, l is the 'loss' or reduction in the number of active synapses, and g is the gain or increase. Hence the left expression under the sum is the probability of losing l active synapses by changing d LOT lines, and the right-hand expression is the probability of gaining g active synapses. The product of the expressions is summed over all the ways of choosing l and g such that the net change g − l is the desired difference α − a.
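The S_a term is easy to evaluate numerically. A small sketch using scipy's hypergeometric distribution (the parameter values here are taken from the simulation described earlier; the threshold is illustrative):

```python
from scipy.stats import hypergeom

# S_a: probability of a active synapses on a dendrite, for N LOT lines,
# A of them active, and n contacts per dendrite: hypergeom.pmf(a, N, A, n).
N, A, n = 100, 20, 20
S = [hypergeom.pmf(a, N, A, n) for a in range(n + 1)]

theta = 8   # illustrative firing threshold on active contacts
p_fire = sum(S[a] for a in range(theta, n + 1))
print(f"P(a >= {theta} active contacts) = {p_fire:.4f}")
```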
If training on each cue induces only fractional LTP, then over trials, synapses contacted by any overlapping parts of the input cues should become stronger than those contacted only by unique parts of the cue. Comparing two cues from within a category vs. two cues from between categories, there may be the same number of active synapses lost across the two cues in either case, but the expected strength of the synapses lost in the former case (within category) should be significantly lower than in the latter case (across categories). Hence, for a given threshold, the difference δ between output firing patterns will be smaller for two within-category cues than for cues from two different categories.

It is important to note that clustering is an operation that is quite distinct from stimulus generalization. Observing that an object is a car does not occur because of a comparison with a specific, previously learned car. Instead the category "car" emerges from the learning of many different cars and may be based on a "prototype" that has no necessary correspondence with a specific, real object. The same could be said of the network. It did not produce a categorical response when one cue had been learned and a second similar stimulus was presented. Category or cluster responses, as noted, required the learning of several exemplars of a similarity-based cluster. It is the process of extracting commonalities from the environment that defines clustering, not the simple noting of similarities between two cues.

An essential question in clustering concerns the location of the boundaries of a given group; i.e., what degree of similarity must a set of cues possess to be grouped together? This issue has been discussed from any number of theoretical positions (e.g., information theory); all these analyses incorporate the point that the breadth of a category must reflect the overall homogeneity or heterogeneity of the environment. In a world where things are quite similar, useful categories will necessarily be composed of objects with much in common. Suppose, for instance, that subjects were presented with a set of four distinct coffee cups of different colors, and asked later to recall the objects. The subjects might respond by listing the cups as a blue, red, yellow and green coffee cup, reflecting a relatively specific level of description in the hierarchy of objects that are coffee cups. In contrast, if presented with four different objects, a blue coffee cup, a drinking glass, a silver fork and a plastic spoon, the cup would be much more likely to be recalled as simply a cup, or a coffee cup, and rarely as a blue coffee cup; the specificity of encoding chosen depends on the overall heterogeneity of the environment. The categories formed by the simulation were quite appropriate when judged by an information-theoretic measure, but how well it does across a wide range of possible worlds has not been addressed.

ANALYSIS OF PROBLEMS ARISING FROM CAUDAL AXON FLOW

The anatomical feature of directed flow of collateral axons gives rise to an immediate problem in principle. In essence, the more rostral cells that fire in response to an input, the more active inputs there are from these cells to the caudal cells, via collateral axons, such that the probability of caudal cell firing increases precipitously with the probability of rostral cell firing.
Conversely, reducing the number of rostral cells that fire, either by reducing the number of active input LOT axons or by raising the layer II cell firing threshold, prevents sufficient input to the caudal cells to enable their probability of firing to be much above zero. This problem can be stated formally, by making assumptions about the detailed nature of the connectivity of LOT and collateral axons in layer I as these axons proceed from rostral to caudal piriform. The probability of contact between LOT axons and layer-II-cell dendrites decreases caudally, as the number of collateral axons is increasing, given their rostral to caudal flow tendency. This situation is depicted in Figure 4. Assuming that the probability of LOT contact tends to go to zero, we may adopt a labelling scheme for axons and synaptic contacts, as in the diagram, in which some combination of LOT axons (x_k) and collateral axons (h_m) contact any particular layer II cell dendrite (h_n), each of which is itself the source of an additional collateral axon flowing to cells more caudal than itself. Then the cell firing function for layer II cell h_n is:

h_n = H( Σ_{m<n} h_m w_{nm} + Σ_{k≤n} x_k w_{nk} − θ )

where the x_k denote LOT axon activity of those axons still with nonzero probability of contact for layer II cell h_n, the h_m denote activity of layer II cells rostral of h_n, θ is the cell firing threshold, w_{nm} is the synaptic strength between axon m and dendrite n, and H is the Heaviside step function, equal to 1 or 0 according to whether its argument is positive or negative. If we assume instead that the probability of cell firing is a graded function rather than a step function, we may eliminate the H step function and calculate the firing of cell h_n from its net input h_{n,net} via the logistic:

h_{n,net} = Σ_{m<n} h_m w_{nm} + Σ_{k≤n} x_k w_{nk}
h_n = 1 / (1 + e^{−(k h_{n,net} + θ_n)})

Then we may expand the expression for firing of cell h_n as follows:

h_n = [1 + e^{−(Σ_{m<n} h_m w_{nm} + Σ_{k≤n} x_k w_{nk} + θ)}]^{−1}

By assuming a fixed firing threshold, and varying the number of active input LOT lines, the probability of cell firing can be examined. Numerical simulation of the above expressions across a range of LOT spatial activation patterns demonstrates that the probability of cell firing remains near zero until a critical number of LOT lines are active, at which point the probability flips to close to 100% (Figure 5). This means that, for any given firing threshold, given fewer than a certain amount of LOT input, practically no piriform cells will fire, whereas a slight increase in the number of active LOT lines will mean that practically all piriform cells should fire. This excruciating dependence of cell firing on the amount of LOT input indicates that normalization of the size of the LOT input alone will be insufficient to stabilize the size of the layer II response; even slight variation of LOT activity in either direction has extreme consequences.
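The steepening is easy to reproduce numerically. The following sketch (Python/numpy; the threshold, collateral weight, and taper endpoints are illustrative choices, not the paper's exact values) sweeps cells from rostral to caudal so that collateral input accumulates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_axons, n_cells, theta = 100, 200, 3.0
taper = np.linspace(0.2, 0.05, n_cells)        # LOT contact probability taper
lot = rng.random((n_axons, n_cells)) < taper   # fixed LOT connectivity
coll_w = 0.1                                   # assumed collateral weight

def fraction_firing(n_active, trials=200):
    """Mean fraction of cells firing as a function of active LOT lines.
    Collaterals flow caudally, so each cell also sums rostral cell output."""
    total = 0.0
    for _ in range(trials):
        x = np.zeros(n_axons)
        x[rng.choice(n_axons, n_active, replace=False)] = 1.0
        h = np.zeros(n_cells)
        for c in range(n_cells):               # rostral -> caudal sweep
            drive = x @ lot[:, c] + coll_w * h[:c].sum()
            h[c] = float(drive > theta)
        total += h.mean()
    return total / trials

for k in (4, 6, 8, 10, 12):
    print(k, round(fraction_firing(k), 3))
```

With these illustrative settings the firing fraction stays near zero for small numbers of active lines and climbs abruptly once enough rostral cells fire to trigger the collateral cascade.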
A number of solutions are possible; in particular, the known local anatomy and physiology of layer II inhibitory interneurons provides a mechanism for controlling the amount of layer II response. As discussed, inhibitory interneurons give rise to both feedforward (activated by LOT input) and feedback (activated by collateral axons) activity; the influence of any particular interneuron is limited anatomically to a relatively small radius around itself within layer II, and the spheres of influence of multiple interneurons probably overlap to some extent. Nonetheless, the 'sphere of influence' of a particular inhibitory interneuron can be viewed as a local patch in layer II, within which the number of active excitatory cells is in large measure controlled by the activity of the inhibitory cell in that patch. If a number of excitatory cells are firing with varying depolarization levels within a patch in layer II, activation of the inhibitory cells by the excitatory cells will tend to weaken those excitatory cells that are less depolarized than the most strongly-firing cell within the patch, leading to a competition in which only those cells firing most strongly within a patch will burst, and these cells will, via the interneuron, suppress multiple firing of other cells within the patch. Thus the patch takes on some of the characteristics of a 'winner-take-all' network (Feldman, 1982): only the most strongly firing cells will be able to overcome inhibition sufficiently to burst, some additional cells will pulse once and then be overwhelmed by inhibition, and the rest of the cells in the patch will be silent, even though that patch may be receiving a large amount of excitatory input via LOT and collateral axon activity in layer I.
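A minimal sketch of this patch competition (Python/numpy; the burst/pulse/silent coding and the pulse margin are our own illustrative conventions):

```python
import numpy as np

def winner_take_all(depol, patches, pulse_margin=0.8):
    """Patch-wise competition: in each inhibitory patch the most depolarized
    cell bursts (coded 2), cells close behind it pulse once (coded 1), and
    the rest stay silent (0).  Patches may overlap, as the anatomy suggests."""
    out = np.zeros_like(depol)
    for cells in patches:
        cells = np.asarray(cells)
        d = depol[cells]
        pulsing = cells[d >= pulse_margin * d.max()]
        out[pulsing] = np.maximum(out[pulsing], 1.0)
        out[cells[np.argmax(d)]] = 2.0       # the winner bursts
    return out

rng = np.random.default_rng(1)
depol = rng.random(10)
patches = [range(0, 6), range(4, 10)]        # two overlapping patches
print(winner_take_all(depol, patches))
```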
EMERGENT CATEGORIZATION BEHAVIOR IN THE MODEL

The probabilistic quantal transmitter-release properties of piriform synapses described above give rise to probabilistic levels of postsynaptic depolarization. This inherent randomness of cell firing, in combination with the activity of local inhibitory patches in layer II, selects different sets of bursting and pulsing cells on different trials if no synaptic enhancement has taken place. The time-locked firing to the theta rhythm enables distinct spatial patterns of firing to be read out against a relatively quiescent background firing rate. Synaptic LTP enhances the conductances and alters the probabilistic nature of communication between a given axon and dendrite, which tends to overcome the randomness of the cell firing patterns in untrained cells, yielding a stable spatial pattern that will reliably appear in response to the same input in the future, and in fact will appear even in response to degraded or noisy versions of the input pattern. Furthermore, subsequent input patterns that differ in only minor respects from a learned LOT input pattern will contact many of the already-potentiated synapses from the original pattern, thereby tending to give rise to a very similar (and stable) output firing pattern. Thus, as multiple cues sharing many overlapping LOT lines are learned, the layer II cell responses to each of these cues will strongly resemble the responses to the others. Hence, the responses behave as though simply labelling a category of very-similar cues; sufficiently different cues will give rise to quite-different category responses.

EMERGENT DIFFERENTIATION BEHAVIOR IN THE MODEL

Potentiated synapses cause stronger depolarization and firing of those cells participating in a 'category' response to a learned cue. This increased depolarization causes strong, cell-specific afterhyperpolarization (AHP), effectively putting those cells into a relatively long-lasting (≈1 sec) refractory period that prevents them from firing in response to the next few sampling sniffs of the cue. Then the inhibitory 'winner-take-all' behavior within patches effectively selects alternate cells to fire, once these strongly-firing (learned) cells have undergone AHP. These alternates will be selected with some randomness, given the probabilistic release characteristics discussed above, since these cells will tend not to have potentiated synapses. These alternate cells then activate their caudally-flowing recurrent collaterals, activating distinct populations of synapses in caudal layer Ib. Potentiation of these synapses, in combination with those of still-active LOT axons, tends to 'recruit' stable subpopulations of caudal cells that are distinct for each simulated odor. They are distinct for each odor because the first rostral cells are selected from the population of unpotentiated or weakly-potentiated cells (after the strongly potentiated cells have been removed via AHP); hence they will at first tend to be selected randomly. Then, of the caudal cells that receive some activation from the weakening caudal LOT lines, those that also receive collateral innervation from these semi-randomly selected rostral cells will be those that will tend to fire most strongly, and hence to be potentiated. The probability of a cell participating in the rostral semi-randomly selected groups for more than one odor (e.g., for two similar odors) is lower than the probability of cells being recruited by these two odors initially, since the population consists of cells that receive not enough input from the LOT to have been recruited as a category cell and potentiated, yet receive enough input to fire as an alternate cell. The probability of any caudal cell then being recruited for more than one odor by these rostral cell collaterals in combination with weakening caudal LOT lines is similarly low. The product of these two probabilities is of course lower still. Hence, the probability that any particular caudal cell potentiated as part of this process will participate in response to more than one odor is very low. This means that, when sampling (sniffing), the first pattern of cell firing will indicate similarity among learned odors, causing AHP of those patterns; thus later sniffs will generate patterns of firing that tend to be quite different for different odors, even when those odors are very similar. Empirical tests of the simulation have shown that odors consisting of 90%-overlapping LOT firing patterns will give rise to overlaps of between 85% and 95% in their initial layer II spatial firing patterns, whereas these same cues give rise to layer II patterns that overlap by less than 20% on second and third sniffs. The spatio-temporal pattern of layer II firing over multiple samples thus can be taken as a strong differentiating mechanism for even very-similar cues, while the initial sniff response for those cues will nonetheless give rise to a spatial firing pattern that indicates the similarity of sets of learned cues, and therefore their 'category membership' in the clustering sense.

CLUSTERING

Incremental clustering of cues into similarity-based categories is a more subtle process than might be thought, and while it is clear that the piriform simulation performs this function, we do not know how optimal its performance is in an information-theoretic sense, relative to some measure of the value or cost of information in the encoding. Building a categorical scheme is a non-monotonic, combinatorial problem: that is, each new item to be learned can have disproportionate effects on the existing scheme, and the number of potential categories (clusters) climbs factorially with the number of items to be categorized. Algorithmic solutions to problems of this type are computationally very expensive.
Calculation of an ideal categorization scheme (with respect to particular cost measures in a performance task), using a hill-climbing algorithm derived from an information-theoretic measure of category value, applied to a problem involving 22 simulated odors, required more than 4 hours on a 68020-based processor. The simulation network reached the same answer as the information-theoretic program, but did so in seconds. It is worth mentioning again that the simulation did so while simultaneously learning unique encodings for the cues, as described above, which is itself a nontrivial task. Humans, on at least some tasks, may carry out clustering by building initial clusters and then merging or splitting them as more cues are presented. Thus far, the networks do not pass through successive categorization schemata. However, experiments on human categorization have almost exclusively involved situations in which all cues were presented in rapid succession and category membership is taught explicitly, rather than developed independently by the subject. Hence, it is not clear from the experimental literature whether or not stable clusters develop in this way from stimuli presented at widely spaced intervals with no category membership information given, which is the problem corresponding to that given the network (and that is likely common in nature). It will be of interest to test the categorizing skills of rats learning successive olfactory discriminations over several days. Using appropriately selected stimuli, it should be possible to determine if stable clusters are constructed and whether merging and splitting occurs over trials. Any useful clustering device must utilize information about the heterogeneity of the stimulus world in setting the heterogeneity of individual categories. Heterogeneity of categories refers to the degree of similarity that is used to determine if cues are to be grouped together or not. Several network parameters will influence category size, and we are exploring how these influence the individuation function; one particularly interesting possibility involves a shifting threshold function, an idea used with great success by Cooper in his work on visual cortex. The problems presented to the simulation thus far involve a totally naive system, one that has had no "developmental" history. We are currently exploring a model in which early experiences are not learned by the network but instead set parameters for later ("adult") learning episodes. The idea is that early experience determines the heterogeneity of the stimulus world and imprints this on the network, not by specific changes in synaptic strengths, but in a more general fashion.

CONCLUSIONS

Neurons have a nearly bewildering array of biophysical, chemical, electrophysiological and anatomical properties that control their behavior; an open question in neural network research is which of these properties need be incorporated into networks in order to simulate brain circuit function. The simulation described here incorporates an extreme amount of biological data, and in fact has given rise to novel physiological questions, which we have tested experimentally with results that are counterintuitive and previously unsuspected in the existing physiological literature (see, e.g., Lynch and Granger, 1988; Lynch et al., 1988).
Incorporation of this mass of physiological parameters into the simulation gives rise to a coherent architecture and to learning and performance rules which, when interpreted in terms of the computational function of the network, generate a robust capability to encode multiple levels of information about learned stimuli. The coherence of the data in the model is useful in two ways: to provide a framework for understanding the purposes and interactions of many apparently-disparate biological properties of neurons, and to aid in the design of novel artificial network architectures inspired by biology, which may have useful computational functions. It is instructive to note that neurons are capable of many possible biophysical functions, yet early results from chronic recording of cells from olfactory cortex in animals actively engaged in learning many novel odors in an olfactory discrimination task clearly show a particular operating mode of this cortical structure when it is actively in use by the animal (Larson et al., unpublished data). The rats in this task are very familiar with the testing paradigm and exhibit very rapid learning, with no difficulty in acquiring large numbers of discriminations. Sampling, detection and responding occur in fractions of a second, indicating that the utilization of recognition memories in the olfactory system can be a rapid operation; it is not surprising, then, that the odor-coded units so far encountered in our physiological experiments have rapid and stereotyped responses. Given the dense innervation of the olfactory bulb by the brain, it is possible that the type of spatial encoding that appears to be responsible for the preliminary results of these chronic experiments would not appear in animals that were not engaged in active sampling or were confronted with unfamiliar problems. That is, the operation of the olfactory cortex might be as dependent upon the behavioral 'state' and behavioral history of the rat as upon the actual odors presented to it. It will be of interest to compare the results from well-trained freely-moving animals with those obtained using more restrictive testing conditions. The temporal properties of synaptic currents and afterpotentials, and results from simulations and chronic recording studies, taken together, suggest two useful caveats for biological models:

- Cell firing in cortical structures (e.g., piriform, hippocampus and possibly neocortex) is linked to particular rhythms (theta in the case of piriform and hippocampus) during real learning behavior, and thus it is likely that the 'coding language' of these structures involves spatial cell firing patterns within a brief time window. This stands in contrast to other methods such as frequency coding that appear in other structures (such as peripheral sensory structures, e.g., retina and cochlea; see, e.g., Sivilotti et al., 1987).

- Temporal sequences of spatial patterns may encode different types of information, such as hierarchical encodings of perceptions, in contrast with views in which either asynchronous 'cycling' activity occurs or a system yields a single punctate output and then halts.
In particular, simulation of piriform gives rise to temporal sequences of spatial patterns of synchronized cell firing in layer II, and the patterns change over time: the physiology and anatomy of the structure cause successive 'sniffs' of the same olfactory stimulus to give rise to a sequence of spatial patterns, each of which encodes successively more specific information about the stimulus, beginning with its similarity to other previously learned stimuli, and ending with a unique encoding of its characteristics. It is possible that both the early similarity-based 'cluster' information and the late unique encodings are used, for different purposes, by brain structures that receive these signals as output from piriform.

ACKNOWLEDGEMENTS

Much of the theoretical underpinning of this work depends critically on data generated by John Larson; we are grateful for his insightful advice and help. This work has benefited from discussions with Michel Baudry, Mark Gluck, and Ursula Staubli. Jose Ambros-Ingerson is supported by a fellowship from Hewlett-Packard, Mexico, administered by UC MEXUS.

REFERENCES

Bliss, T.V.P. and Lømo, T. (1973). Long-lasting potentiation of synaptic transmission in the dentate area of the anesthetized rabbit following stimulation of the perforant path. J. Physiol. Lond. 232:357-374.

Feldman, J.A. (1982). Dynamic connections in neural networks. Biological Cybernetics 46:27-39.

Haberly, L.B. (1985). Neuronal circuitry in olfactory cortex: Anatomy and functional implications. Chemical Senses 10:219-238.

Haberly, L.B. and J.L. Price (1977). The axonal projection patterns of the mitral and tufted cells of the olfactory bulb in the rat. Brain Res. 129:152-157.

Haberly, L.B. and J.L. Price (1978a). Association and commissural fiber systems of the olfactory cortex of the rat. I. Systems originating in the piriform cortex and adjacent areas. J. Comp. Neurol. 178:711-740.

Haberly, L.B. and J.L. Price (1978b). Association and commissural fiber systems of the olfactory cortex of the rat. II. Systems originating in the olfactory peduncle. J. Comp. Neurol. 181:781-808.

Hebb, D.O. (1949). The Organization of Behavior. New York: Wiley.

Krettek, J.E. and J.L. Price (1977). Projections from the amygdaloid complex and adjacent olfactory structures to the entorhinal cortex and to the subiculum in the rat and cat. J. Comp. Neurol. 172:723-752.

Larson, J. and G. Lynch (1986). Synaptic potentiation in hippocampus by patterned stimulation involves two events. Science 232:985-988.

Lee, K., Schottler, F., Oliver, M. and Lynch, G. (1980). Brief bursts of high-frequency stimulation produce two types of structural change in rat hippocampus. J. Neurophysiol. 44:247-258.

Lynch, G. and Baudry, M. (1984). The biochemistry of memory: a new and specific hypothesis. Science 224:1057-1063.

Lynch, G. (1986). Synapses, circuits, and the beginnings of memory. Cambridge, Mass.: MIT Press.

Lynch, G., Larson, J., Staubli, U., and Baudry, M. (1987). New perspectives on the physiology, chemistry and pharmacology of memory. Drug Devel. Res. 10:295-315.

Lynch, G., Granger, R., Levy, W. and Larson, J. (1988). Some possible functions of simple cortical networks suggested by computer modeling. In: Neural Models of Plasticity: Theoretical and Empirical Approaches, Byrne, J. and Berry, W.O. (Eds.), (in press).

Lynch, G. and Granger, R. (1988). Simulation and analysis of a cortical network. The Psychology of Learning and Motivation, Vol. 22 (in press).

Luskin, M.B. and J.L. Price (1983).
The laminar distribution of intracortical fibers originating in the olfactory cortex of the rat. J. Comp. Neurol. 216:292-302.

Parker, D.B. (1985). Learning-logic. MIT TR-47, Massachusetts Institute of Technology, Center for Computational Research in Economics and Management Science, Cambridge, Mass.

Price, J.L. (1973). An autoradiographic study of complementary laminar patterns of termination of afferent fibers to the olfactory cortex. J. Comp. Neur. 150:87-108.

Price, J.L. and B.M. Slotnick (1983). Dual olfactory representation in the rat thalamus: An anatomical and electrophysiological study. J. Comp. Neurol. 215:63-77.

Roman, F., Staubli, U. and Lynch, G. (1987). Evidence for synaptic potentiation in a cortical network during learning. Brain Res. 418:221-226.

Rosenblatt, F. (1962). Principles of Neurodynamics. New York: Spartan.

Rumelhart, D., Hinton, G. and Williams, R. (1986). Learning internal representations by error propagation. In D. Rumelhart and J. McClelland (Eds.), Parallel Distributed Processing, Cambridge: MIT Press.

Sivilotti, M.A., Mahowald, M.A. and Mead, C.A. (1987). Real-time visual computations using analog CMOS processing arrays. In: Advanced Research in VLSI (Ed. Paul Losleben), MIT Press, Cambridge.

Staubli, U. and Lynch, G. (1987). Stable hippocampal long-term potentiation elicited by 'theta' pattern stimulation. Brain Res. (in press).

Widrow, B. and Hoff, M.E. (1960). Adaptive switching circuits. Institute of Radio Engineers, Western Electronic Convention Record, Part 4, pp. 96-104.

Wigström, H., B. Gustafsson, Y.Y. Huang and W.C. Abraham (1986). Hippocampal long-term potentiation is induced by pairing single afferent volleys with intracellularly injected depolarizing current pulses. Acta Physiol. Scand. 126:317-319.

[Figure 1 panels: A) stimulation arrangement for S1 and S2 with a 200 ms delay; B) intracellular EPSP amplitudes for S1 and S2 plotted against time (minutes); C) records before, after, and superimposed.]

Figure 1. LTP induction by short high-frequency bursts involves sequential "priming" and "consolidation" events. A) S1 and S2 represent separate groups of Schaffer/commissural fibers converging on a single CA1 pyramidal neuron. The stimulation pattern employed consisted of pairs of bursts (each 4 pulses at 100 Hz) given to S1 and S2 respectively, with a 200 ms delay between them. The pairs were repeated 10 times at 2 sec intervals. B) Only the synapses activated by the delayed burst (S2) showed LTP. The top panel shows measurements of amplitudes of intracellular EPSPs evoked by single pulses to S1 before and after patterned stimulation (given at 20 min into the experiment). The middle panel shows the amplitude of EPSPs evoked by S2. The bottom panel shows EPSP amplitudes for both pathways expressed as a percentage of their respective sizes before burst stimulation. C) Shown are records of EPSPs evoked by S1 and S2 five min before and 40 min after patterned burst stimulation. Calibration bar: 5 mV, 5 msec. (From Larson and Lynch, 1986.)

[Figure 2 diagram: timing of events, with approximate onsets and durations: EPSP ≈20 msec (Na); IPSP ≈100 msec (Cl); LHP ≈0.5 sec (K); AHP ≈1 sec (K).]

Figure 2. Onset and duration of events comprising stimulation of a layer II cell in piriform cortex.
Axonal stimulation via the lateral olfactory tract (LOT) activates feedforward EPSPs with rapid onset and short duration (≈20 msec) and two types of feedforward inhibition: short feedforward IPSPs with slower onset and somewhat longer duration (≈100 msec) than the EPSPs, and longer hyperpolarizing potentials (LHP) lasting ≈500 msec. These two types of inhibition are not specific to firing cells; an additional, very long-lasting (≈1 sec) inhibitory afterhyperpolarizing current (AHP) is induced in a cell-specific fashion in those cells with intense firing activity. Finally, feedback EPSPs and IPSPs are induced by activation via recurrent collateral axons from layer II cells.

[Figure 3 traces: first, second, and tenth responses for the S1-AHP, S2, and control conditions.]

Figure 3. When short, high-frequency bursts are input to cells 200 ms after an initial 'priming' event, the broadened EPSPs (see Figure 1) will allow the contributions of the second and subsequent pulses comprising the burst to sum with the depolarization of the first pulse, yielding higher postsynaptic depolarization sufficient to cause the cell to spike. (From Lynch, Larson, Staubli and Baudry, 1987.)

[Figure 4 diagram labels (anterior to posterior): the probability of LOT contact per axon decreases while the probability of associational contact per axon increases; the relative contributions of LOT vs. associational input to cell firing shift accordingly.]

Figure 4. Organization of extrinsic and feedback inputs to layer-II cells of piriform cortex. The axons comprising the lateral olfactory tract (LOT), originating from the bulb, innervate distal dendrites, whereas the feedback collateral or associational fibers contact proximal dendrites. Layer II cells in anterior (rostral) piriform are depicted as being dominated by extrinsic (LOT) input, whereas feedback inputs are more prominent on cells in posterior (caudal) piriform.

[Figure 5 plot: 'Tapered Feedforward Firing Probabilities'; probability of layer-II-cell firing (%) vs. number firing (of 40 connections), for stimuli A-D and the cumulative hypergeometric ('CumHypergmt') curve.]

Figure 5. Probability of layer-II-cell firing as a function of the number of LOT axons active, in the absence of local inhibitory patches. The hypergeometric function ('CumHypergmt') specifies the probability of layer II cell firing in the absence of caudally-directed feedback collaterals, i.e., assuming that all collaterals are equally likely to travel either rostrally or caudally. In this case, there is a smooth S-shaped function for probability of cell firing with increasing LOT activity, so that adjustment of the global firing threshold (e.g., via nonspecific cholinergic inputs affecting all piriform inhibitory interneurons) can effectively normalize piriform layer II cell firing. However, when feedback axons are caudally directed, the probability function steepens markedly, becoming a near step function in which the probability of cell firing is exquisitely sensitive to the number of active inputs, across a range of empirically-tested LOT stimulation patterns (A - D in the figure). In this case, global adjustment of inhibition will fail to adequately normalize layer II cell firing: the probability of cell firing will always be either near zero or near 1.0; i.e., either nearly all cells will fire or almost none will fire. Local inhibitory control of 'patches' of layer II solves this problem (refer to text).
Unsupervised Classifiers, Mutual Information and 'Phantom Targets'

David J.C. MacKay
California Institute of Technology 139-74
Pasadena CA 91125 U.S.A.

John S. Bridle and Anthony J.R. Heading
Defence Research Agency
St. Andrew's Road, Malvern, Worcs. WR14 3PS, U.K.

Abstract

We derive criteria for training adaptive classifier networks to perform unsupervised data analysis. The first criterion turns a simple Gaussian classifier into a simple Gaussian mixture analyser. The second criterion, which is much more generally applicable, is based on mutual information. It simplifies to an intuitively reasonable difference between two entropy functions, one encouraging 'decisiveness,' the other 'fairness' to the alternative interpretations of the input. This 'firm but fair' criterion can be applied to any network that produces probability-type outputs, but it does not necessarily lead to useful behavior.

1 Unsupervised Classification

One of the main distinctions made in discussing neural network architectures, and pattern analysis algorithms generally, is between supervised and unsupervised data analysis. We should therefore be interested in any method of building bridges between techniques in these two categories. For instance, it is possible to use an unsupervised system such as a Boltzmann machine to learn the joint distribution of inputs and a teacher's classification labels. The particular type of bridge we seek is a method of taking a supervised pattern classifier and turning it into an unsupervised data analyser. That is, we are interested in methods of "bootstrapping" classifiers. Consider a classifier system. Its input is a vector x, and the output is a probability vector y(x). (That is, the elements of y are positive and sum to 1.) The elements of y, (y_i(x), i = 1 ... N_c), are to be taken as the probabilities that x should be assigned to each of N_c classes. (Note that our definition of classifier does not include a decision process.)
One well known approach to unsupervised data analysis is to minimise a reconstruction error: for linear projections and squared euclidean distance this leads to principal components analysis, while reference-point based classifiers lead to vector quantizer design methods, such as the LBG algorithm, Variants on VQ , such as Kohonen's feature maps, can be motivated by requiring robustness to distortions in the code space . Reconstruction error is only available as a training criterion if reconstruction is defined: in general we are only given class label probabilities. 2 A Data Likelihood Criterion For the special case of a Gaussian clustering of an unlabelled data set, it was demonstrated in [1] that gradient ascent on the likelihood of the data has an appealing interpretation in terms of backpropagation in an equivalent unit-Gaussian classifier network: for each input X presented to the network, the output y is doubled to give 'phantom targets' t = 2y; when the derivatives of the log likelihood criterion J = -Eiti 10gYi relative to these targets are propagated back through the network, it turns out that the resulting gradient is identical to t.he gradient of the likelihood of the data given a Gaussian mixture model. For the unit-Gaussian classifier, the activations ai in (1) are ai = -Ix - wd 2 , (2) so the outputs of the network are Yi = P(class = i Ix, w) (3) where we assume the inputs are drawn from equi-probable unit-Gaussian distributions with the mean of the distribution of the ith class equal to Wi. This result was only derived in a limited context, and it was speculated that it might be generalisable to arbitrary classification models . The above phantom t.arget. rule 1097 1098 Bridle, Heading, and MacKay has been re-derived for a larger class of networks [4], but the conditions for strict applicability are quite severe. Briefly, there should be exponential density functions for each class, and the normalizing factors for these densit.ies should be independent of the parameters. Thus Gaussians with fixed covariance matrices are acceptable, but variable covariances are not, and neither are linear transformat.ions preceeding the Gaussians. The next section introduces a new objective function which is independent of details of the classifier. 3 Mutual Information Criterion Intuitively, an unsupervised adaptive classifier is doing a plausible job if its outputs usually give a fairly clear indication of the class of an input vector, and if there is also an even dist.ribution of input patterns between the classes. We could label these desiderata 'decisive' and 'fair' respectively. Note that it is trivial to achieve either of them alone. For a poorly regularised model it may also be trivial to achieve both. There are several ways to proceed. We could devise ad-hoc measures corresponding to our notions of decisiveness and fairness, or we could consider particular types of classifier and their unsupervised equivalents, seeking a general way of turning one into the other. Our approach is to return to the general idea that the class predictions should retain as much information about the input values as possible. We use a measure of the information about x which is conveyed by the output distribution, i. e. the mutual information between the inputs and the outputs. 'Ne interpret the outputs y as a probability distribution over a discrete random variable e (the class label), thus y = p( elx). 
3 Mutual Information Criterion

Intuitively, an unsupervised adaptive classifier is doing a plausible job if its outputs usually give a fairly clear indication of the class of an input vector, and if there is also an even distribution of input patterns between the classes. We could label these desiderata 'decisive' and 'fair' respectively. Note that it is trivial to achieve either of them alone. For a poorly regularised model it may also be trivial to achieve both. There are several ways to proceed. We could devise ad-hoc measures corresponding to our notions of decisiveness and fairness, or we could consider particular types of classifier and their unsupervised equivalents, seeking a general way of turning one into the other. Our approach is to return to the general idea that the class predictions should retain as much information about the input values as possible. We use a measure of the information about x which is conveyed by the output distribution, i.e. the mutual information between the inputs and the outputs. We interpret the outputs y as a probability distribution over a discrete random variable c (the class label), thus y = p(c|x). The mutual information between x and c is

I(c; x) = ∫∫ dc dx p(c, x) log [ p(c, x) / (p(c) p(x)) ]    (4)
        = ∫ dx p(x) ∫ dc p(c|x) log [ p(c|x) / p(c) ]    (5)
        = ∫ dx p(x) ∫ dc p(c|x) log [ p(c|x) / ∫ dx p(x) p(c|x) ]    (6)

The elements of this expression are separately recognizable: ∫dx p(x) (·) is equivalent to an average over a training set, (1/N_ts) Σ_ts (·); p(c|x) is simply the network output y_c; ∫dc (·) is a sum over the class labels and corresponding network outputs. Hence, writing ȳ_i = (1/N_ts) Σ_ts y_i for the average of the ith output over the training set:

I = I(c; x) = (1/N_ts) Σ_ts Σ_{i=1}^{N_c} y_i log (y_i / ȳ_i)    (7)
  = − Σ_{i=1}^{N_c} ȳ_i log ȳ_i + (1/N_ts) Σ_ts Σ_{i=1}^{N_c} y_i log y_i    (8)
  = H(ȳ) − H̄(y)    (9)

The objective function I is the difference between the entropy of the average of the outputs, and the average of the entropy of the outputs, where both averages are over the training set. H(ȳ) has its maximum value when the average activities of the separate outputs are equal; this is 'fairness'. H̄(y) has its minimum value when one output is full on and the rest are off for every training case; this is 'firmness'. We now evaluate I for the training set and take the gradient of I.

4 Gradient descent

To use this criterion with back-propagation network training, we need its derivatives with respect to the network outputs:

∂I(c; x)/∂y_i = ∂/∂y_i [ (1/N_ts) Σ_ts Σ_j y_j log y_j − Σ_j ȳ_j log ȳ_j ]    (10)
              = (1/N_ts) (log y_i + 1) − (1/N_ts) (log ȳ_i + 1)    (11)
              = (1/N_ts) log (y_i / ȳ_i)    (12)

The resulting expression is quite simple, but note that the presence of a ȳ_i term means that two passes through the training set are required: the first to calculate the average output node activations, and the second to back-propagate the derivatives.
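The criterion and its gradient are straightforward to compute in the two passes just described. A sketch (Python/numpy; the array shapes and the finite-difference check are ours):

```python
import numpy as np

def mutual_information_criterion(Y):
    """I = H(ybar) - mean H(y): 'fairness' minus lack of 'firmness'.
    Y: (N_ts, N_c) array of probability vectors (rows sum to 1)."""
    ybar = Y.mean(axis=0)                             # pass 1: average outputs
    H_ybar = -np.sum(ybar * np.log(ybar))             # entropy of the average
    H_mean = -np.mean(np.sum(Y * np.log(Y), axis=1))  # average of the entropies
    return H_ybar - H_mean

def criterion_gradient(Y):
    """dI/dy_i per pattern: (1/N_ts) log(y_i / ybar_i), as in (10)-(12).
    Needs ybar, hence the first pass over the training set."""
    ybar = Y.mean(axis=0)
    return np.log(Y / ybar) / Y.shape[0]

rng = np.random.default_rng(0)
Y = rng.dirichlet(np.ones(4), size=50)                # 50 fake output vectors
print(mutual_information_criterion(Y))

# finite-difference check of one gradient component (illustrative)
eps = 1e-6
Yp = Y.copy(); Yp[0, 0] += eps
print((mutual_information_criterion(Yp) - mutual_information_criterion(Y)) / eps,
      criterion_gradient(Y)[0, 0])
```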
6 Comments

How useful this objective function proves to be will depend very much on the form of classifier to which it is applied. For a poorly regularised classifier, maximisation of the criterion alone will not necessarily lead to good solutions to unsupervised classification; it could be maximised by any implausible classification of the inputs that is completely hard (i.e. the output vector always has one 1 and all the other outputs 0) and that chops the training set into regions containing similar numbers of training points; such a solution would be one of many global maxima, regardless of whether it chopped the data into natural classes. The meaning of a 'natural' partition in this context is, of course, rather ill-defined. Simple models often do not have the capacity to break a pattern space into highly contorted regions; the decision boundaries shown in Figure 4 are an example of a model producing a reasonable result as a consequence of its inherent simplicity. When we use more complex models, however, we must ensure that we find simpler solutions in preference to more complex ones. Thus this criterion encourages us to pursue objective techniques for regularising classification networks [2, 3]; such techniques are probably long overdue.

Copyright © Controller HMSO London 1992

References

[1] J.S. Bridle (1988). The phantom target cluster network: a peculiar relative of (unsupervised) maximum likelihood stochastic modelling and (supervised) error backpropagation. RSRE Research Note SP4: 66, DRA Malvern, UK.
[2] D.J.C. MacKay (1991). Bayesian interpolation. Submitted to Neural Computation.
[3] D.J.C. MacKay (1991). A practical Bayesian framework for backprop networks. Submitted to Neural Computation.
[4] J.S. Bridle and S.J. Cox. RecNorm: simultaneous normalisation and classification applied to speech recognition. In Advances in Neural Information Processing Systems 3. Morgan Kaufmann, 1991.
[5] J.S. Bridle. Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. In Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.
Learning Sparse Representations of High Dimensional Data on Large Scale Dictionaries

Zhen James Xiang, Hao Xu, Peter J. Ramadge. Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA. {zxiang,haoxu,ramadge}@princeton.edu

Abstract

Learning sparse representations on data adaptive dictionaries is a state-of-the-art method for modeling data. But when the dictionary is large and the data dimension is high, it is a computationally challenging problem. We explore three aspects of the problem. First, we derive new, greatly improved screening tests that quickly identify codewords that are guaranteed to have zero weights. Second, we study the properties of random projections in the context of learning sparse representations. Finally, we develop a hierarchical framework that uses incremental random projections and screening to learn, in small stages, a hierarchically structured dictionary for sparse representations. Empirical results show that our framework can learn informative hierarchical sparse representations more efficiently.

1 Introduction

Consider approximating a p-dimensional data point x by a linear combination x ≈ Bw of m (possibly linearly dependent) codewords in a dictionary B = [b₁, b₂, ..., b_m]. Doing so by imposing the additional constraint that w is a sparse vector, i.e., x is approximated as a weighted sum of only a few codewords in the dictionary, has recently attracted much attention [1]. As a further refinement, when there are many data points xⱼ, the dictionary B can be optimized to make the representations wⱼ as sparse as possible. This leads to the following problem. Given n data points in ℝᵖ organized as a matrix X = [x₁, x₂, ..., xₙ] ∈ ℝ^{p×n}, we want to learn a dictionary B = [b₁, b₂, ..., b_m] ∈ ℝ^{p×m} and sparse representation weights W = [w₁, w₂, ..., wₙ] ∈ ℝ^{m×n} so that each data point xⱼ is well approximated by Bwⱼ with wⱼ a sparse vector:

min_{B,W} (1/2)‖X − BW‖²_F + λ‖W‖₁   s.t. ‖bᵢ‖₂² ≤ 1, ∀i = 1, 2, ..., m.   (1)

Here ‖·‖_F and ‖·‖₁ denote the Frobenius norm and element-wise l₁-norm of a matrix, respectively. There are two advantages to this representation method. First, the dictionary B is adapted to the data. In the spirit of many modern approaches (e.g. PCA, SMT [2], tree-induced bases [3, 4]), rather than fixing B a priori (e.g. Fourier, wavelet, DCT), problem (1) assumes minimal prior knowledge and uses sparsity as a cue to learn a dictionary adapted to the data. Second, the new representation w is obtained by a nonlinear mapping of x. Algorithms such as Laplacian eigenmaps [5] and LLE [6] also use nonlinear mappings x ↦ w. By comparison, l₁-regularization enjoys a simple formulation with a single tuning parameter (λ). In many other approaches (including [2-4]), although the codewords in B are cleverly chosen, the new representation w is simply a linear mapping of x, e.g. w = B†x. In this case, training a linear model on w cannot learn nonlinear structure in the data. As a final point, we note that the human visual cortex uses similar mechanisms to encode visual scenes [7], and sparse representation has exhibited superior performance on difficult computer vision problems such as face [8] and object [9] recognition.

The challenge, however, is that solving the non-convex optimization problem (1) is computationally expensive. Most state-of-the-art algorithms solve (1) by iteratively optimizing W and B. For a fixed B, optimizing W requires solving n p-dimensional lasso problems of size m.
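To make the alternating structure concrete before discussing its cost, here is a minimal, illustrative sketch of (1) in numpy. It is not the toolbox used in the paper: ISTA stands in for LARS in the W step, and the constrained B step is simplified to an unconstrained least squares followed by column renormalisation; function names are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dictionary_learning(X, m, lam, iters=50, ista_steps=100):
    """Alternate the two subproblems of (1): an l1 (lasso) step in W with
    B fixed, then a least-squares step in B with W fixed, projecting the
    columns of B back to ||b_i||_2 <= 1."""
    p, n = X.shape
    rng = np.random.default_rng(0)
    B = rng.standard_normal((p, m))
    B /= np.maximum(np.linalg.norm(B, axis=0), 1.0)
    W = np.zeros((m, n))
    for _ in range(iters):
        L = np.linalg.norm(B, 2) ** 2 + 1e-12   # Lipschitz constant of the gradient
        for _ in range(ista_steps):             # lasso in W (all n columns at once)
            W = soft_threshold(W - (B.T @ (B @ W - X)) / L, lam / L)
        B = X @ np.linalg.pinv(W)               # unconstrained least squares in B ...
        B /= np.maximum(np.linalg.norm(B, axis=0), 1.0)   # ... then shrink long columns
    return B, W
```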
Using LARS [10] with a Cholesky-based implementation, each lasso problem has a computation cost of O(mpκ + mκ²), where κ is the number of nonzero coefficients [11]. For a fixed W, optimizing B is a least squares problem of pm variables and m constraints. In an efficient algorithm [12], the dual formulation has only m variables but still requires inverting m × m matrices (O(m³) complexity). To address this challenge, we examine decomposing a large dictionary learning problem into a set of smaller problems. First (§2), we explore dictionary screening [13, 14], to select a subset of codewords to use in each lasso optimization. We derive two new screening tests that are significantly better than existing tests when the data points and codewords are highly correlated, a typical scenario in sparse representation applications [15]. We also provide simple geometric intuition for guiding the derivation of screening tests. Second (§3), we examine projecting data onto a lower dimensional space so that we can control information flow in our hierarchical framework and solve sparse representations with smaller p. We identify an important property of the data that is implicitly assumed in sparse representation problems (scale indifference) and study how random projection preserves this property. These results are inspired by [16] and related work in compressed sensing. Finally (§4), we develop a framework for learning a hierarchical dictionary (similar in spirit to [17] and DBN [18]). To do so we exploit our results on screening and random projection and impose a zero-tree like structured sparsity constraint on the representation. This constraint is similar to the formulation in [19]. The key difference is that we learn the sparse representation stage-wise in layers and use the exact zero-tree sparsity constraint to utilize the information in previous layers to simplify the computation, whereas [19] uses a convex relaxation to approximate the structured sparsity constraint and learns the sparse representation (of all layers) by solving a single large optimization problem. Our idea of using incremental random projections is inspired by the work in [20, 21]. Finally, unlike [12] (which addresses the same computational challenge), we focus on a high level reorganization of the computations rather than improving basic optimization algorithms. Our framework can be combined with all existing optimization algorithms, e.g. [12], to attain faster results.

2 Reducing the Dictionary By Screening

In this section we assume that all data points and codewords are normalized: ‖xⱼ‖₂ = ‖bᵢ‖₂ = 1, 1 ≤ j ≤ n, 1 ≤ i ≤ m (we discuss the implications of this assumption in §3). When B is fixed, finding the optimal W in (1) requires solving n subproblems. The jth subproblem finds wⱼ for xⱼ. For notational simplicity, in this section we drop the index j and denote x = xⱼ, w = wⱼ = [w₁, w₂, ..., w_m]ᵀ. Each subproblem is then of the form:

min_{w₁,...,w_m} (1/2)‖x − Σᵢ₌₁ᵐ wᵢbᵢ‖₂² + λ Σᵢ₌₁ᵐ |wᵢ|.   (2)

To address the challenge of solving (2) for large m, we first explore simple screening tests that identify and discard codewords bᵢ guaranteed to have optimal solution ŵᵢ = 0. El Ghaoui's SAFE rule [13] is an example of a simple screening test. We introduce some simple geometric intuition for screening and use this to derive new tests that are significantly better than existing tests for the type of problems of interest here. To this end, it will help to consider the dual problem of (2):
max_θ (1/2)‖x‖₂² − (λ²/2)‖θ − x/λ‖₂²   s.t. |θᵀbᵢ| ≤ 1, ∀i = 1, 2, ..., m.   (3)

As is well known (see the supplemental material), the optimal solution ŵ = [ŵ₁, ŵ₂, ..., ŵ_m]ᵀ of the primal problem and the optimal solution θ̂ of the dual problem are related through:

x = Σᵢ₌₁ᵐ ŵᵢbᵢ + λθ̂,   θ̂ᵀbᵢ ∈ { {sign ŵᵢ} if ŵᵢ ≠ 0;  [−1, 1] if ŵᵢ = 0 }.   (4)

The dual formulation gives useful geometric intuition. Since ‖x‖₂ = ‖bᵢ‖₂ = 1, x and all bᵢ lie on the unit sphere S^{p−1} (Fig.1(a)). For y on S^{p−1}, P(y) = {z : zᵀy = 1} is the tangent hyperplane of S^{p−1} at y and H(y) = {z : zᵀy ≤ 1} is the corresponding closed half space containing the origin.

Figure 1: (a) Geometry of the dual problem. (b) Illustration of a sphere test. (c) The solid red, dotted blue and solid magenta circles leading to sphere tests ST1/SAFE, ST2, ST3, respectively. (d) The thresholds in ST2 and ST1/SAFE when λ_max = 0.8 (top) and λ_max = 0.9 (bottom). A higher threshold yields a better test.

The constraints in (3) indicate that a feasible θ must be in H(bᵢ) and H(−bᵢ) for all i. To find the θ̂ that maximizes the objective in (3), we must find a feasible θ closest to x/λ. By (4), if θ̂ is not on P(bᵢ) or P(−bᵢ), then ŵᵢ = 0 and we can safely discard bᵢ from problem (2). Let λ_max = maxᵢ |xᵀbᵢ| and let b* ∈ {±bᵢ}ᵢ₌₁ᵐ be selected so that λ_max = xᵀb*. Note that θ = x/λ_max is a feasible solution for (3). λ_max is also the largest λ for which (2) has a nonzero solution. If λ > λ_max, then x/λ itself is feasible, making it the optimal solution. Since it is not on any hyperplane P(bᵢ) or P(−bᵢ), ŵᵢ = 0, i = 1, ..., m. Hence we assume that λ ≤ λ_max. These observations can be used for screening as follows. If we know that θ̂ is within a region R, then we can discard those bᵢ for which the tangent hyperplanes P(bᵢ) and P(−bᵢ) don't intersect R, since by (4) the corresponding ŵᵢ will be 0. Moreover, if the region R is contained in a closed ball (e.g. the shaded blue area in Fig.1(b)) centered at q with radius r, i.e., {θ : ‖θ − q‖₂ ≤ r}, then one can discard all bᵢ for which |qᵀbᵢ| is smaller than a threshold determined by the common tangent hyperplanes of the spheres ‖θ − q‖₂ = r and S^{p−1}. This "sphere test" is made precise in the following lemma (all lemmata are proved in the supplemental material).

Lemma 1. If the solution θ̂ of (3) satisfies ‖θ̂ − q‖₂ ≤ r, then |qᵀbᵢ| < 1 − r ⇒ ŵᵢ = 0.

El Ghaoui's SAFE rule [13] is a sphere test of the simplest form. To see this, note that x/λ_max is a feasible point of (3), so the optimal θ cannot be further away from x/λ than x/λ_max. Therefore we have the constraint ‖θ̂ − x/λ‖₂ ≤ 1/λ − 1/λ_max (solid red ball in Fig.1(c)). Plugging q = x/λ and r = 1/λ − 1/λ_max into Lemma 1 yields El Ghaoui's SAFE rule:

Sphere Test #1 (ST1/SAFE): If |xᵀbᵢ| < λ − 1 + λ/λ_max, then ŵᵢ = 0.

Note that the SAFE rule is weakest when λ_max is large, i.e., when the codewords are very similar to the data points, a frequent situation in applications [15]. To see that there is room for improvement, consider the constraint θᵀb* ≤ 1. This puts θ̂ in the intersection of the previous closed ball (solid red) and H(b*). This is indicated by the shaded green region in Fig.1(c). Since this intersection is small when λ_max is large, a better test results by selecting R to be the shaded green region. However, to simplify the test, we relax R to a closed ball and use the sphere test of Lemma 1. Two relaxations, the solid magenta ball and the dotted blue ball in Fig.1(c), are detailed in the following lemma.

Lemma 2. If θ satisfies (a) ‖θ − x/λ‖₂ ≤ 1/λ − 1/λ_max and (b) θᵀb* ≤ 1, then θ satisfies

‖θ − (x/λ − (λ_max/λ − 1)b*)‖₂ ≤ √(1/λ²_max − 1) (λ_max/λ − 1),   (5)
and
‖θ − x/λ_max‖₂ ≤ 2√(1/λ²_max − 1) (λ_max/λ − 1).   (6)

By Lemma 2, since θ̂ satisfies (a) and (b), it satisfies (5) and (6). We start with (6) because of its similarity to the closed ball constraint used to derive ST1/SAFE (solid red ball). Plugging q = x/λ_max and r = 2√(1/λ²_max − 1)(λ_max/λ − 1) into Lemma 1 yields our first new test:
Since this intersection is small when ?max is large, a better test results by selecting R to be the shaded green region. However, to simplify the test, we relax R to a closed ball and use the sphere test of Lemma 1. Two relaxations, the solid magenta ball and the dotted blue ball in Fig. 1(c), are detailed in the following lemma. Lemma 2. If ? satisfies (a) k? ? x/?k2 ? 1/? ? 1/?max and (b) ? T b? ? 1, then ? satisfies p k? ? (x/? ? (?max /? ? 1)b? k2 ? 1/?2 ? 1(?max /? ? 1), and (5) p max k? ? x/?max k2 ? 2 1/?2max ? 1(?max /? ? 1). (6) ? satisfies (a) and (b), it satisfies (5) and (6). We start with (6) because of its By Lemma 2, since ? similarity to the closed p ball constraint used to derive ST1/SAFE (solid red ball). Plugging q = x/?max and r = 2 1/?2max ? 1(?max /? ? 1) into Lemma 1 yields our first new test: 3 Sphere Test # 2 (ST2): p If |xT bi | < ?max (1 ? 2 1/?2max ? 1(?max /? ? 1)), then w ?i = 0. Since ST2 and ST1/SAFE both test |xT bi | against thresholds, we can compare the tests by plotting their thresholds. We do so for ?max = 0.8, 0.9 in Fig.1(d). The thresholds must be positive and large to be useful. ST2 is most useful when ?max is large. Indeed, we have the following lemma: ? Lemma 3. When ?max > 3/2, if ST1/SAFE discards bi , then ST2 also discards bi . Finally, to use the closed ball constraint (5), we plug in q = x/? ? (?max /? ? 1)b? and r = p 1/?2max ? 1(?max /? ? 1) into Lemma 1 to obtain a second new test: Sphere Test # 3 (ST3): p If |xT bi ? (?max ? ?)bT? bi | < ?(1 ? 1/?2max ? 1(?max /? ? 1)), then w ?i = 0. ST3 is slightly more complex. It requires finding b? and computing a weighted sum of inner products. But ST3 is always better than ST2 since its sphere lies strictly inside that of ST2: Lemma 4. Given any x, b? and ?, if ST2 discards bi , then ST3 also discards bi . ? To summarize, ST3 completely outperforms ST2, and when ?max is larger than 3/2 ? 0.866, ST2 completely outperforms ST1/SAFE. Empirical comparisons are given in ?5. By making two passes through the dictionary, the above tests can be efficiently implemented on large-scale dictionaries that can?t fit in memory. The first pass holds x, u, bi ? Rp in memory at once and computes u(i) = xT bi . By simple bookkeeping, after pass one we have b? and ?max . The second pass holds u, b? , bi in memory at once, computes bT? bi and executes the test. 3 Random Projections of the Data In ?4 we develop a framework for learning a hierarchical dictionary and this involves the use of random data projections to control information flow to the levels of the hierarchy. The motivation for using random projections will become clear, and is specifically discussed, in ?4. Here we lay some groundwork by studying basic properties of random projections in learning sparse representations. We first revisit the normalization assumption kxj k2 = kbi k2 = 1, 1 ? j ? n, 1 ? i ? m in ?2. The assumption that all codewords are normalized: kbi k2 = 1, is necessary for (1) to be meaningful, otherwise we can increase the scale of bi and inversely scale the ith row of W to lower the loss. The assumption that all data points are normalized: kxj k2 = 1, warrants a more careful examination. To see this, assume that the data {xj }nj=1 are samples from an underlying low dimensional smooth manifold X and that one desires a correspondence between codewords and local regions on X . Then we require the following scale indifference (SI) property to hold: Definition 1. X satisfies the SI property if ?x1 , x2 ? X , with x1 6= x2 , and ?? 
3 Random Projections of the Data

In §4 we develop a framework for learning a hierarchical dictionary, and this involves the use of random data projections to control information flow to the levels of the hierarchy. The motivation for using random projections will become clear, and is specifically discussed, in §4. Here we lay some groundwork by studying basic properties of random projections in learning sparse representations. We first revisit the normalization assumption ‖xⱼ‖₂ = ‖bᵢ‖₂ = 1, 1 ≤ j ≤ n, 1 ≤ i ≤ m, made in §2. The assumption that all codewords are normalized, ‖bᵢ‖₂ = 1, is necessary for (1) to be meaningful; otherwise we could increase the scale of bᵢ and inversely scale the ith row of W to lower the loss. The assumption that all data points are normalized, ‖xⱼ‖₂ = 1, warrants a more careful examination. To see this, assume that the data {xⱼ}ⱼ₌₁ⁿ are samples from an underlying low dimensional smooth manifold X and that one desires a correspondence between codewords and local regions on X. Then we require the following scale indifference (SI) property to hold:

Definition 1. X satisfies the SI property if ∀x₁, x₂ ∈ X with x₁ ≠ x₂, and ∀α ≠ 0, x₁ ≠ αx₂.

Intuitively, SI means that X doesn't contain points differing only in scale, and it implies that points x₁, x₂ from distinct regions on X will use different codewords in their representation. SI is usually implicitly assumed [9, 15], but it will be important for what follows to make the condition explicit. SI is true in many typical applications of sparse representation, for example for image signals when we are interested in the image content regardless of image luminance. When SI holds we can indeed normalize the data points to S^{p−1} = {x : ‖x‖₂ = 1}. Since a random projection of the original data doesn't preserve the normalization ‖xⱼ‖₂ = 1, it's important for the random projection to preserve the SI property so that it is reasonable to renormalize the projected data. We will show that this is indeed the case under certain assumptions. Suppose we use a random projection matrix T ∈ ℝ^{d×p}, with orthonormal rows, to project the data to ℝᵈ (d < p) and use TX as the new data matrix. Such a T can be generated by running the Gram-Schmidt procedure on d p-dimensional random row vectors with i.i.d. Gaussian entries. It's known that for certain sets X, with high probability a random projection preserves pairwise distances:

(1 − ε)√(d/p) ≤ ‖Tx₁ − Tx₂‖₂ / ‖x₁ − x₂‖₂ ≤ (1 + ε)√(d/p).   (7)

For example, when X contains only κ-sparse vectors we only need d ≥ O(κ ln(p/κ)), and when X is a K-dimensional Riemannian submanifold we only need d ≥ O(K ln p) [16]. We will show that when the pairwise distances are preserved as in (7), the SI property will also be preserved:

Theorem 1. Define S(X) = {z : z = αx, x ∈ X, |α| ≤ 1}. If X satisfies SI and ∀(x₁, x₂) ∈ S(X) × S(X) (7) is satisfied, then T(X) = {z : z = Tx, x ∈ X} also satisfies SI.

Proof. If T(X) does not satisfy SI, then by Definition 1, ∃(x₁, x₂) ∈ X × X and α ∉ {0, 1} s.t. Tx₁ = αTx₂. Without loss of generality we can assume that |α| ≤ 1 (otherwise we can exchange the positions of x₁ and x₂). Since x₁ and αx₂ are both in S(X), using (7) gives that ‖x₁ − αx₂‖₂ ≤ ‖Tx₁ − αTx₂‖₂ / ((1 − ε)√(d/p)) = 0. So x₁ = αx₂. This contradicts the SI property of X.

For example, if X contains only κ-sparse vectors, so does S(X). If X is a Riemannian submanifold, so is S(X). Therefore applying random projections to these X will preserve SI with high probability. For the case of κ-sparse vectors, under some strong conditions, we can prove that random projection always preserves SI. (Proofs of the theorems below are in the supplemental material.)

Theorem 2. If X satisfies SI and has a κ-sparse representation using dictionary B, then the projected data T(X) satisfies SI if (2κ − 1)M(TB) < 1, where M(·) is matrix mutual coherence.
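Generating a T of the kind used above, and empirically checking the distance preservation (7) that underlies Theorems 1 and 2, takes only a few lines. The sketch below is illustrative (function names are hypothetical); a QR factorisation of a Gaussian matrix is used in place of explicit Gram-Schmidt, which is numerically equivalent for this purpose.

```python
import numpy as np

def random_orthonormal_projection(d, p, seed=0):
    """T in R^{d x p} with orthonormal rows, obtained by orthonormalising
    d i.i.d. Gaussian p-vectors (QR on a p x d Gaussian matrix)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((p, d)))  # orthonormal columns
    return Q.T                                        # hence orthonormal rows

def distance_distortion(X, T):
    """Ratios ||Tx1 - Tx2|| / ||x1 - x2|| over column pairs of X, rescaled
    by sqrt(p/d); values near 1 indicate that (7) holds with small eps."""
    d, p = T.shape
    TX = T @ X
    ratios = []
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            denom = np.linalg.norm(X[:, i] - X[:, j])
            if denom > 0:
                ratios.append(np.linalg.norm(TX[:, i] - TX[:, j]) / denom)
    return np.array(ratios) * np.sqrt(p / d)
```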
Combining (7) with Theorem 1 or 2 provides an important insight: the projected data TX contains rough information about the original data X, and we can continue to use the formulation (1) on TX to extract such information. Actually, if we do this for a Riemannian submanifold X, then we have:

Theorem 3. Let the data points lie on a K-dimensional compact Riemannian submanifold X ⊂ ℝᵖ with volume V, condition number 1/τ, and geodesic covering regularity R (see [16]). Assume that in the optimal solution of (1) for the projected data (replacing X with TX), data points Tx₁ and Tx₂ have nonzero weights on the same set of κ codewords. Let wⱼ be the new representation of xⱼ and δⱼ = ‖Txⱼ − Bwⱼ‖₂ be the length of the residual (j = 1, 2). With probability 1 − ρ:

‖x₁ − x₂‖₂² ≤ (p/d)(1 + ε₁)(1 + ε₂)(‖w₁ − w₂‖₂² + 2δ₁² + 2δ₂²),
‖x₁ − x₂‖₂² ≥ (p/d)(1 − ε₁)(1 − ε₂)‖w₁ − w₂‖₂²,   (8)

with ε₁ = O((K ln(NVRτ⁻¹d) ln(1/ρ) / d)^{0.5−γ}) (for any small γ > 0) and ε₂ = (κ − 1)M(B). Therefore the distances between the sparse representation weights reflect the original data point distances. We believe a similar result should also hold when X contains only κ-sparse vectors.

4 Learning a Hierarchical Dictionary

Our hierarchical framework decomposes a large dictionary learning problem into a sequence of smaller, hierarchically structured dictionary learning problems. The result is a tree of dictionaries. High levels of the tree give coarse representations, deeper levels give more detailed representations, and the codewords at the leaves form the final dictionary. The tree is grown top-down in l levels by refining the dictionary at the previous level to give the dictionary at the next level. Random data projections are used to control the information flow to the different layers. We also enforce a zero-tree constraint on the sparse representation weights so that the zero weights in the previous level force the corresponding weights in the next level to be zero. At each stage we combine this zero-tree constraint with our new screening tests to reduce the size of the lasso problems that must be solved.

In detail, we use l random projections T_k ∈ ℝ^{d_k×p} (1 ≤ k ≤ l) to extract information incrementally from the data in l stages. Each T_k has orthonormal rows and the rows of distinct T_k are orthogonal. At level k we learn a dictionary B_k ∈ ℝ^{d_k×m_k} and weights W_k ∈ ℝ^{m_k×n} by solving a small sparse representation problem similar to (1):

min_{B_k,W_k} (1/2)‖T_kX − B_kW_k‖²_F + λ_k‖W_k‖₁   s.t. ‖b_i^(k)‖₂² ≤ 1, ∀i = 1, 2, ..., m_k.   (9)

Here b_i^(k) is the ith column of matrix B_k and m_k is assumed to be a multiple of m_{k−1}, so the number of codewords m_k increases with k. We solve (9) for levels k = 1, 2, ..., l sequentially. An additional constraint is required to enforce a tree structure. Denote the ith element of the jth column of W_k by w_j^(k)(i) and organize the weights at level k > 1 in m_{k−1} groups, one per level k−1 codeword. The ith group has m_k/m_{k−1} weights, {w_j^(k)(r·m_{k−1} + i), 0 ≤ r < m_k/m_{k−1}}, and has weight w_j^(k−1)(i) as its parent weight. To enforce a tree structure we require that a child weight is zero if its parent weight is zero. So for every level k ≥ 2, data point j (1 ≤ j ≤ n), group i (1 ≤ i ≤ m_{k−1}), and weight w_j^(k)(r·m_{k−1} + i) (0 ≤ r < m_k/m_{k−1}), we enforce:

w_j^(k−1)(i) = 0 ⇒ w_j^(k)(r·m_{k−1} + i) = 0.   (10)

This imposed tree structure is analogous to a "zero-tree" in EZW wavelet compression [22]. In addition, (10) means that the weights of the previous layer select a small subset of codewords to enter the lasso optimization. When solving for w_j^(k), (10) reduces the number of codewords from m_k to (m_k/m_{k−1})‖w_j^(k−1)‖₀, a considerable reduction since w_j^(k−1) is sparse. Thus the screening rules in §2 and the imposed screening rule (10) work together to reduce the effective dictionary size.
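The index bookkeeping behind the zero-tree rule (10) reduces to one line of arithmetic. The following hedged sketch (hypothetical function name) computes, from a parent weight vector, the set of layer-k codeword indices allowed to be nonzero.

```python
import numpy as np

def active_child_codewords(w_parent, m_k):
    """Indices permitted by (10): child weight w^(k)(r*m_{k-1} + i) may be
    nonzero only when its parent weight w^(k-1)(i) is nonzero."""
    m_prev = w_parent.shape[0]
    assert m_k % m_prev == 0, "m_k must be a multiple of m_{k-1}"
    parents = np.flatnonzero(w_parent)      # nonzero parent groups i
    r = np.arange(m_k // m_prev)
    return (r[:, None] * m_prev + parents[None, :]).ravel()
```

The layer-k lasso is then solved over the columns B_k[:, idx] only, which is exactly the reduction from m_k to (m_k/m_{k−1})‖w_j^(k−1)‖₀ noted above.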
The tree structure in the weights introduces a similar hierarchical tree structure in the dictionaries {B_k}ₖ₌₁ˡ: the codewords {b_{r·m_{k−1}+i}^(k), 0 ≤ r < m_k/m_{k−1}} are the children of codeword b_i^(k−1). This tree structure provides a heuristic way of updating B_k. When k > 1, the m_k codewords in layer k are naturally divided into m_{k−1} groups, so we can solve for B_k by optimizing each group sequentially. This is similar to block coordinate descent. For i = 1, 2, ..., m_{k−1}, let B′ = [b_{r·m_{k−1}+i}^(k)], r = 0, ..., m_k/m_{k−1} − 1, denote the codewords in group i. Let W′ be the submatrix of W containing only the (r·m_{k−1} + i)th rows of W, r = 0, 1, ..., m_k/m_{k−1} − 1. W′ is the weight matrix for B′. Denote the remaining codewords and weights by B″ and W″. For all m_{k−1} groups in random order, we fix B″ and update B′ by solving (1) for the data matrix T_kX − B″W″. This reduces the complexity from O(m_k^q) to O(m_k^q / m_{k−1}^{q−1}), where O(m^q) is the complexity of updating a dictionary of size m. Since q ≈ 3, this offers big computational savings but might yield a suboptimal solution of (9). After finalizing W_k and B_k, we can solve an unconstrained QP to find C_k = arg min_C ‖X − CW_k‖²_F. C_k is useful for visualization purposes; it represents the points on the original data manifold corresponding to B_k. In principle, our framework can use any orthogonal projection matrix T_k. We choose random projections because they're simple and, more importantly, because they provide a mechanism to control the amount of information extracted at each layer. If all T_k are randomly generated independently of X, then on average the amount of information in T_kX is proportional to d_k. This allows us to control the flow of information to each layer so that we avoid using all the information in one layer.

5 Experiments

We tested our framework on: (a) the COIL rotational image data set [23], (b) the MNIST digit classification data set [24], and (c) the extended Yale B face recognition data set [25, 26]. The basic sparse representation problem (1) is solved using the toolbox provided in [12] to iteratively optimize B and W until an iteration results in a loss function reduction of less than 0.01%.

COIL Rotational Image Data Set: This is intended as a small scale illustration of our framework. We use the 72 128x128 color images of object No. 80 rotating around a circle in 15 degree increments (18 images shown in Fig.2(a)). We ran the traditional sparse representation algorithm to compare the three screening tests in §2. The dictionary size is m = 16 and we vary λ. As shown in Fig.2(c), ST3 discards a larger fraction of codewords than ST2, and ST2 discards a larger fraction than ST1/SAFE. We ran the same algorithms on 200 random data projections and the results are almost identical. The average λ_max for these two situations is 0.98. Next we test our hierarchical framework using two layers. We set (d₂, m₂) = (200, 16) so that the second layer solves a problem of the same scale as in the previous paragraph. We demonstrate how the result of the first layer, with (d₁, m₁, λ₁) = (100, 4, 0.5), helps the second layer discard more codewords when the tree constraint (10) is imposed. Fig.2(b) illustrates this constraint: the 16 second layer codewords are organized in 4 groups of 4 (only 2 groups shown). The weight on any codeword in a group has to be zero if the parent codeword in the first layer has weight zero. This imposed constraint discards many more codewords in the screening stage than any of the three tests in §2 (Fig.2(d)). Finally, the illustrated codewords and weights in Fig.2(b) are the actual values in C₂ and W₂ when λ₂ = 0.4 (the marked point in Fig.2(d)).
ST3, original data ST3, projected data ST2, original data ST2, projected data ST1/SAFE, original data ST1/SAFE, projected data 100 80 80 021& 0/1& 60 Use our new bound on the origianl data Use our new bound on the projected data Use El Ghaoui et al. 2010 on the original data Use El Ghaoui et al. 2010 on the projected data 60 40 40 20 20 0 0 0.2 0.4 0.6 0.8 1 ? 0 0 0.2 0.4 0.6 0.8 1 Learning the second layer sparse representation h codewords Average percentage of discarded in the prescreening. Average % of codewords discarded 100 80 80 (10) + ST3 60 60 Use (13) and our new bound ST3constraint only Use new bound (10)our + ST2 ST2constraint only Use (13) and El Ghaoui et al. 2010 (10)El+ Ghaoui ST1/SAFE Use et al. 2010 40 40 ST1/SAFE only 20 20 !"#$%&'()*#&& 0 0 0 0 +*,-./&'()*#&& 0.2 0.2 0.40.4 0.6 ? h 0.6 0.8 0.8 1 Figure 2: (a): Example images of the data set. (b): Illustration of a two layer hierarchical sparse representa- 96 95 Traditional sparse representation: m=64, with 6 different ? settings m=128, with 6 ? (same as above) m=192, with 6 ? m=256, with 6 ? m=512, with 6 ? Our hierarchical framework: m1=32, m2=512, with 6 ? 94 93 m1=64, m2=2048, with 6 ? 92 m1=16, m2=256, m3=4096, with 6 ? Baseline: the same linear classifier using 250 principal components using original pixel values 91 2 3 5 10 20 Average encoding time for a testing image (ms) 30 100 90 80 70 60 Traditional sparse representation Our hierarchical framework Our framework with PCA projections Linear classifier Wright et al., 2008, SRC 50 32(0.1%) 64(0.2%) 128(0.4%) 256(0.8%) # of random projections (percentage of image size) to use Average encoding time (ms) Classification accuracy (%) on testing set 97 Recognition rate (%) on testing set tion. (c): Comparison of the three screening tests for sparse representation. (d): Screening performance in the second layer of our hierarchical framework using combinations of screening criteria. The imposed constraint (10) helps to discard significantly more codewords when ? is small. 80 60 Traditional sparse representation Our hierarchical framework Our framework with PCA projections Linear classifier 40 20 0 32(0.1%) 64(0.2%) 128(0.4%) 256(0.8%) # of random projections (percentage of image size) to use Figure 3: Left: MNIST: The tradeoff between classification accuracy and average encoding time for various sparse representation methods. Our hierarchical framework yields better performance in less time. The average encoding time doesn?t apply to baseline methods. The performance of traditional sparse representation is consistent with [9]. Right: Face Recognition: The recognition rate (top) and average encoding time (bottom) for various methods. Traditional sparse representation has the best accuracy and is very close to a similar method SRC in [8] (SRC?s recognition rate is cited from [8] but data on encoding time is not available). Our hierarchical framework achieves a good tradeoff between the accuracy and speed. Using PCA projections in our framework yields worse performance since these projections do not spread information across the layers. C2 and W2 when ?2 = 0.4 (the marked point in Fig.2(d)). The sparse representation gives a multiresolution representation of the rotational pattern: the first layer encodes rough orientation and the second layer refines it. The next two experiments evaluate the performance of sparse representation by (1) the accuracy of a classification task using the columns in W (or in [W1T , W2T , . . . 
The next two experiments evaluate the performance of sparse representation by (1) the accuracy of a classification task using the columns in W (or in [W₁ᵀ, W₂ᵀ, ..., W_lᵀ]ᵀ for our framework) as features, and (2) the average encoding time required to obtain these weights for a testing data point. This time is highly correlated with the total time needed for iterative dictionary learning. We used a linear SVM (liblinear [27]) with parameters tuned by 10-fold cross-validation on the training set.

MNIST Digit Classification: This data set contains 70,000 28x28 hand written digit images (60,000 training, 10,000 testing). We ran the traditional sparse representation algorithm for dictionary size m ∈ {64, 128, 192, 256} and λ ∈ Λ = {0.06, 0.08, 0.11, 0.16, 0.23, 0.32}. In Fig.3, left panel, each curve contains settings with the same m but with different λ. Points to the right correspond to smaller λ values (less sparse solutions and more difficult computation). There is a tradeoff between speed (x-axis) and classification performance (y-axis). To see where our framework stands, we tested the following settings: (a) 2 layers with (d₁, d₂) = (200, 500), (m₁, m₂) = (32, 512), λ₁ = 0.23 and λ₂ ∈ Λ; (b) (m₁, m₂) = (64, 2048) and everything else in (a) unchanged; (c) 3 layers with (d₁, d₂, d₃) = (100, 200, 400), (m₁, m₂, m₃) = (16, 256, 4096), (λ₁, λ₂) = (0.16, 0.11) and λ₃ ∈ Λ. The plot shows that, compared to the traditional sparse representation, our hierarchical framework achieves roughly a 1% accuracy improvement given the same encoding time and roughly a 2X speedup given the same accuracy. Using 3 layers also offers competitive performance but doesn't outperform the 2 layer setting.

Face Recognition: For each of 38 subjects we used 64 cropped frontal face views under differing lighting conditions, randomly divided into 32 training and 32 testing images. This set-up mirrors that in [8]. In this experiment we start with the randomly projected data (p ∈ {32, 64, 128, 256} random projections of the original 192x128 data) and use this data as follows: (a) learn a traditional non-hierarchical sparse representation; (b) use our framework, i.e., sample the data in two stages using orthogonal random projections and learn a 2 layer hierarchical sparse representation; (c) use PCA projections to replace the random projections in (b); (d) directly apply a linear classifier without first learning a sparse representation. For (a) we used m = 1024, λ = 0.030 for p = 32, 64 and λ = 0.029 for p = 128, 256 (tuned to yield the same average sparsity for different p). For (b) we used (m₁, m₂) = (32, 1024), (d₁, d₂) = (3p/8, 5p/8), λ₁ = 0.02 and λ₂ the same as λ in (a). For (c) we used the same setting as in (b) except that the random projection matrices T₁, T₂ in our framework are now set to the PCA projection matrices (calculate the SVD X = USVᵀ with singular values in descending order, then use the first d₁ columns of U as the rows in T₁ and the next d₂ columns of U as the rows in T₂). The results in Fig.3, right panel, suggest that our framework strikes a good balance between speed and accuracy. The PCA variant of our framework has worse performance because the first 3p/8 projections contain too much information, leaving the second layer too little information (which also drags down the speed for lack of sparsity and structure). This reinforces our argument at the end of §4 about the advantage of random projections. The fact that a linear SVM performs well given enough random projections suggests this data set does not have a strong nonlinear structure.
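The evaluation protocol above (liblinear with cross-validated parameters on the stacked codes) amounts to a few lines with scikit-learn, whose LinearSVC wraps liblinear. A hedged sketch, with a hypothetical function name and an assumed grid of C values:

```python
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

def evaluate_codes(W_train, y_train, W_test, y_test):
    """Linear SVM on the stacked sparse codes, with C tuned by 10-fold
    cross-validation on the training set, mirroring the protocol above."""
    search = GridSearchCV(LinearSVC(), {"C": [0.01, 0.1, 1.0, 10.0]}, cv=10)
    search.fit(W_train, y_train)
    return search.score(W_test, y_test)   # test-set classification accuracy
```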
Finally, at any iteration, the average λ_max for all data points ranges from 0.76 to 0.91 in all settings in the MNIST experiment and ranges from 0.82 to nearly 1 in the face recognition experiment (except for the second layer in the PCA variant, in which the average λ_max can be as low as 0.54). As expected, λ_max is large, a situation that favors our new screening tests (ST2, ST3).

6 Conclusion

Our theoretical results and algorithmic framework make effective headway on the computational challenge of learning sparse representations on large dictionaries for high dimensional data. The new screening tests greatly reduce the size of the lasso problems to be solved, and the tests are proven, both theoretically and empirically, to be much more effective than the existing ST1/SAFE test. We have shown that, under certain conditions, random projection preserves the scale indifference (SI) property with high probability, thus providing an opportunity to learn informative sparse representations with data of fewer dimensions. Finally, the new hierarchical dictionary learning framework employs random data projections to control the flow of information to the layers, screening to eliminate unnecessary codewords, and a tree constraint to select a small number of candidate codewords based on the weights learnt in the previous stage. By doing so, it can deal with large m and p simultaneously. The new framework exhibited impressive performance on the tested data sets, achieving equivalent classification accuracy with less computation time.

Acknowledgements: This research was partially supported by NSF grant CCF-1116208. Zhen James Xiang thanks Princeton University for support through the Charlotte Elizabeth Procter honorific fellowship.

References

[1] M. Elad. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer, 2010.
[2] G. Cao and C.A. Bouman. Covariance estimation for high dimensional data vectors using the sparse matrix transform. In Advances in Neural Information Processing Systems, 2008.
[3] A.B. Lee, B. Nadler, and L. Wasserman. Treelets: an adaptive multi-scale basis for sparse unordered data. The Annals of Applied Statistics, 2(2):435-471, 2008.
[4] M. Gavish, B. Nadler, and R.R. Coifman. Multiscale wavelets on trees, graphs and high dimensional data: theory and applications to semi supervised learning. In International Conference on Machine Learning, 2010.
[5] M. Belkin and P. Niyogi. Using manifold structure for partially labeled classification. In Advances in Neural Information Processing Systems, pages 953-960, 2003.
[6] S.T. Roweis and L.K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323, 2000.
[7] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37(23):3311-3325, 1997.
[8] J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210-227, 2008.
[9] K. Yu, T. Zhang, and Y. Gong. Nonlinear learning using local coordinate coding. In Advances in Neural Information Processing Systems, volume 3, 2009.
[10] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, pages 407-451, 2004.
[11] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. The Journal of Machine Learning Research, 11:19-60, 2010.
[12] H. Lee, A.
Battle, R. Raina, and A.Y. Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems, volume 19, page 801, 2007.
[13] L.E. Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. Arxiv preprint arXiv:1009.3515, 2010.
[14] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R.J. Tibshirani. Strong rules for discarding predictors in lasso-type problems. Arxiv preprint arXiv:1011.2234, 2010.
[15] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, and S. Yan. Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE, 98(6):1031-1044, 2010.
[16] R.G. Baraniuk and M.B. Wakin. Random projections of smooth manifolds. Foundations of Computational Mathematics, 9(1):51-77, 2007.
[17] Y. Lin, T. Zhang, S. Zhu, and K. Yu. Deep coding network. In Advances in Neural Information Processing Systems, 2010.
[18] G.E. Hinton, S. Osindero, and Y.W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[19] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In International Conference on Machine Learning, 2010.
[20] M.B. Wakin, D.L. Donoho, H. Choi, and R.G. Baraniuk. High-resolution navigation on non-differentiable image manifolds. In IEEE International Conference on Acoustics, Speech and Signal Processing, volume 5, pages 1073-1076, 2005.
[21] M.F. Duarte, M.A. Davenport, M.B. Wakin, J.N. Laska, D. Takhar, K.F. Kelly, and R.G. Baraniuk. Multiscale random projections for compressive classification. In IEEE International Conference on Image Processing, volume 6, 2007.
[22] J.M. Shapiro. Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing, 41(12):3445-3462, 2002.
[23] S.A. Nene, S.K. Nayar, and H. Murase. Columbia object image library (COIL-100). Technical Report CUCS-006-96, Dept. of Computer Science, Columbia University, 1996.
[24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, Nov. 1998.
[25] A.S. Georghiades, P.N. Belhumeur, and D.J. Kriegman. From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):643-660, 2002.
[26] K.C. Lee, J. Ho, and D.J. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 684-698, 2005.
[27] R.E. Fan, K.W. Chang, C.J. Hsieh, X.R. Wang, and C.J. Lin. LIBLINEAR: a library for large linear classification. The Journal of Machine Learning Research, 9:1871-1874, 2008.
On the Analysis of Multi-Channel Neural Spike Data

Bo Chen, David E. Carlson and Lawrence Carin. Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708. {bc69, dec18, lcarin}@duke.edu

Abstract

Nonparametric Bayesian methods are developed for analysis of multi-channel spike-train data, with the feature learning and spike sorting performed jointly. The feature learning and sorting are performed simultaneously across all channels. Dictionary learning is implemented via the beta-Bernoulli process, with spike sorting performed via the dynamic hierarchical Dirichlet process (dHDP), with these two models coupled. The dHDP is augmented to eliminate refractory-period violations, it allows the "appearance" and "disappearance" of neurons over time, and it models smooth variation in the spike statistics.

1 Introduction

The analysis of action potentials ("spikes") from neural-recording devices is a problem of longstanding interest (see [21, 1, 16, 22, 8, 4, 6] and the references therein). In such research one is typically interested in clustering (sorting) the spikes, with the goal of linking a given cluster to a particular neuron. Such technology is of interest for brain-machine interfaces and for gaining insight into the properties of neural circuits [14]. In such research one typically (i) filters the raw sensor readings, (ii) performs thresholding to "detect" the spikes, (iii) maps each detected spike to a feature vector, and (iv) then clusters the feature vectors [12]. Principal component analysis (PCA) is a popular choice [12] for feature mapping. After performing such sorting, one typically must (v) search for refractory-time violations [5], which occur when two or more spikes that are sufficiently proximate are improperly associated with the same cluster/neuron (which is impossible due to the refractory time delay required for the same neuron to re-emit a spike). Recent research has combined (iii) and (iv) within a single model [6], and methods have been developed recently to address (v) while performing (iv) [5].

Many of the early methods for spike sorting were based on classical clustering techniques [12] (e.g., K-means and GMMs, with a fixed number of mixtures), but recently Bayesian methods have been developed to account for more modeling sophistication. For example, in [5] the authors employed a modification to the Chinese restaurant formulation of the Dirichlet process (DP) [3] to automatically infer the number of clusters (neurons) present, allow statistical drift in the feature statistics, permit the "appearance"/"disappearance" of neurons with time, and automatically account for refractory-time requirements within the clustering (not as a post-clustering step). However, [5] assumed that the spike features were provided via PCA in the first two or three principal components (PCs). In [6] feature learning and spike sorting were performed jointly via a mixture of factor analyzers (MFA) formulation. However, in [6] model selection was performed (for the number of features and number of neurons) and a maximum likelihood (ML) "point" estimate was constituted for the model parameters; since a fixed number of clusters are inferred in [6], the model does not directly allow for the "appearance"/"disappearance" of neurons, or for any temporal dependence to the spike statistics.

There has been an increasing interest in developing neural devices with C > 1 recording channels, each of which produces a separate electrical recording of neural activity. Recent research shows increased system performance with large C [18].
Recent research shows increased system performance with large C [18]. Almost all of the above research on spike sorting 1 300 Ground Truth ?300 PC?2 ?100 ?300 ?300 Unkown Neuron Known Neuron (a) 100 100 ?100 ?500 ?500 ?700 ?700 ?500 ?300 ?100 100 300 500 PC?1 HDP?DL GMM 100 PC?2 PC?2 100 ?100 300 300 K?means PC?2 300 ?400 ?100 PC?1 200 500 ?500 ?700 (b) ?100 ?300 ?400 ?100 PC?1 (c) 200 500 ?500 ?700 ?500 ?300 ?100 100 300 500 PC?1 (d) Figure 1: Comparison of spike sorting on real data. (a) Ground truth; (b) K-means clustering on the first 2 principal components; (c) GMM clustering with the first 2 principal components; (d) proposed method. We label using arrows examples K-means and the GMM miss, and that the proposed method properly sort. has been performed on a single channel, or when multiple channels are present each is typically analyzed in isolation. In [5] C = 4 channels were considered, but it was assumed that a spike occurred at the same time (or nearly same time) across all channels, and the features from the four channels were concatenated, effectively reducing this again to a single-channel analysis. When C 1, the assumption that a given neuron is observed simultaneously on all channels is typically inappropriate, and in fact the diversity of neuron sensing across the device is desired, to enhance functionality [18]. This paper addresses the multi-channel neural-recording problem, under conditions for which concatenation may be inappropriate; the proposed model generalizes the DP formulation of [5], with a hierarchical DP (HDP) formulation [20]. In this formulation statistical strength is shared across the channels, without assuming that a given neuron is simultaneously viewed across all channels. Further, the model generalizes the HDP, via a dynamic HDP (dHDP) [17] to allow the ?appearance?/?disappearance? of neurons, while also allowing smooth changes in the statistics of the neurons. Further, we explicitly account for refractory times, as in [5]. We also perform joint feature learning and clustering, using a mixture of factor analyzers construction as in [6], but we do so in a fully Bayesian, multi-channel setting (additionally, [6] did not account for time-varying statistics). The learned factor loadings are found to be similar to wavelets, but they are matched to the properties of neuron spikes; this is in contrast to previous feature extraction on spikes [11] based on orthogonal wavelets, that are not necessarily matched to neuron properties. To give a preview of the results, providing a sense of the importance of feature learning (relative to mapping data into PCA features learned offline), in Figure 1 we show a comparison of clustering results on the first channel of d533101 data from hc-1 [7]. For all cases in Figure 1 the data are depicted in the first two PCs for visualization, but the proposed method in (d) learns the number of features and their composition, while simultaneously performing clustering. The results in (b) and (c) correspond respectively to widely employed K-means and GMM analysis, based on using two PCs (in these cases the analysis are employed in PCA space, as have been many more-advanced approaches [5]). From Figures 1 (b) and (c), we observe that both K-means and GMM work well, but due to the constrained feature space they incorrectly classify some spikes (marked by arrows). 
However, the proposed model, shown in Figure 1(d), which incorporates dictionary learning with spike sorting, infers an appropriate feature space (not shown) and more effectively clusters the neurons. The details of this model, including a multi-channel extension, are discussed below.

2 Model Construction

2.1 Dictionary learning

We initially assume that spike detection has been performed on all channels. Spike $n \in \{1,\dots,N_c\}$ on channel $c \in \{1,\dots,C\}$ is a vector $x_n^{(c)} \in \mathbb{R}^D$, defined by $D$ time samples for each spike, centered at the peak of the detected signal; there are $N_c$ spikes on channel $c$.

Data from spike $n$ on channel $c$, $x_n^{(c)}$, is represented in terms of a dictionary $D \in \mathbb{R}^{D \times K}$, where $K$ is an upper bound on the number of needed dictionary elements (columns of $D$), and the model infers the subset of dictionary elements needed to represent the data. Each $x_n^{(c)}$ is represented as

$$ x_n^{(c)} = D \Lambda^{(c)} s_n^{(c)} + \epsilon_n^{(c)} \quad (1) $$

where $\Lambda^{(c)} = \mathrm{diag}(\lambda_1^{(c)} b_1, \lambda_2^{(c)} b_2, \dots, \lambda_K^{(c)} b_K)$ is a diagonal matrix, with $b = (b_1,\dots,b_K)^\top \in \{0,1\}^K$. Defining $d_k$ as the $k$th column of $D$, and letting $I_D$ represent the $D \times D$ identity matrix, the priors on the model parameters are

$$ d_k \sim \mathcal{N}(0, \tfrac{1}{D} I_D), \qquad \lambda_k^{(c)} \sim TN^+(0, \gamma_c^{-1}), \qquad \epsilon_n^{(c)} \sim \mathcal{N}(0, \Psi_c^{-1}) \quad (2) $$

where $\Psi_c = \mathrm{diag}(\psi_1^{(c)},\dots,\psi_D^{(c)})$ and $TN^+(\cdot)$ denotes the truncated (positive) normal distribution. Gamma priors (detailed when presenting results) are placed on $\gamma_c$ and on each of the elements of $(\psi_1^{(c)},\dots,\psi_D^{(c)})$. For the binary vector $b$ we impose the prior $b_k \sim \mathrm{Bernoulli}(\pi_k)$, with $\pi_k \sim \mathrm{Beta}(a/K,\, b(K-1)/K)$, implying that the number of non-zero components of $b$ is drawn $\mathrm{Binomial}(K,\, a/(a+b(K-1)))$; this corresponds to $\mathrm{Poisson}(a/b)$ in the limit $K \to \infty$. Parameters $a$ and $b$ are set to favor a sparse $b$.

This model imposes that each $x_n^{(c)}$ is drawn from a linear subspace, defined by the columns of $D$ with corresponding non-zero components in $b$; the same linear subspace is shared across all channels $c \in \{1,\dots,C\}$. However, the strength with which a column of $D$ contributes toward $x_n^{(c)}$ depends on the channel $c$, as defined by $\Lambda^{(c)}$. Concerning $\Lambda^{(c)}$, rather than explicitly imposing a sparse diagonal via $b$, we may also draw $\lambda_k^{(c)} \sim TN^+(0, \gamma_{ck}^{-1})$, with shrinkage priors employed on the $\gamma_{ck}$ (i.e., with the $\gamma_{ck}$ drawn from a gamma prior that favors large $\gamma_{ck}$, which encourages many of the diagonal elements of $\Lambda^{(c)}$ to be small, but typically not exactly zero). In tests, the model performed similarly when shrinkage priors were used on $\Lambda^{(c)}$ relative to the explicit imposition of sparseness via $b$; all results below are based on the latter construction.

2.2 Multi-channel dynamic hierarchical Dirichlet process

We sort the spikes on the channels by clustering the $\{s_n^{(c)}\}$, and in this sense feature design (learning $\{D\Lambda^{(c)}\}$) and sorting are performed simultaneously. We first discuss how this may be performed via a hierarchical Dirichlet process (HDP) construction [20], and then extend this via a dynamic HDP (dHDP) [17] to the multi-channel setting. In an HDP construction, the $\{s_n^{(c)}\}$ are modeled as being drawn

$$ s_n^{(c)} \sim f(\theta_n^{(c)}), \quad \theta_n^{(c)} \sim G^{(c)}, \quad G^{(c)} \sim \mathrm{DP}(\alpha_c G), \quad G \sim \mathrm{DP}(\alpha_0 G_0) \quad (3) $$

where a draw from, for example, $\mathrm{DP}(\alpha_0 G_0)$ may be constructed [19] as $G = \sum_{i=1}^{\infty} \pi_i \delta_{\theta_i^*}$, where $\pi_i = V_i \prod_{h<i}(1-V_h)$, $V_i \sim \mathrm{Beta}(1, \alpha_0)$, $\theta_i^* \sim G_0$, and $\delta_{\theta_i^*}$ is a unit point measure situated at $\theta_i^*$ (a minimal sketch of this stick-breaking draw is given at the end of this subsection). Each of the $G^{(c)}$ is therefore of the form $G^{(c)} = \sum_{i=1}^{\infty} \pi_i^{(c)} \delta_{\theta_i^*}$, with $\sum_{i=1}^{\infty} \pi_i^{(c)} = 1$ and with the $\{\theta_i^*\}$ shared across all $G^{(c)}$, but with channel-dependent ($c$-dependent) probability of using elements of $\{\theta_i^*\}$. Gamma hyperpriors are employed for $\{\alpha_c\}$ and $\alpha_0$. In the context of the model developed in Section 2.1, the density function $f(\cdot)$ corresponds to a Gaussian, and the parameters $\theta_i^* = (\mu_i^*, \Omega_i^*)$ correspond to means and precision matrices, with $G_0$ a normal-Wishart distribution. The proposed model may be viewed as a mixture of factor analyzers (MFA) [6] applied to each channel, with the addition of sharing of statistical strength across the $C$ channels via the HDP. Sharing is manifested in two forms: (i) via the shared linear subspace defined by the columns of $D$, and (ii) via hierarchical clustering, via the HDP, of the relative weightings $\{s_n^{(c)}\}$. In tests, the use of channel-dependent $\Lambda^{(c)}$ was found critical to modeling success, compared to employing a single $\Lambda$ shared across all channels.

The above HDP construction assumes that $G^{(c)} = \sum_{i=1}^{\infty} \pi_i^{(c)} \delta_{\theta_i^*}$ is time-independent, implying that the probability $\pi_i^{(c)}$ that $x_n^{(c)}$ is drawn from $f(\theta_i^*)$ is time-invariant. There are two ways this assumption may be violated. First, the neuron refractory time implies a minimum delay between consecutive firings of the same neuron; this effect is addressed in a relatively straightforward manner discussed in Section 2.3. The second issue corresponds to the "appearance" or "disappearance" of neurons [5]; the former would be characterized by an increase in the value of a component of $\pi_i^{(c)}$, while the latter would be characterized by one of the components of $\pi_i^{(c)}$ going to zero (or near zero). It is desirable to augment the model to address these objectives. We achieve this by application of the dHDP construction developed in [17].

As in [5], we divide the time axis into contiguous, non-overlapping temporal blocks, where block $j$ corresponds to spikes observed between times $\tau_{j-1}$ and $\tau_j$; we consider $J$ such blocks, indexed $j = 1,\dots,J$. The spikes on channel $c$ within block $j$ are denoted $\{x_{jn}^{(c)}\}_{n=1,\dots,N_{cj}}$, where $N_{cj}$ represents the number of spikes within block $j$ on channel $c$. In the dHDP construction we have

$$ s_{jn}^{(c)} \sim f(\theta_{jn}^{(c)}), \qquad \theta_{jn}^{(c)} \sim w_j^{(c)} G_j^{(c)} + (1 - w_j^{(c)})\, G_{j-1}^{(c)} \quad (4) $$

$$ G_j^{(c)} \sim \mathrm{DP}(\alpha_{jc} G), \qquad G \sim \mathrm{DP}(\alpha_0 G_0), \qquad w_j^{(c)} \sim \mathrm{Beta}(a_w, b_w) \quad (5) $$

with $w_1^{(c)} = 1$ for all $c$. The weight $w_j^{(c)}$ controls the probability that $\theta_{jn}^{(c)}$ is drawn from $G_j^{(c)}$, while with probability $1 - w_j^{(c)}$ the parameter $\theta_{jn}^{(c)}$ is drawn from $G_{j-1}^{(c)}$. The cumulative mixture model $w_j^{(c)} G_j^{(c)} + (1 - w_j^{(c)}) G_{j-1}^{(c)}$ supports arbitrary levels of variation from block to block in the spike-train analysis: if $w_j^{(c)}$ is small, the probability of observing a particular type of neuron does not change significantly from block $j-1$ to $j$, while if $w_j^{(c)} \approx 1$ the mixture probabilities can change quickly (e.g., due to the "appearance"/"disappearance" of a neuron); for $w_j^{(c)}$ between these extremes, the probability of observing a particular neuron changes slowly/smoothly over consecutive blocks. The model therefore allows a significant degree of flexibility and adaptivity to changes in the neuron statistics.
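A minimal sketch of the stick-breaking construction used above: $G = \sum_i \pi_i \delta_{\theta_i^*}$ with $\pi_i = V_i \prod_{h<i}(1-V_h)$, $V_i \sim \mathrm{Beta}(1,\alpha_0)$ and $\theta_i^* \sim G_0$. The truncation level and the choice of a standard-normal $G_0$ are illustrative assumptions.

```python
import numpy as np

def stick_breaking(alpha0, truncation, rng):
    V = rng.beta(1.0, alpha0, size=truncation)
    # pi_i = V_i * prod_{h<i} (1 - V_h)
    pi = V * np.concatenate(([1.0], np.cumprod(1.0 - V)[:-1]))
    atoms = rng.standard_normal(truncation)   # theta_i* ~ G0 (assumed N(0,1) here)
    return pi, atoms

rng = np.random.default_rng(0)
pi, atoms = stick_breaking(alpha0=1.0, truncation=100, rng=rng)
print(pi[:5], pi.sum())   # weights decay geometrically; the sum approaches 1 as truncation grows
```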
2.3 Accounting for refractory time and drift

To demonstrate how one may explicitly account for refractory-time conditions within the model, assume the time difference between spikes $x_{j\tau}^{(c)}$ and $x_{j\tau'}^{(c)}$ is less than the refractory time, while all other spikes have temporal separations greater than the refractory time; we consider two spikes of this type for notational convenience, but the basic formulation below may be readily extended to more than two such spikes. We wish to impose that $x_{j\tau}^{(c)}$ and $x_{j\tau'}^{(c)}$ not be associated with the same cluster/neuron, but otherwise the model is unchanged. Hence, for $n \neq \tau'$, $\theta_{jn}^{(c)} \sim \tilde G_j^{(c)} = w_j^{(c)} G_j^{(c)} + (1 - w_j^{(c)}) G_{j-1}^{(c)}$ as in (4). Assuming $\tilde G_j^{(c)} = \sum_{i=1}^{\infty} \tilde\pi_{ji}^{(c)} \delta_{\theta_i^*}$, we have the new conditional generative construction

$$ \theta_{j\tau'}^{(c)} \,\big|\, \theta_{j\tau}^{(c)} \ \sim\ \sum_{i=1}^{\infty} \frac{\tilde\pi_{ji}^{(c)} \big[1 - I(\theta_{j\tau}^{(c)} = \theta_i^*)\big]}{\sum_{l=1}^{\infty} \tilde\pi_{jl}^{(c)} \big[1 - I(\theta_{j\tau}^{(c)} = \theta_l^*)\big]}\; \delta_{\theta_i^*} \quad (6) $$

where $I(\cdot)$ is the indicator function (equal to one if the argument is true, and zero otherwise). This construction imposes that $\theta_{j\tau'}^{(c)} \neq \theta_{j\tau}^{(c)}$, but otherwise preserves that the elements of $\{\theta_i^*\}$ are drawn with relative probabilities consistent with $\tilde G_j^{(c)}$ (a minimal sketch of this adjustment is given at the end of this subsection). Note that the time associated with a given spike is assumed known after detection (i.e., it is a covariate), and therefore it is known a priori for which spikes the above adjustments must be made to the model.

The representation in (6) constitutes a proper generative construction for $\{\theta_{jn}^{(c)}\}$ in the presence of spikes that co-occur within the refractory time, but it complicates inference. Specifically, recall that $G_j^{(c)} = \sum_{i=1}^{\infty} \pi_{ji}^{(c)} \delta_{\theta_i^*}$, with $\pi_{ji}^{(c)} = U_{ji} \prod_{h<i}(1 - U_{jh})$ and $U_{ji} \sim \mathrm{Beta}(1, \alpha_{jc})$. In the original construction, (4) and (5), in which refractory-time violations are not accounted for, the Gibbs update equations for $\{U_{ji}\}$ are analytic, due to model conjugacy. However, conjugacy for $\{U_{ji}\}$ is lost with (6), and therefore a Metropolis-Hastings (MH) step would be required to draw these random variables within a Markov chain Monte Carlo (MCMC) analysis. This added complexity is often unnecessary, since the number of refractory-time events is typically very small relative to the total number of spikes that must be sorted. Hence, we have successfully implemented the following approximation to the above construction. While $\theta_{j\tau'}^{(c)}$ is drawn as in (6), assigning $\theta_{j\tau'}^{(c)}$ to one of the members of $\{\theta_i^*\}$ while avoiding a refractory-time violation, the update equations for $\{U_{ji}\}$ are executed as they would be in (4) and (5), without an MH step. In other words, a construction like (6) is used to assign elements of $\{\theta_i^*\}$ to spikes, but after this step the update equations for $\{U_{ji}\}$ are implemented as in the original (conjugate) model. This is essentially the approach employed in [5], but now in terms of a "stick-breaking" rather than CRP construction of the DP (here a dHDP), and as in [5] we have found it to yield encouraging results (e.g., no refractory-time violations, and sorting in good agreement with "truth" when available).

Finally, in [5] the authors considered a "drift" in the atoms associated with the DP, which here would correspond to a drift in the atoms associated with our dHDP. In that construction, rather than drawing the $\theta_i^* \sim G_0$ once as in (5), one may draw $\theta_i^* \sim G_0$ for the first block of time, and then employ a simple Gaussian auto-regressive model to allow the $\{\theta_i^*\}$ to drift a small amount between consecutive blocks. Specifically, if $\{\theta_{ji}^*\}$ represents the atoms for block $j$, then $\theta_{j+1,i}^* \sim \mathcal{N}(\theta_{ji}^*, \gamma_0^{-1})$, where it is imposed that $\gamma_0$ is large.
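A minimal sketch of the refractory-time adjustment in (6): when two spikes fall within the refractory period, the second spike re-draws its cluster from the stick-breaking weights with the first spike's cluster zeroed out and the remaining mass renormalized. The weights below are illustrative placeholders.

```python
import numpy as np

def redraw_excluding(weights, forbidden_cluster, rng):
    w = weights.copy()
    w[forbidden_cluster] = 0.0      # impose theta_{j tau'} != theta_{j tau}, as in (6)
    w /= w.sum()                    # renormalize the remaining probability mass
    return rng.choice(len(w), p=w)

rng = np.random.default_rng(0)
weights = np.array([0.5, 0.3, 0.2])                       # truncated stick-breaking weights
print(redraw_excluding(weights, forbidden_cluster=0, rng=rng))   # never returns cluster 0
```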
We examined this within the context of the model proposed here, and for the data considered in Section 4 this added modeling complexity did not change the results significantly; we therefore did not include it when presenting results. This observed unimportance of imposing drift in $\{\theta_{ji}^*\}$ is likely due to the fact that we draw $s_{jn}^{(c)} \sim f(\theta_{jn}^{(c)})$ with a Gaussian $f(\cdot)$, and therefore even if the $\{\theta_{ji}^*\}$ do not change across data blocks, the model allows drift via variations in the draws from the Gaussian (affecting the inferred variance thereof).

3 Inference and Computations

For online sorting of spikes, a Chinese restaurant process (CRP) formulation like that in [5] is desirable. The proposed model may be implemented as a generalization of the CRP, as the general form of the model in Section 2.2 is independent of the specific way inference is performed. In a CRP construction, the Chinese restaurant franchise (CRF) model [20] is invoked, and the model in Section 2.2 yields a dynamic CRF (dCRF), where each franchise is associated with a particular channel. The hierarchical form of the dCRF, including the dictionary-learning component of Section 2.1, is fully conjugate, and may therefore be implemented via a Gibbs sampler.

As hinted by the construction in (6), we here employ a stick-breaking construction of the model, analogous to the form of inference employed in [17]. We employ a retrospective stick-breaking construction [15] for $G_j^{(c)}$ and $G$ [10], such that the number of terms used to construct $G$ and $G_j^{(c)}$ is unbounded and adapts to the data. Using this construction, the model is able to adapt to the number of neurons present, adding and deleting clusters as needed. In this sense the stick-breaking construction may also be considered for online implementations. Further, in this model the parameter Gibbs sampling follows an online-style inference, since the data blocks come in sequentially and the parameters for each block depend only on the previous block or on a new component. Therefore, while online implementation is not our principal focus here, it may be executed with the proposed model. We also implemented a CRF version, for which there is no truncation; both inference methods (stick-breaking and CRF implementations) gave very similar results. Although this paper is not principally focused on online implementations, in that context one may also consider online and evolving learning of the dictionary $D$ [13]. There is recent research on online dictionary learning which may be adapted here, using recent extensions via Bayesian formalisms [9]; this would, for example, allow the linear subspace in which the spike shapes reside to adapt/change with each data block.

4 Example Results

For these experiments we used a truncation level of $K = 60$ dictionary elements. In dictionary learning, the hyperparameters in the gamma priors of $\gamma_c$ and $\psi_p^{(c)}$ were set as $a_{\gamma_c} = 10^{-6}$ and $b_{\gamma_c} = 10^{-6}$, $a_{\psi_p^{(c)}} = 0.1$ and $b_{\psi_p^{(c)}} = 10^{-5}$. In the HDP, we set Ga(1,1) priors for $\alpha_0$ and $\alpha_c$; in the dHDP, we set Ga(1,1) priors for $\alpha_0$ and $\alpha_{jc}$. Meanwhile, in order to encourage the groups to be shared, we set the prior $\prod_{c=1}^{C}\prod_{j=1}^{J-1} \mathrm{Beta}(w_j^{(c)}; a_w, b_w)$ with $a_w = 0.1$ and $b_w = 1$. These parameters have not been optimized, and many analogous settings yield similar results. We used 5000 burn-in samples and 5000 collection samples in the Gibbs sampler, and we choose the collection sample with the maximum likelihood when presenting the example clusterings below. For K-means and the GMM, we set the number of clusters to 3 for the simulated data and to 2 for the real data (see below).
Table 1: Summary of results on simulated data.

Method               Channel 1  Channel 2  Channel 3  Average
K-means              96.00%     96.02%     95.77%     95.93%
GMM                  84.33%     94.25%     91.75%     90.11%
K-means with 2 PCs   96.80%     96.90%     96.50%     96.81%
GMM with 2 PCs       96.83%     96.98%     96.92%     96.91%
DP-DL                97.00%     96.92%     97.08%     97.00%
HDP-DL               97.39%     97.08%     97.08%     97.18%

4.1 Simulated Data

In neural spike trains it is very difficult to get ground-truth information, so for testing and verification we initially consider simulated data with known ground truth. To generate data we draw from the model $x_n^{(c)} \sim \mathcal{N}(D\,\mathrm{diag}(\lambda^{(c)})\, s_n^{(c)},\, 0.01\, I_D)$. We define $D \in \mathbb{R}^{D\times K}$ and $\lambda^{(c)} \in \mathbb{R}^K$, which constructs our data from $K = 2$ primary dictionary elements of length $D = 40$ in $C = 3$ channels. These dictionary elements are randomly drawn. We vary $\lambda^{(c)}$ from channel to channel, and for each spike we generate the feature strengths according to $p(s_n^{(c)}) = \sum_{i=1}^{3} \pi_i\, \mathcal{N}(s_n^{(c)} \mid \mu_i^{(c)},\, 0.5\, I_K)$ with $\pi = [1/3,\ 1/3,\ 1/3]$, which means that there are three neurons across all the channels. We define $\mu_i^{(c)} \in \mathbb{R}^K$ as the mean in feature space for each neuron, and shift the neuron means from channel to channel (a minimal sketch of this generator is given at the end of this subsection). For the results, we associate each cluster with a neuron and determine the percentage of spikes assigned to their correct cluster. The results are shown in Table 1. The combined Dirichlet process and dictionary learning (DP-DL) gives results similar to the GMM with 2 principal components (PCs). Because DP-DL learns the appropriate number of clusters (three) and dictionary elements (two), these models are expected to perform similarly, except that DP-DL does not require knowledge of the number of dictionary elements and clusters a priori. The HDP-DL is allowed to share global clusters and dictionary elements between channels, which further improves results. In Figure 2, the sample posteriors peak at the true values of 3 used "global" clusters (at the top layer of the HDP) and 2 used dictionary elements. Additionally, the HDP shares cluster information between channels, which helps the cluster accuracy. In fact, spikes at the same time will typically be drawn from the same global cluster despite having independent local clusters, as seen in the global cluster usage for each channel in Figure 2(b). Thus, we can determine a global spike at each time point as well as on each channel.

Figure 2: Posterior information from HDP-DL on simulated data. (a) Approximate posterior distribution of the number of used dictionary elements (i.e., $\|b\|_0$); (b) example collection sample of the global cluster usage on channels 1-3 (each local cluster is mapped to its corresponding global index); (c) approximate posterior distribution of the number of global clusters used.
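A minimal sketch of the simulated-data generator of this subsection: $x_n^{(c)} \sim \mathcal{N}(D\,\mathrm{diag}(\lambda^{(c)})\, s_n^{(c)},\, 0.01\, I_D)$, with $K = 2$ dictionary elements of length $D = 40$ and three neuron means in feature space. The specific random draws below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
D_len, K, n_spikes = 40, 2, 300
D = rng.standard_normal((D_len, K))          # randomly drawn dictionary elements
lam = rng.uniform(0.5, 1.5, size=K)          # channel-dependent weights lambda^{(c)} (assumed)
means = rng.standard_normal((3, K))          # one feature-space mean per neuron
labels = rng.integers(0, 3, size=n_spikes)   # pi = [1/3, 1/3, 1/3]

# s ~ N(mu_label, 0.5 I_K); x = D diag(lam) s + noise with covariance 0.01 I_D
S = means[labels] + np.sqrt(0.5) * rng.standard_normal((n_spikes, K))
X = S @ np.diag(lam) @ D.T + 0.1 * rng.standard_normal((n_spikes, D_len))
print(X.shape)   # (300, 40): 300 spikes, 40 time samples each
```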
Table 2: Results from testing on the d533101 data [7]. KFM denotes the Kalman filter mixture method [2].

Method               Channel 1  Channel 2  Channel 3  Channel 4  Average
K-means              86.67%     88.04%     89.20%     88.40%     88.08%
GMM                  87.43%     90.06%     86.75%     85.43%     87.42%
K-means with 2 PCs   87.47%     88.16%     89.40%     88.72%     88.44%
GMM with 2 PCs       89.00%     89.04%     87.43%     90.70%     89.04%
KFM with 2 PCs       91.00%     89.20%     86.35%     86.87%     88.36%
DP with 2 PCs        89.04%     89.00%     87.43%     86.79%     88.07%
HDP with 2 PCs       90.36%     90.00%     90.00%     87.79%     89.54%
DP-DL                92.29%     92.38%     89.52%     92.45%     91.89%
HDP-DL               93.38%     93.18%     93.05%     92.61%     93.05%

4.2 Real Data with Partial Ground Truth

We use the publicly available dataset hc-1 (available from http://crcns.org/data-sets/hc/hc-1). These data consist of both extracellular recordings and an intracellular recording from a nearby neuron in the hippocampus of an anesthetized rat [7]. Intracellular recordings give clean signals of the spike train from a specific neuron, yielding accurate spike times for that neuron. Thus, if we detect a spike in a nearby extracellular recording within a close time window (<0.5 ms) of an intracellular spike, we assume that the spike detected in the extracellular recording corresponds to the known neuron's spikes. This provides partial ground truth, and allows us to compare methods against the known information. For the accuracy analysis, we determine one cluster that corresponds to the known neuron. We then consider a spike to be correctly sorted if it is a known spike in the known cluster, or an unknown spike in the unknown cluster.

In order to give a fair comparison of methods, we first considered the widely used data d533101 and used the same preprocessing as [2]. These data consist of 4-channel extracellular recordings and a 1-channel intracellular recording. We used 2491 detected spikes, 786 of which came from the known neuron. The results are shown in Table 2. They show that learning the feature space, instead of using the top 2 PCA components, increases sorting accuracy. This phenomenon can be seen in Figure 1, where it is impossible to accurately resolve the clusters in the space of the first 2 principal components, through either K-means or GMM. Thus, by jointly learning a suitable feature space and clustering, we are able to separate the unknown- and known-neuron clusters more accurately. In the HDP model the advantage is clear in the global accuracy: we achieve 89.54% when using 2 PCs and 93.05% when using dictionary learning. In addition to learning the appropriate feature space, HDP-DL and DP-DL can infer the appropriate number of clusters, allowing the data to define the number of neurons. The posterior distributions of the number of global clusters and of the number of factors (dictionary elements) used are shown in Figures 3(a) and 3(b), along with the most-used elements of the learned dictionary in Figure 3(c). The dictionary elements show shapes similar both to neuron spikes (Figure 3(d)) and to wavelets. The spiky nature of the learned dictionary can give factors similar to those used in the discrete-wavelet-transform clustering of [11], which chose the Daubechies wavelet for its spiky nature (but here, rather than a priori selecting an orthogonal wavelet basis, we learn a dictionary that is typically not orthogonal, but is wavelet-like).

Next we used the d561102 data from hc-1, which consist of 4 extracellular recordings and 1 intracellular recording. For spike detection we high-pass filtered the data at 300 Hz and detected spikes when the voltage level passed a positive or negative threshold, as in [2]; a minimal sketch of this detection step follows.
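A minimal sketch of the detection preprocessing just described: high-pass filter the raw trace at 300 Hz, then flag samples whose voltage crosses a positive or negative threshold. The sampling rate, filter order, and threshold rule are assumptions for illustration, not the settings of [2].

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_spikes(trace, fs, cutoff_hz=300.0, thresh=4.0):
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="highpass")
    filtered = filtfilt(b, a, trace)                # zero-phase high-pass at 300 Hz
    sigma = np.median(np.abs(filtered)) / 0.6745    # robust noise estimate (assumed rule)
    crossings = np.flatnonzero(np.abs(filtered) > thresh * sigma)
    return filtered, crossings

rng = np.random.default_rng(0)
trace = rng.standard_normal(20000)                  # placeholder raw recording
filtered, idx = detect_spikes(trace, fs=10000.0)
print(len(idx), "threshold crossings")
```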
We chose these data because the known neuron displays dynamic properties, showing periods of activity and inactivity. The intracellular recording in Figure 4(a) shows that the known neuron is active for only a brief section of the recorded signal, and is then inactive for the remainder. This nonstationarity passes along to the extracellular spike train and the detected spikes. We used the first 930 detected spikes, which included 202 spikes from the known cluster. In order to model the dynamic properties, we binned the data into 31 subgroups of 30 spikes for use with our multichannel dynamic HDP. The results are shown in Table 3.

Figure 3: Results from HDP-DL on the d533101 data. (a) Approximate posterior probability of the number of global clusters (across all channels); (b) approximate posterior distribution of the number of dictionary elements; (c) the six most-used dictionary elements; (d) examples of typical spikes from the data.

Table 3: Results for the d561102 data [7].

Method               Channel 1  Channel 2  Channel 3  Channel 4  Average
K-means              61.82%     78.77%     83.59%     89.39%     78.39%
GMM                  73.85%     78.66%     74.18%     76.59%     75.82%
K-means with 2 PCs   61.82%     78.77%     84.79%     89.39%     78.69%
GMM with 2 PCs       75.82%     78.77%     75.71%     88.73%     79.76%
DP-DL                68.49%     81.73%     84.57%     88.73%     80.88%
HDP-DL               74.40%     82.49%     85.34%     88.40%     82.66%
MdHDP-DL             76.04%     84.79%     87.53%     90.48%     84.71%

The model adapts to the nonstationary spike dynamics by learning parameters that capture the dynamic properties at block 11 ($w_{11}^{(c)} \approx 1$, indicating that the dHDP has detected a change in the characteristics of the spikes), which is where the known neuron goes inactive. Thus, the model is more likely to draw new local clusters at this point, reflecting the nonstationary data. Additionally, in Figure 4(c) the global cluster usage shows a dramatic change at time block 11, where a cluster in the model goes inactive at the same time the known neuron goes inactive. Because the dynamic model can capture these dynamic properties, the results improve when using it. We obtain a global accuracy (across all channels) of 82.66% using HDP-DL and a global accuracy of 84.71% using the multichannel dynamic HDP-DL (MdHDP-DL). We also tried the KFM on these data, but we were unable to obtain satisfactory results with it. We also calculated the true-positive and false-positive counts to evaluate each method, but due to limited space those results are placed in the Supplementary Material.

Figure 4: Results of the multichannel dHDP on d561102. (a) First 40 seconds of the intracellular recording of d561102; (b) local cluster usage by each spike in the d561102 data on channel 4; (c) global cluster usage at different time blocks for d561102; (d) sharing weight $w_j^{(c)}$ (the probability of introducing a new component) at each time block on the fourth channel; it spikes at block 11, when the known neuron goes inactive.
5 Conclusions

We have presented a new method for performing multi-channel spike sorting, in which the underlying features (dictionary elements) and the sorting are learned jointly, while also allowing time-evolving variation in the spike statistics. The model adaptively learns dictionary elements of a wavelet-like nature (but not orthogonal), with characteristics matched to the shape of the spikes. Encouraging results have been presented on simulated and real data sets.

Acknowledgments

The authors would like to thank A. Calabrese for providing the KFM code and the processed d533101 data. The research reported here was supported under the DARPA HIST program.

References

[1] A. Bar-Hillel, A. Spiro, and E. Stark. Spike sorting: Bayesian clustering of non-stationary data. J. Neuroscience Methods, 2006.
[2] A. Calabrese and L. Paninski. Kalman filter mixture model for spike sorting of non-stationary data. J. Neuroscience Methods, 2010.
[3] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1973.
[4] Y. Gao, M. J. Black, E. Bienenstock, S. Shoham, and J. P. Donoghue. Probabilistic inference of arm motion from neural activity in motor cortex. Proc. Advances in NIPS, 2002.
[5] J. Gasthaus, F. Wood, D. Gorur, and Y. W. Teh. Dependent Dirichlet process spike sorting. In Advances in Neural Information Processing Systems, 2009.
[6] D. Gorur, C. Rasmussen, A. Tolias, F. Sinz, and N. Logothetis. Modelling spikes with mixtures of factor analysers. Pattern Recognition, 2004.
[7] D. A. Henze, Z. Borhegyi, J. Csicsvari, A. Mamiya, K. D. Harris, and G. Buzsaki. Intracellular features predicted by extracellular recordings in the hippocampus in vivo. J. Neurophysiology, 2010.
[8] J. A. Herbst, S. Gammeter, D. Ferrero, and R. H. R. Hahnloser. Spike sorting with hidden Markov models. J. Neuroscience Methods, 2008.
[9] M. D. Hoffman, D. M. Blei, and F. Bach. Online learning for latent Dirichlet allocation. Proc. NIPS, 2010.
[10] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. J. Am. Stat. Ass., 2001.
[11] J. C. Letelier and P. P. Weber. Spike sorting based on discrete wavelet transform coefficients. J. Neuroscience Methods, 2000.
[12] M. S. Lewicki. A review of methods for spike sorting: the detection and classification of neural action potentials. Network: Computation in Neural Systems, 1998.
[13] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. J. Machine Learning Research, 2010.
[14] M. A. Nicolelis. Brain-machine interfaces to restore motor function and probe neural circuits. Nature Reviews: Neuroscience, 2003.
[15] O. Papaspiliopoulos and G. O. Roberts. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 2008.
[16] C. Pouzat, M. Delescluse, P. Viot, and J. Diebolt. Improved spike-sorting by modeling firing statistics and burst-dependent spike amplitude attenuation: A Markov chain Monte Carlo approach. J. Neurophysiology, 2004.
[17] L. Ren, D. B. Dunson, and L. Carin. The dynamic hierarchical Dirichlet process. International Conference on Machine Learning, 2008.
[18] G. Santhanam, S. I. Ryu, B. M. Yu, A. Afshar, and K. V. Shenoy. A high-performance brain-computer interface. Nature, 2006.
[19] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
[20] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. J. Am. Stat. Ass., 2005.
[21] F. Wood, S. Roth, and M. J. Black.
Modeling neural population spiking activity with Gibbs distributions. Proc. Advances in Neural Information Processing Systems, 2005.
[22] W. Wu, M. J. Black, Y. Gao, E. Bienenstock, M. Serruya, A. Shaikhouni, and J. P. Donoghue. Neural decoding of cursor motion using a Kalman filter. Proc. Advances in NIPS, 2003.
RTRMC: A Riemannian trust-region method for low-rank matrix completion

Nicolas Boumal
ICTEAM Institute, Université catholique de Louvain, B-1348 Louvain-la-Neuve
[email protected] (Web: http://perso.uclouvain.be/nicolas.boumal/)

P.-A. Absil
ICTEAM Institute, Université catholique de Louvain, B-1348 Louvain-la-Neuve
[email protected]

Abstract

We consider large matrices of low rank. We address the problem of recovering such matrices when most of the entries are unknown. Matrix completion finds applications in recommender systems. In this setting, the rows of the matrix may correspond to items and the columns may correspond to users. The known entries are the ratings given by users to some items. The aim is to predict the unobserved ratings. This problem is commonly stated in a constrained optimization framework. We follow an approach that exploits the geometry of the low-rank constraint to recast the problem as an unconstrained optimization problem on the Grassmann manifold. We then apply first- and second-order Riemannian trust-region methods to solve it. The cost of each iteration is linear in the number of known entries. Our methods, RTRMC 1 and 2, outperform state-of-the-art algorithms on a wide range of problem instances.

1 Introduction

We address the problem of recovering a low-rank m-by-n matrix $X$ of which a few entries are observed, possibly with noise. Throughout, we assume that $r = \mathrm{rank}(X) \ll m \le n$ and note $\Omega \subset \{1,\dots,m\} \times \{1,\dots,n\}$ the set of indices of the observed entries of $X$, i.e., $X_{ij}$ is known iff $(i,j) \in \Omega$. Solving this problem is notably useful in recommender systems, where one tries to predict the ratings users would give to items they have not purchased.

1.1 Related work

In the noiseless case, one could state the minimum-rank matrix recovery problem as follows:

$$ \min_{\hat X \in \mathbb{R}^{m\times n}} \ \mathrm{rank}\,\hat X \quad \text{such that} \quad \hat X_{ij} = X_{ij} \ \ \forall (i,j) \in \Omega. \quad (1) $$

This problem, however, is NP-hard [CR09]. A possible convex relaxation of (1), introduced by Candès and Recht [CR09], is to use the nuclear norm of $\hat X$ as objective function, i.e., the sum of its singular values, noted $\|\hat X\|_*$. The SVT method [CCS08] attempts to solve such a convex problem using tools from compressed sensing, and the ADMiRA method [LB10] does so using matching-pursuit-like techniques. Alternatively, one may minimize the discrepancy between $\hat X$ and $X$ at the entries in $\Omega$ under the constraint that $\mathrm{rank}(\hat X) \le r$ for some small constant $r$. Since any matrix $\hat X$ of rank at most $r$ may be written in the form $UW$ with $U \in \mathbb{R}^{m\times r}$ and $W \in \mathbb{R}^{r\times n}$, a reasonable formulation of the problem reads:

$$ \min_{U \in \mathbb{R}^{m\times r}} \ \min_{W \in \mathbb{R}^{r\times n}} \ \sum_{(i,j)\in\Omega} \big( (UW)_{ij} - X_{ij} \big)^2. \quad (2) $$
Unfortunately, the objective function of the outer minimization in (3) may be discontinuous at points U for which the leastsquares problem in W does not have a unique solution. Dai et al. proposed ingenious ways to deal with the discontinuity. Their focus, though, was on deriving theoretical performance guarantees rather than developing fast algorithms. Keshavan et al. [KO09, KM10] state the problem on the Grassmannian too, but propose to simultaneously optimize on the row and column spaces, yielding a smaller least-squares problem which is unlikely to not have a unique solution, resulting in a smooth objective function. In one of their recent papers [KM10], they solve: X 2 2 (4) (U SV >)ij ? Xij + ?2 U SV > , min min U ?G(m,r),V ?G(n,r) S?Rr?r F (i,j)?? where U and V are any orthonormal bases of U and V , respectively, and ? is a regularization parameter. The authors propose an efficient SVD-based initial guess for U and V which they refine using a steepest descent method, along with strong theoretical guarantees. Meyer et al. [MBS11] proposed a Riemannian approach to linear regression on fixed-rank matrices. Their regression framework encompasses matrix completion problems. Likewise, Balzano et al. [BNR10] introduced GROUSE for subspace identification on the Grassmannian, applicable to matrix completion. Finally, in the preprint [Van11] which became public while we were preparing the camera-ready version of this paper, Vandereycken proposes an approach based on the submanifold geometry of the sets of fixed-rank matrices. 1.2 Our contribution and outline of the paper Dai et al.?s initial formulation (3) has a discontinuous objective function on the Grassmannian. The OptSpace formulation (4) on the other hand has a continuous objective and comes with a smart initial guess, but optimizes on a higher-dimensional search space, while it is arguably preferable to keep the dimension of the manifold search space low, even at the expense of a larger least-squares problem. Furthermore, the OptSpace regular ization term is efficiently computable since U SV > F = kSkF , but it penalizes all entries instead of just the entries (i, j) ? / ?. In an effort to combine the best of both worlds, we equip (3) with a regularization term weighted by ? > 0, which yields a smooth objective function defined over an appropriate search space: 2 ? 2 X 1 X 2 min min Cij (U W )ij ? Xij + (U W )2ij . (5) r?n 2 2 U ?G(m,r) W ?R (i,j)?? / (i,j)?? Here, we introduced a confidence index Cij > 0 for each observation Xij , which may be useful in applications. As we will see, introducing a regularization term is essential to ensure smoothness of the objective and hence obtain good convergence properties. It may not be critical for practical problem instances though. We further innovate on previous works by using a Riemannian trust-region method, GenRTR [ABG07], as optimization algorithm to minimize (5) on the Grassmannian. GenRTR is readily available as a free Matlab package and comes with strong convergence results that are naturally inherited by our algorithms. In Section 2, we rapidly cover the essential useful tools on the Grassmann manifold. In Section 3, we derive expressions for the gradient and the Hessian of our objective function while paying special attention to complexity. Section 4 sums up the main properties of the Riemannian trust-region method. Section 5 shows a few results of numerical experiments demonstrating the effectiveness of our approach. 
2 Geometry of the Grassmann manifold

Our objective function $f$ (10) is defined over the Grassmann manifold $\mathcal{G}(m,r)$, i.e., the set of r-dimensional vector subspaces of $\mathbb{R}^m$. Absil et al. [AMS08] give a computation-oriented description of the geometry of this manifold. Here, we only give a summary of the important tools we use. Each point $\mathcal{U} \in \mathcal{G}(m,r)$ is a vector subspace we may represent numerically as the column space of a full-rank matrix $U \in \mathbb{R}^{m\times r}$: $\mathcal{U} = \mathrm{col}(U)$. For numerical reasons, we will only use orthonormal matrices $U \in \mathcal{U}(m,r) = \{U \in \mathbb{R}^{m\times r} : U^\top U = I_r\}$. The set $\mathcal{U}(m,r)$ is the Stiefel manifold.

The Grassmannian is a Riemannian manifold, and as such we can define a tangent space to $\mathcal{G}(m,r)$ at each point $\mathcal{U}$, noted $T_\mathcal{U}\mathcal{G}(m,r)$. The latter is a vector space of dimension $\dim \mathcal{G}(m,r) = r(m-r)$. A tangent vector $\mathcal{H} \in T_\mathcal{U}\mathcal{G}(m,r)$, where we represent $\mathcal{U}$ as the orthonormal matrix $U$, is represented by a unique matrix $H \in \mathbb{R}^{m\times r}$ verifying $U^\top H = 0$ and $\frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{col}(U + tH)\big|_{t=0} = \mathcal{H}$. For practical purposes we may, with a slight abuse of notation we often commit hereafter, write $\mathcal{H} = H$ (assuming $U$ is known from the context) and $T_\mathcal{U}\mathcal{G}(m,r) = \{H \in \mathbb{R}^{m\times r} : U^\top H = 0\}$. Each tangent space is endowed with an inner product, the Riemannian metric, that varies smoothly from point to point. It is inherited from the embedding space $\mathbb{R}^{m\times r}$ of the matrix representations of tangent vectors: $\forall H_1, H_2 \in T_\mathcal{U}\mathcal{G}(m,r):\ \langle H_1, H_2 \rangle_\mathcal{U} = \mathrm{Trace}(H_2^\top H_1)$. The orthogonal projector from $\mathbb{R}^{m\times r}$ onto the tangent space $T_\mathcal{U}\mathcal{G}(m,r)$ is given by:

$$ P_\mathcal{U} : \mathbb{R}^{m\times r} \to T_\mathcal{U}\mathcal{G}(m,r) : H \mapsto P_\mathcal{U} H = (I - UU^\top)H. $$

One can also project a vector onto the tangent space of the Stiefel manifold:

$$ P_U^{St} : \mathbb{R}^{m\times r} \to T_U\,\mathcal{U}(m,r) : H \mapsto P_U^{St} H = (I - UU^\top)H + U\,\mathrm{skew}(U^\top H), $$

where $\mathrm{skew}(X) = (X - X^\top)/2$ extracts the skew-symmetric part of $X$. This is useful for the computation of $\mathrm{grad} f(\mathcal{U}) \in T_\mathcal{U}\mathcal{G}(m,r)$. Indeed, according to [AMS08, eqs. (3.37) and (3.39)], considering $\bar f : \mathbb{R}^{m\times r} \to \mathbb{R}$, its restriction $\bar f|_{\mathcal{U}(m,r)}$ to the Stiefel manifold, and $f : \mathcal{G}(m,r) \to \mathbb{R}$ such that $f(\mathrm{col}(U)) = \bar f|_{\mathcal{U}(m,r)}(U)$ is well-defined, as will be the case in Section 3, we have (with a slight abuse of notation):

$$ \mathrm{grad} f(\mathcal{U}) = \mathrm{grad}\, \bar f|_{\mathcal{U}(m,r)}(U) = P_U^{St}\, \mathrm{grad} \bar f(U). \quad (6) $$

Similarly, since $P_\mathcal{U} \circ P_U^{St} = P_\mathcal{U}$, the Hessian of $f$ at $\mathcal{U}$ along $\mathcal{H}$ is given by [AMS08, eqs. (5.14) and (5.18)]:

$$ \mathrm{Hess} f(\mathcal{U})[\mathcal{H}] = P_\mathcal{U}\Big( \mathrm{D}\big(U \mapsto P_U^{St}\, \mathrm{grad} \bar f(U)\big)(U)[H] \Big), \quad (7) $$

where $\mathrm{D}g(X)[H]$ is the directional derivative of $g$ at $X$ along $H$, in the classical sense. For our optimization algorithms, it is important to be able to move along the manifold from some initial point $\mathcal{U}$ in some prescribed direction specified by a tangent vector $H$. To this end, we use the retraction:

$$ R_\mathcal{U}(H) = \mathrm{qf}(U + H), \quad (8) $$

where $\mathrm{qf}(X) \in \mathcal{U}(m,r)$ designates the m-by-r Q-factor of the QR decomposition of $X \in \mathbb{R}^{m\times r}$ (a minimal sketch follows).
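A minimal sketch of the retraction (8): move from $U$ in the direction of a tangent vector $H$, then map back to the Stiefel manifold via the Q-factor of a thin QR decomposition. The sign convention of the Q-factor is ignored here.

```python
import numpy as np

def retract(U, H):
    Q, _ = np.linalg.qr(U + H)   # thin QR; Q is m-by-r with orthonormal columns
    return Q

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 5)))   # a point on U(m, r)
H = rng.standard_normal((100, 5))
H -= U @ (U.T @ H)                                   # project to the tangent space: U^T H = 0
U_new = retract(U, 0.1 * H)
print(np.allclose(U_new.T @ U_new, np.eye(5)))       # True: U_new is again orthonormal
```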
3 Computation of the objective function and its derivatives

We seek an m-by-n matrix $\hat X$ of rank not more than $r$ such that $\hat X$ is as close as possible to a given matrix $X$ at the entries in the observation set $\Omega$. Furthermore, we are given a weight matrix $C \in \mathbb{R}^{m\times n}$ indicating the confidence we have in each observed entry of $X$. The matrix $C$ is positive at entries in $\Omega$ and zero elsewhere. To this end, we consider the following function, where $(X_\Omega)_{ij}$ equals $X_{ij}$ if $(i,j) \in \Omega$ and is zero otherwise:

$$ \bar f : \mathbb{R}^{m\times r} \times \mathbb{R}^{r\times n} \to \mathbb{R} : (U,W) \mapsto \bar f(U,W) = \frac{1}{2}\, \| C \odot (UW - X_\Omega) \|_\Omega^2 + \frac{\lambda^2}{2}\, \| UW \|_{\bar\Omega}^2, \quad (9) $$

where $\bar\Omega$ is the complement of the set $\Omega$, $\odot$ is the entry-wise product, $\lambda > 0$ is a regularization parameter, and $\|M\|_\Omega^2 \triangleq \sum_{(i,j)\in\Omega} M_{ij}^2$. Picking a small but positive $\lambda$ will ensure that the objective function $f$ (10) is smooth. For a fixed $U$, computing the matrix $W$ that minimizes $\bar f$ is a least-squares problem. The mapping between $U$ and this (unique) optimal $W$, noted $W_U$,

$$ U \mapsto W_U = \operatorname*{argmin}_{W \in \mathbb{R}^{r\times n}} \bar f(U, W), $$

is smooth and easily computable; see Section 3.3. By virtue of the discussion in Section 1, we know that the mapping $U \mapsto \bar f(U, W_U)$, with $U \in \mathbb{R}^{m\times r}$, is constant over sets of full-rank matrices $U$ spanning the same column space. Hence, considering these sets as equivalence classes $\mathcal{U}$, the following function $f$ over the Grassmann manifold is well-defined:

$$ f : \mathcal{G}(m,r) \to \mathbb{R} : \mathcal{U} \mapsto f(\mathcal{U}) = \bar f(U, W_U), \quad (10) $$

with any full-rank $U \in \mathbb{R}^{m\times r}$ such that $\mathrm{col}(U) = \mathcal{U}$. The interpretation is as follows: we are looking for an optimal matrix $\hat X = UW$ of rank at most $r$; we have confidence $C_{ij}$ that $\hat X_{ij}$ should equal $X_{ij}$ for $(i,j) \in \Omega$, and (very small) confidence $\lambda$ that $\hat X_{ij}$ should equal 0 for $(i,j) \notin \Omega$.

3.1 Rearranging the objective

Considering (9), it looks like evaluating $\bar f(U,W)$ will require the computation of the product $UW$ at the entries in $\Omega$ and $\bar\Omega$, i.e., we would need to compute the whole matrix $UW$, which cannot cost much less than $O(mnr)$. Since applications typically involve very large values of the product $mn$, this is not acceptable. Alternatively, if we restrict ourselves, without loss of generality, to orthonormal matrices $U$, we observe that $\|UW\|_\Omega^2 + \|UW\|_{\bar\Omega}^2 = \|UW\|_F^2 = \|W\|_F^2$. Consequently, for all $U \in \mathcal{U}(m,r)$, we have $\bar f(U, W_U) = \hat f(U, W_U)$, where

$$ \hat f(U,W) = \frac{1}{2}\, \| C \odot (UW - X_\Omega) \|_\Omega^2 + \frac{\lambda^2}{2}\, \|W\|_F^2 - \frac{\lambda^2}{2}\, \|UW\|_\Omega^2. \quad (11) $$

This only requires the computation of $UW$ at entries in $\Omega$, at a cost of $O(|\Omega| r)$. Finally, let $\hat f : \mathbb{R}^{m\times r} \to \mathbb{R} : U \mapsto \hat f(U, W_U)$, and observe that $f(\mathrm{col}(U)) = \hat f|_{\mathcal{U}(m,r)}(U)$ for all $U \in \mathcal{U}(m,r)$, as in the setting of Section 2.

3.2 Gradient and Hessian of the objective

We now derive formulas for the first- and second-order derivatives of $f$. In deriving these formulas, it is useful to note that, for a suitably smooth mapping $g$,

$$ \mathrm{grad}\big( X \mapsto \tfrac{1}{2}\, \|g(X)\|_F^2 \big)(X) = \mathrm{D}g(X)^*[g(X)], \quad (12) $$

where $\mathrm{D}g(X)^*$ is the adjoint of the differential of $g$ at $X$. For ease of notation, let us define the following m-by-n matrix with the sparsity structure induced by $\Omega$:

$$ \hat C_{ij} = \begin{cases} C_{ij}^2 - \lambda^2 & \text{if } (i,j) \in \Omega, \\ 0 & \text{otherwise.} \end{cases} \quad (13) $$

We also introduce a sparse residue matrix $R_U$ that will come up in various formulas:

$$ R_U = \hat C \odot (U W_U - X_\Omega) - \lambda^2 X_\Omega. \quad (14) $$

Successively using the chain rule, the optimality of $W_U$, and (12), we obtain:

$$ \mathrm{grad} \hat f(U) = \frac{\mathrm{d}}{\mathrm{d}U}\, \hat f(U, W_U) = \frac{\partial}{\partial U}\hat f(U, W_U) + \frac{\partial}{\partial W}\hat f(U, W_U) \cdot \frac{\mathrm{d}W_U}{\mathrm{d}U} = \frac{\partial}{\partial U}\hat f(U, W_U) = R_U W_U^\top. $$

Indeed, since $W_U$ is optimal, $\frac{\partial}{\partial W}\hat f(U, W_U) = U^\top R_U + \lambda^2 W_U = 0$. Then, according to identity (6), and since $U^\top R_U = -\lambda^2 W_U$, the gradient of $f$ at $\mathcal{U} = \mathrm{col}(U)$ on the Grassmannian is given by:

$$ \mathrm{grad} f(\mathcal{U}) = P_U^{St}\, \mathrm{grad}\hat f(U) = (I - UU^\top) R_U W_U^\top + U\,\mathrm{skew}(U^\top R_U W_U^\top) = (I - UU^\top) R_U W_U^\top - \lambda^2 U\,\mathrm{skew}(W_U W_U^\top) = R_U W_U^\top + \lambda^2 U (W_U W_U^\top). \quad (15) $$

We now differentiate (15) according to identity (7) to get a matrix representation of the Hessian of $f$ at $\mathcal{U}$ along $\mathcal{H}$. We note $H$ a matrix representation of the tangent vector $\mathcal{H}$, chosen in accordance with $U$, and $W_{U,H} \triangleq \mathrm{D}(U \mapsto W_U)(U)[H]$ the derivative of the mapping $U \mapsto W_U$ at $U$ along the tangent direction $H$. Then:

$$ \mathrm{Hess} f(\mathcal{U})[\mathcal{H}] = (I - UU^\top)\,\mathrm{D}\,\mathrm{grad}\hat f(U)[H] = (I - UU^\top)\Big[ \big(\hat C \odot (HW_U + UW_{U,H})\big) W_U^\top + R_U W_{U,H}^\top \Big] + \lambda^2 H (W_U W_U^\top) + \lambda^2 U (W_U W_{U,H}^\top). \quad (16) $$
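A minimal sketch of the gradient formula (15): given the sparse residue $R_U$ of (14) and the least-squares solution $W_U$, the Riemannian gradient is $R_U W_U^\top + \lambda^2 U (W_U W_U^\top)$. Dense placeholders stand in for the sparse quantities of the actual implementation.

```python
import numpy as np

def riemannian_grad(U, R_U, W_U, lam):
    # grad f(U) = R_U W_U^T + lam^2 U (W_U W_U^T), as in (15)
    return R_U @ W_U.T + lam**2 * U @ (W_U @ W_U.T)

rng = np.random.default_rng(0)
m, n, r, lam = 50, 60, 4, 1e-6
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
W_U = rng.standard_normal((r, n))
R_U = rng.standard_normal((m, n))   # placeholder: the true R_U is sparse and tied to W_U
G = riemannian_grad(U, R_U, W_U, lam)
# G is tangent (U^T G = 0) only when the optimality condition U^T R_U = -lam^2 W_U holds,
# which random placeholders do not satisfy; with the true W_U it does.
print(G.shape)
```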
3.3 $W_U$ and its derivative $W_{U,H}$

We still need to provide explicit formulas for $W_U$ and $W_{U,H}$. We assume $U \in \mathcal{U}(m,r)$, since we use orthonormal matrices to represent points on the Grassmannian, and $U^\top H = 0$, since $H$ is a tangent vector at $U$. We use the vectorization operator $\mathrm{vec}$, which transforms matrices into vectors by stacking their columns (in Matlab notation, vec(A) = A(:)). Denoting the Kronecker product of two matrices by $\otimes$, we will use the well-known identity for matrices $A$, $Y$, $B$ of appropriate sizes [Bro05]: $\mathrm{vec}(AYB) = (B^\top \otimes A)\,\mathrm{vec}(Y)$. We also write $I_\Omega$ for the orthonormal $|\Omega|$-by-$mn$ matrix such that $\mathrm{vec}_\Omega(M) = I_\Omega\,\mathrm{vec}(M)$ is a vector of length $|\Omega|$ collecting the entries $M_{ij}$ for $(i,j) \in \Omega$, taken in order from $\mathrm{vec}(M)$.

Computing $W_U$ comes down to minimizing the least-squares objective $\hat f(U,W)$ (11) with respect to $W$. We first manipulate $\hat f$ to reach a standard form for least squares, with $S = I_\Omega\,\mathrm{diag}(\mathrm{vec}(C))$:

$$ \hat f(U,W) = \frac{1}{2}\, \|C \odot (UW - X_\Omega)\|_\Omega^2 + \frac{\lambda^2}{2}\, \|W\|_F^2 - \frac{\lambda^2}{2}\, \|UW\|_\Omega^2 $$
$$ = \frac{1}{2}\, \|S\,\mathrm{vec}(UW) - \mathrm{vec}_\Omega(C \odot X_\Omega)\|_2^2 + \frac{\lambda^2}{2}\, \|\mathrm{vec}(W)\|_2^2 - \frac{\lambda^2}{2}\, \|\mathrm{vec}_\Omega(UW)\|_2^2 $$
$$ = \frac{1}{2}\, \|S(I_n \otimes U)\,\mathrm{vec}(W) - \mathrm{vec}_\Omega(C \odot X_\Omega)\|_2^2 + \frac{1}{2}\, \|\lambda I_{rn}\,\mathrm{vec}(W)\|_2^2 - \frac{1}{2}\, \|\lambda I_\Omega (I_n \otimes U)\,\mathrm{vec}(W)\|_2^2 $$
$$ = \frac{1}{2} \left\| \begin{bmatrix} S(I_n \otimes U) \\ \lambda I_{rn} \end{bmatrix} \mathrm{vec}(W) - \begin{bmatrix} \mathrm{vec}_\Omega(C \odot X_\Omega) \\ 0_{rn} \end{bmatrix} \right\|_2^2 - \frac{1}{2}\, \big\|[\lambda I_\Omega(I_n \otimes U)]\,\mathrm{vec}(W)\big\|_2^2 $$
$$ = \frac{1}{2}\, \|A_1 w - b_1\|_2^2 - \frac{1}{2}\, \|A_2 w\|_2^2, $$

where $w = \mathrm{vec}(W) \in \mathbb{R}^{rn}$, $0_{rn} \in \mathbb{R}^{rn}$ is the zero vector, and the definitions of $A_1$, $A_2$ and $b_1$ are obvious. If $A_1^\top A_1 - A_2^\top A_2$ is positive definite, there is a unique minimizing vector $\mathrm{vec}(W_U)$, given by:

$$ \mathrm{vec}(W_U) = (A_1^\top A_1 - A_2^\top A_2)^{-1} A_1^\top b_1. $$

It is easy to compute the following:

$$ A_1^\top A_1 = (I_n \otimes U)^\top (S^\top S)(I_n \otimes U) + \lambda^2 I_{rn}, \qquad A_2^\top A_2 = (I_n \otimes U)^\top (\lambda^2 I_\Omega^\top I_\Omega)(I_n \otimes U), $$
$$ A_1^\top b_1 = (I_n \otimes U)^\top S^\top\, \mathrm{vec}_\Omega(C \odot X_\Omega) = (I_n \otimes U)^\top \mathrm{vec}(C^{(2)} \odot X_\Omega). $$

Throughout the text, we use the notation $M^{(n)}$ for entry-wise exponentiation, i.e., $(M^{(n)})_{ij} = (M_{ij})^n$. Note that $S^\top S - \lambda^2 I_\Omega^\top I_\Omega = \mathrm{diag}(\mathrm{vec}(\hat C))$. We then define $A \in \mathbb{R}^{rn\times rn}$ as:

$$ A \triangleq A_1^\top A_1 - A_2^\top A_2 = (I_n \otimes U)^\top \mathrm{diag}(\mathrm{vec}(\hat C))\,(I_n \otimes U) + \lambda^2 I_{rn}. \quad (17) $$

Observe that the matrix $A$ is block-diagonal, with $n$ symmetric blocks of size $r$. Each block is indeed positive-definite provided $\lambda > 0$ (making $A$ positive-definite too). Thanks to the sparsity of $\hat C$, we can compute these $n$ blocks with $O(|\Omega| r^2)$ flops. To solve systems in $A$, we compute the Cholesky factorization of each block, at a total cost of $O(nr^3)$. Once these factorizations are computed, each system only costs $O(nr^2)$ to solve [TB97]. Collecting all equations in this subsection, we obtain a closed-form formula for $W_U$ (a minimal sketch of this block-wise solve is given at the end of this subsection):

$$ \mathrm{vec}(W_U) = A^{-1}\, \mathrm{vec}\big( U^\top [C^{(2)} \odot X_\Omega] \big), \quad (18) $$

where $A$ is a function of $U$. We would like to differentiate $W_U$ with respect to $U$. Using bilinearity and associativity of $\otimes$, as well as the formula $\mathrm{D}(Y \mapsto Y^{-1})(X)[H] = -X^{-1} H X^{-1}$ [Bro05], some algebra yields:

$$ \mathrm{vec}(W_{U,H}) = -A^{-1}\, \mathrm{vec}\big( H^\top R_U + U^\top [\hat C \odot (H W_U)] \big). \quad (19) $$

The most expensive operation involved in computing $W_{U,H}$ ought to be the resolution of a linear system in $A$. Fortunately, we already factored the $n$ small diagonal blocks of $A$ in Cholesky form to compute $W_U$. Consequently, after computing $W_U$, computing $W_{U,H}$ is cheaper than computing $W_{U'}$ for a new $U'$. This means that we can benefit from computing this information before we move on to a new candidate on the Grassmannian, i.e., it is worth trying second-order methods. We summarize the complexities in the next subsection.
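A minimal sketch of the solve in (18): $A$ is block-diagonal with $n$ blocks of size $r$, block $j$ being $U^\top \mathrm{diag}(\hat C_{:,j})\, U + \lambda^2 I_r$, so $W_U$ is recovered one column at a time via a Cholesky factorization and solve per block. Dense masks are placeholders for the sparse structures of the real implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_WU(U, X, C, mask, lam):
    """Column-wise solve of vec(W_U) = A^{-1} vec(U^T [C^(2) . X_Omega]), as in (17)-(18)."""
    m, r = U.shape
    n = X.shape[1]
    W = np.empty((r, n))
    for j in range(n):
        rows = mask[:, j]
        Uj = U[rows]                          # rows of U observed in column j
        chat = C[rows, j] ** 2 - lam**2       # entries of Chat (13) on Omega
        Aj = Uj.T @ (chat[:, None] * Uj) + lam**2 * np.eye(r)   # block j of A in (17)
        bj = Uj.T @ (C[rows, j] ** 2 * X[rows, j])              # block j of U^T [C^(2) . X_Omega]
        W[:, j] = cho_solve(cho_factor(Aj), bj)  # O(r^3) factorization, O(r^2) per solve
    return W
```

In a practical implementation the Cholesky factors would be cached, since, as noted above, (19) reuses them to compute $W_{U,H}$ at marginal cost.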
3.4 Numerical complexities

By exploiting the sparsity of many of the matrices involved and the special structure of the matrix $A$ appearing in the computation of $W_U$ and $W_{U,H}$, it is possible to compute the objective $f$ as well as its gradient and its Hessian on the Grassmannian in time essentially linear in the size of the data $|\Omega|$. Memory complexities are also linear in $|\Omega|$. We summarize the computational complexities in Table 1. Please note that most computations are easily parallelizable, but we do not take advantage of that here.

Table 1: All complexities are essentially linear in $|\Omega|$, the number of observed entries.

Computation                                   Complexity                     By-products               Formulas
$W_U$ and $f(\mathcal{U})$                    $O(|\Omega|r^2 + nr^3)$        Cholesky form of $A$      (9), (10), (17), (18)
$\mathrm{grad} f(\mathcal{U})$                $O(|\Omega|r + (m+n)r^2)$      $R_U$ and $W_U W_U^\top$  (13), (14), (15)
$W_{U,H}$ and $\mathrm{Hess} f(\mathcal{U})[\mathcal{H}]$  $O(|\Omega|r + (m+n)r^2)$  --                (16), (19)

4 Riemannian trust-region method

We use a Riemannian trust-region (RTR) method [ABG07] to minimize (10), via the freely available Matlab package GenRTR (version 0.3.0) with its default parameter values. The package is available at http://www.math.fsu.edu/~cbaker/GenRTR/?page=download.

At the current iterate $\mathcal{U} = \mathrm{col}(U)$, the RTR method uses the retraction $R_\mathcal{U}$ (8) to build a quadratic model $m_\mathcal{U} : T_\mathcal{U}\mathcal{G}(m,r) \to \mathbb{R}$ of the lifted objective function $f \circ R_\mathcal{U}$ (lift). It then classically minimizes the model inside a trust region on this vector space (solve), and retracts the resulting tangent vector $H$ to a candidate $U^+ = R_\mathcal{U}(H)$ on the Grassmannian (retract). The quality of $\mathcal{U}^+ = \mathrm{col}(U^+)$ is assessed using $f$, and the step is accepted or rejected accordingly. Likewise, the radius of the trust region is adapted based on the observed quality of the model. The model $m_\mathcal{U}$ of $f \circ R_\mathcal{U}$ has the form:

$$ m_\mathcal{U}(H) = f(\mathcal{U}) + \langle \mathrm{grad} f(\mathcal{U}), H \rangle_\mathcal{U} + \frac{1}{2} \langle \mathcal{A}(\mathcal{U})[H], H \rangle_\mathcal{U}, $$

where $\mathcal{A}(\mathcal{U})$ is some symmetric linear operator on $T_\mathcal{U}\mathcal{G}(m,r)$. Typically, the faster one can compute $\mathcal{A}(\mathcal{U})[H]$, the faster one can minimize $m_\mathcal{U}(H)$ in the trust region. A powerful property of the RTR method is that global convergence of the algorithm toward critical points (local minimizers in practice, since it is a descent method) is guaranteed independently of $\mathcal{A}(\mathcal{U})$ [ABG07, Thm 4.24, Cor. 4.6]. We take advantage of this and first set it to the identity. This yields a steepest-descent-like algorithm we later refer to as RTRMC 1. Additionally, if we take $\mathcal{A}(\mathcal{U})$ to be the Hessian of $f$ at $\mathcal{U}$ (16), we get a quadratic convergence rate, even if we only approximately minimize $m_\mathcal{U}$ within the trust region using a few steps of a well-chosen iterative method [ABG07, Thm 4.14]. This means that the RTR method only requires a few computations of the Hessian along specific directions. We call our method using the Hessian RTRMC 2.

5 Numerical experiments

We test our algorithms on both synthetic and real data and compare their performance against OptSpace, ADMiRA, SVT, LMaFit and Balanced Factorization in terms of accuracy and computation time. All algorithms are run sequentially by Matlab on the same personal computer (Intel Core i5 670 @ 3.60GHz (x4), 8 GB RAM, Matlab 7.10 (R2010a)). Table 2 specifies a few implementation details.

Table 2: All Matlab implementations call subroutines in non-Matlab code to efficiently deal with the sparsity of the matrices involved. PROPACK [Lar05] is a free package for large and sparse SVD computations.
Method                  Environment           Comment
RTRMC 1                 Matlab + some C-Mex   Our method with the "approximate Hessian" set to the identity, i.e., no second-order information. lambda = 10^-6. For the initial guess U_0, we use the OptSpace trimmed SVD.
RTRMC 2                 Matlab + some C-Mex   Same as RTRMC 1 but with the exact Hessian.
OptSpace                C code                [KO09] with lambda = 0. Trimmed SVD + descent on the Grassmannian.
ADMiRA                  Matlab with PROPACK   [LB10] Matching-pursuit based.
SVT                     Matlab with PROPACK   [CCS08] with default tau and delta. Nuclear norm minimization.
LMaFit                  Matlab + some C-Mex   [WYZ10] Alternating minimization.
Balanced Factorization  Matlab + some C-Mex   [MBS11] One of their Riemannian regression methods.

Our methods (RTRMC 1 and 2) and Balanced Factorization require knowledge of the target rank $r$. OptSpace, ADMiRA and LMaFit include a mechanism to guess the rank, but benefit from knowing it, hence we provide the true rank to these methods too. As is, the SVT code does not permit the user to specify the rank. We use the root mean square error (RMSE) criterion to assess the quality of the reconstruction of $X$ with $\hat X$:

$$ \mathrm{RMSE}(X, \hat X) = \|X - \hat X\|_F / \sqrt{mn}. $$

Scenario 1. We first compare the convergence behavior of the different methods on synthetic data. We pick $m = n = 10\,000$ and $r = 10$. The dimension of the manifold of m-by-n matrices of rank $r$ is $d = r(m+n-r)$. We generate $A \in \mathbb{R}^{m\times r}$ and $B \in \mathbb{R}^{r\times n}$ with i.i.d. normal entries of zero mean and unit variance. The target matrix is $X = AB$. We sample $2.5d$ entries uniformly at random, which yields a sampling ratio of 0.5%. Figure 1 is typical and shows the evolution of the RMSE as a function of time (left) and iteration count (right). For $\hat X = UV$ with $U \in \mathbb{R}^{m\times r}$, $V \in \mathbb{R}^{r\times n}$, we compute the RMSE in $O((m+n)r^2)$ flops using:

$$ (mn)\,\mathrm{RMSE}(AB, UV)^2 = \mathrm{Trace}\big((A^\top A)(BB^\top)\big) + \mathrm{Trace}\big((U^\top U)(VV^\top)\big) - 2\,\mathrm{Trace}\big((U^\top A)(BV^\top)\big). $$

Be wary, though, that this formula is numerically inaccurate when the RMSE is much smaller than the norm of either $AB$ or $UV$, owing to the computation of the difference of close large numbers (a minimal sketch of this computation is given at the end of this section).

Scenario 2. In this second test, we repeat the previous experiment with rectangular matrices: $m = 1\,000$, $n = 30\,000$, $r = 5$ and a sampling ratio of 2.6% ($5d$ known entries). We expect RTRMC to perform well on rectangular matrices, since the dimension of the Grassmann manifold we optimize on grows only linearly with $\min(m,n)$, whereas it is the (simple) least-squares problem dimension that grows linearly in $\max(m,n)$. Figure 2 is typical and shows indeed that RTRMC is the fastest tested algorithm on this test.

Scenario 3. Following the protocol in [KMO09], we test our method on the Jester dataset 1 [GRGP01] of ratings of a hundred jokes by 24 983 users. We randomly select 4 000 users and the corresponding continuous ratings in the range [-10, 10]. For each user, we extract two ratings at random as test data. We run the different matrix completion algorithms with a prescribed rank on the remaining training data, N = 100 times for each rank. Table 3 reports the average Normalized Mean Absolute Error (NMAE) on the test data, along with a confidence interval computed as the standard deviation of the NMAEs obtained over the different runs, divided by $\sqrt{N}$. All methods but ADMiRA minimize a similar cost function and consequently perform the same.
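A minimal sketch of the $O((m+n)r^2)$ RMSE identity quoted in Scenario 1: $(mn)\,\mathrm{RMSE}(AB, UV)^2 = \mathrm{tr}((A^\top A)(BB^\top)) + \mathrm{tr}((U^\top U)(VV^\top)) - 2\,\mathrm{tr}((U^\top A)(BV^\top))$. As noted in the text, the formula loses accuracy once the RMSE is much smaller than the factors' norms.

```python
import numpy as np

def rmse_lowrank(A, B, U, V):
    # Computes RMSE(AB, UV) without ever forming the m-by-n products AB and UV.
    m, n = A.shape[0], B.shape[1]
    t = (np.trace((A.T @ A) @ (B @ B.T))
         + np.trace((U.T @ U) @ (V @ V.T))
         - 2.0 * np.trace((U.T @ A) @ (B @ V.T)))
    return np.sqrt(max(t, 0.0) / (m * n))   # clamp: t can go slightly negative numerically

rng = np.random.default_rng(0)
A, B = rng.standard_normal((100, 5)), rng.standard_normal((5, 80))
U, V = rng.standard_normal((100, 5)), rng.standard_normal((5, 80))
dense = np.linalg.norm(A @ B - U @ V) / np.sqrt(100 * 80)
print(np.isclose(rmse_lowrank(A, B, U, V), dense))   # True
```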
Scenario 2. In this second test, we repeat the previous experiment with rectangular matrices: m = 1 000, n = 30 000, r = 5 and a sampling ratio of 2.6% (5d known entries). We expect RTRMC to perform well on rectangular matrices since the dimension of the Grassmann manifold we optimize on only grows linearly with min(m, n), whereas it is the (simple) least-squares problem dimension that grows linearly in max(m, n). Figure 2 is typical and shows indeed that RTRMC is the fastest tested algorithm on this test.

Scenario 3. Following the protocol in [KMO09], we test our method on the Jester dataset 1 [GRGP01] of ratings of a hundred jokes by 24 983 users. We randomly select 4 000 users and the corresponding continuous ratings in the range [−10, 10]. For each user, we extract two ratings at random as test data. We run the different matrix completion algorithms with a prescribed rank on the remaining training data, N = 100 times for each rank. Table 3 reports the average Normalized Mean Absolute Error (NMAE) on the test data along with a confidence interval computed as the standard deviation of the NMAE's obtained for the different runs divided by sqrt(N). All methods but ADMiRA minimize a similar cost function and consequently perform the same.

6 Conclusion

Our contribution is an efficient numerical method to solve large low-rank matrix completion problems. RTRMC competes with the state of the art and enjoys proven global and local convergence to local optima, with a quadratic convergence rate for RTRMC 2. Our methods are particularly efficient on rectangular matrices. To obtain such results, we exploited the geometry of the low-rank constraint and applied techniques from the field of optimization on manifolds. Matlab code for RTRMC 1 and 2 is available at: http://www.inma.ucl.ac.be/~absil/RTRMC/.

Table 3: NMAE's on the Jester dataset 1 (Scenario 3). All algorithms solve the problem in well under a minute for rank 7. All but ADMiRA reach similar results. As a reference, consider that a random guesser would obtain a score of 0.33. Goldberg et al. [GRGP01] report a score of 0.187 but use a different protocol.

    rank    RTRMC 2             OptSpace            LMaFit              Bal. Fac.           ADMiRA
    1       0.1799 ± 2·10^-4    0.1799 ± 2·10^-4    0.1799 ± 2·10^-4    0.1799 ± 2·10^-4    0.1836 ± 2·10^-4
    3       0.1624 ± 2·10^-4    0.1625 ± 2·10^-4    0.1624 ± 2·10^-4    0.1626 ± 2·10^-4    0.1681 ± 2·10^-4
    5       0.1584 ± 2·10^-4    0.1584 ± 2·10^-4    0.1584 ± 2·10^-4    0.1584 ± 2·10^-4    0.1635 ± 2·10^-4
    7       0.1578 ± 2·10^-4    0.1581 ± 2·10^-4    0.1578 ± 2·10^-4    0.1580 ± 2·10^-4    0.1618 ± 2·10^-4

[Figure: RMSE on a log scale (10^1 down to 10^-8) versus time in seconds (left) and iteration count (right), one curve per method.]

Figure 1: Evolution of the RMSE for the six methods under Scenario 1 (m = n = 10 000, r = 10, |Ω|/(mn) = 0.5%, i.e., 99.5% of the entries are unknown). For RTRMC 2, we count the number of inner iterations, i.e., the number of parallelizable steps. ADMiRA stagnates and SVT diverges. All other methods eventually find the exact solution.

[Figure: RMSE on a log scale versus time in seconds (left) and iteration count (right), one curve per method.]

Figure 2: Evolution of the RMSE for the six methods under Scenario 2 (m = 1 000, n = 30 000, r = 5, |Ω|/(mn) = 2.6%). For rectangular matrices, RTRMC is especially efficient owing to the linear growth of the dimension of the search space in min(m, n), whereas for most methods the growth is linear in m + n.

Acknowledgments

This paper presents research results of the Belgian Network DYSCO (Dynamical Systems, Control, and Optimization), funded by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office. NB is an FNRS research fellow (Aspirant). The scientific responsibility rests with its authors.

References

[ABG07] P.-A. Absil, C. G. Baker, and K. A. Gallivan. Trust-region methods on Riemannian manifolds. Found. Comput. Math., 7(3):303-330, July 2007.
[AMS08] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ, 2008.
[BNR10] L. Balzano, R. Nowak, and B. Recht. Online identification and tracking of subspaces from highly incomplete information. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 704-711. IEEE, 2010.
[Bro05] M. Brookes. The matrix reference manual. Imperial College London, 2005.
[CCS08] J.F. Cai, E.J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. Arxiv preprint arXiv:0810.3286, 2008.
[CR09] E.J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[DKM10] W. Dai, E. Kerman, and O. Milenkovic. A Geometric Approach to Low-Rank Matrix Completion. Arxiv preprint arXiv:1006.2086, 2010.
[DMK11] W. Dai, O. Milenkovic, and E. Kerman. Subspace evolution and transfer (SET) for low-rank matrix completion. Signal Processing, IEEE Transactions on, PP(99):1, 2011.
[GRGP01] K. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval, 4(2):133-151, 2001.
[KM10] R.H. Keshavan and A. Montanari. Regularization for matrix completion. In Information Theory Proceedings (ISIT), 2010 IEEE International Symposium on, pages 1503-1507. IEEE, 2010.
[KMO09] R.H. Keshavan, A. Montanari, and S. Oh. Low-rank matrix completion with noisy observations: a quantitative comparison. In Communication, Control, and Computing, 2009. Allerton 2009. 47th Annual Allerton Conference on, pages 1216-1222. IEEE, 2009.
[KO09] R.H. Keshavan and S. Oh. OptSpace: A gradient descent algorithm on the Grassman manifold for matrix completion. Arxiv preprint arXiv:0910.5260 v2, 2009.
[Lar05] R.M. Larsen. PROPACK: Software for large and sparse SVD calculations. Available online. URL http://sun.stanford.edu/~rmunk/PROPACK, 2005.
[LB10] K. Lee and Y. Bresler. ADMiRA: Atomic decomposition for minimum rank approximation. Information Theory, IEEE Transactions on, 56(9):4402-4416, 2010.
[MBS11] G. Meyer, S. Bonnabel, and R. Sepulchre. Linear regression under fixed-rank constraints: a Riemannian approach. In 28th International Conference on Machine Learning. ICML, 2011.
[TB97] L.N. Trefethen and D. Bau. Numerical linear algebra. Society for Industrial Mathematics, 1997.
[Van11] B. Vandereycken. Low-rank matrix completion by Riemannian optimization. Technical report, ANCHP-MATHICSE, Mathematics Section, École Polytechnique Fédérale de Lausanne, 2011.
[WYZ10] Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Technical report, Rice University, 2010. CAAM Technical Report TR10-07.
Hashing Algorithms for Large-Scale Learning

Ping Li, Cornell University, [email protected]
Anshumali Shrivastava, Cornell University, [email protected]
Joshua Moore, Cornell University, [email protected]
Arnd Christian König, Microsoft Research, [email protected]

Abstract

Minwise hashing is a standard technique in the context of search for efficiently computing set similarities. The recent development of b-bit minwise hashing provides a substantial improvement by storing only the lowest b bits of each hashed value. In this paper, we demonstrate that b-bit minwise hashing can be naturally integrated with linear learning algorithms such as linear SVM and logistic regression, to solve large-scale and high-dimensional statistical learning tasks, especially when the data do not fit in memory. We compare b-bit minwise hashing with the Count-Min (CM) and Vowpal Wabbit (VW) algorithms, which have essentially the same variances as random projections. Our theoretical and empirical comparisons illustrate that b-bit minwise hashing is significantly more accurate (at the same storage cost) than VW (and random projections) for binary data.

1 Introduction

With the advent of the Internet, many machine learning applications are faced with very large and inherently high-dimensional datasets, resulting in challenges in scaling up training algorithms and storing the data. Especially in the context of search and machine translation, corpus sizes used in industrial practice have long exceeded the main memory capacity of a single machine. For example, [33] discusses training sets with 10^11 items and 10^9 distinct features, requiring novel algorithmic approaches and architectures. As a consequence, there has been a renewed emphasis on scaling up machine learning techniques by using massively parallel architectures; however, methods relying solely on parallelism can be expensive (both with regards to hardware requirements and energy costs) and often induce significant additional communication and data distribution overhead.

This work approaches the challenges posed by large datasets by leveraging techniques from the area of similarity search [2], where similar increases in data sizes have made the storage and computational requirements for computing exact distances prohibitive, thus making data representations that allow compact storage and efficient approximate similarity computation necessary. The method of b-bit minwise hashing [26-28] is a recent advance for efficiently (in both time and space) computing resemblances among extremely high-dimensional (e.g., 2^64) binary vectors. In this paper, we show that b-bit minwise hashing can be seamlessly integrated with linear Support Vector Machine (SVM) [13, 18, 20, 31, 35] and logistic regression solvers.

1.1 Ultra High-Dimensional Large Datasets and Memory Bottlenecks

In the context of search, a standard procedure to represent documents (e.g., Web pages) is to use w-shingles (i.e., w contiguous words), where w ≥ 5 in several studies [6, 7, 14]. This procedure can generate datasets of extremely high dimensions. For example, suppose we only consider 10^5 common English words. Using w = 5 may require the size of the dictionary Ω to be D = |Ω| = 10^25 ≈ 2^83. In practice, D = 2^64 often suffices, as the number of available documents may not be large enough to exhaust the dictionary.
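As a concrete, purely illustrative sketch of this representation, the following Python snippet extracts the set of w-shingles of a document and maps each shingle into a 64-bit ID space. The specific hash function is our choice, not one from the papers cited above; any well-mixed 64-bit hash would play the same role.

    import hashlib
    import re

    def shingle_ids(text, w=5):
        """Return the set of 64-bit IDs of the w-shingles of a document."""
        words = re.findall(r"[a-z0-9]+", text.lower())
        shingles = {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}
        def h64(s):  # any well-mixed 64-bit hash works here
            return int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")
        return {h64(s) for s in shingles}

    doc = "the quick brown fox jumps over the lazy dog near the quick brown fox"
    print(len(shingle_ids(doc)))  # number of distinct 5-shingles, each a 64-bit ID

Note how the representation is a set: only presence matters, which anticipates the 0/1 view used throughout the paper.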
For w-shingle data, normally only absence/presence (0/1) information is used, as it is known that word frequency distributions within documents approximately follow a power-law [3], meaning that most single terms occur rarely, thereby making it unlikely for a w-shingle to occur more than once in a document. Interestingly, even when the data are not too high-dimensional, empirical studies [8, 17, 19] achieved good performance with binary-quantized data.

When the data can fit in memory, linear SVM training is often extremely efficient after the data are loaded into the memory. It is however often the case that, for very large datasets, the data loading time dominates the computing time for solving the SVM problem [35]. A more severe problem arises when the data can not fit in memory. This situation can be common in practice. The publicly available webspam dataset (in LIBSVM format) needs about 24GB disk space, which exceeds the memory capacity of many desktop PCs. Note that webspam, which contains only 350,000 documents represented by 3-shingles, is still very small compared to industry applications [33].

1.2 Our Proposal

We propose a solution which leverages b-bit minwise hashing. Our approach assumes the data vectors are binary, high-dimensional, and relatively sparse, which is generally true of text documents represented via shingles. We apply b-bit minwise hashing to obtain a compact representation of the original data. In order to use the technique for efficient learning, we have to address several issues:

• We need to prove that the matrices generated by b-bit minwise hashing are positive definite, which will provide the solid foundation for our proposed solution.
• If we use b-bit minwise hashing to estimate the resemblance, which is nonlinear, how can we effectively convert this nonlinear problem into a linear problem?
• Compared to other hashing techniques such as random projections, Count-Min (CM) sketch [11], or Vowpal Wabbit (VW) [32, 34], does our approach exhibit advantages?

It turns out that our proof in the next section that b-bit hashing matrices are positive definite naturally provides the construction for converting the otherwise nonlinear SVM problem into linear SVM.

2 Review of Minwise Hashing and b-Bit Minwise Hashing

Minwise hashing [6, 7] has been successfully applied to a wide range of real-world problems [4, 6, 7, 9, 10, 12, 15, 16, 30], for efficiently computing set similarities. Minwise hashing mainly works well with binary data, which can be viewed either as 0/1 vectors or as sets. Given two sets, S1, S2 ⊆ Ω = {0, 1, 2, ..., D − 1}, a widely used measure of similarity is the resemblance R:

    R = |S1 ∩ S2| / |S1 ∪ S2| = a / (f1 + f2 − a),   where f1 = |S1|, f2 = |S2|, a = |S1 ∩ S2|.   (1)

Applying a random permutation π : Ω → Ω on S1 and S2, the collision probability is simply

    Pr(min(π(S1)) = min(π(S2))) = |S1 ∩ S2| / |S1 ∪ S2| = R.   (2)

One can repeat the permutation k times: π1, π2, ..., πk, to estimate R without bias. The common practice is to store each hashed value, e.g., min(π(S1)) and min(π(S2)), using 64 bits [14]. The storage (and computational) cost will be prohibitive in truly large-scale (industry) applications [29].

b-bit minwise hashing [27] provides a strikingly simple solution to this (storage and computational) problem by storing only the lowest b bits (instead of 64 bits) of each hashed value.
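The collision probability (2) is easy to verify in simulation. The sketch below (illustrative, NumPy-only, with toy sizes of our choosing) estimates R for two sets from k random permutations and compares it with the exact value:

    import numpy as np

    rng = np.random.default_rng(0)
    D, k = 10_000, 500
    S1 = set(rng.choice(D, 800, replace=False).tolist())
    S2 = set(list(S1)[:400] + rng.choice(D, 400, replace=False).tolist())
    R_true = len(S1 & S2) / len(S1 | S2)

    matches = 0
    for _ in range(k):
        pi = rng.permutation(D)                    # a random permutation of Omega
        if min(pi[list(S1)]) == min(pi[list(S2)]):
            matches += 1                           # collision <=> equal minima, see (2)
    print(R_true, matches / k)                     # the two values should be close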
For convenience, denote z1 = min(π(S1)) and z2 = min(π(S2)), and denote z1^(b) (z2^(b)) the integer value corresponding to the lowest b bits of z1 (z2). For example, if z1 = 7, then z1^(2) = 3.

Theorem 1 [27] Assume D is large.

    Pb = Pr(z1^(b) = z2^(b)) = C1,b + (1 − C2,b) R,   (3)

    r1 = f1/D,  r2 = f2/D,  f1 = |S1|,  f2 = |S2|,

    C1,b = A1,b r2/(r1 + r2) + A2,b r1/(r1 + r2),
    C2,b = A1,b r1/(r1 + r2) + A2,b r2/(r1 + r2),

    A1,b = r1 [1 − r1]^(2^b − 1) / (1 − [1 − r1]^(2^b)),
    A2,b = r2 [1 − r2]^(2^b − 1) / (1 − [1 − r2]^(2^b)).

This (approximate) formula (3) is remarkably accurate, even for very small D; see Figure 1 in [25]. We can then estimate Pb (and R) from k independent permutations:

    R̂b = (P̂b − C1,b) / (1 − C2,b),   (4)

    Var(R̂b) = Var(P̂b) / [1 − C2,b]^2 = (1/k) [C1,b + (1 − C2,b)R] [1 − C1,b − (1 − C2,b)R] / [1 − C2,b]^2.

It turns out that our method only needs P̂b for linear learning, i.e., there is no need to explicitly estimate R.

3 Kernels from Minwise Hashing and b-Bit Minwise Hashing

Definition: A symmetric n × n matrix K satisfying Σ_{ij} c_i c_j K_{ij} ≥ 0, for all real vectors c, is called positive definite (PD). Note that here we do not differentiate PD from nonnegative definite.

Theorem 2 Consider n sets S1, ..., Sn ⊆ Ω = {0, 1, ..., D − 1}. Apply one permutation π to each set. Define z_i = min{π(S_i)} and z_i^(b) the lowest b bits of z_i. The following three matrices are PD.

1. The resemblance matrix R ∈ R^{n×n}, whose (i, j)-th entry is the resemblance between set S_i and set S_j: R_{ij} = |S_i ∩ S_j| / |S_i ∪ S_j| = |S_i ∩ S_j| / (|S_i| + |S_j| − |S_i ∩ S_j|).

2. The minwise hashing matrix M ∈ R^{n×n}: M_{ij} = 1{z_i = z_j}.

3. The b-bit minwise hashing matrix M^(b) ∈ R^{n×n}: M_{ij}^(b) = 1{z_i^(b) = z_j^(b)}.

Consequently, consider k independent permutations and denote M_(s)^(b) the b-bit minwise hashing matrix generated by the s-th permutation. Then the summation Σ_{s=1}^k M_(s)^(b) is also PD.

Proof: A matrix A is PD if it can be written as an inner product B^T B. Because

    M_{ij} = 1{z_i = z_j} = Σ_{t=0}^{D−1} 1{z_i = t} · 1{z_j = t},   (5)

M_{ij} is the inner product of two D-dimensional vectors. Thus, M is PD. Similarly, M^(b) is PD because M_{ij}^(b) = Σ_{t=0}^{2^b − 1} 1{z_i^(b) = t} · 1{z_j^(b) = t}. R is PD because R_{ij} = Pr{M_{ij} = 1} = E(M_{ij}) and M_{ij} is the (i, j)-th element of the PD matrix M. Note that the expectation is a linear operation.

4 Integrating b-Bit Minwise Hashing with (Linear) Learning Algorithms

Linear algorithms such as linear SVM and logistic regression have become very powerful and extremely popular. Representative software packages include SVMperf [20], Pegasos [31], Bottou's SGD SVM [5], and LIBLINEAR [13]. Given a dataset {(x_i, y_i)}_{i=1}^n, x_i ∈ R^D, y_i ∈ {−1, 1}, the L2-regularized linear SVM solves the following optimization problem:

    min_w (1/2) w^T w + C Σ_{i=1}^n max{1 − y_i w^T x_i, 0},   (6)

and the L2-regularized logistic regression solves a similar problem:

    min_w (1/2) w^T w + C Σ_{i=1}^n log(1 + e^{−y_i w^T x_i}).   (7)

Here C > 0 is a regularization parameter. Since our purpose is to demonstrate the effectiveness of our proposed scheme using b-bit hashing, we simply provide results for a wide range of C values and assume that the best performance is achievable if we conduct cross-validations.

In our approach, we apply k random permutations on each feature vector x_i and store the lowest b bits of each hashed value. This way, we obtain a new dataset which can be stored using merely nbk bits. At run-time, we expand each new data point into a 2^b × k-length vector with exactly k 1's. For example, suppose k = 3 and the hashed values are originally {12013, 25964, 20191}, whose binary digits are {010111011101101, 110010101101100, 100111011011111}. Consider b = 2. Then the binary digits are stored as {01, 00, 11} (which corresponds to {1, 0, 3} in decimals). At run-time, we need to expand them into a vector of length 2^b k = 12, to be {0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0}, which will be the new feature vector fed to a solver such as LIBLINEAR. Clearly, this expansion is directly inspired by the proof that the b-bit minwise hashing matrix is PD in Theorem 2.
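The expansion step takes only a few lines. In the illustrative sketch below, the one-hot position counts down within each block purely to reproduce the worked example above; either ordering works for learning.

    import numpy as np

    def expand(bbit_values, b):
        """Expand k b-bit values into a binary vector of length 2^b * k with k ones."""
        k = len(bbit_values)
        v = np.zeros(2 ** b * k, dtype=np.int8)
        for j, val in enumerate(bbit_values):
            # within block j, positions represent values 2^b - 1 down to 0
            v[j * 2 ** b + (2 ** b - 1 - val)] = 1
        return v

    hashed = [12013, 25964, 20191]
    b = 2
    lowest = [h & (2 ** b - 1) for h in hashed]  # keep the lowest b bits -> [1, 0, 3]
    print(lowest)
    print(expand(lowest, b))                     # -> [0 0 1 0 0 0 0 1 1 0 0 0]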
5 Experimental Results on Webspam Dataset

Our experiment settings closely follow the work in [35]. They conducted experiments on three datasets, of which only the webspam dataset is public and reasonably high-dimensional (n = 350000, D = 16609143). Therefore, our experiments focus on webspam. Following [35], we randomly selected 20% of samples for testing and used the remaining 80% of samples for training. We chose LIBLINEAR as the workhorse to demonstrate the effectiveness of our algorithm. All experiments were conducted on workstations with Xeon(R) CPUs (W3690 @ 3.47GHz) and 48GB RAM, under the Windows 7 System. Thus, in our case, the original data (about 24GB in LIBSVM format) fit in memory. In applications when the data do not fit in memory, we expect that b-bit hashing will be even more substantially advantageous, because the hashed data are relatively very small. In fact, our experimental results will show that for this dataset, using k = 200 and b = 8 can achieve similar testing accuracies as using the original data. The effective storage for the reduced dataset (with 350K examples, using k = 200 and b = 8) would be merely about 70MB.

5.1 Experimental Results on Nonlinear (Kernel) SVM

We implemented a new resemblance kernel function and tried to use LIBSVM to train an SVM using the webspam dataset. The training time well exceeded 24 hours. Fortunately, using b-bit minwise hashing to estimate the resemblance kernels provides a substantial improvement. For example, with k = 150, b = 4, and C = 1, the training time is about 5185 seconds and the testing accuracy is quite close to the best results given by LIBLINEAR on the original webspam data.

5.2 Experimental Results on Linear SVM

There is an important tuning parameter C. To capture the best performance and ensure repeatability, we experimented with a wide range of C values (from 10^-3 to 10^2) with fine spacings in [0.1, 10]. We experimented with k = 10 to k = 500, and b = 1, 2, 4, 6, 8, 10, and 16. Figure 1 (average) and Figure 2 (std, standard deviation) provide the test accuracies. Figure 1 demonstrates that using b ≥ 8 and k ≥ 200 achieves similar test accuracies as using the original data.
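For readers who want to try the pipeline end to end, here is a self-contained sketch on synthetic binary data (webspam itself is large). It is illustrative only: the k permutations are simulated with random hash mixes, a common stand-in for true permutations, and all sizes and classifier settings are our own choices rather than the paper's.

    import numpy as np
    from scipy.sparse import csr_matrix
    from sklearn.svm import LinearSVC  # LinearSVC wraps LIBLINEAR

    rng = np.random.default_rng(0)
    D, n, k, b = 5000, 400, 50, 4
    y = rng.choice([-1, 1], n)
    # Two noisy class "prototypes"; each data point is a sparse binary vector (a set).
    protos = {c: rng.choice(D, 150, replace=False).tolist() for c in (-1, 1)}
    sets = [set(rng.choice(protos[c], 100, replace=False).tolist())
            | set(rng.choice(D, 30, replace=False).tolist()) for c in y]

    # k "permutations" simulated by random hash mixes modulo a Mersenne prime.
    P = 2**61 - 1
    mixes = [(int(a), int(c)) for a, c in rng.integers(1, P, size=(k, 2))]

    def bbit_columns(S):
        cols = []
        for j, (a, c) in enumerate(mixes):
            zmin = min((a * x + c) % P for x in S)       # minwise hashed value
            cols.append(j * 2**b + (zmin & (2**b - 1)))  # keep the lowest b bits
        return cols

    rows, cols = [], []
    for i, S in enumerate(sets):
        for col in bbit_columns(S):
            rows.append(i)
            cols.append(col)
    X = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, 2**b * k))
    clf = LinearSVC(C=1.0).fit(X[:300], y[:300])
    print("test accuracy:", clf.score(X[300:], y[300:]))

Each row of X has exactly k ones out of 2^b k columns, so the solver only ever touches nk non-zeros, which is what makes the training times reported below so small.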
Since our method is randomized, we repeated every experiment 50 times. We report both the mean and std values. Figure 2 illustrates that the stds are very small, especially with b ≥ 4. In other words, our algorithm produces stable predictions. For this dataset, the best performances were usually achieved at C ≥ 1.

[Figure: test accuracy (%) versus C (log scale, 10^-3 to 10^2), one panel per k in {30, 50, 100, 150, 200, 300, 400, 500}, with one curve per b in {1, 2, 4, 6, 8, 10, 16}.]

Figure 1: SVM test accuracy (averaged over 50 repetitions). With k ≥ 200 and b ≥ 8, b-bit hashing achieves very similar accuracies as using the original data (dashed, red if color is available).

[Figure: standard deviation of the test accuracy (%) versus C (log scale), panels for k in {50, 100, 200, 500}, with one curve per b.]

Figure 2: SVM test accuracy (std). The standard deviations are computed from 50 repetitions. When b ≥ 8, the standard deviations become extremely small (e.g., 0.02%).

Compared with the original training time (about 100 seconds), Figure 3 (upper panels) shows that our method only needs about 3 seconds (near C = 1). Note that our reported training time did not include data loading (about 12 minutes for the original data and 10 seconds for the hashed data). Compared with the original testing time (about 150 seconds), Figure 3 (bottom panels) shows that our method needs merely about 2 seconds. Note that the testing time includes the data loading time, as designed by LIBLINEAR. The efficiency of testing may be very important in practice, for example, when the classifier is deployed in a user-facing application (such as search), while the cost of training or preprocessing may be less critical and can be conducted off-line.

[Figure: training time (sec, upper panels) and testing time (sec, bottom panels) versus C (log scale), panels for k in {50, 100, 200, 500}.]

Figure 3: SVM training time (upper panels) and testing time (bottom panels). The original costs are plotted using dashed (red, if color is available) curves.
5.3 Experimental Results on Logistic Regression

Figure 4 presents the test accuracies and training time using logistic regression. Again, with k ≥ 200 and b ≥ 8, b-bit minwise hashing can achieve similar test accuracies as using the original data. The training time is substantially reduced, from about 1000 seconds to about 30 seconds only.

[Figure: logistic regression test accuracy (%) (upper panels) and training time (sec) (bottom panels) versus C (log scale), panels for k in {50, 100, 200, 500}, with one curve per b.]

Figure 4: Logistic regression test accuracy (upper panels) and training time (bottom panels).

In summary, it appears b-bit hashing is highly effective in reducing the data size and speeding up the training (and testing), for both SVM and logistic regression. We notice that when using b = 16, the training time can be much larger than when using b ≤ 8. Interestingly, we find that b-bit hashing can be easily combined with Vowpal Wabbit (VW) [34] to further reduce the training time when b is large.

6 Random Projections, Count-Min (CM) Sketch, and Vowpal Wabbit (VW)

Random projections [1, 24], Count-Min (CM) sketch [11], and Vowpal Wabbit (VW) [32, 34], as popular hashing algorithms for estimating inner products for high-dimensional datasets, are naturally applicable in large-scale learning. In fact, those methods are not limited to binary data. Interestingly, the three methods all have essentially the same variances. Note that in this paper, we use "VW" particularly for the hashing algorithm in [34], not the influential "VW" online learning platform.

6.1 Random Projections

Denote the first two rows of a data matrix by u1, u2 ∈ R^D. The task is to estimate the inner product a = Σ_{i=1}^D u1,i u2,i. The general idea is to multiply the data vectors by a random matrix {r_ij} ∈ R^{D×k}, where r_ij is sampled i.i.d. from the following generic distribution with [24]

    E(r_ij) = 0,  Var(r_ij) = 1,  E(r_ij^3) = 0,  E(r_ij^4) = s,  s ≥ 1.   (8)

Note that Var(r_ij^2) = E(r_ij^4) − E^2(r_ij^2) = s − 1 ≥ 0. This generates two k-dimensional vectors, v1 and v2:

    v1,j = Σ_{i=1}^D u1,i r_ij,   v2,j = Σ_{i=1}^D u2,i r_ij,   j = 1, 2, ..., k.   (9)
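The sparse family satisfying (8) is simple to sample from. Here is an illustrative NumPy sketch (dimensions are toy values of our choosing) that draws such a projection matrix and uses it to estimate an inner product:

    import numpy as np

    def sparse_projection_matrix(D, k, s, rng):
        """Entries: sqrt(s) w.p. 1/(2s), -sqrt(s) w.p. 1/(2s), 0 otherwise."""
        p = rng.random((D, k))
        R = np.zeros((D, k))
        R[p < 1 / (2 * s)] = np.sqrt(s)
        R[(p >= 1 / (2 * s)) & (p < 1 / s)] = -np.sqrt(s)
        return R

    rng = np.random.default_rng(0)
    D, k, s = 2000, 500, 3
    u1, u2 = rng.standard_normal(D), rng.standard_normal(D)
    R = sparse_projection_matrix(D, k, s, rng)
    v1, v2 = u1 @ R, u2 @ R
    a_hat = (v1 @ v2) / k        # the unbiased estimator in (10) below
    print(u1 @ u2, a_hat)

With s = 1 the zero mass vanishes and the matrix reduces to dense plus/minus ones, which, as shown next, is the variance-optimal choice.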
The only elementary distribution we know that satisfies (8) with s = 1 is the two point distribution in {?1, 1} with equal probabilities. [23] proposed an improved estimator for random projections as the solution to a cubic equation. Because it can not be written as an inner product, that estimator can not be used for linear learning. 6.2 Count-Min (CM) Sketch and Vowpal Wabbit (VW) Again, in this paper, ?VW? always refers to the hashing algorithm in [34]. VW may be viewed as a ?bias-corrected? version of the Count-Min (CM) sketch [11]. In the original CM algorithm, the key step is to independently and uniformly hash elements of the data vectors to k buckets and the hashed value is the sum of the elements in the bucket. That is h(i) = j with probability k1 , where  1 if h(i) = j j ? {1, 2, ..., k}. By writing Iij = , we can write the hashed data as 0 otherwise w1,j = D X w2,j = u1,i Iij , D X u2,i Iij (12) i=1 i=1 Pk The estimate a ?cm = j=1 w1,j w2,j is (severely) biased for estimating inner products. The original paper [11] suggested a ?count-min? step for positive data, by generating multiple independent estimates a ?cm and taking the minimum as the final estimate. That step but can not remove   can reduce P PD k 1 the bias. Note that the bias can be easily removed by using k?1 a . ?cm ? k i=1 u1,i D u 2,i i=1 [34] proposed a creative method for bias-correction, which consists of pre-multiplying (elementwise) the original data vectors with a random vector whose entries are sampled i.i.d. from the twopoint distribution in {?1, 1} with equal probabilities. Here, we consider the general distribution (8). After applying multiplication and hashing on u1 and u2 , the resultant vectors g1 and g2 are g1,j = D X g2,j = u1,i ri Iij , D X u2,i ri Iij , j = 1, 2, ..., k (13) i=1 i=1 where E(ri ) = 0, E(ri2 ) = 1, E(ri3 ) = 0, E(ri4 ) = s. We have the following Lemma. Theorem 3 a ?vw,s = k X g1,j g2,j , E(? avw,s ) = u1,i u2,i = a, (14) i=1 j=1 V ar(? avw,s ) = (s ? 1) D X D X i=1 u21,i u22,i # "D D D X 1 X 2 X 2 2 2 2 + u1,i u2,i  u +a ?2 u k i=1 1,i i=1 2,i i=1 (15) Interestingly, the variance (15) says we do need s = 1, otherwise the additional term (s ? PD 1) i=1 u21,i u22,i will not vanish even as the sample size k ? ?. In other words, the choice of random distribution in VW is essentially the only option if we want to remove the bias by premultiplying the data vectors (element-wise) with a vector of random variables. Of course, once we let s = 1, the variance (15) becomes identical to the variance of random projections (11). 6 7 Comparing b-Bit Minwise Hashing with VW (and Random Projections) 3 10 3 10 svm: VW vs b = 8 hashing Training time (sec) 100 10,100 100 10 C =1 1 98 C = 0.1 C = 0.1 96 94 C = 0.01 C = 0.01 92 90 88 86 84 logit: VW vs b = 8 hashing 82 Spam: Accuracy 80 1 2 3 4 5 6 10 10 10 10 10 10 k Training time (sec) 100 1,10,100 10,100 C =1 98 0.1 C = 0.1 96 C = 0.01 94 C = 0.01 92 90 88 86 84 svm: VW vs b = 8 hashing 82 Spam: Accuracy 80 1 2 3 4 5 6 10 10 10 10 10 10 k Accuracy (%) Accuracy (%) We implemented VW and experimented it on the same webspam dataset. Figure 5 shows that b-bit minwise hashing is substantially more accurate (at the same sample size k) and requires significantly less training time (to achieve the same accuracy). Basically, for 8-bit minwise hashing with k = 200 achieves similar test accuracies as VW with k = 104 ? 106 (note that we only stored the non-zeros). 
We have the following theorem.

Theorem 3

    â_{vw,s} = Σ_{j=1}^k g1,j g2,j,   E(â_{vw,s}) = Σ_{i=1}^D u1,i u2,i = a,   (14)

    Var(â_{vw,s}) = (s − 1) Σ_{i=1}^D u1,i^2 u2,i^2 + (1/k) [ Σ_{i=1}^D u1,i^2 Σ_{i=1}^D u2,i^2 + a^2 − 2 Σ_{i=1}^D u1,i^2 u2,i^2 ].   (15)

Interestingly, the variance (15) says we do need s = 1, otherwise the additional term (s − 1) Σ_{i=1}^D u1,i^2 u2,i^2 will not vanish even as the sample size k → ∞. In other words, the choice of random distribution in VW is essentially the only option if we want to remove the bias by pre-multiplying the data vectors (element-wise) with a vector of random variables. Of course, once we let s = 1, the variance (15) becomes identical to the variance of random projections (11).

7 Comparing b-Bit Minwise Hashing with VW (and Random Projections)

We implemented VW and experimented with it on the same webspam dataset. Figure 5 shows that b-bit minwise hashing is substantially more accurate (at the same sample size k) and requires significantly less training time (to achieve the same accuracy). Basically, 8-bit minwise hashing with k = 200 achieves similar test accuracies as VW with k = 10^4 to 10^6 (note that we only stored the non-zeros).

[Figure: test accuracy (%) and training time (sec) versus sample size k (log scale, 10^1 to 10^6), VW versus b = 8 hashing, for SVM and logistic regression, with curves for C in {0.01, 0.1, 1, 10, 100}.]

Figure 5: The dashed (red if color is available) curves represent b-bit minwise hashing results (only for k ≤ 500) while solid curves are for VW. We display results for C = 0.01, 0.1, 1, 10, 100.

This empirical finding is not surprising, because the variance of b-bit hashing is usually substantially smaller than the variance of VW (and random projections). In the technical report (arXiv:1106.0967, which also includes the complete proofs of the theorems presented in this paper), we show that, at the same storage cost, b-bit hashing usually improves VW by 10- to 100-fold, by assuming each sample of VW needs 32 bits to store. Of course, even if VW only stores each sample using 16 bits, an improvement of 5- to 50-fold would still be very substantial.

There is one interesting issue here. Unlike random projections (and minwise hashing), VW is a sparsity-preserving algorithm, meaning that in the resultant sample vector of length k, the number of non-zeros will not exceed the number of non-zeros in the original vector. In fact, it is easy to see that the fraction of zeros in the resultant vector would be (at least) (1 − 1/k)^c ≈ exp(−c/k), where c is the number of non-zeros in the original data vector. In this paper, we mainly focus on the scenario in which c ≫ k, i.e., we use b-bit minwise hashing or VW for the purpose of data reduction. However, in some cases, we care about c ≪ k, because VW is also an excellent tool for compact indexing. In fact, our b-bit minwise hashing scheme for linear learning may face such an issue.

8 Combining b-Bit Minwise Hashing with VW

In Figures 3 and 4, when b = 16, the training time becomes substantially larger than for b ≤ 8. Recall that at run-time we expand the b-bit minwise hashed data to sparse binary vectors of length 2^b k with exactly k 1's. When b = 16, the vectors are very sparse. On the other hand, once we have expanded the vectors, the task is merely computing inner products, for which we can use VW. Therefore, at run-time, after we have generated the sparse binary vectors of length 2^b k, we hash them using VW with sample size m (to differentiate from k).

How large should m be? Theorem 4 may provide an insight. Recall Section 2 provides the estimator, denoted by R̂_b, of the resemblance R using b-bit minwise hashing. Now, suppose we first apply VW hashing with size m on the binary vector of length 2^b k before estimating R, which will introduce some additional randomness. We denote the new estimator by R̂_{b,vw}. Theorem 4 provides its theoretical variance.

[Figure: test accuracy (%) (left panels) and training time (sec) (right panels) versus C, for SVM and logistic regression with 16-bit hashing combined with VW, k = 200.]

Figure 6: We apply VW hashing on top of the binary vectors (of length 2^b k) generated by b-bit hashing, with size m = 2^0 k, 2^1 k, 2^2 k, 2^3 k, 2^8 k, for k = 200 and b = 16. The numbers on the solid curves (0, 1, 2, 3, 8) are the exponents. The dashed (red if color is available) curves are the results from only using b-bit hashing. When m = 2^8 k, this method achieves similar test accuracies (left panels) while substantially reducing the training time (right panels).
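In code, the combination amounts to VW-hashing the implicit one-hot expansion without ever materializing the length-2^b k vector. A minimal illustrative sketch (all names, the hash mixes, and the sizes are our own choices):

    import numpy as np

    rng = np.random.default_rng(0)
    k, b = 200, 16
    m = 2**8 * k                       # the trade-off suggested by Theorem 4 below

    # Per-position bucket and sign, computed on the fly from hash mixes so that
    # no array of length 2^b * k is ever allocated.
    P = 2**61 - 1
    a1, c1, a2, c2 = (int(v) for v in rng.integers(1, P, 4))
    bucket = lambda pos: (a1 * pos + c1) % P % m
    sign = lambda pos: 1.0 if (a2 * pos + c2) % P % 2 else -1.0

    def vw_of_bbit(bbit_values):
        """VW-hash the implicit one-hot expansion of the b-bit values."""
        g = np.zeros(m)
        for j, val in enumerate(bbit_values):
            pos = j * 2**b + int(val)        # index of the single 1 in block j
            g[bucket(pos)] += sign(pos)
        return g

    x = rng.integers(0, 2**b, k)             # k b-bit hashed values of one point
    print(np.count_nonzero(vw_of_bbit(x)))   # at most k non-zeros out of m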
Theorem 4

    Var(R̂_{b,vw}) = Var(R̂_b) + (1/m) [1 + Pb^2 − Pb(1 + Pb)/k] / [1 − C2,b]^2,   (16)

where Var(R̂_b) = (1/k) Pb(1 − Pb) / [1 − C2,b]^2 is given by (4) and C2,b is the constant defined in Theorem 1.

Compared to the original variance Var(R̂_b), the additional term in (16) can be relatively large if m is small. Therefore, we should choose m ≫ k and m ≪ 2^b k. If b = 16, then m = 2^8 k may be a good trade-off. Figure 6 provides an empirical study to verify this intuition.
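A quick numeric pass over (16) gives a feel for the trade-off; note that the values of Pb and C2,b below are hypothetical, made up purely for illustration (they depend on the data through Theorem 1):

    k, b = 200, 16
    Pb, C2b = 0.5, 0.1  # hypothetical values for illustration only
    var_rb = Pb * (1 - Pb) / ((1 - C2b) ** 2 * k)
    for e in (0, 1, 2, 3, 8):
        m = 2 ** e * k
        extra = (1 + Pb ** 2 - Pb * (1 + Pb) / k) / ((1 - C2b) ** 2 * m)
        print(f"m = 2^{e} k: extra/Var(R_b) = {extra / var_rb:.2f}")
    # The extra term shrinks like 1/m; by m = 2^8 k it is roughly 2% of Var(R_b),
    # consistent with the choice highlighted in Figure 6.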
9 Limitations

While using b-bit minwise hashing for training linear algorithms is successful on the webspam dataset, it is important to understand the following three major limitations of the algorithm:

(A): Our method is designed for binary (0/1) sparse data.

(B): Our method requires an expensive preprocessing step for generating k permutations of the data. For most applications, we expect the preprocessing cost is not a major issue because the preprocessing can be conducted off-line (or combined with the data-collection step) and is easily parallelizable. However, even if the speed is not a concern, the energy consumption might be an issue, especially considering that (b-bit) minwise hashing is mainly used for industry applications. In addition, testing a new unprocessed data vector (e.g., a new document) will be expensive.

(C): Our method performs only reasonably well in terms of dimension reduction. The processed data need to be mapped into binary vectors in 2^b × k dimensions, which is usually not small. (Note that the storage cost is just bk bits.) For example, for the webspam dataset, using b = 8 and k = 200 seems to suffice, and 2^8 × 200 = 51200 is quite large, although it is much smaller than the original dimension of 16 million. It would be desirable if we could further reduce the dimension, because the dimension determines the storage cost of the model and (moderately) increases the training time for batch learning algorithms such as LIBLINEAR.

In hopes of fixing the above limitations, we experimented with an implementation using another hashing technique named Conditional Random Sampling (CRS) [21, 22], which is not limited to binary data and requires only one permutation of the original data (i.e., no expensive preprocessing). We achieved some limited success. For example, CRS compares favorably to VW in terms of storage (to achieve the same accuracy) on the webspam dataset. However, so far CRS can not compete with b-bit minwise hashing for linear learning (in terms of training speed, storage cost, and model size). The reason is that even though the estimator of CRS is an inner product, the normalization factors (i.e., the effective sample sizes of CRS) needed to ensure unbiased estimates differ substantially from pair to pair (which is a significant advantage in other applications). In our implementation, we could not use fully correct normalization factors, which led to severe bias of the inner product estimates and less than satisfactory performance of linear learning compared to b-bit minwise hashing.

10 Conclusion

As data sizes continue to grow faster than memory and computational power, statistical learning tasks in industrial practice are increasingly faced with training datasets that exceed the resources on a single server. A number of approaches have been proposed that address this by either scaling out the training process or partitioning the data, but both solutions can be expensive. In this paper, we propose a compact representation of sparse, binary data sets based on b-bit minwise hashing, which can be naturally integrated with linear learning algorithms such as linear SVM and logistic regression, leading to dramatic improvements in training time and/or resource requirements. We also compare b-bit minwise hashing with the Count-Min (CM) sketch and Vowpal Wabbit (VW) algorithms, which, according to our analysis, all have (essentially) the same variances as random projections [24]. Our theoretical and empirical comparisons illustrate that b-bit minwise hashing is significantly more accurate (at the same storage) for binary data. There are various limitations (e.g., expensive preprocessing) in our proposed method, leaving ample room for future research.

Acknowledgement

This work is supported by NSF (DMS-0808864), ONR (YIP-N000140910911), and a grant from Microsoft. We thank John Langford and Tong Zhang for helping us better understand the VW hashing algorithm, and Chih-Jen Lin for his patient explanation of the LIBLINEAR package and datasets.

References

[1] Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671-687, 2003.
[2] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In Commun. ACM, volume 51, pages 117-122, 2008.
[3] Harald Baayen. Word Frequency Distributions, volume 18 of Text, Speech and Language Technology. Kluwer Academic Publishers, 2001.
[4] Michael Bendersky and W. Bruce Croft. Finding text reuse on the web. In WSDM, pages 262-271, Barcelona, Spain, 2009.
[5] Leon Bottou. http://leon.bottou.org/projects/sgd.
[6] Andrei Z. Broder. On the resemblance and containment of documents. In the Compression and Complexity of Sequences, pages 21-29, Positano, Italy, 1997.
[7] Andrei Z. Broder, Steven C. Glassman, Mark S. Manasse, and Geoffrey Zweig. Syntactic clustering of the web. In WWW, pages 1157-1166, Santa Clara, CA, 1997.
[8] Olivier Chapelle, Patrick Haffner, and Vladimir N. Vapnik. Support vector machines for histogram-based image classification. IEEE Trans. Neural Networks, 10(5):1055-1064, 1999.
[9] Ludmila Cherkasova, Kave Eshghi, Charles B. Morrey III, Joseph Tucek, and Alistair C. Veitch. Applying syntactic similarity algorithms for enterprise information management. In KDD, pages 1087-1096, Paris, France, 2009.
[10] Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, Michael Mitzenmacher, Alessandro Panconesi, and Prabhakar Raghavan. On compressing social networks. In KDD, pages 219-228, Paris, France, 2009.
[11] Graham Cormode and S. Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. Journal of Algorithms, 55(1):58-75, 2005.
[12] Yon Dourisboure, Filippo Geraci, and Marco Pellegrini. Extraction and classification of dense implicit communities in the web graph. ACM Trans. Web, 3(2):1-36, 2009.
[13] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874, 2008.
[14] Dennis Fetterly, Mark Manasse, Marc Najork, and Janet L. Wiener. A large-scale study of the evolution of web pages. In WWW, pages 669-678, Budapest, Hungary, 2003.
[15] George Forman, Kave Eshghi, and Jaap Suermondt. Efficient detection of large-scale redundancy in enterprise file systems. SIGOPS Oper. Syst. Rev., 43(1):84-91, 2009.
[16] Sreenivas Gollapudi and Aneesh Sharma. An axiomatic approach for result diversification. In WWW, pages 381-390, Madrid, Spain, 2009.
[17] Matthias Hein and Olivier Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In AISTATS, pages 136-143, Barbados, 2005.
[18] Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In Proceedings of the 25th International Conference on Machine Learning, ICML, pages 408-415, 2008.
[19] Yugang Jiang, Chongwah Ngo, and Jun Yang. Towards optimal bag-of-features for object categorization and semantic video retrieval. In CIVR, pages 494-501, Amsterdam, Netherlands, 2007.
[20] Thorsten Joachims. Training linear SVMs in linear time. In KDD, pages 217-226, Pittsburgh, PA, 2006.
[21] Ping Li and Kenneth W. Church. Using sketches to estimate associations. In HLT/EMNLP, pages 708-715, Vancouver, BC, Canada, 2005. (The full paper appeared in Computational Linguistics in 2007.)
[22] Ping Li, Kenneth W. Church, and Trevor J. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. In NIPS, pages 873-880, Vancouver, BC, Canada, 2006. (Newer results appeared in NIPS 2008.)
[23] Ping Li, Trevor J. Hastie, and Kenneth W. Church. Improving random projections using marginal information. In COLT, pages 635-649, Pittsburgh, PA, 2006.
[24] Ping Li, Trevor J. Hastie, and Kenneth W. Church. Very sparse random projections. In KDD, pages 287-296, Philadelphia, PA, 2006.
[25] Ping Li and Arnd Christian König. Theory and applications of b-bit minwise hashing. In Commun. ACM, 2011.
[26] Ping Li and Arnd Christian König. Accurate estimators for improving minwise hashing and b-bit minwise hashing. Technical report, 2011 (arXiv:1108.0895).
[27] Ping Li and Arnd Christian König. b-bit minwise hashing. In WWW, pages 671-680, Raleigh, NC, 2010.
[28] Ping Li, Arnd Christian König, and Wenhao Gui. b-bit minwise hashing for estimating three-way similarities. In NIPS, Vancouver, BC, 2010.
[29] Gurmeet Singh Manku, Arvind Jain, and Anish Das Sarma. Detecting Near-Duplicates for Web-Crawling. In WWW, Banff, Alberta, Canada, 2007.
[30] Marc Najork, Sreenivas Gollapudi, and Rina Panigrahy. Less is more: sampling the neighborhood graph makes SALSA better and faster. In WSDM, pages 242-251, Barcelona, Spain, 2009.
[31] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, pages 807-814, Corvallis, Oregon, 2007.
[32] Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and S.V.N. Vishwanathan. Hash kernels for structured data. Journal of Machine Learning Research, 10:2615-2637, 2009.
[33] Simon Tong. Lessons learned developing a practical large scale machine learning system. http://googleresearch.blogspot.com/2010/04/lessons-learned-developing-practical.html, 2008.
[34] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, pages 1113-1120, 2009.
[35] Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang, and Chih-Jen Lin. Large linear classification when data cannot fit in memory. In KDD, pages 833-842, 2010.
3,760
4,404
Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability

David P. Reichert, Peggy Seriès, and Amos J. Storkey
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh, EH8 9AB
{d.p.reichert@sms., pseries@inf., a.storkey@} ed.ac.uk

Abstract

It has been argued that perceptual multistability reflects probabilistic inference performed by the brain when sensory input is ambiguous. Alternatively, more traditional explanations of multistability refer to low-level mechanisms such as neuronal adaptation. We employ a Deep Boltzmann Machine (DBM) model of cortical processing to demonstrate that these two different approaches can be combined in the same framework. Based on recent developments in machine learning, we show how neuronal adaptation can be understood as a mechanism that improves probabilistic, sampling-based inference. Using the ambiguous Necker cube image, we analyze the perceptual switching exhibited by the model. We also examine the influence of spatial attention, and explore how binocular rivalry can be modeled with the same approach. Our work joins earlier studies in demonstrating how the principles underlying DBMs relate to cortical processing, and offers novel perspectives on the neural implementation of approximate probabilistic inference in the brain.

1 Introduction

Bayesian accounts of cortical processing posit that the brain implements a probabilistic model to learn and reason about the causes underlying sensory inputs. The nature of the potential cortical model and its means of implementation are hotly debated. Of particular interest in this context is bistable perception, where the percept switches over time between two interpretations in the case of an ambiguous stimulus such as the Necker cube, or two different images that are presented to either eye in binocular rivalry [1]. In these cases, ambiguous or conflicting sensory input could result in a bimodal posterior over image interpretations in a probabilistic model, and perceptual bistability could reflect the specific way the brain explores and represents this posterior [2, 3, 4, 5, 6]. Unlike more classic explanations that explain bistability with low-level mechanisms such as neuronal fatigue (e.g. [7, 8]), perhaps making it more of an epiphenomenon, the probabilistic approaches see bistability as a fundamental aspect of how the brain implements probabilistic inference. Recently, it has been suggested that the cortex could employ approximate inference schemes, e.g. by estimating probability distributions with a set of samples, and studies show how electrophysiological [9] and psychophysical [10] data can be interpreted in that light. Gershman et al. [6] focus on binocular rivalry and point out how in particular Markov Chain Monte Carlo (MCMC) algorithms, where correlated samples are drawn over time to approximate distributions, might naturally account for aspects of perceptual bistability, such as its stochasticity and the fact that perception at any point in time only reflects an individual interpretation of the image rather than a full distribution over possibilities. Gershman et al. do not provide a concrete neural model, however. In earlier work, we considered Deep Boltzmann Machines (DBMs) as models of cortical perception, and related hierarchical inference in these generative models to hallucinations [11] and attention [12].
With the connection between MCMC and bistability established, it is natural to explore DBMs as models of bistability as well, because Gibbs sampling, an MCMC method, can be performed to do inference. Importantly from a neuroscientific perspective, Gibbs sampling in Boltzmann machines simply corresponds to the "standard" way of running the DBM as a neural network with stochastic firing of the units. However, it is well known that MCMC methods in general and Gibbs sampling in particular can be problematic in practice for complex, multi-modal distributions, as the sampling algorithm can get stuck in individual modes ("the chain does not mix"). In very recent machine learning work, Breuleux et al. [13] introduced a heuristic algorithm called Rates Fast Persistent Contrastive Divergence (rates-FPCD) that aims to improve sampling performance in a Boltzmann machine model by dynamically changing the model parameters, such as the connection strengths. In closely related work, Welling [14] suggested a potential connection to dynamic synapses in the brain. Hence, neuronal adaptation, here meant to be temporary changes to neuronal excitability and synaptic efficacy, could actually be seen as a means of enhancing sampling-based inference [2]. We thus aim to demonstrate how the low-level and probabilistic accounts of bistable perception can be combined. We present a biological interpretation of rates-FPCD in terms of neuronal adaptation, or neuronal fatigue and synaptic depression specifically. Using a DBM that was trained on the two interpretations of the Necker cube, we show how such adaptation leads to bistable switching of the internal representations when the model is presented with the actual ambiguous Necker cube. Moreover, we model the role of spatial attention in biasing the perceptual switching. Finally, we explore how the same approach can be applied also to binocular rivalry.

2 Neuronal adaptation in a Deep Boltzmann Machine

In this section we briefly introduce the DBM, the rates-FPCD algorithm as it was motivated from a machine learning perspective, and then explain the latter's relation to biology. A DBM [15] consists of stochastic binary units arranged hierarchically in several layers, with symmetric connections between layers and no connections within a layer. The first layer contains the visible units that are clamped to data, such as images, during inference, whereas the higher layers contain hidden units that learn representations from which they can generate the data in the visibles. With the states in layer k denoted by x^{(k)}, connection weights W^{(k)} and biases b^{(k)}, the probability for a unit to switch on is determined by the input it gets from adjacent layers, using a sigmoid activation function:

P(x_i^{(k)} = 1 | x^{(k-1)}, x^{(k+1)}) = \left(1 + \exp\left(-\sum_l w_{li}^{(k-1)} x_l^{(k-1)} - \sum_m w_{im}^{(k)} x_m^{(k+1)} - b_i^{(k)}\right)\right)^{-1}    (1)

Running the network by switching units on and off in this manner implements Gibbs sampling on a probability distribution determined by an energy function E,

P(x) \propto \exp(-E(x))    with    E(x) = \sum_k \left(-x^{(k)T} W^{(k)} x^{(k+1)} - x^{(k)T} b^{(k)}\right)    (2)

Intuitively speaking, when run, the model performs a random walk in the energy landscape shaped during learning, where it is attracted to ravines. Jumping between high-probability modes of the distribution corresponds to traversing from one ravine to another.
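For concreteness, below is a minimal NumPy sketch of the stochastic update of equation (1). It is our own illustration rather than code from the paper; the function names, array shapes, and the convention of passing a zero top-down input for the topmost layer are assumptions.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def gibbs_update_layer(x_below, x_above, W_below, W_above, b, rng):
        """One stochastic update of layer k, following equation (1): each unit
        switches on with probability sigmoid(bottom-up input + top-down input
        + bias). W_below has shape (units below, units here); W_above has shape
        (units here, units above). For the topmost layer, pass x_above = zeros."""
        activation = x_below @ W_below + W_above @ x_above + b
        p_on = sigmoid(activation)
        return (rng.rand(p_on.size) < p_on).astype(float)

Alternating such updates upward and downward through the layers, with the visibles clamped to the image, constitutes the "standard" stochastic run of the network and the sampling cycles used in the experiments below.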
2.1 Rates-FPCD, neuronal fatigue and synaptic depression

Unfortunately, for many realistically complex inference tasks MCMC methods such as Gibbs are prone to get stuck in individual modes, resulting in an incomplete exploration of the distribution, and there is much work in machine learning on improving sampling methods. One recently introduced algorithm is rates-FPCD (Rates Fast Persistent Contrastive Divergence) [13], which was utilized to sample from Restricted Boltzmann Machines (RBMs), the two-layer building blocks of DBMs. Rates-FPCD is based on FPCD [16], which is used for training. Briefly, in FPCD one contribution to the weight training updates requires the model to be run continuously and independently of the data to explore the probability distribution as it is currently learned. Here it is important that the model does not get stuck in individual modes. It was found that introducing a fast-changing component to the weights (and biases), to dynamically and temporarily change the energy landscape, can alleviate this problem. These fast weights W_f, which are added to the actual weights W, and the analogous fast biases b_f^{(k)} are updated according to

W_f ← λ W_f + ε (x^{(0)} p(x^{(1)} | x^{(0)})^T − x_0^{(0)} x_0^{(1)T}),    (3)
b_f^{(0)} ← λ b_f^{(0)} + ε (x^{(0)} − x_0^{(0)}),    (4)
b_f^{(1)} ← λ b_f^{(1)} + ε (p(x^{(1)} | x^{(0)}) − x_0^{(1)}).    (5)

Here, the visibles x^{(0)} are clamped to the current data item (in practice, minibatches are used). x_0^{(0)} and x_0^{(1)} are current samples from the freely run model. ε is a parameter determining the rate of adaptation, and λ ≤ 1 is a decay parameter that limits the amount of weight change contributed by the fast weights. The second term in each of the parentheses has the effect of changing the weights and biases such that whatever states are currently being sampled by the model are made less likely in the following. Hence, this will eventually "push" the model out of a mode it is stuck in. The first terms in the parentheses are computed over the data and lead to the model being drawn to states supported by the current input.

Computation of the first terms in the parentheses in equations 3-5 requires the training data. To turn FPCD into a general sampling algorithm applicable outside of training, when the training data is no longer around, rates-FPCD simply replaces the first terms with the so-called rates, which are the pairwise and unitary statistics averaged over all training data:

W_f ← λ W_f + ε (E[x^{(0)} x^{(1)T}] − x_0^{(0)} x_0^{(1)T}),    (6)
b_f^{(0)} ← λ b_f^{(0)} + ε (E[x^{(0)}] − x_0^{(0)}),    (7)
b_f^{(1)} ← λ b_f^{(1)} + ε (E[x^{(1)}] − x_0^{(1)})    (8)

(x^{(1)} is sampled conditioned on the data). The rates are to be computed during training, but can then be used for sampling afterwards. It was found that these terms sufficiently serve to stabilize the sampling scheme, and that rates-FPCD yielded improved performance over Gibbs sampling [13].

Let us consider equations 6-8 from a biological perspective, interpreting the weight parameters as synaptic strengths and the biases as some overall excitability level of a neuron. The equations suggest that the capability of the network to explore the state space is improved by dynamically adjusting the neuron's parameters (cf. e.g. [17]) depending on the current states of the neuron and its connected partners (second terms in parentheses), drawing them towards some set values (first terms, the rate statistics).
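As a sketch of how equations (6)-(8) can be realized, consider the following NumPy fragment. It is our own illustration; the variable names are assumptions, and the default λ and ε are the values reported later for the Necker cube runs. The effective parameters used when sampling are the sums W + W_f and b + b_f.

    import numpy as np

    def rates_fpcd_step(Wf, bf0, bf1, x0, x1, rate_W, rate_b0, rate_b1,
                        lam=0.95, eps=0.001):
        """One adaptation step, equations (6)-(8): the fast parameters decay
        by lam and are pushed away from the currently sampled states of the
        two connected layers (x0, x1) and towards the average training
        statistics (the 'rates'). Sampling then uses the effective parameters
        W + Wf, b0 + bf0, b1 + bf1."""
        Wf = lam * Wf + eps * (rate_W - np.outer(x0, x1))
        bf0 = lam * bf0 + eps * (rate_b0 - x0)
        bf1 = lam * bf1 + eps * (rate_b1 - x1)
        return Wf, bf0, bf1

With sparse activity the rate terms are small, so a neuron that fires persistently sees its effective bias and incoming weights decline, which is the fatigue/depression reading developed next.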
All that is needed for the latter is that the neuron stores its average firing activity during learning (for the bias statistics) and the synapses remember some average firing correlation between connected neurons (for the weight statistics). In particular, if activation patterns in the network are sparse and neurons are off most of the time, then these average terms will be rather low. During inference (applied in a DBM, not an RBM; see the next section), the neuron will fire strongly for its preferred stimulus (or stimulus interpretation), but then its firing probability will decrease as its excitability and synaptic efficacy drop, allowing the network to discover potential alternative interpretations of the stimulus. Thus, in the case of sparse activity, equations 6-8 implement a form of neuronal fatigue and synaptic depression. Preceding the introduction of rates-FPCD as a sampling algorithm, we also utilized the same mechanism (but only applied to the biases) in a biological model of hallucinations [11] to model homeostatic [18] regulation of neuronal firing. We showed how it helps to make the system more robust against noise corruption in the input, though it can lead to hallucinations under total sensory deprivation. Hence, the same underlying mechanisms could be understood either as short-term neuronal adaptation or as longer-term homeostatic regulation, depending on the time scales involved.

3 Experiments: Necker cube

We trained a DBM on binary images of cubes at various locations, representing the two unambiguous interpretations of the Necker cube, and then tested the model on the actual, ambiguous Necker cube (Figure 1a).

Figure 1: (a) Examples of the unambiguous training images (left) and the ambiguous test images (right). (b) During inference on an ambiguous image, the decoded hidden states reveal perceptual switching resulting from neuronal adaptation. Four consecutive sampling cycles are shown.

We use a similar setup to that described in [11, 12] (images of 28x28 pixels, three hidden layers with 26x26 units each; pre-training of the layers with CD-1, no training of the full DBM), with localized receptive fields whose size increased from lower to higher hidden layers, and sparsity encouraged simply by initializing the biases to negative values in training. As in the aforementioned studies, we are interested in what is inferred in the hidden layers as the image is presented in the visibles, and "decode" the hidden states by computing a reconstructed image for each hidden layer. To this end, starting with the states of the hidden layer of interest, the activations (i.e. firing probabilities) in each subsequent lower layer are computed deterministically in a single top-down pass, doubling the weights to compensate for the lack of bottom-up input, until a reconstructed image is obtained in the visibles. In this way, the reconstructed image is determined by the states in the initial layer alone, independently of the actual current states in the other layers.
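A minimal sketch of this decoding pass follows (our illustration, with assumed array conventions; the visible layer is given undoubled weights since, having no layer below, it receives no bottom-up input in any case):

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def decode_hidden_layer(h_top, weights, biases):
        """Deterministic top-down pass used to 'decode' hidden states into an
        image: activations (firing probabilities) are computed layer by layer,
        with doubled weights compensating for the missing bottom-up input.
        weights[k] has shape (units in layer k, units in layer k+1); biases[k]
        are the biases of layer k, with layer 0 being the visibles."""
        x = h_top
        for k in range(len(weights) - 1, -1, -1):
            scale = 2.0 if k > 0 else 1.0   # no doubling into the visibles
            x = sigmoid(scale * (weights[k] @ x) + biases[k])
        return x                             # reconstructed image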
When presented with a Necker cube image, the hidden states were found to converge within a few sampling cycles (each consisting of one up and one down pass of sampling all hidden layers) to one of the unambiguous interpretations and remained therein, exhibiting no perceptual switching to the respective alternative interpretation. We then employed rates-FPCD to model neuronal adaptation (λ = 0.95, ε = 0.001 for the Necker cube; λ = 0.9, ε = 0.002 for binocular rivalry, Section 4). It should be noted that unlike in [13], we utilize it in a DBM rather than an RBM, and during inference instead of when generating data samples (i.e. in our case the visibles are always clamped to an image). The rate statistics were computed by measuring unit activities and pairwise correlations when the trained model was run on the training data. With neuronal adaptation, the internal representations as decoded from the hidden layer were found to switch over time between the two image interpretations; thus the model exhibited perceptual bistability. An example of the switching of internal representations is displayed in Figure 1b. It can be observed that the perceptual state is most distinct in higher layers.

[Footnote: It should be noted that the behavior of the network will depend heavily on the specifics of the training and the data set used. We employed only the simplest training methods (layer-wise pre-training with CD-1 and no tuning of the full DBM) and do not claim that more advanced methods could not lead to better sampling behavior, especially for this simple toy data. Indeed, using PCD instead we found some spontaneous switching, though reconstructions were noisy. But for the argument at hand it is more important that, in general, bad mixing with these models can be a problem that might be alleviated by methods such as rates-FPCD; hence using a setup that exhibits this problem is useful to make the point.]

For quantitative analysis, we computed the squared reconstruction error of the image decoded from the topmost layer with regard to either of the two image interpretations. Plotted against time (Figure 2a), this shows how the internal representations evolve during a trial. The representations match one of the two image interpretations in a relatively stable manner over several sampling cycles, with some degradation before and a short transition phase during a perceptual switch.

Figure 2: (a) Time course of squared reconstruction errors of the decoded topmost hidden states w.r.t. either of the two image interpretations. Apart from during the transition periods, the percept at any point matches one (close to zero error) but not the other interpretation (high error). (b) Activation (i.e. firing probability) and mean synaptic strength (arbitrary origin and units) of a top layer unit that participates in coding for one but not the other interpretation (dashed line marks the currently active interpretation). Depression and recovery of synaptic efficacy during instantiation of the preferred and non-preferred interpretations, respectively, lead to changes in activation that precede the next perceptual switch.

To examine the effects of adaptation on an individual neuron, we picked a unit in the top layer that showed high variance in both its activity levels and neuronal parameters as they changed over the trial, indicating that this unit was involved in coding for one but not the other image interpretation. Figure 2b plots the time course of its activity levels (i.e. firing probability according to equation 1) and of the mean synaptic efficacy, i.e. weight strength, of connections to this unit (the changes to weights and biases are equivalent, so we show only the former). As expected, the firing probability of this unit is close to one for one of the interpretations and close to zero for the other, especially in the initial time period after a perceptual switch. However, as the neuron's firing rate and synaptic activity deviate from their low average levels, the synaptic efficacy changes as shown in the plot. For example, during instantiation of the preferred stimulus interpretation, the drop of neuronal excitability ultimately leads to a waning of activity that precedes and, together with the changes in the overall network, subsequently triggers the next perceptual switch. For another trial where we used an image of the Necker cube in a different position, the same unit showed constant low firing rates, indicating that it was not involved in representing that image. The neuronal parameters were then found to be stable throughout the trial, after a slight initial monotonic change that would allow the neuron to assume its low baseline activity as determined by the rate statistics. Moreover, other units were found to have a relatively stable high firing rate for a given image throughout the trial, coding for features of the stimulus that were common to both image interpretations, even though their neuronal parameters equally adapted due to their elevated activity. This is due to the extent of adaptation being limited by the decay parameter λ (equations 6-8), and shows that the adaptation can be set to be sufficiently strong to allow for exploration of the posterior, without overwhelming the representations of unambiguous image features. Similarly, we note that internal representations of the model when presented with the unambiguous images from the training set were stable under adaptation with our setting of parameter values.

We also quantified the statistics of perceptual switching by measuring the length of time the model's state would stay in either of the two interpretations for one of the test images. The resulting histograms of percept durations, i.e. time intervals between switches, are displayed in Figure 3a separately for the two interpretations. They are shaped like gamma or log-normal distributions, qualitatively in agreement with experimental results in human subjects [19]. There is a bias apparent in the model towards one of the interpretations (different for different images). Some biases are observed in humans (as visible in the data in [4]), potentially induced by statistical properties of the environment. However, our data set did not involve any biases, so this seems to be merely an artifact produced by the (basic) training procedure used.
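The percept durations can be read off the reconstruction-error traces with a few lines of bookkeeping. The sketch below is our own illustration: it labels each sampling cycle with the lower-error interpretation and collects the run lengths between switches; the very short runs it returns correspond to fluctuations during transition periods.

    import numpy as np

    def percept_durations(errors_a, errors_b):
        """Given per-cycle squared reconstruction errors against the two
        interpretations, label each sampling cycle with the interpretation of
        lower error (0 or 1) and return the run lengths (percept durations)
        between perceptual switches, separately for each interpretation."""
        labels = (np.asarray(errors_b) < np.asarray(errors_a)).astype(int)
        durations = {0: [], 1: []}
        run_label, run_len = labels[0], 0
        for lab in labels:
            if lab == run_label:
                run_len += 1
            else:
                durations[run_label].append(run_len)
                run_label, run_len = lab, 1
        durations[run_label].append(run_len)
        return durations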
Figure 3: (a) Histograms over percept durations between perceptual switches, for either interpretation (left and right, respectively) of one of the test images, with and without attention. Ignoring the peaks at small interval lengths, which stem from fluctuations during transitions, the histograms are very well fitted by log-normal distributions (black curves, omitted in the right figure to avoid clutter). Also plotted in both figures are histograms with spatial attention employed (see Section 3.1) to one of the interior corners of the Necker cube (as shown in (b)). The distributions shift or remain unchanged depending on whether the attended corner is salient or not for the image interpretation in question.

3.1 The role of spatial attention

The statistics of multistable perception can be influenced voluntarily by human subjects [20]. For the Necker cube, overtly directing one's gaze to corners of the cube, especially the interior ones, can have a biasing effect [21]. This could be explained by these features being in some way more salient for either of the two interpretations. An explanation matching our (simplified) setup would be that opaque cubes (as used in training) uniquely match one of the interpretations and lack one of the two interior corners. In the following, we model not eye movements but covert attention, involving only the shifting of an internal attentional "spotlight", which also has been shown to affect perceptual switching in the Necker cube [22] (we did not find an experimental study examining covert attention on the interior corners in unmodified Necker cubes, which is what we simulate). The presented image remained unchanged, and a spatial spotlight that biased the internal representations of the model was employed in the first hidden layer. To implement the spotlight, we made use of the fact that receptive fields were topographically organized, and that sparsity in a DBM breaks the symmetry between units being on and off and makes it possible to suppress represented information by suppressing the activity of specific hidden units [12]. We used a Gaussian-shaped spotlight that was centered at one of the salient internal corners of the Necker cube (Figure 3b) and applied it to the hidden units as additional negative biases, attenuating activity further away from the focus. The effect of attention on the percept durations for one of the test images is displayed in Figure 3a, together with the data obtained without attention for comparison. For the interpretation that matched the attended corner, we found a shift towards longer percept durations (Figure 3a, left), whereas the distribution for the other interpretation was relatively unchanged (Figure 3a, right). Averaged over all test images, the mean interval spent representing the interpretation favored by spatial attention saw a 25% increase vs. approximately no change for the other interpretation. Hence, in the model spatial attention prolongs the percept whose salient feature is being attended. This seems to be qualitatively in line with experimental data, at least in terms of voluntary attention having an effect, although specifics can depend on the nature of the stimulus and the details of the instructions given to experimental subjects [23].
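A sketch of such a spotlight, with illustrative parameter values of our own choosing (the paper does not report them): a Gaussian weighting over the topographically organized 26x26 first hidden layer, turned into additional negative biases that attenuate activity away from the attended location.

    import numpy as np

    def attention_spotlight_biases(side, focus, sigma, strength):
        """Gaussian spatial 'spotlight' realized as additional negative biases
        for a topographically organized side x side hidden layer: zero extra
        bias at the attended location `focus` (row, col), increasingly
        negative further away, suppressing activity outside the focus."""
        rows, cols = np.mgrid[0:side, 0:side]
        d2 = (rows - focus[0]) ** 2 + (cols - focus[1]) ** 2
        spotlight = np.exp(-d2 / (2.0 * sigma ** 2))   # 1 at focus, -> 0 away
        return -strength * (1.0 - spotlight)           # add (.ravel()) to biases

    # e.g. bias the 26x26 first hidden layer towards an interior corner
    # (focus, sigma and strength are assumed, illustrative values):
    extra_b1 = attention_spotlight_biases(26, focus=(10, 14), sigma=4.0,
                                          strength=2.0)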
4 Experiments: binocular rivalry

Several related studies that considered perceptual multistability in the light of probabilistic inference focused on binocular rivalry [2, 5, 6]. There, human observers are presented with a different image to each eye, and their perception is found to switch between the two images. Depending on specifics such as size and content of the images, perception can switch completely between the two images, fuse them, or do either to varying degrees over time [24, 25]. We demonstrate with a simple experiment that the phenomenon of binocular rivalry can be addressed in our framework as well. To this end, the same model architecture as before was used, but the number of visible units was doubled and the units were separated into left and right "eyes". During training, both sets of visibles simply received the same images. During testing, however, the left and right halves were set to independently drawn training images to simulate the binocular rivalry experiment. The units in the first hidden layer were set to be monocular in the sense that their receptive fields covered visible units only in either the left or the right half, whereas higher layers did not make this distinction. As a data set we used images containing either vertical or horizontal bars (Figure 4). As with the Necker cube, perceptual switching was observed with adaptation but not without. Generally, the perceptual state was found to be biased to one of the two images for some periods, while fusing the images to some extent during transition phases (Figure 5). Interestingly, whether fusing or alternation was more prominent depended on the nature of the conflict in the two input images: for images from the same category (both vertical or both horizontal bars), fusing occurred more often (Figure 5a), whereas for images from conflicting categories, the percept represented more distinctly either image and fusing happened primarily in transition periods (Figure 5b).

Figure 4: Example images for the binocular rivalry experiment. Training images (left) contained either horizontal or vertical bars, and the left and right image halves were identical (corresponding to the left and right "eyes"). For the test images (right), the left and right halves are drawn independently. They could come from the same category (top and bottom examples) or from conflicting categories (middle example).

Figure 5: For binocular rivalry, displayed are the squared reconstruction errors for decoded top layer representations computed against either of the two input images. (a) The input images came from the same category (here, vertical bars), and fusing of the percept was prominent, resulting in modest, similar errors for both images. (b) For input images from conflicting categories, the percept alternated more strongly between the images, although intermediate, fused states were still more prevalent than was the case for the Necker cube. The step-like changes in the error were found to result from individual bars appearing and disappearing in the percept.
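The stimulus construction just described can be sketched as follows; this is our own illustration, and details such as the number and placement of bars are assumptions not given in the paper.

    import numpy as np

    def bar_image(side=28, orientation="vertical", n_bars=3, rng=None):
        """Toy stimulus of the kind described: a binary image containing
        either vertical or horizontal bars at random positions."""
        rng = rng or np.random.RandomState(0)
        img = np.zeros((side, side))
        for pos in rng.choice(side, size=n_bars, replace=False):
            if orientation == "vertical":
                img[:, pos] = 1.0
            else:
                img[pos, :] = 1.0
        return img

    def rivalry_input(left, right):
        """Test input for the doubled visible layer: the left and right 'eye'
        halves are clamped to independently drawn images, which may come from
        the same or from conflicting categories."""
        return np.concatenate([left.ravel(), right.ravel()])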
We quantified this difference between fusing and alternation by computing the reconstruction errors from the decoded hidden states with regard to the two images, and taking the absolute difference averaged over the trial as a measure of how much the internal states were representing both images individually rather than fused versions. We found that this measure was more than two times higher for conflicting categories. This result is qualitatively in line with psychophysical experiments that showed fusing for differing but compatible images (e.g. different patches of the same source image) [24, 25].

5 Related work and discussion

Our study contributes to the emerging trend in computational neuroscience to consider approximate probabilistic inference in the brain (e.g. [9, 10]), and complements several recent papers that examine perceptual multistability in this light. Gershman et al. [6] argued for interpreting multistability as inference based on MCMC, focusing on binocular rivalry only. Importantly, they use Markov random fields as a high-level description of the perceptual problem itself (two possible "causes" generating the image, with a topology matching the stimulus). They argue that the brain might implement MCMC inference over these external variables, but do not make any statement w.r.t. the underlying neural mechanisms. In contrast, in our model MCMC is performed over the internal, neurally embodied latent variables that were learned from data. Bistability results from bimodality in the learned high-dimensional hidden representations, rather than directly from the problem formulation. In another study, Sundareswara and Schrater [4] model perceptual switching for the Necker cube, including the influence of image context, which we could explore in future work. Similar to [6], they start from a high-level description of the problem. They design a custom abstract inference process that makes different predictions from our model: in their model, samples are drawn i.i.d. from the two posterior modes representing the two interpretations and are accumulated over time, with older samples being exponentially discounted. A separate decision process selects from the samples and determines what interpretation reaches awareness. In our model, the current conscious percept is simply determined by the current overall state of the network, and the switching dynamics are a direct result of how this state evolves over time (as in [6]). Hohwy et al. [5] explain binocular rivalry descriptively in their predictive coding framework. They identify switching with exploration in an energy landscape, and suggest the contribution of stochasticity or adaptation, but they do not make the connection to sampling and do not provide a computational model. The work by Grossberg and Swaminathan [8] is an example of a non-probabilistic model of, among other things, Necker cube bistability, providing much biological detail, and considering the role of spatial attention. Their study is also an instance of an approach that bases the switching on neuronal adaptation, but does not see a functional role for multistability as such, relegating instead the functional relevance of adaptation to a role it plays during learning only. Similarly, in earlier work Dayan [2] utilizes an ad-hoc adaptation process in a deterministic probabilistic model of binocular rivalry. He suggests sampling could provide stochasticity, wondering about the relation between sampling and adaptation. This is what we have addressed here.
Indeed, our approach is supported by recent psychophysics results [26], which indicate that both noise and neuronal adaptation are necessary to explain binocular rivalry. We note that our setup is of course a simplification and abstraction in that we do not explicitly model depth. Indeed, in perceiving the Necker cube one does not see the actually opaque cubes we used in training, but rather a 3D wireframe cube. Peculiarly, this is actually contrary to the depth information available, as a (2D) image of a cube is not actually a 3D cube, but a collection of lines on a flat surface. How is a paradoxically "flat 3D cube" represented in the brain? In a hierarchical architecture consisting of specialized areas, this might be realized by having a high-level area that codes for objects (e.g. area IT in the cortex) represent a 3D cube, whereas another area that is primarily involved with depth as such represents a flat surface. Our work here and earlier [11, 12] showed that in a DBM, different hidden layers can represent different and partially conflicting information (cf. Figure 1b). Finally, we also note that in preliminary experiments with depth information (using real-valued visibles) perceptual switching did still occur.

In conclusion, we provided a biological interpretation of rates-FPCD, and thus showed how two seemingly distinct explanations for perceptual multistability, probabilistic inference and neuronal adaptation, can be merged in one framework. Unlike other approaches, our account combines sampling-based inference and adaptation in a concrete neural architecture utilizing learned representations of images. Moreover, our study further demonstrates the relevance of DBMs as cortical models [11, 12]. We believe that further developing hybrid approaches, combining probabilistic models, dynamical systems, and classic connectionist networks, will help identify the neural substrate of the Bayesian brain hypothesis.

Acknowledgments

Supported by the EPSRC, MRC and BBSRC. We thank N. Heess and the reviewers for comments.

References

[1] Leopold and Logothetis (1999) Multistable phenomena: changing views in perception. Trends in Cognitive Sciences, 3, 254-264, PMID: 10377540.
[2] Dayan, P. (1998) A hierarchical model of binocular rivalry. Neural Computation, 10, 1119-1135.
[3] van Ee, R., Adams, W. J., and Mamassian, P. (2003) Bayesian modeling of cue interaction: bistability in stereoscopic slant perception. Journal of the Optical Society of America A, 20, 1398-1406.
[4] Sundareswara, R. and Schrater, P. R. (2008) Perceptual multistability predicted by search model for Bayesian decisions. Journal of Vision, 8, 1-19.
[5] Hohwy, J., Roepstorff, A., and Friston, K. (2008) Predictive coding explains binocular rivalry: An epistemological review. Cognition, 108, 687-701.
[6] Gershman, S., Vul, E., and Tenenbaum, J. (2009) Perceptual multistability as Markov chain Monte Carlo inference. Advances in Neural Information Processing Systems 22.
[7] Blake, R. (1989) A neural theory of binocular rivalry. Psychological Review, 96, 145-167, PMID: 2648445.
[8] Grossberg, S. and Swaminathan, G. (2004) A laminar cortical model for 3D perception of slanted and curved surfaces and of 2D images: development, attention, and bistability. Vision Research, 44, 1147-1187.
[9] Fiser, J., Berkes, B., Orban, G., and Lengyel, M. (2010) Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 14, 119-130.
[10] Vul, E., Goodman, N. D., Griffiths, T. L., and Tenenbaum, J. B. (2009) One and done? Optimal decisions from very few samples. Proceedings of the 31st Annual Conference of the Cognitive Science Society.
[11] Reichert, D. P., Seriès, P., and Storkey, A. J. (2010) Hallucinations in Charles Bonnet Syndrome induced by homeostasis: a Deep Boltzmann Machine model. Advances in Neural Information Processing Systems 23, 23, 2020-2028.
[12] Reichert, D. P., Seriès, P., and Storkey, A. J. (2011) A hierarchical generative model of recurrent Object-Based attention in the visual cortex. Proceedings of the International Conference on Artificial Neural Networks (ICANN-11).
[13] Breuleux, O., Bengio, Y., and Vincent, P. (2011) Quickly generating representative samples from an RBM-Derived process. Neural Computation, pp. 1-16.
[14] Welling, M. (2009) Herding dynamical weights to learn. Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, Quebec, Canada, pp. 1121-1128, ACM.
[15] Salakhutdinov, R. and Hinton, G. (2009) Deep Boltzmann machines. Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 5, pp. 448-455.
[16] Tieleman, T. and Hinton, G. (2009) Using fast weights to improve persistent contrastive divergence. Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, Quebec, Canada, pp. 1033-1040, ACM.
[17] Maass, W. and Zador, A. M. (1999) Dynamic stochastic synapses as computational units. Neural Computation, 11, 903-917.
[18] Turrigiano, G. G. (2008) The self-tuning neuron: synaptic scaling of excitatory synapses. Cell, 135, 422-435, PMID: 18984155.
[19] Zhou, Y. H., Gao, J. B., White, K. D., Yao, K., and Merk, I. (2004) Perceptual dominance time distributions in multistable visual perception. Biological Cybernetics, 90, 256-263.
[20] Meng, M. and Tong, F. (2004) Can attention selectively bias bistable perception? Differences between binocular rivalry and ambiguous figures. Journal of Vision, 4.
[21] Toppino, T. C. (2003) Reversible-figure perception: mechanisms of intentional control. Perception & Psychophysics, 65, 1285-1295, PMID: 14710962.
[22] Peterson, M. A. and Gibson, B. S. (1991) Directing spatial attention within an object: Altering the functional equivalence of shape descriptions. Journal of Experimental Psychology: Human Perception and Performance, 17, 170-182.
[23] van Ee, R., Noest, A. J., Brascamp, J. W., and van den Berg, A. V. (2006) Attentional control over either of the two competing percepts of ambiguous stimuli revealed by a two-parameter analysis: means do not make the difference. Vision Research, 46, 3129-3141, PMID: 16650452.
[24] Tong, F., Meng, M., and Blake, R. (2006) Neural bases of binocular rivalry. Trends in Cognitive Sciences, 10, 502-511.
[25] Knapen, T., Kanai, R., Brascamp, J., van Boxtel, J., and van Ee, R. (2007) Distance in feature space determines exclusivity in visual rivalry. Vision Research, 47, 3269-3275, PMID: 17950397.
[26] Kang, M. and Blake, R. (2010) What causes alternations in dominance during binocular rivalry? Attention, Perception, & Psychophysics, 72, 179-186.
3,761
4,405
Thinning Measurement Models and Questionnaire Design

Ricardo Silva
Department of Statistical Science, University College London
Gower Street, London WC1E 6BT
[email protected]

Abstract

Inferring key unobservable features of individuals is an important task in the applied sciences. In particular, an important source of data in fields such as marketing, social sciences and medicine is questionnaires: answers in such questionnaires are noisy measures of target unobserved features. While comprehensive surveys help to better estimate the latent variables of interest, aiming at a high number of questions comes at a price: refusal to participate in surveys can go up, as well as the rate of missing data; quality of answers can decline; costs associated with applying such questionnaires can also increase. In this paper, we cast the problem of refining existing models for questionnaire data as follows: solve a constrained optimization problem of preserving the maximum amount of information found in a latent variable model using only a subset of existing questions. The goal is to find an optimal subset of a given size. For that, we first define an information-theoretical measure for quantifying the quality of a reduced questionnaire. Three different approximate inference methods are introduced to solve this problem. Comparisons against a simple but powerful heuristic are presented.

1 Contribution

A common goal in the applied sciences is to measure concepts of interest that are not directly observable (Bartholomew et al., 2008). Such is the case in the social sciences, medicine, economics and other fields, where quantifying key attributes such as "consumer satisfaction", "anxiety" and "recession" requires the development of indicators: observable variables that are postulated to measure the target latent variables up to some measurement error (Bollen, 1989; Carroll et al., 1995). In a probabilistic framework, this often boils down to a latent variable model (Bishop, 1998). One common setup is to assume each observed indicator Y_i as being generated independently given the set of latent variables X. Conditioning on any given observed data point Y gives information about the distribution of the latent vector X, which can then be used for ranking, clustering, visualization or smoothing, among other tasks. Figure 1 provides an illustration. Questionnaires from large surveys are sometimes used to provide such indicators, each Y_i recording an answer that typically corresponds to a Bernoulli or ordinal variable. For instance, experts can be given questions concerning whether there is freedom of press in a particular nation, as a way of measuring its democratization level (Bollen, 1989; Palomo et al., 2007). Nations can then be clustered or ranked within an interpretable latent space. Long questionnaires nevertheless have drawbacks, as summarized by Stanton et al. (2002) in the context of psychometric studies:

    Longer surveys take more time to complete, tend to have more missing data, and have higher refusal rates than short surveys. Arguably, then, techniques for reducing the length of scales while maintaining psychometric quality are worthwhile.

Figure 1: (a) A graphical representation of a latent variable model, with indicators Y1-Y5 measuring the latent variables X1 (Industrialization) and X2 (Democratization). Notice that in general latent variables will be dependent.
Here, the question is how to quantify democratization and industrialization levels of nations given observed indicators Y such as freedom of press and gross national product, among others (Bollen, 1989; Palomo et al., 2007). (b) An example of a result implied by the model (adapted from Palomo et al. (2007)): barplots of the conditional distribution of democratization levels given the observed indicators at two time points (Dem1960, Dem1965), ordered by the posterior mean industrialization level. The distribution of the latent variables given the observations is the basis of the analysis.

Our contribution is a methodology for choosing which indicators to preserve (e.g., which items to keep in a questionnaire) given: i.) a latent variable model specification of the domain of interest; ii.) a target number of indicators that should be preserved. To accomplish this, we provide: i.) a target objective function that quantifies the amount of information preserved by a choice of a subset of indicators, with respect to the full set; ii.) algorithms for optimizing this choice of subset with respect to the objective function. The general idea is to start with a target posterior distribution of latent variables, defined by some latent variable measurement model M (i.e., P_M(X | Y)). We want to choose a subset Y_z ⊆ Y so that the resulting conditional distribution P_M(X | Y_z) is as close as possible to the original one according to some metric. Model M is provided either by expertise or by numerous standard approaches that can be applied to learn it from data (e.g., methods in Bishop, 2009). We call this task measurement model thinning. Notice that the size of Y_z is a domain-dependent choice. Assuming M is a good model for the data, choosing a subset of indicators will incur some information loss. It is up to the analyst to choose a trade-off between loss of information and the design of simpler, cheaper ways of measuring latent variables. Even if a shorter questionnaire is not to be deployed, the outcome of measurement model thinning provides a formal sensitivity analysis of the target latent distribution with respect to the available indicators. The result is useful for generating different insights into the domain.

This paper is organized as follows: Section 2 defines a formal criterion to quantify how appropriate a subset Y_z is. Section 3 describes different approaches by which this criterion can be optimized. Related work is briefly discussed in Section 4. Experiments with synthetic and real data are discussed in Section 5, followed by the conclusion.

2 An Information-Theoretical Criterion

Our focus is on domains where latent variables are not a by-product of a dimensionality reduction technique, but the target of the analysis, as in the example of Figure 1. That is, measurement error problems where the variables to be recorded are designed specifically to obtain information concerning such unknowns (Carroll et al., 1995; Bartholomew et al., 2008). As such, we postulate that the outcome of any analysis should be a functional of P_M(X | Y), the conditional distribution of unobservables X given observables Y within a model M. It is assumed that M specifies the joint P_M(X, Y). We further assume that observed variables are conditionally independent given X, i.e. P_M(X, Y) = P_M(X) \prod_{i=1}^p P_M(Y_i | X), with p being the number of observed indicators.
If z ≡ (z_1, z_2, ..., z_p) is a binary vector of the same dimensionality as Y, and Y_z is the subset of Y corresponding to the non-zero entries of z, we can assess z by the KL divergence

KL(P_M(X | Y) || P_M(X | Y_z)) ≡ \int P_M(X | Y) \log \frac{P_M(X | Y)}{P_M(X | Y_z)} dX

This is well-defined, since both distributions lie in the same sample space despite the difference of dimensionality between Y and Y_z. Moreover, since Y is itself a random vector, our criterion becomes the expected KL divergence

\langle KL(P_M(X | Y) || P_M(X | Y_z)) \rangle_{P_M(Y)}

where \langle \cdot \rangle denotes expectation. Our goal is to minimize this function with respect to z. Rearranging this expression to drop all constants that do not depend on z, and multiplying it by -1 to get a maximization problem, we obtain the problem of finding z* such that

z* = argmax_z { \langle \log P_M(Y_z | X) \rangle_{P_M(X, Y_z)} - \langle \log P_M(Y_z) \rangle_{P_M(Y_z)} }
   = argmax_z { \sum_{i=1}^p z_i \langle \log P_M(Y_i | X) \rangle_{P_M(X, Y_i)} + H_M(Y_z) }
   ≡ argmax_z F_M(z)

subject to \sum_{i=1}^p z_i = K for a choice of K, and z_i ∈ {0, 1}. H_M(·) denotes here the entropy of a distribution parameterized by M. Notice we used the assumption that indicators are mutually independent given X. There is an intuitive appeal in having a joint entropy term to reward not only marginal relationships between indicators and latent variables, but also selections that are jointly diverse. Notice that optimizing this objective function turns out to be equivalent to minimizing the conditional entropy of latent variables given Y_z. Motivating conditional entropy from a more fundamental principle illustrates that other functions can be obtained by changing the divergence.

3 Approaches for Approximate Optimization

The problem of optimizing F_M(z) subject to the constraints \sum_{i=1}^p z_i = K, z_i ∈ {0, 1}, is hard not only for its combinatorial nature, but also due to the entropy term. This needs to be approximated, and the nature of the approximation should depend on the form taken by M. We will assume that it is possible to efficiently compute any marginals of P_M(Y) of modest dimensionality (say, 10 dimensions). This is the case, for instance, in the probit model for binary data:

X ~ N(0, Σ),    Y_i* ~ N(λ_i^T X + λ_{i;0}, 1),    Y_i = 1 if Y_i* > 0, and 0 otherwise,

where N(m, S) is the multivariate Gaussian distribution with mean m and covariance matrix S. The probit model is one of the most common latent variable models for questionnaire data (Bartholomew et al., 2008), with a straightforward extension to ordinal data. In this model, marginals for a few dozen variables can be obtained efficiently since this corresponds to calculating multivariate Gaussian probabilities (Genz, 1992). Parameters can be fit by a variety of methods (Hahn et al., 2010). We also assume that M allows for the computation of \langle \log P_M(Y_i | X) \rangle_{P_M(X, Y_i)} at little cost. Again, in the binary probit model this is simple, since it requires integrating away only a single binary variable Y_i and a univariate Gaussian λ_i^T X.

3.1 Gaussian Entropy

One approximation to F_M(z) is to replace its entropy term by the corresponding entropy from some Gaussian distribution P_N(Y_z). The entropy of a Gaussian distribution is proportional to the logarithm of the determinant of its covariance matrix, and hence can be computed in O(p^3) steps. This Gaussian can be chosen as the one closest to P_M(Y_z) in a KL(P_M || P_N) sense: that is, the one with the same first and second moments as P_M(Y_z). In our case, computing these moments can be done deterministically (up to numerical error) using standard bivariate quadrature methods. No expectation propagation (Minka, 2001) is necessary. The corresponding objective function is

F_{M;N}(z) ≡ \sum_{i=1}^p z_i \langle \log P_M(Y_i | X) \rangle_{P_M(X, Y_i)} + 0.5 \log |Σ_z|
3.1 Gaussian Entropy

One approximation to $F_M(z)$ is to replace its entropy term by the corresponding entropy of some Gaussian distribution $P_N(Y_z)$. The entropy of a Gaussian distribution is proportional to the logarithm of the determinant of its covariance matrix, and hence can be computed in $O(p^3)$ steps. This Gaussian can be chosen as the one closest to $P_M(Y_z)$ in a $KL(P_M \,\|\, P_N)$ sense: that is, the one with the same first and second moments as $P_M(Y_z)$. In our case, computing these moments can be done deterministically (up to numerical error) using standard bivariate quadrature methods; no expectation propagation (Minka, 2001) is necessary. The corresponding objective function is

  $F_{M;N}(z) \equiv \sum_{i=1}^{p} z_i \langle \log P_M(Y_i \mid X) \rangle_{P_M(X, Y_i)} + 0.5 \log |\Sigma_z|$

where $\Sigma_z$ is the covariance matrix of $Y_z$, which for binary and ordinal data has a sensible interpretation. This function is also an upper bound on the exact function $F_M(z)$, since the Gaussian is the distribution with the largest entropy for a given mean vector and covariance matrix. The resulting function is non-linear in $z$. In our experiments, we optimize for $z$ using a greedy scheme: for all possible pairs $(i, j)$ such that $z_i = 1$ and $z_j = 0$, we swap their values (so that $\sum_i z_i$ always equals $K$). We choose the pair with the highest increase in $F_{M;N}(z)$ and repeat the process until convergence.

3.2 Entropy with Bounded Neighborhoods

An alternative bound can be derived from a standard fact in information theory: $H(Y \mid S) \le H(Y \mid S')$ for $S' \subseteq S$, where $H(\cdot \mid \cdot)$ denotes conditional entropy. This was exploited by Globerson and Jaakkola (2007) to define an upper bound on the entropy of a distribution as follows: consider a permutation $e$ of the set $\{1, 2, \ldots, p\}$, with $e(i)$ being the $i$-th element of $e$. Denote by $e(1{:}i)$ the first $i$ elements of this permutation (an empty set if $i < 1$), and let $N(e, i)$ be a subset of $e(1{:}i-1)$. For a given set of variables $Y = \{Y_1, Y_2, \ldots, Y_p\}$ the following bound holds:

  $H(Y_1, Y_2, \ldots, Y_p) = \sum_{i=1}^{p} H(Y_{e(i)} \mid Y_{e(1:i-1)}) \le \sum_{i=1}^{p} H(Y_{e(i)} \mid Y_{N(e,i)})$   (1)

If each set $N(e, i)$ is no larger than some constant $D$, then this bound can be computed in $O(p \cdot 2^D)$ steps for binary probit models. The bound holds for any choice of $e$, but we want it to be as tight as possible so that it gets weighted in a reasonable way against the other terms in $F_M(\cdot)$. Since the entropy function is decomposable as a sum of functions that depend on $i$ and $N(e, i)$ only, one can minimize this bound with respect to $e$ by using permutation optimization methods such as (Jaakkola et al., 2010). In our implementation, we use a method similar to Teyssier and Koller (2005) that shuffles neighboring entries of $e$ to generate candidates, chooses the optimal $N(e, i)$ for each $i$ given the candidate $e$, and picks as the next permutation the candidate $e$ with the greatest decrease in the bound. Notice that a permutation choice $e$ and neighborhood choices $N(e, i)$ define a Bayesian network in which $Y_{N(e,i)}$ are the parents of $Y_{e(i)}$. Therefore, if this Bayesian network provides a good approximation to $P_M(Y)$, the bound will be reasonably tight.

Given $e$, we will further relax this bound with the goal of obtaining an integer programming formulation for the problem of optimizing an upper bound to $F_M(z)$. For any given $z$, we define the local term $H_L(z, i)$ as

  $H_L(z, i) \equiv H_M(Y_{e(i)} \mid Y_z \cap N(e, i)) = \sum_{S \in \mathcal{P}(N(e,i))} \left( \prod_{j \in S} z_j \right) \left( \prod_{k \in N(e,i) \setminus S} (1 - z_k) \right) H_M(Y_{e(i)} \mid S)$   (2)

where $\mathcal{P}(\cdot)$ denotes the power set of a set. The new approximate objective function becomes

  $F_{M;D}(z) \equiv \sum_{i=1}^{p} z_i \langle \log P_M(Y_i \mid X) \rangle_{P_M(X, Y_i)} + \sum_{i=1}^{p} z_{e(i)} H_L(z, i)$   (3)

Notice that $H_L(z, i)$ is still an upper bound on $H_M(Y_{e(i)} \mid Y_{e(1:i-1)})$. The intuition is that we are bounding $H_M(Y_z)$ by the entropy of a Bayesian network in which a vertex $Y_{e(i)}$ is included if $z_{e(i)} = 1$, with corresponding parents given by $Y_z \cap N(e, i)$. This is a well-defined Bayesian network for any choice of $z$. The shortcoming is that ideally we would like this Bayesian network to be the actual marginal of the model given by $e$ and $N(e, i)$. It is not: if the network implied by $e$ and $N(e, i)$ were, for instance, $Y_1 \rightarrow Y_2 \rightarrow Y_3$, the choice of $z = (1, 0, 1)$ would result in the entropy of the disconnected graph $\{Y_1, Y_3\}$, while the true marginal would correspond instead to the graph $Y_1 \rightarrow Y_3$. However, our simplified marginalization has the advantage of avoiding an intractable problem.
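To make Eq. (2) concrete, here is a small sketch of evaluating $H_L(z, i)$ by enumerating the power set of $N(e, i)$. The conditional entropies $H_M(Y_{e(i)} \mid Y_S)$ are assumed to be precomputed and stored in a dictionary; all names are illustrative, not from the paper.

```python
from itertools import chain, combinations

def power_set(items):
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def H_L(z, i, e, N, H_cond):
    """Local entropy term of Eq. (2).

    z      : dict mapping variable id -> 0/1 selection
    e      : permutation as a list of variable ids (e[i] is the i-th variable)
    N      : dict mapping position i -> neighborhood N(e, i), a set of variable ids
    H_cond : dict mapping (variable id, frozenset of conditioning ids) -> entropy
    """
    total = 0.0
    for S in map(set, power_set(N[i])):
        weight = 1.0
        for j in S:
            weight *= z[j]              # prod_{j in S} z_j
        for k in N[i] - S:
            weight *= 1 - z[k]          # prod_{k in N(e,i)\S} (1 - z_k)
        if weight:                      # for binary z, only S = Y_z intersect N(e, i) survives
            total += weight * H_cond[(e[i], frozenset(S))]
    return total
```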
The simplified form also allows us to redefine the problem as an integer linear program (ILP). Each product $z_{e(i)} \prod_{j \in S} z_j \prod_{k \in N(e,i) \setminus S} (1 - z_k)$ appearing in (3) results in a sum of $O(2^D)$ terms, each of which has (up to a sign) the form $q_M \equiv \prod_{m \in M} z_m$ for some set $M$. It is still the case that $q_M \in \{0, 1\}$. Therefore, objective function (3) can be interpreted as being linear in a set of binary variables $\{\{z\}, \{q\}\}$. We further need to enforce the constraints coming from

  $q_M = 1 \Leftrightarrow \{\forall m \in M,\ z_m = 1\}$;  $q_M = 0 \Leftrightarrow \{\exists m \in M \text{ s.t. } z_m = 0\}$

It is well-known (Glover and Woolsey, 1974) that this corresponds to the linear constraints

  $q_M = 1 \Leftrightarrow \{\forall m \in M,\ z_m = 1\}$  is enforced by  $q_M - z_m \le 0$ for all $m \in M$
  $q_M = 0 \Leftrightarrow \{\exists m \in M \text{ s.t. } z_m = 0\}$  is enforced by  $\sum_{m \in M} z_m - q_M \le |M| - 1$

which, combined with the linear constraint $\sum_{i=1}^{p} z_i = K$, implies that optimizing $F_{M;D}(z)$ is an ILP with $O(p \cdot 2^D)$ variables and $O(p^2 \cdot 2^D)$ constraints. In our experiments in Section 5, we were able to solve essentially all such ILPs exactly using linear programming relaxations with branch-and-bound.

3.3 Entropy with Tree-Structured Bounds

The previous bound simplifies marginalization, which might badly overestimate entropies when the corresponding $Y_z$ are uniformly spread out in permutation $e$. We now propose a different type of bound which treats different marginalizations on an equal footing. It comes from the following observation: since $H(Y_{e(i)} \mid Y_{e(1:i-1)})$ is less than or equal to any conditional entropy $H(Y_{e(i)} \mid Y_j)$ for $j \in e(1{:}i-1)$, the tightest bound given by singleton conditioning sets is $H(Y_{e(i)} \mid Y_{e(1:i-1)}) \le \min_{j \in e(1:i-1)} H_M(Y_{e(i)} \mid Y_j)$, resulting in the objective function

  $F_{M;tree}(z) \equiv \sum_{i=1}^{p} z_i \langle \log P_M(Y_i \mid X) \rangle_{P_M(X, Y_i)} + \sum_{i=1}^{p} z_{e(i)} \cdot \min_{\{Y_j \in Y_{e(1:i-1)} \cap Y_z\}} H(Y_{e(i)} \mid Y_j)$   (4)

where $\min_{\{Y_j \in Y_{e(1:i-1)} \cap Y_z\}} H(Y_{e(i)} \mid Y_j) \equiv H(Y_{e(i)})$ if $Y_{e(1:i-1)} \cap Y_z = \emptyset$. The intuition is that we are bounding the exact entropy using the entropy of a directed tree rooted at $Y_{e_z(1)}$, the first element of $Y_z$ according to $e$. That is, all variables are marginally dependent in the approximation regardless of what $z$ is, and for a fixed $z$ the tree is, by construction, the one obtained by the usual greedy algorithm of adding edges corresponding to the next legal pair of vertices with maximum mutual information (following an ordering, in this case).
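Before the linear reformulation below, note that for a fixed binary $z$ the bound (4) can be evaluated directly; a minimal sketch, with the entropy tables `H_pair`, `H_marg` and the expected log-likelihood terms `loglik` assumed to be precomputed (names are illustrative):

```python
def F_tree(z, e, loglik, H_pair, H_marg):
    """Evaluate Eq. (4) for a fixed 0/1 selection z.

    e       : permutation as a list of variable ids
    loglik  : dict, loglik[i] = <log P_M(Y_i | X)> for variable i
    H_pair  : dict, H_pair[(i, j)] = H_M(Y_i | Y_j)
    H_marg  : dict, H_marg[i] = H_M(Y_i)
    """
    value = sum(loglik[i] for i in e if z[i] == 1)
    selected_prefix = []                    # members of Y_z seen so far along e
    for i in e:
        if z[i] == 1:
            if selected_prefix:             # tightest singleton conditioning set
                value += min(H_pair[(i, j)] for j in selected_prefix)
            else:                           # empty intersection: marginal entropy
                value += H_marg[i]
            selected_prefix.append(i)
    return value
```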
It turns out we can also write (4) as a linear objective function over a polynomial number of 0/1 variables and constraints. Let $\bar{z}_i \equiv 1 - z_i$. Let $H_i^{(1)}, H_i^{(2)}, \ldots, H_i^{(i-1)}$ be the values of $\{H_M(Y_{e(i)} \mid Y_{e(1)}), \ldots, H_M(Y_{e(i)} \mid Y_{e(i-1)})\}$ sorted in ascending order, with $z_i^{(1)}, \ldots, z_i^{(i-1)}$ being the corresponding permutation of $\{z_{e(1)}, \ldots, z_{e(i-1)}\}$. We have

  $\min_{\{Y_j \in Y_{e(1:i-1)} \cap Y_z\}} H(Y_{e(i)} \mid Y_j)$
  $= z_i^{(1)} H_i^{(1)} + \bar{z}_i^{(1)} z_i^{(2)} H_i^{(2)} + \bar{z}_i^{(1)} \bar{z}_i^{(2)} z_i^{(3)} H_i^{(3)} + \cdots + \bar{z}_i^{(1)} \cdots \bar{z}_i^{(i-2)} z_i^{(i-1)} H_i^{(i-1)} + \prod_{j=1}^{i-1} \bar{z}_i^{(j)} H_M(Y_{e(i)})$
  $= \sum_{j=1}^{i-1} q_i^{(j)} H_i^{(j)} + q_i^{(i)} H_M(Y_{e(i)})$

where $q_i^{(j)} \equiv z_i^{(j)} \prod_{k=1}^{j-1} \bar{z}_i^{(k)}$ is also a binary 0/1 variable. Plugging this expression into (4) gives a linear objective function in this extended variable space. The corresponding constraints are

  $q_i^{(j)} = 1 \Leftrightarrow \{\forall z_m \in \{\bar{z}_i^{(1)}, \ldots, \bar{z}_i^{(j-1)}, z_i^{(j)}\},\ z_m = 1\}$
  $q_i^{(j)} = 0 \Leftrightarrow \{\exists z_m \in \{\bar{z}_i^{(1)}, \ldots, \bar{z}_i^{(j-1)}, z_i^{(j)}\} \text{ s.t. } z_m = 0\}$

which, as shown in the previous section, can be written as linear constraints (substituting each $\bar{z}_i$ by $1 - z_i$). The total number of constraints is, however, $O(p^3)$, which can be expensive, and often a linear relaxation procedure with branch-and-bound fails to provide guarantees of optimality.

3.4 The Reliability Score

Finally, it is important to design cheap, effective criteria whose maxima correlate with the maxima of $F_M(\cdot)$. Empirically, we have found high-quality selections in binary probit models using the solution to the problem

  maximize $F_{M;R}(z) = \sum_{i=1}^{p} w_i z_i$,  subject to $z_i \in \{0, 1\}$, $\sum_{i=1}^{p} z_i = K$

where $w_i = \lambda_i^{\mathsf T} \Sigma \lambda_i$. This can be solved by picking the $K$ indicators with the highest weights $w_i$. Assuming a probit model where the measurement error for each $Y_i^\star$ has the same variance of 1, this score is related to the "reliability" of an indicator. Simply put, the reliability $R_i$ of an indicator is the proportion of its variance that is due to the latent variables (Bollen, 1989, Chapter 6): $R_i = w_i / (w_i + 1)$ for each $Y_i^\star$. There is no current theory linking this solution to the problem of maximizing $F_M(\cdot)$: since there is no entropy term, we can set up an adversarial problem to easily defeat this method. For instance, this happens in a model where the $K$ indicators of highest reliability all measure the same latent variable $X_i$ and nothing else: much information about $X_i$ would be preserved, but little about the other variables. In any case, we found this criterion to be fairly competitive even if at times it produces extreme failures. An honest account of more sophisticated selection mechanisms cannot be given without including it, as we do in Section 5.
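The reliability score itself reduces to a top-$K$ selection; a sketch under the probit parameterization above, with `Lambda` (the $p \times d$ loading matrix) and `Sigma` (the latent covariance) as placeholder inputs:

```python
import numpy as np

def reliability_selection(Lambda, Sigma, K):
    """Pick the K indicators with highest w_i = lambda_i^T Sigma lambda_i (Section 3.4)."""
    w = np.einsum('id,de,ie->i', Lambda, Sigma, Lambda)   # w_i for each row lambda_i
    z = np.zeros(len(w), dtype=int)
    z[np.argsort(-w)[:K]] = 1                             # keep the K largest weights
    reliabilities = w / (w + 1.0)                         # R_i = w_i / (w_i + 1)
    return z, reliabilities
```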
4 Related Work

The literature on survey analysis, in the context of latent variable models, contains several examples of guidelines on how to simplify questionnaires (sometimes described as providing "shortened versions" of scales). Much of this literature, however, consists of general guidelines and rules-of-thumb (e.g., Richins, 2004; Stanton et al., 2002). One possible exception is Leite et al. (2008), which uses different model fitness criteria with respect to a given dataset to score candidate solutions, along with an expensive combinatorial optimization method. This conflates model selection and questionnaire thinning, and there is no theory linking the score functions to the amount of information preserved. In the machine learning and statistics literature, there is a large body of research on active learning, which is related to our task. One of the closest approaches is that of Liang et al. (2009), which casts the classical problem of measurement selection within a Bayesian graphical model perspective. In that work, one has to choose which measurements to add. This is done sequentially, partially motivated by problems where collecting new measurements can be done relatively fast and cheaply (say, by paying graduate students to annotate text data), so the choice of the next measurement can make use of fresh data. In our case, it might not be realistic to expect that we can perform a large number of iterations of data collection, and as such the task of reducing the number of measurements from a large initial collection might be more relevant in practice. Liang et al. also focus on (multivariate) supervised learning instead of purely unsupervised learning. In statistics there is also a considerable body of literature on sufficient dimension reduction and its sparse variants (e.g., Chen et al., 2010). Such techniques create a bottleneck between two sets of variables in a regression problem (say, the mapping from Y to X) while eliminating some of the input variables. In principle one might want to adapt such models to take a latent variable model $M$ as the target mapping. Besides some loss of interpretability, the computational implications might be problematic, though. Moreover, this framework has another free parameter, corresponding to the dimensionality of the bottleneck, that has to be set. It is not clear how this parameter, along with a choice of sparsity level, would interact with a fixed choice $K$ of indicators to be kept.

5 Experiments

In this section, we first describe some synthetic experiments to provide insights about the different methods, followed by a brief description of a case study. In all of the experiments, the target models $M$ are binary probit. We set the neighborhood parameter $D$ for $F_{M;D}(\cdot)$ to 9. The ordering $e$ for the tree-structured method is obtained by the same greedy search of Section 3.2, where now the score is the average of all $H(Y_i \mid Y_j)$ over the $Y_j$ preceding $Y_i$. Finally, all ordering optimization methods were initialized by sorting indicators in descending order according to their reliability scores, and the initial solution for all entropy-based optimization methods was given by the reliability score solution of Section 3.4. The integer program solver GUROBI 4.02 was used in all experiments.

5.1 Synthetic studies

We start with a batch of synthetic experiments. We generated 80 models with 40 indicators and 10 latent variables.¹ We further preprocess such models into two groups: in 40 of them, we select a target reliability $r_i$ for each indicator $Y_i$, uniformly at random from the interval [0.4, 0.7]. We then rescale the coefficients $\lambda_i$ such that the reliability (defined in Section 3.4) of the respective $Y_i^\star$ becomes $r_i$. For the remaining 40 models, we sample $r_i$ uniformly at random from the interval [0.2, 0.4]. We perform two choices of subsets: sets $Y_z$ of size 20 and 32 (50% and 80% of the total number of indicators). Our evaluation is as follows: since the expected value is perhaps the most common functional of the posterior distribution $P_M(X \mid Y)$, we calculate the expected value of the latent variables for a sample $\{y^{(1)}, y^{(2)}, \ldots, y^{(1000)}\}$ of size 1000 taken from the respective synthetic models.

Figure 2: (a) A comparison of the bounded neighborhood (N), tree-based (T) and Gaussian (G) methods with respect to a random solution (R) and the reliability score (S). (b) A similar comparison for models where indicators are more weakly correlated to the latent variables than in (a). (c) and (d) Scatterplots of the average absolute deviance for the tree-based method (horizontal axis) against the reliability method (vertical axis). The bottom-left clouds correspond to the K = 32 trials. (Panels report improvement ratio and mean error under the high- and low-signal settings.)

¹ Details on the model generation: we generate 40 models by sampling the latent covariance matrix from an inverse Wishart distribution with 10 degrees of freedom and scale matrix 10I, I being the identity matrix; we then rescale the matrix to make all variances equal to 1. We also generate 40 models using as the inverse Wishart scale matrix the correlation matrix with all off-diagonal entries set to 0.5. Coefficients linking indicators to latent variables were set to zero with probability 0.8, and sampled from a standard Gaussian otherwise. If some latent variable ends up with no child, or an indicator ends up with no parent, we uniformly choose one child/parent to be linked to it. Code to fully replicate the synthetic experiments is available at http://www.homepages.ucl.ac.uk/~ucgtrbd/.
This is done for the full set of 40 indicators, and for each set chosen by our four criteria: for each data point $i$ and each objective function $F$, we evaluate the average distance

  $d_F^{(i)} \equiv \frac{1}{10} \sum_{j=1}^{10} |\hat{x}_j^{(i)} - \hat{x}_{j;F}^{(i)}|$

where $\hat{x}_j^{(i)}$ is the expected value of $X_j$ obtained by conditioning on all indicators, and $\hat{x}_{j;F}^{(i)}$ is the one obtained with the subset selected by optimizing $F$. We denote by $m_F$ the average of $\{d_F^{(1)}, d_F^{(2)}, \ldots, d_F^{(1000)}\}$. Finally, we compare the three main methods with respect to the reliability score method using the improvement ratio statistic $s_F = 1 - m_F / m_{F_{M;R}}$, the proportion of average error decrease with respect to the reliability score. In order to provide a sense of scale on the difficulty of each problem, we compute the same ratios with respect to a random selection, obtained by choosing K = 20 and K = 32 indicators uniformly at random.

Figure 2 provides a summary of the results. In Figure 2(a), each boxplot shows the distribution over the 40 probit models where reliabilities were sampled between [0.4, 0.7] (the "high-signal" models). The first three boxplots show the scores $s_F$ of the bounded neighborhood, tree-structured and Gaussian methods, respectively, compared against random selections. The last three boxplots are comparisons against the reliability heuristic. The tree-based method easily beats the Gaussian method, with about 75% of its outcomes being better than the median Gaussian outcome. The Gaussian approach is also less reliable, with results showing a long lower tail. Although the reliability score is on average a good approach, in only a handful of cases was it better than the tree-based method, and by considerably smaller magnitudes compared to the upper tails of the tree-based outcome distribution. A separate panel (Figure 2(b)) is shown for the 40 models with lower reliabilities. In this case, all methods show stronger improvements over the reliability score, although now there is a less clear difference between the tree method and the Gaussian one. Finally, in panels (c) and (d) we present scatterplots for the average deviances $m_F$ of the tree-based method against the reliability score. The two clouds correspond to the solutions with 20 and 32 indicators. Notice that in the vast majority of the cases the tree-based method does better.
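For reference, the statistics above amount to simple averages over posterior means; a sketch with the array names as assumptions (each array of shape samples × latents):

```python
import numpy as np

def improvement_ratio(xhat_full, xhat_F, xhat_R):
    """s_F = 1 - m_F / m_{F_{M;R}} from per-sample distances d_F^(i).

    xhat_full : posterior means of X given all indicators
    xhat_F    : posterior means given the subset chosen by objective F
    xhat_R    : posterior means given the reliability-score subset
    """
    d_F = np.abs(xhat_full - xhat_F).mean(axis=1)   # d_F^(i), averaged over latents
    d_R = np.abs(xhat_full - xhat_R).mean(axis=1)
    m_F, m_R = d_F.mean(), d_R.mean()               # averages over the 1000 samples
    return 1.0 - m_F / m_R
```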
5.2 Case study

The National Health Service (NHS) is the public health system of the United Kingdom. In 2009, a major survey called the National Health Service National Staff Survey was deployed with the goal of "collect(ing) staff views about working in their local NHS trust" (Care Quality Commission and Aston University, 2010). A questionnaire of 206 items was filled in by 156,951 respondents. We designed a measurement model based on the structure of the questionnaire and fit it by the posterior expected value estimator. Gaussian and inverse Wishart priors were used, along with Gibbs sampling and a random subset of 50,000 respondents. See the Supplementary Material for more details. Several items in this survey asked the NHS staff member to provide degrees of agreement on a Likert scale (Bartholomew et al., 2008) to questions such as:

- "... have you ever come to work despite not feeling well enough to perform ...?"
- "Have you felt pressure from your manager to come to work?"
- "Have you felt pressure from colleagues to come to work?"
- "Have you put yourself under pressure to come to work?"

as different probes into an unobservable, self-assessed level of work pressure. We preprocessed and binarized the data to first narrow it down to 63 questions. We compare selections of 32 (50%) and 50 (80%) items using the same statistics as in the previous section.

           s_{F;D}   s_{F;tree}   s_{F;N}   s_{F;random}   m_{F;tree}   m_{F;R}
  K = 32    7.8%      6.3%         0.01%     -16.0%         0.238        0.255
  K = 50   10.5%     11.9%         7.6%      -0.05%         0.123        0.140

Although the gains were relatively small (as measured by the difference between the reconstruction errors m_{F;tree} and m_{F;R}, and by the good performance of a random selection), we showed that: i.) we do improve results over a popular measure of indicator quality; ii.) we do provide some guarantees about the diversity of the selected items via an information-theoretic measure with an entropy term, with theoretically sound approximations to such a measure. For more details on the preprocessing, and more insights into the different selections, please refer to the Supplementary Material.

6 Conclusion

There are problems where one posits that the relevant information is encoded in the posterior distribution of a set of latent variables. Questionnaires (and other instruments) can be used as evidence to generate this posterior, but there is a cost associated with complex questionnaires. One problem is how to simplify such instruments of measurement. To the best of our knowledge, we provide the first formal account of how to solve it. Nevertheless, we would like to stress that there is no substitute for common sense. While the tools we provide here can be used for a variety of analyses, from deploying simpler questionnaires to sensitivity analysis, the value and cost of keeping particular indicators can go much beyond the information contained in the latent posterior distribution. How to combine this criterion with other domain-dependent criteria is a matter of future research. Another problem of importance is how to deal with model specification and transportability across studies. A measurement model built for a very specific population of respondents might transfer poorly to another group, and therefore taking model uncertainty into account will be important. The Bayesian setup discussed by Liang et al. (2009) might provide some directions on this issue. Also, there is further structure in real-world questionnaires that we are not exploiting in the current work: it is not uncommon to have questionnaires with branching questions and other dynamic behaviour more commonly associated with Web-based surveys and/or longitudinal studies. Finally, hybrid approaches combining the bounded neighborhood and the tree-structured methods, along with more sophisticated ordering optimization procedures and the use of other divergence measures and determinant-based criteria (e.g. Kulesza and Taskar, 2011), will also be studied in the future.
Acknowledgments

The author would like to thank James Cussens and Simon Lacoste-Julien for helpful discussions, as well as the anonymous reviewers for further comments.

References

D. Bartholomew, F. Steele, I. Moustaki, and J. Galbraith. Analysis of Multivariate Social Science Data, 2nd edition. Chapman & Hall, 2008.
C. Bishop. Latent variable models. In M. Jordan (editor), Learning in Graphical Models, pages 371-403, 1998.
C. Bishop. Pattern Recognition and Machine Learning. Springer, 2009.
K. Bollen. Structural Equations with Latent Variables. John Wiley & Sons, 1989.
R. Carroll, D. Ruppert, and L. Stefanski. Measurement Error in Nonlinear Models. Chapman & Hall, 1995.
X. Chen, C. Zou, and R. Cook. Coordinate-independent sparse sufficient dimension reduction and variable selection. Annals of Statistics, 38:3696-3723, 2010.
Care Quality Commission and Aston University. Aston Business School, National Health Service National Staff Survey, 2009 [computer file]. Colchester, Essex: UK Data Archive [distributor], October 2010. Available at https://www.esds.ac.uk, SN: 6570, 2010.
A. Genz. Numerical computation of multivariate normal probabilities. Journal of Computational and Graphical Statistics, 1:141-149, 1992.
A. Globerson and T. Jaakkola. Approximate inference using conditional entropy decompositions. Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (AISTATS 2007), pages 141-149, 2007.
F. Glover and E. Woolsey. Converting the 0-1 polynomial programming problem to a 0-1 linear program. Operations Research, 22:180-182, 1974.
P. Hahn, J. Scott, and C. Carvalho. A sparse factor-analytic probit model for congressional voting patterns. Duke University Department of Statistical Science, Discussion Paper 2009-22, 2010.
T. Jaakkola, D. Sontag, A. Globerson, and M. Meila. Learning Bayesian network structure using LP relaxations. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS 2010), pages 366-373, 2010.
A. Kulesza and B. Taskar. k-DPPs: fixed-size determinantal point processes. Proceedings of the 28th International Conference on Machine Learning (ICML), pages 1193-1200, 2011.
W. Leite, I-C. Huang, and G. Marcoulides. Item selection for the development of short forms of scales using an ant colony optimization algorithm. Multivariate Behavioral Research, 43:411-431, 2008.
P. Liang, M. Jordan, and D. Klein. Learning from measurements in exponential families. Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09), 2009.
T. Minka. A family of algorithms for approximate Bayesian inference. PhD Thesis, Massachusetts Institute of Technology, 2001.
J. Palomo, D. Dunson, and K. Bollen. Bayesian structural equation modeling. In Sik-Yum Lee (ed.), Handbook of Latent Variable and Related Models, pages 163-188, 2007.
M. Richins. The material values scale: Measurement properties and development of a short form. The Journal of Consumer Research, 31:209-219, 2004.
J. Stanton, E. Sinar, W. Balzer, and P. Smith. Issues and strategies for reducing the length of self-reported scales. Personnel Psychology, 55:167-194, 2002.
M. Teyssier and D. Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. Proceedings of the Twenty-first Conference on Uncertainty in AI (UAI '05), pages 584-590, 2005.
Higher-Order Correlation Clustering for Image Segmentation

Sungwoong Kim, Department of EE, KAIST, Daejeon, South Korea, [email protected]
Sebastian Nowozin, Microsoft Research Cambridge, Cambridge, UK, [email protected]
Pushmeet Kohli, Microsoft Research Cambridge, Cambridge, UK, [email protected]
Chang D. Yoo, Department of EE, KAIST, Daejeon, South Korea, [email protected]

Abstract

For many state-of-the-art computer vision algorithms, image segmentation is an important preprocessing step. Several image segmentation algorithms have been proposed, albeit with certain reservations due to high computational load and many hand-tuned parameters. Correlation clustering, a graph-partitioning algorithm often used in natural language processing and document clustering, has the potential to perform better than previously proposed image segmentation algorithms. We improve the basic correlation clustering formulation by taking into account higher-order cluster relationships. This improves clustering in the presence of local boundary ambiguities. We first apply pairwise correlation clustering to image segmentation over a pairwise superpixel graph and then develop higher-order correlation clustering over a hypergraph that considers higher-order relations among superpixels. Fast inference is possible by linear programming relaxation, and effective parameter learning is possible within a structured support vector machine framework. Experimental results on various datasets show that the proposed higher-order correlation clustering outperforms other state-of-the-art image segmentation algorithms.

1 Introduction

Image segmentation, a partitioning of an image into disjoint regions such that each region is homogeneous, is an important preprocessing step for many of the state-of-the-art algorithms for high-level image/scene understanding, for three reasons. First, the coherent support of a region, commonly assumed to be of a single label, serves as a good prior for many labeling tasks. Second, these coherent regions allow a more consistent feature extraction that can incorporate surrounding contextual information by pooling many feature responses over the region. Third, compared to pixels, a small number of larger homogeneous regions significantly reduces the computational cost of a subsequent labeling task. Image segmentation algorithms can be categorized into either non-graph-based or graph-based algorithms. Well-known non-graph-based algorithms include mode-seeking algorithms such as K-means [1], mean-shift [2], and EM [3], while well-known graph-based algorithms include min-cuts [4], normalized cuts [5], and the Felzenszwalb-Huttenlocher (FH) segmentation algorithm [6]. In comparison to non-graph-based segmentations, graph-based segmentations have been shown to produce consistent segmentations by adaptively balancing local judgements of similarity [7]. Moreover, graph-based segmentation algorithms with global objective functions, such as min-cuts and normalized cuts, have been shown to perform better than the FH algorithm, which is based on a local objective function, since the global-objective algorithms benefit from the global nature of the information [7]. However, in contrast to min-cuts and normalized cuts, which are node-labeling algorithms, the FH algorithm benefits from edge-labeling in that it leads to faster inference and does not require a pre-specified number of segmentations in each image [7].
Correlation clustering is a graph-partitioning algorithm [8] that simultaneously maximizes intra-cluster similarity and inter-cluster dissimilarity by optimizing a global objective (discriminant) function. In comparison with the previous image segmentation algorithms, correlation clustering is a graph-based, global-objective, edge-labeling algorithm and therefore has the potential to perform better for image segmentation. Furthermore, correlation clustering leads to a linear discriminant function, which allows for approximate polynomial-time inference by linear programming (LP) and large margin training based on the structured support vector machine (S-SVM) [9]. A framework that uses the S-SVM for training the parameters in correlation clustering was considered previously by Finley et al. [10]; however, that framework was applied to noun-phrase and news article clusterings. Taskar derived a max-margin formulation for learning the edge scores for correlation clustering [11]. However, his learning criterion differs from the S-SVM and is limited to applications involving two different segmentations of a single image. Furthermore, Taskar does not provide any experimental comparisons or quantitative results. Even though the previous (pairwise) correlation clustering can consider global aspects of an image using the discriminatively-trained discriminant function, it is restricted in resolving the segment boundary ambiguities caused by the neighboring pairwise relations presented by the pairwise graph. Therefore, to capture long-range dependencies of distant nodes in a global context, this paper proposes a novel higher-order correlation clustering that incorporates higher-order relations. We first apply pairwise correlation clustering to image segmentation over a pairwise superpixel graph and then develop higher-order correlation clustering over a hypergraph that considers higher-order relations among superpixels.

The proposed higher-order correlation clustering is defined over a hypergraph, in which an edge can connect two or more nodes [12]. Hypergraphs have been used previously to lift certain limitations of conventional pairwise graphs [13, 14, 15]. However, previously proposed hypergraphs for image segmentation are restricted to partitioning based on generalizations of the normalized cut framework, which suffer from a number of problems. First, inference is slow and difficult, especially with increasing graph size. A number of algorithms to approximate the inference process have been introduced, based on a coarsening algorithm [14] and on hypergraph Laplacian matrices [13]; these are heuristic approaches and therefore sub-optimal. Second, incorporating a supervised learning algorithm for parameter estimation under the spectral hypergraph partitioning framework is difficult. This is in line with the difficulties in learning spectral graph partitioning, which requires a complex and unstable eigenvector approximation that must be differentiable [16, 17]. Third, the use of rich region-based features is restricted: almost all previous hypergraph-based image segmentation algorithms use only color variances as region features. The proposed higher-order correlation clustering overcomes all of these problems, since it generalizes pairwise correlation clustering while retaining the advantages of a hypergraph. The proposed higher-order correlation clustering algorithm takes a hypergraph as its input and leads to a linear discriminant function.
A rich feature vector is defined based on several visual cues involving higher-order relations among superpixels. For fast inference, LP relaxation is used to approximately solve the higher-order correlation clustering problem, and for supervised training of the parameter vector by the S-SVM, we apply a decomposable structured loss function to handle unbalanced classes. We incorporate this loss function into the cutting plane procedure for S-SVM training. Experimental results on various datasets show that the proposed higher-order correlation clustering outperforms other state-of-the-art image segmentation algorithms.

The rest of the paper is organized as follows. Section 2 presents the higher-order correlation clustering for image segmentation. Section 3 describes large margin training for supervised image segmentation based on the S-SVM and the cutting plane algorithm. Experimental and comparative results are presented and discussed in Section 4, followed by a conclusion in Section 5.

Figure 1: Illustrations of a part of (a) the pairwise graph and (b) the triplet graph built on superpixels (with pairwise edge labels such as $y_{jk}$ and triplet labels such as $y_{ijk}$).

2 Higher-order correlation clustering

The proposed image segmentation is based on superpixels, which are small coherent regions preserving almost all boundaries between different regions; superpixels significantly reduce the computational cost and allow feature extraction to be conducted over larger homogeneous regions. The proposed correlation clustering merges superpixels into disjoint homogeneous regions over a superpixel graph.

2.1 Pairwise correlation clustering over pairwise superpixel graph

Define a pairwise undirected graph $G = (V, E)$ where a node corresponds to a superpixel and a link between adjacent superpixels corresponds to an edge (see Figure 1(a)). A binary label $y_{jk}$ for an edge $(j, k) \in E$ between nodes $j$ and $k$ is defined such that

  $y_{jk} = 1$ if nodes $j$ and $k$ belong to the same region, and $y_{jk} = 0$ otherwise.   (1)

A discriminant function, which is the negative energy function, is defined over an image $x$ and the labels $y$ of all edges as

  $F(x, y; w) = \sum_{(j,k) \in E} \mathrm{Sim}_w(x, j, k)\, y_{jk} = \sum_{(j,k) \in E} \langle w, \phi_{jk}(x) \rangle\, y_{jk} = \left\langle w, \sum_{(j,k) \in E} \phi_{jk}(x)\, y_{jk} \right\rangle = \langle w, \Phi(x, y) \rangle$   (2)

where the similarity measure between nodes $j$ and $k$, $\mathrm{Sim}_w(x, j, k)$, is parameterized by $w$ and takes values of both signs, such that a large positive value means strong similarity while a large negative value means a high degree of dissimilarity. Note that the discriminant function $F(x, y; w)$ is assumed to be linear in both the parameter vector $w$ and the joint feature map $\Phi(x, y)$, and $\phi_{jk}(x)$ is a pairwise feature vector which reflects the correspondence between the $j$th and the $k$th superpixels. Image segmentation then amounts to inferring the edge labels $\hat{y}$ over the pairwise superpixel graph $G$ that maximize $F$:

  $\hat{y} = \mathrm{argmax}_{y \in \mathcal{Y}}\ F(x, y; w)$   (3)

where $\mathcal{Y}$ is the subset of $\{0, 1\}^{E}$ that corresponds to valid segmentations, the so-called multicut polytope. However, solving (3) over this $\mathcal{Y}$ is generally NP-hard. Therefore, we approximate $\mathcal{Y}$ by means of a common multicut LP relaxation [18] with the following two classes of constraints: (1) cycle inequalities and (2) odd-wheel inequalities. When producing segmentation results based on the approximated LP solutions, we take the floor of each fractionally-predicted edge label independently, simply to obtain valid integer solutions, which may be sub-optimal.
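As a toy illustration of this relaxation, the sketch below solves the multicut LP on a complete 4-node graph with triangle (cycle) inequalities only, followed by the floor rounding described above; the edge scores `theta` stand in for $\langle w, \phi_{jk}(x) \rangle$ and are made up, and the odd-wheel inequalities are omitted for brevity.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

nodes = range(4)
edges = list(combinations(nodes, 2))
idx = {e: i for i, e in enumerate(edges)}
theta = np.array([1.5, -0.2, 0.7, 0.3, -1.0, 0.9])    # made-up edge scores <w, phi_jk(x)>

A_ub, b_ub = [], []
for i, j, k in combinations(nodes, 3):                # cycle inequalities on triangles:
    tri = [idx[(i, j)], idx[(i, k)], idx[(j, k)]]     # y_f + y_g - y_e <= 1 for each edge e
    for e_pos in range(3):
        row = np.zeros(len(edges))
        f, g = [tri[t] for t in range(3) if t != e_pos]
        row[f] = row[g] = 1.0
        row[tri[e_pos]] = -1.0
        A_ub.append(row)
        b_ub.append(1.0)

# linprog minimizes, so negate the scores to maximize sum_e theta_e * y_e
res = linprog(-theta, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, 1.0)] * len(edges))
y = np.floor(res.x + 1e-9)                            # floor rounding of fractional labels
```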
Even though this pairwise correlation clustering takes a rich pairwise feature vector and a trained parameter vector (which will be presented later), it often produces incorrectly predicted segments due to segment boundary ambiguities caused by the limited pairwise relations of neighboring superpixels (see Figure 2). Therefore, to incorporate higher-order relations, we develop higher-order correlation clustering by generalizing correlation clustering over a hypergraph.

Figure 2: Example of a segmentation result by pairwise correlation clustering. (a) Original image. (b) Ground-truth. (c) Superpixels. (d) Segments obtained by pairwise correlation clustering.

2.2 Higher-order correlation clustering over hypergraph

The proposed higher-order correlation clustering is defined over a hypergraph, in which an edge, called a hyperedge, can connect two or more nodes. For example, as shown in Figure 1(b), one can introduce a binary label for adjacent vertices forming a triplet, such that $y_{ijk} = 1$ if all vertices in the triplet $\{i, j, k\}$ are in the same cluster, and $y_{ijk} = 0$ otherwise. Define a hypergraph $HG = (V, \mathcal{E})$ where $V$ is a set of nodes (superpixels) and $\mathcal{E}$ is a set of hyperedges (subsets of $V$) such that $\bigcup_{e \in \mathcal{E}} e = V$. Here, a hyperedge $e$ has at least two nodes, i.e. $|e| \ge 2$. Therefore, the hyperedge set $\mathcal{E}$ can be divided into two disjoint subsets: the pairwise edge set $E_p = \{e \in \mathcal{E} \mid |e| = 2\}$ and the higher-order edge set $E_h = \{e \in \mathcal{E} \mid |e| > 2\}$, such that $E_p \cup E_h = \mathcal{E}$. Note that in the proposed hypergraph for higher-order correlation clustering, all hyperedges containing just two nodes ($\forall e_p \in E_p$) link adjacent superpixels. The pairwise superpixel graph is the special hypergraph in which all hyperedges contain just two (neighboring) superpixels: $E_p = \mathcal{E}$. A binary label $y_e$ for a hyperedge $e \in \mathcal{E}$ is defined such that

  $y_e = 1$ if all nodes in $e$ belong to the same region, and $y_e = 0$ otherwise.   (4)

Similar to pairwise correlation clustering, a linear discriminant function is defined over an image $x$ and the labels $y$ of all hyperedges as

  $F(x, y; w) = \sum_{e \in \mathcal{E}} \mathrm{Hom}_w(x, e)\, y_e = \sum_{e \in \mathcal{E}} \langle w, \phi_e(x) \rangle\, y_e = \sum_{e_p \in E_p} \langle w_p, \phi_{e_p}(x) \rangle\, y_{e_p} + \sum_{e_h \in E_h} \langle w_h, \phi_{e_h}(x) \rangle\, y_{e_h} = \langle w, \Phi(x, y) \rangle$   (5)

where the homogeneity measure among the nodes in $e$, $\mathrm{Hom}_w(x, e)$, is also the inner product of the parameter vector $w$ and the feature vector $\phi_e(x)$, and takes values of both signs, such that a large positive value means strong homogeneity while a large negative value means a high degree of non-homogeneity. Note that the proposed discriminant function for higher-order correlation clustering decomposes into two terms by assigning different parameter vectors to the pairwise edge set $E_p$ and the higher-order edge set $E_h$, such that $w = [w_p^{\mathsf T}, w_h^{\mathsf T}]^{\mathsf T}$. Thus, in addition to the pairwise similarity between neighboring superpixels, the proposed higher-order correlation clustering considers broad homogeneous regions, reflecting higher-order relations among superpixels. The remaining question is how to build the hypergraph from a given image. Here, we use unsupervised multiple partitionings (quantizations) obtained from baseline superpixels: we merge not pixels but superpixels under different image quantizations using the ultrametric contour maps [19].
For example, in Figure 3 there are three region layers, one superpixel (pairwise) layer and two higher-order layers, from which a hypergraph is constructed by defining hyperedges as follows: all edges in the pairwise superpixel graph from the first layer are incorporated into the pairwise edge set $E_p$, while hyperedges corresponding to regions (groups of superpixels) in the second and third layers are included in the higher-order edge set $E_h$. Note that we could further decompose the higher-order term in (5) into two terms associated with the second and third layers, respectively, by assigning different parameter vectors; for simplicity, however, this paper aggregates all higher-order edges from all higher-order layers into a single higher-order edge set with a single shared parameter vector.

Figure 3: Hypergraph construction from multiple partitionings. (a) Multiple partitionings from baseline superpixels (a superpixel/pairwise layer and higher-order layers). (b) Hyperedge corresponding to a region in the second layer. (c) Hyperedge corresponding to a region in the third layer.

2.2.1 LP relaxation for inference

An image segmentation is obtained by inferring the hyperedge labels $\hat{y}$ over the hypergraph $HG$ that maximize the discriminant function $F$:

  $\hat{y} = \mathrm{argmax}_{y \in \mathcal{Y}}\ F(x, y; w)$   (6)

where $\mathcal{Y}$ is again the set of labelings in $\{0, 1\}^{\mathcal{E}}$ that correspond to valid segmentations. Since the inference problem (6) is also NP-hard, we relax $\mathcal{Y}$ by (facet-defining) linear inequalities. In addition to the constraints placed on pairwise labels in pairwise correlation clustering (the cycle and odd-wheel inequalities), we add constraints on the labels of higher-order edges, called higher-order inequalities, for a valid segmentation: there can be no all-one pairwise labeling inside a region whose higher-order edge is labeled zero (a non-homogeneous region), and when a region is labeled one (a homogeneous region), all pairwise labels in that region should be one. These higher-order inequalities can be formulated as

  $y_{e_h} \le y_{e_p}$, $\forall e_p \in E_p$ with $e_p \subset e_h$,   and   $\sum_{e_p \in E_p \mid e_p \subset e_h} (1 - y_{e_p}) \ge (1 - y_{e_h})$.   (7)

Indeed, the LP relaxation to approximately solve (6) is formulated as

  $\hat{y} = \mathrm{argmax}_y\ \sum_{e_p \in E_p} \langle w_p, \phi_{e_p}(x) \rangle\, y_{e_p} + \sum_{e_h \in E_h} \langle w_h, \phi_{e_h}(x) \rangle\, y_{e_h}$   (8)
  s.t. $y_e \in [0, 1]$, $\forall e \in \mathcal{E}\ (= E_p \cup E_h)$;
     cycle inequalities and odd-wheel inequalities [18], $\forall e_p \in E_p$;
     higher-order inequalities (7), $\forall e_h \in E_h$.

Note that the proposed higher-order correlation clustering follows the concept of soft constraints: superpixels within a hyperedge are encouraged, not forced, to merge if the hyperedge is highly homogeneous.
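A sketch of emitting the higher-order inequalities (7) as rows of a constraint matrix $A y \le b$ for such an LP; the index maps and hyperedge membership lists are assumed inputs, not the paper's actual data structures.

```python
import numpy as np

def higher_order_rows(pair_idx, hyper_idx, members):
    """Constraint rows encoding Eq. (7) as A y <= b.

    pair_idx  : dict mapping a pairwise edge (frozenset of 2 nodes) -> column index
    hyper_idx : dict mapping a hyperedge id -> column index
    members   : dict mapping a hyperedge id -> list of its contained pairwise edges
    """
    n_cols = len(pair_idx) + len(hyper_idx)
    A, b = [], []
    for h, col_h in hyper_idx.items():
        cols_p = [pair_idx[e] for e in members[h]]
        for col_p in cols_p:                       # y_eh - y_ep <= 0
            row = np.zeros(n_cols)
            row[col_h], row[col_p] = 1.0, -1.0
            A.append(row)
            b.append(0.0)
        row = np.zeros(n_cols)                     # sum_ep y_ep - y_eh <= |.| - 1,
        for col_p in cols_p:                       # i.e. sum_ep (1 - y_ep) >= 1 - y_eh
            row[col_p] = 1.0
        row[col_h] = -1.0
        A.append(row)
        b.append(len(cols_p) - 1.0)
    return np.array(A), np.array(b)
```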
2.2.2 Feature vector

We construct a 771-dimensional feature vector $\phi_e(x)$ by concatenating several visual cues with different quantization levels and thresholds. The pairwise feature vector $\phi_{e_p}(x)$ reflects the correspondence between neighboring superpixels, and the higher-order feature vector $\phi_{e_h}(x)$ characterizes more complex relations among the superpixels in a broader region to measure homogeneity. The magnitude of $w$ determines the importance of each feature, and this importance is task-dependent; thus, $w$ is estimated by the supervised training described in Section 3.

1. Pairwise feature vector (611-dim): $\phi_{e_p} = [\phi^c_{e_p}; \phi^t_{e_p}; \phi^s_{e_p}; \phi^e_{e_p}; \phi^v_{e_p}; 1]$.
- Color difference $\phi^c$: the 26 RGB/HSV color distances (absolute differences, $\chi^2$ distances, earth mover's distances) between two adjacent superpixels.
- Texture difference $\phi^t$: the 64 texture distances (absolute differences, $\chi^2$ distances, earth mover's distances) between two adjacent superpixels using 15 Leung-Malik (LM) filter banks [19].
- Shape/location difference $\phi^s$: the 5-dimensional shape/location feature proposed in [20].
- Edge strength $\phi^e$: the 1-of-15 coding of the quantized edge strength proposed in [19].
- Joint visual word posterior $\phi^v$: the 100-dimensional vector holding the joint visual word posteriors for a pair of neighboring superpixels using 10 visual words, and the 400-dimensional vector holding the joint posteriors based on 20 visual words [21].

2. Higher-order feature vector (160-dim): $\phi_{e_h} = [\phi^{va}_{e_h}; \phi^{e}_{e_h}; \phi^{tm}_{e_h}; 1]$.
- Variance $\phi^{va}$: the 14 color variances and 30 texture variances among superpixels in a hyperedge.
- Edge strength $\phi^e$: the 1-of-15 coding of the quantized edge strength proposed in [19].
- Template matching score $\phi^{tm}$: the color/texture and shape/location features of all regions in the training images are clustered using k-means with k = 100 to obtain 100 representative templates of distinct regions. The 100-dimensional template matching feature vector is composed of the matching scores between the region defined by a hyperedge and the templates, using the Gaussian RBF kernel.

In each feature vector, a bias (= 1) is appended so that the similarity/homogeneity measure can be either positive or negative.

3 Structural learning

The proposed discriminant function is defined over the superpixel graph, and therefore the ground-truth segmentation needs to be transformed into ground-truth edge labels on the superpixel graph. For this, we first assign a single dominant segment label to each superpixel by majority voting over the superpixel's constituent pixels and then obtain the ground-truth edge labels. Using the ground-truth edge labels of the training data, the S-SVM [9] is used to estimate the parameter vector. Given $N$ training samples $\{x^n, y^n\}_{n=1}^{N}$, where $y^n$ is the ground-truth edge labeling for the $n$th training image, the S-SVM [9] optimizes $w$ by minimizing a quadratic objective function subject to a set of linear margin constraints:

  $\min_{w, \xi}\ \frac{1}{2} \|w\|^2 + C \sum_{n=1}^{N} \xi_n$   (9)
  s.t. $\langle w, \Delta\Phi(x^n, y) \rangle \ge \Delta(y^n, y) - \xi_n$, $\forall n,\ \forall y \in \mathcal{Y} \setminus y^n$;  $\xi_n \ge 0$, $\forall n$

where $\Delta\Phi(x^n, y) = \Phi(x^n, y^n) - \Phi(x^n, y)$, and $C > 0$ is a constant that controls the trade-off between margin maximization and training error minimization. In the S-SVM, the margin is scaled with a loss $\Delta(y^n, y)$, the difference measure between a prediction $y$ and the ground-truth label $y^n$ of the $n$th image. The S-SVM offers good generalization ability as well as the flexibility to choose any loss function [9]. The cutting plane algorithm [9, 18] with LP relaxation for loss-augmented inference is used to solve the optimization problem of the S-SVM, since the fast convergence and high robustness of the cutting plane algorithm in handling a large number of margin constraints are well-known [22, 23].
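A schematic of that cutting-plane loop, hedged heavily: `phi`, `loss`, and `loss_aug_inf` (the LP-relaxed loss-augmented inference) are supplied by the caller, and a crude subgradient pass stands in for the restricted QP solve that a production implementation would use.

```python
import numpy as np

def cutting_plane_ssvm(data, phi, loss, loss_aug_inf, dim, C=1.0, eps=1e-3, max_iter=50, lr=0.1):
    """Sketch of cutting-plane training for the margin-rescaled S-SVM of Eq. (9).

    data         : list of (x, y_true) pairs
    phi(x, y)    : joint feature map Phi(x, y), a numpy vector of length dim
    loss(yt, y)  : structured loss Delta(y^n, y)
    loss_aug_inf(w, x, yt): most violated labeling argmax_y Delta + <w, Phi(x, y)>
    """
    w = np.zeros(dim)
    working = [[] for _ in data]                   # per-example working sets of constraints
    for _ in range(max_iter):
        violated = False
        for n, (x, yt) in enumerate(data):
            y_hat = loss_aug_inf(w, x, yt)         # separation oracle (LP relaxation)
            slack = loss(yt, y_hat) - w @ (phi(x, yt) - phi(x, y_hat))
            worst = max((loss(yt, y) - w @ (phi(x, yt) - phi(x, y))
                         for y in working[n]), default=0.0)
            if slack > worst + eps:                # add the new cutting plane
                working[n].append(y_hat)
                violated = True
        if not violated:
            return w
        for _ in range(20):                        # crude stand-in for the restricted QP
            grad = w.copy()                        # gradient of 0.5 ||w||^2
            for n, (x, yt) in enumerate(data):
                vals = [(loss(yt, y) - w @ (phi(x, yt) - phi(x, y)), y) for y in working[n]]
                if vals:
                    v, y = max(vals, key=lambda t: t[0])
                    if v > 0:                      # subgradient of the hinge slack
                        grad -= C * (phi(x, yt) - phi(x, y))
            w -= lr * grad
    return w
```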
A loss function is usually non-negative, and a decomposable loss function is preferred, since it enables the loss-augmented inference in the cutting plane algorithm to be performed efficiently. The most popular decomposable loss function is the Hamming distance, which in this correlation clustering setting is equivalent to the number of mismatches between $y^n$ and $y$ at the edge level. Unfortunately, in the proposed correlation clustering for image segmentation, the number of edges labeled 1 is considerably higher than the number of edges labeled 0. This unbalance makes other learning methods, such as the perceptron algorithm, inappropriate, since it leads to the clustering of the whole image as one segment. The same problem arises when we use the Hamming loss in the S-SVM. Therefore, we use the following loss function:

  $\Delta(y^n, y) = \sum_{e_p \in E_p} \left( R_p\, y^n_{e_p} + y_{e_p} - (R_p + 1)\, y^n_{e_p} y_{e_p} \right) + D \sum_{e_h \in E_h} \left( R_h\, y^n_{e_h} + y_{e_h} - (R_h + 1)\, y^n_{e_h} y_{e_h} \right)$   (10)

where $D$ is the relative weight of the loss at the higher-order edge level to that at the pairwise edge level. In addition, $R_p$ and $R_h$ control the relative importance of incorrectly merging superpixels versus incorrectly separating them, by imposing different weights on false negatives and false positives. Here, we set both $R_p$ and $R_h$ to be less than 1 to overcome the unbalance problem.
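Written out, Eq. (10) is a per-edge sum; a direct transcription, with the edge sets and label dictionaries as assumed inputs:

```python
def structured_loss(y_true, y, Ep, Eh, Rp, Rh, D):
    """Decomposable loss of Eq. (10); y_true and y map edge ids to {0, 1}."""
    pair = sum(Rp * y_true[e] + y[e] - (Rp + 1) * y_true[e] * y[e] for e in Ep)
    high = sum(Rh * y_true[e] + y[e] - (Rh + 1) * y_true[e] * y[e] for e in Eh)
    return pair + D * high
```

Per pairwise edge, an incorrect separation ($y^n_e = 1$, $y_e = 0$) costs $R_p$ while an incorrect merge ($y^n_e = 0$, $y_e = 1$) costs 1 and agreements cost 0, so choosing $R_p < 1$ counteracts the dominance of edges labeled 1.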
4 Experiments

To evaluate segmentations obtained by various algorithms against the ground truth, we conducted image segmentation experiments on three benchmark datasets: the Stanford background dataset (SBD) [24], the Berkeley segmentation dataset (BSDS) [25], and the MSRC dataset [26]. For image segmentation based on correlation clustering, we initially obtain baseline superpixels (438 superpixels per image on average) with the gPb contour detector and the oriented watershed transform [19] and then construct a hypergraph. The function parameters are initially set to zero, and structured output learning based on the S-SVM is then used to estimate the parameter vector. Note that relaxed solutions in loss-augmented inference are used during training, while at test time a simple rounding method is used to produce valid segmentation results. Rounding is only necessary when LP-relaxed correlation clustering yields fractional solutions.

We compared the proposed pairwise/higher-order correlation clustering to the following state-of-the-art image segmentation algorithms: multiscale NCut [27], gPb-owt-ucm [19], and gPb-Hoiem [20], which groups the same superpixels based on pairwise same-label likelihoods. The pairwise same-label likelihoods were independently learned from the training data with the same 611-dimensional pairwise feature vector. We consider four performance measures: probabilistic Rand index (PRI) [28], variation of information (VOI) [29], segmentation covering (SCO) [19], and boundary displacement error (BDE) [30]. As the predicted segmentation approaches the ground-truth segmentation, PRI and SCO increase while VOI and BDE decrease.

4.1 Stanford background dataset

The SBD consists of 715 outdoor images with corresponding pixel-wise annotations. We employed 5-fold cross-validation, with the dataset randomly split into 572 training images and 143 test images for each fold. Figure 4 shows the four measures obtained from the segmentation results as a function of the average number of regions.

[Figure 4: Evaluation measures (PRI, VOI, SCO, BDE) from segmentation results on the SBD, plotted against the average number of regions (20–40), for Multi-NCut, gPb-owt-ucm, gPb-Hoiem, Corr-Cluster-Pairwise, and Corr-Cluster-Higher.]

Note that the performance varies with the number of regions, and for this reason, we designed each algorithm to produce multiple segmentations (20 to 40 regions). Specifically, multiple segmentations in the proposed algorithm were obtained by varying $R_p$ (0.001–0.2) and $R_h$ (0.1–1.0) in the loss function during training (D = 10). Irrespective of the measure, the proposed higher-order correlation clustering (Corr-Cluster-Higher) performed better than the other algorithms, including pairwise correlation clustering (Corr-Cluster-Pairwise). Figure 5 shows some example segmentations. The proposed higher-order correlation clustering yielded the best segmentation results. In particular, segments incorrectly predicted by pairwise correlation clustering were corrected in the results of higher-order correlation clustering, owing to the consideration of higher-order relations over broad regions.

[Figure 5: Results of image segmentation; each row shows the original image, the ground truth, and the outputs of Multi-NCut, gPb-owt-ucm, gPb-Hoiem, Corr-Cluster-Pairwise, and Corr-Cluster-Higher.]

Table 1: Quantitative results on the BSDS test set and on the MSRC test set.

                            BSDS                              MSRC
  Test set               PRI    VOI    SCO    BDE          PRI    VOI    SCO    BDE
  Multi-NCut            0.728  3.043  0.315  14.257       0.628  2.765  0.341  11.941
  gPb-owt-ucm           0.794  1.909  0.571  11.461       0.779  1.675  0.628   9.800
  gPb-Hoiem             0.724  3.194  0.316  14.795       0.614  2.847  0.353  13.533
  Corr-Cluster-Pairwise 0.806  1.829  0.585  11.194       0.773  1.648  0.632   9.194
  Corr-Cluster-Higher   0.814  1.743  0.599  10.377       0.784  1.594  0.648   9.040

Regarding the runtime of our algorithm, test-time inference took on average around 15 seconds per image (graph construction and feature extraction: 14 s; LP: 1 s) on a 2.67 GHz processor, whereas the overall training took 10 hours on the training set. Note that other segmentation algorithms such as multiscale NCut and gPb-owt-ucm took on average a few minutes per image.

4.2 Berkeley segmentation dataset and MSRC dataset

The BSDS contains 300 natural images split into 200 training images and 100 test images. Since each image is segmented by multiple human subjects, we defined a single probabilistic (real-valued) ground-truth segmentation of each image for training the proposed correlation clustering. The MSRC dataset is composed of 591 natural images. We split the data into 45% training, 10% validation, and 45% test sets, following [26]. Performance was evaluated using the clean ground-truth object instance labeling of [31]. On average, all segmentation algorithms were set to produce 30 disjoint regions per image on the BSDS and 15 disjoint regions per image on the MSRC dataset. As shown in Table 1, the proposed higher-order correlation clustering gave the best results on both datasets. Notably, the results obtained on the BSDS are similar to or even better than the best results reported to date on the BSDS [32, 19].

5 Conclusion

This paper proposed higher-order correlation clustering over a hypergraph to merge superpixels into homogeneous regions. LP relaxation was used to approximately solve the higher-order correlation clustering over a hypergraph, where a rich feature vector was defined based on several visual cues involving higher-order relations among superpixels. The S-SVM was used for supervised training of the parameters of correlation clustering, and the cutting plane algorithm with LP-relaxed inference was applied to solve the S-SVM optimization problem.
Experimental results showed that the proposed higher-order correlation clustering outperformed other image segmentation algorithms on various datasets. The proposed framework is applicable to a variety of other areas.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011-0018249).

References

[1] T. Kanungo, D. Mount, N. Netanyahu, C. Piatko, R. Silverman, and A. Wu, "An efficient k-means clustering algorithm: Analysis and implementation," PAMI, vol. 24, pp. 881–892, 2002.
[2] D. Comaniciu and P. Meer, "Mean shift: A robust approach toward feature space analysis," PAMI, vol. 24, pp. 603–619, 2002.
[3] C. Carson, S. Belongie, H. Greenspan, and J. Malik, "Blobworld: Image segmentation using expectation-maximization and its application to image querying," PAMI, vol. 24, pp. 1026–1038, 2002.
[4] F. Estrada and A. Jepson, "Spectral embedding and min-cut for image segmentation," in BMVC, 2004.
[5] J. Shi and J. Malik, "Normalized cuts and image segmentation," PAMI, vol. 22, pp. 888–905, 2000.
[6] P. Felzenszwalb and D. Huttenlocher, "Efficient graph-based image segmentation," IJCV, vol. 59, pp. 167–181, 2004.
[7] F. Estrada and A. Jepson, "Benchmarking image segmentation algorithms," IJCV, vol. 85, 2009.
[8] N. Bansal, A. Blum, and S. Chawla, "Correlation clustering," Machine Learning, vol. 56, 2004.
[9] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun, "Large margin methods for structured and interdependent output variables," JMLR, vol. 6, 2005.
[10] T. Finley and T. Joachims, "Supervised clustering with support vector machines," in ICML, 2005.
[11] B. Taskar, "Learning structured prediction models: A large margin approach," Ph.D. thesis, Stanford University, 2004.
[12] C. Berge, Hypergraphs, North-Holland, Amsterdam, 1989.
[13] L. Ding and A. Yilmaz, "Image segmentation as learning on hypergraphs," in Proc. ICMLA, 2008.
[14] S. Rital, "Hypergraph cuts and unsupervised representation for image segmentation," Fundamenta Informaticae, vol. 96, pp. 153–179, 2009.
[15] A. Ducournau, S. Rital, A. Bretto, and B. Laget, "A multilevel spectral hypergraph partitioning approach for color image segmentation," in Proc. ICSIPA, 2009.
[16] F. Bach and M. I. Jordan, "Learning spectral clustering," in NIPS, 2003.
[17] T. Cour, N. Gogin, and J. Shi, "Learning spectral graph segmentation," in AISTATS, 2005.
[18] S. Nowozin and S. Jegelka, "Solution stability in linear programming relaxations: Graph partitioning and unsupervised learning," in ICML, 2009.
[19] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik, "Contour detection and hierarchical image segmentation," PAMI, vol. 33, pp. 898–916, 2011.
[20] D. Hoiem, A. A. Efros, and M. Hebert, "Recovering surface layout from an image," IJCV, 2007.
[21] D. Batra, R. Sukthankar, and T. Chen, "Learning class-specific affinities for image labelling," in CVPR, 2008.
[22] T. Finley and T. Joachims, "Training structural SVMs when exact inference is intractable," in ICML, 2008.
[23] A. Kulesza and F. Pereira, "Structured learning with approximate inference," in NIPS, 2007.
[24] S. Gould, R. Fulton, and D. Koller, "Decomposing a scene into geometric and semantically consistent regions," in ICCV, 2009.
[25] C. Fowlkes, D. Martin, and J. Malik, The Berkeley Segmentation Dataset and Benchmark (BSDB), http://www.cs.berkeley.edu/projects/vision/grouping/segbench/.
[26] J. Shotton, J. Winn, C. Rother, and A. Criminisi, "TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation," in ECCV, 2006.
[27] T. Cour, F. Benezit, and J. Shi, "Spectral segmentation with multiscale graph decomposition," in CVPR, 2005.
[28] W. M. Rand, "Objective criteria for the evaluation of clustering methods," Journal of the American Statistical Association, vol. 66, pp. 846–850, 1971.
[29] M. Meila, "Comparing clusterings: An axiomatic view," in ICML, 2005.
[30] J. Freixenet, X. Munoz, D. Raba, J. Marti, and X. Cufi, "Yet another survey on image segmentation: Region and boundary information integration," in ECCV, 2002.
[31] T. Malisiewicz and A. A. Efros, "Improving spatial support for objects via multiple segmentations," in BMVC, 2007.
[32] T. Kim, K. Lee, and S. Lee, "Learning full pairwise affinities for spectral segmentation," in CVPR, 2010.
Non-conjugate Variational Message Passing for Multinomial and Binary Regression

Thomas P. Minka, Microsoft Research Cambridge, UK
David A. Knowles, Department of Engineering, University of Cambridge

Abstract

Variational Message Passing (VMP) is an algorithmic implementation of the Variational Bayes (VB) method which applies only in the special case of conjugate exponential family models. We propose an extension to VMP, which we refer to as Non-conjugate Variational Message Passing (NCVMP), which aims to alleviate this restriction while maintaining modularity, allowing choice in how expectations are calculated, and integrating into an existing message-passing framework: Infer.NET. We demonstrate NCVMP on logistic binary and multinomial regression. In the multinomial case we introduce a novel variational bound for the softmax factor which is tighter than other commonly used bounds whilst maintaining computational tractability.

1 Introduction

Variational Message Passing [20] is a message passing implementation of the mean-field approximation [1, 2], also known as variational Bayes (VB). Although Expectation Propagation [12] can have more desirable properties as a result of the particular Kullback–Leibler divergence that is minimised, VMP is more stable than EP under certain circumstances, such as multi-modality in the posterior distribution. Unfortunately, VMP is effectively limited to conjugate-exponential models, since otherwise the messages become exponentially more complex at each iteration. In conjugate-exponential models this is avoided due to the closure of exponential family distributions under multiplication. There are many non-conjugate problems which arise in Bayesian statistics, for example logistic regression or learning the hyperparameters of a Dirichlet.

Previous work extending variational Bayes to non-conjugate models has focused on two aspects. The first is how to fit the variational parameters when the VB free-form updates are not viable. Various authors have used standard numerical optimization techniques [15, 17, 3], or adapted such methods to be more suitable for this problem [7, 8]. A disadvantage of this approach is that the convenient and efficient message-passing formulation is lost. The second line of work applying VB to non-conjugate models involves deriving lower bounds to approximate the expectations [9, 18, 5, 10, 11] required to calculate the KL divergence. We contribute to this line of work by proposing and evaluating a new bound for the useful softmax factor, which is tighter than other commonly used bounds whilst maintaining computational tractability. We also demonstrate, in agreement with [19] and [14], that for univariate expectations such as those required for logistic regression, carefully designed quadrature methods can be effective.

Existing methods typically represent a compromise on modularity or performance. To maintain modularity one is effectively constrained to use exponential family bounds (e.g. quadratic in the Gaussian case [9, 5]), which we will show often gives sub-optimal performance. Methods which use more general bounds, e.g. [3], must then resort to numerical optimisation, and sacrifice modularity. This is a particular disadvantage for an inference framework such as Infer.NET [13], where we want to allow modular construction of inference algorithms from arbitrary deterministic and stochastic factors.
We propose a novel message passing algorithm, which we call Non-conjugate Variational Message Passing (NCVMP), which generalises VMP and gives a recipe for calculating messages out of any factor. NCVMP gives much greater freedom in how expectations are taken (using bounds or quadrature) so that performance can be maintained along with modularity.

The outline of the paper is as follows. In Sections 2 and 3 we briefly review VB and VMP. Section 4 is the main contribution of the paper: non-conjugate VMP. Section 5 describes the binary logistic and multinomial softmax regression models, and implementation options with and without NCVMP. Results on synthetic and standard UCI datasets are given in Section 6 and some conclusions are drawn in Section 7.

2 Mean-field approximation

Our aim is to approximate some model $p(x)$, represented as a factor graph $p(x) = \prod_a f_a(x_a)$, where factor $f_a$ is a function of all $x \in x_a$. The mean-field approximation assumes a fully-factorised variational posterior $q(x) = \prod_i q_i(x_i)$, where $q_i(x_i)$ is an approximation to the marginal distribution of $x_i$ (note however that $x_i$ might be vector valued, e.g. with multivariate normal $q_i$). The variational approximation $q(x)$ is chosen to minimise the Kullback–Leibler divergence $\mathrm{KL}(q\|p)$, given by

$$\mathrm{KL}(q\|p) = \int q(x)\,\log\frac{q(x)}{p(x)}\,dx = -H[q(x)] - \int q(x)\log p(x)\,dx, \qquad (1)$$

where $H[q(x)] = -\int q(x)\log q(x)\,dx$ is the entropy. It can be shown [1] that if the functions $q_i(x_i)$ are unconstrained, then minimising this functional can be achieved by coordinate descent, setting $q_i(x_i) \propto \exp\langle \log p(x)\rangle_{-q_i(x_i)}$ iteratively for each $i$, where $\langle\cdot\rangle_{-q_i(x_i)}$ implies marginalisation over all variables except $x_i$.

3 Variational Message Passing on factor graphs

VMP is an efficient algorithmic implementation of the mean-field approximation which leverages the fact that the mean-field updates only require local operations on the factor graph. The variational distribution $q(x)$ factorises into approximate factors $\tilde f_a(x_a)$. As a result of the fully factorised approximation, the approximate factors themselves factorise into messages, i.e. $\tilde f_a(x_a) = \prod_{x_i\in x_a} m_{a\to i}(x_i)$, where the message from factor $a$ to variable $i$ is $m_{a\to i}(x_i) = \exp\langle\log f_a(x_a)\rangle_{-q_i(x_i)}$. The message from variable $i$ to factor $a$ is the current variational posterior of $x_i$, denoted $q_i(x_i)$, i.e. $m_{i\to a}(x_i) = q_i(x_i) = \prod_{a\in N(i)} m_{a\to i}(x_i)$, where $N(i)$ are the factors connected to variable $i$.

For conjugate-exponential models, the messages to a particular variable $x_i$ will all be in the same exponential family. Thus calculating $q_i(x_i)$ simply involves summing sufficient statistics. If, however, our model is not conjugate-exponential, there will be a variable $x_i$ which receives incoming messages which are in different exponential families, or which are not even exponential family distributions at all. Thus $q_i(x_i)$ will be some more complex distribution. Computing the required expectations becomes more involved, and worse still, the complexity of the messages (e.g. the number of possible modes) grows exponentially per iteration.

4 Non-conjugate Variational Message Passing

In this section we give some criteria under which the algorithm was conceived. We set up the required notation and describe the algorithm, and prove some important properties. Finally we give some intuition about what the algorithm is doing. The approach we take aims to fulfil certain criteria:

1. it provides a recipe for any factor;
2. it reduces to standard VMP in the case of conjugate exponential factors;
3. it allows modular implementation and combining of deterministic and stochastic factors.
NCVMP ensures that the gradients of the approximate KL divergence implied by the message match the gradients of the true KL. This means that we will have a fixed point at the correct point in parameter space: the algorithm will be at a fixed point if the gradient of the KL is zero.

We use the following notation: variable $x_i$ has current variational posterior $q_i(x_i;\theta_i)$, where $\theta_i$ is the vector of natural parameters of the exponential family distribution $q_i$. Each factor $f_a$ which is a neighbour of $x_i$ sends a message $m_{a\to i}(x_i;\phi_{a\to i})$ to $x_i$, where $m_{a\to i}$ is in the same exponential family as $q_i$, i.e. $m_{a\to i}(x_i;\phi) = \exp(\phi^T u(x_i) - \kappa(\phi))$ and $q_i(x_i;\theta) = \exp(\theta^T u(x_i) - \kappa(\theta))$, where $u(\cdot)$ are the sufficient statistics and $\kappa(\cdot)$ is the log partition function. We define $C(\theta)$ as the Hessian of $\kappa$ evaluated at $\theta$, i.e. $C_{ij}(\theta) = \partial^2\kappa(\theta)/\partial\theta_i\partial\theta_j$. It is straightforward to show that $C(\theta) = \mathrm{cov}(u(x)\,|\,\theta)$, so if the exponential family $q_i$ is identifiable, $C$ will be symmetric positive definite and therefore invertible. The factor $f_a$ contributes a term $S_a(\theta_i) = \int q_i(x_i;\theta_i)\,\langle\log f_a(x)\rangle_{-q_i(x_i)}\,dx_i$ to the KL divergence, where we have only made the dependence on $\theta_i$ explicit: this term is also a function of the variational parameters of the other variables neighbouring $f_a$. With this notation in place we are now able to describe the NCVMP algorithm.

Algorithm 1 Non-conjugate Variational Message Passing
1: Initialise all variables to uniform: $\theta_i := 0 \;\; \forall i$
2: while not converged do
3:   for all variables $i$ do
4:     for all neighbouring factors $a \in N(i)$ do
5:       $\phi_{a\to i} := C(\theta_i)^{-1}\,\dfrac{\partial S_a(\theta_i)}{\partial\theta_i}$
6:     end for
7:     $\theta_i := \sum_{a\in N(i)} \phi_{a\to i}$
8:   end for
9: end while

To motivate Algorithm 1 we give a rough proof that we will have a fixed point at the correct point in parameter space: the algorithm will be at a fixed point if the gradient of the KL divergence is zero.

Theorem 1. Algorithm 1 has a fixed point at $\{\theta_i : \forall i\}$ if and only if $\{\theta_i : \forall i\}$ is a stationary point of the KL divergence $\mathrm{KL}(q\|p)$.

Proof. Firstly define the function

$$\tilde S_a(\theta;\phi) := \int q_i(x_i;\theta)\,\log m_{a\to i}(x_i;\phi)\,dx_i, \qquad (2)$$

which is an approximation to the function $S_a(\theta)$. Since $q_i$ and $m_{a\to i}$ belong to the same exponential family, we can simplify as follows:

$$\tilde S_a(\theta;\phi) = \int q_i(x_i;\theta)\,\big(\phi^T u(x_i) - \kappa(\phi)\big)\,dx_i = \phi^T\langle u(x_i)\rangle_\theta - \kappa(\phi) = \phi^T\,\frac{\partial\kappa(\theta)}{\partial\theta} - \kappa(\phi), \qquad (3)$$

where $\langle\cdot\rangle_\theta$ implies expectation w.r.t. $q_i(x_i;\theta)$ and we have used the standard property of exponential families that $\langle u(x_i)\rangle_\theta = \partial\kappa(\theta)/\partial\theta$. Taking derivatives w.r.t. $\theta$ we have $\partial\tilde S_a(\theta;\phi)/\partial\theta = C(\theta)\phi$. Now, the update in Algorithm 1, Line 5 for $\phi_{a\to i}$ ensures that

$$C(\theta)\phi = \frac{\partial S_a(\theta)}{\partial\theta} \;\;\Rightarrow\;\; \frac{\partial\tilde S_a(\theta;\phi)}{\partial\theta} = \frac{\partial S_a(\theta)}{\partial\theta}. \qquad (4)$$

Thus this update ensures that the gradients w.r.t. $\theta_i$ of $S$ and $\tilde S$ match. The update in Algorithm 1, Line 7 for $\theta_i$ minimises an approximate local KL divergence for $x_i$:

$$\theta_i := \arg\min_\theta \Big( -H[q_i(x_i,\theta)] - \sum_{a\in N(i)} \tilde S_a(\theta;\phi_{a\to i}) \Big), \qquad (5)$$

where $H[\cdot]$ is the entropy. If and only if we are at a fixed point of the algorithm, we will have

$$\frac{\partial}{\partial\theta_i}\Big( -H[q_i(x_i,\theta_i)] - \sum_{a\in N(i)} \tilde S_a(\theta_i;\phi_{a\to i}) \Big) = 0$$

for all variables $i$. By (4), if and only if we are at a fixed point (so that $\theta_i$ has not changed since updating $\phi$), we have

$$\frac{\partial}{\partial\theta_i}\Big( -H[q_i(x_i,\theta_i)] - \sum_{a\in N(i)} S_a(\theta_i) \Big) = \frac{\partial\,\mathrm{KL}(q\|p)}{\partial\theta_i} = 0 \qquad (6)$$

for all variables $i$. $\square$

Theorem 1 showed that if NCVMP converges to a fixed point, then it is at a stationary point of the KL divergence $\mathrm{KL}(q\|p)$.
In practice this point will be a minimum, because any maximum would represent an unstable equilibrium. However, unlike VMP, we have no guarantee to decrease $\mathrm{KL}(q\|p)$ at every step, and indeed we do sometimes encounter convergence problems which require damping to fix: see Section 7. Theorem 1 also gives some intuition about what NCVMP is doing. $\tilde S_a$ is a conjugate approximation to the true $S_a$ function, chosen to have the correct gradients at the current $\theta_i$. The update at variable $x_i$ for $\theta_i$ combines all these approximations from the factors involving $x_i$ to get an approximation to the local KL, and then moves $\theta_i$ to the minimum of this approximation.

Another important property of non-conjugate VMP is that it reduces to standard VMP for conjugate factors.

Theorem 2. If $\langle\log f_a(x)\rangle_{-q_i(x_i)}$, as a function of $x_i$, can be written $\phi^T u(x_i) - c$ where $c$ is a constant, then the NCVMP message $m_{a\to i}(x_i,\phi_{a\to i})$ will be the standard VMP message $m_{a\to i}(x_i,\phi)$.

Proof. To see this, note that $\langle\log f_a(x)\rangle_{-q_i(x_i)} = \phi^T u(x_i) - c$ implies $S_a(\theta) = \phi^T\langle u(x_i)\rangle_\theta - c$, where $\phi$ is the expected natural statistic under the messages from the variables connected to $f_a$ other than $x_i$. We have $S_a(\theta) = \phi^T\,\partial\kappa(\theta)/\partial\theta - c$, so $\partial S_a(\theta)/\partial\theta = C(\theta)\phi$, and from Algorithm 1, Line 5, we get $\phi_{a\to i} := C(\theta)^{-1}C(\theta)\phi = \phi$, the standard VMP message. $\square$

The update for $\theta_i$ in Algorithm 1, Line 7 is the same as for VMP, and Theorem 2 shows that for conjugate factors the messages sent to the variables are the same as for VMP. Thus NCVMP is a generalisation of VMP. NCVMP can alternatively be derived by assuming that the incoming messages to $x_i$ are fixed apart from $m_{a\to i}(x_i;\phi)$ and calculating a fixed-point update for $m_{a\to i}(x_i;\phi)$. Gradient matching in NCVMP can be seen as analogous to moment matching in EP. Due to space limitations we defer the details to the supplementary material.

4.1 Gaussian variational distribution

Here we describe the NCVMP updates for a Gaussian variational distribution $q(x) = \mathcal{N}(x; m, v)$ and approximate factor $\tilde f(x; m_f, v_f)$. Although these can be derived from the generic formula using natural parameters, it is mathematically more convenient to use the mean and variance (NCVMP is parameterisation invariant, so it is valid to do this):

$$\frac{1}{v_f} = -2\,\frac{dS(m,v)}{dv}, \qquad \frac{m_f}{v_f} = \frac{m}{v_f} + \frac{dS(m,v)}{dm}. \qquad (7)$$
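To make Eq. (7) concrete, here is a minimal hedged sketch (ours, not the paper's Infer.NET implementation) that forms the Gaussian NCVMP message from numerical gradients of $S(m, v)$. Note the resulting $v_f$ can be negative, i.e. the message may be improper, which is acceptable here since it is only ever combined multiplicatively with the other messages.

```python
import numpy as np

def ncvmp_gaussian_message(S, m, v, eps=1e-5):
    """Gaussian NCVMP message (m_f, v_f) of Eq. (7).

    S(m, v) is the factor's local term <log f>_q as a function of the
    mean and variance of q(x) = N(m, v); its gradients are taken
    numerically here purely for illustration.
    """
    dS_dm = (S(m + eps, v) - S(m - eps, v)) / (2.0 * eps)
    dS_dv = (S(m, v + eps) - S(m, v - eps)) / (2.0 * eps)
    v_f = -1.0 / (2.0 * dS_dv)   # from 1/v_f = -2 dS/dv
    m_f = m + v_f * dS_dm        # from m_f/v_f = m/v_f + dS/dm
    return m_f, v_f
```

For example, $S$ could be the tilted bound of Eq. (9) below, re-optimising its parameter $a$ inside each call.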
5 Logistic regression models

We illustrate NCVMP on Bayesian binary and multinomial logistic regression. The regression part of the model is standard: $g_{kn} = \sum_{d=1}^{D} W_{kd} X_{dn} + m_k$, where $g$ is the auxiliary variable, $W$ is a matrix of weights with a standard normal prior, $X$ is the design matrix, and $m$ is a per-class mean, which is also given a standard normal prior. For binary regression we just have $k = 1$, and the observation model is $p(y=1 \mid g_{1n}) = \sigma(g_{1n})$, where $\sigma(x) = 1/(1+e^{-x})$ is the logistic function. In the multinomial case, $p(y=k \mid g_{:n}) = \sigma_k(g_{:n})$, where $\sigma_k(x) = e^{x_k}/\sum_l e^{x_l}$ is the "softmax" function. The VMP messages for the regression part of the model are standard, so we omit the details due to space limitations.

5.1 Binary logistic regression

For logistic regression we require the factor $f(s, x) = \sigma(x)^s(1-\sigma(x))^{1-s}$, where we assume $s$ is observed. The log factor is $sx - \log(1+e^x)$. There are two problems: we cannot analytically compute expectations with respect to $x$, and we need to optimise the variational parameters. [9] propose the "quadratic" bound on the integrand

$$\sigma(x) \;\ge\; \tilde\sigma(x,t) = \sigma(t)\exp\Big( (x-t)/2 - \frac{\lambda(t)}{2}\,(x^2 - t^2) \Big), \qquad (8)$$

where $\lambda(t) = \tanh(t/2)/(2t) = (\sigma(t)-1/2)/t$. It is straightforward to analytically optimise $t$ to make the bound as tight as possible. The bound is conjugate to a Gaussian, but its performance can be poor. An alternative, proposed in [18], is to bound the integral:

$$\langle \log f(x) \rangle_q \;\ge\; sm - \tfrac{1}{2}a^2 v - \log\big(1 + e^{m+(1-2a)v/2}\big), \qquad (9)$$

where $m, v$ are the mean and variance of $q(x)$, and $a$ is a variational parameter which can be optimised using the fixed-point iteration $a := \sigma(m + (1-2a)v/2)$. We refer to this as the "tilted" bound. This bound is not conjugate to a Gaussian, but we can calculate the NCVMP message, which has parameters

$$\frac{1}{v_f} = a(1-a), \qquad \frac{m_f}{v_f} = \frac{m}{v_f} + s - a,$$

where we have assumed $a$ has been optimised. A final possibility is to use quadrature to calculate the gradients of $S(m,v)$ directly. The NCVMP message then has parameters

$$\frac{1}{v_f} = \frac{\langle x\sigma(x)\rangle_q - m\,\langle\sigma(x)\rangle_q}{v}, \qquad \frac{m_f}{v_f} = \frac{m}{v_f} + s - \langle\sigma(x)\rangle_q.$$

The univariate expectations $\langle\sigma(x)\rangle_q$ and $\langle x\sigma(x)\rangle_q$ can be efficiently computed using Gauss–Hermite or Clenshaw–Curtis quadrature.

5.2 Multinomial softmax regression

Consider the softmax factor $f(x, p) = \prod_{k=1}^{K}\delta\big(p_k - \sigma_k(x)\big)$, where the $x_k$ are real valued and $p$ is a probability vector with current Dirichlet variational posterior $q(p) = \mathrm{Dir}(p; d)$. We can integrate out $p$ to give the log factor $\log f(x) = \sum_{k=1}^{K}(d_k - 1)x_k - (d_. - K)\log\sum_l e^{x_l}$, where we define $d_. := \sum_{k=1}^{K} d_k$. Let the incoming message from $x$ be $q(x) = \prod_{k=1}^{K}\mathcal{N}(x_k; m_k, v_k)$. How should we deal with the $\log\sum_l e^{x_l}$ term? The approach used by [3] is a linear Taylor expansion of the log, which is accurate for small variances $v$:

$$\Big\langle \log\sum_i e^{x_i} \Big\rangle \;\le\; \log\sum_i\big\langle e^{x_i}\big\rangle = \log\sum_i e^{m_i + v_i/2}, \qquad (10)$$

which we refer to as the "log" bound. The messages are still not conjugate, so some numerical method must still be used to learn $m$ and $v$: while [3] used LBFGS, we will use NCVMP. Another bound was proposed by [5]:

$$\log\sum_{k=1}^{K} e^{x_k} \;\le\; a + \sum_{k=1}^{K}\log\big(1 + e^{x_k - a}\big), \qquad (11)$$

where $a$ is a new variational parameter. Combining with (8) we get the "quadratic" bound on the integrand, with $K+1$ variational parameters. This has conjugate updates, so modularity can be achieved without NCVMP, but as we will see, the results are often poor. [5] derives coordinate-ascent fixed-point updates to optimise $a$, but reducing to a univariate optimisation in $a$ and using Newton's method is much faster (see the supplementary material). Inspired by the univariate "tilted" bound in Equation 9, we propose the multivariate tilted bound:

$$\Big\langle \log\sum_i e^{x_i} \Big\rangle \;\le\; \frac{1}{2}\sum_j a_j^2 v_j + \log\sum_i e^{m_i + (1-2a_i)v_i/2}. \qquad (12)$$

Setting $a_k = 0$ for all $k$ we recover Equation 10 (hence this is the "tilted" version). Maximisation with respect to $a$ can be achieved by the fixed-point update (see the supplementary material) $a := \sigma\big(m + \tfrac{1}{2}(1-2a)\circ v\big)$, where $\sigma$ here denotes the softmax and $\circ$ the elementwise product. This is an $O(K)$ operation, since the denominator of the softmax function is shared. For the softmax factor, quadrature is not viable because of the high dimensionality of the integrals. From Equation 7, the NCVMP messages using the tilted bound have natural parameters

$$\frac{1}{v_{kf}} = (d_. - K)\,a_k(1-a_k), \qquad \frac{m_{kf}}{v_{kf}} = \frac{m_k}{v_{kf}} + d_k - 1 - (d_. - K)\,a_k,$$

where we have assumed $a$ has been optimised. As an alternative, we suggest choosing whether to send the message resulting from the quadratic bound or the tilted bound depending on which is currently the tighter, referred to as the "adaptive" method. Finally we consider a simple Taylor series expansion of the integrand around the mean of $x$, denoted "Taylor", and the multivariate quadratic bound of [4], denoted "Bohning" (see the supplementary material for details).
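As a self-contained hedged sketch (ours) of the multivariate tilted bound (12) and its $O(K)$ fixed-point update: since (12) holds for any $a$, a few fixed-point sweeps suffice, and setting $a = 0$ recovers the log bound (10).

```python
import numpy as np

def logsumexp(g):
    """Numerically safe log(sum(exp(g)))."""
    gmax = g.max()
    return gmax + np.log(np.sum(np.exp(g - gmax)))

def tilted_bound(m, v, iters=50):
    """Upper bound (12) on E[log sum_k exp(x_k)] under q = prod_k N(m_k, v_k),
    with a optimised by the fixed point a := softmax(m + (1 - 2a) * v / 2)."""
    a = np.full_like(m, 1.0 / m.size)
    for _ in range(iters):
        g = m + 0.5 * (1.0 - 2.0 * a) * v
        a = np.exp(g - logsumexp(g))  # softmax, O(K) per sweep
    return 0.5 * np.sum(a**2 * v) + logsumexp(m + 0.5 * (1.0 - 2.0 * a) * v)
```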
6 Results

Here we aim to present the typical compromise between performance and modularity that NCVMP addresses. We will see that for both binary logistic and multinomial softmax models, achieving conjugate updates by being constrained to quadratic bounds is sub-optimal, in terms of estimates of variational parameters, marginal likelihood estimation, and predictive performance. NCVMP gives the freedom to choose a wider class of bounds, or even to use efficient quadrature methods in the univariate case, while maintaining simplicity and modularity.

6.1 The logistic factor

We first test the logistic factor methods of Section 5.1 at the task of estimating the toy model $\sigma(x)\psi(x)$ with varying Gaussian prior $\psi(x)$ (see Figure 1(a)). We calculate the true mean and variance using quadrature. The quadratic bound has the largest errors for the posterior mean, and the posterior variance is severely underestimated. In contrast, NCVMP using quadrature, while being slightly more computationally costly, approximates the posterior much more accurately: the error here is due only to the VB approximation. Using the tilted bound with NCVMP gives more robust estimates of the variance than the quadratic bound as the prior mean changes. However, both the quadratic and tilted bounds underestimate the variance as the prior variance increases.

Figure 1: Logistic regression experiments. (a) Posterior mean and variance estimates of $\sigma(x)\psi(x)$ with varying prior $\psi(x)$, comparing NCVMP with quadrature, NCVMP with the tilted bound, and VMP with the quadratic bound. Left: varying the prior mean with fixed prior variance v = 10. Right: varying the prior variance with fixed prior mean m = 0. (b) Log likelihood of the true regression coefficients under the approximate posterior for 10 synthetic logistic regression datasets.

6.2 Binary logistic regression

We generated ten synthetic logistic regression datasets with N = 30 data points and P = 8 covariates. We evaluated the results in terms of the log likelihood of the true regression coefficients under the approximate posterior, a measure which penalises poorly estimated posterior variances. Figure 1(b) compares the performance of non-conjugate VMP using quadrature and VMP using the quadratic bound. For four of the ten datasets the quadratic bound finds very poor solutions. Non-conjugate VMP finds a better solution in seven out of the ten datasets, and there is marginal
As K is increased the relative error made using the quadratic bound increases, whereas both the log and the tilted bound get tighter. In agreement with [5] we find the strength of the quadratic bound (11) is in the high variance case, and Bohning?s bound [4] is very loose under all conditions. Both the log and tilted bound are extremely accurate for variances v < 1. In fact, the log and tilted bounds are asymptotically optimal as v ? 0. ?Taylor? gives accurate results but is not a bound, so convergence is not guaranteed and the global bound on the marginal likelihood is lost. The spread of the means u does not have much of an effect on the tightness of these bounds. These results show that even when quadrature is not an option, much tighter bounds can be found if the constraint of requiring quadratic bounds imposed by VMP is relaxed. For the remainder of the paper we consider only the quadratic, log and tilted bounds. 4 log(relative abs error) 2 ?1 0 0 ?2 ?2 ?2 ?4 ?4 quadratic log tilted Bohning Taylor ?6 ?8 ?6 ?10 ?8 ?10 0 2 101 102 classes K 103 ?12 10?1 100 101 input variance v ?4 ?5 ?6 ?7 102 Figure 2: Log10 of the relative absolute error approximating E log 6.4 ?3 10?1 P 100 101 mean variance u exp, averaged over 100 runs. Multinomial softmax regression Synthetic data. For synthetic data sampled from the generative model we know the ground truth coefficients and can control characteristics of the data. We first investigate the performance with sample size N , with fixed number of features P = 6, classes K = 4, and no noise (apart from the inherent noise of the softmax function). As expected our ability to recover the ground truth regression coefficients improves with increasing N (see Figure 3(a), left). However, we see that the methods using the tilted bound perform best, closely followed by the log bound. Although the quadratic bound has comparable performance for small N < 200 it performs poorly with larger N due to its weakness at small variances. The choice of bound impacts the speed of convergence (see Figure 3(a), right). The log bound performed almost as well as the tilted bound at recovering coefficients it takes many more iterations to converge. The extra flexibility of the tilted bound allows faster convergence, analogous to parameter expansion [16]. For small N the tilted bound, log bound and adaptive method converge rapidly, but as N increases the quadratic bound starts to converge much more slowly, as do the tilted and adaptive methods to a lesser extent. ?Adaptive? converges fastest because the quadratic bound gives good initial updates at high variance, and the tilted bound takes over once the variance decreases. We vary the level of noise in the synthetic data, fixing N = 200, in Figure 3(b). For all but very large noise values the tilted bound performs best. UCI datasets. We test the multinomial regression model on three standard UCI datasets: Iris (N = 150, D = 4, K = 3), Glass (N = 214, D = 8, K = 6) and Thyroid (N = 7200, D = 21, K = 3), 7 0.2 0.0 101 102 103 sample size N 40 30 20 10 0 101 0.50 0.45 0.40 0.35 0.30 0.25 0.20 ?3 10 102 103 sample size N Iterations to convergence 0.4 0.55 50 RMS error of coefficents 0.6 Iterations to convergence RMS error of coefficents adaptive tilted quadratic log 0.8 10?2 10?1 100 synthetic noise variance (a) Varying sample size 50 40 30 20 10 0 10?3 10?2 10?1 100 synthetic noise variance (b) Varying noise level Figure 3: Left: root mean squared error of inferred regression coefficients. Right: iterations to convergence. 
Table 1: Average results and standard deviations on three UCI datasets, based on 16 random 50:50 training-test splits. Adaptive and tilted use NCVMP; quadratic and probit use VMP.

  Iris                    Quadratic         Adaptive          Tilted            Probit
  Marginal likelihood     −65 ± 3.5         −31.2 ± 2         −31.2 ± 2         −37.3 ± 0.79
  Predictive likelihood   −0.216 ± 0.07     −0.201 ± 0.039    −0.201 ± 0.039    −0.215 ± 0.034
  Predictive error        0.0892 ± 0.039    0.0642 ± 0.037    0.065 ± 0.038     0.0592 ± 0.03

  Glass
  Marginal likelihood     −319 ± 5.6        −193 ± 3.9        −193 ± 5.4        −201 ± 2.6
  Predictive likelihood   −0.58 ± 0.12      −0.542 ± 0.11     −0.531 ± 0.1      −0.503 ± 0.095
  Predictive error        0.197 ± 0.032     0.200 ± 0.032     0.200 ± 0.032     0.195 ± 0.035

  Thyroid
  Marginal likelihood     −1814 ± 43        −909 ± 30         −916 ± 31         −840 ± 18
  Predictive likelihood   −0.114 ± 0.019    −0.0793 ± 0.014   −0.0753 ± 0.008   −0.0916 ± 0.010
  Predictive error        0.0241 ± 0.0026   0.0225 ± 0.0024   0.0226 ± 0.0023   0.0276 ± 0.0028

Here we have also included "Probit", corresponding to a Bayesian multinomial probit regression model, estimated using VMP and similar in setup to [6], except that we use EP to approximate the predictive distribution, rather than sampling. On all three datasets, the marginal likelihood calculated using the tilted or adaptive bounds is optimal among the logistic models ("Probit" has a different underlying model, so differences in marginal likelihood are confounded by the Bayes factor). In terms of predictive performance the quadratic bound seems to be slightly worse across the datasets, with the performance of the other methods varying between datasets. We did not compare to the log bound, since it is dominated by the tilted bound and is considerably slower to converge.

7 Discussion

NCVMP is not guaranteed to converge. Indeed, for some models we have found convergence to be a problem, which can be alleviated by damping: if the NCVMP message is $m_{f\to i}(x_i)$, then send the message $m_{f\to i}(x_i)^{1-\alpha}\, m^{\mathrm{old}}_{f\to i}(x_i)^{\alpha}$, where $m^{\mathrm{old}}_{f\to i}(x_i)$ is the previous message sent to $i$ and $0 \le \alpha < 1$ is a damping factor. The fixed points of the algorithm remain unchanged.

We have introduced Non-conjugate Variational Message Passing, which extends variational Bayes to non-conjugate models while maintaining the convenient message passing framework of VMP and allowing the freedom to choose the most accurate available method to approximate the required expectations. Deterministic and stochastic factors can be combined in a modular fashion, and conjugate parts of the model can be handled with standard VMP. We have shown NCVMP to be of practical use for fitting Bayesian binary and multinomial logistic models. We derived a new bound for the softmax integral which is tighter than other commonly used bounds, but has variational parameters that are still simple to optimise. Tightness of the bound is valuable both in terms of better approximating the posterior and in giving a closer approximation to the marginal likelihood, which may be of interest for model selection.

References

[1] H. Attias. A variational Bayesian framework for graphical models. Advances in Neural Information Processing Systems, 12(1-2):209–215, 2000.
[2] M. Beal and Z. Ghahramani. Variational Bayesian learning of directed graphical models with hidden variables. Bayesian Analysis, 1(4):793–832, 2006.
[3] D. Blei and J. Lafferty. A correlated topic model of science.
Annals of Applied Statistics, 2007.
[4] D. Bohning. Multinomial logistic regression algorithm. Annals of the Institute of Statistical Mathematics, 44:197–200, 1992. doi:10.1007/BF00048682.
[5] G. Bouchard. Efficient bounds for the softmax and applications to approximate inference in hybrid models. In NIPS Workshop on Approximate Inference in Hybrid Models, 2007.
[6] M. Girolami and S. Rogers. Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Computation, 18(8):1790–1817, 2006.
[7] A. Honkela, T. Raiko, M. Kuusela, M. Tornio, and J. Karhunen. Approximate Riemannian conjugate gradient learning for fixed-form variational Bayes. Journal of Machine Learning Research, 11:3235–3268, 2010.
[8] A. Honkela, M. Tornio, T. Raiko, and J. Karhunen. Natural conjugate gradient in variational inference. In M. Ishikawa, K. Doya, H. Miyamoto, and T. Yamakawa, editors, ICONIP (2), volume 4985 of Lecture Notes in Computer Science, pages 305–314. Springer, 2007.
[9] T. S. Jaakkola and M. I. Jordan. A variational approach to Bayesian logistic regression models and their extensions. In International Conference on Artificial Intelligence and Statistics, 1996.
[10] M. E. Khan, B. M. Marlin, G. Bouchard, and K. P. Murphy. Variational bounds for mixed-data factor analysis. In Advances in Neural Information Processing Systems (NIPS) 23, 2010.
[11] B. M. Marlin, M. E. Khan, and K. P. Murphy. Piecewise bounds for estimating Bernoulli-logistic latent Gaussian models. In Proceedings of the 28th Annual International Conference on Machine Learning, 2011.
[12] T. P. Minka. Expectation propagation for approximate Bayesian inference. In Uncertainty in Artificial Intelligence, volume 17, 2001.
[13] T. P. Minka, J. M. Winn, J. P. Guiver, and D. A. Knowles. Infer.NET 2.4, 2010. Microsoft Research Cambridge. http://research.microsoft.com/infernet.
[14] H. Nickisch and C. E. Rasmussen. Approximations for binary Gaussian process classification. Journal of Machine Learning Research, 9:2035–2078, Oct. 2008.
[15] M. Opper and C. Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009.
[16] Y. A. Qi and T. Jaakkola. Parameter expanded variational Bayesian methods. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems (NIPS) 19, pages 1097–1104. MIT Press, 2006.
[17] T. Raiko, H. Valpola, M. Harva, and J. Karhunen. Building blocks for variational Bayesian learning of latent variable models. Journal of Machine Learning Research, 8:155–201, 2007.
[18] L. K. Saul and M. I. Jordan. A mean field learning algorithm for unsupervised neural networks. Learning in Graphical Models, 1999.
[19] M. P. Wand, J. T. Ormerod, S. A. Padoan, and R. Fruhwirth. Variational Bayes for elaborate distributions. In Workshop on Recent Advances in Bayesian Computation, 2010.
[20] J. Winn and C. M. Bishop. Variational message passing. Journal of Machine Learning Research, 6(1):661, 2006.
Learning to Search Efficiently in High Dimensions

Zhen Li† (UIUC, [email protected]); Huazhong Ning (Google Inc., [email protected]); Liangliang Cao (IBM T. J. Watson Research Center, [email protected]); Tong Zhang (Rutgers University, [email protected]); Yihong Gong (NEC China, [email protected]); Thomas S. Huang† (UIUC, [email protected])

† These authors were sponsored in part by the U.S. National Science Foundation under grant IIS-1049332 EAGER and by the Beckman Seed Grant.

Abstract

High-dimensional similarity search in large-scale databases becomes an important challenge due to the advent of the Internet. For such applications, specialized data structures are required to achieve computational efficiency. Traditional approaches relied on algorithmic constructions that are often data-independent (such as Locality Sensitive Hashing) or weakly dependent (such as kd-trees and k-means trees). While supervised learning algorithms have been applied to related problems, those proposed in the literature mainly focused on learning hash codes optimized for compact embedding of the data rather than for search efficiency. Consequently, such an embedding has to be used with linear scan or another search algorithm; hence learning to hash does not directly address the search efficiency issue. This paper considers a new framework that applies supervised learning to directly optimize a data structure that supports efficient large-scale search. Our approach takes both search quality and computational cost into consideration. Specifically, we learn a boosted search forest that is optimized using pairwise-similarity-labeled examples. The output of this search forest can be efficiently converted into an inverted-indexing data structure, which can leverage modern text search infrastructure to achieve both scalability and efficiency. Experimental results show that our approach significantly outperforms state-of-the-art learning to hash methods (such as spectral hashing), as well as state-of-the-art high-dimensional search algorithms (such as LSH and k-means trees).

1 Introduction

The design of efficient algorithms for large-scale similarity search (such as nearest neighbor search) has been a central problem in computer science. This problem becomes increasingly challenging in modern applications because the scale of modern databases has grown substantially, and many of them are composed of high-dimensional data. This means that classical algorithms such as kd-trees are no longer suitable [25], and new algorithms have to be designed to handle high dimensionality. However, existing approaches for large-scale search in high dimensions relied mainly on algorithmic constructions that are either data-independent or only weakly dependent on the data. Motivated by the success of machine learning in the design of ranking functions for information retrieval (the learning to rank problem [13, 9]) and in the design of compact embeddings into binary codes (the learning to hash problem [10]), it is natural to ask whether we can use machine learning (in particular, supervised learning) to optimize data structures that can improve search efficiency. We call this problem learning to search, and this paper demonstrates that supervised learning can lead to improved search efficiency over algorithms that are not optimized using supervised information.

To leverage machine learning techniques, we need to consider a scalable search structure with parameters optimizable using labeled data.
The data structure considered in this paper is motivated by the success of the vocabulary tree method in image retrieval [18, 27, 15], which has been adopted in modern image search engines to find near-duplicate images. Although the original proposal was based on a "bag of local patches" image representation, this paper considers a general setting where each database item is represented as a high-dimensional vector. Recent advances in computer vision show that it is desirable to represent images as numerical vectors of as high as thousands or even millions of dimensions [12, 28]. We can easily adapt the vocabulary tree to this setting: we partition the high-dimensional space into disjoint regions using hierarchical k-means, and regard the regions as the "vocabulary". This representation can then be integrated into an inverted-index-based text search engine for efficient large-scale retrieval (a minimal sketch of such an index appears below). In this paper, we refer to this approach as k-means trees, because the underlying algorithm is the same as in [5, 16]. Note that k-means trees can be used for high-dimensional data, while the classical kd-trees [1, 3, 22] are limited to dimensions of no more than a few hundred.

In this paper, we also adopt the tree structural representation, and propose a learning algorithm to construct the trees using supervised data. It is worth noting that the k-means trees approach suffers from several drawbacks that can be addressed in our approach. First, k-means trees use only an unsupervised clustering algorithm, which is not optimized for search purposes; as we will show in the experiments, by employing supervised information, our learning to search approach can achieve significantly better performance. Second, the underlying k-means clustering limits the k-means tree approach to Euclidean similarity measures (though it is possible to extend it to Bregman distances), while our approach can be easily applied to more general metrics (including semantic ones) that prove effective in many scenarios [8, 11, 7]. Nevertheless, our experiments still focus on Euclidean distance search, in order to show the advantage over k-means trees.

The learning to search framework proposed in this paper is based on a formulation of search as a supervised learning problem that jointly optimizes two key factors of search: retrieval quality and computational cost. Specifically, we learn a set of selection functions in the form of a tree ensemble, as motivated by the aforementioned kd-trees and k-means trees approaches. However, unlike the traditional methods that are based only on unsupervised information, our trees are learned under the supervision of pairwise similarity information, and are optimized for the defined search criteria, i.e., to maximize the retrieval quality while keeping the computational cost low. In order to form the forest, boosting is employed to learn the trees sequentially. We call this particular method Boosted Search Forest (BSF).

It is worth comparing the influential Locality Sensitive Hashing (LSH) [6, 2] approach with our learning to search approach. The idea of LSH is to employ random projections to approximate the Euclidean distance of the original features. An inverted index structure can be constructed based on the hashing results [6], which facilitates efficient search. However, the LSH algorithm is completely data-independent (using random projections), and thus the data structure is constructed without any learning. While interesting theoretical results can be obtained for LSH, as we shall see in the experiments, in practice its performance is inferior to the data-dependent search structures optimized via the learning to search approach of this paper.
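To make the inverted-index idea concrete, here is a minimal hedged sketch of our own (not the paper's implementation): each database vector is quantized to a leaf of the vocabulary / k-means tree, and an inverted index maps each leaf id to the items that fall in it, so candidate retrieval for a query is a single lookup.

```python
from collections import defaultdict

def build_inverted_index(leaf_ids):
    """leaf_ids[i] is the tree leaf ('visual word') of database item i."""
    index = defaultdict(list)
    for item, leaf in enumerate(leaf_ids):
        index[leaf].append(item)
    return index

def candidate_set(index, query_leaf):
    """Constant-time candidate selection: all items sharing the query's leaf."""
    return index.get(query_leaf, [])
```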
While interesting theoretical results can be obtained for LSH, as we shall see in the experiments, in practice its performance is inferior to the data-dependent search structures optimized via the learning to search approach of this paper.

Another closely related problem is learning to hash, which includes BoostSSC [20], Spectral Hashing [26], Restricted Boltzmann Machines [19], Semi-Supervised Hashing [24], Hashing with Graphs [14], etc. However, the motivation of the hashing problem is fundamentally different from that of the search problem considered in this paper. Specifically, the goal of learning to hash is to embed data into compact binary codes so that the Hamming distance between two codes reflects their original similarity. In order to perform efficient Hamming distance search using the embedded representation, an additional efficient algorithmic structure is still needed. (How to come up with such an efficient algorithm is an issue usually ignored by learning to hash algorithms.) Compact hash codes were traditionally believed to achieve low search latency via linear scan, hash table lookup, or more sophisticated search mechanisms. As we shall see in our experiments, however, linear scan over the Hamming space is not a feasible solution for large scale search problems. Moreover, if another search data structure is implemented on top of the hash codes, the optimality of the embedding is likely to be lost, which usually yields a suboptimal solution inferior to directly optimizing a search criterion.

2 Background

Given a database X = {x_1, ..., x_n} and a query q, the search problem is to return the top ranked items from the database that are most similar to the query. Let s(q, x) >= 0 be a ranking function that measures the similarity between q and x. In large-scale search applications, the database size n can be billions or larger. Explicitly evaluating the ranking function s(q, x) against all samples is very expensive; on the other hand, in order to achieve accurate search results, a sophisticated ranking function s(q, x) is indispensable. Modern search engines handle this problem by first employing a non-negative selection function T(q, x) that selects a small set of candidates Xq = {x : T(q, x) > 0, x in X} containing most of the top ranked items (T(q, x) = 0 means "not selected"). This is called the candidate selection stage, and it is followed by a reranking stage where the more costly ranking function s(q, x) is evaluated on Xq. Two properties of the selection function T(q, x) are: 1) it must be evaluated much more efficiently than the ranking function s(q, x); in particular, for a given query, the complexity of evaluating T(q, x) over the entire dataset should be sublinear or even constant, which is usually made possible by dedicated data structures such as inverted index tables; 2) the selection function is an approximation to s(q, x); in other words, with high probability, the more similar q and x are, the more likely x is contained in Xq (which means T(q, x) should take a larger value).

This paper focuses on the candidate selection stage, i.e., learning the selection function T(q, x). In order to achieve both effectiveness and efficiency, three aspects need to be taken into account:

- Xq can be efficiently obtained (this is ensured by the properties of the selection function).
- The size of Xq should be small, since it determines the computational cost of reranking.
- The retrieval quality of Xq, measured by the total similarity \sum_{x \in Xq} s(q, x), should be large.
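Before formalizing these criteria, here is a minimal sketch (our illustration, not code from the paper) of the two-stage pipeline described above: a cheap selection function proposes candidates through an inverted index, and the expensive ranking function reranks only those. The helper `selection_keys` is a hypothetical stand-in for whatever bucketing the underlying data structure induces.

```python
import heapq

def two_stage_search(q, database, selection_keys, index, rank_fn, top_k=10):
    """Candidate selection via an inverted index, then reranking.

    index maps a bucket key to a list of item ids; selection_keys(v) returns
    the bucket keys of a vector. The costly rank_fn s(q, x) is evaluated only
    on the candidate set, never on the whole database.
    """
    # Stage 1: candidate selection, sublinear via the inverted index.
    candidates = set()
    for key in selection_keys(q):
        candidates.update(index.get(key, []))
    # Stage 2: rerank the (small) candidate set with the expensive s(q, x).
    scored = ((rank_fn(q, database[i]), i) for i in candidates)
    return heapq.nlargest(top_k, scored)
```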
Therefore, our objective is to retrieve a set of items that maximizes the ranking quality while lowering the computational cost (keeping the candidate set as small as possible). In addition, to achieve search efficiency, the selection stage employs the inverted index structure used in standard text search engines to handle web-scale datasets.

3 Learning to Search

This section presents the proposed learning to search framework. We present the general formulation first, followed by a specific algorithm based on boosted search trees.

3.1 Problem Formulation

As stated in Section 2, the set of candidates returned for a query q is given by Xq = {x in X : T(q, x) > 0}. Intuitively, the quality of this candidate set can be measured by the overall similarities, while the reranking cost is linear in |Xq|. Mathematically, we define:

Retrieval Quality:  Q(T) = \sum_{q} \sum_{x \in X} s(q, x) \, 1(T(q, x) > 0)   (1)

Computational Cost:  C(T) = \sum_{q} \sum_{x \in X} 1(T(q, x) > 0)   (2)

where 1(\cdot) is the indicator function. The learning to search framework considers the search problem as a machine learning problem that finds the optimal selection function T as follows:

\max_{T} Q(T)  subject to  C(T) \le C_0,   (3)

where C_0 is the upper bound on computational cost. Alternatively, we can rewrite the optimization problem in (3) by applying a Lagrange multiplier:

\max_{T} Q(T) - \lambda C(T),   (4)

where \lambda is a tuning parameter that balances the retrieval quality and the computational cost.

To simplify the learning process, we assume that the queries are randomly drawn from the database. Let x_i and x_j be two arbitrary samples in the dataset, and let s_{ij} = s(x_i, x_j) \in {1, 0} indicate whether they are "similar" or "dissimilar". Problem (4) becomes:

\max_{T} J(T) = \max_{T} \left[ \sum_{i,j} s_{ij} 1(T(x_i, x_j) > 0) - \lambda \sum_{i,j} 1(T(x_i, x_j) > 0) \right] = \max_{T} \sum_{i,j} z_{ij} 1(T(x_i, x_j) > 0)   (5)

where

z_{ij} = 1 - \lambda for similar pairs,  z_{ij} = -\lambda for dissimilar pairs.   (6)

3.2 Learning Ensemble Selection Function via Boosting

Note that (5) is nonconvex in T and thus difficult to optimize. Inspired by AdaBoost [4], we employ the standard trick of using a convex relaxation, and in particular we consider the exponential loss as a convex surrogate:

\min_{T} L(T) = \sum_{i,j} e^{-z_{ij} T(x_i, x_j)} = E[e^{-z T(x_i, x_j)}].   (7)

Here we replace the summation over (x_i, x_j) \in X \times X by the expectation over two i.i.d. random variables x_i and x_j. We also drop the subscripts of z_{ij} and regard z as a random variable conditioned on x_i and x_j. We define the ensemble selection function as a weighted sum of base selection functions:

T(x_i, x_j) = \sum_{m=1}^{M} c_m \, t_m(x_i, x_j).   (8)

Suppose we have learnt M base functions and are about to learn the (M+1)-th selection function, denoted t(x_i, x_j), with weight c. The updated loss function is hence given by

\min_{t} L(t, c) = E[e^{-z [T(x_i, x_j) + c\, t(x_i, x_j)]}] = E_w[e^{-c z t(x_i, x_j)}],   (9)

where E_w[\cdot] denotes the weighted expectation with weights

w_{ij} = w(x_i, x_j) = e^{-z_{ij} T(x_i, x_j)} = e^{-(1-\lambda) T(x_i, x_j)} for similar pairs,  e^{\lambda T(x_i, x_j)} for dissimilar pairs.   (10)

This reweighting scheme leads to the boosting procedure in Algorithm 1. In many application scenarios, each base selection function t(x_i, x_j) takes only binary values 1 or 0. Thus, we may minimize L(t, c) by choosing the optimal value of t(x_i, x_j) for any given pair (x_i, x_j).

Case 1: t(x_i, x_j) = 0:  L(t, c) = E_w[e^0] = 1.   (11)

Case 2: t(x_i, x_j) = 1:  L(t, c) = E_w[e^{-zc}] = e^{-(1-\lambda)c} \, P_w[s_{ij} = 1 | x_i, x_j] + e^{\lambda c} \, P_w[s_{ij} = 0 | x_i, x_j].   (12)

Comparing the two cases leads to:
t^*(x_i, x_j) = 1 if P_w[s_{ij} = 1 | x_i, x_j] > (1 - e^{-\lambda c}) / (1 - e^{-c}), and 0 otherwise.   (13)

To find the optimal c, we first decompose L in the following way:

L(t, c) = E_w[e^{-c z t(x_i, x_j)}] = P_w[t(x_i, x_j) = 0 | x_i, x_j] + e^{-c(1-\lambda)} \, P_w[t(x_i, x_j) = 1, s_{ij} = 1 | x_i, x_j] + e^{c\lambda} \, P_w[t(x_i, x_j) = 1, s_{ij} = 0 | x_i, x_j].   (14)

Taking the derivative of L with respect to c and setting it to zero, we arrive at the optimal c:

c^* = \log \frac{(1-\lambda) \, P_w[t(x_i, x_j) = 1, s_{ij} = 1 | x_i, x_j]}{\lambda \, P_w[t(x_i, x_j) = 1, s_{ij} = 0 | x_i, x_j]}.   (15)

Algorithm 1 Boosted Selection Function Learning
Input: a set of data points X; pairwise similarities s_{ij} in {0, 1}; initial weights w_{ij} = 1
1: for m = 1, 2, ..., M do
2:   Learn a base selection function t_m(x_i, x_j) based on weights w_{ij}
3:   Update ensemble: T(x_i, x_j) <- T(x_i, x_j) + c_m \, t_m(x_i, x_j)
4:   Update weights: w_{ij} <- w_{ij} \, e^{-c_m z_{ij} t_m(x_i, x_j)}
5: end for
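The following is a minimal sketch (our illustration; the variable names are ours, and an arbitrary `learn_base` routine stands in for the search-tree learner of Section 3.3) of the boosting loop of Algorithm 1 together with the closed form (15).

```python
import numpy as np

def boosted_selection(pairs, s, lam, learn_base, M=10):
    """Algorithm 1: sequentially fit 0/1-valued base selection functions t_m
    on reweighted pairs and combine them as T = sum_m c_m * t_m.

    pairs: pair features; s: 0/1 similarity labels; lam: the Lagrange
    multiplier lambda; learn_base: any routine fitting a base function
    to weighted pairs (e.g. a search tree, Sec. 3.3).
    """
    z = np.where(s == 1, 1.0 - lam, -lam)    # Eq. (6)
    w = np.ones(len(s))                      # initial weights
    ensemble = []                            # list of (c_m, t_m)
    for _ in range(M):
        t_m = learn_base(pairs, w, z)
        t_vals = t_m(pairs)                  # 0/1 on the training pairs
        # Eq. (15): c* from weighted joint masses of (t=1, s=1) and (t=1, s=0)
        p11 = np.sum(w * (t_vals == 1) * (s == 1))
        p10 = np.sum(w * (t_vals == 1) * (s == 0))
        c_m = np.log(((1.0 - lam) * p11 + 1e-12) / (lam * p10 + 1e-12))
        ensemble.append((c_m, t_m))
        w *= np.exp(-c_m * z * t_vals)       # reweighting, step 4
    return ensemble
```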
3.3 Tree Implementation of Base Selection Function

Simultaneously solving (13) and (15) leads to the optimal solutions at each iteration of boosting. In practice, however, optimality can hardly be achieved, particularly because the binary-valued base selection function t(x_i, x_j) has to be selected from limited function families to ensure learnability (finite model complexity) and, more importantly, efficiency. As mentioned in Section 2, evaluating t(q, x) for all x in X must be accomplished in sublinear or constant time when a query q arrives. This suggests using an inverted table data structure as an efficient implementation of the selection function. Specifically, t(x_i, x_j) = 1 if x_i and x_j are hashed into the same bucket of the inverted table, and 0 otherwise. This paper considers trees (which we name "search trees") as an approximation to the optimal selection functions, from which quick inverted table lookup follows naturally.

A natural consideration for tree construction is that the tree should be balanced. However, we do not need to enforce this constraint explicitly: balancedness is automatically favored by the term C in (4), since balanced trees give the minimum computational cost. In this sense, unlike other methods that explicitly enforce a balancing constraint, we relax it while jointly optimizing the retrieval quality and computational cost.

Consider a search tree with L leaf nodes {l_1, ..., l_L}. The selection function given by this tree is defined as

t(x_i, x_j) = \sum_{k=1}^{L} t(x_i, x_j; l_k),   (16)

where t(x_i, x_j; l_k) \in {0, 1} indicates whether both x_i and x_j reach the same leaf node l_k. Similar to (5), the objective function for a search tree can be written as

\max_{t} J = \max_{t} \sum_{i,j} w_{ij} z_{ij} \sum_{k=1}^{L} t(x_i, x_j; l_k) = \max_{t} \sum_{k=1}^{L} J^k,   (17)

where J^k = \sum_{i,j} w_{ij} z_{ij} t(x_i, x_j; l_k) is the partial objective function for the k-th leaf node, and w_{ij} is given by (10). The appealing additive property of the objective J makes it tractable to analyze each split as the search tree grows. In particular, we split the k-th leaf node into two child nodes k(1) and k(2) if and only if doing so increases the overall objective, i.e., J^{k(1)} + J^{k(2)} > J^k. Moreover, we optimize each split by choosing the one that maximizes J^{k(1)} + J^{k(2)}.

To find the optimal split for a leaf node l_k, we restrict ourselves to hyperplane splits, i.e., a sample x is assigned to the left child l_{k(1)} if p^T x + b = \tilde{p}^T \tilde{x} > 0 and to the right child otherwise, where \tilde{p} = [p^T b]^T and \tilde{x} = [x^T 1]^T are the augmented projection and data vectors. The splitting criterion is given by:

\max_{\|\tilde{p}\|=1} J^{k(1)} + J^{k(2)} = \max_{\|\tilde{p}\|=1} \sum_{i,j} w_{ij} z_{ij} \, 1(\tilde{p}^T \tilde{x}_i \cdot \tilde{p}^T \tilde{x}_j > 0) \approx \max_{\|\tilde{p}\|=1} \sum_{i,j} w_{ij} z_{ij} \, [\tilde{p}^T \tilde{x}_i \tilde{x}_j^T \tilde{p}] = \max_{\|\tilde{p}\|=1} \tilde{p}^T \tilde{X} M \tilde{X}^T \tilde{p},   (18)

where M_{ij} = w_{ij} z_{ij} and \tilde{X} is the stack of all augmented samples at node l_k. Note that since 1(a > 0) = (1/2) sign(a) + 1/2 is non-differentiable, we approximate it by (1/2) a + 1/2. The optimal \tilde{p} of the above objective is the eigenvector corresponding to the largest eigenvalue of \tilde{X} M \tilde{X}^T.

The search tree construction procedure is listed in Algorithm 2. In the implementation, if computational resources are critical, we may use stump functions to split nodes with a large number of samples, while applying the optimal projection p to the small nodes. The selection of stump functions is similar to that in traditional decision trees: at the given leaf node, a set of stump functions is attempted, and the one that maximizes (17) is selected if it increases the objective function.

Algorithm 2 Search Tree Construction
Input: a set of data points X; pairwise similarities s_{ij} in {0, 1}; weights w_{ij} given by (10)
Output: tree t
1: Assign X to the root; enqueue the root
2: repeat
3:   Take a leaf node l from the queue; dequeue l
4:   Find the optimal split for l by solving (18)
5:   if the criterion (17) increases then
6:     Split l into l_1 and l_2; enqueue l_1 and l_2
7:   end if
8: until the queue is empty

3.4 Boosted Search Forest

In summary, we present a Boosted Search Forest (BSF) algorithm for the learning to search problem. In the learning stage, this algorithm follows the boosting framework described in Algorithm 1 to learn an ensemble of selection functions; each base selection function, in the form of a search tree, is learned with Algorithm 2. We then build inverted indices by passing all data points through the learned search trees. In analogy to text search, each leaf node corresponds to an "index word" in the vocabulary, and the data points reaching this leaf node are the "documents" associated with this "index word". In the candidate selection stage, instead of exhaustively evaluating T(q, x) for all x in X, we only need to traverse the search trees and retrieve all items that collide with the query example in at least one tree. The selected candidate set, given by Xq = {x in X : T(q, x) > 0}, is statistically optimized to have a small size (small computational cost) while containing a large number of relevant samples (good retrieval quality).
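A minimal sketch (our notation) of the spectral split of Eq. (18): the best hyperplane at a node is read off the leading eigenvector of \tilde{X} M \tilde{X}^T, with M_{ij} = w_{ij} z_{ij} restricted to the pairs inside the node.

```python
import numpy as np

def optimal_split(X_node, w, z):
    """Eq. (18): return the augmented hyperplane [p; b] maximizing
    p~' Xt M Xt' p~ over unit-norm p~, where M_ij = w_ij * z_ij.

    X_node: (n, d) samples at the leaf; w, z: (n, n) pair weight/label
    matrices restricted to those samples.
    """
    n, d = X_node.shape
    X_aug = np.hstack([X_node, np.ones((n, 1))])   # augmented samples, (n, d+1)
    M = w * z                                      # elementwise, (n, n)
    S = X_aug.T @ M @ X_aug                        # (d+1, d+1)
    S = 0.5 * (S + S.T)                            # symmetrize before eigendecomposition
    eigvals, eigvecs = np.linalg.eigh(S)
    return eigvecs[:, -1]                          # leading eigenvector: x goes left iff p~'x~ > 0

def split_gain(X_node, w, z, p_tilde):
    """Exact objective J^{k(1)} + J^{k(2)} under the candidate split."""
    side = np.sign(np.hstack([X_node, np.ones((len(X_node), 1))]) @ p_tilde)
    same = np.equal.outer(side, side)              # pairs landing in the same child
    return np.sum(w * z * same)
```

Accepting the split only when `split_gain` exceeds the node's current J^k (which is simply `np.sum(w * z)` when all pairs share one leaf) reproduces the test in line 5 of Algorithm 2.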
4 Experiments

We evaluate the Boosted Search Forest (BSF) algorithm on several image search tasks. Although a more general similarity measure could be used, for simplicity we set s(x_i, x_j) in {0, 1} according to whether x_j is within the top K nearest neighbors (K-NN) of x_i in the designated metric space. We use K = 100 in the implementation. We compare the performance of BSF to the two most popular algorithms for high dimensional image search: k-means trees and LSH. We also compare to a representative method from the learning to hash community, spectral hashing, although that algorithm was designed for Hamming embedding rather than search. Here linear scan is adopted on top of spectral hashing for search, because its more efficient alternatives are either compared directly (such as LSH) or can easily fail, as noticed in [24]. Our experiments show that exhaustive linear scan is not scalable, especially with the long hash codes needed for better retrieval accuracy (see Table 1).

The above algorithms are the most representative; we do not compare with other algorithms for several reasons. First, LSH was reported to be superior to kd-trees [21], and spectral hashing was reported to outperform RBM and BoostSSC [26]. Second, kd-trees and their extensions work only in low dimensions, and are known to behave poorly on high dimensional data such as image search features. Third, since this paper focuses on learning to search, not learning to hash (Hamming embedding) or learning distance metrics, which pursue different goals, it is not essential to compare with more recent work on those topics such as [8, 11, 24, 7].

[Figure 1: Comparison of Boosted Search Forest (BSF) on the Concept-1000 dataset with (a) k-means trees and LSH, and (b) Spectral Hashing (SH) with varying numbers of bits. Both panels plot recall of 100-NN against the number of returned images.]

4.1 Concept-1000 Dataset

This dataset consists of more than 150K images of 1000 concepts selected from the Large Scale Concept Ontology for Multimedia (LSCOM) [17]. The LSCOM categories were specifically selected for multimedia annotation and retrieval, and have been used in the TRECVID video retrieval series. These concept names were entered as queries in Google and Bing, and the top returned images were collected. We choose the image representation proposed in [28], a high dimensional (~84K) feature with reported state-of-the-art performance in many visual recognition tasks. PCA is applied to reduce the dimension to 1000. We then randomly select around 6000 images as queries, and use the remaining (~150K) images as the search database.

In image search, we are interested in the overall quality of the set of candidate images returned by a search algorithm. This notion coincides with our formulation of the search problem in (4), which aims to maximize retrieval quality while maintaining a relatively low computational cost (for the reranking stage). The number of returned images clearly reflects the computational cost, and the retrieval quality is measured by the recall of retrieved images, i.e., the number of retrieved images that are among the 100-NN of the query. Note that we use recall instead of accuracy because recall gives the upper-bound performance of the reranking stage.

Figure 1(a) shows the performance comparison with two search algorithms: k-means trees and LSH. Since our boosted search forest consists of tree ensembles, for a fair comparison we also construct an equivalent number of k-means trees (with random initializations) and multiple sets of LSH codes. Our proposed approach significantly outperforms k-means trees and LSH. The better performance is due to our learning to search formulation, which simultaneously maximizes recall while minimizing the size of the returned candidate set. In contrast, k-means trees use only an unsupervised clustering algorithm, and LSH employs purely random projections. Moreover, the performance of the k-means algorithm deteriorates as the dimension increases.

It is still interesting to compare to spectral hashing, although it is not a search algorithm. Since our approach requires more trees as the number of returns increases, we implement spectral hashing with varying bits: 32-bit, 96-bit, and 200-bit. As illustrated in Figure 1(b), our approach significantly outperforms spectral hashing under all configurations.
Although the search forest does not have an explicit notion of bits, we can measure it from an information-theoretic point of view by counting every binary branching in the trees as one bit. In the experiment, our approach retrieves about 70% of the 100-NN out of 500 returned images after traversing 17 trees, each 12 layers deep. This is equivalent to 17 x 12 = 204 bits. With the same number of bits, spectral hashing only achieves a recall rate around 60%.

4.2 One Million Tiny Images

In order to examine the scalability of BSF, we conducted experiments on a much larger database. We randomly sample one million images from the 80 Million Tiny Images dataset [23] as the search database, and 5000 additional images as queries. We use the 384-dimensional GIST feature provided by the authors of [23]. Comparisons with search algorithms (Figure 2(a)) and hashing methods (Figure 2(b)) are made in the same way as in the previous section. Again, the BSF algorithm substantially outperforms the other methods: using 60 trees (less than 800 bits), our approach retrieves 55.0% of the 100-NN with 5000 returns (0.5% of the entire database), while k-means trees achieve only a 47.1% recall rate, and LSH and spectral hashing are even worse. Note that using more bits in spectral hashing can even hurt performance on this dataset.

[Figure 2: Comparison of Boosted Search Forest (BSF) on the 1 Million Tiny Images dataset with (a) k-means trees and LSH, and (b) Spectral Hashing (SH) with varying numbers of bits. Both panels plot recall of 100-NN against the number of returned images.]

Table 1: Comparison of retrieval time in a database with 0.5 billion synthesized samples.

#bits                 | 32     | 64     | 128    | 256    | 512
Linear scan           | 1.55s  | 2.74s  | 5.13s  | 10.11s | 19.79s
Boosted search forest | 0.006s | 0.009s | 0.017s | 0.034s | 0.073s

4.3 Search Speed

All three aforementioned search algorithms (boosted search trees, k-means trees, and LSH) can naturally use inverted index structures to facilitate very efficient search. In particular, both our boosted search trees and k-means trees use the leaf nodes as keys to index a list of data points in the database, while LSH uses multiple independently generated bits to form the indexing key. In this sense, all three algorithms have the same order of efficiency (constant time complexity). On the other hand, in order to perform search with compact Hamming codes generated by a learning to hash method (e.g. spectral hashing), one has to use either a linear scan approach or a hash table lookup technique that finds the samples within a radius-1 Hamming ball (or more complex methods like LSH). Although much more efficient, the hash table lookup approach is likely to fail as the dimension of the hash code grows beyond a few dozen bits, as observed in [24]. The retrieval speed of exhaustive linear scan is, however, far from satisfactory. Table 1 clearly illustrates this phenomenon on a database of 0.5 billion synthesized items. Even small codes with 32 bits take around 1.55 seconds per query (without sorting). When the hash codes grow to 512 bits (not unusual for high-dimensional image/video data), the query time is almost 20 seconds. This is not acceptable for most real applications. By contrast, our boosted search forest with 32 16-layer trees (~512 bits) responds in less than 0.073s.
Our timing is carried out on an Intel Xeon Quad X5560 CPU, with a highly optimized implementation of Hamming distance that is at least 8 to 10 times faster than a naive implementation.

5 Conclusion

This paper introduces a learning to search framework for scalable similarity search in high dimensions. Unlike previous methods, our algorithm learns a boosted search forest by jointly optimizing search quality against computational efficiency, under the supervision of pair-wise similarity labels. With a natural integration of the inverted index search structure, our method can handle web-scale datasets efficiently. Experiments show that our approach leads to better retrieval accuracy than state-of-the-art search methods such as locality sensitive hashing and k-means trees.

References

[1] J. S. Beis and D. G. Lowe. Shape indexing using approximate nearest-neighbour search in high-dimensional spaces. In CVPR, pages 1000-1006, 1997.
[2] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Symposium on Computational Geometry, pages 253-262, 2004.
[3] J. Friedman, J. Bentley, and R. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software (TOMS), 3(3):209-226, 1977.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337-374, 2000.
[5] K. Fukunaga and P. Narendra. A branch and bound algorithm for computing k-nearest neighbors. IEEE Transactions on Computers, 100(7):750-753, 1975.
[6] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In VLDB, pages 518-529, 1999.
[7] J. He, W. Liu, and S.-F. Chang. Scalable similarity search with optimized kernel hashing. In KDD, 2010.
[8] P. Jain, B. Kulis, and K. Grauman. Fast image search for learned metrics. In CVPR, 2008.
[9] V. Jain and M. Varma. Learning to re-rank: query-dependent image re-ranking using click data. In WWW, pages 277-286, 2011.
[10] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In NIPS, 2009.
[11] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In ICCV, 2009.
[12] Y. Lin, F. Lv, S. Zhu, M. Yang, T. Cour, K. Yu, L. Cao, and T. Huang. Large-scale image classification: fast feature extraction and SVM training. In CVPR, 2011.
[13] T.-Y. Liu. Learning to rank for information retrieval. In SIGIR, page 904, 2010.
[14] W. Liu, J. Wang, S. Kumar, and S. Chang. Hashing with graphs. In ICML, 2011.
[15] F. Moosmann, B. Triggs, and F. Jurie. Fast discriminative visual codebooks using randomized clustering forests. In NIPS, pages 985-992, 2006.
[16] M. Muja and D. G. Lowe. Fast approximate nearest neighbors with automatic algorithm configuration. In VISSAPP, 2009.
[17] M. Naphade, J. Smith, J. Tesic, S. Chang, W. Hsu, L. Kennedy, A. Hauptmann, and J. Curtis. Large-scale concept ontology for multimedia. IEEE Multimedia Magazine, 13(3):86-91, 2006.
[18] D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In CVPR, pages 2161-2168, 2006.
[19] R. Salakhutdinov and G. E. Hinton. Semantic hashing. Int. J. Approx. Reasoning, 50(7):969-978, 2009.
[20] G. Shakhnarovich. Learning task-specific similarity. PhD thesis, Massachusetts Institute of Technology, 2005.
[21] G. Shakhnarovich, T. Darrell, and P. Indyk. Nearest-Neighbor Methods in Learning and Vision: Theory and Practice. The MIT Press, 2006.
[22] C. Silpa-Anan and R. Hartley. Optimised kd-trees for fast image descriptor matching. In CVPR, 2008.
[23] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Trans. PAMI, 30(11), 2008.
[24] J. Wang, O. Kumar, and S.-F. Chang. Semi-supervised hashing for scalable image retrieval. In CVPR, 2010.
[25] R. Weber, H. J. Schek, and S. Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In VLDB, 1998.
[26] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2008.
[27] T. Yeh, J. J. Lee, and T. Darrell. Adaptive vocabulary forests for dynamic indexing and category learning. In ICCV, pages 1-8, 2007.
[28] X. Zhou, N. Cui, Z. Li, F. Liang, and T. S. Huang. Hierarchical gaussianization for image classification. In ICCV, 2009.
The Manifold Tangent Classifier

Salah Rifai, Yann N. Dauphin, Pascal Vincent, Yoshua Bengio, Xavier Muller
Department of Computer Science and Operations Research, University of Montreal, Montreal, H3C 3J7
{rifaisal, dauphiya, vincentp, bengioy, mullerx}@iro.umontreal.ca

Abstract

We combine three important ideas present in previous work for building classifiers: the semi-supervised hypothesis (the input distribution contains information about the classifier), the unsupervised manifold hypothesis (data density concentrates near low-dimensional manifolds), and the manifold hypothesis for classification (different classes correspond to disjoint manifolds separated by low density). We exploit a novel algorithm for capturing manifold structure (high-order contractive auto-encoders) and we show how it builds a topological atlas of charts, each chart being characterized by the principal singular vectors of the Jacobian of a representation mapping. This representation learning algorithm can be stacked to yield a deep architecture, and we combine it with a domain-knowledge-free version of the TangentProp algorithm to encourage the classifier to be insensitive to local direction changes along the manifold. Record-breaking classification results are obtained.

1 Introduction

Much of machine learning research can be viewed as an exploration of ways to compensate for scarce prior knowledge about how to solve a specific task by extracting (usually implicit) knowledge from vast amounts of data. This is especially true of the search for generic learning algorithms that are to perform well on a wide range of domains for which they were not specifically tailored. While such an outlook precludes using much domain-specific knowledge in designing the algorithms, it can however be beneficial to leverage what might be called "generic" prior hypotheses that appear likely to hold for a wide range of problems. The approach studied in the present work exploits three such prior hypotheses:

1. The semi-supervised learning hypothesis, according to which learning aspects of the input distribution p(x) can improve models of the conditional distribution of the supervised target p(y|x), i.e., p(x) and p(y|x) share something (Lasserre et al., 2006). This hypothesis underlies not only the strict semi-supervised setting where one has many more unlabeled examples at one's disposal than labeled ones, but also the successful unsupervised pre-training approach for learning deep architectures, which has been shown to significantly improve supervised performance even without using additional unlabeled examples (Hinton et al., 2006; Bengio, 2009; Erhan et al., 2010).

2. The (unsupervised) manifold hypothesis, according to which real-world data presented in high dimensional spaces is likely to concentrate in the vicinity of non-linear sub-manifolds of much lower dimensionality (Cayton, 2005; Narayanan and Mitter, 2010).

3. The manifold hypothesis for classification, according to which points of different classes are likely to concentrate along different sub-manifolds, separated by low density regions of the input space.

The recently proposed Contractive Auto-Encoder (CAE) algorithm (Rifai et al., 2011a), based on the idea of encouraging the learned representation to be robust to small variations of the input, was shown to be very effective for unsupervised feature learning. Its successful application in the pre-training of deep neural networks is yet another illustration of what can be gained by adopting hypothesis 1.
In addition, Rifai et al. (2011a) propose, and show empirical evidence for, the hypothesis that the trade-off between reconstruction error and the pressure to be insensitive to variations in input space has an interesting consequence: it yields a mostly contractive mapping that, locally around each training point, remains substantially sensitive only to a few input directions (with different directions of sensitivity for different training points). This is taken as evidence that the algorithm indirectly exploits hypothesis 2 and models a lower-dimensional manifold. Most of the directions to which the representation is substantially sensitive are thought to be directions tangent to the data-supporting manifold (those that locally define its tangent space).

The present work follows through on this interpretation, and investigates whether it is possible to use this information, presumably captured about manifold structure, to further improve classification performance by leveraging hypothesis 3. To that end, we extract a set of basis vectors for the local tangent space at each training point from the Contractive Auto-Encoder's learned parameters. This is obtained with a Singular Value Decomposition (SVD) of the Jacobian of the encoder that maps each input to its learned representation. Based on hypothesis 3, we then adopt the "generic prior" that class labels are likely to be insensitive to most directions within these local tangent spaces (e.g., small translations, rotations or scalings usually do not change an image's class). Supervised classification algorithms that have been devised to efficiently exploit tangent directions given as domain-specific prior knowledge (Simard et al., 1992, 1993) can readily be used instead with our learned tangent spaces. In particular, we will show record-breaking improvements by using TangentProp for fine-tuning CAE-pre-trained deep neural networks.

To the best of our knowledge this is the first time that the implicit relationship between an unsupervised learned mapping and the tangent space of a manifold is rendered explicit and successfully exploited for the training of a classifier. This showcases a unified approach that simultaneously leverages all three "generic" prior hypotheses considered. Our experiments (see Section 6) show that this approach sets new records for domain-knowledge-free performance on several real-world classification problems. Remarkably, in some cases it even outperformed methods that use weak or strong domain-specific prior knowledge (e.g. convolutional networks, and tangent distance based on a priori known transformations). Naturally, this approach is even more likely to be beneficial for datasets where no prior knowledge is readily available.

2 Contractive auto-encoders (CAE)

We consider the problem of the unsupervised learning of a non-linear feature extractor from a dataset D = {x_1, ..., x_n}. Examples x_i \in IR^d are i.i.d. samples from an unknown distribution p(x).

2.1 Traditional auto-encoders

The auto-encoder framework is one of the oldest and simplest techniques for the unsupervised learning of non-linear feature extractors. It learns an encoder function h, which maps an input x \in IR^d to a hidden representation h(x) \in IR^{d_h}, jointly with a decoder function g, which maps h back to the input space as r = g(h(x)), the reconstruction of x. The encoder's and decoder's parameters \theta are learned by stochastic gradient descent to minimize the average reconstruction error L(x, g(h(x))) over the examples of the training set.
The objective being minimized is:

J_{AE}(\theta) = \sum_{x \in D} L(x, g(h(x))).   (1)

We will use the most common forms of encoder, decoder, and reconstruction error:

Encoder: h(x) = s(Wx + b_h), where s is the element-wise logistic sigmoid s(z) = 1 / (1 + e^{-z}). Parameters are a d_h x d weight matrix W and a bias vector b_h \in IR^{d_h}.

Decoder: r = g(h(x)) = s_2(W^T h(x) + b_r). Parameters are W^T (tied weights, shared with the encoder) and a bias vector b_r \in IR^d. The activation function s_2 is either a logistic sigmoid (s_2 = s) or the identity (linear decoder).

Loss function: either the squared error L(x, r) = \|x - r\|^2, or the Bernoulli cross-entropy L(x, r) = -\sum_{i=1}^{d} [x_i \log(r_i) + (1 - x_i) \log(1 - r_i)].

The set of parameters of such an auto-encoder is \theta = {W, b_h, b_r}. Historically, auto-encoders were primarily viewed as a technique for dimensionality reduction, where a narrow bottleneck (i.e. d_h < d) was in effect acting as a capacity control mechanism. By contrast, recent successes (Bengio et al., 2007; Ranzato et al., 2007a; Kavukcuoglu et al., 2009; Vincent et al., 2010; Rifai et al., 2011a) tend to rely on rich, oftentimes over-complete representations (d_h > d), so that more sophisticated forms of regularization are required to pressure the auto-encoder to extract relevant features and avoid trivial solutions. Several successful techniques aim at sparse representations (Ranzato et al., 2007a; Kavukcuoglu et al., 2009; Goodfellow et al., 2009). Alternatively, denoising auto-encoders (Vincent et al., 2010) change the objective from mere reconstruction to that of denoising.

2.2 First order and higher order contractive auto-encoders

More recently, Rifai et al. (2011a) introduced the Contractive Auto-Encoder (CAE), which encourages robustness of the representation h(x) to small variations of a training input x by penalizing its sensitivity to that input, measured as the Frobenius norm of the encoder's Jacobian J(x) = \partial h / \partial x (x). The regularized objective minimized by the CAE is the following:

J_{CAE}(\theta) = \sum_{x \in D} L(x, g(h(x))) + \lambda \|J(x)\|^2,   (2)

where \lambda is a non-negative regularization hyper-parameter that controls how strongly the norm of the Jacobian is penalized. Note that, with the traditional sigmoid encoder form given above, one can easily obtain the Jacobian of the encoder. Its j-th row is obtained from the j-th row of W as:

J(x)_j = \partial h_j(x) / \partial x = h_j(x)(1 - h_j(x)) W_j.   (3)

Computing the extra penalty term (and its contribution to the gradient) is similar to computing the reconstruction error term (and its contribution to the gradient), and thus relatively cheap. It is also possible to penalize higher order derivatives (the Hessian) using a simple stochastic technique that eschews computing them explicitly, which would be prohibitive. It suffices to penalize differences between the Jacobian at x and the Jacobian at nearby points x + \epsilon (stochastic corruptions of x). This yields the CAE+H variant (Rifai et al., 2011b) with the following optimization objective:

J_{CAE+H}(\theta) = \sum_{x \in D} L(x, g(h(x))) + \lambda \|J(x)\|^2 + \gamma E_{\epsilon \sim N(0, \sigma^2 I)}[\|J(x) - J(x + \epsilon)\|^2],   (4)

where \gamma is an additional regularization hyper-parameter that controls how strongly we penalize local variations of the Jacobian, i.e. higher order derivatives. The expectation E is over the Gaussian noise variable \epsilon; in practice stochastic samples thereof are used for each stochastic gradient update. The CAE+H is the variant used for our experiments.
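A minimal numpy sketch (our illustration, not the authors' code) of one stochastic evaluation of the CAE+H objective of Eq. (4) for a tied-weight sigmoid auto-encoder; lam, gamma, and sigma correspond to \lambda, \gamma, and \sigma.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def jacobian(x, W, bh):
    """Eq. (3): J(x) has rows h_j(x)(1 - h_j(x)) W_j for a sigmoid encoder."""
    h = sigmoid(W @ x + bh)
    return (h * (1.0 - h))[:, None] * W        # shape (dh, d)

def cae_h_objective(x, W, bh, br, lam, gamma, sigma, rng):
    """One-sample CAE+H loss, Eq. (4), with squared reconstruction error
    and a single stochastic corruption for the higher-order penalty."""
    h = sigmoid(W @ x + bh)
    r = sigmoid(W.T @ h + br)                  # tied-weight sigmoid decoder
    recon = np.sum((x - r) ** 2)
    J = jacobian(x, W, bh)
    contractive = np.sum(J ** 2)               # squared Frobenius norm ||J(x)||^2
    eps = rng.standard_normal(x.shape) * sigma
    hessian_pen = np.sum((J - jacobian(x + eps, W, bh)) ** 2)
    return recon + lam * contractive + gamma * hessian_pen
```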
3 Characterizing the tangent bundle captured by a CAE

Rifai et al. (2011a) reason that, while the regularization term encourages insensitivity of h(x) in all input space directions, this pressure is counterbalanced by the need for accurate reconstruction, thus resulting in h(x) being substantially sensitive only to the few input directions required to distinguish nearby training points. The geometric interpretation is that these directions span the local tangent space of the underlying manifold that supports the data. The tangent bundle of a smooth manifold is the manifold along with the set of tangent planes taken at all points on it. Each such tangent plane can be equipped with a local Euclidean coordinate system, or chart. In topology, an atlas is a collection of such charts (like the locally Euclidean map on each page of a geographic atlas). Even though the set of charts may form a non-Euclidean manifold (e.g., a sphere), each chart is Euclidean.

3.1 Conditions for the feature mapping to define an atlas on a manifold

In order to obtain a proper atlas of charts, h must be a diffeomorphism. It must be smooth (C^\infty) and invertible on open Euclidean balls on the manifold M around the training points. Smoothness is guaranteed by our choice of parametrization (affine + sigmoid). Injectivity (different values of h(x) correspond to different values of x) on the training examples is encouraged by minimizing reconstruction error (otherwise we could not distinguish training examples x_i and x_j by only looking at h(x_i) and h(x_j)). Since h(x) = s(Wx + b_h) and s is invertible, using the definition of injectivity we get (by composing h(x_i) = h(x_j) with s^{-1}):

\forall i, j:  h(x_i) = h(x_j)  \iff  W \Delta_{ij} = 0,  where \Delta_{ij} = x_i - x_j.

In order to preserve the injectivity of h, W has to form a basis spanned by its rows W_k, so that for all i, j there exists \alpha \in IR^{d_h} with \Delta_{ij} = \sum_{k=1}^{d_h} \alpha_k W_k. With this condition satisfied, the mapping h is injective on the subspace spanned by the variations in the training set. If we limit the domain of h to h(X) \subset (0, 1)^{d_h}, comprising the values obtainable by h applied to some set X, then we obtain surjectivity by definition, hence bijectivity of h between the training set D and h(D). Let M_x be an open ball on the manifold M around training example x. By smoothness of the manifold M and of the mapping h, we obtain bijectivity locally around the training examples (on the manifold) as well, i.e., between \cup_{x \in D} M_x and h(\cup_{x \in D} M_x).

3.2 Obtaining an atlas from the learned feature mapping

Now that we have necessary conditions for local invertibility of h(x) for x \in D, let us consider how to define the local chart around x from the nature of h. Because h must be sensitive to changes from an example x_i to one of its neighbors x_j, but insensitive to other changes (because of the CAE penalty), we expect that this will be reflected in the spectrum of the Jacobian matrix J(x) = \partial h(x) / \partial x at each training point x. In the ideal case where J(x) has rank k, h(x + v) differs from h(x) only if v is in the span of the singular vectors of J(x) with non-zero singular value. In practice, J(x) has many tiny singular values. Hence, we define a local chart around x using the Singular Value Decomposition J^T(x) = U(x) S(x) V^T(x) (where U(x) and V(x) are orthogonal and S(x) is diagonal). The tangent plane H_x at x is given by the span of the set of principal singular vectors B_x:

B_x = {U_{.k}(x) | S_{kk}(x) > \epsilon}  and  H_x = {x + v | v \in span(B_x)},

where U_{.k}(x) is the k-th column of U(x), and span({z_k}) = {x | x = \sum_k w_k z_k, w_k \in IR}.
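A minimal sketch (our notation) of extracting B_x from a trained sigmoid encoder: form J(x) via Eq. (3), take the SVD of J^T(x), and keep the leading columns of U. The threshold \epsilon is replaced here by simply keeping a fixed number d_M of leading directions, as done in the experiments.

```python
import numpy as np

def tangent_basis(x, W, bh, d_M=15):
    """Extract the local tangent basis B_x (Sec. 3.2) at a point x.

    Builds the encoder Jacobian J(x) = diag(h(1-h)) W (Eq. 3), then returns
    the d_M leading left singular vectors of J(x)^T, i.e. the input-space
    directions to which the representation is most sensitive.
    """
    h = 1.0 / (1.0 + np.exp(-(W @ x + bh)))
    J = (h * (1.0 - h))[:, None] * W            # shape (dh, d)
    U, S, Vt = np.linalg.svd(J.T, full_matrices=False)
    return U[:, :d_M]                           # columns span the tangent plane H_x
```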
We can thus define an atlas A captured by h, based on the local linear approximation around each example:

A = {(M_x, \phi_x) | x \in D, \phi_x(\tilde{x}) = B_x(\tilde{x} - x)}.   (5)

Note that this way of obtaining an atlas can also be applied to subsequent layers of a deep network. It is thus possible to use a greedy layer-wise strategy to initialize a network with CAEs (Rifai et al., 2011a) and obtain an atlas that corresponds to the nonlinear features computed at any layer.

4 Exploiting the learned tangent directions for classification

Using the previously defined charts for every point of the training set, we propose to use this additional information provided by unsupervised learning to improve the performance of the supervised task. In this we adopt the manifold hypothesis for classification mentioned in the introduction.

4.1 CAE-based tangent distance

One way of achieving this is to use a nearest neighbor classifier with a similarity criterion defined as the shortest distance between two hyperplanes (Simard et al., 1993). The tangents extracted at each point allow us to shrink the distance between two samples when they can approximate each other by a linear combination of their local tangents. Following Simard et al. (1993), we define the tangent distance between two points x and y as the distance between the two hyperplanes H_x, H_y \subset IR^d spanned respectively by B_x and B_y. Using the usual definition of distance between two spaces, d(H_x, H_y) = inf{\|z - w\|^2 : (z, w) \in H_x \times H_y}, we obtain the solution of this convex problem by solving a system of linear equations (Simard et al., 1993). This procedure corresponds to allowing the considered points x and y to move along the directions spanned by their associated local charts; their distance is then evaluated at the new coordinates where it is minimal. We can then use a nearest neighbor classifier based on this distance.

4.2 CAE-based tangent propagation

Nearest neighbor techniques are often impractical for large scale datasets because their computational requirements scale linearly with n for each test case. By contrast, once trained, neural networks yield fast responses for test cases. We can also leverage the extracted local charts when training a neural network. Following the tangent propagation approach of Simard et al. (1992), but exploiting our learned tangents, we encourage the output o of a neural network classifier to be insensitive to variations in the directions of the local chart of x by adding the following penalty to its supervised objective function:

\Omega(x) = \sum_{u \in B_x} \left\| \frac{\partial o}{\partial x}(x) \, u \right\|^2.   (6)

The contribution of this term to the gradients of the network parameters can be computed in O(N_w), where N_w is the number of neural network weights.

4.3 The Manifold Tangent Classifier (MTC)

Putting it all together, here is the high-level summary of how we build and train a deep network:

1. Train (unsupervised) a stack of K CAE+H layers (Eq. 4). Each is trained in turn on the representation learned by the previous layer.

2. For each x_i \in D, compute the Jacobian of the last layer's representation J^{(K)}(x_i) = \partial h^{(K)} / \partial x (x_i) and its SVD. (J^{(K)} is the product of the Jacobians of each encoder (see Eq. 3) in the stack; it suffices to compute its leading d_M SVD vectors and singular values, achieved in O(d_M \times d \times d_h) per training example. For comparison, the cost of a forward propagation through a single MLP layer is O(d \times d_h) per example.) Store the leading d_M singular vectors in the set B_{x_i}.

3. On top of the K pre-trained layers, stack an output layer of size the number of classes. Fine-tune the whole network for supervised classification with an added tangent propagation penalty (Eq. 6), using for each x_i the tangent directions B_{x_i}. (A sigmoid output layer is preferred because computing its Jacobian is straightforward and efficient (Eq. 3). The supervised cost used is the cross-entropy; training is by stochastic gradient descent.)

We call this deep learning algorithm the Manifold Tangent Classifier (MTC).
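A minimal sketch (our illustration) of the TangentProp penalty of Eq. (6) used in step 3, written for a generic differentiable classifier f. The finite-difference `output_jacobian` is only for self-containedness; in practice the Jacobian would come from backpropagation.

```python
import numpy as np

def output_jacobian(f, x, eps=1e-5):
    """Finite-difference Jacobian of the network output o = f(x) w.r.t. x.
    (In a real implementation this comes from autodiff, not differencing.)"""
    o0 = f(x)
    J = np.zeros((o0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - o0) / eps
    return J

def tangent_prop_penalty(f, x, B_x):
    """Eq. (6): sum over tangent directions u of || (do/dx) u ||^2.
    B_x holds one learned tangent vector per column."""
    J = output_jacobian(f, x)
    return float(np.sum((J @ B_x) ** 2))
```

During fine-tuning this penalty is simply added, with a cross-validated weight, to the supervised cross-entropy of step 3.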
Alternatively, instead of step 3, one can use the tangent vectors in B_{x_i} in a tangent distance nearest neighbors classifier.

5 Related prior work

Many non-linear manifold learning algorithms (Roweis and Saul, 2000; Tenenbaum et al., 2000) have been proposed which can automatically discover the main directions of variation around each training point, i.e., the tangent bundle. Most of these algorithms are non-parametric and local, i.e., they explicitly parametrize the tangent plane around each training point (with a separate set of parameters for each, or derived mostly from the set of training examples in every neighborhood), as most explicitly seen in Manifold Parzen Windows (Vincent and Bengio, 2003) and manifold Charting (Brand, 2003). See Bengio and Monperrus (2005) for a critique of local non-parametric manifold algorithms: they may require a number of training examples which grows exponentially with manifold dimension and curvature (more crooks and valleys in the manifold require more examples). One attempt to generalize the manifold shape non-locally (Bengio et al., 2006) is based on explicitly predicting the tangent plane associated to any given point x, as a parametrized function of x. Note that these algorithms all explicitly exploit training set neighborhoods (see Figure 2), i.e. they use pairs or tuples of points, with the goal of explicitly modeling the tangent space, while it is modeled implicitly by the CAE's objective function (which is not based on pairs of points).

More recently, the Local Coordinate Coding (LCC) algorithm (Yu et al., 2009) and its Local Tangent LCC variant (Yu and Zhang, 2010) were proposed to build a local chart around each training example (with a local low-dimensional coordinate system around it) and use it to define a representation for each input x: the responsibility of each local chart/anchor in explaining input x, and the coordinates of x in each local chart. That representation is then fed to a classifier and yields better generalization than x itself.

The tangent distance (Simard et al., 1993) and TangentProp (Simard et al., 1992) algorithms were initially designed to exploit prior domain knowledge of directions of invariance (e.g., knowledge that the class of an image should be invariant to small translations, rotations or scalings in the image plane). However, any algorithm able to output a chart for a training point can potentially be used, as we do here, to provide directions to a tangent distance or TangentProp (Simard et al., 1992) based classifier. Our approach is nevertheless unique in that the CAE's unsupervised feature learning capabilities are used simultaneously to provide a good initialization of deep network layers and a coherent non-local predictor of tangent spaces.

TangentProp is itself closely related to the Double Backpropagation algorithm (Drucker and LeCun, 1992), in which one instead adds a penalty that is the sum of squared derivatives of the prediction error (with respect to the network input).
Whereas TangentProp attempts to make the output insensitive to selected directions of change, the double backpropagation penalty term attempts to make the error at a training example invariant to changes in all directions. Since one is also trying to minimize the error at the training example, this amounts to making that minimization more robust, i.e., extending it to the neighborhood of the training examples. Also related is the Semi-Supervised Embedding algorithm (Weston et al., 2008). In addition to minimizing a supervised prediction error, it encourages each layer of representation of a deep architecture to be invariant when the training example is changed from x to a near neighbor of x in the training set. This algorithm works implicitly under the hypothesis that the variable y to predict from x is invariant to the local directions of change present between nearest neighbors. This is consistent with the manifold hypothesis for classification (hypothesis 3 mentioned in the introduction). Instead of removing variability along the local directions of variation, the Contractive Auto-Encoder (Rifai et al., 2011a) initially finds a representation which is most sensitive to them, as we explained in Section 2.

6 Experiments

We conducted experiments to evaluate our approach and the quality of the manifold tangents learned by the CAE, using a range of datasets from different domains:

MNIST is a dataset of 28 x 28 images of handwritten digits. The learning task is to predict the digit contained in the images.

Reuters Corpus Volume I is a popular benchmark for document classification. It consists of 800,000 real-world news wire stories made available by Reuters. We used the 2000 most frequent words, computed on the whole dataset, to create a bag-of-words vector representation, and the LYRL2004 split to separate the train and test sets.

CIFAR-10 is a dataset of 70,000 32 x 32 RGB real-world images. It contains images of real-world objects (e.g. cars, animals) with all the variations present in natural images (e.g. backgrounds).

Forest Cover Type is a large-scale database of cartographic variables for the prediction of forest cover types, made available by the US Forest Service.

We investigate whether leveraging the CAE-learned tangents leads to better classification performance on these problems, using the following methodology: optimal hyper-parameters for (a stack of) CAEs are selected by cross-validation on a disjoint validation set extracted from the training set. The quality of the feature extractor and tangents captured by the CAEs is evaluated by initializing a neural network (MLP) with the same parameters and fine-tuning it by backpropagation on the supervised classification task. The optimal strength of the supervised TangentProp penalty and the number of tangents d_M are also cross-validated.

Results. Figure 1 shows a visualization of the tangents learned by the CAE. On MNIST, the tangents mostly correspond to small geometric transformations like translations and rotations.

[Figure 1: Visualization of the tangents learned by the CAE for MNIST, CIFAR-10 and RCV1 (top to bottom). The left-most column is the example and the following columns are its tangents. On RCV1, we show the tangents of a document with the topic "Trading & Markets" (MCAT), with negative terms in red(-) and positive terms in green(+).]

[Figure 2: Tangents extracted by local PCA on CIFAR-10. This shows the limitation of approaches that rely on training set neighborhoods.]
On CIFAR-10, the model also learns sensible tangents, which seem to correspond to changes in the parts of objects. The tangents on RCV1-v2 correspond to the addition or removal of similar words and the removal of irrelevant words. We also note that extracting the tangents of the model is a way to visualize what the model has learned about the structure of the manifold. Interestingly, we see that hypothesis 3 holds for these datasets, because most tangents do not change the class of the example.

Table 1: Classification accuracy on several datasets using KNN variants, measured on 10,000 test examples with 1,000 training examples. The KNN is trained on the raw input vector using the Euclidean distance, while the K-Layer CAE+KNN operates on the representation learned by a K-layer CAE. KNN+Tangents uses, at every sample, the local chart extracted from the 1-layer CAE to compute the tangent distance.

                    MNIST    CIFAR-10    COVERTYPE
KNN                 86.9     25.4        70.2
KNN+Tangents        88.7     26.5        70.98
1-Layer CAE+KNN     90.55    25.1        69.54
2-Layer CAE+KNN     91.15    -           67.45

We use KNN with the tangent distance to evaluate the quality of the learned tangents more objectively. Table 1 shows that using the tangents extracted from a CAE (KNN+Tangents) always leads to better performance than a traditional KNN. As described in Section 4.2, the tangents extracted by the CAE can be used for fine-tuning the multilayer perceptron with tangent propagation, yielding our Manifold Tangent Classifier (MTC). As this is a semi-supervised approach, we evaluate its effectiveness with a varying amount of labeled examples on MNIST. Following Weston et al. (2008), the unsupervised feature extractor is trained on the full training set, and the supervised classifier is trained on a restricted labeled set. Table 2 shows our results for a single-hidden-layer MLP initialized with CAE+H pretraining (denoted CAE for brevity) and for the same classifier fine-tuned with tangent propagation (i.e., the manifold tangent classifier of Section 4.3, denoted MTC). The methods that do not leverage the semi-supervised learning hypothesis (support vector machines, traditional neural networks, and convolutional neural networks) give very poor performance when the amount of labeled data is low. In some cases, the methods that can learn from unlabeled data cut the classification error in half. The CAE gives better results than the other approaches across almost the whole range considered, showing that the features extracted from the rich unlabeled data distribution provide a good inductive prior for the classification task. Note that the MTC consistently outperforms the CAE on this benchmark.

Table 2: Semi-supervised classification error (%) on the MNIST test set with 100, 600, 1000 and 3000 labeled training examples. We compare our method with results from Weston et al. (2008), Ranzato et al. (2007b) and Salakhutdinov and Hinton (2007).

            100      600     1000    3000
NN          25.81    11.44   10.7    6.04
SVM         23.44    8.85    7.77    4.21
CNN         22.98    7.68    6.45    3.35
TSVM        16.81    6.16    5.38    3.45
DBN-rNCA    -        8.7     -       3.3
EmbedNN     16.86    5.97    5.73    3.59
CAE         13.47    6.3     4.77    3.22
MTC         12.03    5.13    3.64    2.57

Table 3: Classification error on the MNIST test set with the full training set.

K-NN     NN       SVM      DBN      CAE      DBM      CNN      MTC
3.09%    1.60%    1.40%    1.17%    1.04%    0.95%    0.95%    0.81%

Table 3 shows our results on the full MNIST dataset, with some results taken from LeCun et al. (1999) and Hinton et al. (2006). The CAE in this table is a two-layer deep network with 2,000 units per layer, pretrained with the CAE+H objective. The MTC uses the same stack of CAEs, trained with tangent propagation using 15 tangents (a minimal sketch of such a penalty is given below).
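As an illustration of this fine-tuning step, the following minimal sketch (not the authors' implementation) shows a tangent-propagation penalty added to the cross-entropy: the directional derivative of the classifier output along each extracted tangent is penalized, approximated here by finite differences. The `predict` interface, the step `eps`, and the penalty weight `beta` are hypothetical.

```python
import numpy as np

def tangent_prop_penalty(predict, x, tangents, eps=1e-3):
    """Approximates sum_u || (d predict / d x) u ||^2 over unit-norm
    tangents u (rows of `tangents`) by finite differences."""
    p0 = predict(x)
    penalty = 0.0
    for u in tangents:
        u = u / (np.linalg.norm(u) + 1e-12)
        directional = (predict(x + eps * u) - p0) / eps
        penalty += np.sum(directional ** 2)
    return penalty

def mtc_loss(predict, x, y_onehot, tangents, beta=0.1):
    """Supervised cross-entropy plus the tangent-propagation penalty."""
    p = predict(x)
    cross_entropy = -np.sum(y_onehot * np.log(p + 1e-12))
    return cross_entropy + beta * tangent_prop_penalty(predict, x, tangents)
```

In the paper the penalty is differentiated analytically and minimized jointly with the supervised loss by stochastic gradient descent; the finite-difference form above only serves to make the objective concrete.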
The prior state of the art for the permutation-invariant version of the task was set by Deep Boltzmann Machines (Salakhutdinov and Hinton, 2009) at 0.95%. Using our approach, we reach 0.81% error on the test set. Remarkably, the MTC also outperforms the basic convolutional neural network (CNN), even though the CNN exploits prior knowledge about vision, using convolution and pooling to enhance its results.

Table 4: Classification error on the Forest CoverType dataset.

SVM                4.11%
Distributed SVM    3.46%
MTC                3.13%

We also trained a 4-layer MTC on the Forest CoverType dataset. Following Trebar and Steele (2008), we use the data split DS2-581, which contains over 500,000 training examples. The MTC yields the best performance on this classification task, beating the previous state of the art held by the distributed SVM (a mixture of several non-linear SVMs).

7 Conclusion

In this work, we have shown a new way to characterize a manifold, by extracting a local chart at each data point based on an unsupervised feature mapping built with a deep learning approach. The resulting Manifold Tangent Classifier successfully leverages three common "generic prior hypotheses" in a unified manner. It learns a meaningful representation that captures the structure of the manifold and can leverage this knowledge to reach superior classification performance, achieving state-of-the-art results on datasets from several different domains.

Acknowledgments

The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, FQRNT, Calcul Québec and CIFAR.

References

Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1-127. Also published as a book, Now Publishers, 2009.
Bengio, Y. and Monperrus, M. (2005). Non-local manifold tangent learning. In NIPS'04, pages 129-136. MIT Press.
Bengio, Y., Larochelle, H., and Vincent, P. (2006). Non-local manifold Parzen windows. In NIPS'05, pages 115-122. MIT Press.
Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. In Advances in NIPS 19.
Brand, M. (2003). Charting a manifold. In NIPS'02, pages 961-968. MIT Press.
Cayton, L. (2005). Algorithms for manifold learning. Technical Report CS2008-0923, UCSD.
Drucker, H. and LeCun, Y. (1992). Improving generalisation performance using double back-propagation. IEEE Transactions on Neural Networks, 3(6), 991-997.
Erhan, D., Bengio, Y., Courville, A., Manzagol, P.-A., Vincent, P., and Bengio, S. (2010). Why does unsupervised pre-training help deep learning? JMLR, 11, 625-660.
Goodfellow, I., Le, Q., Saxe, A., and Ng, A. (2009). Measuring invariances in deep networks. In NIPS'09, pages 646-654.
Hinton, G. E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527-1554.
Kavukcuoglu, K., Ranzato, M., Fergus, R., and LeCun, Y. (2009). Learning invariant features through topographic filter maps. Pages 1605-1612. IEEE.
Lasserre, J. A., Bishop, C. M., and Minka, T. P. (2006). Principled hybrids of generative and discriminative models. Pages 87-94, Washington, DC, USA. IEEE Computer Society.
LeCun, Y., Haffner, P., Bottou, L., and Bengio, Y. (1999). Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision, pages 319-345. Springer.
Narayanan, H. and Mitter, S. (2010). Sample complexity of testing the manifold hypothesis. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1786-1794.
Ranzato, M., Poultney, C., Chopra, S., and LeCun, Y. (2007a). Efficient learning of sparse representations with an energy-based model. In NIPS'06.
Ranzato, M., Huang, F., Boureau, Y., and LeCun, Y. (2007b). Unsupervised learning of invariant feature hierarchies with applications to object recognition. IEEE Press.
Rifai, S., Vincent, P., Muller, X., Glorot, X., and Bengio, Y. (2011a). Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML'11).
Rifai, S., Mesnil, G., Vincent, P., Muller, X., Bengio, Y., Dauphin, Y., and Glorot, X. (2011b). Higher order contractive auto-encoder. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD).
Roweis, S. and Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323-2326.
Salakhutdinov, R. and Hinton, G. E. (2007). Learning a nonlinear embedding by preserving class neighbourhood structure. In AISTATS'2007, San Juan, Puerto Rico. Omnipress.
Salakhutdinov, R. and Hinton, G. E. (2009). Deep Boltzmann machines. In AISTATS'2009, volume 5, pages 448-455.
Simard, P., Victorri, B., LeCun, Y., and Denker, J. (1992). Tangent prop: a formalism for specifying selected invariances in an adaptive network. In NIPS'91, pages 895-903, San Mateo, CA. Morgan Kaufmann.
Simard, P. Y., LeCun, Y., and Denker, J. (1993). Efficient pattern recognition using a new transformation distance. In NIPS'92, pages 50-58. Morgan Kaufmann, San Mateo.
Tenenbaum, J., de Silva, V., and Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500), 2319-2323.
Trebar, M. and Steele, N. (2008). Application of distributed SVM architectures in classifying forest data cover types. Computers and Electronics in Agriculture, 63(2), 119-130.
Vincent, P. and Bengio, Y. (2003). Manifold Parzen windows. In NIPS'02. MIT Press.
Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11, 3371-3408.
Weston, J., Ratle, F., and Collobert, R. (2008). Deep learning via semi-supervised embedding. In ICML 2008, pages 1168-1175, New York, NY, USA.
Yu, K. and Zhang, T. (2010). Improved local coordinate coding using local tangents.
Yu, K., Zhang, T., and Gong, Y. (2009). Nonlinear learning using local coordinate coding. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 2223-2231.
Principled Architecture Selection for Neural Networks: Application to Corporate Bond Rating Prediction

John Moody
Department of Computer Science, Yale University
P. O. Box 2158 Yale Station, New Haven, CT 06520

Joachim Utans
Department of Electrical Engineering, Yale University
P. O. Box 2157 Yale Station, New Haven, CT 06520

Abstract

The notion of generalization ability can be defined precisely as the prediction risk, the expected performance of an estimator in predicting new observations. In this paper, we propose the prediction risk as a measure of the generalization ability of multi-layer perceptron networks and use it to select an optimal network architecture from a set of possible architectures. We also propose a heuristic search strategy to explore the space of possible architectures. The prediction risk is estimated from the available data; here we estimate the prediction risk by v-fold cross-validation and by asymptotic approximations, namely generalized cross-validation and Akaike's final prediction error. We apply the technique to the problem of predicting corporate bond ratings. This problem is very attractive as a case study, since it is characterized by the limited availability of data and by the lack of a complete a priori model that could be used to impose a structure on the network architecture.

1 Generalization and Prediction Risk

The notion of generalization ability can be defined precisely as the prediction risk, the expected performance of an estimator in predicting new observations. Consider a set of observations D = {(x_j, t_j); j = 1, ..., N} that are assumed to be generated as

t_j = μ(x_j) + ε_j    (1)

where μ(x) is an unknown function, the inputs x_j are drawn independently from an unknown stationary probability density function p(x), the ε_j are independent random variables with zero mean (ε̄ = 0) and variance σ_ε², and the t_j are the observed target values. The learning or regression problem is to find an estimate μ̂_λ(x; D) of μ(x), given the data set D, from a class of predictors or models μ̂_λ(x) indexed by λ. In general, λ ∈ Λ = (S, A, W), where S ⊂ X denotes a chosen subset of the set of available input variables X, A is a selected architecture within a class of model architectures, and W are the adjustable parameters (weights) of architecture A.

The prediction risk P(λ) is defined as the expected performance on future data; it can be approximated by the expected performance on a finite test set:

P(λ) ≈ (1/M) Σ_{j=1}^{M} (t*_j − μ̂_λ(x*_j))²    (2)

where the (x*_j, t*_j) are new observations that were not used in constructing μ̂_λ(x). In what follows, we shall use P(λ) as a measure of the generalization ability of a model. See [4] and [6] for more detailed presentations.

2 Estimates of Prediction Risk

Since we cannot directly calculate the prediction risk P(λ), we have to estimate it from the available data D. The standard method based on test-set validation is not advisable when the data set is small; in this paper we consider such a case, the prediction of corporate bond ratings from a database of only 196 firms. Cross-validation (CV) is a sample re-use method for estimating prediction risk; it makes maximally efficient use of the available data. Other methods are the generalized cross-validation (GCV) and final prediction error (FPE) criteria, which combine the average training squared error (ASE) with a measure of the model complexity. These will be discussed in the next sections.

2.1 Cross-Validation

Cross-validation is a method that makes minimal assumptions about the statistics of the data.
The idea of cross-validation can be traced back to Mosteller and Tukey [7]; for reviews, see Stone [8, 9], Geisser [5] and Eubank [4]. Let μ̂_{λ(j)}(x) be a predictor trained using all observations except (x_j, t_j), such that μ̂_{λ(j)}(x) minimizes

ASE_j = (1/(N−1)) Σ_{k≠j} (t_k − μ̂_{λ(j)}(x_k))².

Then an estimator for the prediction risk P(λ) is the cross-validation average squared error

CV(λ) = (1/N) Σ_{j=1}^{N} (t_j − μ̂_{λ(j)}(x_j))².    (3)

This form of CV(λ) is known as leave-one-out cross-validation. However, CV(λ) in (3) is expensive to compute for neural network models: it involves constructing N networks, each trained with N−1 patterns. For the work described in this paper we therefore use a variation of the method, v-fold cross-validation, introduced by Geisser [5] and Wahba et al. [12]. Instead of leaving out only one observation for the computation of the sum in (3), we delete larger subsets of D. Let the data D be divided into v randomly selected, disjoint subsets P_j of roughly equal size: ∪_{j=1}^{v} P_j = D and P_i ∩ P_j = ∅ for all i ≠ j. Let N_j denote the number of observations in subset P_j, and let μ̂_{λ(P_j)}(x) be an estimator trained on all data except (x, t) ∈ P_j. Then the cross-validation average squared error for subset j is defined as

CV_{P_j}(λ) = (1/N_j) Σ_{(x_k, t_k) ∈ P_j} (t_k − μ̂_{λ(P_j)}(x_k))²,   and   CV_P(λ) = (1/v) Σ_j CV_{P_j}(λ).    (4)

Typical choices for v are 5 and 10. Note that leave-one-out CV is obtained in the limit v = N.

2.2 Generalized Cross-Validation and Final Prediction Error

For linear models, two useful criteria for selecting a model architecture are generalized cross-validation (GCV) (Wahba [11]) and Akaike's final prediction error (FPE) [1]:

GCV(λ) = ASE(λ) / (1 − S(λ)/N)²,   FPE(λ) = ASE(λ) (1 + S(λ)/N) / (1 − S(λ)/N),

where S(λ) denotes the number of weights of model λ. See [4] for a tutorial treatment. Although the two criteria are slightly different for small sample sizes, they are asymptotically equivalent for large N:

P̂(λ) = ASE(λ) (1 + 2 S(λ)/N) ≈ GCV(λ) ≈ FPE(λ).    (5)

We shall use this asymptotic estimate of the prediction risk in our analysis of the bond rating models. It has been shown by Moody [6] that FPE, and therefore P̂(λ), is an unbiased estimate of the prediction risk for the neural network models considered here, provided that (1) the noise ε_j in the observed targets t_j is independent and identically distributed, (2) weight decay is not used, and (3) the resulting model is unbiased. (In practice, however, essentially all neural network fits to data will be biased; see Moody [6].) FPE is a special case of Barron's PSE [2] and Moody's GPE [6]. Although FPE and P̂(λ) are unbiased only under the above assumptions, they are much cheaper to compute than CV_P, since no retraining is required. (A minimal sketch of these estimators is given below.)
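As a minimal illustration of the v-fold procedure of Eq. (4) and the FPE estimate of Eq. (5), consider the following sketch (not from the paper; `train` and `predict` are hypothetical interfaces standing in for any fitting routine):

```python
import numpy as np

def v_fold_cv(x, t, train, predict, v=5, seed=0):
    """Estimate the prediction risk by v-fold cross-validation (Eq. 4).
    train(x, t) returns a fitted model; predict(model, x) returns
    predictions. Both are hypothetical placeholder interfaces."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), v)  # disjoint P_j
    cv_terms = []
    for j in range(v):
        held_out = folds[j]
        kept = np.concatenate([folds[i] for i in range(v) if i != j])
        model = train(x[kept], t[kept])          # trained on D \ P_j
        resid = t[held_out] - predict(model, x[held_out])
        cv_terms.append(np.mean(resid ** 2))     # CV_{P_j}(lambda)
    return float(np.mean(cv_terms))              # CV_P(lambda)

def fpe(ase, n_weights, n_samples):
    """Akaike's final prediction error: no retraining required."""
    r = n_weights / n_samples
    return ase * (1.0 + r) / (1.0 - r)
```

Unlike CV_P, the FPE estimate is computed from a single trained network, which is what makes it attractive for the architecture search below.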
3 A Case Study: Prediction of Corporate Bond Ratings

A bond is a debt security that constitutes a promise by the issuing firm to pay a given rate of interest on the original issue price and to redeem the bond at face value at maturity. Bonds are rated according to the default risk of the issuing firm by independent rating agencies such as Standard & Poor's (S&P) and Moody's Investors Service; a firm is in default if it is not able to make the promised interest payments.

[Table 1: Key to S&P bond ratings and their integer representation. We only used the range from 'AAA' ('very low default risk') to 'CCC' ('very high default risk'). Note that AAA- is not a standard category; its inclusion was suggested to us by a Wall Street analyst.]

Bonds with rating BBB- or better are "investment grade", while "junk bonds" have ratings BB+ or below. For our output representation, we assigned an integer number to each rating, as shown in Table 1. S&P and Moody's determine the rating from various financial variables and possibly other information, but the exact set of variables is unknown. It is commonly believed that the rating is, at least to some degree, judged on the basis of subjective factors and on variables not directly related to the particular firm; in addition, the method used for assigning the rating based on the input variables is unknown. The problem we consider here is to predict the S&P rating of a bond based on fundamental financial information about the issuer that is publicly available. Since the rating agencies update their bond ratings infrequently, there is considerable value in being able to anticipate rating changes before they are announced; a predictive model that maps fundamental financial factors onto an estimated rating can accomplish this. The input data for our model consist of 10 financial ratios reflecting the fundamental characteristics of the firms; the database was prepared for us by analysts at a major financial institution. Since we did not attempt to include all information that could possibly be related to a firm's bond rating (e.g., all fundamental or technical financial factors, or qualitative information such as quality of management), we can only attempt to approximate the S&P rating.

3.1 A Linear Bond Rating Predictor

For comparison with the neural network models, we computed a standard linear regression model. All input variables were used to predict the rating, which is represented by a number in [0,1]. The rating varies continuously from one category to the next higher or lower one, and this "smoothness" is captured in the single-output representation, which should make the task easier. To interpret the network response, the output was rescaled from [0,1] to [2,19] and rounded to the nearest integer; 19 corresponds to a rating of 'AAA' and 2 to 'CCC' and below (see Table 1). The input variables were normalized to the interval [0,1], since the original financial ratios differed widely in magnitude. The model predicted the rating of 21.4% of the firms correctly; for 37.2% the error was one notch and for 21.9% two notches (thus predicting 80.5% of the data within two notches of the correct target). The RMS training error was 1.93 and the estimate of the prediction risk was P̂ = 2.038.

[Figure 1: Cross-validation error CV_P(λ) and P̂(λ) versus the number of hidden units.]

3.2 Beyond Linear Regression: Prediction by Two-Layer Perceptrons

The class of models we consider as predictors are two-layer perceptron networks with I_λ input variables, H_λ internal units and a single output unit, having the form

μ̂_λ(x) = f( v_0 + Σ_{a=1}^{H_λ} v_a g( w_{a0} + Σ_{β=1}^{I_λ} w_{aβ} x_β ) ).    (6)

The hidden units have a sigmoidal transfer function g, while the single output unit uses a piecewise-linear function f.

3.3 Heuristic Search over the Space of Perceptron Architectures

Our proposed heuristic search over the space of perceptron architectures is as follows. First, we select the optimal number of internal units from a sequence of fully connected networks with an increasing number of hidden units. Then, using the optimal fully connected network, we prune weights and input variables in parallel, resulting in two separately pruned networks. Lastly, the two pruning methods are combined and the resulting network is retrained to yield the final model. (A minimal sketch of the model family of Eq. (6) is given below.)
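The following minimal sketch (random, hypothetical weights; not the original implementation) makes the model family of Eq. (6) concrete, with the output clipped to [0,1] as one simple piecewise-linear choice for f:

```python
import numpy as np

def two_layer_perceptron(x, W, w0, v, v0):
    """Eq. (6): output f(v0 + v . g(w0 + W x)) with sigmoidal hidden
    units g and a piecewise-linear output unit f (here: clip to [0,1])."""
    hidden = 1.0 / (1.0 + np.exp(-(W @ x + w0)))
    return float(np.clip(v0 + v @ hidden, 0.0, 1.0))

def rating_from_output(y):
    """Rescale the [0,1] output to the integer rating scale [2, 19]."""
    return int(round(2 + 17 * y))

# toy usage: 10 inputs and 3 hidden units (the architecture selected below)
rng = np.random.default_rng(1)
W, w0 = rng.normal(size=(3, 10)), np.zeros(3)
v, v0 = rng.normal(size=3), 0.0
print(rating_from_output(two_layer_perceptron(rng.uniform(size=10), W, w0, v, v0)))
```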
3.3.1 Selecting the Number of Hidden Units

We initially trained fully connected networks with all 10 available input variables, with the number of hidden units H_λ varying from 2 to 11. Five-fold cross-validation and P̂(λ) were used to select the number of hidden units: we computed CV_P(λ) according to Equation (4), with the data set partitioned into v = 5 subsets, and we also computed P̂(λ) according to Equation (5). The results of the two methods are consistent, having a common minimum at H_λ = 3 internal units (see Figure 1).

Table 2 (left) shows the results for the network with H_λ = 3 trained on the entire data set. A more accurate description of the performance of the model is given in Table 2 (right), where the predictive ability is calculated from the hold-out sets of the cross-validation procedure: to predict the rating of a firm, we choose, among the networks trained during the cross-validation procedure, the one that was not trained on the subset containing that firm. The results thus reflect, in the cross-validation sense, the expected performance on new data of the model trained on all the data.

Table 2: Results for the network with 3 hidden units (37 weights). The standard deviation and the mean absolute deviation are computed after rescaling the output of the network to [2,19] and rounding to the nearest integer (notches); the RMS error is computed on the rescaled output before rounding. The histogram rows describe the predictive ability of the network: the error row gives the number of rating categories by which the network missed the correct target. The network with 3 hidden units significantly outperformed the linear regression model.

Training error:
  error (notches)   0      1      2      >2
  firms             67     84     34     11
  %                 34.2   42.9   17.3   5.6
  cumulative %      34.2   77.1   94.4   100.0
  standard deviation 1.206; mean absolute deviation 0.898; training error 1.320.

Cross-validation error:
  error (notches)   0      1      2      >2
  firms             54     77     35     30
  %                 28.6   38.8   17.3   15.3
  cumulative %      28.6   67.3   84.7   100.0
  standard deviation 1.630; mean absolute deviation 1.148; cross-validation error 1.807.

3.3.2 Pruning of Input Variables via Sensitivity Analysis

Next, we attempted to further reduce the number of weights of the network by eliminating some of the input variables. To test which inputs are most significant for determining the network output, we perform a sensitivity analysis. We define the "sensitivity" of the network model to variable β as

S_β = ASE(x̄_β) − ASE(x_β),   with   x̄_β = (1/N) Σ_{j=1}^{N} x_{βj},

where x_{βj} is the βth input variable of the jth exemplar. S_β measures the effect on the training ASE of replacing the βth input x_β by its average x̄_β; replacing a variable by its average value removes its influence on the network output. Again we use 5-fold cross-validation and P̂ to estimate the prediction risk. We constructed a sequence of models by deleting an increasing number of input variables in order of increasing S_β. For each model, CV_P and P̂ were computed; Figure 2 shows the results. A minimum was attained for the model with I_λ = 8 input variables (2 inputs removed), which reduces the number of weights by 2H_λ = 6.

[Figure 2: P̂(λ) for the sensitivity analysis and for Optimal Brain Damage. In both cases, the cross-validation error CV_P(λ) has a minimum for the same λ.]

(A minimal sketch of this sensitivity computation follows.)
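A minimal sketch of this sensitivity measure (assuming a fitted model exposed through a hypothetical `predict` function operating on a data matrix):

```python
import numpy as np

def input_sensitivities(predict, X, t):
    """S_beta = ASE with input beta clamped to its mean, minus the
    nominal training ASE; larger values indicate more important inputs."""
    ase = np.mean((t - predict(X)) ** 2)
    sensitivities = []
    for beta in range(X.shape[1]):
        X_clamped = X.copy()
        X_clamped[:, beta] = X[:, beta].mean()   # remove beta's influence
        sensitivities.append(np.mean((t - predict(X_clamped)) ** 2) - ase)
    return np.array(sensitivities)

# inputs are then deleted in order of increasing S_beta, and CV_P and
# the FPE-based estimate are recomputed for each candidate model
```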
We constructed a sequence of models by deleting an increasin~ number of input variables in order of increasing S{3. For each model, CVp and P was computed, figure 2 shows the results. A minimum was attained for the model with 1>.. 8 input variables (2 inputs were removed). This reduces the number of weights by 2H)" = 6. = Principled Architecture Selection for Neural Networks P and I.a P and Sensitivity Analysis 1.00 a.1 .. 16 a.\ 1.00 c..a.o c.. "16 ... 1.10 1.? 1.. "Op t iIml Brain Damge" 1.76 I a ? \. NurDer of Inputs RSIDved .. \0 \ ao Nurrher of Weights ReaDved Figure 2: peA) for the sensitivity analysis and OBD. In both cases, the Cross validation error CVp(A) has a minimum for the same A. 3.3.3 Weight Pruning via "Optimal Brain Damage" Optimal Brain Damage (OBD) was introduced by Le Cun at al [3] as a method to reduce the number of weights in a neural network to avoid overfitting. OBD is designed to select those weights in the network whose removal will have a small effect on the training ME. Assuming that the original network was too large, removing these weights and retraining the now smaller network should improve the generalization performance. The method approximates ME at a minimum in weight space by a diagonal quadratic expansion. The saliency 1 {PME Si = -2 ow.2 I 2 w?I computed after training has stopped is a measure (in the diagonal approximation) for the change of ME when weight Wi is removed from the network. CVp and P were computed to select the optimal model. We find that CVp and P are minimized when 9 weights are deleted from the network using all input variables. However, some overlap exists when compared to the sensitivity analysis described above: 5 of the deleted weights would also have been removed by the sensitivity method. Table 3 show the overall performance of our model when the two techniques were combined to yield the final architecture. This architecture is obtained by deleting the union of the sets of weights that were deleted using weight and input pruning separately. Note the improvement in estimated prediction performance (CV error) in table 3 relative to 2. 4 Summary Our example shows that (1) nonlinear network models can out-perform linear regression models, and (2) substantial benefits in performance can be obtained by the use of principled architecture selection methods. The resulting structured networks 689 690 Moody and Utans Training Error, 3 Hidden Units 2 Inputs and 9 Connections Removed IEnatt'Ohl 0 1 2 >2 firms 69 81 32 14 % 35.2 41.3 16 .3 7.2 number of weights standard deviation mean absolute deviation training error cum . % 35.2 76.5 92 .8 100.0 27 1.208 0.882 1.356 Cross Validation Error, 3 Hidden Units 2 Inputs and 9 Connections Removed IEnotchl 0 1 2 >2 firms 58 76 37 26 % 29.6 38.8 18.9 12.8 number of weights standard deviation mean absolute deviation cross validation error cum . % 29.6 68.4 87 .2 100.0 27 1.546 1.117 1.697 Table 3: Results for the network with 3 hidden units with both, sensitivity analysis and OBD applied . Note the improvement in CV error performance of relative to Table 2. are optimized with respect to the task at hand, even though it may not be possible to design them based on a priori knowledge. Estimates of the prediction risk offer a sound basis for assessing the performance of the model on new data and can be used as a tool for principled architecture selection. Cross-validation, GCV and FPE provide computationally feasible means of estimating the prediction risk. 
4 Summary

Our example shows that (1) nonlinear network models can outperform linear regression models, and (2) substantial benefits in performance can be obtained through principled architecture selection methods. The resulting structured networks are optimized with respect to the task at hand, even though it may not be possible to design them from a priori knowledge. Estimates of the prediction risk offer a sound basis for assessing the performance of a model on new data and can be used as a tool for principled architecture selection. Cross-validation, GCV and FPE provide computationally feasible means of estimating the prediction risk, and these estimates provide very effective criteria for selecting the number of internal units and for performing sensitivity analysis and OBD.

References

[1] H. Akaike. Statistical predictor identification. Ann. Inst. Statist. Math., 22:203-217, 1970.
[2] A. Barron. Predicted squared error: a criterion for automatic model selection. In S. Farlow, editor, Self-Organizing Methods in Modeling. Marcel Dekker, New York, 1984.
[3] Y. Le Cun, J. S. Denker, and S. A. Solla. Optimal brain damage. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2. Morgan Kaufmann Publishers, 1990.
[4] Randall L. Eubank. Spline Smoothing and Nonparametric Regression. Marcel Dekker, Inc., 1988.
[5] Seymour Geisser. The predictive sample reuse method with applications. Journal of the American Statistical Association, 70(350), June 1975.
[6] John Moody. The effective number of parameters: an analysis of generalization and regularization in nonlinear learning systems. Short version in this volume; long version to appear, 1992.
[7] F. Mosteller and J. W. Tukey. Data analysis, including statistics. In G. Lindzey and E. Aronson, editors, Handbook of Social Psychology, Vol. 2. Addison-Wesley, 1968 (first edition 1954).
[8] M. Stone. Cross-validatory choice and assessment of statistical predictions. Roy. Stat. Soc., B36, 1974.
[9] M. Stone. Cross-validation: a review. Math. Operationsforsch. Statist., Ser. Statistics, 9(1), 1978.
[10] Joachim Utans and John Moody. Selecting neural network architectures via the prediction risk: application to corporate bond rating prediction. In Proceedings of the First International Conference on Artificial Intelligence Applications on Wall Street. IEEE Computer Society Press, Los Alamitos, CA, 1991.
[11] G. Wahba. Spline Models for Observational Data, volume 59 of Regional Conference Series in Applied Mathematics. SIAM Press, Philadelphia, 1990.
[12] G. Wahba and S. Wold. A completely automatic French curve: fitting spline functions by cross-validation. Communications in Statistics, 4(1):1-17, 1975.
Optimal Reinforcement Learning for Gaussian Systems

Philipp Hennig
Max Planck Institute for Intelligent Systems
Department of Empirical Inference
Spemannstraße 38, 72070 Tübingen, Germany
[email protected]

Abstract

The exploration-exploitation trade-off is among the central challenges of reinforcement learning. The optimal Bayesian solution is intractable in general. This paper studies to what extent analytic statements about optimal learning are possible if all beliefs are Gaussian processes. A first-order approximation of learning of both loss and dynamics, for nonlinear, time-varying systems in continuous time and space, subject to a relatively weak restriction on the dynamics, is described by an infinite-dimensional partial differential equation. An approximate finite-dimensional projection gives an impression of how this result may be helpful.

1 Introduction - Optimal Reinforcement Learning

Reinforcement learning is about doing two things at once: optimizing a function while learning about it. These two objectives must be balanced: ignorance precludes efficient optimization; time spent hunting after irrelevant knowledge incurs unnecessary loss. This dilemma is famously known as the exploration-exploitation trade-off. Classic reinforcement learning often considers time cheap; the trade-off then plays a subordinate role to the desire for learning a "correct" model or policy. Many classic reinforcement learning algorithms thus rely on ad-hoc methods to control exploration, such as "ε-greedy" [1] or "Thompson sampling" [2]. However, at least since a thesis by Duff [3], it has been known that Bayesian inference allows an optimal balance between exploration and exploitation. It requires integration over every possible future trajectory under the current belief about the system's dynamics, all possible new data acquired along those trajectories, and their effect on decisions taken along the way. This amounts to optimization and integration over a tree, of exponential cost in the size of the state space [4]. The situation is particularly dire for continuous space-times, where both depth and branching factor of the "tree" are uncountably infinite. Several authors have proposed approximating this lookahead through samples [5, 6, 7, 8], or through ad-hoc estimators that can be shown to be in some sense close to the Bayes-optimal policy [9].

In a parallel development, recent work by Todorov [10], Kappen [11] and others introduced to reinforcement learning an idea long commonplace in other areas of machine learning: structural assumptions, while restrictive, can greatly simplify inference problems. In particular, a recent paper by Simpkins et al. [12] showed that it is actually possible to solve the exploration-exploitation trade-off locally, by constructing a linear approximation using a Kalman filter. Simpkins and colleagues further assumed the loss function to be known, and the dynamics known up to Brownian drift. Here, I use their work as inspiration for a study of general optimal reinforcement learning of the dynamics and loss functions of an unknown, nonlinear, time-varying system (note that most reinforcement learning algorithms are restricted to time-invariant systems). The core assumption is that all uncertain variables are known up to Gaussian process uncertainty. The main result is a first-order description of optimal reinforcement learning in the form of infinite-dimensional differential statements. This kind of description opens up new approaches to reinforcement learning.
As an initial example of such treatments, Section 4 presents an approximate Ansatz that affords an explicit reinforcement learning algorithm, tested in some simple but instructive experiments (Section 5).

An intuitive description of the paper's results is this: from a prior and a corresponding choice of learning machinery (Section 2), we construct statements about the dynamics of the learning process (Section 3). The learning machine itself provides a probabilistic description of the dynamics of the physical system. Combining both dynamics yields a joint system, which we aim to control optimally. Doing so amounts to simultaneously controlling exploration (controlling the learning system) and exploitation (controlling the physical system). Because large parts of the analysis rely on concepts from optimal control theory, this paper uses notation from that field. Readers more familiar with the reinforcement learning literature may wish to mentally replace coordinates x with states s, controls u with actions a, dynamics with transitions p(s' | s, a), and utilities q with losses (negative rewards) −r. The latter is potentially confusing, so note that optimal control in this paper will attempt to minimize values, rather than maximize them as is usual in reinforcement learning (the two descriptions are, of course, equivalent).

2 A Class of Learning Problems

We consider the task of optimally controlling an uncertain system whose states s ≡ (x, t) ∈ K ⊂ ℝ^D × ℝ lie in a (D+1)-dimensional Euclidean phase space-time. A cost Q (cumulated loss) is acquired at (x, t) with rate dQ/dt = q(x, t), and the first inference problem is to learn this analytic function q. A second, independent learning problem concerns the dynamics of the system. We assume the dynamics separate into free and controlled terms, affine in the control:

dx(t) = [f(x, t) + g(x, t) u(x, t)] dt    (1)

where u(x, t) is the control function we seek to optimize, and f and g are analytic functions. To simplify the analysis, we assume that either f or g is known, while the other may be uncertain (or, alternatively, that it is possible to obtain independent samples from both functions); see Section 3 for a note on how this assumption may be relaxed. W.l.o.g., let f be uncertain and g known. Information about both q(x, t) and f(x, t) = [f_1, ..., f_D] is acquired stochastically: a Poisson process of constant rate λ produces mutually independent samples

y_q(x, t) = q(x, t) + ε_q   and   y_{f_d}(x, t) = f_d(x, t) + ε_{f_d},   where ε_q ∼ N(0, σ_q²) and ε_{f_d} ∼ N(0, σ_{f_d}²).    (2)

The noise levels σ_q and σ_f are presumed known. Let our initial beliefs about q and f be given by the Gaussian process GP_{k_q}(q; μ_q, Σ_q) and the independent Gaussian processes ∏_d GP_{k_{f_d}}(f_d; μ_{f_d}, Σ_{f_d}), respectively, with kernels k_q, k_{f_1}, ..., k_{f_D} over K and mean / covariance functions μ / Σ. In other words, samples over the belief can be drawn using an infinite vector Ω of i.i.d. Gaussian variables, as

f̃_d([x, t]) = μ_{f_d}([x, t]) + ∫ Σ_{f_d}^{1/2}([x, t], [x', t']) Ω(x', t') dx' dt' ≡ μ_{f_d}([x, t]) + (Σ_{f_d}^{1/2} Ω)([x, t]);    (3)

the second expression demonstrates a compact notation for inner products that will be used throughout. It is important to note that f and q are unknown but deterministic: at any point during learning, we can use the same sample Ω to describe the uncertainty, while μ and Σ change during the learning process. (A small sketch of this observation model is given below.)
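As a small illustration of the sampling model of Eq. (2), the following sketch (with hypothetical callables `q_true` and `x_path` standing in for the unknown function and the system trajectory) simulates noisy loss observations arriving at Poisson-distributed times:

```python
import numpy as np

def poisson_observations(q_true, x_path, t0, t1, lam, sigma, seed=0):
    """Samples of q arrive along the trajectory x_path(t) at the event
    times of a rate-lam Poisson process, corrupted by N(0, sigma^2) noise."""
    rng = np.random.default_rng(seed)
    times, t = [], t0
    while True:                       # exponential inter-arrival gaps
        t += rng.exponential(1.0 / lam)
        if t >= t1:
            break
        times.append(t)
    times = np.array(times)
    xs = np.array([x_path(u) for u in times])
    ys = np.array([q_true(x, u) for x, u in zip(xs, times)])
    return times, xs, ys + sigma * rng.normal(size=len(times))

# toy usage: a scalar state held fixed at x = 0.5
times, xs, ys = poisson_observations(
    q_true=lambda x, t: np.sin(x + 0.1 * t),
    x_path=lambda t: 0.5, t0=0.0, t1=10.0, lam=2.0, sigma=0.1)
print(len(times), "samples")
```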
To ensure continuous trajectories, we also need to regularize the control. Following control custom, we introduce a quadratic control cost ϱ(u) = ½ uᵀ R⁻¹ u with control-cost scaling matrix R. Its units, [R] = [x/t]/[Q/x], relate the cost of changing location to the utility gained by doing so. The overall task is to find the optimal discounted-horizon value

v(x, t) = min_u ∫_t^∞ e^{−(τ−t)/γ} ( q[χ(τ, u(χ, τ)), τ] + ½ u(χ, τ)ᵀ R⁻¹ u(χ, τ) ) dτ    (4)

where χ(τ, u) is the trajectory generated by the dynamics defined in Equation (1) using the control law (policy) u(x, t). The exponential definition of the discount γ > 0 gives it the unit of time. Before beginning the analysis, consider the relative generality of this definition: we allow for a continuous phase space; both loss and dynamics may be uncertain, of rather general nonlinear form, and may change over time.
dt)2 and so on. So the mean after dt is expected to be  ?1   K0 ? ? y0 ?? + dt = ? dt (k0 , k? ) + (1 ? ? dt ? O(? dt)2 ) ? k0 K0?1 y 0 + O(? dt)2 (6) | ? ? ?? y? where we have defined the map k? = k(s? , s? ), the vector ? ? with elements ??,i = k(si , s? ), and the scalar ?? = k(s? , s? ) + ?q2 . Algebraic re-formulation yields ?? + dt = k0 K0?1 y 0 + ?(kt ? k0 | K0?1 ? t )(?t ? ? |t K0?1 ? t )?1 (yt ? ? |t K0?1 y 0 ) dt. ? |? K0?1 y 0 ? |? K0?1 ? ? ) Note that = ?(s? ), the mean prediction at s? and (?? ? ? ? the marginal variance there. Hence, we can define scalars ?, ? and write (y? ? ? |? K0?1 y 0 ) [?1/2 ?](s? ) + ?? ? 1/2 = ?? ?? ? ? ?+? | ?1 1/2 [?(s? , s? ) + ? 2 ]1/2 (?? ? ? ? K0 ? ? ) = ?q2 (7) + ?(s? , s? ), with ? ? N (0, 1). (8) So the change to the mean consists of a deterministic but uncertain change whose effects accumulate linearly in time, and a stochastic change, caused by the independent noise process, whose variance accumulates linearly in time (in truth, these two points are considerably subtler, a detailed proof is left out for lack of space). We use the Wiener [16] measure d? to write d?s? (s? ) = ? k? ? k0 | K0?1 ? ? [?1/2 ?](s? ) + ?? ? 1/2 dt ? ?Ls? (s? )[? ?? d?] ? ? dt + ? (?? ? ? |? K0?1 ? ? )?1/2 [?(s? , s? ) + ? 2 ]1/2 (9) where we have implicitly defined the innovation function L. Note that L is a function of both s? and s? . A similar argument finds the change of the covariance function to be the deterministic rate d?s? (s?i , s?j ) = ??Ls? (s?i )L|s? (s?j ) dt. 3 (10) So the dynamics of learning consist of a deterministic change to the covariance, and both deterministic and stochastic changes to the mean, both of which are samples a Gaussian processes with covariance function proportional to LL| . This separation is a fundamental characteristic of GPs (it is the nonparametric version of a more straightforward notion for finite-dimensional Gaussian beliefs, for data with known noise magnitude). We introduce the belief-augmented space H containing states z(? ) ? [x(? ), ?, ??q (s), ??f 1 , . . . , ??f D , ??q (s, s0 ), ??f 1 , . . . , ??f D ]. Since the means and covariances are functions, H is infinite-dimensional. Under our beliefs, z(? ) obeys a stochastic differential equation of the form dz = [A(z) + B(z)u + C(z)?] dt + D(z) d? (11) with free dynamics A, controlled dynamics Bu, uncertainty operator C, and noise operator D h i A = ??f (zx , zt ) , 1 , 0 , 0 , . . . , 0 , ??Lq L|q , ??Lf 1 L|f 1 , . . . , ??Lf D L|f D ; (12) B = [g(s? ), 0, 0, 0, . . . ]; 1/2 ? 1/2 ? 1/2 ? 1/2 C = diag(?f ? , 0, ?Lq ? q , ?Lf 1 ?f 1 , . . . , ?Lf D ?f d , 0, . . . , 0); D = diag(0, 0, ?Lq ? ?q , ?Lf 1 ? ?f 1 , . . . , ?Lf D ? ?f D , 0, . . . , 0) (13) ? The value ? the expected cost to go ? of any state s is given by the Hamilton-Jacobi-Bellman equation, which follows from Bellman?s principle and a first-order expansion, using Eq. (4): Z Z     1 | ?1 1/2 (14) v(z? ) = min ?q (s? ) + ?q? ?q + ?q ?q + u R u dt + v(z? + dt ) d? d? u 2 Z  1 | ?1 v(z? ) ?v 1 = min ??q +?1/2 u+ + +[A+Bu+C?]| ?v+ tr[D| (?2 v)D]d? dt q? ?q + u R u 2 dt ?t 2 Integration over ? can be performed with ease, and removes the stochasticity from the problem; The uncertainty over ? is a lot more challenging. Because the distribution over future losses is correlated through space and time, ?v, ?2 v are functions of ?, and the integral is nontrivial. But there are some obvious approximate approaches. 
For example, if we (inexactly) swap integration and minimisation, draw samples ?i and solve for the value for each sample, we get an ?average optimal controller?. This over-estimates the actual sum of future rewards by assuming the controller has access to the true system. It has the potential advantage of considering the actual optimal controller for every possible system, the disadvantage that the average of optima need not be optimal for any actual solution. On the other hand, if we ignore the correlation between ? and ?v, we can integrate (17) locally, all terms in ? drop out and we are left with an ?optimal average controller?, which assumes that the system locally follows its average (mean) dynamics. This cheaper strategy was adopted in the following. Note that it is myopic, but not greedy in a simplistic sense ? it does take the effect of learning into account. It amounts to a ?global one-step look-ahead?. One could imagine extensions that consider the influence of ? on ?v to a higher order, but these will be left for future work. Under this first-order approximation, analytic minimisation over u can be performed in closed form, and bears u(z) = ?RB(z)| ?v(z) = ?Rg(x, t)| ?x v(z). (15) The optimal Hamilton-Jacobi-Bellman equation is then  1 1  ? ?1 v(z) = ??q + A| ?v ? [?v]| BRB| ?v + tr D| (?2 v)D . 2 2 A more explicit form emerges upon re-inserting the definitions of Eq. (12) into Eq. (16): (16)   1 ? ?1 v(z) = [??q + ??f (zx , zt )?x + ?t v(z) ? [?x v(z)]| g | (zx , zt )Rg(zx , zt )?x v(z) | {z } |2 {z } free drift cost + X c=q,f1 ,...,fD control benefit     1 ?c2 L|f d (?2?f d v(z))Lf d (17) ? ? Lc L|c ??c v(z) + ?2 ? 2 | {z } | {z } exploration bonus diffusion cost Equation (17) is the central result: Given Gaussian priors on nonlinear control-affine dynamic systems, up to a first order approximation, optimal reinforcement learning is described by an infinitedimensional second-order partial differential equation. It can be interpreted as follows (labels in the 4 equation, note the negative signs of ?beneficial? terms): The value of a state comprises the immediate utility rate; the effect of the free drift through space-time and the benefit of optimal control; an exploration bonus of learning, and a diffusion cost engendered by the measurement noise. The first two lines of the right hand side describe effects from the phase space-time subspace of the augmented space, while the last line describes effects from the belief part of the augmented space. The former will be called exploitation terms, the latter exploration terms, for the following reason: If the first two lines line dominate the right hand side of Equation (17) in absolute size, then future losses are governed by the physical sub-space ? caused by exploiting knowledge to control the physical system. On the other hand, if the last line dominates the value function, exploration is more important than exploitation ? the algorithm controls the physical space to increase knowledge. To my knowledge, this is the first differential statement about reinforcement learning?s two objectives. Finally, note the role of the sampling rate ?: If ? is very low, exploration is useless over the discount horizon. Even after these approximations, solving Equation (17) for v remains nontrivial for two reasons: First, although the vector product notation is pleasingly compact, the mean and covariance functions are of course infinite-dimensional, and what looks like straightforward inner vector products are in fact integrals. 
Returning to the integrals just mentioned: the average exploration bonus for the loss, writ large, reads

    -\lambda\, L_q L_q^\top \nabla_{\Sigma_q} v(z) = -\lambda \int\!\!\int_K L_{s_\tau}^{(q)}(s_*^i)\, L_{s_\tau}^{(q)}(s_*^j)\, \frac{\partial v(z)}{\partial \Sigma^{(q)}(s_*^i, s_*^j)}\, ds_*^i\, ds_*^j    (18)

(note that this object remains a function of the state s_\tau). For general kernels k, these integrals may only be solved numerically. However, for at least one specific choice of kernel (square-exponentials) and parametric Ansatz, the required integrals can be solved in closed form. This analytic structure is so interesting, and the square-exponential kernel so widely used, that the "numerical" part of the paper (Section 4) will restrict the choice of kernel to this class. The other problem, of course, is that Equation (17) is a nontrivial differential equation. Section 4 presents one initial attempt at a numerical solution that should not be mistaken for a definitive answer. Despite all this, Eq. (17) arguably constitutes a useful gain for Bayesian reinforcement learning: It replaces the intractable definition of the value in terms of future trajectories with a differential equation. This raises hope for new approaches to reinforcement learning, based on numerical analysis rather than sampling.

Digression: Relaxing Some Assumptions

This paper only applies to the specific problem class of Section 2. Any generalisations and extensions are future work, and I do not claim to solve them. But it is instructive to consider some easier extensions, and some harder ones: For example, it is intractable to simultaneously learn both g and f nonparametrically, if only the actual transitions are observed, because the beliefs over the two functions become infinitely dependent when conditioned on data. But if the belief on either g or f is parametric (e.g. a general linear model), a joint belief on g and f is tractable [see 15, §2.7], in fact straightforward. Both the quadratic control cost \tfrac{1}{2} u^\top R^{-1} u and the control-affine form (g(x, t)u) are relaxable assumptions: other parametric forms are possible, as long as they allow for analytic optimization of Eq. (14). On the question of learning the kernels for Gaussian process regression on q and f or g, it is clear that standard ways of inferring kernels [15, 18] can be used without complication, but that they are not covered by the notion of optimal learning as addressed here.

4 Numerically Solving the Hamilton-Jacobi-Bellman Equation

Solving Equation (16) is principally a problem of numerical analysis, and a battery of numerical methods may be considered. This section reports on one specific Ansatz, a Galerkin-type projection analogous to the one used in [12]. For this we break with the generality of previous sections and assume that the kernels k are given by square exponentials

    k(a, b) = k_{SE}(a, b; \theta, S) = \theta^2 \exp\big(-\tfrac{1}{2}(a - b)^\top S^{-1} (a - b)\big)

with parameters \theta, S. As discussed above, we approximate by setting \xi = 0. We find an approximate solution through a factorizing parametric Ansatz: Let the value of any point z \in H in the belief space be given through a set of parameters w and some nonlinear functionals \phi, such that their contributions separate over phase space, mean, and covariance functions:

    v(z) = \sum_{e = x, \mu_q, \Sigma_q, \mu_f, \Sigma_f} \phi_e(z_e)^\top w_e \qquad \text{with } \phi_e, w_e \in \mathbb{R}^{N_e}    (19)

This projection is obviously restrictive, but it should be compared to the use of radial basis functions for function approximation, a similarly restrictive framework widely used in reinforcement learning. The functionals \phi have to be chosen conducive to the form of Eq. (17); a concrete choice follows below.
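Since everything below is specialised to square-exponential kernels, here is a minimal sketch of k_SE and of the separable value Ansatz of Eq. (19). Parameter values are assumed for illustration; the belief-dependent functionals of Eqs. (21)-(22) would add further terms.

    import numpy as np

    def k_se(a, b, theta=1.0, S=None):
        """k_SE(a, b; theta, S) = theta^2 exp(-0.5 (a-b)^T S^{-1} (a-b))."""
        a, b = np.atleast_1d(a), np.atleast_1d(b)
        S = np.eye(a.size) if S is None else S
        d = a - b
        return theta**2 * np.exp(-0.5 * d @ np.linalg.solve(S, d))

    def value_ansatz(z_x, centers, w_x):
        """Phase-space part of Eq. (19): v ~ sum_a w_a k_SE(z_x, s_a)."""
        return sum(w * k_se(z_x, s_a) for w, s_a in zip(w_x, centers))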
For square exponential kernels, one convenient choice is

    \phi_s^a(z_s) = k(s_z, s_a; \theta_a, S_a)    (20)

    \phi_\Sigma^b(z_\Sigma) = \int\!\!\int_K \big[\Sigma_z(s_*^i, s_*^j) - k(s_*^i, s_*^j)\big]\, k(s_*^i, s_b; \theta_b, S_b)\, k(s_*^j, s_b; \theta_b, S_b)\, ds_*^i\, ds_*^j    (21)

    \phi_\mu^c(z_\mu) = \int\!\!\int_K \mu_z(s_*^i)\, \mu_z(s_*^j)\, k(s_*^i, s_c; \theta_c, S_c)\, k(s_*^j, s_c; \theta_c, S_c)\, ds_*^i\, ds_*^j    (22)

(the subtracted term in the first integral serves only numerical purposes). With this choice, the integrals of Equation (17) can be solved analytically (solutions left out due to space constraints). The approximate Ansatz turns Eq. (17) into an algebraic equation quadratic in w_x, linear in all other w_e:

    \tfrac{1}{2} w_x^\top \Psi(z_x) w_x - q(z_x) + \sum_{e = x, \mu_q, \Sigma_q, \mu_f, \Sigma_f} \Xi_e(z_e)\, w_e = 0    (23)

using co-vectors \Xi and a matrix \Psi with elements

    \Xi_x^a(z_s) = \alpha^{-1} \phi_s^a(z_s) - f(z_x)^\top \nabla_x \phi_s^a(z_s) - \partial_t \phi_s^a(z_s)
    \Xi_\Sigma^a(z_\Sigma) = \alpha^{-1} \phi_\Sigma^a(z_\Sigma) + \lambda \int\!\!\int_K \frac{\partial \phi_\Sigma^a(z_\Sigma)}{\partial \Sigma_z(s_*^i, s_*^j)}\, L_{s_\tau}(s_*^i)\, L_{s_\tau}(s_*^j)\, ds_*^i\, ds_*^j    (24)
    \Xi_\mu^a(z_\mu) = \alpha^{-1} \phi_\mu^a(z_\mu) - \frac{\lambda \sigma^2}{2} \int\!\!\int_K \frac{\partial^2 \phi_\mu^a(z_\mu)}{\partial \mu_z(s_*^i)\, \partial \mu_z(s_*^j)}\, L_{s_\tau}(s_*^i)\, L_{s_\tau}(s_*^j)\, ds_*^i\, ds_*^j
    \Psi(z)_{k\ell} = [\nabla_x \phi_s^k(z)]^\top g(z_x)\, R\, g(z_x)^\top [\nabla_x \phi_s^\ell(z)]

Note that \Xi_\mu and \Xi_\Sigma are both functions of the physical state, through s_\tau. It is through this functional dependency that the value of information is associated with the physical phase space-time. To solve for w, we simply choose a number of evaluation points z_{eval} sufficient to constrain the resulting system of quadratic equations, and then find the least-squares solution w_{opt} by function minimisation, using standard methods, such as Levenberg-Marquardt [19]. A disadvantage of this approach is that it has a number of degrees of freedom, such as the kernel parameters and the number and locations s_a of the feature functionals. Our experiments (Section 5) suggest that it is nevertheless possible to get interesting results simply by choosing these parameters heuristically.

5 Experiments

5.1 Illustrative Experiment on an Artificial Environment

As a simple example system with a one-dimensional state space, f, q were sampled from the model described in Section 2, and g set to the unit function. The state space was tiled regularly, in a bounded region, with 231 square exponential ("radial") basis functions (Equation 20), initially all with weight w_x^i = 0. For the information terms, only a single basis function was used for each term (i.e. one single \phi_{\mu_q}, one single \phi_{\Sigma_q}, and equally for f), all with very large length scales S, covering the entire region of interest. As pointed out above, this does not imply a trivial structure for these terms, because of the functional dependency on L_{s_\tau}. Five times the number of parameters, i.e. N_{eval} = 1175 evaluation points z_{eval}, were sampled, at each time step, uniformly over the same region. It is not intuitively clear whether each z_e should have its own belief (i.e. whether the points must cover the belief space as well as the phase space), but anecdotal evidence from the experiments suggests that it suffices to use the current beliefs for all evaluation points. A more comprehensive evaluation of such aspects will be the subject of a future paper. The discount factor was set to \alpha = 50s, the sampling rate at \lambda = 2/s, the control cost at 10 m^2/($ s). Value and optimal control were evaluated at time steps of \delta t = 1/\lambda = 0.5s. Figure 1 shows the situation 50s after initialisation. The most noteworthy aspect is the nontrivial structure of exploration and exploitation terms. Despite the simplistic parameterisation of the corresponding functionals, their functional dependence on s_\tau induces a complex shape.
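For concreteness, the tiling and evaluation-point scheme just described can be sketched as follows. The grid layout and region bounds are our assumptions; only the counts (231 centers, 1175 evaluation points) come from the text, and we infer that the parameter count is 231 phase-space weights plus one weight per information term.

    import numpy as np

    rng = np.random.default_rng(0)

    # 231 radial (square-exponential) basis centers tiling a bounded region of
    # phase space-time; the 21 x 11 grid and its ranges are assumed.
    x_grid = np.linspace(-40, 40, 21)
    t_grid = np.linspace(0, 100, 11)
    centers = np.array([(x, t) for x in x_grid for t in t_grid])  # 231 centers
    w_x = np.zeros(len(centers))          # weights w_x^i, initialized to zero

    # one basis function per information term (mu_q, Sigma_q, mu_f, Sigma_f)
    n_params = len(centers) + 4           # 235 parameters in total
    n_eval = 5 * n_params                 # = 1175 evaluation points, as quoted
    z_eval = np.column_stack([rng.uniform(-40, 40, n_eval),
                              rng.uniform(0, 100, n_eval)])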
Figure 1: State after 50 time steps, plotted over phase space-time. Top left: \mu_q (blue is good). The belief over f is not shown, but has similar structure. Top right: value estimate v at current belief; compare to the next two panels to note that the approximation is relatively coarse. Bottom left: exploration terms. Bottom right: exploitation terms. At its current state (black diamond), the system is in the process of switching from exploitation to exploration (blue region in bottom right panel is roughly cancelled by red, forward cone in bottom left one).

The system constantly balances exploration and exploitation, and the optimal balance depends nontrivially on location, time, and the actual value (as opposed to only uncertainty) of accumulated knowledge. This is an important insight that casts doubt on the usefulness of simple, local exploration boni, used in many reinforcement learning algorithms. Secondly, note that the system's trajectory does not necessarily follow what would be the optimal path under full information. The value estimate reflects this, by assigning low (good) value to regions behind the system's trajectory. This amounts to a sense of "remorse": If the learner had known about these regions earlier, it would have strived to reach them. But this is not a sign of sub-optimality: Remember that the value is defined on the augmented space. The plots in Figure 1 are merely a slice through that space at some level set in the belief space.

5.2 Comparative Experiment: The Furuta Pendulum

The cart-and-pole system is an under-actuated problem widely studied in reinforcement learning. For variation, this experiment uses a cylindrical version, the pendulum on the rotating arm [20]. The task is to swing up the pendulum from the lower resting point. The table in Figure 2 compares the average loss of a controller with access to the true f, g, q, but otherwise using Algorithm 1, to that of an ε-greedy TD(λ) learner with linear function approximation, Simpkins et al.'s [12] Kalman method, and the Gaussian process learning controller (Fig. 2). The linear function approximation of TD(λ) used the same radial basis functions as the three other methods. None of these methods is free of assumptions: Note that the sampling frequency influences TD in nontrivial ways rarely studied (for example through the coarseness of the ε-greedy policy). The parameters were set to α = 5s, λ = 50/s. Note that reinforcement learning experiments often quote total accumulated loss, which differs from the discounted task posed to the learner. Figure 2 reports actual discounted losses. The GP method clearly outperforms the other two learners, which barely explore. Interestingly, none of the tested methods, not even the informed controller, achieves a stable controlled balance, although the GP learner does swing up the pendulum.

Figure 2: The Furuta pendulum system: a pendulum of length ℓ_2 is attached to a rotatable arm of length ℓ_1; the control input is the torque u applied to the arm. Right: cost to go achieved by different methods (lower is better). Error measures are one standard deviation over five experiments.

    Method                              cumulative loss
    Full Information (baseline)         4.4   ± 0.3
    TD(λ)                               6.401 ± 0.001
    Kalman filter Optimal Learner       6.408 ± 0.001
    Gaussian process optimal learner    4.6   ± 1.4
This is due to the random, non-optimal location of basis functions, which means resolution is not necessarily available where it is needed (in regions of high curvature of the value function), and demonstrates a need for better solution methods for Eq. (17). There are of course many other algorithms one could compare to, and these results are anything but exhaustive. They should not be misunderstood as a critique of any other method. But they highlight the need for units of measure on every quantity, and show how hard optimal exploration and exploitation truly is. Note that, for time-varying or discounted problems, there is no "conservative" option that could be adopted in place of the Bayesian answer.

6 Conclusion

Gaussian process priors provide a nontrivial class of reinforcement learning problems for which optimal reinforcement learning reduces to solving differential equations. Of course, this fact alone does not make the problem easier, as solving nonlinear differential equations is in general intractable. However, the ubiquity of differential descriptions in other fields raises hope that this insight opens new approaches to reinforcement learning. For intuition on how such solutions might work, one specific approximation was presented, using functionals to reduce the problem to finite least-squares parameter estimation. The critical reader will have noted how central the prior is for the arguments in Section 3: The dynamics of the learning process are predictions of future data, thus inherently determined exclusively by prior assumptions. One may find this unappealing, but there is no escape from it. Minimizing future loss requires predicting future loss, and predictions are always in danger of falling victim to incorrect assumptions. A finite initial identification phase may mitigate this problem by replacing prior with posterior uncertainty, but even then, predictions and decisions will depend on the model. The results of this paper raise new questions, theoretical and applied. The most pressing questions concern better solution methods for Eq. (14), in particular better means for taking the expectation over the uncertain dynamics to more than first order. There are also obvious probabilistic issues: Are there other classes of priors that allow similar treatments? (Note some conceptual similarities between this work and the BEETLE algorithm [4].) To what extent can approximate inference methods, widely studied in combination with Gaussian process regression, be used to broaden the utility of these results?

Acknowledgments

The author wishes to express his gratitude to Carl Rasmussen, Jan Peters, Zoubin Ghahramani, Peter Dayan, and an anonymous reviewer, whose thoughtful comments uncovered several errors and crucially improved this paper.

References

[1] R.S. Sutton and A.G. Barto. Reinforcement Learning. MIT Press, 1998.
[2] W.R. Thompson. On the likelihood that one unknown probability exceeds another in view of two samples. Biometrika, 25:275–294, 1933.
[3] M.O.G. Duff. Optimal Learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, U of Massachusetts, Amherst, 2002.
[4] P. Poupart, N. Vlassis, J. Hoey, and K. Regan. An analytic solution to discrete Bayesian reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 697–704, 2006.
[5] Richard Dearden, Nir Friedman, and David Andre. Model based Bayesian exploration. In Uncertainty in Artificial Intelligence, pages 150–159, 1999.
[6] Malcolm Strens. A Bayesian framework for reinforcement learning. In International Conference on Machine Learning, pages 943–950, 2000.
[7] T. Wang, D. Lizotte, M. Bowling, and D. Schuurmans. Bayesian sparse sampling for on-line reward optimization. In International Conference on Machine Learning, pages 956–963, 2005.
[8] J. Asmuth, L. Li, M.L. Littman, A. Nouri, and D. Wingate. A Bayesian sampling approach to exploration in reinforcement learning. In Uncertainty in Artificial Intelligence, 2009.
[9] J.Z. Kolter and A.Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th International Conference on Machine Learning. Morgan Kaufmann, 2009.
[10] E. Todorov. Linearly-solvable Markov decision problems. Advances in Neural Information Processing Systems, 19, 2007.
[11] H. J. Kappen. An introduction to stochastic control theory, path integrals and reinforcement learning. In 9th Granada seminar on Computational Physics: Computational and Mathematical Modeling of Cooperative Behavior in Neural Systems, pages 149–181, 2007.
[12] A. Simpkins, R. De Callafon, and E. Todorov. Optimal trade-off between exploration and exploitation. In American Control Conference, 2008, pages 33–38, 2008.
[13] I. Fantoni and R. Lozano. Non-linear Control for Underactuated Mechanical Systems. Springer, 2002.
[14] A.A. Feldbaum. Dual control theory. Automation and Remote Control, 21(9):874–880, April 1961.
[15] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[16] N. Wiener. Differential space. Journal of Mathematical Physics, 2:131–174, 1923.
[17] T. Kailath. An innovations approach to least-squares estimation, part I: Linear filtering in additive white noise. IEEE Transactions on Automatic Control, 13(6):646–655, 1968.
[18] I. Murray and R.P. Adams. Slice sampling covariance hyperparameters of latent Gaussian models. arXiv:1006.0868, 2010.
[19] D. W. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2):431–441, 1963.
[20] K. Furuta, M. Yamakita, and S. Kobayashi. Swing-up control of inverted pendulum using pseudo-state feedback. Journal of Systems and Control Engineering, 206(6):263–269, 1992.
Bayesian Spike-Triggered Covariance Analysis

Il Memming Park, Center for Perceptual Systems, University of Texas at Austin, Austin, TX 78712, USA. [email protected]
Jonathan W. Pillow, Center for Perceptual Systems, University of Texas at Austin, Austin, TX 78712, USA. [email protected]

Abstract

Neurons typically respond to a restricted number of stimulus features within the high-dimensional space of natural stimuli. Here we describe an explicit model-based interpretation of traditional estimators for a neuron's multi-dimensional feature space, which allows for several important generalizations and extensions. First, we show that traditional estimators based on the spike-triggered average (STA) and spike-triggered covariance (STC) can be formalized in terms of the "expected log-likelihood" of a Linear-Nonlinear-Poisson (LNP) model with Gaussian stimuli. This model-based formulation allows us to define maximum-likelihood and Bayesian estimators that are statistically consistent and efficient in a wider variety of settings, such as with naturalistic (non-Gaussian) stimuli. It also allows us to employ Bayesian methods for regularization, smoothing, sparsification, and model comparison, and provides Bayesian confidence intervals on model parameters. We describe an empirical Bayes method for selecting the number of features, and extend the model to accommodate an arbitrary elliptical nonlinear response function, which results in a more powerful and more flexible model for feature space inference. We validate these methods using neural data recorded extracellularly from macaque primary visual cortex.

1 Introduction

A central problem in systems neuroscience is to understand the probabilistic relationship between sensory stimuli and neural responses. Most neurons in the early sensory pathway are only sensitive to a low-dimensional space of stimulus features, and ignore the other axes in the high-dimensional space of stimuli. Dimensionality reduction therefore plays an important role in neural characterization. The most popular dimensionality-reduction method for neural data uses the first two moments of the spike-triggered stimulus distribution: the spike-triggered average (STA) and the eigenvectors of the spike-triggered covariance (STC) [1-5]. These features are interpreted as filters or "receptive fields" that form the first stage in a linear-nonlinear-Poisson (LNP) cascade model [6, 7]. In this model, stimuli are projected onto a bank of linear filters, whose outputs are combined via a nonlinear function, which drives spiking as an inhomogeneous Poisson process (see Fig. 1). Prior work has established the conditions for statistical consistency and efficiency of the STA and STC as feature space estimators [1, 2, 8, 9]. However, these moment-based estimators have not yet been interpreted in terms of an explicit probabilistic encoding model. We formalize that relationship here, building on a recent information-theoretic treatment of spike-triggered average and covariance analysis (iSTAC) [9]. Our general approach is inspired by probabilistic and Bayesian formulations of principal components analysis (PCA) and extreme components analysis (XCA), moment-based methods for linear dimensionality reduction that are closely related to STC analysis, but which were only more recently formulated in terms of an explicit probabilistic model [10-14].

Figure 1: Schematic of the linear-nonlinear-Poisson (LNP) neural encoding model [6]: a bank of linear filters, a nonlinearity, and Poisson spiking.
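To make the cascade concrete, the following is a minimal simulation sketch of such an LNP neuron. The filter shape, weight scaling, and bin size are assumed for illustration and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    D, N = 20, 5000
    w = np.exp(-0.5 * ((np.arange(D) - 8) / 2.0)**2)   # an assumed Gaussian-bump filter
    X = rng.standard_normal((N, D))                    # white Gaussian stimuli

    rate = np.exp(0.5 * (X @ w) - 1.0)                 # exponential nonlinearity, spikes/bin
    y = rng.poisson(rate)                              # Poisson spike counts per bin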
Here we show, first of all, that STA and STC arise naturally from the expected log-likelihood of an LNP model with an "exponentiated-quadratic" nonlinearity, where expectation is taken with respect to a Gaussian stimulus distribution. This insight allows us to formulate exact maximum-likelihood estimators that apply to arbitrary stimulus distributions. We then introduce Bayesian methods for regularizing and smoothing receptive field estimates, and an approximate empirical Bayes method for selecting the feature space dimensionality, which obviates nested hypothesis tests, bootstrapping, or cross-validation based methods [5]. Finally, we generalize these estimators to accommodate LNP models with arbitrary elliptically symmetric nonlinearities. The resulting model class provides a richer and more flexible model of neural responses but can still recover a high-dimensional feature space (unlike more general information-theoretic estimators [8, 15], which do not scale easily to more than 2 filters). We apply these methods to a variety of simulated datasets and to responses from neurons in macaque primary visual cortex stimulated with binary white noise stimuli [16].

2 Model-based STA and STC

In a typical neural characterization experiment, the experimenter presents a train of rapidly varying sensory stimuli and records a spike train response. Let x denote a D-dimensional vector containing the spatio-temporal stimulus affecting a neuron's scalar spike response y in a single time bin. A principal goal of neural characterization is to identify B, a low-dimensional projection matrix such that B^\top x captures the neuron's dependence on the stimulus x. The columns of B can be regarded as linear receptive fields that provide a basis for the neural feature space. The methods we consider here all assume that neural responses can be described by an LNP cascade model (Fig. 1). Under this model, the conditional probability of a response y|x is Poisson with rate f(B^\top x), where f is a function mapping feature space to instantaneous spike rate.¹

2.1 STA and STC analysis

The STA and the STC matrix are the (empirical) first and second moments, respectively, of the spike-triggered stimulus ensemble \{x_i | y_i\}_{i=1}^N. They are defined as:

    STA: \mu = \frac{1}{n_{sp}} \sum_{i=1}^N y_i x_i, \qquad STC: \Lambda = \frac{1}{n_{sp}} \sum_{i=1}^N y_i (x_i - \mu)(x_i - \mu)^\top,    (1)

where n_{sp} = \sum y_i is the number of spikes and N is the total number of time bins. Traditional STA/STC analysis provides an estimate for the feature space basis consisting of: (1) \mu, if it is significantly different from zero; and (2) the eigenvectors of \Lambda whose eigenvalues are significantly smaller or larger than those of the prior stimulus covariance \Phi = E[x x^\top]. This estimate is provably consistent only in the case of stimuli drawn from a spherically symmetric (for STA) or independent Gaussian distribution (for STC) [17].²

¹ Here f has units of spikes/bin, for some fixed bin size \Delta. In the limit \Delta \to 0, the model output is an inhomogeneous Poisson process, but we use discrete time bins here for concreteness.
² For elliptically symmetric or colored Gaussian stimuli, a consistent estimate requires whitening the stimuli by \Phi^{-1/2} and then multiplying the estimated features (STA and STC eigenvectors) again by \Phi^{-1/2} (see [5]).

2.2 Equivalent model-based formulation

Motivated by [9], we consider an LNP model where the spike rate is defined by an exponentiated general quadratic function:

    f(x) = \exp\big(\tfrac{1}{2} x^\top C x + b^\top x + a\big),    (2)

where C is a symmetric matrix, b is a vector, and a is a scalar.
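As an aside, both the moment estimators of Eq. (1) and the rate of Eq. (2) are a few lines of numpy. A minimal sketch; variable names are ours.

    import numpy as np

    def sta_stc(X, y):
        """Moment estimators of Eq. (1). X: (N, D) stimuli; y: (N,) spike counts."""
        n_sp = y.sum()
        mu = (y @ X) / n_sp                     # STA
        Xc = X - mu
        Lam = (Xc * y[:, None]).T @ Xc / n_sp   # STC
        return mu, Lam

    def rate(X, C, b, a):
        """Exponentiated-quadratic nonlinearity of Eq. (2), per stimulus row."""
        return np.exp(0.5 * np.einsum('ni,ij,nj->n', X, C, X) + X @ b + a)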
Then the log-likelihood per spike, the conditional log-probability of the data divided by the number of spikes, is

    L = \frac{1}{n_{sp}} \sum_i \log P(y_i | C, b, a, x_i) = \frac{1}{n_{sp}} \sum_i \big( y_i \log f(x_i) - f(x_i) \big)    (3)
      = \tfrac{1}{2} \mathrm{Tr}[C\Lambda] + \tfrac{1}{2} \mu^\top C \mu + b^\top \mu + a - \frac{N}{n_{sp}} e^a \Big[ \frac{1}{N} \sum_i \exp\big( \tfrac{1}{2} x_i^\top C x_i + b^\top x_i \big) \Big].    (4)

If the stimuli are drawn from x \sim N(0, \Phi), a zero-mean Gaussian with covariance \Phi, then the expression in square brackets (eq. 4) will converge to its expectation, given by:

    E\big[ e^{\frac{1}{2} x^\top C x + b^\top x} \big] = |I - \Phi C|^{-1/2} \exp\big( \tfrac{1}{2} b^\top (\Phi^{-1} - C)^{-1} b \big),    (5)

so long as (\Phi^{-1} - C) is invertible and positive definite.³ Substituting this expectation (eq. 5) into the log-likelihood (eq. 4) yields a quantity we call the expected log-likelihood L̃, which can be expressed in terms of the STA, STC, \Phi, and the model parameters:

    L̃ = \tfrac{1}{2} \mathrm{Tr}[C\Lambda] + \tfrac{1}{2} \mu^\top C \mu + b^\top \mu + a - \frac{N}{n_{sp}} |I - \Phi C|^{-1/2} \exp\big( \tfrac{1}{2} b^\top (\Phi^{-1} - C)^{-1} b + a \big).    (6)

Maximizing this expression yields expected-ML estimates (see online supplement for derivation):

    \hat{C}_{ml} = \Phi^{-1} - \Lambda^{-1}, \qquad \hat{b}_{ml} = \Lambda^{-1} \mu, \qquad \hat{a}_{ml} = \log\big[ \tfrac{n_{sp}}{N} |\Phi \Lambda^{-1}|^{1/2} \big] - \tfrac{1}{2} \mu^\top \Lambda^{-1} \mu.    (7)

Thus, for an LNP model with exponentiated-quadratic nonlinearity stimulated with Gaussian noise, the (expected) maximum likelihood estimates can be obtained in closed form from the STA, STC, stimulus covariance, and mean spike rate n_{sp}/N. Several features of this solution are worth remarking. First, if the quadratic component C = 0, then \hat{b}_{ml} = \Phi^{-1} \mu, the whitened STA (as in [17]). Second, if the stimuli are white, meaning \Phi = I, then \hat{C}_{ml} = I - \Lambda^{-1}, which has the same eigenvectors as the STC matrix. Third, if we plug the expected-ML estimates back into the log-likelihood, we get

    L̃ = \tfrac{1}{2} \big( \mathrm{Tr}[\Lambda \Phi^{-1}] + \mu^\top \Phi^{-1} \mu - \log|\Lambda \Phi^{-1}| \big) + \mathrm{const},    (8)

which (for \Phi = I) is the information-theoretic spike-triggered average and covariance (iSTAC) cost function [9]. The iSTAC estimator finds the subspace that maximizes the "single-spike information" [18] under a Gaussian model of the raw and spike-triggered stimulus distributions (which coincides with (eq. 8)), but its precise relationship to maximum likelihood has not been shown previously.

2.3 Generalizing to non-Gaussian stimuli

The conditions for which the STA and STC provide asymptotically efficient estimators for a neural feature space are clear from the derivations above: if the stimuli are Gaussian (a condition which is rarely if ever met in practice), the STA is optimal when the nonlinearity is f(x) = exp(b^\top x + a) (as shown in [8]); the STC is optimal when f(x) = exp(x^\top C x + a) (as shown in [9]). However, the maximum of the exact model log-likelihood (eq. 4) yields a consistent and asymptotically efficient estimator even when stimuli are not Gaussian. Numerically optimizing this loss function is computationally more expensive than computing the STA and STC eigendecomposition, but the log-likelihood is jointly concave in the model parameters (C, b, a), meaning ML estimates can be obtained rapidly by convex optimization [19]. For cases where x is high-dimensional, it is easier to directly estimate a low-rank representation of C, rather than optimize the entire D × D matrix.

³ If it is not, then this expectation does not exist, and simulations of the corresponding model will produce impossibly high spike counts, with STA and STC dominated by the response to a single stimulus.
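As an aside, the closed-form expected-ML estimates of Eq. (7) are a direct transcription into numpy. This is our sketch under our variable names, not the authors' code.

    import numpy as np

    def expected_ml(mu, Lam, Phi, n_sp, N):
        """Expected-ML estimates, Eq. (7): (C, b, a) from the STA mu, the STC Lam,
        the stimulus covariance Phi, and the mean spike rate n_sp / N."""
        Phi_inv = np.linalg.inv(Phi)
        Lam_inv = np.linalg.inv(Lam)
        C = Phi_inv - Lam_inv
        b = Lam_inv @ mu
        _, logdet = np.linalg.slogdet(Phi @ Lam_inv)   # log |Phi Lam^{-1}|
        a = np.log(n_sp / N) + 0.5 * logdet - 0.5 * mu @ Lam_inv @ mu
        return C, b, a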
We therefore define a rank-d representation for C:

    C = \sum_{i=1}^d w_i s_i w_i^\top = W S W^\top,    (9)

where W is a matrix whose columns w_i are features, s_i \in \{-1, +1\} are constants that control the shape of the nonlinearity along each axis in feature space (-1 for suppressive, +1 for excitatory), and S is a diagonal matrix containing the s_i along the diagonal. (We will assume the s_i are fixed using the sign of the eigenvalues of the expected-ML estimate \hat{C}_{ml}, and not varied thereafter.) The feature space of the resulting model is spanned by b and the columns of W. We refer to ML estimators for (b, W) as maximum-likelihood STA and STC (or exact ML, as opposed to expected-ML estimates from the moment-based formulas (eq. 7); see Figs. 2-3 for comparisons). These estimates will closely match the standard STA and STC-based feature space when stimuli are Gaussian, but (as maximum-likelihood estimates) are also consistent and asymptotically efficient for arbitrary stimuli. An additional difference between maximum-likelihood and standard STA/STC analysis is that the parameters (b, W) have meaningful units of length: the vector norm of b determines the amplitude of the "linear" contribution to the neural response (via b^\top x), while the norm of columns in W determines the amplitude of "symmetric" excitatory or suppressive contributions to the response (via x^\top W S W^\top x). Shrinking these vectors (e.g., with a prior) has the effect of reducing their influence in the model, and they drop out of the model entirely if we shrink them to zero (a fact that we will exploit in the next section). By contrast, the standard STA and STC eigenvectors are usually taken as unit vectors, providing a basis for the neural feature space in which the nonlinearity (the "N" stage) must still be estimated. We are free to normalize the ML estimates (\hat{b}, \hat{W}) and estimate an arbitrary nonlinearity in a similar manner, but it is noteworthy that the parameters (a, b, W) specify a complete encoding model in and of themselves.

3 Bayesian STC

Now that we have defined an explicit model and likelihood function underlying STA and STC analysis, we can straightforwardly apply Bayesian methods for estimation, prediction, error bars, model comparison, etc., by introducing a prior over the model parameters. Bayesian methods can be especially useful in cases where we have prior information (e.g., about smoothness or sparseness of neural features, [20-25]), and in general have attractive theoretical properties for high-dimensional inference problems [26-28]. Here we consider two types of priors: (1) a smoothing prior, which holds the filters to be smooth in space/time; and (2) a sparsifying prior, which we employ to directly estimate the feature space dimensionality (i.e., the number of significant filters). We apply these priors to b and the columns of W, in conjunction with either the exact (for accuracy) or expected (for speed) log-likelihood functions defined above. We refer to the resulting estimators as Bayesian STC (or "BSTC"). We perform BSTC estimation by maximizing the sum of log-likelihood and log-prior to obtain maximum a posteriori (MAP) estimates of the filters and the constant a. It is worth noting that since the derivatives of the expected likelihood (eq. 6) are also written in terms of the STA/STC, optimization using the expected log-likelihood can be carried out more efficiently: it reduces the cost of each iteration by a factor of N compared to optimizing the exact likelihood (eq. 3).
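A sketch of the rank-d construction of Eq. (9), initialized from the eigendecomposition of the expected-ML estimate as described above. This is our illustration of the parameterization, not the authors' implementation.

    import numpy as np

    def low_rank_C(C_ml, d):
        """Keep the d largest-magnitude eigenvalues of C_ml; fix signs s_i
        from their signs (+1 excitatory, -1 suppressive), per Eq. (9)."""
        evals, evecs = np.linalg.eigh(C_ml)
        order = np.argsort(-np.abs(evals))[:d]
        s = np.sign(evals[order])
        W = evecs[:, order] * np.sqrt(np.abs(evals[order]))   # scaled feature columns
        return W, s                                           # C ~= W @ np.diag(s) @ W.T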
3.1 Smoothing prior

Neural receptive fields are generally smooth, so a prior that encourages this tendency will tend to improve performance. Receptive field estimates under such a prior will be smooth unless the likelihood provides sufficient evidence for jaggedness. To encourage smoothness, we placed a zero-mean Gaussian prior on the second-order differences of each filter [29]:

    L w \sim N(0, \lambda^{-1} I),    (10)

where L is the discrete Laplacian operator and \lambda is a hyperparameter controlling the smoothness of feature vectors. This is equivalent to imposing a penalty (given by \tfrac{\lambda}{2} w_i^\top L^\top L w_i) on the squared second derivatives of b and the columns of W in the optimization function. A larger \lambda implies a narrower Gaussian prior on these differences, hence a stronger preference for smooth filters. For simplicity, we assumed all filters came from the same prior, resulting in a single hyperparameter \lambda for all filters, and used cross-validation to choose an appropriate \lambda for each dataset.

Figure 2: Estimated filters and error rates for various estimators. An LNP model with 4 orthogonal 32-element filters (see text) was simulated with two types of stimuli (A-B: white Gaussian; C: sparse binary). Mean firing rate 0.16 spk/s. (A) Filters estimated from 10,000 samples; STA/STC filters are normalized to match the norm of the true filters. (B) Convergence to the true filter under each method, Gaussian stimuli. (C) Convergence for sparse binary stimuli.

To illustrate the effects of this prior, we simulated an example dataset from an LNP neuron with exponentiated-quadratic nonlinearity and four 32-element, 1-dimensional (temporal) filters. The filter shapes were given by orthogonalized randomly-placed Gaussians (Fig. 2). We fixed the dimensionality of our feature space estimates to be the same as the true model, since our focus was the quality of each corresponding filter estimate. For Gaussian stimuli, we found that classical STA/STC, expected-ML, and exact-ML estimates were indistinguishable (Fig. 2). However, for "sparse" binary stimuli (3 of the 32 pixels set randomly to ±1), for which STA/STC and expected-ML estimates are no longer consistent, we found significantly better performance from the exact-ML estimates (Fig. 2C). Most importantly, for both Gaussian and sparse stimuli alike, the smoothing prior provided a large improvement in the quality of feature space estimates, achieving similar error with 2 orders of magnitude fewer stimuli.

3.2 Automatic selection of feature space dimensionality

While smoothing regularizes receptive field estimates by penalizing filter roughness, a perhaps more critical aspect of the STA/STC model is its vast number of possible parameters due to uncertainty in the number of filters. Our approach to this problem was inspired by Bayesian PCA [10], a method for automatically choosing the number of meaningful principal components using a "feature-selection prior" designed to encourage sparsity. The basic idea behind this approach is that a zero-mean Gaussian prior on each filter w_i (separately controlled by a hyperparameter \alpha_i) can be used to "shrink to zero" any components that do not contribute meaningfully to the evidence, just as in automatic relevance determination (ARD), also known as sparse Bayesian learning [27, 30].
Unlike PCA, we seek to preserve components of the STC matrix with both large and small eigenvalues, which correspond to excitatory and suppressive filters, respectively. One solution to this problem, Bayesian Extreme Components Analysis [14], preserves large and small eigenvalues of the covariance matrix, but does not incorporate additional priors on filter shape, and has not yet been formulated for our (Poisson) likelihood function. Instead, we address the problem by using the sign of the diagonal elements in S to determine whether a feature w produces a positive or negative eigenvalue in C (eq. 9). (Recall that the eigenvalues of \hat{C}_{ml} = \Phi^{-1} - \Lambda^{-1} are positive and negative, while those of the STC matrix \Lambda are strictly positive.) Reparametrizing the STC in terms of C therefore allows us to apply a variant of the Bayesian PCA algorithm directly to b and the columns of W. The details of our approach are as follows. We put the ARD prior on each column of W:

    w_i \sim N(0, \alpha_i^{-1} I),    (11)
The improvement (relative to ML estimates) is greatest when the number of samples is small, and it enhances both expected and exact likelihood estimates. We compared this method for estimating feature space dimensionality with a more classical (non-Bayesian) approach based on cross-validation. We first fit a full-rank model with exact likelihood, and built a sparse model by adding filters from this set greedily until the likelihood of test data began to decrease. The resulting estimate of dimension is underestimated when there is not enough data, and even with large amount of data, it has high variance (Fig. 3, right). In comparison, our ARD-based estimate converged quickly to the correct dimension and exhibited smaller variability. When both smoothing and ARD priors were used, the variability decreased markedly and always achieved the correct dimension even for moderate amounts of data. One additional advantage of Bayesian approach is that it can use all the available data; under crossvalidation, some proportion of data is needed to form the test set (in this example we provided extra data for this method only). 4 Extension: the elliptical-LNP model Finally, the model and inference procedures we have described above can be extended to a much more general class of response functions with zero additional computational cost. We can replace the exponential function which operates on the quadratic form in the model nonlinearity (eq. 2) 6 14 12 Figure 4: 1-D nonlinear functions g mapping z, the output of the quadratic stage, to spike rate for a V1 complex cell [16]. The exact-ML filter estimate for W and b were obtained using the smoothing BSTC with an exponential nonlinearity. (Final filter estimates for this cell shown in Fig. 5). The quadratic projection (z) was computed using the filter estimates, and is plotted against the observed spike counts (gray circles), histogram-based estimate of the nonlinearity (green diamonds), exponential nonlinearity (black trace), a well-known alternative nonlinearity log(1 + ez ) (red), and a cubic spline estimated using 7 knots (green trace). We fixed the fitted cubic spline nonlinearity and then refit the filters, resulting in an estimate of the elliptical-LNP model. data exp(x) log(1+exp(x)) spline fit rate (spk/bin) 10 8 6 4 2 0 0 with an arbitrary function g(?), resulting in a model class that includes any elliptically symmetric mapping of the stimulus to spike rate. We call this the elliptical-LNP model. The elliptical-LNP model can be formalized by writing the nonlinearity f (x) (depicted in Fig. 1) as the composition of two nonlinear functions: a quadratic function that maps high dimensional stimulus to real line z(x) = 12 x> Cx + b> x + a, and a 1-D nonlinearity g(z). The full nonlinearity is thus f (x) = g(z(x)). Although LNP with exponential nonlinearity has been widely adapted in neuroscience for its simplicity, the actual nonlinearity of neural systems is often sub-exponential. Moreover, the effect of nonlinearity is even more pronounced in the exponentiated-quadratic function, and hence it may be helpful to use a sub-exponential function g. Figure 4 shows the nonlinearity of an example neuron from V1 (see next section) compared to g(z) = ez (the assumption implicit in STA/STC), a more linear function g(z) = log(1 + ez ), and a cubic spline fit by maximum likelihood. The likelihood given by eq. 3 can be optimized efficiently as long as g and g 0 can be computed efficiently. 
The log-likelihood is concave in (a, b, C) so long as g obeys the standard regularity conditions (convex and log-concave), but we did not impose those conditions here. For fast optimization, we first used the exponentiated-quadratic nonlinearity as an initialization (expected then exact-ML), then we refined the model with a spline nonlinearity. 5 Application to neural data We applied BSTC to data from a V1 complex cell (data published in [16]). The stimulus consisted of oriented binary white noise (?flickering bars?) aligned with the cell?s preferred orientation. We selected a cell (544l029.p21) that was reported to have large set of filters, to illustrate the power of our technique. The size of receptive field was chosen to be 16 bars ? 10 time bins, yielding a 160-dimensional stimulus space. Three features of this data that make BSTC appropriate: (1) the stimulus is non-Gaussian; (2) the nonlinearity is not exponential (Fig. 4); (3) the filters are smooth in space and time (Fig. 5). We estimated the nonlinearity using a cubic spline, and applied a smoothing BSTC to 104 samples presented at 100 Hz (Fig. 5, top). The ARD-prior BSTC estimate trained on 2?105 stimuli preserved 14 filters (Fig. 5, bottom). The quality of the filters are qualitatively close to that obtained by STA/STC. However, the resulting model has better overall goodness-of-fit, as well as significant improvement over the exact ML model for each reduced dimension model (Fig. 6). To achieve the same level of fit as using 2 filters for BSTC, the exact ML based sparse model required 6 additional filters (dotted line). We also compared BSTC to a generalized linear model (GLM) with same number of linear and quadratic filters fit by STA/STC (a method described previously by [7]). This approach places a prior over the weights on squared filter outputs, but not on the filters themselves. On a test set, 7 excitatory b suppresive STA/STC BSTC STA/STC BSTC+ARD Figure 5: Estimating visual receptive fields from a complex cell. Each image corresponds to a normalized 16 dimensions spatial pixels (horizontal) by 10 time bins (vertical) filter. (top) Smoothing prior recovers better filters. Bayesian STC (BSTC) with smoothing prior and fixed spline nonlinearity applied to a fixed number of filters. (bottom) Sparsification determines the number of filters. BSTC with ARD, smoothing, and spline nonlinearity recovers 14 receptive fields out of 160. goodness-of-fit (nats/spk) 0.3 train 0.25 BSTC test 0.2 ML train 0.15 test 0.1 0.05 Figure 6: Goodness-of-model fits from exact ML solution with exponential nonlinearity compared to BSTC with a fixed spline nonlinearity and smoothing prior (2 ? 105 samples). Filters are added in the order that increases the likelihood on the training set the most. The corresponding filters are visualized in fig. 5. 0 0 2 4 6 8 # dimensions 10 12 14 BSTC outperformed the GLM on all cells in the dataset, achieving 34% more bits/spike (normalized log-likelihood) over a population of 50 cells. 6 Conclusion We have provided an explicit, probabilistic, model-based framework that formalizes the classical moment-based estimators (STA, STC) and a more recent information-theoretic estimator (iSTAC) for neural feature spaces. The maximum of the ?expected log-likelihood? under this model, where expectation is taken with respect to Gaussian stimulus distribution, corresponds precisely to the moment-based estimators for uncorrelated stimuli. 
A model-based formulation allows us to compute exact maximum-likelihood estimates when stimuli are non-Gaussian, and we have incorporated priors in conjunction with both expected and exact likelihoods to achieve Bayesian methods for smoothing and feature selection (estimation of the number of filters). The elliptical-LNP model extends BSTC analysis to a richer class of nonlinear response models. Although the assumption of elliptical symmetry makes it less general than information-theoretic estimators such as maximally informative dimensions (MID) [8, 15], it has significant advantages in computational efficiency, number of local optima, and suitability for high-dimensional feature spaces. The elliptical-LNP model may also be easily extended to incorporate spike-history effects by adding linear projections of the neuron?s spike history as inputs, as in the generalized linear model (GLM) [9, 17, 25, 31]. We feel the synthesis of multi-dimensional nonlinear stimulus sensitivity (as described here) and non-Poisson, history-dependent spiking presents a promising tool for unlocking the statistical structure of the neural code. 8 References [1] J. Bussgang. Crosscorrelation functions of amplitude-distorted gaussian signals. RLE Technical Reports, 216, 1952. [2] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Comput. Neural Syst., 12:199?213, 2001. [3] R. de Ruyter and W. Bialek. Real-time performance of a movement-senstivive neuron in the blowfly visual system. Proc. R. Soc. Lond. B, 234:379?414, 1988. [4] O. Schwartz, E. J. Chichilnisky, and E. P. Simoncelli. Characterizing neural gain control using spiketriggered covariance. Adv. Neural Information Processing Systems, pages 269?276, 2002. [5] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Spike-triggered neural characterization. J. Vision, 6(4):484?507, 7 2006. [6] E. P. Simoncelli, J. Pillow, L. Paninski, and O. Schwartz. Characterization of neural responses with stochastic stimuli. The Cognitive Neurosciences, III, chapter 23, pages 327?338. MIT Press, 2004. [7] S. Gerwinn, J. Macke, M. Seeger, and M. Bethge. Bayesian inference for spiking neuron models with a sparsity prior. Adv. in Neural Information Processing Systems 20, pages 529?536. MIT Press, 2008. [8] L. Paninski. Convergence properties of some spike-triggered analysis techniques. Network: Comput. Neural Syst., 14:437?464, 2003. [9] J. W. Pillow and E. P. Simoncelli. Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. J. Vision, 6(4):414?428, 4 2006. [10] C. M. Bishop. Bayesian PCA. Adv. in Neural Information Processing Systems, pages 382?388, 1999. [11] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. J. the Royal Statistical Society. Series B, Statistical Methodology, pages 611?622, 1999. [12] T. P. Minka. Automatic choice of dimensionality for PCA. NIPS, pages 598?604, 2001. [13] M. Welling, F. Agakov, and C. K. I. Williams. Extreme components analysis. Adv. in Neural Information Processing Systems 16. MIT Press, 2004. [14] Y. Chen and M. Welling. Bayesian extreme components analysis. IJCAI, 2009. [15] T. Sharpee, N. C. Rust, and W. Bialek. Analyzing neural responses to natural signals: maximally informative dimensions. Neural Comput, 16(2):223?250, Feb 2004. [16] N. C. Rust, O. Schwartz, J. A. Movshon, and E. P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945?956, Jun 2005. [17] L. 
[17] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Comput. Neural Syst., 15(04):243–262, November 2004.
[18] N. Brenner, S. P. Strong, R. Koberle, W. Bialek, and R. R. de Ruyter van Steveninck. Synergy in a neural code. Neural Comput, 12(7):1531–1552, Jul 2000.
[19] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243–262, 2004.
[20] F. Theunissen, S. David, N. Singh, A. Hsu, W. Vinje, and J. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network: Comput. Neural Syst., 12:289–316, 2001.
[21] M. Sahani and J. Linden. Evidence optimization techniques for estimating stimulus-response functions. NIPS, 15, 2003.
[22] S. V. David, N. Mesgarani, and S. A. Shamma. Estimating sparse spectro-temporal receptive fields with natural stimuli. Network: Comput. Neural Syst., 18(3):191–212, 2007.
[23] I. H. Stevenson, J. M. Rebesco, N. G. Hatsopoulos, Z. Haga, L. E. Miller, and K. P. Körding. Bayesian inference of functional connectivity and network structure from spikes. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 17(3):203–213, 2009.
[24] S. Gerwinn, J. H. Macke, and M. Bethge. Bayesian inference for generalized linear models for spiking neurons. Frontiers in Computational Neuroscience, 2010.
[25] A. Calabrese, J. W. Schumacher, D. M. Schneider, L. Paninski, and S. M. N. Woolley. A generalized linear model for estimating spectrotemporal receptive fields from responses to natural sounds. PLoS One, 6(1):e16104, 2011.
[26] W. James and C. Stein. Estimation with quadratic loss. 4th Berkeley Symposium on Mathematical Statistics and Probability, 1:361–379, 1960.
[27] M. Tipping. Sparse Bayesian learning and the relevance vector machine. JMLR, 1:211–244, 2001.
[28] D. Donoho and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization. PNAS, 100:2197–2202, 2003.
[29] K. R. Rad and L. Paninski. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods. Network: Comput. Neural Syst., 21(3-4):142–168, 2010.
[30] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. Adv. in Neural Information Processing Systems 20, pages 1625–1632. MIT Press, 2008.
[31] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. J. Neurophysiol, 93(2):1074–1089, 2005.
Greedy Algorithms for Structurally Constrained High Dimensional Problems

Ambuj Tewari
Department of Computer Science
University of Texas at Austin
[email protected]

Pradeep Ravikumar
Department of Computer Science
University of Texas at Austin
[email protected]

Inderjit S. Dhillon
Department of Computer Science
University of Texas at Austin
[email protected]

Abstract

A hallmark of modern machine learning is its ability to deal with high dimensional problems by exploiting structural assumptions that limit the degrees of freedom in the underlying model. A deep understanding of the capabilities and limits of high dimensional learning methods under specific assumptions such as sparsity, group sparsity, and low rank has been attained. Efforts [1, 2] are now underway to distill this valuable experience by proposing general unified frameworks that can achieve the twin goals of summarizing previous analyses and enabling their application to notions of structure hitherto unexplored. Inspired by these developments, we propose and analyze a general computational scheme based on a greedy strategy to solve convex optimization problems that arise when dealing with structurally constrained high-dimensional problems. Our framework not only unifies existing greedy algorithms by recovering them as special cases but also yields novel ones. Finally, we extend our results to infinite dimensional settings by using interesting connections between smoothness of norms and behavior of martingales in Banach spaces.

1 Introduction

Increasingly in modern settings, in domains across science and engineering, one is faced with the challenge of working with high-dimensional models where the number of parameters is large, particularly when compared to the number of observations. In such high-dimensional regimes, a growing body of literature in machine learning and statistics has shown that it is typically impossible to obtain consistent estimators unless some low-dimensional "structure" is imposed on the high dimensional object that is being estimated from the data. For instance, the signal could be sparse in some basis, could lie on some manifold, have some graphical model structure, or be matrix-structured with a low rank. Indeed, given the variety of high dimensional problems that researchers face, it is natural that many novel notions of such low-dimensional structure will continue to appear in the future. There are a variety of issues that researchers have grappled with in this area, but two themes stand out. First, there is the statistical problem of identifying the minimum amount of data needed to accurately estimate high-dimensional objects that are structurally constrained. Second is the computational issue of designing efficient algorithms that, in the ideal case, can recover high dimensional objects from a limited amount of data. Both of these themes have spurred a huge amount of work over the past decade. For each of the specific structures, a large body of work has studied regularized and constrained M-estimators, where some loss function, such as the negative log-likelihood of the data, which measures goodness of fit to the data, is regularized by a function appropriate to the assumed structure, or constrained to lie within an appropriately chosen set. In recent years, researchers [1, 2] studying the statistical properties of such estimators have started discovering commonalities among proofs and analyses and have proposed unified frameworks that take advantage of such commonalities.
Specifically, using a single theorem, they are able to rederive a wide range of known results on high-dimensional consistency and error bounds for the various regularized and constrained estimators. The potential benefits are obvious: distillation of existing ideas and knowledge, and the enabling of novel applications that are unexplored to date.

In this paper, we consider the computational facet of such high-dimensional estimation, and propose a general computational scheme that can be used for recovering objects with low-dimensional structure in the high dimensional setting. A key feature of our general method is that, at each step, it greedily chooses to add a single "simple element" or "atom" to the current representation. The idea, of course, is not new. Indeed, we show that our general framework yields several existing greedy algorithms if we specialize it appropriately. It also yields novel algorithms that, to the best of our knowledge, have not appeared in the literature so far. Greedy algorithms for optimizing smooth convex functions over the l1-ball [3, 4, 5], the probability simplex [6], and the trace norm ball [7] have appeared in the recent literature. Other recent references on greedy learning algorithms for high-dimensional problems include [8, 9]. Greedy algorithms have also been studied in approximation theory [10, 11] to approximate a given function, viewed as an element of a Banach space of functions, using convex combinations of "simple" functions. There is also the well-known viewpoint of seeing boosting algorithms as greedy minimization algorithms in function space (see, for example, [12, Section 3], and the references therein). Often, the proofs and results in these various settings resemble each other to a great extent. There is thus clearly a need for unification of ideas and proofs.

In this paper, we focus on the underlying similarities between the greedy algorithms mentioned above. All these algorithms can be seen as specializations of a general computational scheme, with specific choices of the loss function, regularization or constraint set, and assumptions on the low-dimensional structure. Is there a commonality in their analyses of convergence rates, and are there key properties that inform such analyses? Here, we identify two such key properties. The first is a restricted smoothness property (RSP) parameter (see also [13] for a similar quantity), which relates to the smoothness of the function when restricted to sets with low-dimensional structure, and which depends on the ambient space norm, as well as a potentially distinct norm in which smoothness is established. The other, established in [1, 2], measures the size of the low-dimensional structured object with respect to an "atomic" norm. Using these two quantities, we are able to provide a general theorem that yields convergence rates for general greedy methods. We recover a wide range of existing results, as well as some potentially novel ones, such as for block-sparse matrices, low-rank tensors, and permutation matrices. In certain cases, most notably for low rank tensors, the scheme appears to lead to a greedy step that is intractable, which leads to intriguing questions about tractable approximations that we hope will be adequately addressed in the future. We then show how to extend these results to a general infinite-dimensional setting, by extending our definition of the restricted smoothness property (RSP) parameter, which allows us to obtain rates for L_p spaces as well as Banach spaces with martingale type p.
For the latter, the RSP parameter hinges on the rate at which martingale difference sequences concentrate in that space, which provides yet another connection to the folklore statement that the "curse of dimensionality" in high dimensional settings is sometimes accompanied by the "blessings of concentration of measure".

2 Preliminaries

2.1 Atoms, Norms, and Structure

In Negahban et al.'s work [1], any specific structure such as sparsity is related to a low-dimensional subspace of structured vectors. In Chandrasekaran et al.'s work [2], this notion of structure is distilled further by the use of "atoms." Specifically, given a set A of very "simple" objects, called atoms, we can say that a vector x is simple (with some low-dimensional structure) if it can be written as a linear combination of few atoms: x = Σ_{i=1}^k c_i a_i, where k is small relative to the ambient dimensionality. They then use these atoms to generalize the idea behind the use of the l1-norm for sparsity, the trace or nuclear norm for low rank, etc.

Let A be a collection of atoms. We start by assuming [2] that these atoms lie in a finite-dimensional space, and that in particular A is a compact subset of some Euclidean space R^p. Later, in Section 6, we will extend our treatment to include the case where the atoms belong to an infinite-dimensional space. Let C_A denote the convex hull of A and define the gauge:

  ||x||_A := inf{t ≥ 0 : x ∈ tC_A} .   (1)

Note that the gauge ||·||_A is not a norm in general, unless for instance A satisfies a technical condition, namely that it be centrally symmetric: x ∈ A iff −x ∈ A. Also, define the support function ||x||_A* := sup{⟨x, a⟩ : a ∈ A}. If ||·||_A happens to be a norm, then this is just the dual norm of ||·||_A.

2.2 Examples

Example 1. (Sparse vectors) A huge amount of recent literature deals with the notion of sparsity of high-dimensional vectors. Here, the set A ⊂ R^p of atoms is finite and consists of the 2p vectors ±e_i. This is a centrally symmetric set and hence ||·||_A becomes a norm, viz. the l1-norm.

Example 2. (Sparse non-negative vectors) Using a slight variation on the previous example, the atoms can be the p non-negative basis vectors e_i. The convex hull C_A is the (p − 1)-dimensional probability simplex. This is not centrally symmetric and hence ||·||_A is not a norm.

Example 3. (Group sparse matrices) Here the structure we have in mind for a p × k matrix is that it only has a few non-zero rows. This generalizes Example 1, which can be thought of as the case when k = 1. There are an infinite number of atoms: all matrices with a single non-zero row where that row has lq-norm 1 for some q > 1. The convex hull C_A becomes the unit ball of the ||·||_{q,1} group norm on R^{p×k}, which is defined to be the sum of the lq-norms of the rows of its matrix argument.

Example 4. (Low rank matrices) This is another example that has attracted a huge amount of attention in the recent literature. The set¹ A ⊂ R^{p×p} of atoms here is infinite and consists of rank-one matrices with Frobenius norm 1. This is centrally symmetric and ||·||_A becomes the trace norm (also called the nuclear or Schatten-1 norm; it is equal to the sum of the singular values of a matrix).

Example 5. (Low rank tensors) This is a generalization of the previous example to higher order tensors. Considering order three tensors, the set A of atoms can be taken to be all rank-one tensors of the form u_1 ⊗ u_2 ⊗ u_3 ∈ R^{p×p×p} for u_i ∈ R^p, ||u_i||_2 = 1. Their convex hull is the unit ball of ||·||_A, which can be thought of as the tensor nuclear norm. Unfortunately, the tensor nuclear norm is intractable to compute and hence there is a need to consider relaxations to retain tractability.
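To make the gauge definition (1) concrete, the following sketch (our own illustration, not from the paper, and assuming SciPy is available) computes ||x||_A for the atom set of Example 1 as a small linear program and checks that it coincides with the l1-norm, as claimed above.

import numpy as np
from scipy.optimize import linprog

def gauge_sparse_atoms(x):
    """Gauge ||x||_A for A = {+/- e_i}: smallest t with x in t * conv(A).
    Writing x = sum_i c_i a_i with c_i >= 0 over the 2p signed basis
    vectors, the gauge equals the minimal total mass sum_i c_i."""
    p = len(x)
    atoms = np.hstack([np.eye(p), -np.eye(p)])   # atoms as columns
    res = linprog(c=np.ones(2 * p),              # minimize total atomic mass
                  A_eq=atoms, b_eq=x,            # combination must equal x
                  bounds=(0, None))
    return res.fun

x = np.array([1.5, -2.0, 0.0, 0.5])
print(gauge_sparse_atoms(x), np.abs(x).sum())    # both print 4.0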
Example 6. (Permutation matrices) Here, we consider permutation matrices² of size p × p as the set A of atoms. Even though there are p! of them, their convex hull has a succinct description thanks to the Birkhoff-von Neumann theorem: the convex hull of permutation matrices is the set of doubly stochastic matrices. As we shall see later, this fact will be crucial for the greedy algorithm to be tractable in this case.

3 Problem Setup

We consider the general optimization problem

  min_{x : ||x||_A ≤ κ} f(x),   (2)

where f is a convex and smooth function, and {x : ||x||_A ≤ κ} is the atomic norm constraint set that encourages some specific structure. This is a convex optimization problem that is a constrained version of the usual regularized problem, min_x f(x) + λ||x||_A. A line of recent work (see, for example, [2], and the references therein) has focused on different cases, with different atomic norms, but largely on the linear case, where f(x) = ½||y − Φx||²_2, for a given y ∈ R^n and a linear map Φ : R^p → R^n. Φ is typically a linear measurement operator that generates a noisy measurement y ∈ R^n from an underlying "simple" signal x*, and ||·||_2 is the standard Euclidean norm in R^n. For the linear case, projected gradient type methods have been suggested [2]. In this paper, we consider the general problem in (2), with a general loss function f(x), and a general constraint set induced by a structure-inducing atomic "norm" ||·||_A.

¹For simplicity we consider square matrices. It is definitely also possible to consider rectangular matrices in R^{p1×p2} for p1 ≠ p2.
²A permutation matrix is one consisting only of 0's and 1's such that there is exactly a single 1 in each row and column. A non-negative matrix with every row and column sum equal to 1 is called a doubly stochastic matrix.

3.1 Smoothness

We now discuss our assumptions on the loss function f in (2). We start by defining a restricted smoothness property that we require for our analysis. Consider a convex function f : R^p → R that is differentiable on some convex subset S of R^p. Given a norm ||·|| on R^p, we would like to measure how "smooth" the function f is on S with respect to ||·||. Towards this end, we define the following:

Definition 1. Given a set S and norm ||·||, we define the Restricted Smoothness Property (RSP) constant of a function f : R^p → R as

  L_{||·||}(f; S) := sup_{x,y∈S, α∈(0,1]} [f((1−α)x + αy) − f(x) − ⟨∇f(x), α(y − x)⟩] / ((α²/2) ||y − x||²)   (3)

Since f is convex, it is clear that L_{||·||}(f; S) ≥ 0. The larger it is, the more the function f "curves up" on the set S.

Remark 1. (Connection to Lipschitz continuity of the gradient) Recall that a function f : R^p → R is said to have L-Lipschitz continuous gradients w.r.t. ||·|| if for all x, y ∈ R^p, we have ||∇f(x) − ∇f(y)||_* ≤ L ||x − y||, where ||·||_* is the norm dual to ||·||. Using the mean value theorem, it is easy to see that if f has L-Lipschitz continuous gradient w.r.t. ||·|| then L_{||·||}(f; S) ≤ L. However, L_{||·||}(f; S) can be much smaller, since it only looks at the behavior of f on S and cares less about the global smoothness of f.

Remark 2. (Connection to boundedness of the Hessian) If the function f is twice differentiable on S, then using a second order Taylor expansion, L_{||·||}(f; S) can be bounded as

  L_{||·||}(f; S) ≤ sup_{x,y,z∈S} ⟨∇²f(z)(y − x), y − x⟩ / ||y − x||²   (4)

Again, suppose we have global control on ∇²f(x) in the form: for all z ∈ R^p, |||∇²f(z)||| ≤ H, where |||·||| is the ||·|| → ||·||_* operator norm of the matrix M defined as |||M||| := sup_{||x||≤1} ||Mx||_*. Then we immediately have L_{||·||}(f; S) ≤ H, but this inequality might be loose in general. In the statement of our results, we will derive convergence rates that depend on this Restricted Smoothness Property (RSP) constant of the loss function f in (2).
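As a numerical reading of Definition 1 (our own toy check, not from the paper), one can crudely lower-bound the RSP constant of a least-squares loss over the l1-ball by sampling pairs in S and compare against the Hessian bound of Remark 2; with ||·|| = ||·||_1 the Hessian bound is max_{i,j} |[ΦᵀΦ]_{i,j}|, and since ΦᵀΦ is PSD the two quantities agree here.

import numpy as np

rng = np.random.default_rng(0)
n, p, kappa = 40, 10, 1.0
Phi = rng.standard_normal((n, p))
y = rng.standard_normal(n)
H = Phi.T @ Phi                       # Hessian of f(x) = 0.5*||y - Phi x||^2

f = lambda x: 0.5 * np.sum((y - Phi @ x) ** 2)
grad = lambda x: Phi.T @ (Phi @ x - y)

def sample_l1_ball():
    """A point in the kappa-scaled l1 ball (Dirichlet weights + signs)."""
    w = rng.dirichlet(np.ones(p)) * rng.choice([-1.0, 1.0], p)
    return kappa * rng.uniform() * w

est = 0.0
for _ in range(5000):
    x, z = sample_l1_ball(), sample_l1_ball()
    a = rng.uniform(1e-3, 1.0)
    num = f((1 - a) * x + a * z) - f(x) - a * grad(x) @ (z - x)
    den = 0.5 * a**2 * np.linalg.norm(z - x, 1) ** 2
    if den > 0:
        est = max(est, num / den)

print("sampled RSP lower bound:", est)          # approaches the bound below
print("Hessian bound max|H_ij|:", np.abs(H).max())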
4 Greedy Algorithm and Analysis

In this section, we consider a general greedy scheme to solve the general optimization problem in (2), where f is a convex, smooth function. The idea is to add one atom to our representation at a time, in a way that the structure of the set of atoms can be exploited to perform the greedy step efficiently. Our greedy method is applicable to any constrained problem where the objective is sufficiently smooth.

Algorithm 1 A general greedy algorithm to minimize a convex function f over the κ-scaled atomic-norm "ball"
1: x_0 ← κa_0 for an arbitrary atom a_0 ∈ A
2: for t = 0, 1, 2, 3, ... do
3:   a_t ← argmin_{a∈A} ⟨∇f(x_t), a⟩
4:   α_t ← argmin_{α∈[0,1]} f(x_t + α(κa_t − x_t))
5:   x_{t+1} ← x_t + α_t(κa_t − x_t)
6: end for

Theorem 1. Assume that f is convex and differentiable and let ||·|| be any norm. Then, for any T ≥ 1, the iterates generated by Algorithm 1 lie in κC_A and satisfy

  f(x_{T+1}) − f(x*) ≤ 8 L_{||·||}(f; κC_A) κ² ||A||² / T   (5)

for any solution x* of (2). Here L_{||·||}(f; κC_A) is the smoothness constant as defined in (3) and ||A|| := sup_{a∈A} ||a||.

Proof. Let us use the abbreviations L and R for L_{||·||}(f; κC_A) and ||A|| respectively. The fact that the iterates lie in κC_A follows immediately from the definition of the algorithm and a simple induction. Now, assuming x_t ∈ κC_A, we have, by definition of L, for any α ∈ [0,1],

  f(x_t + α(κa_t − x_t)) ≤ f(x_t) + α⟨∇f(x_t), κa_t − x_t⟩ + (L/2) α² ||κa_t − x_t||²
    ≤ f(x_t) + α⟨∇f(x_t), κa_t − x_t⟩ + (L/2) α² (2||κa_t||² + 2||x_t||²)
    ≤ f(x_t) − α(−⟨∇f(x_t), κa_t⟩ + ⟨∇f(x_t), x_t⟩) + 2α²Lκ²R² .   (6)

The last inequality holds because ||κa_t||, ||x_t|| ≤ κR. Now, for any minimizer x* of f, we have, by convexity of f,

  δ_t := f(x_t) − f(x*) ≤ ⟨∇f(x_t), x_t − x*⟩ = ⟨∇f(x_t), x_t⟩ − ⟨∇f(x_t), x*⟩ ≤ ⟨∇f(x_t), x_t⟩ − ⟨∇f(x_t), κa_t⟩ .   (7)

The last inequality holds because a_t is the minimizer of the linear function ⟨∇f(x_t), ·⟩ over A (and hence also over C_A) and x*/κ ∈ C_A. Thus, ⟨∇f(x_t), κa_t⟩ ≤ ⟨∇f(x_t), x*⟩. Plugging (7) into (6), we have, for any α ≥ 0, f(x_t + α(κa_t − x_t)) ≤ f(x_t) − αδ_t + 2α²Lκ²R². Since x_{t+1} is chosen by minimizing the LHS over α ∈ [0,1], we have f(x_{t+1}) ≤ f(x_t) + min_{α∈[0,1]} (−αδ_t + 2α²Lκ²R²). Thus, we have, for all t ≥ 0, δ_{t+1} ≤ δ_t + min_{α∈[0,1]} (−αδ_t + 2α²Lκ²R²). For t = 0, choose α = 1 on the RHS to get δ_1 ≤ 2Lκ²R². Since the δ_t are decreasing, this shows δ_t ≤ 2Lκ²R² for all t ≥ 1. Hence, for t ≥ 1, we can choose α = δ_t / (4Lκ²R²) ∈ [0, ½] on the RHS to get, for all t ≥ 1, δ_{t+1} ≤ δ_t − δ_t² / (8Lκ²R²). Solving this recursion easily gives, for all t ≥ 1, f(x_{t+1}) − f(x*) ≤ 8Lκ²R²/t. □

Remark 3. We emphasize that the norm ||·|| appears only in the analysis and not in the algorithm. Since the bound of Theorem 1 is simultaneously true for all norms ||·||, the best bound is achieved by choosing a norm that minimizes the product of ||A||² and L_{||·||}(f; κC_A).
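As a concrete reading of Algorithm 1, the sketch below (our own illustration, not code from the paper) specializes the loop to f(x) = ½||y − Φx||²_2, where the line search of step 4 has a closed form. The `lmo` argument is a hypothetical user-supplied oracle for step 3; `lmo_sparse` is the sparse-vector oracle described in Section 5 below.

import numpy as np

def greedy_quadratic(Phi, y, lmo, a0, kappa, T):
    """Algorithm 1 for f(x) = 0.5*||y - Phi x||_2^2 with exact line search.
    lmo(g) must return argmin_{a in A} <g, a> for the chosen atom set A."""
    x = kappa * a0
    for _ in range(T):
        g = Phi.T @ (Phi @ x - y)       # gradient at x_t
        d = kappa * lmo(g) - x          # direction toward greedy atom (step 3)
        Pd = Phi @ d
        denom = Pd @ Pd
        # closed-form minimizer of the quadratic in alpha, clipped to [0, 1]
        alpha = 0.0 if denom == 0 else float(np.clip(-(g @ d) / denom, 0.0, 1.0))
        x = x + alpha * d               # steps 4-5
    return x

def lmo_sparse(g):
    """Example 1 oracle: atoms +/- e_i, so pick the largest-|.| coordinate."""
    j = int(np.argmax(np.abs(g)))
    a = np.zeros_like(g)
    a[j] = -np.sign(g[j]) if g[j] != 0 else 1.0
    return a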
Remark 4. We make the simple but useful observation that the iterate x_t can be written as a convex combination of at most t + 1 atoms, namely a_0, a_1, ..., a_t.

Remark 5. Given κ, Algorithm 1 is completely parameter free. This is a nice feature from a practical perspective, as it frees the practitioner from the task of tuning parameters.

5 Special Cases

Let us revisit the examples from Section 2.2 to see what concrete algorithms and accuracy bounds we get by specializing Algorithm 1 and its bound (Theorem 1) to them. (A small usage demo for the sparse-vector case follows this list of special cases.)

Sparse vectors The greedy step reduces to

  a_t ← argmin_{a ∈ ±{e_1,...,e_p}} ⟨∇f(x_t), a⟩ .

Clearly, assuming that the gradient is already available, this can be done in O(p) time by finding j ∈ {1,...,p} such that j = argmax_{j'} |[∇f(x_t)]_{j'}| and setting a_t = −sign([∇f(x_t)]_j) e_j. This actually gives a well-known algorithm whose roots go back to the 1950s [3]. More recently, a variant appeared as the Forward Greedy Selection algorithm in [5] (see also [4]). In fact, the original Frank-Wolfe algorithm can be applied whenever the set C_A is polyhedral. If we choose the norm ||·|| to be lq then ||A|| is 1 irrespective of q ∈ [1,∞], and the smoothness constant L_{||·||}(f; κC_A) is an increasing function of q. Hence, to minimize the bound, we should choose q = 1 and measure the smoothness of f over the κ-scaled l1-ball using the l1-norm. When f(x) = ½||y − Φx||²_2, we can use the connection to Hessian bounds (Remark 2) and immediately get the upper bound 8κ² ||ΦᵀΦ||_{1→∞} / T, where the norm ||M||_{1→∞} := sup_{||x||_1≤1} ||Mx||_∞ is simply max_{i,j} |M_{i,j}|.

Sparse non-negative vectors The greedy step becomes

  a_t ← argmin_{a ∈ {e_1,...,e_p}} ⟨∇f(x_t), a⟩ .

As in the previous example, this can be done in O(p) time given the gradient entries by computing j = argmin_{j'∈{1,...,p}} [∇f(x_t)]_{j'} and setting a_t = e_j. This particular algorithm to optimize a smooth function over the (scaled) probability simplex appears in [6]. Following the same reasoning as above, we get the best (among all lq-norms) bound if we choose ||·|| to be ||·||_1, and then our smoothness constant becomes similar to Clarkson's "nonlinearity measure", which he denotes by C_f.

Group sparse matrices This is an interesting case since there are an infinite number of atoms. But still the greedy step

  a_t ← argmin_{a : nnzrows(a)=1, ||a||_{q,1}=1} ⟨∇f(x_t), a⟩

(where nnzrows counts the number of non-zero rows of a matrix) can be computed easily as follows. Let q̄ be the dual exponent of q that satisfies 1/q + 1/q̄ = 1, and find the row j of ∇f(x_t) with maximal lq̄-norm. Then, set a_t to be the matrix all of whose rows are zero except row j. In row j, place the vector uᵀ where u ∈ R^{k×1} is such that³ ⟨u, [∇f(x_t)]_{j,:}⟩ = −||[∇f(x_t)]_{j,:}||_{q̄} and ||u||_q = 1. Such a vector u can be found in closed form. For the case f(x) = ½||y − Φx||²_2, choosing the norm ||·|| in Theorem 1 to be ||·||_{q,1} (and this gives the optimal bound among all ||·||_{q,r} norms for r > 1), we get the accuracy bound 8κ² ||ΦᵀΦ||_{(q,1)→(q,∞)} / T, where the q,1 → q,∞ norm of the operator ΦᵀΦ is defined as sup{||ΦᵀΦM||_{q,∞} : M ∈ R^{p×k}, ||M||_{q,1} ≤ 1}. This algorithm and its analysis are novel to the best of our knowledge. However, we note that a related greedy algorithm (that does not directly optimize the objective (2)) called Group-OMP appears in [14, 15].

Low rank matrices As in the previous case, we have an infinite number of atoms: all rank-1 matrices with Frobenius norm 1.
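Here is the usage demo for the sparse-vector case promised above, reusing greedy_quadratic and lmo_sparse from the sketch after Remark 3 (again our own toy, with made-up dimensions): by Remark 4, the support of the iterate after T steps has size at most T + 1.

import numpy as np

rng = np.random.default_rng(1)
n, p, T = 50, 200, 25
Phi = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[:3] = [2.0, -1.0, 0.5]
y = Phi @ x_true

a0 = np.zeros(p)
a0[0] = 1.0
x_hat = greedy_quadratic(Phi, y, lmo_sparse, a0, kappa=3.5, T=T)

print("objective:", 0.5 * np.sum((y - Phi @ x_hat) ** 2))
# Remark 4: the iterate is a combination of at most T + 1 atoms
print("support size:", np.count_nonzero(x_hat), "<=", T + 1)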
Yet, the greedy step

  a_t ← argmin_{a : rank(a)=1, ||a||_F=1} ⟨∇f(x_t), a⟩

can be done in polynomial time by computing the SVD, ∇f(x_t) = UΣVᵀ, and setting a_t = −u_1 v_1ᵀ, where u_1, v_1 are the left and right singular vectors corresponding to the largest singular value σ_1. Since we only need the singular vectors corresponding to the largest singular value, the computation of a_t can be done much faster than the time it takes to compute a full SVD. For the case f(x) = ½||y − Φx||²_2, the bound of Theorem 1 is minimized, among all Schatten-q norms⁴, by using ||·|| = ||·||_{S(1)}, i.e. the trace or nuclear norm. Since the objective is twice differentiable, using Remark 2 we get the following upper bound on the accuracy: 8κ² ||ΦᵀΦ||_{S(1)→S(∞)} / T, which depends on the S(1) → S(∞) operator norm of ΦᵀΦ, defined as sup{||ΦᵀΦM||_{S(∞)} : M ∈ R^{p×p}, ||M||_{S(1)} ≤ 1}. This algorithm was recently independently discovered and analyzed in [7].

Low rank tensors Here, the greedy step

  a_t ← argmin_{a = u_1⊗u_2⊗u_3, ||u_i||_2=1} ⟨∇f(x_t), a⟩

appears intractable. Indeed, the above problem is closely related to the problem of finding the best rank-one approximation to a given tensor, which is known to be NP-hard [16] already for order-3 tensors. However, as described in [2], it is possible to construct a family of outer approximations C_A ⊆ ... ⊆ TH_{k+1} ⊆ TH_k such that, for any fixed k, TH_k can be described by a semidefinite program of size polynomial in k. So, even though the exact greedy step above may not be tractable, we can use these "theta bodies" (whence the notation "TH") to approximate the greedy step. The iterates will no longer lie strictly in the tensor nuclear ball of the given radius. Understanding the implications of such approximations and their analysis are interesting questions to pursue, but lie beyond the scope of the current paper.

³We use MATLAB notation M_{j,:} to denote row j of a matrix M.
⁴The Schatten-q norm of a matrix is the lq norm of its singular values.

Permutation matrices Here, fortunately, we again do not face intractability: the step

  a_t ← argmin_{a : a is a permutation matrix} ⟨∇f(x_t), a⟩

reduces to solving a linear assignment problem with costs C(i,j) = [∇f(x_t)]_{i,j}. This can be efficiently done using, for example, the Hungarian algorithm. Another way to see that the above step does not involve combinatorial explosion is to appeal to the Birkhoff-von Neumann theorem, which states that the convex hull of permutation matrices is the set of doubly stochastic matrices. As a result, the above reduces to minimizing a linear objective ⟨∇f(x_t), M⟩ subject to polynomially many constraints: M ≥ 0, M1 = 1 and Mᵀ1 = 1.
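Both of the tractable greedy steps just described reduce to standard numerical primitives. The sketch below (our own illustration, assuming SciPy is available) computes the low-rank step with a truncated SVD and the permutation step with a linear assignment solver.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.sparse.linalg import svds

def lmo_low_rank(G):
    """argmin over rank-1, Frobenius-norm-1 atoms of <G, a>, i.e. -u1 v1^T."""
    u, s, vt = svds(G.astype(float), k=1)    # top singular pair only
    return -np.outer(u[:, 0], vt[0, :])

def lmo_permutation(G):
    """argmin over permutation matrices of <G, a>, via linear assignment."""
    rows, cols = linear_sum_assignment(G)    # minimizes the chosen costs
    P = np.zeros_like(G)
    P[rows, cols] = 1.0
    return P

G = np.random.default_rng(2).standard_normal((5, 5))
print(np.sum(G * lmo_low_rank(G)))           # equals -sigma_1(G)
print(np.sum(G * lmo_permutation(G)))        # minimal assignment cost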
6 Extension to Infinite Dimensional Banach Spaces

In this section, we consider an extension of the framework behind Algorithm 1 to the case when the set of atoms lies in some infinite dimensional (real) Banach space (V, ||·||). For example, the atoms could be some "simple" real valued functions on some interval [a, b] ⊆ R. The two ingredients in our framework were the atomic norms and the Restricted Smoothness Property (RSP) parameters. In [2], and in Section 2.1, the atoms were considered as belonging to a finite dimensional Euclidean space. Note, however, that the definition of the atomic norms in (1) did not make use of the topology of the ambient space, and hence is applicable even when the atoms belong to some Banach space (V, ||·||). However, our definition of the RSP parameter in (3) relied critically on the Euclidean inner product, whence we will now extend it to the infinite dimensional case.

Consider a convex continuous Fréchet differentiable function f : V → R, and let ∇f(x) denote the Fréchet derivative of f at x. Let ⟨·,·⟩ : V* × V → R denote the bilinear function (which is not an inner product in general) ⟨X, x⟩ := X(x) for x ∈ V and X in the dual space V* (consisting of bounded linear functions on V).

Definition 2. Given a Banach space (V, ||·||), a set S ⊆ V, and some r ∈ [1,2], we define the Restricted Uniform Smoothness Property (RUSP) constant of a convex continuous Fréchet differentiable function f : V → R as

  L_r(f; S) := sup_{x,y∈S, α∈(0,1]} [f((1−α)x + αy) − f(x) − ⟨∇f(x), α(y − x)⟩] / ((1/r) α^r ||y − x||^r)   (8)

This need not be bounded in general, but would be bounded, for instance, if the function f were r-uniformly smooth (though this would be a far stronger condition). Suppose the set of atoms A ⊆ V is such that max_{a∈A} ⟨X, a⟩ is defined for any X ∈ V*. Then, we can define a straightforward extension of Algorithm 1, given as Algorithm 2.

Algorithm 2 A general greedy algorithm to minimize a continuous Fréchet differentiable convex function f over the convex hull of a set of atoms A in a Banach space (V, ||·||)
1: x_0 ← a_0 for an arbitrary atom a_0 ∈ A
2: for t = 0, 1, 2, 3, ... do
3:   X_t ∈ V* ← ∇f(x_t), the Fréchet derivative of f at x_t
4:   a_t ← argmax_{a∈A} ⟨−X_t, a⟩
5:   α_t ← argmin_{α∈[0,1]} f(x_t + α(a_t − x_t))
6:   x_{t+1} ← x_t + α_t(a_t − x_t)
7: end for

The following result proves a general rate of convergence for Algorithm 2. Since the proof follows the proof of Theorem 1 very closely, we defer it to the appendix.

Theorem 2. Suppose that (V, ||·||) is a Banach space and let f : V → R be a convex continuous Fréchet differentiable function. Let A be a set of atoms such that ||a|| ≤ R for all a ∈ A, and let S = conv(A). Suppose the Restricted Uniform Smoothness Property (RUSP) constant L_r(f; S) of f is bounded for some r ∈ [1,2]. Then,

  f(x_t) − inf_{x∈S} f(x) = O(L_r(f; S) R^r / t^{r−1}),

where the hidden constant depends on r only.

6.1 Rates of Convex Approximation in L_p Spaces

For p ∈ (1,∞) the space L_p([a,b]) consists of all functions g : [a,b] → R such that the (Lebesgue) integral ∫_a^b |g(x)|^p dx is finite. The space L_p is a Banach space once we equip it with the norm ||g||_{L_p} := (∫_a^b |g(x)|^p dx)^{1/p}. Let A be a set of atoms in L_p with bounded norm and let h ∈ L_p be a function that we wish to approximate using convex combinations of the atoms. Since the function g ↦ ||g||_p^{p̃} is p̃ = min{p, 2} uniformly smooth for p ∈ (1,∞), we can use Algorithm 2 to generate a sequence of functions g_1, g_2, ... such that g_t is a convex combination of only t atoms. Moreover, we will have the guarantee

  ||g_{t+1} − h||_p^{p̃} − inf_{g∈conv(A)} ||g − h||_p^{p̃} = O(t^{−(p̃−1)}) .

Such rates of convex approximation in non-Hilbert spaces have been studied earlier (see, for example, [10, 11]). Note that, unlike [10], we do not assume that h ∈ conv(A). If that is the case, the above rate simplifies to the rates given in [10]: O(t^{−1+1/p}) for p ∈ (1,2), and O(t^{−1/2}) for p > 2.
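To see Algorithm 2 in action in this L_p setting, here is a toy discretization entirely of our own construction (bump atoms, grid, and step counts are all made up): the objective is f(g) = (1/p)||g − h||_p^p for a target h ∈ conv(A), and the greedy atom is chosen by pairing the Fréchet derivative with each atom.

import numpy as np
from scipy.optimize import minimize_scalar

grid = np.linspace(0.0, 1.0, 400)               # discretized [0, 1]
dx = grid[1] - grid[0]
centers = np.linspace(0.1, 0.9, 9)
atoms = [np.maximum(0.0, 1.0 - 8.0 * np.abs(grid - c)) for c in centers]

rng = np.random.default_rng(3)
w = rng.dirichlet(np.ones(len(atoms)))
h = sum(wi * a for wi, a in zip(w, atoms))      # target in conv(A)

p = 1.5                                          # L_p with p in (1, 2)
f = lambda g: (1.0 / p) * np.sum(np.abs(g - h) ** p) * dx
grad = lambda g: np.abs(g - h) ** (p - 1) * np.sign(g - h)  # derivative density

g = atoms[0].copy()                              # step 1: x_0 <- a_0
for t in range(100):
    X = grad(g)                                  # step 3
    scores = [np.sum(X * a) * dx for a in atoms] # <X_t, a> for each atom
    a = atoms[int(np.argmin(scores))]            # step 4: argmax <-X_t, a>
    res = minimize_scalar(lambda s: f(g + s * (a - g)),
                          bounds=(0.0, 1.0), method="bounded")
    g = g + res.x * (a - g)                      # steps 5-6

print("L_p distance to h:", (p * f(g)) ** (1.0 / p))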
6.2 Rates of Convex Approximation in Spaces with Martingale Type p

Note that, in the previous subsection, the only property of L_p spaces that we used to get rates was the fact that the norm raised to some power is a uniformly smooth convex function. It turns out that the existence of uniformly smooth functions in a given Banach space is intimately connected to the behavior of martingale difference sequences in that space. To precisely state the connection, we need to define the notion of martingale type (also called Haar type) [17, p. 320]. A Banach space (V, ||·||) is said to have martingale type p (M-type p in short) if there exists a constant K such that, for all T ≥ 1 and any V-valued martingale difference sequence d_1, ..., d_T, we have

  E ||Σ_{t=1}^T d_t||^p ≤ K^p Σ_{t=1}^T E ||d_t||^p .

Note that, by the triangle inequality for norms, any Banach space always has M-type 1, while a Hilbert space (i.e. the norm ||·|| comes from an inner product) has M-type 2. Hilbert spaces essentially have the best M-type, in the sense that no Banach space has M-type p for p > 2. The connection of M-type to uniform smoothness is made precise by the following remarkable theorem (see also [18]).

Theorem (Pisier, [19]). A Banach space has M-type p iff there is an equivalent norm⁵ ||·||_# such that the function ||·||_#^p is p-uniformly smooth.

Consider the setting of the previous subsection where we have some h ∈ conv(A) for some set A of atoms in an arbitrary Banach space (V, ||·||). Using Pisier's theorem, we get the following corollary.

Corollary 3. Suppose A is a set of atoms in a Banach space (V, ||·||) that has M-type p and let h ∈ conv(A). Suppose Algorithm 2 generates iterates g_1, g_2, ... when run on the function g ↦ ||g − h||_#^p, whose existence is guaranteed by Pisier's theorem. Then, we have ||g_{t+1} − h|| = O(t^{−1+1/p}).

7 Future Work

First, we envisage the algorithm being used to compute the entire regularization path corresponding to all values of the constraint parameter κ. Using a warm start strategy, where the algorithm for higher values of κ is initialized with the solution for lower values, can be very helpful here. Exploring this to get a general practical algorithm to compute the entire path would be very nice. Third, linear convergence guarantees for projected gradient type methods have been obtained by [13], where they make the additional assumption of (generalized) restricted strong convexity. It should be possible to derive similar faster rates for our greedy algorithm.

Acknowledgments

We gratefully acknowledge the support of NSF under grant IIS-1018426. ISD acknowledges support from the Moncrief Grand Challenge Award.

⁵That is, c_L ||·|| ≤ ||·||_# ≤ c_U ||·|| for some c_L, c_U > 0.

References

[1] S. Negahban, P. Ravikumar, M. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems 22, pages 1348-1356, 2009.
[2] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. In Proceedings of the 48th Annual Allerton Conference on Communication, Control and Computing, pages 699-703, 2010.
[3] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110, 1956.
[4] T. Zhang. Sequential greedy approximation for certain convex optimization problems. IEEE Transactions on Information Theory, 49(3):682-691, 2003.
[5] S. Shalev-Shwartz, N. Srebro, and T. Zhang. Trading accuracy for sparsity in optimization problems with sparsity constraints. SIAM Journal on Optimization, 20(6):2807-2832, 2010.
[6] K. Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 922-931.
Society for Industrial and Applied Mathematics, 2008.
[7] S. Shalev-Shwartz, A. Gonen, and O. Shamir. Large-scale convex minimization with a low-rank constraint. In Proceedings of the 28th International Conference on Machine Learning, pages 329-336, 2011.
[8] H. Liu and X. Chen. Nonparametric greedy algorithms for the sparse learning problem. In Advances in Neural Information Processing Systems 22, pages 1141-1149, 2009.
[9] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. In Proceedings of the 28th International Conference on Machine Learning, pages 1057-1064, 2011.
[10] M. J. Donahue, C. Darken, L. Gurvits, and E. Sontag. Rates of convex approximation in non-Hilbert spaces. Constructive Approximation, 13(2):187-220, 1997.
[11] V. N. Temlyakov. Greedy approximation. Acta Numerica, 17:235-409, 2008.
[12] R. E. Schapire. The boosting approach to machine learning: an overview. In D. D. Denison, M. H. Hansen, C. C. Holmes, B. Mallick, and B. Yu, editors, Nonlinear Estimation and Classification, volume 171 of Lecture Notes in Statistics, pages 149-172. Springer, 2003.
[13] A. Agarwal, S. Negahban, and M. Wainwright. Fast global convergence rates of gradient methods for high-dimensional statistical recovery. In Advances in Neural Information Processing Systems 23, pages 37-45, 2010.
[14] A. C. Lozano, G. Swirszcz, and N. Abe. Grouped orthogonal matching pursuit for variable selection and prediction. In Advances in Neural Information Processing Systems 22, pages 1150-1158, 2009.
[15] A. C. Lozano, G. Swirszcz, and N. Abe. Grouped orthogonal matching pursuit for logistic regression. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of JMLR Workshop and Conference Proceedings, 2011.
[16] C. Hillar and L.-H. Lim. Most tensor problems are NP-hard, 2010. Available at http://arxiv.org/abs/0911.1393v2.
[17] A. Pietsch. History of Banach Spaces and Linear Operators. Birkhäuser, 2007.
[18] J. Borwein, A. J. Guirao, P. Hájek, and J. Vanderwerff. Uniformly convex functions in Banach spaces. Proceedings of the American Mathematical Society, 137(3):1081-1091, 2009.
[19] G. Pisier. Martingales with values in uniformly convex spaces. Israel Journal of Mathematics, 20(3-4):326-350, 1975.
On the Universality of Online Mirror Descent

Nathan Srebro
TTIC
[email protected]

Karthik Sridharan
TTIC
[email protected]

Ambuj Tewari
University of Texas at Austin
[email protected]

Abstract

We show that for a general class of convex online learning problems, Mirror Descent can always achieve a (nearly) optimal regret guarantee.

1 Introduction

Mirror Descent is a first-order optimization procedure which generalizes the classic Gradient Descent procedure to non-Euclidean geometries by relying on a "distance generating function" specific to the geometry (the squared l2 norm in the case of standard Gradient Descent) [14, 4]. Mirror Descent is also applicable, and has been analyzed, in a stochastic optimization setting [9] and in an online setting, where it can ensure bounded online regret [20]. In fact, many classical online learning algorithms can be viewed as instantiations or variants of Online Mirror Descent, generally either with the Euclidean geometry (e.g. the Perceptron algorithm [5] and Online Gradient Descent [27]), or in the simplex (l1 geometry), using an entropic distance generating function (Winnow [13] and Multiplicative Weights / Online Exponentiated Gradient algorithm [11]). More recently, the Online Mirror Descent framework has been applied, with appropriate distance generating functions derived, to a variety of new learning problems like multi-task learning and other matrix learning problems [10], online PCA [26], etc.

In this paper, we show that Online Mirror Descent is, in a sense, universal. That is, for any convex online learning problem of a general form (specified in Section 2), if the problem is online learnable, then it is online learnable, with a nearly optimal regret rate, using Online Mirror Descent with an appropriate distance generating function. Since Mirror Descent is a first order method and often has simple and computationally efficient update rules, this makes the result especially attractive. Viewing online learning as a sequentially repeated game, this means that Online Mirror Descent is a near optimal strategy, guaranteeing an outcome very close to the value of the game.

In order to show such universality, we first generalize and refine the standard Mirror Descent analysis to situations where the constraint set is not the dual of the data domain, obtaining a general upper bound on the regret of Online Mirror Descent in terms of the existence of an appropriate uniformly convex distance generating function (Section 3). We then extend the notion of a martingale type of a Banach space to be sensitive to both the constraint set and the data domain, and building on results of [24], we relate the value of the online learning repeated game to this generalized notion of martingale type (Section 4). Finally, again building on and generalizing the work of [16], we show how having appropriate martingale type guarantees the existence of a good uniformly convex function (Section 5), which in turn establishes the desired nearly-optimal guarantee on Online Mirror Descent (Section 6).

We mainly build on the analysis of [24], who related the value of the online game to the notion of martingale type of a Banach space and uniform convexity when the constraint set and data domain are dual to each other. The main technical advance here is a non-trivial generalization of their analysis (as well as the Mirror Descent analysis) to the more general situation where the constraint set and data domain are chosen independently of each other.
In Section 7, several examples are provided that demonstrate the use of our analysis.

Mirror Descent was initially introduced as a first order deterministic optimization procedure, with an lp constraint and a matching lq Lipschitz assumption (1 ≤ p ≤ 2, 1/q + 1/p = 1), and was shown to be optimal in terms of the number of exact gradient evaluations [15]. Shalev-Shwartz and Singer later observed that the online version of Mirror Descent, again with an lp bound and matching lq Lipschitz assumption (1 ≤ p ≤ 2, 1/q + 1/p = 1), is also optimal in terms of the worst-case (adversarial) online regret. In fact, in such scenarios stochastic Mirror Descent is also optimal in terms of the number of samples used. We emphasize that although in most, if not all, settings known to us these three notions of optimality coincide, here we focus only on the worst-case online regret. Sridharan and Tewari [24] generalized the optimality of online Mirror Descent (w.r.t. regret) to scenarios where the learner is constrained to a unit ball of an arbitrary Banach space (not necessarily an lp space) and the objective functions have sub-gradients that lie in the dual ball of the space; for reasons that will become clear shortly, we refer to this as the data domain. However, often we encounter problems where the constraint set and data domain are not dual balls, but rather are arbitrary convex subsets. In this paper, we explore this more general, "non-dual", variant, and show that also in such scenarios online Mirror Descent is (nearly) optimal in terms of the (asymptotic) worst-case online regret.

2 Online Convex Learning Problem

An online convex learning problem can be viewed as a multi-round repeated game where on round t, the learner first picks a vector (predictor) w_t from some fixed set W, which is a closed convex subset of a vector space B. Next, the adversary picks a convex cost function f_t : W → R from a class of convex functions F. At the end of the round, the learner pays the instantaneous cost f_t(w_t). We refer to the strategy used by the learner to pick the w_t's as an online learning algorithm. More formally, an online learning algorithm A for the problem is specified by the mapping A : ∪_{n∈N} F^{n−1} → W. The regret of the algorithm A for a given sequence of cost functions f_1, ..., f_n is given by

  R_n(A, f_1, ..., f_n) = (1/n) Σ_{t=1}^n f_t(A(f_{1:t−1})) − inf_{w∈W} (1/n) Σ_{t=1}^n f_t(w) .

The goal of the learner (or the online learning algorithm) is to minimize the regret for any n. In this paper, we consider cost function classes F specified by a convex subset X ⊆ B* of the dual space B*. We consider various types of classes, where for all of them, sub-gradients¹ of the functions in F lie inside X (we use the notation ⟨x, w⟩ to mean applying the linear functional x ∈ B* to w ∈ B):

  F_Lip(X) = {f : f is convex, ∀w ∈ W, ∂f(w) ∈ X} ,
  F_sup(X) = {w ↦ |⟨x, w⟩ − y| : x ∈ X, y ∈ [−b, b]} ,
  F_lin(X) = {w ↦ ⟨x, w⟩ : x ∈ X} .

The value of the game is then the best possible worst-case regret guarantee an algorithm can enjoy. Formally:

  V_n(F, X, W) = inf_A sup_{f_{1:n}∈F(X)} R_n(A, f_{1:n})   (1)

It is well known that the value of the game for all the above sets F is the same. More generally:

Proposition 1. If for a convex function class F we have that ∀f ∈ F, w ∈ W, ∂f(w) ∈ X, then

  V_n(F, X, W) ≤ V_n(F_lin, X, W) .

Furthermore,

  V_n(F_Lip, X, W) = V_n(F_sup, X, W) = V_n(F_lin, X, W) .

That is, the value for any class F for which sub-gradients are in X is upper bounded by the value of the class of linear functionals in W, see e.g. [1].
In particular, this includes the class F_Lip, which is the class of all functions with sub-gradients in X, and since F_lin(X) ⊆ F_Lip(X) we get the first equality. The second equality is shown in [18]. The class F_sup(X) corresponds to linear prediction with an absolute-difference loss, and thus its value is the best possible guarantee for online supervised learning with this loss. We can define more generally a class F_φ = {φ(⟨x, w⟩, y) : x ∈ X, y ∈ [−b, b]} for any 1-Lipschitz loss φ, and this class would also be of the desired type, with its value upper bounded by V_n(F_lin, X, W). In fact, this setting includes supervised learning fairly generally, including problems such as multi-task learning and matrix completion, where in all cases X specifies the data domain². The equality in the above proposition can also be extended to other commonly occurring convex loss function classes, like the hinge loss class, with some extra constant factors.

¹Throughout we commit to a slight abuse of notation, with ∂f(w) indicating some sub-gradient of f at w and ∂f(w) ∈ X meaning that at least one of the sub-gradients is in X.
²Any convex supervised learning problem can be viewed as linear classification with some convex constraint W on predictors.

Owing to Proposition 1, we can focus our attention on the class F_lin (as the other two behave similarly), and use the shorthand

  V_n(W, X) := V_n(F_lin, X, W)   (2)

Henceforth, the term value without any qualification refers to the value of the linear game. Further, for any p ∈ [1, 2] let

  V_p := inf{ V : ∀n ∈ N, V_n(W, X) ≤ V n^{−(1−1/p)} }   (3)

Most prior work on online learning and optimization considers the case when W is the unit ball of some Banach space, and X is the unit ball of the dual space, i.e. W and X are related to each other through duality. In this work, however, we analyze the general problem where X ⊆ B* is not necessarily the dual ball of W. It will be convenient for us to relate the notions of a convex set and a corresponding norm. The Minkowski functional of a subset K of a vector space V is defined as ||v||_K := inf{α > 0 : v ∈ αK}. If K is convex and centrally symmetric (i.e. K = −K), then ||·||_K is a semi-norm. Throughout this paper, we will require that W and X are convex and centrally symmetric. Further, if the set K is bounded then ||·||_K is a norm. Although not strictly required for our results, for simplicity we will assume W and X are such that ||·||_W and ||·||_X (the Minkowski functionals of the sets W and X) are norms. Even though we do this for simplicity, we remark that all the results go through for semi-norms. We use X⋆ and W⋆ to represent the duals of the balls X and W respectively, i.e. the unit balls of the dual norms ||·||_{X⋆} and ||·||_{W⋆}.

3 Mirror Descent and Uniform Convexity

A key tool in the analysis of mirror descent is the notion of strong convexity, or more generally uniform convexity:

Definition 1. Ψ : B → R is q-uniformly convex w.r.t. ||·|| if for any w, w' ∈ B:

  ∀λ∈[0,1]  Ψ(λw + (1 − λ)w') ≤ λΨ(w) + (1 − λ)Ψ(w') − (λ(1 − λ)/q) ||w − w'||^q

We emphasize that in the definition above, the norm ||·|| and the subset W need not be related, and we only require uniform convexity inside W. This allows us to relate a norm with a non-matching "ball". To this end, define

  D_p := inf{ (sup_{w∈W} Ψ(w))^{(p−1)/p} : Ψ : W → R⁺ is p/(p−1)-uniformly convex w.r.t. ||·||_{X⋆}, Ψ(0) = 0 }

Given a function Ψ, the Mirror Descent algorithm A_MD is given by

  w_{t+1} = argmin_{w∈W} Δ_Ψ(w | w_t) + η⟨∇f_t(w_t), w − w_t⟩   (4)

or equivalently

  w̃_{t+1} = ∇Ψ*(∇Ψ(w_t) − η∇f_t(w_t)),   w_{t+1} = argmin_{w∈W} Δ_Ψ(w | w̃_{t+1})   (5)

where Δ_Ψ(w | w') := Ψ(w) − Ψ(w') − ⟨∇Ψ(w'), w − w'⟩ is the Bregman divergence and Ψ* is the convex conjugate of Ψ. As an example, notice that when Ψ(w) = ½||w||²_2 we get the gradient descent algorithm, and when W is the d-dimensional simplex and Ψ(w) = Σ_{i=1}^d w_i log w_i (the negative entropy) we get the multiplicative weights update algorithm.
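As an illustration of updates (4)-(5), here is a minimal sketch of our own (not code from the paper): entropic Online Mirror Descent on the simplex with linear losses f_t(w) = ⟨x_t, w⟩. For the negative entropy, the unconstrained step and Bregman (KL) projection of (5) reduce to a multiplicative update followed by renormalization.

import numpy as np

def online_mirror_descent(X, eta):
    """Entropic OMD on the probability simplex for linear losses <x_t, w>.
    X : (n, d) array whose rows are the adversary's loss vectors x_t."""
    n, d = X.shape
    w = np.full(d, 1.0 / d)          # w_1 = argmin of negative entropy
    iterates = []
    for x in X:
        iterates.append(w)
        w = w * np.exp(-eta * x)     # unconstrained mirror step in (5)
        w = w / w.sum()              # Bregman (KL) projection onto simplex
    return np.array(iterates)

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.uniform(-1, 1, size=(n, d))              # ||x_t||_inf <= 1
W = online_mirror_descent(X, eta=np.sqrt(np.log(d) / n))
regret = (W * X).sum() / n - X.mean(axis=0).min()
print("average regret:", regret)                 # scales like sqrt(log d / n)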
where $\Delta_\Psi(w\,|\,w') := \Psi(w) - \Psi(w') - \langle\nabla\Psi(w'), w - w'\rangle$ is the Bregman divergence and $\Psi^\star$ is the convex conjugate of $\Psi$. As an example, notice that when $\Psi(w) = \frac{1}{2}\|w\|_2^2$ we get the gradient descent algorithm, and when $\mathcal{W}$ is the $d$-dimensional simplex and $\Psi(w) = \sum_{i=1}^d w_i \log w_i$ (the negative entropy) we get the multiplicative weights update algorithm.
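The two regularizers just named make the update (4)-(5) concrete; the sketch below spells both out, assuming the entropic case is run on the probability simplex (where the mirror step followed by the Bregman/KL projection reduces to the familiar multiplicative-weights update). The losses, step sizes, and dimensions are illustrative.

```python
import numpy as np

def md_entropy_step(w, grad, eta):
    """One Mirror Descent step on the simplex with Psi(w) = sum_i w_i log w_i.
    The mirror step gives w * exp(-eta*grad); normalizing is the KL projection."""
    w_tilde = w * np.exp(-eta * grad)
    return w_tilde / w_tilde.sum()

def md_euclidean_step(w, grad, eta, project):
    """With Psi(w) = 0.5*||w||_2^2, Mirror Descent is projected gradient descent."""
    return project(w - eta * grad)

rng = np.random.default_rng(1)
d, n = 10, 500
w = np.full(d, 1.0 / d)                     # start at the uniform distribution
for t in range(n):
    grad = rng.uniform(-1, 1, size=d)       # subgradient of the round-t cost
    w = md_entropy_step(w, grad, eta=np.sqrt(np.log(d) / (t + 1)))
print(w.round(3))
```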
Lemma 2. Let $\Psi : \mathcal{B} \to \mathbb{R}$ be non-negative and $q$-uniformly convex w.r.t. the norm $\|\cdot\|_{\mathcal{X}^\star}$. For the Mirror Descent algorithm with this $\Psi$, using $w_1 = \mathrm{argmin}_{w\in\mathcal{W}}\Psi(w)$ and $\eta = \big(\sup_{w\in\mathcal{W}}\Psi(w)/n\big)^{1/p}$, we can guarantee that for any $f_1,\ldots,f_n$ such that $\frac{1}{n}\sum_{t=1}^n\|\nabla f_t\|_{\mathcal{X}}^p \le 1$ (where $p = \frac{q}{q-1}$),
$$R(\mathcal{A}_{\mathrm{MD}}, f_1,\ldots,f_n) \le 2\Big(\frac{\sup_{w\in\mathcal{W}}\Psi(w)}{n}\Big)^{1/q}.$$
Note that in our case we have $\nabla f \in \mathcal{X}$, i.e. $\|\nabla f\|_{\mathcal{X}} \le 1$, and so certainly $\frac{1}{n}\sum_{t=1}^n\|\nabla f_t\|_{\mathcal{X}}^p \le 1$. Similarly to the value of the game, for any $p \in [1, 2]$ we define:
$$\mathrm{MD}_p := \inf\Big\{D : \exists\,\Psi,\eta \text{ s.t. } \forall n \in \mathbb{N},\ \sup_{f_{1:n}\in\mathcal{F}(\mathcal{X})} R_n(\mathcal{A}_{\mathrm{MD}}, f_{1:n}) \le D\, n^{-(1-\frac{1}{p})}\Big\} \qquad (6)$$
where the Mirror Descent algorithm in the above definition is run with the corresponding $\Psi$ and $\eta$. The constant $\mathrm{MD}_p$ is a characterization of the best guarantee the Mirror Descent algorithm can provide. Lemma 2 therefore implies:

Corollary 3. $V_p \le \mathrm{MD}_p \le 2D_p$.

Proof. The first inequality is by the definitions of $V_p$ and $\mathrm{MD}_p$. The second inequality follows from the previous lemma.

The Mirror Descent bound suggests that as long as we can find an appropriate function $\Psi$ that is uniformly convex w.r.t. $\|\cdot\|_{\mathcal{X}^\star}$, we can get a diminishing regret guarantee. This suggests constructing the following function:
$$\Psi_q := \mathop{\mathrm{argmin}}_{\Psi:\ q\text{-uniformly convex w.r.t. } \|\cdot\|_{\mathcal{X}^\star} \text{ on } \mathcal{W},\ \Psi \ge 0}\ \sup_{w\in\mathcal{W}}\Psi(w). \qquad (7)$$
If no $q$-uniformly convex function exists, then $\Psi_q = \infty$ is assumed by default. The above function is in a sense the best choice for the Mirror Descent bound of Lemma 2. The question then is: when can we find such appropriate functions, and what is the best rate we can guarantee using Mirror Descent?

4 Martingale Type and Value

In [24], it was shown that the concept of the Martingale type (also sometimes called the Haar type) of a Banach space and optimal rates for the online convex optimization problem, where $\mathcal{X}$ and $\mathcal{W}$ are duals of each other, are closely related. In this section we extend the classic notion of Martingale type of a Banach space (see for instance [16]) to one that accounts for the pair $(\mathcal{W}^\star, \mathcal{X})$. Before we proceed with the definitions we would like to introduce a few necessary notations. First, throughout we shall use $\epsilon \in \{\pm 1\}^{\mathbb{N}}$ to represent an infinite sequence of signs drawn uniformly at random (i.e. each $\epsilon_i$ has equal probability of being $+1$ or $-1$). Also, throughout, $(x_n)_{n\in\mathbb{N}}$ represents a sequence of mappings where each $x_n : \{\pm 1\}^{n-1} \to \mathcal{B}^\star$. We shall commit to the abuse of notation of writing $x_n(\epsilon)$ for $x_n(\epsilon_1,\ldots,\epsilon_{n-1})$ (i.e. although we use the entire $\epsilon$ as the argument, $x_n$ only depends on the first $n-1$ signs). We are now ready to give the extended definition of Martingale type (or M-type) of a pair $(\mathcal{W}^\star, \mathcal{X})$.

Definition 2. A pair $(\mathcal{W}^\star, \mathcal{X})$ of subsets of a vector space $\mathcal{B}^\star$ is said to be of M-type $p$ if there exists a constant $C \ge 1$ such that for all sequences of mappings $(x_n)_{n\ge 1}$, where each $x_n : \{\pm 1\}^{n-1} \to \mathcal{B}^\star$, and any $x_0 \in \mathcal{B}^\star$:
$$\sup_n \mathbb{E}\Big\|x_0 + \sum_{i=1}^n \epsilon_i x_i(\epsilon)\Big\|_{\mathcal{W}^\star}^p \le C^p\Big(\|x_0\|_{\mathcal{X}}^p + \sum_{n\ge 1}\mathbb{E}\big[\|x_n(\epsilon)\|_{\mathcal{X}}^p\big]\Big) \qquad (8)$$

The concept is called Martingale type because $(\epsilon_n x_n(\epsilon))_{n\in\mathbb{N}}$ is a martingale difference sequence, and it can be shown that the rate of convergence of martingales in Banach spaces is governed by the rate of convergence of martingales of the form $Z_n = x_0 + \sum_{i=1}^n \epsilon_i x_i(\epsilon)$ (which are incidentally called Walsh-Paley martingales). We point the reader to [16, 17] for more details. Further, for any $p \in [1, 2]$ we also define
$$C_p := \inf\Big\{C \,:\, \forall x_0 \in \mathcal{B}^\star,\ \forall (x_n)_{n\in\mathbb{N}},\ \sup_n \mathbb{E}\Big\|x_0 + \sum_{i=1}^n \epsilon_i x_i(\epsilon)\Big\|_{\mathcal{W}^\star}^p \le C^p\Big(\|x_0\|_{\mathcal{X}}^p + \sum_{n\ge 1}\mathbb{E}\|x_n(\epsilon)\|_{\mathcal{X}}^p\Big)\Big\}$$
$C_p$ is useful in determining whether the pair $(\mathcal{W}^\star, \mathcal{X})$ has Martingale type $p$. The results of [24, 18], showing that Martingale type implies low regret, actually apply also for "non-matching" $\mathcal{W}$ and $\mathcal{X}$ and, in our notation, imply that $V_p \le 2C_p$. Specifically, we have the following theorem from [24, 18]:

Theorem 4. [24, 18] For any $\mathcal{W} \subseteq \mathcal{B}$, any $\mathcal{X} \subseteq \mathcal{B}^\star$ and any $n \ge 1$,
$$\sup_{x}\, \mathbb{E}\Big\|\frac{1}{n}\sum_{i=1}^n \epsilon_i x_i(\epsilon)\Big\|_{\mathcal{W}^\star} \le V_n(\mathcal{W}, \mathcal{X}) \le 2\sup_{x}\, \mathbb{E}\Big\|\frac{1}{n}\sum_{i=1}^n \epsilon_i x_i(\epsilon)\Big\|_{\mathcal{W}^\star}$$
where the suprema above are over sequences of mappings $(x_n)_{n\ge 1}$, where each $x_n : \{\pm 1\}^{n-1} \to \mathcal{X}$.

Our main interest here is in establishing that low regret implies Martingale type. To do so, we start with the above theorem to relate the value of the online convex optimization game to the rate of convergence of martingales in the Banach space. We then extend the result of Pisier in [16] to the "non-matching" setting, combining it with the above theorem to finally get:

Lemma 5. If for some $r \in (1, 2]$ there exists a constant $D > 0$ such that for any $n$, $V_n(\mathcal{W}, \mathcal{X}) \le D\, n^{-(1-\frac{1}{r})}$, then for all $p < r$ we can conclude that any $x_0 \in \mathcal{B}^\star$ and any sequence of mappings $(x_n)_{n\ge 1}$, where each $x_n : \{\pm 1\}^{n-1} \to \mathcal{B}^\star$, will satisfy:
$$\sup_n \mathbb{E}\Big\|x_0 + \sum_{i=1}^n \epsilon_i x_i(\epsilon)\Big\|_{\mathcal{W}^\star}^p \le \Big(\frac{1104\,D}{(r-p)^2}\Big)^p\Big(\|x_0\|_{\mathcal{X}}^p + \sum_{i\ge 1}\mathbb{E}\big[\|x_i(\epsilon)\|_{\mathcal{X}}^p\big]\Big)$$
That is, the pair $(\mathcal{W}^\star, \mathcal{X})$ is of Martingale type $p$. The following corollary is an easy consequence of the above lemma.

Corollary 6. For any $p \in [1, 2]$ and any $p' < p$: $C_{p'} \le \frac{1104\, V_p}{(p - p')^2}$.
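As a sanity check on Theorem 4, the quantity $\mathbb{E}\|\frac{1}{n}\sum_i \epsilon_i x_i(\epsilon)\|_{\mathcal{W}^\star}$ is easy to estimate by Monte Carlo for any fixed (non-adaptive) sequence $x_1,\ldots,x_n$, and any such sequence yields a lower bound on $V_n(\mathcal{W}, \mathcal{X})$. The sketch below uses the illustrative choices $\mathcal{W} = B_1$ (so $\|\cdot\|_{\mathcal{W}^\star}$ is the $\ell_\infty$ norm) and sign vectors $x_i$ in $\mathcal{X} = B_\infty$, which recovers the familiar $\sqrt{2\log(d)/n}$ behaviour.

```python
import numpy as np

# Monte Carlo estimate of E || (1/n) sum_i eps_i x_i ||_{W*} for a fixed
# sequence x_i (a lower-bound witness for V_n by Theorem 4).  Here W = B_1
# in R^d, so the W*-norm is l_inf, and the x_i are sign vectors in X = B_inf.
rng = np.random.default_rng(2)
d, n, trials = 50, 400, 2000
xs = rng.choice([-1.0, 1.0], size=(n, d))        # fixed, non-adaptive x_i in X
eps = rng.choice([-1.0, 1.0], size=(trials, n))  # Rademacher signs
avgs = eps @ xs / n                              # (1/n) sum_i eps_i x_i, per draw
est = np.abs(avgs).max(axis=1).mean()            # l_inf norm, averaged over draws
print(f"estimate: {est:.4f}   sqrt(2 log d / n): {np.sqrt(2*np.log(d)/n):.4f}")
```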
5 Uniform Convexity and Martingale Type

The classical notion of Martingale type plays a central role in the study of the geometry of Banach spaces. In [16], it was shown that a Banach space has Martingale type $p$ (the classical notion) if and only if uniformly convex functions with certain properties exist on that space (w.r.t. the norm of that Banach space). In this section, we extend this result and show how the Martingale type of a pair $(\mathcal{W}^\star, \mathcal{X})$ is related to the existence of certain uniformly convex functions. Specifically, the following lemma shows that the notion of Martingale type of a pair $(\mathcal{W}^\star, \mathcal{X})$ is equivalent to the existence of a non-negative function that is uniformly convex w.r.t. the norm $\|\cdot\|_{\mathcal{X}^\star}$.

Lemma 7. If, for some $p \in (1, 2]$, there exists a constant $C > 0$ such that for all sequences of mappings $(x_n)_{n\ge 1}$, where each $x_n : \{\pm 1\}^{n-1} \to \mathcal{B}^\star$, and any $x_0 \in \mathcal{B}^\star$:
$$\sup_n \mathbb{E}\Big\|x_0 + \sum_{i=1}^n \epsilon_i x_i(\epsilon)\Big\|_{\mathcal{W}^\star}^p \le C^p\Big(\|x_0\|_{\mathcal{X}}^p + \sum_{n\ge 1}\mathbb{E}\big[\|x_n(\epsilon)\|_{\mathcal{X}}^p\big]\Big)$$
(i.e. $(\mathcal{W}^\star, \mathcal{X})$ has Martingale type $p$), then there exists a convex function $\Psi : \mathcal{B} \to \mathbb{R}^+$ with $\Psi(0) = 0$ that is $q$-uniformly convex w.r.t. the norm $\|\cdot\|_{\mathcal{X}^\star}$ and satisfies $\frac{1}{q}\|w\|_{\mathcal{X}^\star}^q \le \Psi(w) \le \frac{C^q}{q}\|w\|_{\mathcal{W}^\star}^q$ for all $w \in \mathcal{B}$.

The following corollary follows directly from the above lemma.

Corollary 8. For any $p \in [1, 2]$, $D_p \le C_p$.

The proof of Lemma 7 goes further and gives a specific uniformly convex function $\Psi$ satisfying the desired requirement (i.e. establishing $D_p \le C_p$) under the assumptions of the previous lemma:
$$\tilde{\Psi}_q(x) := \sup_n\ \sup_{(x_i)}\ \Big\{\frac{1}{C_p^p}\,\mathbb{E}\Big\|x + \sum_{i=1}^n \epsilon_i x_i(\epsilon)\Big\|_{\mathcal{W}^\star}^p - \sum_{i\ge 1}\mathbb{E}\|x_i(\epsilon)\|_{\mathcal{X}}^p\Big\}, \qquad \Psi_q := (\tilde{\Psi}_q)^\star, \qquad (9)$$
where the inner supremum is over sequences of mappings $(x_n)_{n\in\mathbb{N}}$ and $p = \frac{q}{q-1}$.

6 Optimality of Mirror Descent

In Section 3 we saw that if we can find an appropriate uniformly convex function to use in the mirror descent algorithm, we can guarantee diminishing regret. However, the pending question there was when we can find such a function, and what rate we can guarantee. In Section 4 we introduced the extended notion of Martingale type of a pair $(\mathcal{W}^\star, \mathcal{X})$ and showed how it relates to the value of the game. Then, in Section 5, we saw how the concept of M-type relates to the existence of certain uniformly convex functions. We can now combine these results to show that the mirror descent algorithm is a universal online learning algorithm for convex learning problems. Specifically, we show that whenever a problem is online learnable, the mirror descent algorithm can guarantee near-optimal rates:

Theorem 9. If for some constant $V > 0$ and some $q \in [2, \infty)$, $V_n(\mathcal{W}, \mathcal{X}) \le V n^{-\frac{1}{q}}$ for all $n$, then for any $n > e^{q-1}$ there exist a regularizer function $\Psi$ and a step size $\eta$ such that the regret of the mirror descent algorithm using $\Psi$ against any $f_1,\ldots,f_n$ chosen by the adversary is bounded as:
$$R_n(\mathcal{A}_{\mathrm{MD}}, f_{1:n}) \le 6002\, V \log^2(n)\, n^{-\frac{1}{q}} \qquad (10)$$

Proof. Combining the Mirror Descent guarantee in Lemma 2, Lemma 7, and the lower bound in Lemma 5 with $p = \frac{q}{q-1} - \frac{1}{\log(n)}$, we get the above statement.

The above theorem tells us that, with appropriate $\Psi$ and learning rate $\eta$, mirror descent will obtain regret at most a factor of $6002\log^2(n)$ from the best possible worst-case upper bound. We would like to point out that the constant $V$ in the value of the game appears linearly, and there are no other problem- or space-related hidden constants in the bound. Figure 1 summarizes the relationships between the various constants; the arrow from $C_{p'}$ to $C_p$ indicates that, for any $n$, all the quantities are within a $\log^2(n)$ factor of each other.

Figure 1 (diagram): Relationship between the various constants: $C_{p'} \lesssim V_p$ for $p' < p$ (Lemma 5, extending Pisier's result [16]); $V_p \le \mathrm{MD}_p$ (definition of $\mathrm{MD}_p$); $\mathrm{MD}_p \le 2D_p$ (Lemma 2, generalized MD guarantee); $D_p \le C_p$ (construction of $\Psi$, extending Pisier's result [16]).

We now provide some general guidelines that will help in picking an appropriate function $\Psi$ for mirror descent. First, we note that though the function $\Psi_q$ in the construction (9) need not be such that $(q\Psi_q(w))^{1/q}$ is a norm, with a simple modification, as noted in [17], we can make it a norm. This basically tells us that the pair $(\mathcal{W}, \mathcal{X})$ is online learnable if and only if we can sandwich a $q$-uniformly convex norm in between $\|\cdot\|_{\mathcal{X}^\star}$ and a scaled version of $\|\cdot\|_{\mathcal{W}^\star}$ (for some $q < \infty$). Also note that, by the definition of uniform convexity, if a function $\Psi$ is $q$-uniformly convex w.r.t. some norm $\|\cdot\|$ and we have $\|\cdot\| \ge c\,\|\cdot\|_{\mathcal{X}^\star}$, then $\Psi(\cdot)/c^q$ is $q$-uniformly convex w.r.t. $\|\cdot\|_{\mathcal{X}^\star}$. These two observations together suggest that, given a pair $(\mathcal{W}, \mathcal{X})$, what we need to do is find a norm $\|\cdot\|$ in between $\|\cdot\|_{\mathcal{X}^\star}$ and $C\,\|\cdot\|_{\mathcal{W}^\star}$ ($C < \infty$; the smaller the $C$, the better the bound) such that $\frac{1}{q}\|\cdot\|^q$ is $q$-uniformly convex w.r.t. $\|\cdot\|$.

7 Examples

We demonstrate our results on several online learning problems, specified by $\mathcal{W}$ and $\mathcal{X}$.
$\ell_p$ non-dual pairs. It is usual in the literature to consider the case when $\mathcal{W}$ is the unit ball of the $\ell_p$ norm in some finite dimension $d$, while $\mathcal{X}$ is taken to be the unit ball of the dual norm $\ell_q$, where $p, q$ are Hölder conjugate exponents. Using the machinery developed in this paper, it becomes effortless to consider the non-dual case where $\mathcal{W}$ is the unit ball $B_{p_1}$ of some $\ell_{p_1}$ norm while $\mathcal{X}$ is the unit ball $B_{p_2}$, for arbitrary $p_1, p_2 \in [1, \infty]$. We shall use $q_1$ and $q_2$ to denote the Hölder conjugates of $p_1$ and $p_2$. Before we proceed, we first note that for any $r \in (1, 2]$, $\Psi_r(w) := \frac{1}{2(r-1)}\|w\|_r^2$ is 2-uniformly convex w.r.t. the norm $\|\cdot\|_r$ (see for instance [25]). On the other hand, by Clarkson's inequality, we have that for $r \in (2, \infty)$, $\Psi_r(w) := \frac{2^{r-2}}{r}\|w\|_r^r$ is $r$-uniformly convex w.r.t. $\|\cdot\|_r$. Putting it together, we see that for any $r \in (1, \infty)$, the function $\Psi_r$ defined above is $Q$-uniformly convex w.r.t. $\|\cdot\|_r$ for $Q = \max\{r, 2\}$. The basic idea is then to select $\Psi_r$ based on the guidelines at the end of the previous section. Finally, we show that using $\tilde{\Psi}_r := d^{\,Q\max\{\frac{1}{q_2}-\frac{1}{r},\,0\}}\,\Psi_r$ in the Mirror Descent guarantee of Lemma 2 yields the bound that for any $f_1,\ldots,f_n \in \mathcal{F}$:
$$R_n(\mathcal{A}_{\mathrm{MD}}, f_{1:n}) \le 2\max\Big\{2,\ \sqrt{\tfrac{1}{2(r-1)}}\Big\}\ \frac{d^{\,\max\{\frac{1}{q_2}-\frac{1}{r},\,0\} + \max\{\frac{1}{r}-\frac{1}{p_1},\,0\}}}{n^{1/\max\{r,2\}}}$$
The following table summarizes the scenarios where a value of $r = 2$, i.e. a rate of $D_2/\sqrt{n}$, is possible, and lists the corresponding values of $D_2$ (up to a numeric constant of at most 16):

$p_1$ range | $q_2 = \frac{p_2}{p_2-1}$ range | $D_2$
$1 \le p_1 \le 2$ | $q_2 > 2$ | $1$
$1 \le p_1 \le 2$ | $p_1 \le q_2 \le 2$ | $\sqrt{p_2 - 1}$
$1 \le p_1 \le 2$ | $1 \le q_2 < p_1$ | $\sqrt{p_2 - 1}\ d^{1/q_2 - 1/p_1}$
$p_1 > 2$ | $q_2 > 2$ | $d^{1/2 - 1/p_1}$
$p_1 > 2$ | $1 \le q_2 \le 2$ | $d^{1/q_2 - 1/p_1}$
$1 \le p_1 \le 2$ | $q_2 = \infty$ | $\sqrt{\log(d)}$

Note that the first two rows are dimension free, and so apply also in infinite-dimensional settings, whereas in the other scenarios $D_2$ is finite only when the dimension is finite. An interesting phenomenon occurs when $d$ is $\infty$, $p_1 > 2$ and $q_2 \ge p_1$: in this case $D_2 = \infty$, and so one cannot expect a rate of $O(\frac{1}{\sqrt{n}})$. However, we have $D_{p_2} < 16$ and so can still get a rate of $n^{-\frac{1}{q_2}}$. Ball et al. [3] tightly calculate the constants of strong convexity of squared $\ell_p$ norms, establishing the tightness of $D_2$ when $p_1 = p_2$. By extending their constructions, it is also possible to show tightness (up to a factor of 16) for all other values in the table. Also, Agarwal et al. [2] recently showed lower bounds on the sample complexity of stochastic optimization when $p_1 = \infty$ and $p_2$ is arbitrary; their lower bounds match the last two rows in the table.
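For quick reference, the following is an illustrative lookup of the $D_2$ column of the table above (returning the listed expressions up to the stated numeric constants, for the six listed rows); the helper name and interface are ours, not the paper's.

```python
import numpy as np

def d2_from_table(p1, p2, d):
    """Return the D2 entry of the table above for W = B_{p1}, X = B_{p2}
    in R^d, up to the stated numeric constants; q2 is the Hoelder
    conjugate of p2."""
    q2 = np.inf if p2 == 1.0 else p2 / (p2 - 1.0)
    if p1 <= 2.0:
        if np.isinf(q2):
            return np.sqrt(np.log(d))                        # row 6
        if q2 > 2.0:
            return 1.0                                       # row 1
        if q2 >= p1:
            return np.sqrt(p2 - 1.0)                         # row 2
        return np.sqrt(p2 - 1.0) * d ** (1.0/q2 - 1.0/p1)    # row 3
    if q2 > 2.0:
        return d ** (0.5 - 1.0/p1)                           # row 4
    return d ** (1.0/q2 - 1.0/p1)                            # row 5

print(d2_from_table(2.0, 2.0, 1000))   # dual Euclidean case: D2 = 1
print(d2_from_table(1.0, 1.0, 1000))   # W = B_1, X = B_1: sqrt(log d)
```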
Non-dual Schatten norm pairs in finite dimensions. Exactly the same analysis as above can be carried out for Schatten $p$-norms, i.e. when $\mathcal{W} = B_{S(p_1)}$ and $\mathcal{X} = B_{S(p_2)}$ are the unit balls of the Schatten $p$-norms (the $p$-norms of the singular values) for matrices of dimensions $d_1 \times d_2$. We get the same results as in the table above (as upper bounds on $D_2$), with $d = \min\{d_1, d_2\}$. These results again follow using arguments similar to the $\ell_p$ case, together with tight constants for the strong convexity parameters of the Schatten norms from [3].

Non-dual group norm pairs in finite dimensions. In applications such as multitask learning, group norms such as $\|w\|_{q,1}$ are often used on matrices $w \in \mathbb{R}^{k\times d}$, where the $(q,1)$-norm means taking the $\ell_1$-norm of the $\ell_q$-norms of the columns of $w$. Popular choices include $q = 2, \infty$. Here, it may be quite unnatural to use the dual norm $(p, \infty)$ to define the space $\mathcal{X}$ where the data lives. For instance, we might want to consider $\mathcal{W} = B_{(q,1)}$ and $\mathcal{X} = B_{(\infty,\infty)} = B_\infty$. In such a case we can calculate that $D_2(\mathcal{W}, \mathcal{X}) = \Theta\big(k^{1-\frac{1}{q}}\sqrt{\log(d)}\big)$, using $\Psi(w) = \frac{1}{q+r-2}\|w\|_{q,r}^2$ where $r = \frac{\log d}{\log d - 1}$.

Max Norm. The max-norm has been proposed as a convex matrix regularizer for applications such as matrix completion [21]. In the online version of the matrix completion problem, at each time step one element of the matrix is revealed, corresponding to $\mathcal{X}$ being the set of all matrices with a single element equal to 1 and the rest 0. Since we need $\mathcal{X}$ to be convex, we can take the absolute convex hull of this set and use as $\mathcal{X}$ the unit element-wise $\ell_1$ ball. Its dual norm is $\|W\|_{\mathcal{X}^\star} = \max_{i,j}|W_{i,j}|$. On the other hand, given a matrix $W$, its max-norm is given by $\|W\|_{\max} = \min_{U,V:\,W = UV^\top}\big(\max_i \|U_i\|_2\big)\big(\max_j \|V_j\|_2\big)$. The set $\mathcal{W}$ is the unit ball under the max-norm. As noted in [22], the max-norm ball is equivalent, up to a factor of two, to the convex hull of all rank-one sign matrices.

Let us now make a more general observation. Consider any set $\mathcal{W} = \mathrm{absconv}(\{w_1,\ldots,w_K\})$, the absolute convex hull of $K$ points $w_1,\ldots,w_K \in \mathcal{B}$. In this case, the Minkowski norm for this $\mathcal{W}$ is given by $\|w\|_{\mathcal{W}} := \inf_{\alpha_1,\ldots,\alpha_K:\ w = \sum_{i=1}^K \alpha_i w_i}\ \sum_{i=1}^K|\alpha_i|$. For any $q \in (1, 2]$, if we define the norm $\|w\|_{\mathcal{W},q} := \inf_{\alpha_1,\ldots,\alpha_K:\ w = \sum_{i=1}^K \alpha_i w_i}\big(\sum_{i=1}^K|\alpha_i|^q\big)^{1/q}$, then the function $\Psi(w) = \frac{1}{2(q-1)}\|w\|_{\mathcal{W},q}^2$ is 2-uniformly convex w.r.t. $\|\cdot\|_{\mathcal{W},q}$ (similar to the $\ell_1$-$\ell_q$ case). Further, if we use $q = \frac{\log K}{\log K - 1}$, then $\sup_{w\in\mathcal{W}}\Psi(w) = O(\log K)$ and so $D_2 = \sqrt{\log K}$. For the max-norm case, the norm is equivalent to the norm obtained by taking the absolute convex hull of the set of all rank-one sign matrices. The cardinality of this set is of course $2^{N+M}$ (for $N\times M$ matrices). Hence, using the above observation, and noting that for this choice of $q$ we have $\|w\|_{\mathcal{X}^\star} = \max_{i,j}|w_{ij}| \le e\,\|w\|_{\mathcal{W},q}$, we see that $\Psi$ is (up to a constant) 2-uniformly convex w.r.t. $\|\cdot\|_{\mathcal{X}^\star}$, and so we get a regret bound of $O\big(\sqrt{\frac{N+M}{n}}\big)$. This matches the stochastic (PAC) learning guarantee [22], and is the first guarantee we are aware of for the max-norm matrix completion problem in the online setting.

8 Conclusion and Discussion

In this paper we showed that for a general class of convex online learning problems, there always exists a distance generating function $\Psi$ such that Mirror Descent using this function achieves a near-optimal regret guarantee. This shows that a fairly simple first-order method, in which each iteration requires a gradient computation and a prox-map computation, is sufficient for online learning in a very general sense. Of course, the main challenge is deriving distance generating functions appropriate for specific problems; although we give two mathematical expressions for such functions, in equations (7) and (9), neither is particularly tractable in general. At the end of Section 6 we do give some general guidelines for choosing the right distance generating function, but obtaining a more explicit and simple procedure, at least for reasonable Banach spaces, is a very interesting question. Furthermore, for the Mirror Descent procedure to be efficient, the prox-map of the distance generating function must be efficiently computable, which means that even though a Mirror Descent procedure is always theoretically possible, we might in practice choose to use a non-optimal distance generating function, or even a non-MD procedure. Furthermore, we might also find other properties of $w$ desirable, such as sparsity, which would bias us toward alternative methods [12, 7]. Nevertheless, in most instances that we are aware of, Mirror Descent, or a slight variation of it, is truly an optimal procedure, and this is formalized and rigorously established here.
In terms of the generality of the problems we handle, we required that the constraint set $\mathcal{W}$ be convex, but this seems unavoidable if we wish to obtain efficient algorithms (at least in general). Furthermore, we know that in terms of worst-case behavior, both in the stochastic and in the online setting, for convex cost functions the value is unchanged when a non-convex constraint set is replaced by its convex hull [18]. The requirement that the data domain $\mathcal{X}$ be convex is perhaps more restrictive, since even with a non-convex data domain the objective is still convex. Such non-convex $\mathcal{X}$ are certainly relevant in many applications, e.g. when the data is sparse, or when $x \in \mathcal{X}$ is an indicator, as in matrix completion problems and total variation regularization. In the total variation regularization problem, $\mathcal{W}$ is the set of all functions on the interval $[0, 1]$ with total variation bounded by 1, which is in fact a Banach space. However, the set $\mathcal{X}$ we consider here is not the entire dual ball, and in fact is neither convex nor symmetric: it consists only of evaluations of the functions in $\mathcal{W}$ at points of the interval $[0, 1]$, and one can consider a supervised learning problem where the goal is to use the set of all functions with bounded variation to predict targets taking values in $[-1, 1]$. Although the total-variation problem is not learnable, the matrix completion problem certainly is of much interest. In the matrix completion case, taking the convex hull of $\mathcal{X}$ does not seem to change the value, but we are aware of neither a guarantee that the value of the game is unchanged when a non-convex $\mathcal{X}$ is replaced by its convex hull, nor of an example where the value does change; it would certainly be useful to understand this issue. We view the requirement that $\mathcal{W}$ and $\mathcal{X}$ be symmetric around the origin as less restrictive, and mostly a matter of convenience.

We also focused on a specific form of the cost class $\mathcal{F}$, which, beyond the almost unavoidable assumption of convexity, is taken to be constrained through the cost sub-gradients. This is general enough for considering supervised learning with an arbitrary convex loss in a worst-case setting, as the sub-gradients in this case exactly correspond to the data points, and so restricting $\mathcal{F}$ through its sub-gradients corresponds to restricting the data domain. Following Proposition 1, any optimality result for $\mathcal{F}_{\mathrm{Lip}}$ also applies to $\mathcal{F}_{\sup}$, and this statement can also be easily extended to any other reasonable loss function, including the hinge loss, smooth loss functions such as the logistic loss, and even strongly convex loss functions such as the squared loss (in this context, note that a strongly convex scalar loss for supervised learning does not translate to a strongly convex optimization problem in the worst case). Going beyond a worst-case formulation of supervised learning, one might consider online repeated games with other constraints on $\mathcal{F}$, such as strong convexity, or even constraints on $\{f_t\}$ as a sequence, such as requiring low average error or conditions on the covariance of the data; these are beyond the scope of the current paper.

Even in the statistical learning setting, online methods, along with online-to-batch conversion, are often preferred due to their efficiency, especially in high-dimensional problems. In fact, for $\ell_p$ spaces in the dual case, using lower bounds on the sample complexity for statistical learning of these problems, one can show that for large-dimensional problems mirror descent is an optimal procedure even for the statistical learning problem.
We would also like to consider the question of whether Mirror Descent is optimal for the stochastic convex optimization (convex statistical learning) setting [9, 19, 23] in general. Establishing such universality would have significant implications, as it would indicate that any learnable (convex) problem is learnable using a one-pass first-order online method (i.e. the Stochastic Approximation approach).

References

[1] J. Abernethy, P. L. Bartlett, A. Rakhlin, and A. Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, 2008.
[2] Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, and Martin J. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization.
[3] Keith Ball, Eric A. Carlen, and Elliott H. Lieb. Sharp uniform convexity and smoothness inequalities for trace norms. Invent. Math., 115:463-482, 1994.
[4] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175, 2003.
[5] H. D. Block. The perceptron: A model for brain functioning. Reviews of Modern Physics, 34:123-135, 1962. Reprinted in "Neurocomputing" by Anderson and Rosenfeld.
[6] V. Chandrasekaran, S. Sanghavi, P. Parrilo, and A. Willsky. Sparse and low-rank matrix decompositions. In IFAC Symposium on System Identification, 2009.
[7] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[8] Ali Jalali, Pradeep Ravikumar, Sujay Sanghavi, and Chao Ruan. A dirty model for multi-task learning. In NIPS, December 2010.
[9] A. Juditsky, G. Lan, A. Nemirovski, and A. Shapiro. Stochastic approximation approach to stochastic programming. SIAM J. Optim., 19(4):1574-1609, 2009.
[10] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization, 2010.
[11] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1-64, January 1997.
[12] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. In Advances in Neural Information Processing Systems 21, pages 905-912, 2009.
[13] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
[14] A. Nemirovski and D. Yudin. On Cesaro's convergence of the gradient descent method for finding saddle points of convex-concave functions. Doklady Akademii Nauk SSSR, 239(4), 1978.
[15] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Nauka Publishers, Moscow, 1978.
[16] G. Pisier. Martingales with values in uniformly convex spaces. Israel Journal of Mathematics, 20(3-4):326-350, 1975.
[17] G. Pisier. Martingales in Banach spaces (in connection with type and cotype). Winter School/IHP Graduate Course, 2011.
[18] A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Random averages, combinatorial parameters, and learnability. NIPS, 2010.
[19] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In COLT, 2009.
[20] S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. Advances in Neural Information Processing Systems, 19:1265, 2007.
[21] Nathan Srebro, Jason D. M. Rennie, and Tommi S. Jaakkola. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems 17, pages 1329-1336. MIT Press, 2005.
[22] Nathan Srebro and Adi Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545-560. Springer-Verlag, 2005.
[23] Nathan Srebro and Ambuj Tewari. Stochastic optimization for machine learning. ICML 2010 tutorial, 2010.
[24] K. Sridharan and A. Tewari. Convex games in Banach spaces. In Proceedings of the 23rd Annual Conference on Learning Theory, 2010.
[25] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, Hebrew University of Jerusalem, 2007.
[26] Manfred K. Warmuth and Dima Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension, 2007.
[27] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
[28] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301-320, 2005.
Kernel Bayes' Rule

Kenji Fukumizu, The Institute of Statistical Mathematics, Tokyo ([email protected])
Le Song, College of Computing, Georgia Institute of Technology ([email protected])
Arthur Gretton, Gatsby Unit, UCL; MPI for Intelligent Systems ([email protected])

Abstract

A nonparametric kernel-based method for realizing Bayes' rule is proposed, based on kernel representations of probabilities in reproducing kernel Hilbert spaces. The prior and conditional probabilities are expressed as empirical kernel mean and covariance operators, respectively, and the kernel mean of the posterior distribution is computed in the form of a weighted sample. The kernel Bayes' rule can be applied to a wide variety of Bayesian inference problems: we demonstrate Bayesian computation without likelihood, and filtering with a nonparametric state-space model. A consistency rate for the posterior estimate is established.

1 Introduction

Kernel methods have long provided powerful tools for generalizing linear statistical approaches to nonlinear settings, through an embedding of the sample to a high dimensional feature space, namely a reproducing kernel Hilbert space (RKHS) [16]. The inner product between feature mappings need never be computed explicitly, but is given by a positive definite kernel function, which permits efficient computation without the need to deal explicitly with the feature representation. More recently, the mean of the RKHS feature map has been used to represent probability distributions, rather than mapping single points: we will refer to these representations of probability distributions as kernel means. With an appropriate choice of kernel, the feature mapping becomes rich enough that its expectation uniquely identifies the distribution: the associated RKHSs are termed characteristic [6, 7, 22]. Kernel means in characteristic RKHSs have been applied successfully in a number of statistical tasks, including the two-sample problem [9], independence tests [10], and conditional independence tests [8]. An advantage of the kernel approach is that these tests apply immediately to any domain on which kernels may be defined.

We propose a general nonparametric framework for Bayesian inference, expressed entirely in terms of kernel means. The goal of Bayesian inference is to find the posterior of $x$ given an observation $y$:
$$q(x|y) = \frac{p(y|x)\pi(x)}{q_Y(y)}, \qquad q_Y(y) = \int p(y|x)\pi(x)\,d\nu_{\mathcal{X}}(x), \qquad (1)$$
where $\pi(x)$ and $p(y|x)$ are respectively the density function of the prior and the conditional density (or likelihood) of $y$ given $x$. In our framework, the posterior, prior, and likelihood are all expressed as kernel means: the update from prior to posterior is called the Kernel Bayes' Rule (KBR). To implement KBR, the kernel means are learned nonparametrically from training data: the prior and likelihood means are expressed in terms of samples from the prior and joint probabilities, and the posterior as a kernel mean of a weighted sample. The resulting updates are straightforward matrix operations. This leads to the main advantage of the KBR approach: in the absence of a specific parametric model or an analytic form for the prior and likelihood densities, we can still perform Bayesian inference by making sufficient observations on the system. Alternatively, we may have a parametric model, but it might be complex and require time-consuming sampling techniques for inference.
By contrast, KBR is simple to implement, and is amenable to well-established approximation techniques which yield an overall computational cost linear in the training sample size [5]. We further establish the rate of consistency of the estimated posterior kernel mean to the true posterior, as a function of training sample size.

The proposed kernel realization of Bayes' rule is an extension of the approach used in [20] for state-space models. This earlier work applies a heuristic, however, in which the kernel mean of the previous hidden state and the observation are assumed to combine additively to update the hidden state estimate. More recently, a method for belief propagation using kernel means was proposed [18, 19]: unlike the present work, this directly estimates conditional densities, assuming the prior to be uniform. An alternative to kernel means would be to use nonparametric density estimates. Classical approaches include finite distribution estimates on a partitioned domain, or kernel density estimation, both of which perform poorly on high dimensional data. Alternatively, direct estimates of the density ratio may be used in estimating the conditional p.d.f. [24]. By contrast with density estimation approaches, KBR makes it easy to compute posterior expectations (as an RKHS inner product) and to perform conditioning and marginalization, without requiring numerical integration.

2 Kernel expression of Bayes' rule

2.1 Positive definite kernel and probabilities

We begin with a review of some basic concepts and tools concerning statistics on RKHSs [1, 3, 6, 7]. Given a set $\Omega$, an ($\mathbb{R}$-valued) positive definite kernel $k$ on $\Omega$ is a symmetric kernel $k : \Omega\times\Omega \to \mathbb{R}$ such that $\sum_{i,j=1}^n c_i c_j k(x_i, x_j) \ge 0$ for arbitrary points $x_1,\ldots,x_n$ in $\Omega$ and real numbers $c_1,\ldots,c_n$. It is known [1] that a positive definite kernel on $\Omega$ uniquely defines a Hilbert space $\mathcal{H}$ (RKHS) consisting of functions on $\Omega$, in which $\langle f, k(\cdot, x)\rangle = f(x)$ for any $x \in \Omega$ and $f \in \mathcal{H}$ (reproducing property).

Let $(\mathcal{X}, \mathcal{B}_{\mathcal{X}}, \nu_{\mathcal{X}})$ and $(\mathcal{Y}, \mathcal{B}_{\mathcal{Y}}, \nu_{\mathcal{Y}})$ be measure spaces, and $(X, Y)$ a random variable on $\mathcal{X}\times\mathcal{Y}$ with probability $P$. Throughout this paper, it is assumed that positive definite kernels on the measurable spaces are measurable and bounded, where boundedness is defined as $\sup_{x\in\Omega}k(x, x) < \infty$. Let $k_{\mathcal{X}}$ be a positive definite kernel on a measurable space $(\mathcal{X}, \mathcal{B}_{\mathcal{X}})$, with RKHS $\mathcal{H}_{\mathcal{X}}$. The kernel mean $m_X$ of $X$ on $\mathcal{H}_{\mathcal{X}}$ is defined as the mean of the $\mathcal{H}_{\mathcal{X}}$-valued random variable $k_{\mathcal{X}}(\cdot, X)$, namely
$$m_X = \int k_{\mathcal{X}}(\cdot, x)\,dP_X(x). \qquad (2)$$
For notational simplicity, the dependence on $k_{\mathcal{X}}$ in $m_X$ is not shown. Since the kernel mean depends only on the distribution of $X$ (and the kernel), it may also be written $m_{P_X}$; we will use whichever of these equivalent notations is clearest in each context. From the reproducing property, we have
$$\langle f, m_X\rangle = E[f(X)] \qquad (\forall f \in \mathcal{H}_{\mathcal{X}}). \qquad (3)$$
Let $k_{\mathcal{X}}$ and $k_{\mathcal{Y}}$ be positive definite kernels on $\mathcal{X}$ and $\mathcal{Y}$ with respective RKHSs $\mathcal{H}_{\mathcal{X}}$ and $\mathcal{H}_{\mathcal{Y}}$. The (uncentered) covariance operator $C_{YX} : \mathcal{H}_{\mathcal{X}} \to \mathcal{H}_{\mathcal{Y}}$ is defined by the relation
$$\langle g, C_{YX}f\rangle_{\mathcal{H}_{\mathcal{Y}}} = E[f(X)g(Y)]\ \big(= \langle g\otimes f,\ m_{(YX)}\rangle_{\mathcal{H}_{\mathcal{Y}}\otimes\mathcal{H}_{\mathcal{X}}}\big) \qquad (\forall f \in \mathcal{H}_{\mathcal{X}},\ g \in \mathcal{H}_{\mathcal{Y}}).$$
It should be noted that $C_{YX}$ is identified with the mean $m_{(YX)}$ in the tensor product space $\mathcal{H}_{\mathcal{Y}}\otimes\mathcal{H}_{\mathcal{X}}$, which is given by the product kernel $k_{\mathcal{Y}}k_{\mathcal{X}}$ [1]. The identification is standard: the tensor product is isomorphic to the space of linear maps by the correspondence $\phi\otimes\psi \leftrightarrow [h \mapsto \phi\,\langle\psi, h\rangle]$. We also define $C_{XX} : \mathcal{H}_{\mathcal{X}} \to \mathcal{H}_{\mathcal{X}}$ by $\langle f_2, C_{XX}f_1\rangle = E[f_2(X)f_1(X)]$ for any $f_1, f_2 \in \mathcal{H}_{\mathcal{X}}$.
We next introduce the notion of a characteristic RKHS, which is essential when using kernels to manipulate probability measures. A bounded measurable positive definite kernel $k$ is called characteristic if $E_{X\sim P}[k(\cdot, X)] = E_{X'\sim Q}[k(\cdot, X')]$ implies $P = Q$: probabilities are uniquely determined by their kernel means [7, 22]. With this property, problems of statistical inference can be cast in terms of inference on the kernel means. A widely used characteristic kernel on $\mathbb{R}^m$ is the Gaussian kernel, $\exp(-\|x - y\|^2/(2\sigma^2))$.

Empirical estimates of the kernel mean and covariance operator are straightforward to obtain. Given an i.i.d. sample $(X_1, Y_1),\ldots,(X_n, Y_n)$ with law $P$, the empirical kernel mean and covariance operator are respectively
$$\widehat{m}_X^{(n)} = \frac{1}{n}\sum_{i=1}^n k_{\mathcal{X}}(\cdot, X_i), \qquad \widehat{C}_{YX}^{(n)} = \frac{1}{n}\sum_{i=1}^n k_{\mathcal{Y}}(\cdot, Y_i)\otimes k_{\mathcal{X}}(\cdot, X_i),$$
where $\widehat{C}_{YX}^{(n)}$ is written in the tensor product form. These are known to be $\sqrt{n}$-consistent in norm.
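As a small illustration of these estimators, the sketch below computes an empirical kernel mean with a Gaussian kernel; by the reproducing property (3), the RKHS inner product $\langle k(\cdot, x_0), \widehat{m}_X\rangle$ is just the sample average of $k(x_0, X_i)$. The data, bandwidth, and helper names are illustrative.

```python
import numpy as np

# Empirical kernel mean m_hat = (1/n) sum_i k(., X_i) with a Gaussian kernel.
def gauss_gram(A, B, sigma):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))        # i.i.d. sample from P_X
sigma = 1.0

# Evaluating m_hat at query points t gives m_hat(t) = (1/n) sum_i k(t, X_i);
# equivalently, <k(., x0), m_hat> estimates E[k(x0, X)] by Eq. (3).
T = np.array([[0.0, 0.0], [1.0, 1.0]])
print(gauss_gram(T, X, sigma).mean(axis=1))
```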
HX , and m(W W ) ? HY ? HY with CW W , we use Eq. (6) to obtain the operators in Eq. (8), and thus the kernel mean expression of Bayes? rule. The above argument can be rigorously implemented for empirical estimates of the kernel means and covariances. Let (X1 , Y1 ), . . ., (Xn , Yn ) be an i.i.d. sample with law P , and assume a consistent estimator for m? given by ? X (?) ?j kX (?, Uj ), m b? = j=1 where U1 , . . . , U? is the sample that defines the estimator (which need not be generated by ?), and ?j are the weights. Negative values are allowed for ?j . The empirical estimators for CZW and CW W are identified with m b (ZW ) and m b (W W ) , respectively. From Eq. (6), they are given by ?1 (?) ?1 (?) (n) (n) b b (n) b b (n) m bQ = m b (ZW ) = C m b? , m b (W W ) = C m b? , (Y X)X CXX + ?n I (Y Y )X CXX + ?n I where I is the identity and ?n is the coefficient of Tikhonov regularization for operator inversion. The next two propositions express these estimators using Gram matrices. The proofs are simple matrix manipulation and shown in Supplementary material. In the following, GX and GY denote the Gram matrices (kX (Xi , Xj )) and (kY (Yi , Yj )), respectively. 3 ? Input: (i) {(Xi , Yi )}n i=1 : sample to express P . (ii) {(Uj , ?j )}j=1 : weighted sample to express the kernel mean of the prior m b ? . (iii) ?n , ?n : regularization constants. Computation: b? = 1. Compute Gram matrices GX = (kX (Xi , Xj )), GY = (kY (Yi , Yj )), and a vector m P n ( ?j=1 ?j kX (Xi , Uj ))n ? R . i=1 b ?. 2. Compute ? b = n(GX + n?n In )?1 m 3. Compute RX|Y = ?GY ((?GY )2 + ?n In )?1 ?, where ? = Diag(b ?). Output: n ? n matrix RX|Y . Given conditioning value y, the kernel mean of the posterior q(x|y) is estimated by the weighted n sample {(Xi , wi )}n i=1 with w = RX|Y kY (y), where kY (y) = (kY (Yi , y))i=1 . Figure 1: Kernel Bayes? Rule Algorithm bZW and C bW W are given by Proposition 3. The Gram matrix expressions of C P bZW = n ? bW W = Pn ? C bi kX (?, Xi ) ? kY (?, Yi ) and C bi kY (?, Yi ) ? kY (?, Yi ), i=1 i=1 respectively, where the common coefficient ? b ? Rn is b ?, ? b = n(GX + n?n In )?1 m b ?,i = m m b ? (Xi ) = P? j=1 ?j kX (Xi , Uj ). (9) Prop. 3 implies that the probabilities Q and QY are estimated by the weighted samples {((Xi , Yi ), ? bi )}ni=1 and {(Yi , ? bi )}ni=1 , respectively, with common weights. Since the weights ? bi may be negative, we use another type of Tikhonov regularization in computing Eq. (8), ?1 bW W kY (?, y). b2 bZW C C (10) m b QX |y := C W W + ?n I Proposition 4. For any y ? Y, the Gram matrix expression of m b QX |y is given by m b QX |y = kTX RX|Y kY (y), RX|Y := ?GY ((?GY )2 + ?n In )?1 ?, where ? = Diag(b ?) is a diagonal matrix with elements ? bi given by Eq. (9), kX (kX (?, X1 ), . . . , kX (?, Xn ))T ? HX n , and kY = (kY (?, Y1 ), . . . , kY (?, Yn ))T ? HY n . (11) = We call Eqs.(10) or (11) the kernel Bayes? rule (KBR): i.e., the expression of Bayes? rule entirely in terms of kernel means. The algorithm to implement KBR is summarized in Fig. 1. If our aim is to estimate E[f (Z)|W = y], that is, the expectation of a function f ? HX with respect to the posterior, then based on Eq. (3) an estimator is given by T RX|Y kY (y), hf, m b QX |y iHX = fX (12) where fX = (f (X1 ), . . . , f (Xn ))T ? Rn . In using a weighted sample to represent the posterior, KBR has some similarity to Monte Carlo methods such as importance sampling and sequential Monte Carlo ([4]). 
2.3 Consistency of the KBR estimator

We now demonstrate the consistency of the KBR estimator in Eq. (12). We show only the best rate that can be derived under the assumptions, and leave more detailed discussion and proofs to the Supplementary material. We assume that the sample size $\ell = \ell_n$ for the prior goes to infinity as the sample size $n$ for the likelihood goes to infinity, and that $\widehat{m}_\pi^{(\ell_n)}$ is $n^\alpha$-consistent. In the theoretical results, we assume all Hilbert spaces are separable. In the following, $\mathcal{R}(A)$ denotes the range of $A$.

Theorem 5. Let $f \in \mathcal{H}_{\mathcal{X}}$, let $(Z, W)$ be a random vector on $\mathcal{X}\times\mathcal{Y}$ whose law is $Q$ with p.d.f. $p(y|x)\pi(x)$, and let $\widehat{m}_\pi^{(\ell_n)}$ be an estimator of $m_\pi$ such that $\|\widehat{m}_\pi^{(\ell_n)} - m_\pi\|_{\mathcal{H}_{\mathcal{X}}} = O_p(n^{-\alpha})$ as $n \to \infty$ for some $0 < \alpha \le 1/2$. Assume that $\pi/p_X \in \mathcal{R}(C_{XX}^{1/2})$, where $p_X$ is the p.d.f. of $P_X$, and that $E[f(Z)|W = \cdot\,] \in \mathcal{R}(C_{WW})$. For $\varepsilon_n = n^{-\frac{2}{3}\alpha}$ and $\delta_n = n^{-\frac{8}{27}\alpha}$, we have for any $y \in \mathcal{Y}$
$$\mathbf{f}_X^{\mathsf{T}}R_{X|Y}\mathbf{k}_Y(y) - E[f(Z)|W = y] = O_p\big(n^{-\frac{8}{27}\alpha}\big) \qquad (n \to \infty),$$
where $\mathbf{f}_X^{\mathsf{T}}R_{X|Y}\mathbf{k}_Y(y)$ is the estimator of $E[f(Z)|W = y]$ given by Eq. (12).
The KBR approach can give alternative ways of Bayesian computation for these problems. We will show some experimental comparisons between KBR approach and ABC in Sec. 4.2. ? If a standard sampling method such as MCMC or sequential MC is applicable, the computation given y may be time consuming, and real-time applications may not be feasible. Using KBR, the expectation of the posterior given y is obtained simply by the inner product as in Eq. (12), once T fX RX|Y has been computed. The KBR approach nonetheless has a weakness common to other nonparametric methods: if a new data point appears far from the training sample, the reliability of the output will be low. Thus, we need sufficient diversity in training sample to reliably estimate the posterior. In KBR computation, Gram matrix inversion is necessary, which would cost O(n3 ) for sample size n if attempted directly. Substantial cost reductions can be achieved by low rank matrix approximations such as the incomplete Cholesky decomposition [5], which approximates a Gram matrix in the form of ??T with n ? r matrix ?. Computing ? costs O(nr2 ), and with the Woodbury identity, the KBR can be approximately computed with cost O(nr2 ). Kernel choice or model selection is key to the effectiveness of KBR, as in other kernel methods. KBR involves three model parameters: the kernel (or its parameters), and the regularization parameters ?n and ?n . The strategy for parameter selection depends on how the posterior is to be used in the inference problem. If it is applied in a supervised setting, we can use standard cross-validation (CV). A more general approach requires constructing a related supervised problem. Suppose the prior is given by the marginal PX of P . The posterior density q(x|y) averaged with PY is then equal to the marginal density pX . We are then able to compare the discrepancy of the kernel mean of PX and b X |y=Y over Yi . This leads to application of K-fold CV approach. the average of the estimators Q i [?a] Namely, for a partition of {1, . . . , n} into K disjoint subsets {Ta }K b QX |y be the kernel mean a=1 , let m [?a] b X with data {Xi }i?T of posterior estimated with data {(Xi , Yi )}i?T / a . We / a , and the prior mean m PK P P 2 [a] [a] [?a] 1 1 use a=1 |Ta | j?Ta m b X = |Ta | j?Ta kX (?, Xj ). b X H for CV, where m b QX |y=Y ? m j X 5 Application to nonparametric state-space model. Consider the state-space model, QT QT ?1 p(X, Y ) = ?(X1 ) t=1 p(Yt |Xt ) t=1 q(Xt+1 |Xt ), where Yt is observable and Xt is a hidden state. We do not assume the conditional probabilities p(Yt |Xt ) and q(Xt+1 |Xt ) to be known explicitly, nor do we estimate them with simple parametric models. Rather, we assume a sample (X1 , Y1 ), . . . , (XT +1 , YT +1 ) is given for both the observable and hidden variables in the training phase. This problem has already been considered in [20], but we give a more principled approach based on KBR. The conditional probability for the transition q(xt+1 |xt ) and observation process p(y|x) are represented by the covariance PT bX,X = T1 i=1 kX (?, Xi ) ? kX (?, Xi+1 ), operators as computed with the training sample; C +1 bXY = 1 PT kX (?, Xi ) ? kY (?, Yi ), and C bY Y and C bXX are defined similarly. Note that though C i=1 T the data are not i.i.d., consistency is achieved by the mixing property of the Markov model. For simplicity, we focus on the filtering problem, but smoothing and prediction can be done similarly. In filtering, we wish to estimate the current hidden state xt , given observations y?1 , . . . 
Bayesian computation without likelihood. When the likelihood and/or prior is not available in analytic form but sampling is possible, the ABC approach [25, 12, 17] is popular for Bayesian computation. The ABC rejection method generates a sample from $q(X|Y = y)$ as follows: (1) generate $X_t$ from the prior $\Pi$; (2) generate $Y_t$ from $p(y|X_t)$; (3) if $D(y, Y_t) < \tau$, accept $X_t$, otherwise reject; (4) go to (1). In Step (3), $D$ is a distance on $\mathcal{Y}$, and $\tau$ is the tolerance for acceptance. In exactly the same situation, the KBR approach gives the following method: (i) generate $X_1,\ldots,X_n$ from the prior $\Pi$; (ii) generate a sample $Y_t$ from $p(y|X_t)$ ($t = 1,\ldots,n$); (iii) compute the Gram matrices $G_X$ and $G_Y$ with $(X_1, Y_1),\ldots,(X_n, Y_n)$, and $R_{X|Y}\mathbf{k}_Y(y)$.

The distribution of a sample given by ABC approaches the true posterior as $\tau \to 0$, while the empirical posterior estimate of KBR converges to the true one as $n \to \infty$. The computational efficiency of ABC, however, can be arbitrarily low for a small $\tau$, since $X_t$ is then rarely accepted in Step (3). Finally, ABC generates a sample, which allows any statistic of the posterior to be approximated. In the case of KBR, certain statistics of the posterior (such as confidence intervals) can be harder to obtain, since consistency is guaranteed only for expectations of RKHS functions. In Sec. 4.2, we provide experimental comparisons addressing the trade-off between computational time and accuracy for ABC and KBR.

4 Experiments

4.1 Nonparametric inference of the posterior

First we compare KBR and standard kernel density estimation (KDE). Let $\{(X_i, Y_i)\}_{i=1}^n$ be an i.i.d. sample from $P$ on $\mathbb{R}^d\times\mathbb{R}^r$. With p.d.f.s $K(x)$ on $\mathbb{R}^d$ and $H(y)$ on $\mathbb{R}^r$, the conditional p.d.f. $p(y|x)$ is estimated by $\widehat{p}(y|x) = \sum_{j=1}^n K_{h_X}(x - X_j)H_{h_Y}(y - Y_j)\,\big/\,\sum_{j=1}^n K_{h_X}(x - X_j)$, where $K_{h_X}(x) = h_X^{-d}K(x/h_X)$ and $H_{h_Y}(y) = h_Y^{-r}H(y/h_Y)$. Given an i.i.d. sample $\{U_j\}_{j=1}^{\ell}$ from the prior $\Pi$, the posterior $q(x|y)$ is represented by the weighted sample $(U_i, w_i)$ with importance weights (IW) $w_i = \widehat{p}(y|U_i)\big/\sum_{j=1}^{\ell}\widehat{p}(y|U_j)$. We compare the estimates of $\int x\,q(x|y)\,dx$ obtained by KBR and KDE + IW, using Gaussian kernels for both methods.
Note that with the Gaussian kernel, the function f(x) = x does not belong to H_X, so the consistency of the KBR method is not rigorously guaranteed for this statistic (c.f. Theorem 5). Gaussian kernels, however, are known to approximate any continuous function on a compact subset with arbitrary accuracy [23]. We can thus expect the posterior mean to be estimated effectively.

In the experiments, the dimensionality was given by r = d, ranging from 2 to 64. The distribution P of (X, Y) was N((0, 1_d)^T, V), with V randomly generated for each run. The prior π was P_X = N(0, V_XX/2), where V_XX is the X-component of V. The sample sizes were n = ℓ = 200. The bandwidth parameters h_X, h_Y in KDE were set h_X = h_Y and chosen in two ways: by least-squares cross-validation [15], and by the best mean performance over the set {2i | i = 1, …, 10}. For KBR, we used two methods to choose the deviation parameter of the Gaussian kernel: the median of the pairwise distances in the data [10], and the 10-fold CV described in Sec. 3. Fig. 2 shows the MSE of the estimates over 1000 random points y ~ N(0, V_YY). While the accuracy of both methods decreases with larger dimensionality, KBR significantly outperforms KDE+IW.

[Figure 2: KBR vs. KDE+IW for E[X|Y=y]. Average MSE over 50 runs as a function of the dimension (2 to 64), for KBR (CV), KBR (median distance), KDE+IW (least-squares CV), and KDE+IW (best bandwidth).]

4.2 Bayesian computation without likelihood

We compare KBR and ABC in terms of estimation accuracy and computational time. To compute the estimation accuracy rigorously, Gaussian distributions are used for the true prior and likelihood. The samples are taken from the same model as in Sec. 4.1, and ∫ x q(x|y) dx is evaluated at 10 different points of y. We performed 10 runs with different covariances.

For ABC, we used only the rejection method; while there are more advanced sampling schemes [12, 17], their implementation is not straightforward. Various parameters for the acceptance are used, and the accuracy and computational time are shown in Fig. 3, together with the total sizes of the generated samples. For the KBR method, the sample sizes n of the likelihood and prior are varied. The regularization parameters are given by ε_n = 0.01/n and δ_n = 2ε_n. In KBR, Gaussian kernels are used and the incomplete Cholesky decomposition is employed. The results indicate that KBR achieves more accurate results than ABC in the same computational time.

[Figure 3: Estimation accuracy (average mean square errors) vs. computational time (CPU sec) for KBR and ABC in 6 dimensions; point labels give the total sizes of the generated samples.]

4.3 Filtering problems

The KBR filter proposed in Sec. 3 is applied. Alternative strategies for state-space models with complex dynamics involve the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) [11]. There are some works on nonparametric state-space models or HMMs which use nonparametric estimates of conditional p.d.f.s, such as KDE or partitions [27, 26] and, more recently, kernel methods [20, 21]. In the following, the KBR method is compared with linear and nonlinear Kalman filters. KBR has the regularization parameters ε_T, δ_T, and kernel parameters for k_X and k_Y (e.g., the deviation parameter of the Gaussian kernel). A validation approach is applied for selecting them, by dividing the training sample into two.
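For reference, one step of the sequential KBR filter of Eqs. (13) and (14) applied in these experiments can be sketched as follows. The Gram matrices are assumed precomputed from the training pairs, the low-rank speed-up is omitted, and the function and variable names are our own.

import numpy as np

def kbr_filter_step(alpha, GX, GXX1, GY, kY_new, eps_T, delta_T):
    # alpha  (T,):   coefficients of the kernel mean of p(x_t | y_1..t)
    # GX     (T,T):  k_X(X_i, X_j);  GXX1 (T,T): "transfer" matrix k_X(X_i, X_{j+1})
    # GY     (T,T):  k_Y(Y_i, Y_j);  kY_new (T,): k_Y(Y_i, y_{t+1}) for the new observation
    T = GX.shape[0]
    A = GX + T * eps_T * np.eye(T)
    # Eq. (13): beta = A^{-1} G_{X,X+1} A^{-1} G_X alpha
    beta = np.linalg.solve(A, GXX1 @ np.linalg.solve(A, GX @ alpha))
    Lam = np.diag(beta)                        # Lambda^{(t+1)}
    M = Lam @ GY
    # Eq. (14): alpha' = M ((M M) + delta_T I)^{-1} Lambda k_Y(y_new),
    # where M M implements (Lambda G_Y)^2
    alpha_new = M @ np.linalg.solve(M @ M + delta_T * np.eye(T), Lam @ kY_new)
    return alpha_new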
To reduce the search space, we set δ_T = 2ε_T and use Gaussian kernel deviations cσ_X and cσ_Y, where σ_X and σ_Y are the medians of the pairwise distances among the training samples [10], leaving only the two parameters c and ε_T to be tuned.

[Figure 4: Comparison of the KBR filter with EKF and UKF: mean square errors vs. training sample size (200 to 1000) on datasets (a) and (b). Average MSEs and SEs over 30 runs.]

Table 1: Average MSEs and SEs of camera angle estimates (10 runs).

                     σ² = 10⁻⁴          σ² = 10⁻³
  KBR (Gauss)        0.210 ± 0.015      0.222 ± 0.009
  KBR (Tr)           0.146 ± 0.003      0.210 ± 0.008
  Kalman (9 dim.)    1.980 ± 0.083      1.935 ± 0.064
  Kalman (Quat.)     0.557 ± 0.023      0.541 ± 0.022

We first use two synthetic data sets with KBR, EKF, and UKF, assuming that EKF and UKF know the exact dynamics. The dynamics has a hidden state X_t = (u_t, v_t)^T ∈ R², and is given by
$$(u_{t+1}, v_{t+1}) = \big(1 + b\sin(M\theta_{t+1})\big)(\cos\theta_{t+1}, \sin\theta_{t+1}) + Z_t, \qquad \theta_{t+1} = \theta_t + \eta \ (\mathrm{mod}\ 2\pi),$$
where Z_t ~ N(0, σ_h² I₂) is independent noise. Note that the dynamics of (u_t, v_t) is nonlinear even for b = 0. The observation Y_t follows Y_t = X_t + W_t, where W_t ~ N(0, σ_o² I). The two dynamics are defined as follows: (a) (noisy rotation) η = 0.3, b = 0, σ_h = σ_o = 0.2; (b) (noisy oscillatory rotation) η = 0.4, b = 0.4, M = 8, σ_h = σ_o = 0.2. The results are shown in Fig. 4. In all cases, EKF and UKF show unrecognizably small differences. The dynamics in (a) is weakly nonlinear, and KBR shows slightly worse MSE than EKF and UKF. For dataset (b), with strong nonlinearity, KBR outperforms the nonlinear Kalman filters, which know the true dynamics, for T ≥ 200.

Next, we applied the KBR filter to the camera rotation problem used in [20]¹, where the angle of a camera is the hidden variable and the movie frames of a room taken by the camera are the observations. We are given 3600 frames of 20 × 20 RGB pixels (Y_t ∈ [0,1]^1200); the first 1800 frames are used for training, and the second half for testing. For details on the data, see [20]. We make the data noisy by adding Gaussian noise N(0, σ²) to Y_t. Our experiments cover two settings. In the first, we do not assume that the hidden state X_t lies in SO(3), but treat it as a general 3 × 3 matrix; in this case, we use the Kalman filter, estimating the relations under a linear assumption, and the KBR filter with Gaussian kernels for X_t and Y_t. In the second setting, we exploit the fact that X_t ∈ SO(3): for the Kalman filter, X_t is represented by a quaternion, and for the KBR filter the kernel k(A, B) = Tr[AB^T] is used for X_t. Table 1 shows the Frobenius norms between the estimated matrices and the true ones. The KBR filter significantly outperforms the Kalman filter, since KBR has the advantage of extracting the complex nonlinear dependence of the observation on the hidden state.

5 Conclusion

We have proposed a general, novel framework for implementing Bayesian inference, in which the prior, likelihood, and posterior are expressed as kernel means in reproducing kernel Hilbert spaces. The model is expressed in terms of a set of training samples, and inference consists of a small number of straightforward matrix operations. Our approach is well suited to cases where simple parametric models or analytic forms of densities are not available, but samples are easily obtained.
We have addressed two applications: Bayesian inference without likelihood, and sequential filtering with nonparametric state-space model. Future studies could include more comparisons with sampling approaches like advanced Monte Carlo, and applications to various inference problems such as nonparametric Bayesian models and Bayesian reinforcement learning. Acknowledgements. KF was supported in part by JSPS KAKENHI (B) 22300098. 1 Due to some difference in noise model, the results here are not directly comparable with those of [20]. 8 References [1] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68(3):337?404, 1950. [2] C.R. Baker. Joint measures and cross-covariance operators. Trans. Amer. Math. Soc., 186:273?289, 1973. [3] A. Berlinet and C. Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Kluwer Academic Publisher, 2004. [4] A. Doucet, N. De Freitas, and N.J. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001. [5] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. JMLR, 2:243? 264, 2001. [6] K. Fukumizu, F.R. Bach, and M.I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. JMLR, 5:73?99, 2004. [7] K. Fukumizu, F.R. Bach, and M.I. Jordan. Kernel dimension reduction in regression. Anna. Stat., 37(4):1871?1905, 2009. [8] K. Fukumizu, A. Gretton, X. Sun, and B. Sch?olkopf. Kernel measures of conditional dependence. In Advances in NIPS 20, pages 489?496. MIT Press, 2008. [9] A. Gretton, K.M. Borgwardt, M. Rasch, B. Sch?olkopf, and A. Smola. A kernel method for the twosample-problem. In Advances in NIPS 19, pages 513?520. MIT Press, 2007. [10] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Sch?olkopf, and A. Smola. A kernel statistical test of independence. In Advances in NIPS 20, pages 585?592. MIT Press, 2008. [11] S.J. Julier and J.K. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Proc. AeroSense: The 11th Intern. Symp. Aerospace/Defence Sensing, Simulation and Controls, 1997. [12] P. Marjoram, Jo. Molitor, V. Plagnol, and S. Tavare. Markov chain monte carlo without likelihoods. PNAS, 100(26):15324?15328, 2003. [13] S. Mika, B. Sch?olkopf, A. Smola, K.-R. M?uller, M. Scholz, and G. R?atsch. Kernel pca and de-noising in feature spaces. In Advances in NIPS 11, pages 536?542. MIT Press, 1999. [14] P. M?uller and F.A. Quintana. Nonparametric bayesian data analysis. Statistical Science, 19(1):95?110, 2004. [15] M. Rudemo. Empirical choice of histograms and kernel density estimators. Scandinavian J. Statistics, 9(2):pp. 65?78, 1982. [16] B. Sch?olkopf and A.J. Smola. Learning with Kernels. MIT Press, 2002. [17] S. A. Sisson, Y. Fan, and M. M. Tanaka. Sequential monte carlo without likelihoods. PNAS, 104(6):1760? 1765, 2007. [18] L. Song, A. Gretton., and C. Guestrin. Nonparametric tree graphical models via kernel embeddings. In AISTATS 2010, pages 765?772, 2010. [19] L. Song, A. Gretton, D. Bickson, Y. Low, and C. Guestrin. Kernel belief propagation. In AISTATS 2011. [20] L. Song, J. Huang, A. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. Proc ICML2009, pages 961?968. 2009. [21] L. Song and S. M. Siddiqi and G. Gordon and A. Smola. Hilbert Space Embeddings of Hidden Markov Models. Proc. ICML2010, 991?998. 2010. [22] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Sch?olkopf, and G. R.G. Lanckriet. 
Hilbert space embeddings and metrics on probability measures. JMLR, 11:1517?1561, 2010. [23] I. Steinwart. On the Influence of the Kernel on the Consistency of Support Vector Machines. JMLR, 2:67?93, 2001. [24] M. Sugiyama, I. Takeuchi, T. Suzuki, T. Kanamori, H. Hachiya, and D. Okanohara. Conditional density estimation via least-squares density ratio estimation. In AISTATS 2010, pages 781?788, 2010. [25] S. Tavar?e, D.J. Balding, R.C. Griffithis, and P. Donnelly. Inferring coalescence times from dna sequece data. Genetics, 145:505?518, 1997. [26] S. Thrun, J. Langford, and D. Fox. Monte carlo hidden markov models: Learning non-parametric models of partially observable stochastic processes. In ICML 1999, pages 415?424, 1999. [27] V. Monbet , P. Ailliot, and P.F. Marteau. l1 -convergence of smoothing densities in non-parametric state space models. Statistical Inference for Stochastic Processes, 11:311?325, 2008. 9
Selecting the State-Representation in Reinforcement Learning Odalric-Ambrym Maillard INRIA Lille - Nord Europe [email protected] R?emi Munos INRIA Lille - Nord Europe [email protected] Daniil Ryabko INRIA Lille - Nord Europe [email protected] Abstract The problem of selecting the right state-representation in a reinforcement learning problem is considered. Several models (functions mapping past observations to a finite set) of the observations are given, and it is known that for at least one of these models the resulting state dynamics are indeed Markovian. Without knowing neither which of the models is the correct one, nor what are the probabilistic characteristics of the resulting MDP, it is required to obtain as much reward as the optimal policy for the correct model (or for the best of the correct models, if there are several). We propose an algorithm that achieves that, with a regret of order T 2/3 where T is the horizon time. 1 Introduction We consider the problem of selecting the right state-representation in an average-reward reinforcement learning problem. Each state-representation is defined by a model ?j (to which corresponds a state space S?j ) and we assume that the number J of available models is finite and that (at least) one model is a weakly-communicating Markov decision process (MDP). We do not make any assumption at all about the other models. This problem is considered in the general reinforcement learning setting, where an agent interacts with an unknown environment in a single stream of repeated observations, actions and rewards. There are no ?resets,? thus all the learning has to be done online. Our goal is to construct an algorithm that performs almost as well as the algorithm that knows both which model is a MDP (knows the ?true? model) and the characteristics of this MDP (the transition probabilities and rewards). Consider some examples that help motivate the problem. The first example is high-level feature selection. Suppose that the space of histories is huge, such as the space of video streams or that of game plays. In addition to these data, we also have some high-level features extracted from it, such as ?there is a person present in the video? or ?the adversary (in a game) is aggressive.? We know that most of the features are redundant, but we also know that some combination of some of the features describes the problem well and exhibits Markovian dynamics. Given a potentially large number of feature combinations of this kind, we want to find a policy whose average reward is as good as that of the best policy for the right combination of features. Another example is bounding the order of an MDP. The process is known to be k-order Markov, where k is unknown but un upper bound K >> k is given. The goal is to perform as well as if we knew k. Yet another example is selecting the right discretization. The environment is an MDP with a continuous state space. We have several candidate quantizations of the state space, one of which gives an MDP. Again, we would like to find a policy that is as good as the optimal policy for the right discretization. This example also opens 1 the way for extensions of the proposed approach: we would like to be able to treat an infinite set of possible discretization, none of which may be perfectly Markovian. The present work can be considered the first step in this direction. It is important to note that we do not make any assumptions on the ?wrong? models (those that do not have Markovian dynamics). 
Therefore, we are not able to test which model is Markovian in the classical statistical sense, since in order to do that we would need a viable alternative hypothesis (such as, the model is not Markov but is K-order Markov). In fact, the constructed algorithm never ?knows? which model is the right one; it is ?only? able to get the same average level of reward as if it knew. Previous work. This work builds on previous work on learning average-reward MDPs. Namely, we use in our algorithm as a subroutine the algorithm UCRL2 of [6] that is designed to provide finite time bounds for undiscounted MDPs. Such a problem has been pioneered in the reinforcement learning literature by [7] and then improved in various ways by [4, 11, 12, 6, 3]; UCRL2 achieves a regret of the order DT 1/2 in any weakly-communicating MDP with diameter D, with respect to the best policy for this MDP. The diameter D of a MDP is defined in [6] as the expected minimum time required to reach any state starting from any other state. A related result is reported in [3], which improves on constants related to the characteristics of the MDP. A similar approach has been considered in [10]; the difference is that in that work the probabilistic characteristics of each model are completely known, but the models are not assumed to be Markovian, and belong to a countably infinite (rather than finite) set. The problem we address can be also viewed as a generalization of the bandit problem (see e.g. [9, 8, 1]): there are finitely many ?arms?, corresponding to the policies used in each model, and one of the arms is the best, in the sense that the corresponding model is the ?true? one. In the usual bandit setting, the rewards are assumed to be i.i.d. thus one can estimate the mean value of the arms while switching arbitrarily from one arm to the next (the quality of the estimate only depends on the number of pulls of each arm). However, in our setting, estimating the average-reward of a policy requires playing it many times consecutively. This can be seen as a bandit problem with dependent arms, with complex costs of switching between arms. Contribution. We show that despite the fact that the true Markov model of states is unknown and that nothing is assumed on the wrong representations, it is still possible to derive a finite-time analysis of the regret for this problem. This is stated in Theorem 1; the bound on the regret that we obtain is of order T 2/3 . The intuition is that if the ?true? model ?? is known, but its probabilistic properties are not, then we still know that there exists an optimal control policy that depends on the observed state sj ? ,t only. Therefore, the optimal rate of rewards can be obtained by a clever exploration/exploitation strategy, such as UCRL2 algorithm [6]. Since we do not know in advance which model is a MDP, we need to explore them all, for a sufficiently long time in order to estimate the rate of rewards that one can get using a good policy in that model. Outline. In Section 2 we introduce the precise notion of model and set up the notations. Then we present the proposed algorithm in Section 3; it uses UCRL2 of [6] as a subroutine and selects the models ? according to a penalized empirical criterion. In Section 4 we discuss some directions for further development. Finally, Section 5 is devoted to the proof of Theorem 1. 2 Notation and definitions We consider a space of observations O, a space of actions A, and a space of rewards R (all assumed def to be Polish). 
Moreover, we assume that A is of finite cardinality A = |A| and that 0 ? R ? [0, 1]. def The set of histories up to time t for all t ? N ? {0} will be denoted by H<t = O ? (A ? R ? O)t?1 , ? [ def and we define the set of all possible histories by H = H<t . t=1 Environments. For a Polish X , we Denote by P(X ) the set of probability distributions over X . Define an environment to be a mapping from the set of histories H to the set of functions that map any action a ? A to a probability distribution ?a ? P(R ? O) over the product space of rewards and observations. 2 We consider the problem of reinforcement learning when the learner interacts with some unknown environment e? . The interaction is sequential and goes as follows: first some h<1 = {o0 } is generated according to ?, then at time step t > 0, the learner choses an action at ? A according to the current history h<t ? H<t . Then a couple of reward and observations (rt , ot ) is drawn according to the distribution (e? (h<t ))at ? P(R ? O). Finally, h<t+1 is defined by the concatenation of h<t with (at , rt , ot ). With these notations, at each time step t > 0, ot?1 is the last observation given to the learner before choosing an action, at is the action output at this step, and rt is the immediate reward received after playing at . State representation functions (models). Let S ? N be some finite set; intuitively, this has to be considered as a set of states. A state representation function ? is a function from the set of histories H to S. For a state representation function ?, we will use the notation S? for its set of states, and st,? := ?(h<t ). In the sequel, when we talk about a Markov decision process, it will be assumed to be weakly communicating, which means that for each pair of states u1 , u2 there exists k ? N and a sequence of actions ?1 , .., ?k ? A such that P (sk+1,? = u2 |s1,? = u1 , a1 = ?1 ...ak = ?k ) > 0. Having that in mind, we introduce the following definition. Definition 1 We say that an environment e with a state representation function ? is Markov, or, for short, that ? is a Markov model (of e), if the process (st,? , at , rt ), t ? N is a (weakly communicating) Markov decision process. For example, consider a state-representation function ? that depends only on the last observation, and that partitions the observation space into finitely many cells. Then an environment is Markov with this representation function if the probability distribution on the next cells only depends on the last observed cell and action. Note that there may be many state-representation functions with which an environment e is Markov. 3 Main results Given a set ? = {?j ; j 6 J} of J state-representation functions (models), one of which being a Markov model of the unknown environment e? , we want to construct a strategy that performs nearly as well as the best algorithm that knows which ?j is Markov, and knows all the probabilistic characteristics (transition probabilities and rewards) of the MDP corresponding to this model. For that purpose we define the regret of any strategy at time T , like in [6, 3], as T X def rt , ?(T ) = T ?? ? t=1 where rt are the rewards received when following the proposed strategy and ?? is the average optimal PT value in the best Markov model, i.e., ?? = limT T1 E( t=1 rt (? ? )) where rt (? ? ) are the rewards received when following the optimal policy for the best Markov model. 
Note that this definition makes sense since when the MDP is weakly communicating, the average optimal value of reward does not depend on the initial state. Also, one could replace T ?? with the expected ? sum of rewards obtained in T steps (following the optimal policy) at the price of an additional O( T ) term. In the next subsection, we describe an algorithm that achieves a sub-linear regret of order T 2/3 . 3.1 Best Lower Bound (BLB) algorithm In this section, we introduce the Best-Lower-Bound (BLB) algorithm, described in Figure 1. The algorithm works in stages of doubling length. Each stage consists in 2 phases: an exploration and an exploitation phase. In the exploration phase, BLB plays the UCRL2 algorithm on each model (?j )16j6J successively, as if each model ?j was a Markov model, for a fixed number ?i,1,J of rounds. The exploitation part consists in selecting first the model with highest lower bound, according to the empirical rewards obtained in the previous exploration phase. This model is initially selected for the same time as in the exploration phase, and then a test decides to either continue playing this model (if its performance during exploitation is still above the corresponding lower bound, i.e. if the rewards obtained are still at least as good as if it was playing the best model). If it does not pass the test, then another model (with second best lower-bound) is select and played, and so on. Until the exploitation phase (of fixed length ?i,2 ) finishes and the next stage starts. 3 Parameters: f, ? For each stage i > 1 do Set the total length of stage i to be ?i := 2i . 2/3 1. Exploration. Set ?i,1 = ?i . For each j ? {1, . . . , J} do ? Run UCRL2 with parameter ?i (?) defined in (1) using ?j during ?i,1,J steps: the state space is assumed to be S?j with transition structure induced by ?j . ? Compute the corresponding average empirical reward ? bi,1 (?j ) received during this exploration phase. 2. Exploitation. Set ?i,2 = ?i ? ?i,1 and initialize J := {1, . . . , J} . While the current length of the exploitation part is less than ?i,2 do ? Select b j = argmax ? bi,1 (?j ) ? 2B(i, ?j , ?) (using (3)). j?J ? Run UCRL2 with parameter ?i (?) using ?bj : update at each time step t the current average empirical reward ? bi,2,t (?bj ) from the beginning of the run. Provided that the length of the current run is larger than ?i,1,J , do the test ? bi,2,t (?bj ) > ? bi,1 (?bj ) ? 2B(i, ?bj , ?) . ? If the test fails, then stop UCRL2 and set J := J \ {b j}. If J = ? then set J := {1, . . . , J}. Figure 1: The Best-Lower-Bound selection strategy. def The length of stage i is fixed and defined to be ?i = 2i . Thus for a total time horizon T , the number def of stages I(T ) before time T is I(T ) = xlog2 (T + 1)y. Each stage i (of length ?i ) is further decomposed into an exploration (length ?i,1 ) and an exploitation (length ?i,2 ) phases. Exploration phase. All the models {?j }j6J are played one after another for the same amount of def ? time ?i,1,J = i,1 J . Each episode 1 6 j 6 J consists in running the UCRL2 algorithm using the model of states and transitions induced by the state-representation function ?j . Note that UCRL2 does not require the horizon T in advance, but requires a parameter p in order to ensure a near optimal regret bound with probability higher than 1 ? p. We define this parameter p to be ?i (?) in stage i, where def ?i (?) = (2i ? (J ?1 + 1)22i/3 + 4)?1 2?i+1 ? . The average empirical reward received during each episode is written ? bi,1 (?j ). 
Exploitation phase. We use the empirical rewards μ̂_{i,1}(φ_j) received in the previous exploration part of stage i, together with a confidence bound, in order to select the model to play. Moreover, a model φ is no longer run for a fixed period of time (as in the exploration part of stage i), but for a period τ_{i,2}(φ) that depends on a test; we first initialize J := {1, …, J} and then choose
$$\hat\jmath := \operatorname*{argmax}_{j\in\mathcal J}\ \hat\mu_{i,1}(\varphi_j) - 2B(i, \varphi_j, \delta), \qquad (2)$$
where we define
$$B(i, \varphi, \delta) := 34\, f(\tau_i - 1 + \tau_{i,1})\, |\mathcal S_\varphi| \sqrt{\frac{A \log\!\big(\tau_{i,1,J}/\delta_i(\delta)\big)}{\tau_{i,1,J}}}, \qquad (3)$$
where δ and the function f are parameters of the BLB algorithm. Then UCRL2 is played using the selected model φ_ĵ with the parameter δ_i(δ). In parallel, we test whether the average empirical reward received during this exploitation phase is high enough: at time t, if the length of the current episode is larger than τ_{i,1,J}, we test whether
$$\hat\mu_{i,2,t}(\varphi_{\hat\jmath}) \ \ge\ \hat\mu_{i,1}(\varphi_{\hat\jmath}) - 2B(i, \varphi_{\hat\jmath}, \delta). \qquad (4)$$
If the test is positive, we keep playing UCRL2 using the same model. If the test fails, the model ĵ is discarded (until the end of stage i), i.e., we update J := J \ {ĵ} and select a new model according to (2). We repeat these steps until the total time τ_{i,2} of the exploitation phase of stage i is over.
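To summarize the control flow, here is a schematic Python sketch of one BLB stage. UCRL2 is treated as a black box, and run_ucrl2, play_with_test, and stage_confidence are hypothetical helpers standing in for the corresponding steps of Figure 1; none of these names come from the paper.

def blb_stage(i, models, run_ucrl2, play_with_test, stage_confidence, bound_B, delta):
    # One stage of the Best-Lower-Bound strategy (schematic).
    # run_ucrl2(model, steps, conf)               -> average empirical reward over `steps` steps
    # play_with_test(model, lower_bound, min_len, conf) -> (steps_played, passed_test)
    # stage_confidence(i, J, delta)               -> delta_i(delta) of Eq. (1)
    # bound_B(i, model, delta)                    -> confidence width B(i, phi, delta) of Eq. (3)
    J = len(models)
    tau = 2 ** i
    tau1 = int(tau ** (2.0 / 3.0))                # exploration length tau_{i,1}
    tau1J = max(tau1 // J, 1)                     # per-model exploration budget tau_{i,1,J}
    conf = stage_confidence(i, J, delta)

    # 1. Exploration: run UCRL2 once on each candidate state representation.
    mu1 = [run_ucrl2(m, tau1J, conf) for m in models]

    # 2. Exploitation: repeatedly play the model with the best lower bound.
    active, t = set(range(J)), 0
    while t < tau - tau1:
        j = max(active, key=lambda k: mu1[k] - 2 * bound_B(i, models[k], delta))
        lb = mu1[j] - 2 * bound_B(i, models[j], delta)
        steps, passed = play_with_test(models[j], lb, tau1J, conf)
        t += steps
        if not passed:
            active.discard(j)
            if not active:                        # all models failed: start over
                active = set(range(J))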
models are used during exploitation stages only as long as they are giving rewards that are higher than the rewards that could be obtained in the ?true? model. All the models are explored sufficiently long so as to be able to estimate the optimal reward level in the true model, and to learn its policy. Thus, nothing has to be known about the ?wrong? models. This is in stark contrast to the usual situation in mathematical statistics, where to be able to test a hypothesis about a model (e.g., that the data is generated by a certain model versus some alternative models), one has to make assumptions about alternative models. This has to be done in order to make sure that the Type II error is small (the power of the test is large): that this error is small has to be proven under the alternative. Here, although we are solving seemingly the same problem, the role of the Type II error is played by the rewards. As long as the rewards are high we do not care where the model we are using is correct or not. We only have to ensure that the true model passes the test. Assumptions. A crucial assumption made in this work is that the ?true? model ?? belongs to a known finite set. While passing from a finite to a countably infinite set appears rather straightforward, getting rid of the assumption that this set contains the true model seems more difficult. What one would want to obtain in this setting is sub-linear regret with respect to the performance of the optimal policy in the best model; this, however, seems difficult without additional assumptions on the probabilistic characteristics of the models. Another approach not discussed here would be to try to build a good state representation function, as what is suggested for instance in [5]. Yet another interesting generalization in this direction would be to consider uncountable (possibly parametric but general) sets of models. This, however, would necessarily require some heavy assumptions on the set of models. Regret. The reader familiar with adversarial bandit literature will notice that our bound of order T 2/3 is worse than T 1/2 that usually appears in this context (see, for example [2]). The reason is that our notion of regret is different: in adversarial bandit literature, the regret is measured with respect to the best choice of the arm for the given fixed history. In contrast, we measure the regret with respect to the best policy (for knows the correct model and its parameters) that, in general, would obtain completely different (from what our algorithm would get) rewards and observations right from the beginning. 5 Estimating the diameter? As previously mentioned, a possibly large additive constant c(f, D) appears in the regret since we do not known a bound on the diameter of the MDP in the ?true? model, and use log T instead. Finding a way to properly address this problem by estimating online the diameter of the MDP is an interesting open question. Let us provide some intuition concerning this problem. First, we notice that, as reported in [6], when we compute an optimistic model based on the empirical rewards and transitions of the true model, the span of the corresponding optimistic value function sp(Vb + ) is always smaller than the diameter D. This span increases as we get more rewards and transitions samples, which gives a natural empirical lower bound on D. However, it seems quite difficult to compute a tight empirical upper bound on D (or sp(Vb + )). 
In [3], the authors derive a regret bound that scales with the span of the true value function sp(V ? ), which is also less than D, and can be significantly smaller in some cases. However, since we do not have the property that sp(Vb + ) 6 sp(V ? ), we need to introduce an explicit penalization in order to control the span of the computed optimistic models, and this requires assuming we know an upper bound B on sp(V ? ) in order to guarantee a final regret bound scaling with B. Unfortunately this does not solve the estimation problem of D, which remains an open question. 5 Proof of Theorem 1 In this section, we now detail the proof of Theorem 1. The proof is stated in several parts. First we remind a general confidence bound for the UCRL2 algorithm in the true model. Then we decompose the regret into the sum of the regret in each stage i. After analyzing the contribution to the regret in stage i, we then gather all stages and tune the length of each stage and episode in order to get the final regret bound. 5.1 Upper and Lower confidence bounds From the analysis of UCRL2 in [6], we have the property that with probability higher than 1 ? ? 0 , the regret of UCRL2 when run for ? consecutive many steps from time t1 in the true model ?? is r upper bounded by t +? ?1 A log( ??0 ) 1 1X ? ? ? rt 6 34D|S?? | , (6) ? t=t ? 1 where D is the diameter of the MDP. What is interesting is that this diameter does not need to be known by the algorithm. Also by carefully looking at the proof of UCRL, it can be shown that the following bound is also valid with probability higher than 1 ? ? 0 : r t +? ?1 A log( ??0 ) 1 1X rt ? ?? 6 34D|S?? | . ? t=t ? 1 0 We now define the following quantity, for every model r?, episode length ? and ? ? (0, 1) A log( ??0 ) def BD (?, ?, ? 0 ) = 34D|S? | . (7) ? 5.2 Regret of stage i In this section we analyze the regret of the stage i, which we denote ?i . Note that since each stage i 6 I is of length ?i = 2i except the last one I that may stop before, we have I(T ) X ?(T ) = ?i , (8) i=1 where I(T ) = xlog2 (T +1)y. We further decompose ?i = ?1,i +?i,2 into the regret corresponding to the exploration stage ?1,i and the regret corresponding to the exploitation stage ?i,2 . ?i,1 is the total length of the exploration stage i and ?i,2 is the total length of the exploitation stage i. def ? For each model ?, we write ?i,1,J = i,1 J the number of consecutive steps during which the UCRL2 algorithm is run with model ? in the exploration stage i, and ?i,2 (?) the number of consecutive steps during which the UCRL2 algorithm is run with model ? in the exploitation stage i. Good and Bad models. Let us now introduce the two following sets of models, defined after the end of the exploration stage, i.e. at time ti . def Gi = {? ? ? ; ? bi,1 (?) ? 2B(i, ?, ?) ? ? bi,1 (?? ) ? 2B(i, ?? , ?)}\{?? } , def Bi = {? ? ? ; ? bi,1 (?) ? 2B(i, ?, ?) < ? bi,1 (?? ) ? 2B(i, ?? , ?)} . With this definition, we have the decomposition ? = Gi ? {?? } ? Bi . 6 5.2.1 Regret in the exploration phase Since in the exploration stage i each model ? is run for ?i,1,J many steps, the regret for each model ? 6= ?? is bounded by ?i,1,J ?? . Now the regret for the true model is ?i,1,J (?? ? ? b1 (?? )), thus the total contribution to the regret in the exploration stage i is upper-bounded by ?i,1 6 ?i,1,J (?? ? ? b1 (?? )) + (J ? 1)?i,1,J ?? . (9) 5.2.2 Regret in the exploitation phase By definition, all models in Gi ? {?? } are selected before any model in Bi is selected. The good models. Let us consider some ? ? 
Gi and an event ?i under which the exploitation phase does not reset. The test (equation (4)) starts after ?i,1,J , thus, since there is not reset, either ?i,2 (?) = ?i,1,J in which case the contribution to the regret is bounded by ?i,1,J ?? , or ?i,2 (?) > ?i,1,J , in which case the regret during the (?i,2 (?) ? 1) steps (where the test was successful) is bounded by (?i,2 (?) ? 1)(?? ? ? bi,2,?i,2 (?)?1 (?)) 6 (?i,2 (?) ? 1)(?? ? ? bi,1 (?) + 2B(i, ?, ?)) 6 (?i,2 (?) ? 1)(?? ? ? bi,1 (?? ) + 2B(i, ?? , ?)) , and now since in the last step ? fails to pass the test, this adds a contribution to the regret at most ?? . We deduce that the total contribution to the regret of all the models ? ? Gi in the exploitation stages on the event ?i is bounded by X ?i,2 (Gi ) 6 max{?i,1,J ?? , (?i,2 (?) ? 1)(?? ? ? bi,1 (?? ) + 2B(i, ?? , ?)) + ?? } . (10) ??G The true model. First, let us note that since the total regret of the true model during the exploitation step i is given by ?i,2 (?? )(?? ? ? bi,2,t (?? )) , then the total regret of the exploration and exploitation stages in episode i on ?i is bounded by ?i ?i,1,J (?? ? ? b1 (?? )) + ?i,1,J (J ? 1)?? + ?i,2 (?? )(?? ? ? bi,2,ti +?i,2 (?? )) + X X max{?i,1,J ?? , (?i,2 (?) ? 1)(?? ? ? bi,1 (?? ) + 2B(i, ?? , ?)) + ?? } + ?i,2 (?)?? . 6 ??Gi ??Bi Now from the analysis provided in [6] we know that when we run the UCRL2 with the true model ?? with parameter ?i (?), then there exists an event ?1,i of probability at least 1 ? ?i (?) such that on this event ?? ? ? bi,1 (?? ) 6 BD (?i,1,J , ?? , ?i (?)) , and similarly there exists an event ?2,i of probability at least 1 ? ?i (?), such that on this event ?? ? ? bi,2,t (?? ) 6 BD (?i,2 (?? ), ?? , ?1 (?)) . Now we show that, with high probability, the true model ?? passes all the tests (equation (4)) until the end X of the episode i, and thus equivalently, with high probability no model ? ? Bi is selected, so that ?i,2 (?) = 0. ??Bi For the true model, after ? (?? , t) > ?i,1,J , there remains at most (?i,2 ??i,1,J +1) possible timesteps where we do the test for the true model ?? . For each test we need to control ?i,2,t (?? ), and the event corresponding to ? bi,1 (?? ) is shared by all the tests. Thus we deduce that with probability higher than 1 ? (?i,2 ? ?i,1,J + 2)?i (?) we have simultaneously on all time step until the end of exploitation phase of stage i, ? bi,2,t (?? ) ? ? bi,1 (?? ) = ? bi,2,t (?? ) ? ?? + ?? ? ? bi,1 (?? ) ? ? > ?BD (? (? , t), ? , ?i (?)) ? BD (?i,1,J , ?? , ?i (?)) > ?2BD (?i,1,J , ?? , ?i (?)) . Now provided that f (ti ) > D, then BD (?i,1,J , ?? , ?i (?)) 6 B(i, ?? , ?) , thus the true model passes all tests until the end of the exploitation part of stage i on an event ?3,i of probability higher than def 1 ? (?i,2 ? ?i,1,J + X 2)?i (?). Since there is no reset, we can choose ?i = ?3,i . Note that on this event, we thus have ?i,2 (?) = 0. ??Bi 7 By using a union bound over the events ?1,i , ?2,i and ?3,i , then we deduce that with probability higher than 1 ? (?i,2 ? ?i,1,J + 4)?i (?), ?i 6 ?i,1,J BD (?i,1,J , ?? , ?i (?))) + [?i,1,J (J ? 1) + |Gi |]?? + ?i,2 (?? )BD (?i,2 (?? ), ?? , ?i (?)) X max{(?i,1,J ? 1)?? , (?i,2 (?) ? 1)(BD (?i,1,J , ?? , ?i (?)) + 2B(i, ?? , ?)} . + ??Gi Now using again the fact that f (ti ) > D, and after some simplifications, we deduce that ?i 6 ?i,1,J BD (?i,1,J , ?? , ?i (?)) + ?i,2 (?? )BD (?i,2 (?? ), ?? , ?i (?)) X (?i,2 (?) ? 1)3B(i, ?? , ?) + ?i,1,J (J + |Gi | ? 1)?? . + ??Gi Finally, we use the fact that ? BD (?, ?? 
, ?i (?)) is increasing with ? to deduce the following rough bound that holds with probability higher than 1 ? (?i,2 ? ?i,1,J + 4)?i (?) 6 ?i,2 B(i, ?? , ?) + ?i,2 BD (?i,2 , ?? , ?i (?)) + 2J?i,1,J ?? , X where we used the fact that ?i,2 = ?i,2 (?? ) + ?i,2 (?) . ?i ??G 5.3 Tuning the parameters of each stage. We now conclude by tuning the parameters of each stage, i.e. the probabilities ?i (?) and the length ?i , ?i,1 and ?i,2 . The total length of stage i is by definition ?i = ?i,1 + ?i,2 = ?i,1,J J + ?i,2 , def def 2/3 2/3 ? 2/3 where ?i = 2i . So we set ?i,1 = ?i and then we have ?i,2 = ?i ? ?i and ?i,1,J = iJ . Now using these values and the definition of the bound B(i, ?? , ?), and BD (?i,2 , ?? , ?i (?)), we deduce with probability higher than 1 ? (?i,2 ? ?i,1,J + 4)?i (?) the following upper bound s r  ? 2/3   ?  i 2/3 2/3 i ?i 6 34f (ti )S AJ log ?i + 34DS A log ?i + 2?i ?? , J?i (?) ?i (?) 1/2  ? 2/3 J ?i,2 6 J?i . with ti = 2i ? 1 + 22i/3 and where we used the fact that 2/3 ?i def We now define ?i (?) such that ?i (?) = (2i ? (J ?1 + 1)22i/3 + 4)?1 2?i+1 ? . def Since for the stages i ? I0 = {i > 1; f (ti ) < D}, the regret is bounded by ?i 6 ?i ?? , then the total cumulative regret of the algorithm is bounded with probability higher than 1 ? ? (using the defition of the ?i (?)) by r r  28i/3   23i  X X 2i/3 + 2]2 + 34DS A log 2i + 2 i ?? . ?(T ) 6 [34f (ti )S JA log J? ? i?I0 i?I / 0 i where ti = 2 ? 1 + 2 2i/3 6 T. We conclude by using the fact that since I(T ) 6 log2 (T + 1), then with probability higher than 1 ? ?, the following bound on the regret holds  1/2  1/2 ?(T ) 6 cf (T )S AJ log(J?)?1 log2 (T ) T 2/3 + c0 DS A log(? ?1 ) log2 (T )T + c(f, D) . P def for some constant c, c0 , and where c(f, D) = i?I0 2i ?? . Now for the special choice when f (T ) = log2 (T +1), then i ? I0 means 2i +22i/3 < 2D +2, thus we must have i < D, and thus c(f, d) 6 2D . Acknowledgements This research was partially supported by the French Ministry of Higher Education and Research, Nord- Pas-de-Calais Regional Council and FEDER through CPER 2007-2013, ANR projects EXPLO-RA (ANR-08-COSI-004) and Lampada (ANR-09-EMER-007), by the European Communitys Seventh Framework Programme (FP7/2007-2013) under grant agreement 231495 (project CompLACS), and by Pascal-2. 8 References [1] Peter Auer, Nicol`o Cesa-Bianchi, and Paul Fischer. Finite time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235?256, 2002. [2] Peter Auer, Nicol`o Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Foundations of Computer Science, 1995. Proceedings., 36th Annual Symposium on, pages 322 ?331, oct 1995. [3] Peter L. Bartlett and Ambuj Tewari. REGAl: a regularization based algorithm for reinforcement learning in weakly communicating mdps. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI, pages 35?42, Arlington, Virginia, United States, 2009. AUAI Press. [4] Ronen I. Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213?231, March 2003. [5] Marcus Hutter. Feature reinforcement learning: Part I: Unstructured MDPs. Journal of Artificial General Intelligence, 1:3?24, 2009. [6] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 99:1563?1600, August 2010. [7] Michael Kearns and Satinder Singh. 
Near-optimal reinforcement learning in polynomial time. Machine Learning, 49:209?232, November 2002. [8] Tze L. Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4?22, 1985. [9] Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematics Society, 58:527?535, 1952. [10] Daniil Ryabko and Marcus Hutter. On the possibility of learning in reactive environments with arbitrary dependence. Theoretical Compututer Science, 405:274?284, October 2008. [11] Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd international conference on Machine learning, ICML, pages 881?888, New York, NY, USA, 2006. ACM. [12] Ambuj Tewari and Peter L. Bartlett. Optimistic linear programming gives logarithmic regret for irreducible mdps. In Proceedings of Neural Information Processing Systems Conference (NIPS), 2007. 9
A blind deconvolution method for neural spike identification

Chaitanya Ekanadham, Courant Institute, New York University, New York, NY 10012, [email protected]
Daniel Tranchina, Courant Institute, New York University, New York, NY 10012
Eero P. Simoncelli, Courant Institute, Center for Neural Science, Howard Hughes Medical Institute, New York University, New York, NY 10012

Abstract

We consider the problem of estimating neural spikes from extracellular voltage recordings. Most current methods are based on clustering, which requires substantial human supervision and systematically mishandles temporally overlapping spikes. We formulate the problem as one of statistical inference, in which the recorded voltage is a noisy sum of the spike trains of each neuron convolved with its associated spike waveform. Joint maximum-a-posteriori (MAP) estimation of the waveforms and spikes is then a blind deconvolution problem in which the coefficients are sparse. We develop a block-coordinate descent procedure to approximate the MAP solution, based on our recently developed continuous basis pursuit method. We validate our method on simulated data as well as real data for which ground truth is available via simultaneous intracellular recordings. In both cases, our method substantially reduces the number of missed spikes and false positives when compared to a standard clustering algorithm, primarily by recovering overlapping spikes. The method offers a fully automated alternative to clustering methods that is less susceptible to systematic errors.

1 Introduction

The identification of individual spikes in extracellularly recorded voltage traces is a critical step in the analysis of neural data for much of systems neuroscience. One or more electrodes are embedded in neural tissue, and the voltage(s) are recorded as a function of time, with the intention of recovering the spiking activity of one or more nearby cells. Each spike appears with a stereotyped waveform, whose shape depends on the cell morphology, the filtering properties of the medium and the electrode, and the cell's position relative to the electrode. The "spike sorting" problem is that of identifying distinct cells and their respective spike times. This is a difficult statistical inverse problem, since one typically does not know the number of cells, the shapes of their waveforms, or the frequency or temporal dynamics of their spike trains (see [1] for a review). The observed voltage is well described as a linear superposition of the spike waveforms [1, 2, 3, 4], and thus the problem bears resemblance to the classic sparse decomposition problem in signal processing and machine learning, where the neural waveforms are the "features" and the spike trains are the "coefficients", with the additional constraint that the features are unknown but convolutional, and the coefficients are mostly zero except for a few that are close to one. This sparse blind deconvolution problem arises in a variety of contexts other than spike sorting, including radar [5], seismology [6], and acoustic processing [7, 8].

Most current approaches to spike sorting (with notable exceptions [9, 10]) can be summarized in three steps ([1, 2]): (1) identify segments of neural activity (e.g., by thresholding the voltage), (2) determine a low-dimensional feature representation for these segments (e.g., PCA), (3) cluster the segments in the feature space (e.g., k-means, mixture of Gaussians). Fig. 1 illustrates a simple version of this procedure.
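For concreteness, a minimal sketch of this standard three-step pipeline follows (threshold, window, PCA, k-means). Parameter values, the crude peak-picking heuristic, and all names are illustrative choices of ours, not a specification from the paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_spike_segments(v, thresh, half_win, n_units):
    # (1) Identify candidate events where |v| crosses the threshold, keeping
    #     local maxima so that each event yields a single centered segment.
    idx = np.flatnonzero(np.abs(v) > thresh)
    idx = idx[(idx >= half_win) & (idx < len(v) - half_win)]
    peaks = [p for p in idx
             if np.abs(v[p]) == np.abs(v[p - half_win:p + half_win]).max()]
    segments = np.array([v[p - half_win:p + half_win] for p in peaks])
    # (2) Low-dimensional feature representation: first two principal components.
    feats = PCA(n_components=2).fit_transform(segments)
    # (3) Cluster in feature space; each cluster centroid (in segment space)
    #     serves as the waveform estimate for one putative neuron.
    labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(feats)
    return segments, feats, labels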
Segments within the same cluster are interpreted as spikes of a single neuron, whose waveform is estimated by the cluster centroid. This method works well in identifying temporally isolated spikes whose waveforms are easily distinguishable from background noise and each other. However, it generally fails for segments containing more than one spike (either from the same or different neurons), because these segments do not lie close to the clusters of any individual cell [1]. This is illustrated in Figs. 1(b), 1(c), and 1(d). Several state-of-the-art methods improve upon or combine one or more of these steps (e.g., [11, 12]), but remain susceptible to these errors because they still rely on clustering. These errors are systematic, and can have important scientific consequences. For example, an unresolved question in neuroscience is whether the occurrence of correlated or synchronous spikes carries specialized information [13, 14]. In order to experimentally address this question, one needs to record from multiple neurons, and to accurately obtain their joint spiking activity. A method that systematically fails for synchronous spikes (e.g., by missing them altogether, or by incorrectly assigning them to another neuron) will lead to erroneous conclusions.

Although the limitations of clustering methods have been known within the neuroscience community for some time [1, 2, 15, 16], they remain ubiquitous. Practitioners have developed a wide range of manual adjustments to overcome these limitations, from adjusting the electrode position to isolate a single neuron, to manually performing the clustering for spike identification. However, previous studies have shown that there is great variability in manual sorting results [17], and that human choices for cluster parameters are often suboptimal [18]. As such, there is a need for a fully automated sorting method that avoids these errors. This need is becoming ever more urgent as the use of multi-electrode arrays increases ([19]): manual parameter selection for a multi-dimensional clustering problem becomes more difficult and time-consuming as the number of electrodes grows.

We formulate the spike sorting problem as a Bayesian estimation problem by incorporating a prior model for the spikes and assuming a linear-Gaussian model for the recording given the spikes [2, 4]. Although the generative model is simple, inferring the spike times and waveforms is challenging. We approximate the most likely spikes and waveform shapes given the recording (i.e. the maximum-a-posteriori, or MAP, solution) by alternating between solving for the spike times while fixing the waveforms and vice versa. Solving for optimal spike times and amplitudes with fixed waveform shapes is itself an NP-hard problem, and we employ a novel method called continuous basis pursuit [20, 21], combined with iterative reweighting techniques, to approximate its solution. We compare our method with clustering on simulated and real data, demonstrating substantial reduction in spike identification errors (both misses and false positives), particularly when spikes overlap in the signal.

2 Model of voltage trace

The major deficiency of clustering is that each time segment is modeled as a noisy version of a single centered waveform rather than a noisy superposition of multiple, time-shifted waveforms. A simple generative model for the observed voltage trace V(t) is summarized as follows:

$$V(t) = \sum_{n=1}^{N}\sum_{i=1}^{K_n} a_{ni}\, W_n(t - \tau_{ni}) + \epsilon(t) \qquad (1)$$

$$\{\tau_{ni}\}_{i=1}^{K_n} \sim \mathrm{PoissonProcess}(\lambda_n), \qquad \{a_{ni}\}_{i=1}^{K_n} \sim \mathcal{N}(1, \sigma_n^2), \qquad n = 1,\dots,N \qquad (2)$$

In words, the spikes are Poisson processes with known rates $\{\lambda_n\}$ and amplitudes independently normally distributed about unity. The trace is the sum of convolutions of the spikes with their respective waveforms $\mathbf{W} \equiv \{W_n(t)\}_{n=1}^N$, along with Gaussian noise $\epsilon(t)$ (note: other log-concave noise distributions can be used). Here, $K_n$ is the (Poisson-distributed) number of spikes of the n-th waveform in the signal. Thus, the model accounts for superimposed spikes, variability in spike amplitude, as well as background noise. The model can easily be generalized to multielectrode recordings by making V(t) and the $W_n(t)$'s vector-valued, but to simplify notation we assume a single electrode.
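A direct way to sanity-check Eqs. (1)-(2) is to sample from them. The sketch below is a minimal simulation under stated simplifications (continuous spike times are rounded to the sampling grid, and waveforms are passed as sampled arrays); the function name and defaults are illustrative.

```python
import numpy as np

def simulate_trace(waveforms, rates, sigmas, T, dt, noise_sd, seed=0):
    """Draw a trace from Eqs. (1)-(2): Poisson spike times, N(1, sigma_n^2)
    amplitudes, waveforms added at spike times, plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    n_samp = int(T / dt)
    v = rng.normal(0.0, noise_sd, n_samp)          # epsilon(t)
    for w, lam, sig in zip(waveforms, rates, sigmas):
        k = rng.poisson(lam * T)                   # K_n spikes in [0, T)
        taus = rng.uniform(0, T, k)                # homogeneous Poisson times
        amps = rng.normal(1.0, sig, k)             # a_ni ~ N(1, sigma_n^2)
        for tau, a in zip(taus, amps):
            i = int(tau / dt)                      # shift rounded to the grid
            j = min(i + len(w), n_samp)
            v[i:j] += a * w[:j - i]                # a_ni * W_n(t - tau_ni)
    return v
```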
Note also that since the model describes the full voltage trace, it does not require a thresholding/windowing preprocessing stage, which can lead to additional artifacts (e.g., Fig. 1(d)).

Figure 1: Illustration of clustering on simulated data. (a) Threshold/windowing procedure. Peaks are identified using a threshold (horizontal lines) and windows are drawn about them (vertical lines) to identify segments. (b) Plot of the segments projected onto the first two principal components. Color indicates the output of k-means clustering (k = 3). (c) The top-left plot shows the true waveforms used in this example. The other plots indicate the waveforms whose projections are the black points in (b). (d) Another example of simulated data with a single biphasic waveform (not shown). The projections of the spikes can have a non-Gaussian distribution in PC space. Two clusters arise because the waveform has two peaks around which the segments can be centered.

The priors on the spike trains account for the observed variability in spike amplitudes and average spike rates with minimal assumptions. We are interested in the maximum-a-posteriori (MAP) solution of the waveforms and spike times and amplitudes given the observed voltage trace V(t):

$$\operatorname*{argmax}_{\{a_{ni}\},\{\tau_{ni}\},\mathbf{W}} P(\{a_{ni}\},\{\tau_{ni}\},\mathbf{W} \mid V(t)) = \operatorname*{argmax}_{\{a_{ni}\},\{\tau_{ni}\},\mathbf{W}} \Big[\log P\big(V(t)\mid \{a_{ni}\},\{\tau_{ni}\},\mathbf{W}\big) + \log P\big(\{a_{ni}\},\{\tau_{ni}\},\mathbf{W}\big)\Big] \qquad (3)$$

In the following sections, we describe a procedure to approximate this solution.

3 Inference methods

3.1 Objective function

MAP estimation under the model described in Eq. (2) and Eq. (1) boils down to solving:

$$\min_{\{a_{ni}\},\{\tau_{ni}\},\mathbf{W}} \; \frac{1}{2}\Big\|V(t) - \sum_{n,i} a_{ni}W_n(t-\tau_{ni})\Big\|_{2,\Sigma}^2 + \sum_{n,i}\Big(\frac{(a_{ni}-1)^2}{2\sigma_n^2} + \frac{1}{2}\log(2\pi\sigma_n^2) - \log(\lambda_n)\Big) \qquad (4)$$

where $\|\vec{x}\|_{2,\Sigma} = \|\Sigma^{-1/2}\vec{x}\|_2$ and $\Sigma$ is the noise covariance. Direct inference of the parameters is a highly nonlinear and intractable problem. However, we can make the problem tractable by using a linear representation for time-shifted waveforms. The simplest such representation uses a dictionary containing discretely time-shifted copies of the waveforms themselves $\{W_n(t - i\Delta)\}_{n,i}$. We chose to use a more accurate and efficient dictionary to represent continuously time-shifted waveforms in the context of sparse optimization, which relies on trigonometrically varying coefficients [21]:
$$\sum_{n=1}^{N}\sum_i a_{ni}W_n(t-\tau_{ni}) \;\approx\; \sum_{n=1}^{N}\sum_i \begin{pmatrix} a_{ni} \\ a_{ni}\, r_n \cos\!\big(\tfrac{2\pi\tau_{ni}}{\Delta_n}\big) \\ a_{ni}\, r_n \sin\!\big(\tfrac{2\pi\tau_{ni}}{\Delta_n}\big) \end{pmatrix}^{\!T} \begin{pmatrix} C_n(t-i\Delta) \\ U_n(t-i\Delta) \\ V_n(t-i\Delta) \end{pmatrix} = \sum_{n=1}^{N}\sum_i \begin{pmatrix} x_{ni1} \\ x_{ni2} \\ x_{ni3}\end{pmatrix}^{\!T}\begin{pmatrix} C_n(t-i\Delta) \\ U_n(t-i\Delta) \\ V_n(t-i\Delta)\end{pmatrix} = (\Phi_{\mathbf{W}}\vec{x})(t) \qquad (5)$$

The dictionary $\Phi_{\mathbf{W}}$ contains shifted copies of the functions $C_n(t)$, $U_n(t)$, $V_n(t)$ that approximate the space of time-shifted waveforms. The functions $C_n(t)$, $U_n(t)$, and $V_n(t)$, as well as the constants $r_n$ and $\theta_n$, depend on the waveform $W_n(t)$ and are explained in Fig. 2(b). We can then solve the following optimization problem:

$$\min_{\vec{x},\mathbf{W}} F(\vec{x},\mathbf{W}) \quad \text{such that} \quad x_{ni2} \ge r_n\cos(\theta_n)\,x_{ni1}, \;\; \sqrt{x_{ni2}^2+x_{ni3}^2} \le r_n\, x_{ni1}, \quad \forall n,i \qquad (6)$$

where

$$F(\vec{x},\mathbf{W}) = \frac{1}{2}\big\|V(t) - (\Phi_{\mathbf{W}}\vec{x})(t)\big\|_{2,\Sigma}^2 - \sum_{n,i}\log\Big((1-\lambda_n\Delta)\,\delta(x_{ni1}) + (\lambda_n\Delta)\,\phi_{1,\sigma_n^2}(x_{ni1})\Big)$$

where $\phi_{\mu,\sigma^2}(\cdot)$ is the Gaussian density function. The constraints on $\vec{x}$ in Eq. (6) ensure that each triplet $(x_{ni1}, x_{ni2}, x_{ni3})$ is consistent with the mapping defined in Eq. (5), with $x_{ni1}$ being the amplitude and $\tfrac{\Delta_n}{2\pi}\operatorname{atan}(x_{ni3}/x_{ni2})$ being the time-shift associated with the waveform $W_n(t)$ (see [21] for a detailed development of this approach). The constrained region, denoted by C, is convex and is illustrated as sections of cones in Fig. 2(c). Note that we have used the Bernoulli discrete-time process with a spacing $\Delta$ (matching the interpolation dictionary spacing) to approximate the Poisson process described in Eq. (2). Even with this linear representation, the problem is not jointly convex in $\mathbf{W}$ and $\vec{x}$, and is not convex in $\vec{x}$ for fixed $\mathbf{W}$. The optimization of Eq. (6) resembles that of [22] and other sparse-coding objective functions, with the following important differences: (1) the dictionary is translation-invariant and interpolates continuous time-shifts, (2) there is a constraint on the coefficients $\vec{x}$ due to the interpolation, and (3) there is a nonconvex mixture prior on the coefficients to model the spike amplitudes. We propose a block coordinate descent procedure to solve Eq. (6). After initializing $\mathbf{W}$ randomly, we iterate the following steps:

1. Given $\mathbf{W}$, approximately solve for $\vec{x}$.
2. Perform a rescaling $x_{nij} \leftarrow \frac{x_{nij}}{z_n}$ and $W_n(t) \leftarrow z_n W_n(t)$, where the $z_n$'s are chosen to optimize $F\big(\{\tfrac{x_{nij}}{z_n}\}, \{z_n W_n(t)\}\big)$.
3. Given $\vec{x}$, solve for $\mathbf{W}$, constraining $\|W_n(t)\|_2$ to be less than or equal to its current value.

The first step minimizes successive convex approximations of F and is the most involved of the three. The second is guaranteed to decrease F and amounts to N scalar optimizations. The final step minimizes the first term with respect to the waveforms while keeping the second term constant, and amounts to an L2-constrained least squares problem (ridge regression) that can be solved very efficiently. The following sections provide details of each of the steps.

3.2 Solve spikes given waveforms

In this step we wish to minimize the function $F(\cdot,\mathbf{W})$ while ensuring that the solution lies in the convex set C. However, this function is nonconvex and nonsmooth due to the second term in Eq. (6). This especially causes problems when the current estimates of $\mathbf{W}$ are far from the optimal values, since in this case there are many intermediate amplitudes between 0 and 1. To get around this, we replace each summand in the second term by a relaxation:

$$G(x_{ni1}) = -\log\Big((1-\lambda_n\Delta)\int_0^{\infty} \frac{1}{\gamma}\, e^{-\frac{x_{ni1}}{\gamma}}\, P(\gamma)\, d\gamma + (\lambda_n\Delta)\,\phi_{1,\sigma_n^2}(x_{ni1})\Big) \qquad (7)$$

which replaces the delta function at 0 with a mixture of exponential distributions. We chose the parameter $\gamma$ to be Gamma-distributed about a fixed small value. We solve this approximation using an iterative reweighting scheme. The weights are initialized to be uniform, $w_{ni}^{(0)} = \lambda_n$, $\forall n, i$. Then the following updates are iterated:

$$\vec{x}^{(t+1)} \leftarrow \operatorname*{argmin}_{\vec{x}\in C} \frac{1}{2}\big\|V(t)-(\Phi_{\mathbf{W}}\vec{x})(t)\big\|_2^2 + \sum_{n,i} w_{ni}^{(t)}\,|x_{ni1}| \qquad (8)$$

$$w_{ni}^{(t+1)} \leftarrow \frac{G\big(x_{ni1}^{(t+1)}\big)}{x_{ni1}^{(t+1)}} \qquad (9)$$

Eq. (8) is a convex optimization that can be solved efficiently. The weights are updated so that the second term in Eq. (8) is exactly the negative log prior probability of the previous solution $\vec{x}^{(t)}$. If a coefficient is 0, its weight is $\infty$ and the corresponding basis function is discarded. Such reweighting procedures have been used to optimize a nonconvex function by a series of convex optimizations [23, 24, 25]. Although there is no convergence guarantee, we find that it works well in practice.

Figure 2: (a) Illustration of the circle approximation in [21]. The manifold M of translates of a function f(t) lies on the hypersphere since translation preserves norm (black curve). This can be locally approximated by a circle (red curve). The approximation is exact at 3 equally-spaced points (black dots). (b) Visualization in the plane on which the three translates of f(t) lie. The quantities r and θ can be derived analytically for a fixed f(t) and spacing Δ. (c) These circle approximations can be linked together to form a piecewise-circular approximation of the entire manifold.
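The reweighting loop (8)-(9) can be sketched with cvxpy standing in for the CVX solver used by the authors. This is a simplified illustration for a single waveform, with the Gamma prior on γ collapsed to a point mass and a large capped weight playing the role of the infinite weight for zeroed coefficients; all names and defaults are hypothetical.

```python
import numpy as np
import cvxpy as cp

def reweighted_spike_solve(Phi, v, r, theta, lam, dt, sigma,
                           gamma=0.05, n_rounds=5):
    """Sketch of updates (8)-(9) for one waveform. Phi stores the (C, U, V)
    interpolation triplets as consecutive column groups, so the coefficient
    vector has 3*m entries and x[0::3] are the amplitudes x_ni1."""
    m = Phi.shape[1] // 3
    p = lam * dt                       # lambda_n * Delta, spike prob. per bin
    w = np.full(m, lam)                # w_ni^(0) = lambda_n
    gauss = lambda a: np.exp(-(a - 1.0)**2 / (2 * sigma**2)) / (
        np.sqrt(2 * np.pi) * sigma)
    # Relaxed negative log prior (7), with P(gamma) taken as a point mass.
    G = lambda a: -np.log((1 - p) * np.exp(-a / gamma) / gamma + p * gauss(a))
    x1 = np.zeros(m)
    for _ in range(n_rounds):
        x = cp.Variable(3 * m)
        obj = (0.5 * cp.sum_squares(v - Phi @ x)
               + cp.sum(cp.multiply(w, cp.abs(x[0::3]))))
        cons = []
        for i in range(m):             # cone constraints of Eq. (6)
            cons += [x[3*i + 1] >= r * np.cos(theta) * x[3*i],
                     cp.norm(cp.hstack([x[3*i + 1], x[3*i + 2]]))
                     <= r * x[3*i]]
        cp.Problem(cp.Minimize(obj), cons).solve()
        x1 = np.asarray(x.value).ravel()[0::3]
        live = x1 > 1e-8               # zeroed atoms get (effectively) w = inf
        w = np.where(live, G(np.maximum(x1, 1e-8)) / np.maximum(x1, 1e-8),
                     1e8)
    return x1
```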
3.3 Solve rescaling factors

The first term of $F(\vec{x}, \{W_n(t)\})$ does not change by much if one divides the coefficients $x_{nij}$ by some $z_n$ and multiplies the corresponding waveform by $z_n$.¹ The second term does change under such a rescaling. In order to avoid the solution where the waveforms/coefficients become arbitrarily large/small, respectively, we perform a rescaling in a separate step and then optimize the waveform shapes subject to a fixed norm constraint (described in the next section). Since the second term decomposes into terms that are each only dependent on one $z_n$, we can independently solve the following scalar optimizations numerically:

$$z_n \leftarrow \operatorname*{argmax}_{z>0} \sum_i \log\Big((1-\Delta\lambda_n)\,\frac{1}{z\gamma}\, e^{-\frac{x_{ni1}}{z\gamma}} + \Delta\lambda_n\,\phi_{1,\sigma_n^2}\Big(\frac{x_{ni1}}{z}\Big)\Big), \qquad n = 1,\dots,N \qquad (10)$$

These are essentially maximum likelihood estimates of the scale factors given fixed coefficients and waveform shapes. One then performs the updates:

$$x_{nij} \leftarrow \frac{x_{nij}}{z_n} \quad \forall n, i, j \qquad (11)$$

$$W_n(t) \leftarrow z_n W_n(t) \quad \forall n \qquad (12)$$

¹ If $\Phi_{\mathbf{W}}$ is linear in $\mathbf{W}$, there is no change. For our choice of $\Phi_{\mathbf{W}}$, there is a small change of order $O(\Delta)$.

This step is guaranteed not to increase the objective in Eq. (6), since the first term is held constant (up to a small error term, see footnote) and the second term cannot increase.

3.4 Solve waveforms given spikes

Given a set of coefficients $\vec{x}$, we can optimize waveform shapes by solving:

$$\min_{\mathbf{W}:\,\|W_i(t)\|_2 \le k_i} \; \frac{1}{2}\big\|V(t) - (\Phi_{\mathbf{W}}\vec{x})(t)\big\|_2^2 \qquad (13)$$

where $k_i$ is the current norm of $W_i(t)$. The constraints ensure that only the waveform shapes change (ideally, we would like the norm to be held fixed, but we relax it to an inequality to retain convexity), leaving any changes in scale to the previous step. Since $(\Phi_{\mathbf{W}}\vec{x})(t)$ is approximately a linear function of the waveforms, Eq. (13) is a standard ridge regression problem. Efficient algorithms exist for solving this problem in its dual form ([26]). This step is guaranteed to decrease the objective in Eq. (6), since the second term is held constant and the first term can only decrease.
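Since step 3 reduces to ridge regression once the coefficients are fixed, a minimal sketch is short. Here the norm constraint of Eq. (13) is replaced by a Lagrangian penalty mu, which is a simplification of the constrained dual-form solver referenced in the text; names are illustrative.

```python
import numpy as np

def update_waveform(A, v, mu):
    """Ridge step for Eq. (13): columns of A hold the (fixed-coefficient)
    basis responses, w stacks the waveform samples. The penalty mu enforces
    the norm bound through its Lagrangian instead of an explicit constraint."""
    d = A.shape[1]
    # Normal equations (A^T A + mu I) w = A^T v; the dual/kernel form cited
    # in the text ([26]) is cheaper when A has fewer rows than columns.
    return np.linalg.solve(A.T @ A + mu * np.eye(d), A.T @ v)
```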
4 Results

We applied our method to two data sets. The first was simulated according to the generative model described in Eqs. (1)-(2). The second is real data from Harris et al. ([18]) consisting of simultaneous paired intracellular/extracellular recordings. The intracellular recording provides ground truth spikes for one of the cells in the extracellular recording.

4.1 Simulated data

We obtained three waveforms from retinal recordings made in the Chichilnisky lab at the Salk Institute (shown in Fig. 3(a)). Three Poisson spike trains were sampled independently with rate $(1-\rho)\lambda_0$ with $\lambda_0 = 10$ Hz. To introduce a correlation of $\rho = \tfrac{1}{3}$, we sampled another Poisson spike train with rate $\rho\lambda_0$ and added these spikes (with random jitter) to each of the previous three trains. Spike amplitudes were drawn from $\mathcal{N}(1, 0.1^2)$. The spikes were convolved with the waveforms and Gaussian white noise was added (with the smallest waveform amplitude equal to six times $\sigma$). For clustering, the original trace was thresholded to identify segments (the threshold was varied in order to see the error tradeoff). PCA was applied and the leading PCs explaining 95% of the total variance were retained. K-means clustering was then applied (with k = 3) in the reduced space. To reduce computational cost, we applied our method to disjoint segments of the trace, which were split off whenever activity was less than $3\sigma$ for more than half the waveform duration (about 4 ms). The waveforms were initialized randomly and $P(\gamma)$ was Gamma-distributed with mean 0.0005 and coefficient of variation 0.25 (in Eq. (7)) for all experiments. The waveforms were allowed to change in length by adding (removing) padding on the ends on each iteration if the values exceeded (did not exceed) 5% of the peak amplitude (similar to [7]). Padding was added in increments of 10% of the current waveform length. Convex optimizations were performed using the CVX package ([27]).

The learned waveforms and spike amplitude distributions are shown in Fig. 3. The amplitude distributions are well-matched to the generative distributions (shown in red). To evaluate performance, we counted missed spikes (relative to the number of true spikes) and false positives (relative to the number of predicted spikes) for clustering and our method. We varied the segment-finding threshold for clustering, and the amplitude threshold for our algorithm. The error tradeoff is shown in Fig. 4(a), and indicates that our method reduces both types of errors. To visualize the errors, we chose optimal thresholds for each method (yielding the smallest number of misses and false positives), and then projected all segments used in clustering onto the first two principal components. We indicate by dots, open circles, and crosses the hits, misses, and false positives, respectively (with colors indicating the waveform). For the same segments, we illustrate the behavior of our method in the same space. Note that unlike clustering, our method is allowed to assign more than one spike to each segment. The visualization is shown in Figures 4(b) and 4(c), and shows how clustering fails to account for the superimposed spikes, while our method eliminates a large portion of these errors. We found that this improvement was robust to the amount of noise added to the original trace (not shown).

Figure 3: (a) Three waveforms used in simulations. (b), (c), (d) Histograms of the spike amplitudes learned by our algorithm for the blue, green, and red waveforms, respectively. The amplitudes were converted into $\sigma$ units by multiplying them by the corresponding waveform amplitudes, then dividing by the noise standard deviation. The red line indicates the generative density, corresponding to a Gaussian with mean 1 and standard deviation 0.1.
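The correlated spike trains of Section 4.1 can be generated with a shared "mother" train, as sketched below; the rate, jitter, and seed arguments are illustrative rather than the authors' exact settings.

```python
import numpy as np

def correlated_trains(n_trains, lam0, rho, T, jitter_sd, seed=0):
    """Independent trains at rate (1 - rho) * lam0 plus a shared train at
    rate rho * lam0, copied into each train with Gaussian jitter."""
    rng = np.random.default_rng(seed)
    shared = rng.uniform(0, T, rng.poisson(rho * lam0 * T))
    trains = []
    for _ in range(n_trains):
        own = rng.uniform(0, T, rng.poisson((1 - rho) * lam0 * T))
        jittered = shared + rng.normal(0, jitter_sd, len(shared))
        trains.append(np.sort(np.concatenate([own, jittered])))
    return trains
```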
Figure 4: (a) Tradeoff of misses and false positives as the segment-identification threshold in clustering is varied (blue), and the amplitude threshold for our method (red) is varied. Diagonal lines indicate surfaces with equal total error. (b), (c) Visualization of spike sorting errors for clustering (b) and our method (c). Each point is a threshold-crossing segment in the signal, projected onto the first two principal components. Dots represent segments whose composite spikes were all correctly identified, with the color specifying the waveform (see Fig. 3(a)). Open circles and crosses represent misses and false positives, respectively. The thresholds were optimized for each method, and correspond to the enlarged dots in (a).

4.2 Real data

We used one electrode from the tetrode data in [18] to simplify our analysis. The raw trace was highpass filtered (800 Hz) to remove slow drift. The noise standard deviation was estimated from regions not exceeding three times the overall standard deviation. We then repeated the same analysis as for the simulated data. The resulting waveforms and coefficient histograms are shown in Figure 5. Unlike the simulated example, the spike amplitude distributions are bimodal in nature, despite the prior amplitude distribution containing only one Gaussian. We first focus on the high-amplitude groups (2 and 4), both of which are well-separated from their low-amplitude counterparts (1 and 3), suggesting that an appropriately chosen threshold would provide accurate spike identification for the ground-truth cell (4). Figure 6(a) confirms this, showing that our method provides substantial reduction in misses/false positives. Figures 6(b) and 6(c) show that, as before, the majority of this reduction is accounted for by recovering spikes overlapping with those of another cell (group 2). The low-amplitude groups (1 and 3) could arise from background cells whose waveforms look like scaled-down versions of those of the foreground cells 2 and 4, thus creating secondary "lumps" in the amplitude distributions. The projections of the events in these groups are labeled in Figures 6(b) and 6(c), showing that it is unclear whether they arise from noise or one or two background cells. It is up to the user whether to interpret these badly-isolated groups as cells.
Figure 5: (a) Two waveforms learned from CBP. (b), (c) Distributions of the amplitude values for the blue and green waveform, respectively. The numbers label distinct groups of amplitudes that could be treated as spikes of a single cell. Group 4 corresponds to the ground truth cell. Group 2 corresponds to another foreground cell. Groups 1 and 3 likely correspond to a mixture of background cell activity and noise. The groups are labeled in PC-space in Figures 6(b) and 6(c).

Figure 6: (a) Error tradeoff as in Fig. 4(a). The blue, green, and red curves are results of k-means clustering for different k. (b) Illustration of clustering errors in PC-space, with k = 4 and a threshold corresponding to the large red dot in (a). (c) Errors for our method with threshold corresponding to the large black dot. The numbers show the approximate location in PC-space of the amplitude groups demarcated in Figures 5(b) and 5(c).

5 Discussion

We have formulated the spike sorting problem as a maximum-a-posteriori (MAP) estimation problem, assuming a linear-Gaussian likelihood of the observed trace given the spikes and a Poisson process prior on the spikes. Unlike clustering methods, the model explicitly accounts for overlapping spikes, translation-invariance, and variability in spike amplitudes. Unlike other methods that handle overlapped spikes (e.g., [10]), our method jointly learns waveforms and spikes within a unified framework. We derived an iterative procedure based on block-coordinate descent to approximate the MAP solution. We showed empirically on simulated data that our method outperforms the standard clustering approach, particularly in the case of superimposed spikes. We also showed that our method yields an improvement on a real data set with ground truth, despite the fact that there are similar waveform shapes with different amplitudes. The majority of improvement in this case is also accounted for by identifying superimposed spikes. Our method has only a few parameters that are stable across a variety of conditions, thus addressing the need for an automated method for spike sorting that is not susceptible to systematic errors.

References

[1] M. S. Lewicki. A review of methods for spike sorting: the detection and classification of neural action potentials. Network, 9(4):R53–R78, Nov 1998.
[2] M. Sahani. Latent variable models for neural data analysis. PhD thesis, California Institute of Technology, Pasadena, California, 1999.
[3] M. Wehr, J. S. Pezaris, and M. Sahani. Simultaneous paired intracellular and tetrode recordings for evaluating the performance of spike sorting algorithms. Neurocomputing, 26-27:1061–1068, 1999.
[4] Maneesh Sahani, John S. Pezaris, and Richard A. Andersen. On the separation of signals from neighboring cells in tetrode recordings. In Advances in Neural Information Processing Systems 10, pages 222–228. MIT Press, 1998.
[5] P. H. van Cittert. Zum Einfluß der Spaltbreite auf die Intensitätsverteilung in Spektrallinien. II. Zeitschrift für Physik A Hadrons and Nuclei, 69:298–308, 1931. 10.1007/BF01391351.
[6] J. Mendel. Optimal Seismic Deconvolution: An Estimation Based Approach. Academic Press, 1983.
[7] Evan Smith and Michael S. Lewicki. Efficient coding of time-relative structure using spikes. Neural Computation, 17(1):19–45, Jan 2005.
[8] Roger Grosse, Rajat Raina, Helen Kwong, and Andrew Y. Ng. Shift-invariant sparse coding for audio classification. In UAI, 2007.
[9] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatiotemporal correlations and visual signaling in a complete neuronal population. Nature, 454(7206):995–999, Aug 2008.
[10] Jason S. Prentice, Jan Homann, Kristina D. Simmons, Gašper Tkačik, Vijay Balasubramanian, and Philip C. Nelson. Fast, scalable, Bayesian spike identification for multi-electrode arrays. PLoS ONE, 6(7):e19884, 07 2011.
[11] R. Quian Quiroga, Z. Nadasdy, and Y. Ben-Shaul. Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Comput., 16:1661–1687, August 2004.
[12] Ki Yong Kwon and K. Oweiss. Wavelet footprints for detection and sorting of extracellular neural action potentials. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages 609–612, May 2011.
[13] Markus Meister, Jerome Pine, and Denis A. Baylor. Multi-neuronal signals from the retina: acquisition and analysis. Journal of Neuroscience Methods, 51(1):95–106, 1994.
[14] S. Nirenberg, S. M. Carcieri, A. L. Jacobs, and P. E. Latham. Retinal ganglion cells act largely as independent encoders. Nature, 411:698–701, 2001.
[15] R. Segev, J. Goodhouse, J. Puchalla, and M. J. Berry. Recording spikes from a large fraction of the ganglion cells in a retinal patch. Nature Neuroscience, 7(10):1154–1161, October 2004.
[16] C. Pouzat, O. Mazor, and G. Laurent. Using noise signature to optimize spike-sorting and to assess neuronal classification quality. J Neurosci Methods, 122(1):43–57, 2002.
[17] Frank Wood, Michael J. Black, Carlos Vargas-Irwin, Matthew Fellows, and John P. Donoghue. On the variability of manual spike sorting. IEEE Transactions on Biomedical Engineering, 51:912–918, 2004.
[18] Kenneth D. Harris, Darrell A. Henze, Jozsef Csicsvari, Hajime Hirase, and György Buzsáki. Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. J Neurophysiol, 84:401–414, 2000.
[19] Emery N. Brown, Robert E. Kass, and Partha P. Mitra. Multiple neural spike train data analysis: state-of-the-art and future challenges. Nature Neuroscience, 7(5):456–461, May 2004.
[20] C. Ekanadham, D. Tranchina, and E. P. Simoncelli. Sparse decomposition of transformation-invariant signals with continuous basis pursuit. In Proc. Int'l Conf. Acoustics Speech Signal Processing (ICASSP), Los Angeles, CA, May 22-27 2011. IEEE Sig Proc Society.
[21] C. Ekanadham, D. Tranchina, and E. P. Simoncelli. Sparse decomposition of translation-invariant signals with continuous basis pursuit. IEEE Transactions on Signal Processing, 2011. Accepted for publication.
[22] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, Jun 1996.
[23] Ingrid Daubechies, Ronald DeVore, Massimo Fornasier, and C. Sinan Güntürk. Iteratively reweighted least squares minimization for sparse recovery. Communications on Pure and Applied Mathematics, 63(1):1–38, 2010.
[24] Emmanuel J. Candès. Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Analysis and Applications, pages 877–905, 2008.
[25] R. Chartrand and Wotao Yin. Iteratively reweighted algorithms for compressive sensing. In Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, pages 3869–3872, March 31–April 4, 2008.
[26] Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems 19, pages 801–808. 2007.
[27] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, October 2010.
Improved Algorithms for Linear Stochastic Bandits

Yasin Abbasi-Yadkori, [email protected], Dept. of Computing Science, University of Alberta
Dávid Pál, [email protected], Dept. of Computing Science, University of Alberta
Csaba Szepesvári, [email protected], Dept. of Computing Science, University of Alberta

Abstract

We improve the theoretical analysis and empirical performance of algorithms for the stochastic multi-armed bandit problem and the linear stochastic multi-armed bandit problem. In particular, we show that a simple modification of Auer's UCB algorithm (Auer, 2002) achieves with high probability constant regret. More importantly, we modify and, consequently, improve the analysis of the algorithm for the linear stochastic bandit problem studied by Auer (2002), Dani et al. (2008), Rusmevichientong and Tsitsiklis (2010), Li et al. (2010). Our modification improves the regret bound by a logarithmic factor, though experiments show a vast improvement. In both cases, the improvement stems from the construction of smaller confidence sets. For their construction we use a novel tail inequality for vector-valued martingales.

1 Introduction

Linear stochastic bandit problem is a sequential decision-making problem where in each time step we have to choose an action, and as a response we receive a stochastic reward, the expected value of which is an unknown linear function of the action. The goal is to collect as much reward as possible over the course of n time steps. The precise model is described in Section 1.2.

Several variants and special cases of the problem exist, differing on what the set of available actions is in each round. For example, the standard stochastic d-armed bandit problem, introduced by Robbins (1952) and then studied by Lai and Robbins (1985), is a special case of the linear stochastic bandit problem where the set of available actions in each round is the standard orthonormal basis of R^d. Another variant, studied by Auer (2002) under the name "linear reinforcement learning", and later in the context of web advertisement by Li et al. (2010), Chu et al. (2011), is a variant where the set of available actions changes from time step to time step, but has the same finite cardinality in each step. Another variant, dubbed "sleeping bandits", studied by Kleinberg et al. (2008), is the case when the set of available actions changes from time step to time step, but it is always a subset of the standard orthonormal basis of R^d. Another variant, studied by Dani et al. (2008), Abbasi-Yadkori et al. (2009), Rusmevichientong and Tsitsiklis (2010), is the case when the set of available actions does not change between time steps but the set can be an almost arbitrary, even infinite, bounded subset of a finite-dimensional vector space. Related problems were also studied by Abe et al. (2003), Walsh et al. (2009), Dekel et al. (2010).

In all these works, the algorithms are based on the same underlying idea: the optimism-in-the-face-of-uncertainty (OFU) principle. This is not surprising since they are solving almost the same problem. The OFU principle elegantly solves the exploration-exploitation dilemma inherent in the problem. The basic idea of the principle is to maintain a confidence set for the vector of coefficients of the linear function. In every round, the algorithm chooses an estimate from the confidence set and an action so that the predicted reward is maximized, i.e., the estimate-action pair is chosen optimistically. We give details of the algorithm in Section 2.
Thus the problem reduces to the construction of confidence sets for the vector of coefficients of the linear function based on the action-reward pairs observed in the past time steps. This is not an easy problem, because the future actions are not independent of the actions taken in the past (since the algorithm's choices of future actions depend on the random confidence set constructed from past data). In fact, several authors (Auer, 2000, Li et al., 2010, Walsh et al., 2009) fell victim to making a mistake because they did not recognize this issue. Correct solutions require new martingale techniques which we provide here.

The smaller confidence sets one is able to construct, the better regret bounds one obtains for the resulting algorithm, and, more importantly, the better the algorithm performs empirically. With our new technique, we vastly reduce the size of the confidence sets of Dani et al. (2008) and Rusmevichientong and Tsitsiklis (2010). First, our confidence sets are valid uniformly over all time steps, which immediately saves a log(n) factor by avoiding the otherwise needed union bound. Second, our confidence sets are "more empirical" in the sense that some worst-case quantities from the old bounds are replaced by empirical quantities that are always smaller, sometimes substantially. As a result, our experiments show an order-of-magnitude improvement over the CONFIDENCEBALL algorithm of Dani et al. (2008). To construct our confidence sets, we prove a new martingale tail inequality. The new inequality is derived using techniques from the theory of self-normalized processes (de la Peña et al., 2004, 2009).

Using our confidence sets, we modify the UCB algorithm (Auer, 2002) for the d-armed bandit problem and show that with probability $1-\delta$, the regret of the modified algorithm is $O(d \log(1/\delta)/\Delta)$, where $\Delta$ is the difference between the expected rewards of the best and the second best action. In particular, note that the regret does not depend on n. This seemingly contradicts the result of Lai and Robbins (1985), who showed that the expected regret of any algorithm is at least $\big(\sum_{i\ne i^*} 1/D(p_i \,\|\, p_{i^*}) - o(1)\big)\log n$, where $p_{i^*}$ and $p_i$ are the reward distributions of the optimal arm and arm i respectively and D is the Kullback-Leibler divergence. However, our algorithm receives $\delta$ as an input, and thus its expected regret depends on $\delta$. With $\delta = 1/n$ our algorithm has the same expected regret bound, $O((d \log n)/\Delta)$, as Auer (2002) has shown for UCB.

For the general linear stochastic bandit problem, we improve the regret of the CONFIDENCEBALL algorithm of Dani et al. (2008). They showed that its regret is at most $O\big(d \log(n) \sqrt{n \log(n/\delta)}\big)$ with probability at least $1-\delta$. We modify their algorithm so that it uses our new confidence sets and we show that its regret is at most $O\big(d \log(n)\sqrt{n} + \sqrt{dn\log(n/\delta)}\big)$, which is roughly an improvement by a multiplicative factor of $\sqrt{\log(n)}$. Dani et al. (2008) also prove a problem dependent regret bound. Namely, they show that the regret of their algorithm is $O\big(\tfrac{d^2}{\Delta}\log(n/\delta)\log^2(n)\big)$, where $\Delta$ is the "gap" as defined in (Dani et al., 2008). For our modified algorithm we prove an improved $O\big(\tfrac{\log(1/\delta)}{\Delta}(\log(n) + d\log\log n)^2\big)$ bound.

1.1 Notation

We use $\|x\|_p$ to denote the p-norm of a vector $x \in \mathbb{R}^d$. For a positive definite matrix $A \in \mathbb{R}^{d\times d}$, the weighted 2-norm of vector $x \in \mathbb{R}^d$ is defined by $\|x\|_A = \sqrt{x^\top A x}$. The inner product is denoted by $\langle\cdot,\cdot\rangle$ and the weighted inner-product by $\langle x, y\rangle_A = x^\top A y$. We use $\lambda_{\min}(A)$ to denote the minimum eigenvalue of the positive definite matrix A. For any sequence $\{a_t\}_{t=0}^{\infty}$ we denote by $a_{i:j}$ the sub-sequence $a_i, a_{i+1}, \dots, a_j$.

1.2 The Learning Model

In each round t, the learner is given a decision set $D_t \subseteq \mathbb{R}^d$ from which he has to choose an action $X_t$. Subsequently he observes reward $Y_t = \langle X_t, \theta_*\rangle + \eta_t$, where $\theta_* \in \mathbb{R}^d$ is an unknown parameter and $\eta_t$ is a random noise satisfying $\mathbb{E}[\eta_t \mid X_{1:t}, \eta_{1:t-1}] = 0$ and some tail-constraints, to be specified soon.

The goal of the learner is to maximize his total reward $\sum_{t=1}^{n}\langle X_t, \theta_*\rangle$ accumulated over the course of n rounds. Clearly, with the knowledge of $\theta_*$, the optimal strategy is to choose in round t the point $x_t^* = \operatorname{argmax}_{x\in D_t}\langle x, \theta_*\rangle$ that maximizes the reward. This strategy would accumulate total reward $\sum_{t=1}^{n}\langle x_t^*, \theta_*\rangle$. It is thus natural to evaluate the learner relative to this optimal strategy. The difference of the learner's total reward and the total reward of the optimal strategy is called the
For any sequence {at }1 t=0 we denote by ai:j the sub-sequence ai , ai+1 , . . . , aj . 1.2 The Learning Model In each round t, the learner is given a decision set Dt ? Rd from which he has to choose an action Xt . Subsequently he observes reward Yt = hXt , ?? i + ?t where ?? 2 Rd is an unknown parameter and ?t is a random noise satisfying E[?t | X1:t , ?1:t 1 ] = 0 and some tail-constraints, to be specified soon. Pn The goal of the learner is to maximize his total reward t=1 hXt , ?? i accumulated over the course of n rounds. Clearly, with the knowledge of ?? , the optimal strategy is to choose in round t the point x?tP= argmaxx2Dt hx, ?? i that maximizes the reward. This strategy would accumulate total n reward t=1 hx?t , ?? i. It is thus natural to evaluate the learner relative to this optimal strategy. The difference of the learner?s total reward and the total reward of the optimal strategy is called the 2 for t := 1, 2, . . . do (Xt , ?et ) = argmax(x,?)2Dt ?Ct Play Xt and observe reward Yt Update Ct end for 1 hx, ?i Figure 1: OFUL ALGORITHM pseudo-regret (Audibert et al., 2009) of the algorithm and it can be formally written as ! ! n n n X X X Rn = hx?t , ?? i hXt , ?? i = hx?t Xt , ?? i . t=1 t=1 t=1 As compared to the regret, the pseudo-regret has the same expected value, but lower variance because the additive noise ?t is removed. However, the omitted quantity is uncontrollable, hence we have no interest in including it in our results (the omitted quantity would also cancel, if ?t was a sequence which is independently selected of X1:t .) In what follows, for simplicity we use the word regret instead of the more precise pseudo-regret in connection to Rn . The goal of the algorithm is to keep the regret Rn as low as possible. As a bare minimum, we require that the algorithm is Hannan consistent, i.e., Rn /n ! 0 with probability one. In order to obtain meaningful upper bounds on the regret, we will place assumptions on {Dt }1 t=1 , 1 ?? and the distribution of {?t }1 t=1 . Roughly speaking, we will need to assume that {Dt }t=1 lies in a bounded set. We elaborate on the details of the assumptions later in the paper. However, we state the precise assumption on the noise sequence {?t }1 t=1 now. We will assume that ?t is conditionally R-sub-Gaussian where R 0 is a fixed constant. Formally, this means that ? 2 2? ? ? R 8 2R E e ?t | X1:t , ?1:t 1 ? exp . 2 The sub-Gaussian condition automatically implies that E[?t | X1:t , ?1:t 1 ] = 0. Furthermore, it also implies that Var[?t | Ft ] ? R2 and thus we can think of R2 as the (conditional) variance of the noise. An example of R-sub-Gaussian ?t is a zero-mean Gaussian noise with variance at most R2 , or a bounded zero-mean noise lying in an interval of length at most 2R. 2 Optimism in the Face of Uncertainty A natural and successful way to design an algorithm is the optimism in the face of uncertainty principle (OFU). The basic idea is that the algorithm maintains a confidence set Ct 1 ? Rd for the parameter ?? . It is required that Ct 1 can be calculated from X1 , X2 , . . . , Xt 1 and Y1 , Y2 , . . . , Yt 1 and ?with high probability? ?? lies in Ct 1 . The algorithm chooses an optimistic D E estimate ?et = argmax?2Ct 1 (maxx2Dt hx, ?i) and then chooses action Xt = argmaxx2Dt x, ?et which maximizes the reward according to the estimate ?et . Equivalently, and more compactly, the algorithm chooses the pair (Xt , ?et ) = argmax hx, ?i , (x,?)2Dt ?Ct 1 which jointly maximizes the reward. 
We call the resulting algorithm the OFUL ALGORITHM for ?optimism in the face of uncertainty linear bandit algorithm?. Pseudo-code of the algorithm is given in Figure 1. The crux of the problem is the construction of the confidence sets Ct . This construction is the subject of the next section. 3 Self-Normalized Tail Inequality for Vector-Valued Martingales Since the decision sets {Dt }1 t=1 can be arbitrary, the sequence of actions Xt 2 Dt is arbitrary as well. Even if {Dt }1 t=1 is ?well-behaved?, the selection rule that OFUL uses to choose Xt 2 Dt 3 generates a sequence {Xt }1 t=1 with complicated stochastic dependencies that are hard to handle. Therefore, for the purpose of deriving confidence sets it is easier to drop any assumptions on {Xt }1 t=1 and pursue a more general result. If we consider the -algebra Ft = (X1 , X2 , . . . , Xt+1 , ?1 , ?2 , . . . , ?t ) then Xt becomes Ft 1 measurable and ?t becomes Ft -measurable. Relaxing this a little bit, we can assume that {Ft }1 t=0 is any filtration of -algebras such that for any t 1, Xt is Ft 1 -measurable and ?t is Ft -measurable and therefore Yt = hXt , ?? i + ?t is Ft -measurable. This is the setup we consider for derivation of the confidence sets. Pt 1 The sequence {St }1 t=0 , St = s=1 ?t Xt , is a martingale with respect {Ft }t=0 which happens to be crucial for the construction of the confidence sets for ?? . The following theorem shows that with high probability the martingale stays close to zero. Its proof is given in Appendix A Theorem 1 (Self-Normalized Bound for Vector-Valued Martingales). Let {Ft }1 t=0 be a filtration. Let {?t }1 be a real-valued stochastic process such that ? is F -measurable and ?t is conditionally t t t=1 R-sub-Gaussian for some R 0 i.e. ? 2 2? ? ? R 8 2R E e ?t | Ft 1 ? exp . 2 d Let {Xt }1 t=1 be an R -valued stochastic process such that Xt is Ft is a d ? d positive definite matrix. For any t 0, define Vt =V + t X Xs Xs> St = s=1 Then, for any 1 -measurable. t X > 0, with probability at least 1 , for all t 0, ? det(V t )1/2 det(V ) 2 kSt kV 1 ? 2R2 log 2 Note that the deviation of the martingale kSt kV Vt 4 ?s Xs . s=1 1/2 t 1 Assume that V 1 t ? . is measured by the norm weighted by the matrix which is itself derived from the martingale, hence the name ?self-normalized bound?. Construction of Confidence Sets Let ?bt be the `2 -regularized least-squares estimate of ?? with regularization parameter ?bt = (X> 1:t X1:t + I) 1 X> 1:t Y1:t > 0: (1) where X1:t is the matrix whose rows are X1> , X2> , . . . , Xt> and Y1:t = (Y1 , . . . , Yt )> . The following theorem shows that ?? lies with high probability in an ellipsoid with center at ?bt . Its proof can be found in Appendix B. Theorem 2 (Confidence Ellipsoid). Assume the same as in Theorem 1, let V = I , > 0, define Yt = hXt , ?? i + ?t and assume that k?? k2 ? S. Then, for any > 0, with probability at least 1 , for all t 0, ?? lies in the set 8 9 s ? ? < = 1/2 1/2 det(V t ) det( I) Ct = ? 2 Rd : ?bt ? ? R 2 log + 1/2 S . : ; Vt Furthermore, if for all t in the set ( Ct0 = 1, kXt k2 ? L then with probability at least 1 ? 2 Rd : ?bt ? Vt ?R s 4 d log ? 1 + tL2 / ? + , for all t 1/2 S ) . 0, ?? lies The above bound could be compared with a similar bound of Dani et al. (2008) whose bound, under identical conditions, states that (with appropriate initialization) with probability 1 , (s ? 2? ? 2 ?) t 8 t for all t large enough ?bt ?? ? R max 128 d log(t) log , log , (2) 3 Vt p where large enough means that t satisfies 0 < < t2 e 1/16 . 
Denote by $\beta_t(\delta)$ the right-hand side in the last bound. The restriction on t comes from the fact that $\beta_t(\delta) \ge 2d(1 + 2\log(t))$ is needed in the proof of the last inequality of their Theorem 5. On the other hand, Rusmevichientong and Tsitsiklis (2010) proved that for any fixed $t \ge 2$, for any $0 < \delta < 1$, with probability at least $1-\delta$,

$$\big\|\hat\theta_t - \theta_*\big\|_{\bar V_t} \le 2\kappa^2 R\sqrt{\log t}\,\sqrt{d\log(t) + \log(1/\delta)} + \lambda^{1/2} S,$$

where $\kappa = \sqrt{3 + 2\log((L + \mathrm{trace}(V))/\lambda)}$. To get a uniform bound one can use a union bound with $\delta_t = \delta/t^2$. Then $\sum_{t=2}^{\infty}\delta_t = \big(\tfrac{\pi^2}{6} - 1\big)\delta \le \delta$. This thus gives that for any $0 < \delta < 1$, with probability at least $1-\delta$,

$$\forall t \ge 2, \qquad \big\|\hat\theta_t - \theta_*\big\|_{\bar V_t} \le 2\kappa^2 R\sqrt{\log t}\,\sqrt{d\log(t) + \log(t^2/\delta)} + \lambda^{1/2} S .$$

This is tighter than (2), but is still lagging behind the result of Theorem 2.

Note that the new confidence set seems to require the computation of a determinant of a matrix, a potentially expensive step. However, one can speed up the computation by using the matrix determinant lemma, exploiting that the matrix whose determinant is needed is obtained via a rank-one update (cf. the proof of Lemma 11 in the Appendix). This way, the determinant can be kept up-to-date with linear time computation.
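A sketch of the rank-one bookkeeping just described, with Sherman-Morrison maintaining $V^{-1}$ alongside the determinant so each round costs $O(d^2)$; the variable names are illustrative.

```python
import numpy as np

def det_update(V_inv, det_V, x):
    """Matrix determinant lemma: det(V + x x^T) = det(V) (1 + x^T V^{-1} x).
    Sherman-Morrison updates V^{-1} in O(d^2), so det(V_t) stays current
    across rounds without refactorizing the matrix."""
    u = V_inv @ x
    c = 1.0 + x @ u
    det_new = det_V * c
    V_inv_new = V_inv - np.outer(u, u) / c
    return V_inv_new, det_new
```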
(2008). The noise is a zero mean Gaussian with standard deviation = 0.1. The probability that confidence sets fail is = 0.0001. The experiments are repeated 10 times. Input: Constant C > 0 ? = 1 {This is the last time step that we changed ?et } for t := 1, 2, . . . do if det(Vt ) > (1 + C) det(V? ) then (Xt , ?et ) = argmax(x,?)2Dt ?Ct 1 h?, xi. ? = t. end if D E Xt = argmaxx2Dt ?e? , x . Play Xt and observe reward Yt . end for Figure 3: The RARELY SWITCHING OFUL ALGORITHM Average regret 0.2 0.15 0.1 0.05 0 0 0.2 0.4 C 0.6 0.8 1 Figure 4: Regret against computation. We fixed the number of times the algorithm is allowed to update its action in OFUL. For larger values of C, the algorithm changes action less frequently, hence, will play for a longer time period. The figure shows the average regret obtained during the given time periods for the different values of C. Thus, we see that by increasing C, one can actually lower the average regret per time step for a given fixed computation budget. The proof of the theorem is given in Appendix D. Figure 4 shows a simple experiment with the RARELY SWITCHING OFUL ALGORITHM . 5.2 Problem Dependent Bound Let t be the ?gap? at time step t as defined in (Dani et al., 2008). (Intuitively, t is the difference between the rewards of the best and the ?second best? action in the decision set Dt .) We consider 6 the smallest gap ? n = min1?t?n t . This includes the case when the set Dt is the same polytope in every round or the case when Dt is finite. The regret of OFUL can be upper bounded in terms of ( ? n )n as follows. Theorem 5. Assume that 1 and k?? k2 ? S where S all n 1, the regret of the OFUL satisfies 16R2 S 2 ? 64R2 S 2 L Rn ? log(Ln) + (d 1) log ? ?2 n + 2(d 1. With probability at least 1 , for n ? ?2 ? d + nL2 1) log d log + 2 log(1/ ) + 2 log(1/ ) . d The proof of the theorem can be found in the Appendix E. 2 The problem dependent regret of (Dani et al., 2008) scales like O( d log3 n), while our bound scales like O( 1 (log2 n + d log n + d2 log log n)), where = inf n ? n . 6 Multi-Armed Bandit Problem In this section we show that a modified version of UCB has with high probability constant regret. Let ?i be the expected reward of action i = 1, 2, . . . , d. Let ?? = max1?i?d ?i be the expected reward of the best arm, and let i = ?? ?i , i = 1, 2, . . . , d, be the ?gaps? with respect to the best arm. We assume that if we choose action It in round t we obtain reward ?It + ?t . Let Ni,t denote the number of times that we have played action i up to time t, and X i,t denote the average of the rewards received by action i up to time t. We construct confidence intervals for the expected rewards ?i based on X i,t in the following lemma. (The proof can be found in the Appendix F.) Lemma 6 (Confidence Intervals). Assuming that the noise ?t is conditionally 1-sub-Gaussian. With probability at least 1 , where 8i 2 {1, 2, . . . , d}, 8t ci,t = s (1 + Ni,t ) 2 Ni,t ? |X i,t 0 1 + 2 log ? ?i | ? ci,t , d(1 + Ni,t )1/2 ?? . (3) Using these confidence intervals, we modify the UCB algorithm of Auer et al. (2002) and change the action selection rule accordingly. Hence, at time t, we choose the action (4) It = argmax X i,t + ci,t . i We call this algorithm UCB( ). The main difference between UCB( ) and UCB is that the length of confidence interval ci,t depends neither on n, nor on t. This allows us to prove the following result that the regret of UCB( ) is constant. (The proof can be found in the Appendix G.) Theorem 7 (Regret of UCB( )). 
Theorem 7 (Regret of UCB(δ)). Assume that the noise η_t is conditionally 1-sub-Gaussian. Then, with probability at least 1 − δ, the total regret of UCB(δ) is bounded as

  R_n ≤ Σ_{i: Δ_i > 0} ( 3Δ_i + (16/Δ_i) log(2d/(Δ_i δ)) ).

Lai and Robbins (1985) prove that for any suboptimal arm j, asymptotically,

  E[N_{j,t}] ≳ log t / D(p_j, p_*),

where p_* and p_j are the reward densities of the optimal arm and of arm j respectively, and D is the Kullback-Leibler divergence. This lower bound does not contradict Theorem 7, as Theorem 7 only states a high-probability upper bound for the regret. Note that UCB(δ) takes δ as its input. Because with probability δ the regret at time t can be of order t, in expectation the algorithm might incur a regret of δt. Now if we select δ = 1/t, then we get an O(log t) upper bound on the expected regret. If one is interested in an average-regret result, then, with a slight modification of the proof technique, one can obtain a result identical to the one that Auer et al. (2002) prove.

[Figure 5: regret-versus-time curves for the new and old bounds omitted.] Figure 5: The regret of UCB(δ) against time when it uses either the confidence bound based on Hoeffding's inequality or the bound in (3). The results are shown for a 10-armed bandit problem, where the mean value of each arm is fixed to some value in [0, 1]. The regret of UCB(δ) is improved with the new bound. The noise is a zero-mean Gaussian with standard deviation σ = 0.1. The value of δ is set to 0.0001. The experiments are repeated 10 times and the average is shown, together with the error bars.

Figure 5 shows the regret of UCB(δ) when it uses either the confidence bound based on Hoeffding's inequality or the bound in (3). As can be seen, the regret of UCB(δ) is improved with the new bound. Coquelin and Munos (2007) and Audibert et al. (2009) prove similar high-probability constant regret bounds for variations of the UCB algorithm. Compared to their bounds, ours is tighter because, thanks to the new self-normalized tail inequality, we can avoid one union bound. The improvement can also be seen experimentally: the curve that we obtain for the performance of the algorithm of Coquelin and Munos (2007) is almost exactly the same as the curve labeled "Old Bound" in Figure 5.

7 Conclusions

In this paper, we showed how a novel tail inequality for vector-valued martingales allows one to improve both the theoretical analysis and the empirical performance of algorithms for various stochastic bandit problems. In particular, we showed that a simple modification of Auer's UCB algorithm (Auer, 2002) achieves, with high probability, constant regret. Further, we modified and improved the analysis of the algorithm for the linear stochastic bandit problem studied by Auer (2002), Dani et al. (2008), Rusmevichientong and Tsitsiklis (2010), and Li et al. (2010). Our modification improves the regret bound by a logarithmic factor, though experiments show a vast improvement, stemming from the construction of smaller confidence sets. To our knowledge, ours is the first theoretically well-founded algorithm whose performance is practical for this latter problem. We also proposed a novel variant of the algorithm with which we can save a large amount of computation without sacrificing performance. We expect that the novel tail inequality will also be useful in a number of other situations, thanks to its self-normalized form and to the fact that it holds for stopped martingales and can thus be used to derive bounds that hold uniformly in time. In general, the new inequality can be used to improve deviation bounds that rely on a union bound (over time).
Since many modern machine learning techniques rely on having tight high-probability bounds, we expect that the new inequality will find many applications. To mention just a few examples, the new inequality could be used to improve the computational complexity of the HOO algorithm of Bubeck et al. (2008) (when it is used with a fixed δ, by avoiding union bounds, the need to know the horizon, or the doubling trick), to improve the bounds derived by Garivier and Moulines (2008) for UCB in changing environments, or to improve the stopping rules and racing algorithms of Mnih et al. (2008).

References

Y. Abbasi-Yadkori, A. Antos, and Cs. Szepesvári. Forced-exploration based algorithms for playing in stochastic linear bandits. In COLT Workshop on On-line Learning with Limited Feedback, 2009.
N. Abe, A. W. Biermann, and P. M. Long. Reinforcement learning with immediate rewards and linear hypotheses. Algorithmica, 37:263-293, 2003.
A. Antos, V. Grover, and Cs. Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411(29-30):2712-2728, 2010.
J.-Y. Audibert, R. Munos, and Cs. Szepesvári. Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876-1902, 2009.
P. Auer. Using upper confidence bounds for online learning. In FOCS, pages 270-279, 2000.
P. Auer. Using confidence bounds for exploitation-exploration trade-offs. JMLR, 2002.
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002.
S. Bubeck, R. Munos, G. Stoltz, and Cs. Szepesvári. Online optimization in X-armed bandits. In NIPS, pages 201-208, 2008.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
W. Chu, L. Li, L. Reyzin, and R. E. Schapire. Contextual bandits with linear payoff functions. In AISTATS, 2011.
P.-A. Coquelin and R. Munos. Bandit algorithms for tree search. In UAI, 2007.
V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic linear optimization under bandit feedback. In Rocco Servedio and Tong Zhang, editors, COLT, pages 355-366, 2008.
V. H. de la Peña, M. J. Klass, and T. L. Lai. Self-normalized processes: exponential inequalities, moment bounds and iterated logarithm laws. Annals of Probability, 32(3):1902-1933, 2004.
V. H. de la Peña, T. L. Lai, and Q.-M. Shao. Self-Normalized Processes: Limit Theory and Statistical Applications. Springer, 2009.
O. Dekel, C. Gentile, and K. Sridharan. Robust selective sampling from single and multiple teachers. In COLT, 2010.
D. A. Freedman. On tail probabilities for martingales. The Annals of Probability, 3(1):100-118, 1975.
A. Garivier and E. Moulines. On upper-confidence bound policies for non-stationary bandit problems. Technical report, LTCI, 2008.
R. Kleinberg, A. Niculescu-Mizil, and Y. Sharma. Regret bounds for sleeping experts and bandits. Machine Learning, pages 1-28, 2008.
T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4-22, 1985.
T. L. Lai and C. Z. Wei. Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems. The Annals of Statistics, 10(1):154-166, 1982.
T. L. Lai, H. Robbins, and C. Z. Wei. Strong consistency of least squares estimates in multiple regression. Proceedings of the National Academy of Sciences, 75(7):3034-3036, 1979.
L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web (WWW 2010), pages 661-670. ACM, 2010.
V. Mnih, Cs. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In ICML, pages 672-679, 2008.
H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527-535, 1952.
P. Rusmevichientong and J. N. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395-411, 2010.
G. W. Stewart and J.-G. Sun. Matrix Perturbation Theory. Academic Press, 1990.
T. J. Walsh, I. Szita, C. Diuk, and M. L. Littman. Exploring compact reinforcement-learning representations with linear regression. In UAI, pages 591-598. AUAI Press, 2009.
θ-MRF: Capturing Spatial and Semantic Structure in the Parameters for Scene Understanding

Congcong Li, Ashutosh Saxena, Tsuhan Chen
Cornell University, Ithaca, NY 14853, United States
[email protected], [email protected], [email protected]

Abstract

For most scene understanding tasks (such as object detection or depth estimation), the classifiers need to consider contextual information in addition to the local features. We can capture such contextual information by taking as input the features/attributes from all the regions in the image. However, this contextual dependence also varies with the spatial location of the region of interest, and we therefore need a different set of parameters for each spatial location. This results in a very large number of parameters. In this work, we model the independence properties between the parameters for each location and for each task by defining a Markov Random Field (MRF) over the parameters. In particular, two sets of parameters are encouraged to have similar values if they are spatially close or semantically close. Our method is, in principle, complementary to other ways of capturing context, such as the ones that use a graphical model over the labels instead. In extensive evaluation over two different settings, of multi-class object detection and of multiple scene understanding tasks (scene categorization, depth estimation, geometric labeling), our method beats the state-of-the-art methods in all four tasks.

1 Introduction

Most scene understanding tasks (e.g., object detection, depth estimation, etc.) require that we exploit contextual information in addition to the local features for predicting the labels. For example, a region is more likely to be labeled as a car if the region below it is labeled as road. That is, we have to consider information in a larger area around the region of interest. Furthermore, the location of the region in the image can also have a large effect on its label, and on how it depends on the neighboring regions. For example, one would look for sky or clouds when looking for an airplane; however, if one sees grass or a runway, then there may still be an airplane (e.g., when the airplane is on the ground). Here the contextual dependence of the airplane classifier changes based on the object's location in the image.

We can capture such contextual information by using features from all the regions in the image, and then also training a specific classifier for each spatial location for each object category. However, the dimensionality of the feature space would become quite large,¹ and training a classifier with limited training data would not be effective. In such a case, one could reduce the amount of context captured to prevent overfitting. For example, some recent works [22, 33, 37] use context by encoding input features, but are limited in the amount of context area they can handle. In our work, we do not want to reduce the amount of context captured. We therefore keep the large number of parameters, and model the interaction between the parameters of the classifiers at different locations and different tasks; for example, the parameters of two neighboring locations should be similar. The key contribution of our work is to note that the interaction between two parameters need not have a directionality.
These interactions are sparse, and we represent them as an undirected graph where the nodes represent the parameters for each location (for each task) and the edges represent the interactions between the parameters. We call this representation a θ-MRF, i.e., a Markov Random Field over the parameters. This idea is, in principle, complementary to previous works that capture context through the correlation between the labels. Note that our goal is not to directly compare against such models; instead, we want to answer the question: how far can we go by modeling only the interactions between the parameters?

¹ As an example, consider the problem of object detection with many categories: we have 107 object categories which may occur at any spatial location in the image. Even if we group the regions into 64 (8 × 8) spatial locations, the total number of parameters will be 107 × 64 × K (for K features each). This is rather large; e.g., in our multi-class object detection task this number is about 47.6 million (see Section 4).

The edges in our θ-MRF connect not only spatial neighbors but also semantic neighbors. In particular, if two tasks are highly correlated, their parameters given the same image context should be similar. For example, an oven is often next to a dishwasher (in a kitchen scene); the two should therefore share similar context, indicating that they can share their parameters. These semantic interactions between the parameters of different tasks also follow the undirected graph. Just as object labels are often modeled as conditionally independent of other non-contextual objects given the important context, the corresponding parameters can be modeled similarly.

There has been a large body of work that captures contextual information in many different ways, often complementary to ours. These methods range from capturing the correlation between labels using a graphical model to introducing different types of priors on the labels (based on location, prior knowledge, etc.). For example, a graphical model (directed or undirected) is often used to model the dependency between different labels [29, 40, 19, 17]. Informative priors on the labels are also commonly used to improve performance (e.g., [47]). Some previous works enforce priors on the parameters as a directed graph [46, 32], but our model offers a different and perhaps more relevant perspective than a directed model, in terms of the independence properties modeled.

We extensively evaluate our method in two different settings. First, we consider the task of labeling 107 object categories in the SUN09 dataset, and show that our method achieves better performance than the state-of-the-art methods even with simple regression as the learning model. Second, we consider the multiple tasks of scene categorization, depth estimation and geometric labeling, and again show that our method achieves comparable or better performance than the state-of-the-art methods when used with simple regression. Furthermore, we show that our performance is much higher than that obtained by just using other methods of putting priors on the parameters.

2 Related Work

There is a large body of work that leverages contextual information; we cannot possibly do justice to the literature, but we mention a few works here. Various sources of context have been explored, ranging from the global scene layout and interactions between regions to local features. To incorporate scene-level information, Torralba et al.
[47] use the statistics of low-level features across the entire scene to prime object detection. Hoiem et al. [24] and Saxena et al. [45] use 3D scene information to provide priors on potential object locations. Li et al. [32] propose a hierarchical model to make use of contextual information between tasks on different levels. There are also generic approaches [22, 31] that leverage related tasks to boost the overall performance, without requiring considerable insight into the specific tasks. Many works also model context to capture the local interactions between neighboring regions [23, 35, 28], objects [48, 14], or both [16, 10, 2]. Object co-occurrence statistics have also been captured in several ways, e.g., using a CRF [40, 19, 17]. Desai et al. [9] combine individual classifiers by considering spatial interactions between the object detections, and solve a unified multi-class object detection problem through a structured discriminative approach. Other ways to share information across categories include sharing representations [12, 30], sharing training examples between categories [36, 15], sharing parameters [26, 27], and so on. Our work lies in the category of sharing parameters, aiming at capturing the dependencies in the parameters for relevant vision applications.

There are several regularization methods for when the number of parameters is quite large, e.g., based on L2 norms [6] and Lasso shrinkage methods [42]. Liang et al. [34] present an asymptotic analysis of smooth regularizers. Recent works [26, 1, 18, 25] place interesting priors on parameters. Jalali et al. [26] do multi-task learning by expressing the parameters as a sum of two parts, one shared and one specific to the task, combining the ℓ∞ penalty and the ℓ1 penalty to obtain block-sparse and element-wise sparse components in the parameters. Negahban and Wainwright [38] analyze when the ℓ1,∞ norm can be useful. Kim and Xing [27] use a tree to construct the hierarchy of multi-task outputs, and then use the tree-guided group lasso to regularize the multi-task regression. In contemporary work [43], Salakhutdinov et al. learn a hierarchy to share the hierarchical parameters for the object appearance models. Our work is motivated by this direction of work, and our focus is to capture spatial and semantic sharing in parameters using undirected graphical models that have the appropriate independence properties.

[Figure 1: graph schematic omitted.] Figure 1: The proposed θ-MRF graph with spatial and semantic interaction structure.

Bayesian priors over parameters are also quite commonly used. For example, [3] uses Dirichlet priors for the parameters of a multinomial and a normal distribution respectively. In fact, there is a huge body of work on using non-informative prior distributions over parameters [4]; this is particularly useful when the amount of data is not enough to train the parameters. If all the distributions involved (including the prior distribution) are Gaussian, the parameters satisfy certain useful statistical hyper Markov properties [41, 21, 8]. In applications, [46] considers capturing relationships between the object categories using a Dirichlet prior on the parameters, and [20] considers putting posterior sparsity on the parameters instead of parameter sparsity.
[11] present a method to learn hyperparameters for CRF-type models. Most of these methods express the prior as another distribution with hyperparameters; one can view this as a directed graphical model over the parameters. We, on the other hand, express relationships between two parameters of the distribution directly, which does not necessarily involve hyperparameters. This also allows us to capture interesting independence properties.

3 Our Approach: θ-MRF

In order to give better intuition, we use the multi-class object detection task as an illustrative example. (Later we will describe and apply the approach to other scene understanding problems.) Consider K-class object detection. We uniformly divide an image into L grid cells. We then have a binary classifier whose output y_{k,ℓ}^{(n)} ∈ {0, 1} indicates the presence of the k-th object at the ℓ-th grid cell in the n-th image. Let x^{(n)} be the features (or attributes) extracted from the n-th image, and let the parameters of the classifier be θ_{k,ℓ}. Let θ_k = (θ_{k,1}, . . . , θ_{k,L}), and let θ be the set {θ_k}, k = 1, . . . , K. Let P(y_{k,ℓ} | x^{(n)}, θ_{k,ℓ}) be the probability of the output given the input features and the parameters. In order to find the classifier parameters, one typically solves an optimization problem such as

  minimize_θ  Σ_n Σ_{k,ℓ} −log P(y_{k,ℓ} | x^{(n)}, θ_{k,ℓ}) + R(θ),   (1)

where R(θ) is a regularization term (e.g., λ‖θ‖₂² with λ a tuning parameter). (In the Bayesian view, it is a prior on the parameters that could be informative or non-informative.) Let us write J(θ_{k,ℓ}) = −log P(y_{k,ℓ} | x^{(n)}, θ_{k,ℓ}) for the cost of the data-dependent term for θ_{k,ℓ}. The exact form of J(θ_{k,ℓ}) depends on the particular learning model being used over the labels y; for example, for logistic regression it is

  J(θ_{k,ℓ}) = −log [ (1 / (1 + e^{−θ_{k,ℓ}ᵀ x^{(n)}}))^{y_{k,ℓ}} (1 − 1 / (1 + e^{−θ_{k,ℓ}ᵀ x^{(n)}}))^{1−y_{k,ℓ}} ].

Motivated by the earlier discussion, we want to model the interactions between the parameters of the different classification models, indexed by {k, ℓ}, which we merge into a single index m. We represent these interactions as an undirected graph G where each node m represents the parameters θ_m, and the edges E represent the interactions between pairs of parameter sets θ_i and θ_j. These interactions are often sparse. We call this graph a θ-MRF. Eq. 1 can now be viewed as optimizing the energy function of the MRF over the parameters, i.e.,

  minimize_θ  Σ_{m∈G} J(θ_m) + Σ_{(i,j)∈E} R(θ_i, θ_j),   (2)

where J(θ_m) is now the node potential and the terms R(θ_i, θ_j) correspond to the edge potentials. Note that this MRF is quite complementary to whatever modeling structure one may impose over the labels y, which may itself be an MRF. The θ-MRF is different from label-based MRFs, whose variables y are typically low-dimensional: in our parameter-based MRF, each node is a high-dimensional variable θ_m. One nice property of having an MRF over the parameters is that it does not increase the complexity of the inference problem.

In previous work (see also Section 2), several priors have been used on the parameters. Such priors often take the form of a distribution with additional hyperparameters; this corresponds to a directed model on the θ, and in some application scenarios such a model may not be able to express the desired conditional independence properties and may therefore be sub-optimal. Our θ-MRF is largely a non-informative prior, and also corresponds to certain regularization methods. See Section 5 for experimental comparisons with different forms of priors.
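As a concrete illustration of Eq. 2, the following minimal sketch (ours; the paper does not publish code) evaluates the θ-MRF energy with logistic-regression node potentials and weighted ℓ2 edge potentials; edges is an assumed list of (i, j, w_ij) triples encoding the graph:

    import numpy as np

    def node_potential(theta_m, X, y):
        # J(theta_m): negative log-likelihood of a logistic classifier.
        p = 1.0 / (1.0 + np.exp(-(X @ theta_m)))
        eps = 1e-12
        return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    def theta_mrf_energy(thetas, data, edges, lam=1.0):
        # Eq. 2: sum of node potentials plus edge potentials
        # lam * w_ij * ||theta_i - theta_j||_2 over the parameter graph.
        energy = sum(node_potential(thetas[m], X, y)
                     for m, (X, y) in data.items())
        energy += sum(lam * w * np.linalg.norm(thetas[i] - thetas[j])
                      for (i, j, w) in edges)
        return energy

    rng = np.random.default_rng(0)     # toy usage: two coupled nodes
    thetas = {0: rng.standard_normal(3), 1: rng.standard_normal(3)}
    data = {m: (rng.standard_normal((5, 3)), rng.integers(0, 2, 5))
            for m in thetas}
    print(theta_mrf_energy(thetas, data, edges=[(0, 1, 1.0)]))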
Having presented this general notion of a θ-MRF, we now describe two types of interactions that it models well.

Spatial interactions. Intuitively, the classifiers at neighboring spatial regions (for the same object category) should share their parameters. To model this type of interaction, we introduce edges in the θ-MRF that connect the spatially neighboring nodes, as shown in Figure 1-left. Note that the spatial edges only couple the parameters of the same task together; this type of edge does not exist across tasks. We define the edge potential as

  R(θ_i, θ_j) = λ_spt ‖θ_i − θ_j‖_p  if θ_i and θ_j are spatial neighbors for a task,  and 0 otherwise,

where λ_spt is a tuning factor for the spatial interactions. When p ≥ 1, this potential has the nice property of being convex. Note that such a potential has been used extensively in MRFs over labels, e.g., [44]. Also note that this potential does not make the original learning problem in Equation 1 any "harder": if the original objective J(θ) is convex, then the overall problem remains convex. In this work, we consider p = 1 and p = 2.

In addition to connecting the parameters of neighboring locations, we also encourage sharing between the elements of a parameter vector that correspond to spatially neighboring inputs. The intuition is described by the following example. Assume we have the presence of the object "road" at the different regions of an image as attributes. In order to learn a car detector with these attributes as inputs, we would like to give similarly high weights to neighboring regions in the car detector output. We call this source-based spatial grouping, as compared to the target-based spatial grouping described in the previous paragraph. We found that this also gives us a contextual map (i.e., parameters that map the features/attributes of the neighboring regions) that is more spatially structured. This interaction happens within the same node of the graph, and is therefore equivalent to adding an extra term to the node potential of the θ-MRF:

  J_new(θ_m) = J(θ_m) + λ_src Σ_{t1} Σ_{t2 ∈ Nr(t1)} ‖θ_m^{t1} − θ_m^{t2}‖_p,   (3)

where θ_m^{t1} and θ_m^{t2} are the weights given to the t1-th and t2-th feature inputs, and t2 ∈ Nr(t1) means that the respective features are the same type of attribute from neighboring regions. Equation 3 can be rewritten as J_new(θ_m) = J(θ_m) + λ_src ‖T θ_m‖_p, where T is the linear transform matrix that computes the differences between neighbors and λ_src is a tuning factor for the source interactions. (A sketch of such a difference operator follows.)
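The transform T of Equation 3 is simply a signed difference (incidence) matrix over the attribute grid. A minimal sketch (ours) for a 1-D chain of neighboring feature inputs; a 2-D grid would stack one such operator for the rows and one for the columns:

    import numpy as np

    def chain_difference_operator(n_features):
        # (T @ theta)[t] = theta[t] - theta[t + 1] for consecutive inputs,
        # so lam_src * ||T @ theta||_p penalizes spatially rough
        # contextual maps.
        T = np.zeros((n_features - 1, n_features))
        for t in range(n_features - 1):
            T[t, t] = 1.0
            T[t, t + 1] = -1.0
        return T

    theta = np.array([0.9, 0.8, 0.1, 0.0])
    T = chain_difference_operator(len(theta))
    print(np.linalg.norm(T @ theta, ord=1))   # source-grouping penalty, p = 1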
Semantic interactions. We not only connect the parameters of spatial neighbors within the same task, but also consider semantic neighbors across tasks. Motivated by the conditional independencies among the object labels, which suggest that given the important context the presence of an object is independent of other, non-contextual objects, we can encode such properties in our θ-MRF. For example, the road often appears below the car. Note that in our framework the road classifier and the car classifier take the same features as input, extracted from all regions of the image to capture long-range context. Because of the high co-occurrence of these two objects, their corresponding detectors should be activated simultaneously. Therefore, the parameter for detecting "road" at a bottom region of the image can be partly shared with the parameter for detecting "car" above that region.

Assuming we already know the dependencies between the objects, we introduce the semantic edge potential of the θ-MRF, as shown in Figure 1-right:

  R(θ_i, θ_j) = λ_smn w_ij ‖θ_i − θ_j‖_p  if θ_i and θ_j are semantic neighbors,  and 0 otherwise,

where w_ij indicates the strength of the semantic dependency between the two parameters and λ_smn is a tuning factor for the semantic interactions. In the following we discuss how to find the semantic connections and the weights w.

Finding the semantic neighbors. We first calculate the positive correlations between the tasks from the ground-truth training data. If two tasks are highly positively correlated, they are likely to share some of their parameters. In order to model how they share parameters, we model the relative spatial relationship between the positive outputs of the two tasks. For example, assume we have two highly co-occurring object categories, indexed by k1 and k2. From the training data, we learn the relative spatial distribution map of the presence of the k2-th object, given the k1-th object in the center. We then find the top M highest-response regions on the map, each of which has a relative location Δℓ and a co-occurring response w. The parameters of the k2-th object that satisfy these relative locations then have semantic edges with θ_{k1,ℓ1}.

[Figure 2: pipeline schematic omitted.] Figure 2: An instantiation of the proposed algorithm for the object recognition tasks on the SUN09 dataset.

Learning and Optimization. R(θ) couples the otherwise independent parameters. Typically, the total number of parameters in an application is quite large (e.g., 47.6 million in one of our applications; see Section 4), so running an optimization algorithm jointly over all the parameters would either not be feasible or would converge very slowly in practice. Since the parameters follow conditional independence assumptions and a nice topological structure, we can optimize more connected subsets of the parameters separately, and then iterate; these separate sub-problems can also run in parallel. In our implementation, the R(θ) terms and J(θ_m) are convex, and such a decomposed algorithm for optimizing the parameters is guaranteed to converge to the global optimum [5].
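A minimal sketch (ours; the step size and the use of squared ℓ2 edge terms, chosen here for differentiability, are simplifying assumptions, since the ℓ1/ℓ2 potentials above would call for subgradient or proximal steps) of one sweep of this decomposed block-coordinate optimization:

    import numpy as np

    def block_coordinate_sweep(thetas, grads, edges, lam=1.0, lr=0.01):
        # One pass over the blocks: for each node m, all neighbors are
        # held fixed and theta_m takes a gradient step on
        # J(theta_m) + lam * sum_j w_mj * ||theta_m - theta_j||_2^2.
        for m in thetas:
            g = grads[m](thetas[m])        # gradient of J at theta_m
            for (i, j, w) in edges:
                if i == m:
                    g = g + 2.0 * lam * w * (thetas[m] - thetas[j])
                elif j == m:
                    g = g + 2.0 * lam * w * (thetas[m] - thetas[i])
            thetas[m] = thetas[m] - lr * g
        return thetas

Repeating such sweeps (possibly over disjoint subsets of blocks in parallel) converges when the node and edge terms are convex, which is the setting used in our implementation.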
4 Applications

We apply our θ-MRF in two different settings: 1) object detection on the SUN09 dataset [7]; and 2) multiple scene understanding tasks (scene categorization, geometric labeling, depth estimation), comparing to the cascaded classification models (CCM) [22, 31].

Object Detection. The task of object detection is to recognize and localize objects of interest in an image. We use the SUN09 dataset introduced in [7], which has 4,367 training images and 4,317 test images. Choi et al. [7] use an additional set of 26,000 images to train baseline detectors [13], and select 107 object categories to evaluate their contextual model. We follow the same settings as [7], i.e., we use the same baseline object detector outputs as the attribute inputs for our algorithm, the same training/testing data, and the same evaluation metrics. For evaluation, a predicted bounding box is considered correct if it overlaps the ground-truth bounding box (in the intersection/union sense) by more than 50%. We compute the average precision (AP) of the precision-recall curve for each category, and the mean AP across categories as the overall performance.

We use each of the baseline object detectors to produce an 8 × 8 detection map, with each element indicating the confidence (between 0 and 1) of the object's presence in the respective region. We also define 107 scene categories, where the i-th (i = 1, . . . , 107) scene category indicates the type of scene containing the i-th object. We train a logistic regression classifier for each scene category. The 107 8 × 8 object maps and the 107 scene classifier outputs together form a 6955-dimensional feature vector, used as the attribute inputs for our algorithm. The setup is shown in Figure 2.

We divide an image into 8 × 8 regions. Our algorithm learns a region-specific contextual model for each object category, resulting in a specific classifier for each region of each category. The 8 × 8 division is determined based on the criterion that more than 70% of the training data contain bounding boxes no smaller than a single grid cell. We use a linear model for each classifier, so we have 6955 × 8 × 8 × 107 = 47,627,840 parameter dimensions in total. Our θ-MRF captures the independencies between these parameters based on location and semantics. The ℓ-th region is labeled as positive for the k-th object category if it satisfies overlap(O_k, R_ℓ) / min(area(R_ℓ), area(O_k)) > 0.3, where O_k denotes a bounding-box instantiation of the k-th object and R_ℓ denotes the ℓ-th grid cell. Negative examples are sampled from the false positives of the baseline detectors. We apply the trained classifiers to the test images to obtain the object detection maps. To create bounding-box based results, we use the candidate bounding boxes created by the baseline detectors, and average the scores obtained from our algorithm within each bounding box as the confidence score for that candidate.

Multiple Scene Understanding Tasks. We consider the task of estimating different types of labels in a scene: scene categorization, geometry labeling, and depth estimation. We compose these three tasks in the feed-forward cascaded classification models (CCM) [22]. CCM creates repeated instantiations of each classifier on multiple layers of a cascade, where the latter-layer classifiers take the outputs of the previous-layer classifiers as input. The previous CCM algorithms [22, 31] consider sharing information across tasks, but do not consider sharing between categories or between different spatial regions within a task.

Table 1: Performance of object recognition and detection on the SUN09 dataset.

  Model                      Object Recognition (% AP)   Object Detection (% AP)
  Chance                     5.34                        N/A
  Baseline (w/o context)     17.9                        7.06
  Single model per object    22.3                        8.02
  Independent model          22.9                        8.18
  State-of-the-art [7]       25.2                        8.33
  θ-MRF (l2-regularized)     26.4                        8.76
  θ-MRF (l1-regularized)     27.0                        8.93

Table 2: Performance of scene categorization, geometric labeling, and depth estimation in CCM.

  Model                          Scene Categorization (% AP)   Geometric Labeling (% AP)   Depth Estimation (RMSE in m)
  Chance                         22.5                          33.3                        24.6
  Baseline (w/o context)         83.8                          86.2                        16.7
  State-of-the-art [31]          86.1                          88.9                        15.2
  CCM [22] (our implementation)  83.8                          87.0                        16.5
  θ-MRF (l2-regularized)         85.7                          88.6                        15.3
  θ-MRF (l1-regularized)         86.3                          89.2                        15.2
Here we introduce the semantically-grouped regularization for scene categorization, and the spatially-grouped regularization for depth and geometry estimation. For the three tasks we consider, we use the same datasets and two-layer settings as [31].

For scene categorization, we classify 8 categories on the MIT outdoor scene dataset [39]. We consider two semantic groups: man-made (tall building, inside city, street, highway) and natural (coast, open country, mountain, forest). Semantic edges are introduced between the parameters within each group. We train a logistic classifier for each scene category, giving a total of 8 parameter vectors for the scene categorization task. We evaluate performance by measuring the accuracy of assigning the correct scene label to an image.

For depth estimation, we train a specific linear regression model for every region of the image (with uniformly divided 11 × 10 regions), and incorporate the spatial grouping on both the second-layer inputs and outputs. This gives a total of 110 parameter vectors for the depth estimation task. We evaluate performance by computing the root mean square error of the estimated depth with respect to ground-truth laser scan depth on the Make3D Range Image dataset [44].

For geometry labeling, we use the dataset and the algorithm of [24] as the first-layer geometric labeling module, and use a single segmentation with about 100 segments per image. On the second layer, we train a logistic regression classifier for every region of the image (with uniformly divided 16 × 16 regions), and incorporate the spatial grouping on both the second-layer inputs and outputs. This gives a total of 768 parameter vectors. We then assign the geometric label to each segment based on the average confidence scores within the segment. We evaluate performance by computing the accuracy of assigning the correct geometric label to a pixel.

5 Experiments

We evaluate the proposed algorithm on two applications: (1) object recognition and detection on the SUN09 dataset with 107 object categories; and (2) the multi-task cascaded structure that composes scene categorization, depth estimation and geometric labeling on multiple datasets, as described in Section 4. Training our algorithm takes 6-7 hours for object detection/recognition and 3-4 hours for the multi-task cascade. The attribute models in (1) and the first-layer base classifiers in (2) are pre-trained. The complexity of our inference is no more than a constant times the complexity of inference of an individual classifier. Furthermore, the inference for different classifiers can be easily parallelized. For example, a base object detector [13] takes about 1.5 seconds to produce results for an image; our algorithm, taking the outputs of the base detectors as input, requires an overhead of less than 0.2 seconds.

5.1 Overall performance on multiple tasks in the CCM structure.

Table 2 shows the performance of different methods on the three tasks composed into the cascaded classification model (CCM) [22]. "Baseline" denotes the individual first-layer classifier for each task, "State-of-the-art" corresponds to the state-of-the-art algorithm for each sub-task on that specific dataset, and "CCM" corresponds to the second-layer output for each sub-task in the CCM structure. The results are computed as the average performance over 6-fold cross validation. With the semantic and spatial regularization, our proposed θ-MRF algorithm improves significantly over the CCM algorithm, which also uses the same set of tasks for prediction. Finally, we perform better than the state-of-the-art algorithms on two tasks and comparably on the third.

Is the θ-MRF "complementary" to a label-MRF? In this experiment, we also consider the MRF over labels [44] together with our θ-MRF for depth estimation. The combination results in a lower root-mean-square error (RMSE) of 15.0 m, compared to 15.2 m for the θ-MRF alone and 16.0 m for the label-MRF alone. This indicates that our method is complementary to the traditional MRF over labels.

5.2 Overall performance on SUN09 object detection.

Table 1 gives the performance of different methods on the SUN09 dataset, for both object recognition (predicting object presence) and object detection (predicting object location). A detection is counted as correct under the 50% intersection/union rule of Section 4, and grid regions are labeled positive under the 0.3 overlap rule; both tests are sketched below.
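A minimal sketch of these two overlap tests (our illustration, not the authors' code; boxes are assumed to be (x1, y1, x2, y2) tuples):

    def intersection(a, b):
        # Area of the intersection of two axis-aligned boxes (0 if disjoint).
        w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        return w * h

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    def positive_region(obj_box, region_box, thresh=0.3):
        # Labeling rule: overlap(O_k, R_l) / min(area(R_l), area(O_k)) > 0.3
        return intersection(obj_box, region_box) / min(
            area(region_box), area(obj_box)) > thresh

    def correct_detection(pred_box, gt_box, thresh=0.5):
        # Evaluation rule: intersection over union > 50%.
        inter = intersection(pred_box, gt_box)
        return inter / (area(pred_box) + area(gt_box) - inter) > thresh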
corresponds to the state-of-the-art algorithm for each sub-task respectively for that specic dataset, and ?CCM? corresponds to the second-layer output for each sub-task in the CCM structure. The results are computed as the average performance over 6-fold cross validation. With the semantic and spatial regularization, our proposed ?-MRF algorithm improves significantly over the CCM algorithm that also uses the same set of tasks for prediction. Finally, we perform better than the state-of-the-art algorithms on two tasks and comparably for the third. Is ?-MRF ?complementary? to label-MRF? In this experiment, we also consider the MRF over labels [44] together with our ?-MRF for depth estimation. The combination results in a lower rootmean-square-error (RMSE) of 15.0m as compared to 15.2m for ?-MRF alone and 16.0m for labelMRF alone. This indicates that our method is complementary to the traditional MRF over labels. 5.2 Overall performance on SUN09 object detection. Table 1 gives the performance of different methods on SUN09 dataset, for both object recognition (predicting the object presence) and object detection (predicting the object location). 6 !"#$%&'(&))*+,-&./+0&.&%/#/.,+ !"#$%&'(&))*+,-&./+0&.&%/#/.,+ D$<+$E+#.&C2C25+C%&5/,+ ;$<+$=+#.&32324+3%&4/,+ 5000 4500 5000 +3)45+ 6$../)&'$2+3/#7//2+#-/+)/&.2/4+0&.&%/#/.,8+1&2+9+$#-/.,+ 4500 4000 7$../)&'$2+5/#1//2+#-/+)/&.2/6+0&.&%/#/.,8+&12324+9:+$#-/.,+ 4000 3500 3500 +3)45+ +(&.+ +.$&4+ 3000 2000 +5)64+ 3000 2000 +.$&4+ 1500 1500 +,#.//#)34-#+ 1000 1000 1&2+ 500 20 40 +5&)($2*+ &12324+ 500 0 0 5&)($2*++,#.//#)34-#+ !"#"$%&'()"*+&*,-'$"8+++++++++++++++++++++++++&12324+6/#/(#$.8++++><?@+ ++++++++++++++++++++++++++++++++++++0&.&%/#/.A,-&./6+&12324+6/#/(#$.+++++B<C@+ 2500 !"#"$%&'()"*+&*,-'$"8++++++++++++++++++++++++++++++++1&2+4/#/(#$.8+:;<=>+ +++++++++++++++++++++++++++++++++++++++++++0&.&%/#/.9,-&./4+1&2+4/#/(#$.8+:?<@>+ +(&.+ 2500 0 +5)64+ 60 80 100 120 0 20 40 60 80 100 120 D5E/(#+7&#/4$.3/,+ A3B/(#+6&#/5$.C/,+ Figure 3: Examples showing that infrequent object categories share parameters with frequent object categories. - Baseline (w/o context): the baseline object detectors trained by [13], which are also used to generate the initial detection results used as inputs for our algorithm and the state-of-the-art algorithm. - Single model: a single classifier is trained for each object category, not varying across different locations. In the following, if not specified, we use a l1 -regularized linear regression as the classifier. - Independent model: this means an independent classifier is trained for the presence of an object for each region. There is no information sharing between the models belonging to different locations of the same category, or different categories. - State-of-the-art: This is the tree-based graphical model proposed in [7], which explicitly models the object dependencies based on labels and detector outputs.2 - The proposed ?-MRF algorithm, which shares the models spatially within an object category and semantically across various objects. We evaluate both the l1 and l2 regularization on the potentials. Table 1 shows the location-specific model (Independent) is better than the general model (Single model), which confirms our intuition that the contextual model is location-specific. Furthermore, our approach that shares parameters spatially and semantically significantly outperforms the independent model without these regularizations. 
We also note that our algorithm can achieve comparable performance to the state-of-the-art algorithm, without explicitly modeling the probabilistic dependency between the objects labels. We study the relative improvement of the proposed parameter sharing algorithm over the nonparameter-sharing algorithm (Independent model in Table 1) on object categories with different number of training samples in the SUN09 object recognition task. The relative improvement on object categories with less than 200 training samples is 34.2%, while the improvement on objects with more than 200 training samples is 11.5%. Our parameter sharing algorithm helps the infrequent objects implicitly make use of the data of frequent objects to learn better models. We give two examples in Fig. 3, focusing on two infrequent object categories: van and awning, respectively. The histogram in the figures shows the number of training instances for each object category. The color bar shows the correlation between the learned parameter of the object with the parameters for other objects. The redder indicates the higher correlation between the parameters of the respective categories. Figure 3-left shows that the van category has few training instances, turn out to share the parameters strongly with the categories of car, building and road. Similarly, Figure 3-right shows how the learned awning parameters with other categories. We note that in the dataset, awning and streetlight are not highly co-occuring, thus initially when we create the semantic groups, these two objects do not appear simultaneously in any group. However, the semantic groups containing streetlight and the semantic groups containing awning both contain objects like road, building, and car. Through our ?-MRF algorithm, the sharing information can be transferred. Effect of different priors. We compare our spatially-grouped and semantically-grouped regularization with other parameter sharing algorithms such as the prior-based algorithms in Figure 4. %&'()&"*+,'+" #$" %&'(')"*+,-&" .+/,+0" !!!" #--" !!!" #-0" .-" !!!" !!!" #/0" ./" #$$" !!!" #$" #$3" 1$" !!!" #2" !!!" !!!" &'()*+,"$-./0" 1-2.-3" #%%" #23" 12" #$%" !!!" !!!" !!!" !!!" 4%" !!!" #$6" 46" 45" !!!" 47" Figure 4: Some baseline prior-based algorithms we compare the propose algorithm with. From left to right: these models use global prior, spatial-based prior, and semantic-based prior. 2 We evaluate the contextual model in [7] using the software published by the authors: http://web. mit.edu/?myungjin/www/HContext.html and report the average performance on multiple runs. 7 Top-ranked Contexts Target: Shoes? Shelves Target: Oven? Top-ranked Contexts Closet Microwave Stove Box Target: Shoes? Top-ranked Contexts Target: Refrigerator? Dishwasher Top-ranked Contexts Wall Stove Cabinet Floor Top-ranked Contexts Target: Shoes? Floor Countertop Desk Microwave Top-ranked Contexts Target: Sink? Wall Cabinet Stove Microwave Figure 5: Examples of the visual context learned from the proposed algorithm. Six examples are given. In each example, the left figure illustrates the task: whether the white region belongs to the target category. The following three figures shows the contextual inputs (showing the spatial map) which have the top ranked weights (highest positive elements of the parameters) . - Global prior (Fig. 4-left): all the classifiers share the prior ?0 , and the parameter for a classifier is defined as: ?k,l = ?0 + ?k,l . Assuming zero-mean gaussian for each ?, we have the ` ? 
- Global prior (Fig. 4-left): all the classifiers share a prior θ₀, and the parameter for a classifier is defined as θ_{k,ℓ} = θ₀ + Δ_{k,ℓ}. Assuming a zero-mean Gaussian distribution for each Δ, we obtain the regularization term in Equation 1: R(θ) = Σ_{k,ℓ} R(θ_{k,ℓ}) = Σ_{k,ℓ} ( λ_{θ₀} ‖θ₀‖₂ + λ_Δ ‖Δ_{k,ℓ}‖₂ ).
- Spatial prior (Fig. 4-middle): the classifiers for the same object O_k at different locations share a prior θ_k, i.e., θ_{k,ℓ} = θ_k + Δ_{k,ℓ}. Thus R(θ) = Σ_{k,ℓ} R(θ_{k,ℓ}) = Σ_{k,ℓ} ( λ_{θk} ‖θ_k‖₂ + λ_Δ ‖Δ_{k,ℓ}‖₂ ).
- Semantic prior (Fig. 4-right): the classifiers from the same semantic group share a prior θ_{Gi}, and the parameter for a classifier is defined as θ_{k,ℓ} = θ_{Gi} + Δ_{k,ℓ}, where θ_{k,ℓ} ∈ G_i. Thus R(θ) = Σ_{k,ℓ} R(θ_{k,ℓ}) = Σ_{k,ℓ} ( λ_{Gi} ‖θ_{Gi}‖₂ + λ_Δ ‖Δ_{k,ℓ}‖₂ ). The semantic groups are generated by agglomerative clustering based on the co-occurrence of objects.
- Spatial θ-MRF and Semantic θ-MRF (Fig. 1): the regularizations proposed in Section 3; the l2 norm is used as the regularization form.

Table 3 shows that the proposed θ-MRF algorithms outperform the prior-based algorithms in both the spatial-grouping and the semantic-grouping settings. Sharing only a global prior across all tasks performs slightly better than the independent l2-regularized classifier. Modeling the spatial and semantic interactions by either method (adding priors or adding edges) improves performance, while the θ-MRF based approach is more effective, especially with the l1 norm.

Table 3: Results for different parameter sharing methods on the object recognition task.

  Model                   Object Recog. (% AP)
  No prior, l2-sparsity   22.5
  Global prior            22.8
  Spatial priors          23.6
  Spatial θ-MRF           24.6
  Semantic priors         24.0
  Semantic θ-MRF          25.2
  Full θ-MRF              26.4
  Full θ-MRF, l1          27.0

Visual grouping. Figure 5 illustrates the effect of our proposed parameter sharing. For each object at a given location, we show the top three contextual inputs learned by our approach. In Figure 5-left, we show where the highest positive weights are located when detecting shoes at different regions. We note that to detect shoes in the upper part of the image, shelves, closet and box are the most important contextual inputs, while floor and wall play a more important role in detecting shoes at the bottom of the image. The results also show that two neighboring shoe regions (rows 2 and 3) share similar context, while far-away ones (row 1) do not. This reflects the target-based spatial interactions within the θ-MRF. In Figure 5-right, we show a group of parameters (corresponding to different regions for oven, refrigerator, and sink) that share semantic edges with each other on the θ-MRF. We note that they share the high weights given to the stove at the bottom-right and the microwave at the middle-left. All these objects have very few training examples; with the proposed semantic constraint, they can implicitly leverage information from each other during training. We also note the spatially smooth effect in the figures, which results from our source-based spatial interactions.

Conclusion. We proposed a method to capture structure in the parameters by designing an MRF over the parameters. Our evaluations show that our method performs better than the current state-of-the-art algorithms on four different tasks (each of which was specifically designed for the respective task). Note that our method is complementary to the techniques the state-of-the-art methods use for the respective tasks (e.g., an MRF over the labels for depth estimation), and we believe that one can obtain even higher performance by combining our θ-MRF technique with the respective state-of-the-art techniques.
References

[1] S. Bengio, F. Pereira, and Y. Singer. Group sparse coding. In NIPS, 2009.
[2] M. Blaschko and C. Lampert. Object localization with global and local context kernels. In BMVC, 2009.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[4] G. E. Box and G. C. Tiao. Bayesian Inference in Statistical Analysis. John Wiley & Sons, 1992.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] C. Brezinski, M. Redivo-Zaglia, G. Rodriguez, and S. Seatzu. Multi-parameter regularization techniques for ill-conditioned linear systems. Mathematics and Statistics, 94(2):203-228, 2003.
[7] M. J. Choi, J. J. Lim, A. Torralba, and A. S. Willsky. Exploiting hierarchical context on a large database of object categories. In CVPR, 2010.
[8] A. Dawid and S. Lauritzen. Hyper Markov laws in the statistical analysis of decomposable graphical models. The Annals of Statistics, 1993.
[9] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. In ICCV, 2009.
[10] S. Divvala, D. Hoiem, J. Hays, A. Efros, and M. Hebert. An empirical study of context in object detection. In CVPR, 2009.
[11] C. B. Do, C.-S. Foo, and A. Y. Ng. Efficient multiple hyperparameter learning for log-linear models. In NIPS, 2007.
[12] L. Fei-Fei, R. Fergus, and P. Perona. A Bayesian approach to unsupervised one-shot learning of object categories. In CVPR, 2003.
[13] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, 2009.
[14] R. Fergus, H. Bernal, Y. Weiss, and A. Torralba. Semantic label sharing for learning with many categories. In ECCV, 2010.
[15] R. Fergus, Y. Weiss, and A. Torralba. Semi-supervised learning in gigantic image collections. In NIPS, 2009.
[16] C. Galleguillos, B. McFee, S. Belongie, and G. Lanckriet. Multi-class object localization by combining local contextual interactions. In CVPR, 2010.
[17] C. Galleguillos, A. Rabinovich, and S. Belongie. Object categorization using co-occurrence, location and appearance. In CVPR, 2008.
[18] P. Garrigues and B. Olshausen. Group sparse coding with a Laplacian scale mixture prior. In NIPS, 2010.
[19] S. Gould, J. Rodgers, D. Cohen, G. Elidan, and D. Koller. Multi-class segmentation with relative location prior. IJCV, 80(3), 2008.
[20] J. V. Graça, K. Ganchev, B. Taskar, and F. Pereira. Posterior vs. parameter sparsity in latent variable models. In NIPS, 2009.
[21] D. Heinz. Hyper Markov non-parametric processes for mixture modeling and model selection. Carnegie Mellon University.
[22] G. Heitz, S. Gould, A. Saxena, and D. Koller. Cascaded classification models: Combining models for holistic scene understanding. In NIPS, 2008.
[23] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. In ECCV, 2008.
[24] D. Hoiem, A. A. Efros, and M. Hebert. Putting objects in perspective. IJCV, 2008.
[25] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In ICML, 2009.
[26] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In NIPS, 2010.
[27] S. Kim and E. P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In ICML, 2010.
[28] S. Kumar and M. Hebert. A hierarchical field framework for unified context-based classification. In ICCV, 2005.
[29] S. Kumar and Singh. Discriminative fields for modeling spatial dependencies in natural images. In NIPS, 2004.
[30] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
[31] C. Li, A. Kowdle, A. Saxena, and T. Chen. Feedback enabled cascaded classification models for scene understanding. In NIPS, 2010.
[32] L.-J. Li, R. Socher, and L. Fei-Fei. Towards total scene understanding: Classification, annotation and segmentation in an automatic framework. In CVPR, 2009.
[33] L.-J. Li, H. Su, E. P. Xing, and L. Fei-Fei. Object bank: A high-level image representation for scene classification and semantic feature sparsification. In NIPS, 2010.
[34] P. Liang, F. Bach, G. Bouchard, and M. I. Jordan. Asymptotically optimal regularization in smooth parametric models. In NIPS, 2010.
[35] J. Lim, P. Arbeláez, C. Gu, and J. Malik. Context by region ancestry. In ICCV, 2009.
[36] M. Marszalek and C. Schmid. Semantic hierarchies for visual object recognition. In CVPR, 2007.
[37] D. Munoz, J. Bagnell, and M. Hebert. Stacked hierarchical labeling. In ECCV, 2010.
[38] S. Negahban and M. J. Wainwright. Joint support recovery under high-dimensional scaling: Benefits and perils of l1,∞-regularization. In NIPS, 2008.
[39] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 42:145-175, 2001.
[40] A. Rabinovich et al. Objects in context. In ICCV, 2007.
[41] A. Roverato. Hyper inverse Wishart distribution for non-decomposable graphs and its application to Bayesian inference for Gaussian graphical models. Scandinavian Journal of Statistics, 2002.
[42] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58(1):267-288, 1996.
[43] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR, 2011.
[44] A. Saxena, S. H. Chung, and A. Y. Ng. 3-D depth reconstruction from a single still image. IJCV, 76, 2007.
[45] A. Saxena, M. Sun, and A. Y. Ng. Make3D: Learning 3D scene structure from a single still image. IEEE PAMI, 30(5), 2009.
[46] E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. Learning hierarchical models of scenes, objects, and parts. In ICCV, 2005.
[47] A. Torralba. Contextual priming for object detection. Int. J. Comput. Vision, 53(2):169-191, 2003.
[48] B. Yao and L. Fei-Fei. Modeling mutual context of object and human pose in human-object interaction activities. In CVPR, 2010.
3,776
4,419
Matrix Completion for Multi-label Image Classification

Ricardo S. Cabral†,‡, Fernando De la Torre†, João P. Costeira‡, Alexandre Bernardino‡
† Carnegie Mellon University, Pittsburgh, PA. ‡ ISR, Instituto Superior Técnico, Lisboa, Portugal.
rscabral@cmu.edu, ftorre@cs.cmu.edu, {jpc,alex}@isr.ist.utl.pt

Abstract: Recently, image categorization has been an active research topic due to the urgent need to retrieve and browse digital images via semantic keywords. This paper formulates image categorization as a multi-label classification problem using recent advances in matrix completion. Under this setting, classification of testing data is posed as a problem of completing unknown label entries in a data matrix that concatenates training and testing features with training labels. We propose two convex algorithms for matrix completion based on a rank minimization criterion specifically tailored to visual data, and prove their convergence properties. A major advantage of our approach w.r.t. standard discriminative classification methods for image categorization is its robustness to outliers, background noise and partial occlusions, both in the feature and label space. Experimental validation on several datasets shows how our method outperforms state-of-the-art algorithms, while effectively capturing semantic concepts of classes.

1 Introduction

With the ever-growing amount of digital image data in multimedia databases, there is a great need for algorithms that can provide effective semantic indexing. Categorizing digital images using keywords, however, is the quintessential example of a challenging classification problem. Several aspects contribute to the difficulty of the image categorization problem, including the large variability in appearance, illumination and pose of different objects. Moreover, in the multi-label setting the interactions between objects also need to be modeled. Over the last decade, progress on the image classification problem has been achieved by using more powerful classifiers and by building or learning better image representations. On one hand, standard discriminative approaches such as Support Vector Machines or Boosting have been extended to the multi-label case [28, 14] and incorporated under frameworks such as Multiple Instance Learning [31, 33, 32, 20, 27] and Multi-task Learning [26]. However, a major limitation of discriminative approaches is their lack of robustness to outliers and missing data. Recall that most discriminative approaches project the data directly onto linear or non-linear spaces and therefore lack a noise model for it. To address this issue, we propose formulating the image classification problem under a matrix completion framework, which has been fueled by recent advances in rank minimization [7, 18]. Using this paradigm, we can easily deal with incomplete descriptions and errors in features and labels. On the other hand, complementary to the use of more powerful classifiers, better image representations such as SIFT [17] or GIST [21] have boosted recognition and categorization performance. A common approach to represent an object has been to group local descriptors using the bag of words model [24]. Our algorithms make use of the fact that in this model the histogram of an entire image contains information about all of its subparts.
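As a concrete aside, here is a minimal sketch of the bag-of-words construction just described (ours, not the authors' pipeline; the codebook, array sizes and the l1 normalization are illustrative assumptions):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors onto a visual codebook (nearest codeword)
    and return the l1-normalized bag-of-words histogram of the image."""
    # Pairwise squared distances via ||x||^2 + ||c||^2 - 2 x.c (memory friendly).
    d2 = ((descriptors ** 2).sum(1)[:, None]
          + (codebook ** 2).sum(1)[None, :]
          - 2.0 * descriptors @ codebook.T)
    words = d2.argmin(axis=1)                       # nearest-codeword assignment
    hist = np.bincount(words, minlength=codebook.shape[0]).astype(float)
    return hist / max(hist.sum(), 1.0)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(1000, 128))             # stand-in for a k-means codebook
descriptors = rng.normal(size=(2000, 128))          # stand-in for SIFT descriptors
h = bow_histogram(descriptors, codebook)            # sums to 1 over 1000 bins
```

Because the histogram of the whole image is the sum of the histograms of its regions, region-level noise enters the representation additively, which is what the error model below exploits.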
By modeling the error in the histogram, our matrix completion algorithm is able to capture semantically discriminative portions of the image, thus obviating the need for training with precise localization, as required by previous methods [31, 33, 32, 20, 27]. Our main contributions are twofold: (1) We propose two new rank minimization algorithms, MC-Pos and MC-Simplex, motivated by the image categorization problem. We study the advantages of matrix completion over classic discriminative approaches and show that performing classification under this paradigm not only improves state-of-the-art results on several datasets, but does so without resorting to bounding boxes or other precise localization methods in its labeling or modeling. (2) We prove that MC-Pos and MC-Simplex enjoy the same convergence properties as Fixed Point Continuation methods for rank minimization without constraints. We also show that this result extends to the framework presented by [11], whose convergence was only verified empirically.

2 Previous Work

This section reviews related work in the area of image categorization and the problem of matrix completion under a rank minimization criterion, optimized with nuclear norm methods.

Image Categorization. Since the seminal work of Barnard et al. [3], many researchers have addressed the problem of associating words with images. Image semantic understanding is now typically formulated as a multi-label problem. In this setting, each image may be simultaneously categorized into more than one of a set of predefined categories. An important difference between multi-class and multi-label classification is that classes in multi-class classification are assumed to be mutually exclusive, whereas in multi-label classification they are normally interdependent. Therefore, many multi-class techniques such as SVM, LDA and Boosting have been modified to make use of label correlations to improve multi-label classification performance [28, 14]. Additionally, Multiple Instance Learning (MIL) approaches can be used to explicitly model the relations between labels and specific regions of the image, as initially proposed by Maron et al. [19]. This framework allows the localization and classification tasks to benefit from each other, thus reducing noise in the corresponding feature space and making the learned semantic models more accurate [31, 33, 32, 20, 27, 26]. Although promising, the MIL framework is combinatorial, so several approaches have been proposed to avoid local minima and deal with the prohibitive number of possible subregions in an image. Zha et al. [32] make use of hidden CRFs, while Vijayanarasimhan et al. [27] resort to multi-set kernels to emphasize instances differently. Yang et al. [31] exploit asymmetric loss functions to balance false positives and negatives. These methods, however, require an explicit enumeration of instances in the image. This is usually obtained by pre-segmenting images into a small fixed number of parts, or applied in settings where detectors perform well, such as the problem of associating faces with captioned names [4]. On the other hand, to avoid explicitly enumerating the instances, Nguyen et al. [20] couple constraint generation algorithms with a branch and bound method for fast localization. Multi-task learning has also been proposed as a way to regularize the MIL problem, so as to avoid local minima due to the many available degrees of freedom.
In this setting, the MIL problem is jointly learned with an easier, fully supervised task such as geometric context [26].

Matrix Completion using Rank Minimization. Rank minimization has recently received much attention due to its success in matrix completion problems such as the Netflix challenge, where one wishes to predict a user's movie preferences based on a subset of his and other people's choices, or minimum order control [10], where the goal is to find the least complex controller achieving some performance measure. A major breakthrough by [7] states that the minimization of the rank function can, under broad conditions, be achieved using the minimizer obtained with the nuclear norm (the sum of singular values). Since the natural reformulation of the nuclear norm gives rise to a semidefinite program, existing interior point methods can only handle problems with a number of variables on the order of hundreds. Thus, several methods have been devised to perform this optimization efficiently [15, 6, 18, 25, 13, 1, 7, 2]. In the last few years, incremental matrix completion methods have also been proposed [1, 2, 5]. In the context of computer vision, minimization of the nuclear norm has been applied to several problems: structure from motion [1, 8, 5], robust PCA [29], subspace alignment [22], subspace segmentation [16] and tag refinement [34].

3 Multi-label classification using Matrix Completion

In a supervised setting, a classifier learns a mapping¹ W : X → Y between the space of features X and the space of labels Y, from N_tr tuples of known features and labels. Linear classifiers take pairs (x_j, y_j) ∈ R^F × R^K, where F is the feature dimension and K the number of classes, and minimize a loss l between the labels and a projection of the input space:

  minimize_{W,b}  Σ_{j=1}^{N_tr} l( y_j, [W b] [x_j ; 1] ),   (1)

with parameters W ∈ R^{K×F} and b ∈ R^K. Given (1), Goldberg et al. [11] note that the problem of classifying N_tst test entries can be cast as a matrix completion. For this purpose, they concatenate all labels and features into matrices Y_tst ∈ R^{K×N_tst}, Y_tr ∈ R^{K×N_tr}, X_tst ∈ R^{F×N_tst} and X_tr ∈ R^{F×N_tr}. If the linear model holds, then the matrix

  Z0 = [ Y_tr  Y_tst ;
         X_tr  X_tst ;
         1^T         ]   (2)

should be rank deficient. The classification process consists of filling the unknown entries in Y_tst such that the nuclear norm of Z0, the convex envelope of rank [7], is minimized. Since in practice we may have errors and partial knowledge in the training labels and in the feature space, let us define Ω_X and Ω_Y as the sets of known feature and label entries and zero out the unknown entries in Z0. Additionally, let the data matrix Z be defined as the sum of Z0 and an error term E:

  Z = [ Z_Y ; Z_X ; Z_1 ] = [ Y_tr  Y_tst ; X_tr  X_tst ; 1^T ] + [ E_Ytr  0 ; E_Xtr  E_Xtst ; 0^T ] = Z0 + E,   (3)

where Z_Y, Z_X and Z_1 respectively stand for the label, feature and last rows of Z. Classification can then be posed as an optimization problem that finds the best label assignment Y_tst and error matrix E such that the rank of Z is minimized. The resulting optimization problem, MC-1 [11], is

  minimize_{Y_tst, E_Xtr, E_Ytr, E_Xtst}  μ ‖Z‖_* + (1/|Ω_X|) Σ_{ij∈Ω_X} c_x(z_ij, z0_ij) + (λ/|Ω_Y|) Σ_{ij∈Ω_Y} c_y(z_ij, z0_ij)
  subject to  Z = Z0 + E,  Z_1 = 1^T.   (4)

Note that the constraint that Z_1 remain equal to one is necessary for dealing with the bias b in (1). To avoid trivial solutions, large distortions of Z from the known entries in Z0 are penalized according to the losses c_y(·) and c_x(·): in [11], the former is defined as the least squares error, while the latter is a log loss that emphasizes the error on entries switching classes, as opposed to their absolute numerical difference. The parameters μ and λ are positive trade-off weights between better feature adaptation and label error correction. We note this problem is equivalent to

  minimize_Z  μ ‖Z‖_* + (1/|Ω_X|) Σ_{ij∈Ω_X} c_x(z_ij, z0_ij) + (λ/|Ω_Y|) Σ_{ij∈Ω_Y} c_y(z_ij, z0_ij)
  subject to  Z_1 = 1^T,   (5)

which can be solved using a Fixed Point Continuation method [18], described in Sec. 4.1.

¹ Notation: bold capital letters denote matrices (e.g., D) and bold lower-case letters denote column vectors (e.g., d); all non-bold letters denote scalars. d_j is the j-th column of the matrix D, and d_ij is the scalar in row i and column j of D. ⟨d1, d2⟩ denotes the inner product between two column vectors d1 and d2, and ‖d‖²₂ = ⟨d, d⟩ = Σ_i d_i² is the squared Euclidean norm of d. tr(A) = Σ_i a_ii is the trace of the matrix A. ‖A‖_* designates the nuclear norm (sum of singular values) of A, and ‖A‖²_F = tr(AᵀA) = tr(AAᵀ) its squared Frobenius norm. 1_k ∈ R^{k×1} is a vector of ones, 0_{k×n} ∈ R^{k×n} is a matrix of zeros, and I_k ∈ R^{k×k} is the identity matrix (dimensions are omitted when trivially inferred).
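To illustrate the stacked construction in (2)-(3), the snippet below assembles Z0 and the observed-entry masks Ω_X, Ω_Y; the shapes and the convention of leaving unknown test labels at zero are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def build_z0(X_tr, Y_tr, X_tst):
    """Stack labels, features and an all-ones bias row as in Eq. (2).
    Unknown test labels are left as zeros; the masks record Omega_Y
    (observed label entries) and Omega_X (observed feature entries)."""
    K, N_tr = Y_tr.shape
    F, N_tst = X_tst.shape
    N = N_tr + N_tst
    Z0 = np.zeros((K + F + 1, N))
    Z0[:K, :N_tr] = Y_tr                    # training labels (test block unknown)
    Z0[K:K + F, :N_tr] = X_tr               # training features
    Z0[K:K + F, N_tr:] = X_tst              # test features
    Z0[-1, :] = 1.0                         # bias row, constrained to stay 1^T
    omega_Y = np.zeros((K, N), dtype=bool)
    omega_Y[:, :N_tr] = True
    omega_X = np.ones((F, N), dtype=bool)   # here we assume all features observed
    return Z0, omega_X, omega_Y

Z0, oX, oY = build_z0(np.random.rand(50, 30), np.sign(np.random.randn(4, 30)),
                      np.random.rand(50, 20))
```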
Since the natural reformulation of a Nuclear Norm minimization is a Semidefinite Program, existing off-the-shelf interior point methods are not applicable due to the large dimension of Z. Thus, several methods have been devised to efficiently optimize this problem class [15, 6, 18, 25, 13, 1, 7, 2]. The FPC method [18], in particular, is comprised by a series of gradient updates h(?) = I(?) ? ? g(?) with step size ? and gradient g(?) given by the error penalizations cx (?) and cy (?). These steps are alternated with a shrinkage operator S? (?) = max (0, ? ? ?), applied to the singular values of the resulting matrix, so the rank is minimized. Provided h(?) is a contraction, this method provably converges to the optimal solution for the unconstrained problem. However, the formulation MC-1 (5) is constrained so in [11] 4 a projection step is added to the algorithm (see Alg. 1), whose convergence was only empirically verified. In this paper, we prove the convergence of FPC to the constrained problem class by using the fact that projections onto Convex sets are also non-expansive; thus, the composition of gradient, shrinkage and projection steps is also a contraction. Since the problem is convex, a unique fixed point exists in the optimal solution of the problem. First, let us write some preliminary results. Algorithm 1 FPC algorithm for solving MC-1 (5) Input: Initial Matrix Z0 Initialize Z as the rank-1 approximation of Z0 for ? = ?1 > ?2 > ? ? ? > ?k do while Rel. Error >  do Gradient Descent: A = h(A) = Z ? ? g(Z) Shrink: A = U?V> , Z = US? ? (?)V> Project onto feasible set: Z1 = 1> end while end for Output: Complete Matrix Z Lemma 1 Let pC (?) be a projection operator onto any given convex set C. Then, pC (?) is nonexpansive. Moreover, kpC (Z) ? pC (Z? )k = kZ ? Z? k iff pC (Z) ? pC (Z? ) = Z ? Z? . Proof For the first part, we apply the Cauchy-Schwarz inequality on the fact that (see [12, pg. 48]) kpC (Z) ? pC (Z? )k2F ? hpC (Z) ? pC (Z? ), Z ? Z? i. (10) For the second part, let us write kpC (Z) ? pC (Z? ) ? (Z ? Z? ) k2F = kpC (Z) ? pC (Z? )k2F + kZ ? Z? k2F ? 2hpC (Z) ? pC (Z? ), Z ? Z? i, (11) where the inner product can be bounded by applying (10), yielding kpC (Z)?pC (Z? )?(Z ? Z? ) k2F ? kpC (Z)?pC (Z? )k2F +kZ?Z? k2F ?2kpC (Z)?pC (Z? )k2F . (12) Introducing our hypothesis kpC (Z) ? pC (Z? )k = kZ ? Z? k into (12) yields kpC (Z) ? pC (Z? ) ? (Z ? Z? ) k2F ? 0, (13) from which we conclude an equality is in place. Theorem 2 Let Z? be an optimal solution to (5). Then Z is also an optimal solution if kpC (S? (h(Z))) ? pC (S? (h(Z? )))k = kZ ? Z? k. (14) Proof Using the non-expansiveness of operators pC (?), S? (?) and h(?) (Lemma 1 and [18, Lemmas 1 and 2]), we can write kZ ? Z? k = kpC (S? (h(Z))) ? pC (S? (h(Z? )))k ? ? kS? (h(Z)) ? S? (h(Z? ))k ? kh(Z) ? h(Z? ))k ? kZ ? Z? k, (15) so we conclude the inequalities are equalities. Using the second part of the Lemmas, we get pC (S? (h(Z? ))) ? pC (S? (h(Z))) = S? (h(Z? )) ? S? (h(Z)) = h(Z? ) ? h(Z) = Z ? Z? . (16) Since Z? is optimal, by the projected subgradient method, we have pC (S? (h(Z? ))) = Z? , (17) which, in turn, implies that pC (S? (h(Z))) = Z, (18) from which we conclude Z is an optimal solution to (5). We are now ready to prove the convergence of MC-1 to a fixed point Z? = pC (S? (h(Z? ))), which allows us to state its result as an optimal solution of (5). Theorem 3 The sequence {Zk } generated by Alg. 1 converges to Z? , an optimal solution of (5). Proof Once we note the non-expansiveness of pC (?), S? (?) and h(?) 
ensures the composite operator pC (S? (h(?))) is also non-expansive, we can use the same rationale as in [18, Theorem 4]. 5 4.2 Fixed Point Continuation for MC-Pos and MC-Simplex The condition that h(?) is a contraction [18, Lemma 2] used for proving the convergence of Alg. 1 is still valid for the new loss functions proposed in (6) and (9), since the new gradient ? ?z0ij ? ? if zij ? ?Y , ? ? |?Y | 1+exp (?z0ij zij ) g(zij ) = 1 |?X | ? ? ?0 2 zij +2zij z0ij ?3z0 2ij (zij +z0ij )2 if zij ? ?X , (19) otherwise Y| is contractive, provided we choose a step size of ? ? [0, min ( 4|? ?? , ?X |?X |)]. These values are easily obtained by noting the gradient of the Log loss function is Lipschitz continuous with L = 0.25 and choosing ?X such that the ?2 error, for the Non-Negative Orthant, is Lipschitz continuous with L = 1. Key to the feasibility of (7) and (8) within this algorithmic framework, however, is an efficient way to project Z onto the newly defined constraint sets. While for MC-Pos (7) projecting a vector onto the Non-Negative Orthant is done in closed form by truncating negative components to zero, efficiently performing the projection onto the Probability Simplex in MC-Simplex (8) is not straightforward. We note, however, this is a projection onto a convex subset of an `1 ball [9]. Therefore, we can explore the dual of the projection problem and use a sorting procedure to implement this projection in closed form, as described in Alg. 2. The final algorithms are summarized in Alg. 3 and Alg. 4. Algorithm 2 Projection of a vector onto probability Simplex Input: Vector v ? RF to be projected Sort v into ? : n ?1 ? ?2 ? ... ? ?F o P? Find ? = max j ? n : ?j ? 1j ( i=1 ?i ? 1) > 0 P? Compute ? = ?1 ( i=1 ?i ? 1) Output: w s.t. wi = max{vi ? ?, 0} Algorithm 4 FPC Solver for MC-Simplex (8) Input: Initial Matrix Z0 Initialize Z as the rank-1 approximation of Z0 for ? = ?1 > ?2 > ? ? ? > ?k do while Rel. Error >  do Gradient Descent: A = Z ? ? g(Z) Shrink: A = U?V> Shrink: Z = US? ? (?)V> Project ZX onto P (Alg. 2) Project Z1 : Z1 = 1> end while end for Output: Complete Matrix Z Algorithm 3 FPC Solver for MC-Pos (7) Input: Initial Matrix Z0 Initialize Z as the rank-1 approximation of Z0 for ? = ?1 > ?2 > ? ? ? > ?k do while Rel. Error >  do Gradient Descent: A = Z ? ? g(Z) Shrink 1: A = U?V> Shrink 2: Z = US? ? (?)V> Project ZX : ZX = max (ZX , 0) Project Z1 : Z1 = 1> end while end for Output: Complete Matrix Z 5 Experiments This section presents the performance evaluation of the proposed algorithms MC-Pos (7) and MCSimplex (8) in image categorization tasks. We compare our results with MC-1 (5) and standard discriminative and MIL approaches [30, 20, 27, 26, 33, 32] on three datasets: CMU-Face , MSRC and 15 Scene. For our algorithms and MC-1, the values considered for the parameter tuning were ? ? {1, 3, 30}, ? ? [10?4 , 102 ]. The continuation steps require a decreasing sequence of ?, which we chose as ?k = 0.25?k?1 , stopping when ? = 10?12 . We use ?0 = 0.25?1 , where ?1 is the largest singular value of Z0 . Convergence was defined as a relative change in the objective function smaller than 10?2 . CMU-Face dataset This dataset consists in 624 images of 20 subject faces with several expressions and poses, under two conditions: wearing sunglasses and not. We test single class classifica6 tion and localization. 
As in [20], our training set is built using images of the first 8 subjects (126 images with glasses and 128 without), leaving the remainder for testing (370, equally split among the classes). We describe each image by extracting 10000 SIFT features [17] at random scales and positions and quantizing them onto a 1000 visual codebook, obtained by performing hierarchical k-means clustering on 100000 features randomly selected from the training set. For this dataset, note that subjects were captured in a very similar environment, so the most discriminative part is the eye region. Thus, Nguyen et al. [20] argue that better results are obtained when the classifier training is restricted to that region. Since the face position varies, they propose using a Multiple Instance Learning framework (MIL-SegSVM), that localizes the most discriminative region in each image while learning a classifier to split both classes. We compare the results of our classifier to the ones obtained by MIL-SegSVM as well as a Support Vector Machine. For the SVM, we either trained with the entire image information (SVM-Img) or with only the features extracted from the relevant, manually labeled, region of the eyes. For MC-1, MC-Pos and MC-Simplex, we proceed as follows. We fill Z with the label vector and the BoW histograms of each entire image and leave the test set labels Ytst as unknown entries. For the MCSimplex case, we preprocess Z by `1 -normalizing each histogram in ZX . This is done to avoid the Simplex projection picking a single bin and zeroing out the others, due to scale disparities in the bin counts. The obtained results are presented in Table 1, in terms of area under ROC curve (AUROC). These indicate both the fully supervised and the MIL approaches are more robust to the variability introduced by background noise, when compared to what is obtained when training without localization information (SVM-Img). However, this is done at either the cost of cumbersome labeling efforts or iteratively approximating the solution of MIL, an integer quadratic problem. By using Matrix Completion, in turn, we are able to surpass these classification scores by solving a single convex minimization, since our error term E removes noise introduced by non-discriminative parts of the image. To validate this hypothesis, we run a sliding window search in the images using the same size criteria of [20]. We search for the box having the normalized histogram most closely resemblant to the corrected version in ZX according to the ?2 distance, and get the results shown in Fig. 2 (similar results were obtained using MC-Simplex). These show how the corrected histograms capture the semantic concept being trained. When comparing Matrix Completion approaches, we note that while the previous method MC-1 achieves competitive performance against previous baselines, it is outperformed by both MC-Pos, showing the improvement introduced by the domain knowledge constraints. Moreover, MC-1 does not allow to pursue further localization of the class representative since it introduces erroneous negative numbers in the histograms (Fig. 5). bin count Figure 2: Histograms corrected by MC-Pos (7) preserve semantic meaning. Table 1: AUROC result comparison for the CMU Face dataset. Method AUROC SVM-Img [20] 0.90 SVM-FS [20] 0.94 MIL-SegSVM [20] 0.96 MC-1 [11] 0.96 MC-Pos 0.97 MC-Simplex 0.96 100 50 0 0 200 400 600 800 1000 600 800 1000 bin count bin 4 2 0 ?2 ?4 0 200 400 bin Figure 3: Erroneous histogram correction performed by MC-1 (5). Top: Global view. 
Bottom: Rescaling shows negative entries. 7 MSRC dataset Next, we run our method on a multi-label object recognition setting. The MSRC dataset consists of 591 real world images distributed among 21 classes, with an average of 3 classes present per image. We mimic the setup of [27] and use as features histograms of Textons [23] concatenated with histograms of colors in the L+U+V space. Our algorithm is given the task of classifying the presence of each object class in the images. We proceed as in the CMU-Face dataset. In this dataset, we compare our formulations to MC-1 and several state-of-the-art approaches for categorization using Multiple-Label Multiple Instance Learning: Multiple Set Kernel MIL SVM (MSK-MIL) by Vijayanarasimhan et al. [27], Multi-label Multiple Instance Learning (ML-MIL) approach by Zha [32] and the Multi-task Random Texton Forest (MTL-RF) method of Vezhnevets et al. [26]. For localization, [32, 27] enumerate possible instances as the result of pre-segmenting images into a fixed number of parts, whereas [26] provides pixel level classification. The obtained average AUROC scores using 5-fold cross validation are shown in Table 2. Results show our methods significantly outperform MC-1. Moreover, MC-Simplex (8) outperforms results given by MIL techniques. Again, the fact that feature errors are corrected allows us to achieve good results while training with the entire image. This is opposed to relying on full blown pixel classification or segmentation techniques, which is still considered an open problem in Computer Vision. Moreover, we point out that MSK-MIL is a kernel approach as opposed to ours which, despite non-linear error penalizations, assumes a linear classification model in the feature space. 15 Scene dataset Finally, we test the performance of our algorithm for scene classification. Scenes differ from objects in the sense that they do not necessarily have a constrained physical location in the image. The 15 scene dataset is a multi-label dataset with 4485 images. According to the feature study in [30], we use GIST [21], the non-histogram feature achieving best results on this dataset. Notice that while not a BoW model, this feature represents the output energy of a bank of 24 filters, thus also positive. We run our algorithm on 10 folds, each comprised by 1500 training and 2985 test examples. The results on Table 3 show again that our method is able to achieve results comparable to state-of-the-art. One should note here that the state-of-the-art results are obtained by using a kernel space, whereas our method is essentially a linear technique aided by non linear error corrections. When we compare our results to using a linear kernel, MC-Simplex is able to achieve better performance. Relating to the results obtained for CMU-Face and MSRC datasets, we note that the roles of the MC-Pos and MC-Simplex are inverted, thus emphasizing the need for existence of models with and without normalization. Table 2: 5-fold CV AUROC comparison for the MSRC dataset (Std. Dev. negligible at this precision). Method Avg. AUROC MSK-MIL[27] 0.90 ML-MIL [32] 0.90 MTL-RF [26] 0.89 MC-1 [11] 0.87 MC-Pos 0.92 MC-Simplex 0.90 6 Table 3: 10-fold CV AUROC comparison for the 15 Scene dataset (Std. Dev. negligible at this precision). Method Avg. 
AUROC 1-vs-all Linear SVM [30] 0.94 1-vs-all ?2 SVM [30] 0.97 MC-1 [11] 0.90 MC-Pos 0.91 MC-Simplex 0.94 Conclusions We presented two new convex methods for performing semi-supervised multi-label classification of histogram data, with proven convergence properties. Casting the classification under a Matrix Completion framework allows for easily handling of partial data and labels and robustness to outliers. Moreover, since histograms of full images contain the information for parts contained therein, the error embedded in our formulation is able to capture intra class variability arising from different backgrounds. Experiments show that our methods perform comparably to state-of-the-art MIL methods in several image datasets, surpassing them in several cases, without the need for precise localization of objects in the training set. Acknowledgements: Support for this research was provided by the Portuguese Foundation for Science and Technology through the Carnegie Mellon Portugal program under the project FCT/CMU/P11. Partially funded by FCT project Printart PTDC/EEA-CRO/098822/2008. Fernando De la Torre is partially supported by Grant CPS-0931999 and NSF IIS-1116583. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. 8 References [1] P. Aguiar, J. Xavier, and M. Stosic. Spectrally optimal factorization of incomplete matrices. In CVPR, 2008. [2] L. Balzano, R. Nowak, and B. Recht. Online identification and tracking of subspaces from highly incomplete information. In Proceedings of the 48th Annual Allerton Conference, 2010. [3] K. Barnard and D. Forsyth. Learning the semantics of words and pictures. In ICCV, 2001. [4] T. L. Berg, A. C. Berg, J. Edwards, and D. A. Forsyth. Who?s in the Picture? In NIPS, 2004. [5] R. S. Cabral, J. P. Costeira, F. De la Torre, and A. Bernardino. Fast incremental method for matrix completion: an application to trajectory correction. In ICIP, 2011. [6] J.-F. Cai, E. J. Candes, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM J. on Optimization, 20(4):1956?1982, 2008. [7] E. Candes and B. Recht. Exact low-rank matrix completion via convex optimization. In Allerton, 2008. [8] Y. Dai, H. Li, and M. He. Element-wise factorization for n-view projective reconstruction. In ECCV, 2010. [9] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In ICML, 2008. [10] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings American Control Conference, 2001. [11] A. B. Goldberg, X. Zhu, B. Recht, J. ming Xu, and R. Nowak. Transduction with matrix completion: Three birds with one stone. In NIPS, 2010. [12] J.-B. Hiriart-Urruty and C. Lemar?chal. Fundamentals of Convex Analysis. Grundlehren der mathematien Wissenschaften. Springer-Verlag, New York?Heildelberg?Berlin, 2001. [13] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Inf. Theor., 56:2980?2998, June 2010. [14] V. Lavrenko, R. Manmatha, and J. Jeon. A model for learning the semantics of pictures. In NIPS, 2003. [15] Z. Lin and M. Chen. The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices. preprint. [16] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In ICML, 2010. [17] D. G. Lowe. 
Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91?110, 2004. [18] S. Ma, D. Goldfarb, and L. Chen. Fixed point and bregman iterative methods for matrix rank minimization. Mathematical Programming, to appear. [19] O. Maron and A. Ratan. Multiple-instance learning for natural scene classification. In ICML, 1998. [20] M. H. Nguyen, L. Torresani, F. De la Torre, and C. Rother. Weakly supervised discriminative localization and classification: a joint learning process. In ICCV, 2009. [21] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 42:145?175, 2001. [22] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma. Rasl: Robust alignment by sparse and low-rank decomposition for linearly correlated images. In CVPR, 2010. [23] J. Shotton, J. M. Winn, C. Rother, and A. Criminisi. Textonboost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In ECCV, 2006. [24] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In CVPR, 2003. [25] K.-C. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. preprint, 2009. [26] A. Vezhnevets and J. Buhmann. Towards weakly supervised semantic segmentation by means of multiple instance and multitask learning. In CVPR, 2010. [27] S. Vijayanarasimhan and K. Grauman. What?s it going to cost you?: Predicting effort vs. informativeness for multi-label image annotations. In CVPR, 2009. [28] H. Wang, C. Ding, and H. Huang. Multi-label linear discriminant analysis. In ECCV, 2010. [29] J. Wright, A. Ganesh, S. Rao, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization. In NIPS, 2009. [30] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010. [31] C. Yang, M. Dong, and J. Hua. Region-based image annotation using asymmetrical support vector machine-based multiple-instance learning. In CVPR, 2006. [32] Z.-j. Zha, X.-s. Hua, T. Mei, J. Wang, and G.-j. Q. Zengfu. Joint multi-label multi-instance learning for image classification. In CVPR, 2008. [33] Z.-h. Zhou and M. Zhang. Multi-instance multi-label learning with application to scene classification. In NIPS, 2006. [34] G. Zhu, S. Yan, and Y. Ma. Image tag refinement towards low-rank, content-tag prior and error sparsity. In ICMM, 2010. 9
3,777
442
Splines, Rational Functions and Neural Networks

Robert C. Williamson, Department of Systems Engineering, Australian National University, Canberra, 2601, Australia
Peter L. Bartlett, Department of Electrical Engineering, University of Queensland, Queensland, 4072, Australia

Abstract

Connections between spline approximation, approximation with rational functions, and feedforward neural networks are studied. The potential improvement in the degree of approximation in going from single to two hidden layer networks is examined. Some results of Birman and Solomjak on the degree of approximation achievable when knot positions are chosen on the basis of the probability distribution of examples rather than the function values are extended.

1 INTRODUCTION

Feedforward neural networks have been proposed as parametrized representations suitable for nonlinear regression. Their approximation theoretic properties are still not well understood. This paper shows some connections with the more widely known methods of spline and rational approximation. A result due to Vitushkin is applied to determine the relative improvement in degree of approximation possible by having more than one hidden layer. Furthermore, an approximation result relevant to statistical regression, originally due to Birman and Solomjak for Sobolev space approximation, is extended to more general Besov spaces. The two main results are Theorems 3.1 and 4.2.

2 SPLINES AND RATIONAL FUNCTIONS

The two most widely studied nonlinear approximation methods are splines with free knots and rational functions. It is natural to ask what connection, if any, these have with neural networks. It is already known that splines with free knots and rational functions are closely related, as Petrushev and Popov's remarkable result shows:

Theorem 2.1 ([10, chapter 8]) Let

  R_n(f)_p := inf{ ‖f − r‖_p : r a rational function of degree n },
  S_n^k(f)_p := inf{ ‖f − s‖_p : s a spline of degree k − 1 with n − 1 free knots }.

If f ∈ L_p[a, b], −∞ < a < b < ∞, 1 < p < ∞, k ≥ 1 and 0 < α < k, then

  R_n(f)_p = O(n^{−α})  if and only if  S_n^k(f)_p = O(n^{−α}).

In both cases the efficacy of the methods can be understood in terms of their flexibility in partitioning the domain of definition: the partitioning amounts to a "balancing" of the error of local linear approximation [4].

There is an obvious connection between single hidden layer neural networks and splines. For example, replacing the sigmoid (1 + e^{−x})^{−1} by the piecewise linear function (|x + 1| − |x − 1|)/2 results in networks that are splines in one dimension, and that in d dimensions can be written in "canonical piecewise linear" form [3]:

  f(x) := a + bᵀx + Σ_{i=1}^{m} c_i |α_iᵀ x − β_i|

defines f : R^d → R, where a, c_i, β_i ∈ R and b, α_i ∈ R^d. Note that canonical piecewise linear representations are unique on a compact domain if we use the form f(x) := Σ_{i=1}^{m+1} c_i |α_iᵀ x − 1|. Multilayer piecewise linear nets are not generally canonical piecewise linear: let g(x, y) := |x + y − 1| − |x + y + 1| − |x − y + 1| − |x − y − 1| + x + y. Then g(·) is canonical piecewise linear, but |g(·)| (a simple two-hidden-layer network) is not. The connection between certain single hidden layer networks and rational functions has been exploited in [13].
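To make the canonical piecewise linear form concrete, here is a minimal evaluation sketch (ours; the parameter values are arbitrary illustrations of the structure, not quantities from the paper):

```python
import numpy as np

def cpwl(x, a, b, c, alpha, beta):
    """Canonical piecewise linear form: a + b.x + sum_i c_i |alpha_i.x - beta_i|.
    x: (n, d) inputs; b: (d,); alpha: (m, d); c, beta: (m,)."""
    hidden = np.abs(x @ alpha.T - beta)      # (n, m) absolute-value "hidden units"
    return a + x @ b + hidden @ c

rng = np.random.default_rng(1)
m, d = 5, 2
y = cpwl(rng.normal(size=(4, d)), a=0.3, b=rng.normal(size=d),
         c=rng.normal(size=m), alpha=rng.normal(size=(m, d)),
         beta=rng.normal(size=m))
```

Structurally this is a one-hidden-layer network whose activations are absolute values, which is what links the spline and neural network views above.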
3 COMPOSITIONS OF RATIONAL FUNCTIONS

There has been little effort in the nonlinear approximation literature toward understanding nonlinearly parametrized approximation classes "more complex" than splines or rational functions. Multiple hidden layer neural networks are in this more complex class. As a first step towards understanding the utility of these representations, we now consider the degree of approximation of certain smooth function classes by rational functions, or compositions of rational functions, in the sup-metric. A function φ : R → R is rational of degree π if φ can be expressed as a ratio of polynomials in x ∈ R of degree at most π. Thus

  φ_θ := φ_θ(x) := ( Σ_{i=1}^{π} α_i x^i ) / ( Σ_{i=1}^{π} β_i x^i ),   x ∈ R,  θ := [α, β].   (3.1)

Let σ_π(f, φ) := inf{ ‖f − φ_θ‖ : deg φ ≤ π } denote the degree of approximation of f by a rational function of degree π or less. Let ψ := φ ∘ ρ, where φ and ρ are rational functions ρ : R × Θ_ρ → R and φ : R × Θ_φ → R, both of degree π. Let F be some function space (metrized by ‖·‖_∞) and let σ_π(F, ·) := sup{ σ_π(f, ·) : f ∈ F } denote the degree of approximation of the function class F.

Theorem 3.1 Let F_α := W_∞^α(Ω) denote the Sobolev space of functions from a compact subset Ω ⊂ R to R with s := ⌊α⌋ continuous derivatives and with s-th derivative satisfying a Lipschitz condition of order α − s. Then there exist positive constants c_1 and c_2, not depending on π, such that for sufficiently large π

  σ_π(F_α, ρ) ≥ c_1 (1/(2π))^α   (3.2)

and

  σ_π(F_α, ψ) ≥ c_2 (1/(4π log(π + 1)))^α.   (3.3)

Note that (3.2) is tight: it is achievable. Whether (3.3) is achievable is unknown. The proof is a consequence of Theorem 3.4. The above result, although only for rational functions of a single variable, suggests that no great benefit in terms of degree of approximation is to be obtained by using multiple hidden layer networks.

3.1 PROOF OF THEOREM

Definition 3.2 Let Γ_d ⊂ R^d. A map r : Γ_d → R is called a piecewise rational function of degree k with barrier b_q^d of order q if there is a polynomial b_q^d of degree q in x ∈ Γ_d such that on any connected component Γ_d^i of Γ_d \ {x : b_q^d(x) = 0}, r is a rational function of degree k:

  r := r(x) := P_{d,i}^k(x) / Q_{d,i}^k(x),   x ∈ Γ_d^i,   P_{d,i}^k, Q_{d,i}^k ∈ R[x].

Note that at any point x ∈ Γ_d^i ∩ Γ_d^j, i ≠ j, r is not necessarily single valued.

Definition 3.3 Let F be some function class defined on a set G metrized with ‖·‖_∞, and let Θ = R^ν. Then F̃ : G × Θ → R, F̃ : (x, θ) ↦ F(x, θ), where

1. F(x, θ) is a piecewise rational function of θ of degree k or less, with barrier b_q^{ν,x} (possibly depending on x) of order q;
2. for all f ∈ F there is a θ ∈ Θ such that ‖f − F(·, θ)‖ ≤ ε;

is called an ε-representation of F of degree k and order q.

Theorem 3.4 ([12, page 191, theorem 1]) If F̃_{ε,k,q} is an ε-representation of F_α of degree k and order q with barrier b not depending on x, then for sufficiently small ε

  ν log[(q + 1)(k + 1)] ≥ c (1/ε)^{1/α},   (3.4)

where c is a constant not dependent on ε, ν, k or q.

Theorem 3.4 holds for any ε-representation F̃, and therefore (by rearrangement of (3.4) and setting ν = 2π)

  σ_π(F, F̃) ≥ c (1 / (2π log[(q + 1)(k + 1)]))^α.   (3.5)

Now φ_θ given by (3.1) is, for any given and fixed x ∈ R, a piecewise rational function of θ of degree 1 with a barrier of degree 0 (no barrier is actually required). Thus (3.5) immediately gives (3.2). Now consider ψ_θ = φ ∘ ρ, where

  φ(y) = ( Σ_{i=1}^{π_φ} a_i y^i ) / ( Σ_{i=1}^{π_φ} β_i y^i )  (y ∈ R)   and   ρ(x) = ( Σ_{j=1}^{π_ρ} γ_j x^j ) / ( Σ_{j=1}^{π_ρ} δ_j x^j )  (x ∈ R).

Direct substitution and rearrangement gives

  ψ_θ = ( Σ_{i=1}^{π_φ} a_i [Σ_j γ_j x^j]^i [Σ_j δ_j x^j]^{π_φ − i} ) / ( Σ_{i=1}^{π_φ} β_i [Σ_j γ_j x^j]^i [Σ_j δ_j x^j]^{π_φ − i} ),

where we write θ = [a, β, γ, δ] and for simplicity set π_φ = π_ρ = π. Thus dim θ = 4π =: ν. For arbitrary but fixed x, ψ is a rational function of θ of degree k = π. No barrier is needed, so q = 0, and hence by (3.4)

  σ_π(F_α, ψ) ≥ c_2 (1 / (4π log(π + 1)))^α.

3.2 OPEN PROBLEMS

An obvious further question is whether results as in the previous section hold for multivariable approximation, perhaps for multivariable rational approximation. A popular method of d-dimensional nonlinear spline approximation uses dyadic splines [2, 5, 8]. These are piecewise polynomial representations where the partition used is a dyadic decomposition. Given that such a partition Ξ is a subset of a partition generated by the zero level set of a barrier polynomial of degree ≤ |Ξ|, can Vitushkin's results be applied to this situation? Note that in Vitushkin's theory it is the parametrization that is piecewise rational (PR), not the representation. What connections are there in general (if any) between PR representations and PR parametrizations?

4 DEGREE OF APPROXIMATION AND LEARNING

Determining the degree of approximation of given parametrized function classes is not only of curiosity value. It is now well understood that the statistical sample complexity of learning depends on the size of the approximating class. Ideally the approximating class is small whilst approximating well as large as possible an approximated class. Furthermore, in order to make statements such as in [1] regarding the overall degree of approximation achieved by statistical learning, the classical degree of approximation is required.
where we write $\theta = [\alpha, \beta, \gamma, \delta]$ and for simplicity set $\pi_\phi = \pi_\rho = \pi$. Thus $\dim \theta = 4\pi =: v$. For arbitrary but fixed $x$, $\psi$ is a rational function of $\theta$ of degree $k = \pi$. No barrier is needed, so $q = 0$, and hence by (3.4)
$$\sigma_\pi(\mathbb{F}_\alpha, \psi) \ge C_2 \left(\frac{1}{4\pi \log(\pi + 1)}\right)^{\alpha}.$$

3.2 OPEN PROBLEMS

An obvious further question is whether results as in the previous section hold for multivariable approximation, perhaps for multivariable rational approximation. A popular method of $d$-dimensional nonlinear spline approximation uses dyadic splines [2, 5, 8]. They are piecewise polynomial representations where the partition used is a dyadic decomposition. Given that such a partition $\Xi$ is a subset of a partition generated by the zero level set of a barrier polynomial of degree $\le |\Xi|$, can Vitushkin's results be applied to this situation? Note that in Vitushkin's theory it is the parametrization that is piecewise rational (PR), not the representation. What connections are there in general (if any) between PR representations and PR parametrizations?

4 DEGREE OF APPROXIMATION AND LEARNING

Determining the degree of approximation for given parametrized function classes is not only of curiosity value. It is now well understood that the statistical sample complexity of learning depends on the size of the approximating class. Ideally the approximating class is small whilst well approximating as large as possible an approximated class. Furthermore, in order to make statements such as in [1] regarding the overall degree of approximation achieved by statistical learning, the classical degree of approximation is required.

For regression purposes the metric used is $L_{p,\mu}$, where
$$\|f\|_{L_{p,\mu}} := \left(\int |f(x)|^p \, d\mu(x)\right)^{1/p}$$
and $\mu$ is a probability measure. Ideally one would like to avoid calculating the degree of approximation for an endless series of different function spaces. Fortunately, for the case of spline approximation (with free knots) this is not necessary because (thanks to Petrushev and others) there now exist both direct and converse theorems characterizing such approximation classes.

Let $S_n(f)_p$ denote the error of $n$-knot spline approximation in $L_p[0,1]$. Let $I$ denote the identity operator and $T(h)$ the translation operator ($T(h)(f, x) := f(x+h)$), and let $\Delta_h^k := (T(h) - I)^k$, $k = 1, 2, \ldots$, be the difference operators. The modulus of smoothness of order $k$ for $f \in L_p(\Omega)$ is
$$\omega_k(f, t)_p := \sup_{|h| \le t} \|\Delta_h^k f(\cdot)\|_{L_p(\Omega)}.$$

Petrushev [9] has obtained

Theorem 4.1 Let $\tau = (\alpha + 1/p)^{-1}$. Then
$$\sum_{n=1}^{\infty} \frac{\left[n^\alpha S_n(f)_p\right]^{\tau}}{n} < \infty \tag{4.1}$$
if and only if
$$\|f\|_{B_{\tau,\tau,k}^{\alpha}} < \infty. \tag{4.2}$$

The somewhat strange quantity in (4.2) is the norm of $f$ in a Besov space $B_{\tau,\tau,k}^{\alpha}$. Note that for $\alpha$ large enough, $\tau < 1$. That is, the smoothness is measured in an $L_\tau$ ($\tau < 1$) space. More generally [11], we have (on domain $[0,1]$)
$$\|f\|_{B_{p,q,k}^{\alpha}} := \left(\int_0^1 \left(t^{-\alpha}\, \omega_k(f, t)_p\right)^q \frac{dt}{t}\right)^{1/q}.$$
Besov spaces are generalizations of classical smoothness spaces such as Sobolev spaces (see [11]).

We are interested in approximation in $L_{p,\mu}$ and, following Birman and Solomjak [2], ask what degree of approximation in $L_{p,\mu}$ can be obtained when the knot positions are chosen according to $\mu$ rather than $f$. This is of interest because it makes the problem of determining the parameter values on the basis of observations linear.

Theorem 4.2 Let $f \in L_{p,\mu}$, where $\mu$ is absolutely continuous with $d\mu/dx \in L_\lambda$ for some $\lambda > 1$. Choose the $n$ knot positions of a spline approximant $v$ to $f$ on the basis of $\mu$ only. Then for all such $f$ there is a constant $c$ not dependent on $n$ such that
$$\|f - v\|_{L_{p,\mu}} \le c\, n^{-\alpha}\, \|f\|_{B_{\sigma;k}^{\alpha}}, \tag{4.3}$$
where $\sigma = (\alpha + (1 - \lambda^{-1})p^{-1})^{-1}$ and $p < \sigma$.
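As a numerical companion to the modulus of smoothness used in Theorem 4.1 (a sketch of our own; the grid sizes and the Riemann-sum quadrature are arbitrary choices, not part of the original text), $\omega_k(f, t)_p$ can be approximated on $[0,1]$ as follows:

```python
import numpy as np
from math import comb  # Python 3.8+

def modulus_of_smoothness(f, k, t, p, num_x=2000, num_h=200):
    """Approximate w_k(f, t)_p = sup_{|h| <= t} ||Delta_h^k f||_{L_p[0,1]}.

    f: vectorized callable on numpy arrays.
    Delta_h^k f(x) = sum_{j=0}^k (-1)^(k-j) C(k, j) f(x + j*h), evaluated
    only where x + k*h stays inside [0, 1].
    """
    best = 0.0
    for h in np.linspace(1e-6, t, num_h):
        x = np.linspace(0.0, 1.0 - k * h, num_x)
        diff = sum((-1) ** (k - j) * comb(k, j) * f(x + j * h)
                   for j in range(k + 1))
        # L_p norm approximated by a Riemann sum over [0, 1 - k*h].
        lp = (np.mean(np.abs(diff) ** p) * (1.0 - k * h)) ** (1.0 / p)
        best = max(best, lp)
    return best

# For a smooth f, the second-order modulus decays like t^2:
w = modulus_of_smoothness(np.sin, k=2, t=0.1, p=2)
```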
The constant $c$ depends on $\mu$ and $\lambda$. Furthermore, when $\sigma \le p$ with $p \ge 1$, for any $\alpha \le 1/\sigma$ there is a $v$ such that, for all $f$ under the conditions above,
$$\|f - v\|_{L_{p,\mu}} \le c\, n^{-\alpha - 1/p + 1/\sigma}\, \|f\|_{B_{\sigma;k}^{\alpha}}, \tag{4.4}$$
and again $c$ depends on $\mu$ and $\lambda$ but does not depend on $n$.

Proof. First we prove (4.3). Let $[0,1]$ be partitioned by $\Xi$. Thus if $v$ is the approximant to $f$ on $[0,1]$ we have
$$\|f - v\|_{L_{p,\mu}}^p = \sum_{\Delta \in \Xi} \|f - v\|_{L_{p,\mu}(\Delta)}^p = \sum_{\Delta \in \Xi} \int_\Delta |f(x) - v(x)|^p \, d\mu(x).$$
For any $\lambda \ge 1$, Hölder's inequality gives
$$\int_\Delta |f(x) - v(x)|^p \, d\mu(x) = \int_\Delta |f - v|^p \left(\frac{d\mu}{dx}\right) dx \le \left[\int_\Delta |f - v|^{p(1-\lambda^{-1})^{-1}} dx\right]^{1-\lambda^{-1}} \left[\int_\Delta \left(\frac{d\mu}{dx}\right)^{\lambda} dx\right]^{1/\lambda} = \|f - v\|_{L_\psi(\Delta)}^p \, \left\|\frac{d\mu}{dx}\right\|_{L_\lambda(\Delta)},$$
where $\psi = p(1 - \lambda^{-1})^{-1}$.

Now Petrushev and Popov [10, p. 216] have shown that there exists a polynomial of degree $k$ on $\Delta = [r, s]$ such that
$$\|f - v\|_{L_\psi(\Delta)}^p \le c\, \|f\|_{B(\Delta)}^p,$$
where
$$\|f\|_{B(\Delta)} := \left(\int_0^{(s-r)/k} \left(t^{-\alpha} \|\Delta_t^k f(\cdot)\|_{L_\sigma(r,\, s-kt)}\right)^{\sigma} \frac{dt}{t}\right)^{1/\sigma},$$
with $\sigma := (\alpha + \psi^{-1})^{-1}$, $\Delta_i = [r_i, s_i]$, and $0 < \psi < \infty$.

Let $|\Xi| =: n$ and choose $\Xi = \bigcup_i \Delta_i$ such that
$$\int_{\Delta_i} \left(\frac{d\mu}{dx}\right)^{\lambda} dx = \frac{1}{n} \left\|\frac{d\mu}{dx}\right\|_{L_\lambda(0,1)}^{\lambda},$$
so that $\|d\mu/dx\|_{L_\lambda(\Delta)} = n^{-1/\lambda} \|d\mu/dx\|_{L_\lambda(0,1)}$. Hence
$$\|f - v\|_{L_{p,\mu}}^p \le C\, \left\|\frac{d\mu}{dx}\right\|_{L_\lambda} \sum_{\Delta \in \Xi} n^{-1/\lambda} \|f\|_{B(\Delta)}^p. \tag{4.5}$$

Since (by hypothesis) $p < \sigma$, Hölder's inequality gives
$$\|f - v\|_{L_{p,\mu}}^p \le c\, \left\|\frac{d\mu}{dx}\right\|_{L_\lambda} \left[\sum_{\Delta \in \Xi} \left(\frac{1}{n}\right)^{\frac{\sigma}{\lambda(\sigma - p)}}\right]^{1 - \frac{p}{\sigma}} \left[\sum_{\Delta \in \Xi} \|f\|_{B(\Delta)}^{\sigma}\right]^{\frac{p}{\sigma}}.$$
Now for arbitrary partitions $\Xi$ of $[0,1]$, Petrushev and Popov [10, page 216] have shown
$$\sum_{\Delta \in \Xi} \|f\|_{B(\Delta)}^{\sigma} \le \|f\|_{B_{\sigma;k}^{\alpha}}^{\sigma},$$
where $B_{\sigma;k}^{\alpha} = B_{\sigma,\sigma,k}^{\alpha} = B([0,1])$. Hence
$$\|f - v\|_{L_{p,\mu}}^p \le c\, \left\|\frac{d\mu}{dx}\right\|_{L_\lambda} n^{1 - \frac{p}{\sigma} - \frac{1}{\lambda}}\, \|f\|_{B_{\sigma;k}^{\alpha}}^p$$
and so
$$\|f - v\|_{L_{p,\mu}} \le c\, n^{-\alpha}\, \|f\|_{B_{\sigma;k}^{\alpha}} \tag{4.6}$$
with $\sigma = (\alpha + \psi^{-1})^{-1}$, $\psi = p(1 - \lambda^{-1})^{-1}$; hence $\sigma = (\alpha + (1 - \lambda^{-1})/p)^{-1}$. Thus given $\alpha$ and $p$, choosing different $\lambda$ adjusts the $\sigma$ used to measure $f$ on the right-hand side of (4.6). This proves (4.3).

Note that because of the restriction that $p < \sigma$, $\alpha > 1$ is only achievable for $p < 1$ (which is rarely used in statistical regression [6]). Note also the effect of the term $\|d\mu/dx\|_{L_\lambda}$: when $\lambda = 1$ this is identically 1 (since $\mu$ is a probability measure); when $\lambda > 1$ it measures the departure from the uniform distribution, suggesting that the degree of approximation achievable under non-uniform distributions is worse than under uniform distributions.

Equation (4.4) is proved similarly. When $\sigma \le p$ with $p \ge 1$, for any $\alpha \le 1/\sigma$, we can set $\lambda := (1 - p/\sigma + p\alpha)^{-1} \ge 1$. From (4.5) we have
$$\|f - v\|_{L_{p,\mu}}^p \le c\, \left\|\frac{d\mu}{dx}\right\|_{L_\lambda} \sum_{\Delta \in \Xi} \left(\frac{1}{n}\right)^{1/\lambda} \|f\|_{B(\Delta)}^p \le c\, \left\|\frac{d\mu}{dx}\right\|_{L_\lambda} \left(\frac{1}{n}\right)^{1/\lambda} \left[\sum_{\Delta \in \Xi} \|f\|_{B(\Delta)}^{\sigma}\right]^{p/\sigma} \le c\, \left\|\frac{d\mu}{dx}\right\|_{L_\lambda}\, n^{-1 + \frac{p}{\sigma} - p\alpha}\, \|f\|_{B_{\sigma;k}^{\alpha}}^p,$$
which gives (4.4). ∎

5 CONCLUSIONS AND FURTHER WORK

In this paper a result of Vitushkin has been applied to "multi-layer" rational approximation. Furthermore, the degree of approximation achievable by spline approximation with free knots, when the knots are chosen according to a probability distribution, has been examined. The degree of approximation of neural networks, particularly multiple layer networks, is an interesting open problem. Ideally one would like both direct and converse theorems, completely characterizing the degree of approximation. If it turns out that from an approximation point of view neural networks are no better than dyadic splines (say), then there is a strong incentive to study the PAC-like learning theory (in the style of [7]) for such spline representations. We are currently working on this topic.

Acknowledgements

This work was supported in part by the Australian Telecommunications and Electronics Research Board and OTC. The first author thanks Federico Girosi for providing him with a copy of [4]. The second author was supported by an Australian Postgraduate Research Award.

References
[1] A. R. Barron, Approximation and Estimation Bounds for Artificial Neural Networks, to appear in Machine Learning, 1992.
[2] M. S. Birman and M. Z. Solomjak, Piecewise-Polynomial Approximations of Functions of the Classes $W_p^\alpha$, Mathematics of the USSR – Sbornik, 2 (1967), pp. 295–317.
[3] L. Chua and A.-C. Deng, Canonical Piecewise-Linear Representation, IEEE Transactions on Circuits and Systems, 35 (1988), pp. 101–111.
[4] R. A. DeVore, Degree of Nonlinear Approximation, in Approximation Theory VI, Volume 1, C. K. Chui, L. L. Schumaker and J. D. Ward, eds., Academic Press, Boston, 1991, pp. 175–201.
[5] R. A. DeVore, B. Jawerth and V. Popov, Compression of Wavelet Decompositions, to appear in American Journal of Mathematics, 1992.
[6] H. Ekblom, Lp-methods for Robust Regression, BIT, 14 (1974), pp. 22–32.
[7] D. Haussler, Decision Theoretic Generalizations of the PAC Model for Neural Net and Other Learning Applications, Report UCSC-CRL-90-52, Baskin Center for Computer Engineering and Information Sciences, University of California, Santa Cruz, 1990.
[8] P. Oswald, On the Degree of Nonlinear Spline Approximation in Besov–Sobolev Spaces, Journal of Approximation Theory, 61 (1990), pp. 131–157.
[9] P. P. Petrushev, Direct and Converse Theorems for Spline and Rational Approximation and Besov Spaces, in Function Spaces and Applications (Lecture Notes in Mathematics 1302), M. Cwikel, J. Peetre, Y. Sagher and H. Wallin, eds., Springer-Verlag, Berlin, 1988, pp. 363–377.
[10] P. P. Petrushev and V. A. Popov, Rational Approximation of Real Functions, Cambridge University Press, Cambridge, 1987.
[11] H. Triebel, Theory of Function Spaces, Birkhäuser Verlag, Basel, 1983.
[12] A. G. Vitushkin, Theory of the Transmission and Processing of Information, Pergamon Press, Oxford, 1961. Originally published as Otsenka slozhnosti zadachi tabulirovaniya (Estimation of the Complexity of the Tabulation Problem), Fizmatgiz, Moscow, 1959.
[13] R. C. Williamson and U. Helmke, Existence and Uniqueness Results for Neural Network Approximations, submitted, 1992.
Nonlinear Inverse Reinforcement Learning with Gaussian Processes

Zoran Popović, University of Washington, [email protected]
Sergey Levine, Stanford University, [email protected]
Vladlen Koltun, Stanford University, [email protected]

Abstract

We present a probabilistic algorithm for nonlinear inverse reinforcement learning. The goal of inverse reinforcement learning is to learn the reward function in a Markov decision process from expert demonstrations. While most prior inverse reinforcement learning algorithms represent the reward as a linear combination of a set of features, we use Gaussian processes to learn the reward as a nonlinear function, while also determining the relevance of each feature to the expert's policy. Our probabilistic algorithm allows complex behaviors to be captured from suboptimal stochastic demonstrations, while automatically balancing the simplicity of the learned reward structure against its consistency with the observed actions.

1 Introduction

Inverse reinforcement learning (IRL) methods learn a reward function in a Markov decision process (MDP) from expert demonstrations, allowing the expert's policy to be generalized to unobserved situations [7]. Each task is consistent with many reward functions, but not all rewards provide a compact, portable representation of the task, so the central challenge in IRL is to find a reward with meaningful structure [7]. Many prior methods impose structure by describing the reward as a linear combination of hand selected features [1, 12]. In this paper, we extend the Gaussian process model to learn highly nonlinear reward functions that still compactly capture the demonstrated behavior.

GP regression requires input-output pairs [11], and was previously used for value function approximation [10, 4, 2]. Our Gaussian Process Inverse Reinforcement Learning (GPIRL) algorithm only observes the expert's actions, not the rewards, so we extend the GP model to account for the stochastic relationship between actions and underlying rewards. This allows GPIRL to balance the simplicity of the learned reward function against its consistency with the expert's actions, without assuming the expert to be optimal. The learned GP kernel hyperparameters capture the structure of the reward, including the relevance of each feature. Once learned, the GP can recover the reward for the current state space, and can predict the reward for any unseen state space within the domain of the features.

Previous IRL algorithms generally learn the reward as a linear combination of features, either by finding a reward under which the expert's policy has a higher value than all other policies by a margin [7, 1, 12, 15], or else by maximizing the probability of the reward under a model of near-optimal expert behavior [6, 9, 17, 3]. While several margin-based methods learn nonlinear reward functions through feature construction [13, 14, 5], such methods assume optimal expert behavior. To the best of our knowledge, GPIRL is the first method to combine probabilistic reasoning about stochastic expert behavior with the ability to learn the reward as a nonlinear function of features, allowing it to outperform prior methods on tasks with inherently nonlinear rewards and suboptimal examples.

2 Inverse Reinforcement Learning Preliminaries

A Markov decision process is defined as a tuple $M = \{S, A, T, \gamma, r\}$, where $S$ is the state space, $A$ is the set of actions, $T_{ss'}^a$ is the probability of a transition from $s \in S$ to $s' \in S$ under action $a \in A$, $\gamma \in [0, 1)$ is the discount factor, and $r$ is the reward function.
The optimal policy $\pi^*$ maximizes the expected discounted sum of rewards $E\left[\sum_{t=0}^{\infty} \gamma^t r_{s_t} \mid \pi^*\right]$. In inverse reinforcement learning, the algorithm is presented with $M \setminus r$, as well as expert demonstrations, denoted $D = \{\zeta_1, \ldots, \zeta_N\}$, where $\zeta_i$ is a path $\zeta_i = \{(s_{i,0}, a_{i,0}), \ldots, (s_{i,T}, a_{i,T})\}$. The algorithm is also presented with features of the form $f: S \to \mathbb{R}$ that can be used to represent the unknown reward $r$. IRL aims to find a reward function $r$ under which the optimal policy matches the expert's demonstrations.

To this end, we could assume that the examples $D$ are drawn from the optimal policy $\pi^*$. However, real human demonstrations are rarely optimal. One approach to learning from a suboptimal expert is to use a probabilistic model of the expert's behavior. We employ the maximum entropy IRL (MaxEnt) model [17], which is closely related to linearly-solvable MDPs [3], and has been used extensively to learn from human demonstrations [16, 17]. Under this model, the probability of taking a path $\zeta$ is proportional to the exponential of the rewards encountered along that path. This model is convenient for IRL, because its likelihood is differentiable [17], and a complete stochastic policy uniquely determines the reward function [3]. Intuitively, such a stochastic policy is more deterministic when the stakes are high, and more random when all choices have similar value. Under this policy, the probability of an action $a$ in state $s$ can be shown to be proportional to the exponential of the expected total reward after taking the action, denoted $P(a|s) \propto \exp(Q_{sa}^r)$, where $Q^r = r + \gamma T V^r$. The value function $V^r$ is computed with a "soft" version of the familiar Bellman backup operator: $V_s^r = \log \sum_a \exp Q_{sa}^r$. The probability of $a$ in state $s$ is therefore normalized by $\exp V_s^r$, giving $P(a|s) = \exp(Q_{sa}^r - V_s^r)$. Detailed derivations of these equations can be found in prior work [16]. The complete log likelihood of the data under $r$ can be written as
$$\log P(D|r) = \sum_i \sum_t \log P(a_{i,t}|s_{i,t}) = \sum_i \sum_t \left(Q_{s_{i,t} a_{i,t}}^r - V_{s_{i,t}}^r\right) \tag{1}$$

While we can maximize Equation 1 directly to obtain $r$, such a reward is unlikely to exhibit meaningful structure, and would not be portable to novel state spaces. Prior methods address this problem by representing $r$ as a linear combination of a set of provided features [17]. However, if $r$ is not linear in the features, such methods are not sufficiently expressive. In the next section, we describe how Gaussian processes can be used to learn $r$ as a general nonlinear function of the features.
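The soft Bellman backup above admits a direct fixed-point implementation. The sketch below is our own minimal version (the iteration cap and tolerance are arbitrary choices, not from the paper); it takes a state reward vector and a transition tensor and returns the stochastic MaxEnt policy $P(a|s) = \exp(Q_{sa} - V_s)$:

```python
import numpy as np
from scipy.special import logsumexp

def soft_value_iteration(r, T, gamma, n_iters=1000, tol=1e-8):
    """Compute the MaxEnt policy for reward r (shape [S]) and transitions
    T (shape [A, S, S], with T[a, s, s'] = p(s' | s, a)).

    Iterates Q = r + gamma * T V and V_s = log sum_a exp(Q_sa) until V
    converges, then returns P(a|s) = exp(Q_sa - V_s).
    """
    n_actions, n_states, _ = T.shape
    V = np.zeros(n_states)
    for _ in range(n_iters):
        Q = r[None, :] + gamma * T @ V      # shape [A, S]
        V_new = logsumexp(Q, axis=0)        # "soft" maximum over actions
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = np.exp(Q - V[None, :])         # P(a|s); columns sum to 1
    return policy, V
```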
3 The Gaussian Process Inverse Reinforcement Learning Algorithm

GPIRL represents the reward as a nonlinear function of feature values. This function is modeled as a Gaussian process, and its structure is determined by its kernel function. The Bayesian GP framework provides a principled method for learning the hyperparameters of this kernel, thereby learning the structure of the unknown reward. Since the reward is not known, we use Equation 1 to specify a distribution over GP outputs, and learn both the output values and the kernel function.

In GP regression, we use noisy observations $y$ of the true underlying outputs $u$. GPIRL directly learns the true outputs $u$, which represent the rewards associated with feature coordinates $X_u$. These coordinates may simply be the feature values of all states or, as discussed in Section 5, a subset of all states. The rewards at states that are not included in this subset are inferred by the GP. We also learn the kernel hyperparameters $\theta$ in order to recover the structure of the reward. The most likely values of $u$ and $\theta$ are found by maximizing their probability under the expert demonstrations $D$:
$$P(u, \theta \mid D, X_u) \propto P(D, u, \theta \mid X_u) = \left[\int_r P(D \mid r)\, P(r \mid u, \theta, X_u)\, dr\right] P(u, \theta \mid X_u), \tag{2}$$
where $P(D|r)$ is the IRL term, $P(r|u, \theta, X_u)$ the GP posterior, and $P(u, \theta|X_u)$ the GP probability. The log of $P(D|r)$ is given by Equation 1, the GP posterior $P(r|u, \theta, X_u)$ is the probability of a reward function under the current values of $u$ and $\theta$, and $P(u, \theta|X_u)$ is the prior probability of a particular assignment to $u$ and $\theta$. The log of $P(u, \theta|X_u)$ is the GP log marginal likelihood, which favors simple kernel functions and values of $u$ that conform to the current kernel matrix [11]:
$$\log P(u, \theta \mid X_u) = -\frac{1}{2} u^T K_{u,u}^{-1} u - \frac{1}{2} \log |K_{u,u}| - \frac{n}{2} \log 2\pi + \log P(\theta) \tag{3}$$
The last term $\log P(\theta)$ is a hyperparameter prior, which is discussed in Section 4. The entries of the covariance matrix $K_{u,u}$ are given by the kernel function. In order to determine the relevance of each feature, we use the automatic relevance detection (ARD) kernel, with hyperparameters $\theta = \{\beta, \Lambda\}$:
$$k(x_i, x_j) = \beta \exp\left(-\frac{1}{2}(x_i - x_j)^T \Lambda (x_i - x_j)\right)$$
The hyperparameter $\beta$ is the overall variance, and the diagonal matrix $\Lambda$ specifies the weight on each feature. When $\Lambda$ is learned, less relevant features receive low weights, and more relevant features receive high weights. States distinguished by highly-weighted features can take on different reward values, while those that have similar values for all highly-weighted features take on similar rewards.

The GP posterior $P(r|u, \theta, X_u)$ is a Gaussian distribution with mean $K_{r,u}^T K_{u,u}^{-1} u$ and covariance $K_{r,r} - K_{r,u}^T K_{u,u}^{-1} K_{r,u}$, where $K_{r,u}$ is the covariance of the rewards at all states with the inducing point values $u$, located respectively at $X_r$ and $X_u$ [11]. Due to the complexity of $P(D|r)$, the integral in Equation 2 cannot be computed in closed form. Instead, we can consider this problem as analogous to sparse approximation for GP regression [8], where a small set of inducing points $u$ acts as the support for the full set of training points $r$. In this context, the Gaussian posterior distribution over $r$ is called the training conditional. One approximation is to assume that the training conditional is deterministic, that is, has variance zero [8]. This approximation is particularly appropriate in our case, because if the learned GP is used to predict a reward for a novel state space, the most likely reward would have the same form as the mean of the training conditional. Under this approximation, the integral disappears, and $r$ is set to $K_{r,u}^T K_{u,u}^{-1} u$. The resulting log likelihood is simply
$$\log P(D, u, \theta \mid X_u) = \log P(D \mid r = K_{r,u}^T K_{u,u}^{-1} u) + \log P(u, \theta \mid X_u), \tag{4}$$
where the first term is the IRL log likelihood and the second the GP log likelihood. Once the likelihood is optimized, the reward $r = K_{r,u}^T K_{u,u}^{-1} u$ can be used to recover the expert's policy on the entire state space. The GP can also predict the reward function for any novel state space in the domain of the features. The most likely reward for a novel state space is the mean posterior $K_{*,u}^T K_{u,u}^{-1} u$, where $K_{*,u}$ is the covariance of the new states and the inducing points. In our implementation, the likelihood is optimized with the L-BFGS method, with derivatives provided in the supplement. When the hyperparameters are learned, the likelihood is generally not convex. While this is not unusual for GP methods, it does mean that the method can suffer from local optima.
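As a small illustration of the quantities just defined (a sketch of our own; the names are ours, and it omits the hyperparameter learning itself), the ARD kernel and the deterministic training conditional $r = K_{r,u}^T K_{u,u}^{-1} u$ look as follows:

```python
import numpy as np

def ard_kernel(Xa, Xb, beta, lam):
    """ARD kernel k(x_i, x_j) = beta * exp(-0.5 (x_i - x_j)^T Lambda (x_i - x_j)),
    with the diagonal of Lambda given by the feature-weight vector lam.

    Xa: (na, d), Xb: (nb, d); returns the (na, nb) kernel matrix."""
    diff = Xa[:, None, :] - Xb[None, :, :]              # (na, nb, d)
    sq = np.sum(lam[None, None, :] * diff ** 2, axis=2)
    return beta * np.exp(-0.5 * sq)

def predict_reward(Xr, Xu, u, beta, lam):
    """Mean reward at all states Xr given inducing values u at Xu:
    r = K_{r,u}^T K_{u,u}^{-1} u (the deterministic training conditional)."""
    K_uu = ard_kernel(Xu, Xu, beta, lam)
    K_ru = ard_kernel(Xu, Xr, beta, lam)                # (m, S): inducing x states
    return K_ru.T @ np.linalg.solve(K_uu, u)
```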
In the supplement, we also describe a simple restart procedure we used to mitigate this problem.

4 Regularization and Hyperparameter Priors

In GP regression, a noise term is often added to the diagonal of the kernel matrix to account for noisy observations. Since GPIRL learns the noiseless underlying outputs $u$, there is no cause to add a noise term, which means that the kernel matrix $K_{u,u}$ can become singular. Intuitively, this indicates that two or more inducing points are deterministically covarying, and therefore redundant. To ensure that no inducing point is redundant, we assume that their positions in feature space $X_u$, rather than their values, are corrupted by white noise with variance $\sigma^2$. The expected squared difference in the $k$th feature values between two points $x_i$ and $x_j$ is then given by $(x_{ik} - x_{jk})^2 + 2\sigma^2$, and the new, regularized kernel function is given by
$$k(x_i, x_j) = \beta \exp\left(-\frac{1}{2}(x_i - x_j)^T \Lambda (x_i - x_j) - \mathbf{1}_{i \ne j}\, \sigma^2\, \mathrm{tr}(\Lambda)\right) \tag{5}$$
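A sketch of the regularized kernel in Equation 5 (again our own illustration, with invented names):

```python
import numpy as np

def regularized_ard_kernel(X, beta, lam, sigma2):
    """Regularized ARD kernel of Equation 5 on one set of points X (n, d):
    k(x_i, x_j) = beta * exp(-0.5 (x_i - x_j)^T Lambda (x_i - x_j)
                             - 1{i != j} * sigma2 * tr(Lambda)),
    with Lambda the diagonal matrix of feature weights lam."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]
    sq = 0.5 * np.sum(lam[None, None, :] * diff ** 2, axis=2)
    penalty = sigma2 * np.sum(lam) * (1.0 - np.eye(n))  # off-diagonal only
    return beta * np.exp(-(sq + penalty))

# With 2 * sigma^2 fixed to 1e-2 as in the text:
# K_uu = regularized_ard_kernel(X_u, beta=1.0, lam=np.ones(d), sigma2=0.5e-2)
```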
In our implementation, we chose Xu to contain the feature values of all states visited in the example paths, as well as additional random states added to raise |Xu | to a desired size. While this heuristic worked well in our experiments, we can also view the choice of Xu as analogous to the choice of the active set in sparse GP approximation. A number of methods have been proposed for selecting these sets [8], and applying such methods to GPIRL is a promising avenue for future work. 6 Alternative Kernels The particular choice of kernel function influences the structure of the learned reward. The stationary kernel in Equation 5 favors rewards that are smooth with respect to feature values. Other kernels can be used to learn other types of structure. For example, a reward function might have wide regions with uniform values, punctuated by regions of high-frequency variation, as is the case for piecewise constant rewards. A stationary kernel would have difficulty representing such structure. Instead, we can warp each coordinate xik of xi by a function wk (xik ) to give high resolution to one region, and low resolution everywhere else. One such function is a sigmoid centered at mk and scaled by `k : 1   wk (xik ) = k 1 + exp ? xik`?m k Replacing xi by w(xi ) in Equation 5, we get a regularized warped kernel of the form !   1X 2 2 ? ? k(xi , xj ) = ? exp ? ?kk (wk (xik ) ? wk (xjk )) + 1i6=j ? (wk (xik ) + wk (xjk )) 2 k The second term in the sum is the contribution of the noise to the expected distance. Assuming ? 2 is ?wk small, this value can be approximated to first order by setting wk? (xik ) = ?x + sk , where sk is an ik 4 additional parameter that increases the noise in the tails of the sigmoid to prevent degeneracies. The parameters m, `, and s are added to ? and jointly optimized with u and the other hyperparameters, using unit variance Gaussian priors for ` and s and gamma priors for m. Note that this procedure is not equivalent to merely fitting a sigmoid to the reward function, since the reward can still vary nonlinearly in the high resolution regions around each sigmoid center mk . The accompanying supplement includes details about the priors placed on the warp parameters in our implementation, a complete derivation of wk? , and the derivatives of the warped kernel function. During the optimization, as the sigmoid scales ` become small, the derivatives with respect to the sigmoid centers m fall to zero. If the centers have not yet converged to the correct values, the optimization will end in a local optimum. It is therefore more important to address local optima when using the warped kernel. As mentioned in Section 3, we mitigate the effects of local optima with a small number of random restarts. Details of the particular random restart technique we used can also be found in the supplement. We presented just one example of how an alternative kernel allows us to learn a reward with a particular structure. Many kernels have been proposed for GPs [11], and this variety of kernel functions can be used to apply GPIRL to new domains and to extend its generality and flexibility. 7 Experiments We compared GPIRL with prior methods on several IRL tasks, using examples sampled from the stochastic MaxEnt policy (see Section 2) as well as human demonstrations. Examples drawn from the stochastic policy can intuitively be viewed as noisy samples of an underlying optimal policy, while the human demonstrations contain the stochasticity inherent in human behavior. 
GPIRL was compared with the MaxEnt IRL algorithm [17] and FIRL [5], as well as a variant of MaxEnt with a sparsity-inducing Laplace prior, which we refer to as MaxEnt/Lp. We evaluated a variety of other margin-based methods, including Abbeel and Ng?s projection algorithm, MMP, MWAL, MMPBoost and LEARCH [1, 12, 15, 13, 14]. Since GPIRL, FIRL, and MaxEnt consistently produced better results, the other algorithms are not shown here, but are included in the supplementary result tables. We compare the algorithms using the ?expected value difference? score, which is a measure of how suboptimal the learned policy is under the true reward. To compute this score, we find the optimal deterministic policy under each learned reward, measure its expected sum of discounted rewards under the true reward function, and subtract this quantity from the expected sum of discounted rewards under the true policy. While we could also evaluate the optimal stochastic policies, this would unfairly penalize margin-based methods, which are unaware of the MaxEnt model. To determine how well each algorithm captured the structure of the reward function, we evaluated the learned reward on the environment on which it was learned, and on 4 additional random environments (denoted ?transfer?). Algorithms that do not express the reward function in terms of the correct features are expected to perform poorly on the transfer environments, even if they perform well on the training environment. Methods that correctly identify relevant features should perform well on both. For each environment, we evaluated the algorithms with both discrete and continuous-valued features. In the latter case, GPIRL used the warped kernel in Section 6 and FIRL, which requires discrete features, was not tested. Each test was repeated 8 times with different random environments. 7.1 Objectworld Experiments The objectworld is an N ? N grid of states with five actions per state, corresponding to steps in each direction and staying in place. Each action has a 30% chance of moving in a different random direction. Randomly placed objects populate the objectworld, and each is assigned one of C inner and outer colors. Object placement is randomized in the transfer environments, while N and C remain the same. There are 2C continuous features, each giving the Euclidean distance to the nearest object with a specific inner or outer color. In the discrete feature case, there are 2CN binary features, each one an indicator for a corresponding continuous feature being less than d ? {1, ..., N }. The true reward is positive in states that are both within 3 cells of outer color 1 and 2 cells of outer color 2, negative within 3 cells of outer color 1, and zero otherwise. Inner colors and all other outer colors are distractors. The algorithms were provided example paths of length 8, and the number of examples and colors was varied to determine their ability to handle limited data and distractors. 5 discrete features discrete features transfer FIRL 20 15 10 5 8 16 32 64 examples GPIRL MaxEnt/Lp FIRL 20 15 10 5 0 128 MaxEnt 25 expected value difference expected value difference expected value difference MaxEnt/Lp 4 30 GPIRL MaxEnt continuous features transfer 30 GPIRL 25 0 continuous features 30 4 8 16 32 examples 64 MaxEnt/Lp 20 15 10 5 0 128 GPIRL MaxEnt 25 expected value difference 30 4 8 16 32 64 examples MaxEnt/Lp 20 15 10 5 0 128 MaxEnt 25 4 8 16 32 examples 64 128 Figure 1: Results for 32?32 objectworlds with C = 2 and varying numbers of examples. 
Shading shows standard error. GPIRL learned accurate rewards that generalized well to new state spaces. discrete features discrete features transfer FIRL 20 15 10 5 2 4 6 8 colors 10 30 GPIRL 12 MaxEnt 25 expected value difference expected value difference expected value difference MaxEnt/Lp MaxEnt/Lp FIRL 20 15 10 5 0 continuous features transfer 30 GPIRL MaxEnt 25 0 continuous features 30 GPIRL 2 4 6 8 colors 10 MaxEnt/Lp 20 15 10 5 0 12 GPIRL MaxEnt 25 expected value difference 30 2 4 6 8 colors 10 12 MaxEnt 25 MaxEnt/Lp 20 15 10 5 0 2 4 6 8 colors 10 12 Figure 2: Objectworld evaluation with 32 examples and varying numbers of colors C. GPIRL was able to perform well even as the number of distractor features increased. Because of the large number of irrelevant features and the nonlinearity of the reward, this example is particularly challenging for methods that learn linear reward functions. With 16 or more examples, GPIRL consistently learned reward functions that performed as well as the true reward, as shown in Figure 1, and was able to sustain this performance as the number of distractors increased, as shown in Figure 2. While the performance of MaxEnt and FIRL also improved with additional examples, they were consistently outperformed by GPIRL. In the case of FIRL, this was likely due to the suboptimal expert examples. In the case of MaxEnt, although the Laplace prior improved the results, the inability to represent nonlinear rewards limited the algorithm?s accuracy. These issues are evident in Figure 3, which shows part of a reward function learned by each method. When using continuous features, the performance of MaxEnt suffered even more from the increased nonlinearity of the reward function, while GPIRL maintained a similar level of accuracy. True Reward outer color 1 objects GPIRL MaxEnt/Lp outer color 2 objects other objects (distractors) FIRL expert actions Figure 3: Part of a reward function learned by each algorithm on an objectworld. While GPIRL learned the correct reward function, MaxEnt was unable to represent the nonlinearities, and FIRL learned an overly complex reward under which the suboptimal expert would have been optimal. 6 discrete features discrete features transfer FIRL 40 30 20 10 4 8 16 32 examples GPIRL MaxEnt/Lp FIRL 40 30 20 10 0 64 MaxEnt 50 expected value difference expected value difference expected value difference MaxEnt/Lp 2 60 GPIRL MaxEnt continuous features transfer 60 GPIRL 50 0 continuous features 60 2 4 8 16 examples 32 MaxEnt/Lp 40 30 20 10 0 64 GPIRL MaxEnt 50 expected value difference 60 2 4 8 16 examples 32 MaxEnt/Lp 40 30 20 10 0 64 MaxEnt 50 2 4 8 16 examples 32 64 Figure 4: Results for 64-car-length highways with varying example counts. While GPIRL achieved only modest improvement over prior methods on the training environment, the large improvement in the transfer tests indicates that the underlying reward structure was captured more accurately. discrete features discrete features transfer FIRL 40 30 20 10 4 8 examples GPIRL 16 MaxEnt 50 expected value difference expected value difference expected value difference MaxEnt/Lp 2 60 GPIRL MaxEnt MaxEnt/Lp FIRL 40 30 20 10 0 continuous features transfer 60 GPIRL 50 0 continuous features 60 2 4 8 examples MaxEnt/Lp 40 30 20 10 0 16 GPIRL MaxEnt 50 expected value difference 60 2 4 8 examples 16 MaxEnt 50 MaxEnt/Lp 40 30 20 10 0 2 4 8 examples 16 Figure 5: Evaluation on the highway environment with human demonstrations. 
GPIRL learned a reward function that more accurately reflected the true policy the expert was attempting to emulate. 7.2 Highway Driving Behavior In addition to the objectworld environment, we evaluated the algorithms on more concrete behaviors in the context of a simple highway driving simulator, modeled on the experiment in [5] and similar evaluations in other work [1]. The task is to navigate a car on a three-lane highway, where all other vehicles move at a constant speed. The agent can switch lanes and drive at up to four times the speed of traffic. Other vehicles are either civilian or police, and each vehicle can be a car or motorcycle. Continuous features indicate the distance to the nearest vehicle of a specific class (car or motorcycle) or category (civilian or police) in front of the agent, either in the same lane, the lane to the right, the lane to the left, or any lane. Another set of features gives the distance to the nearest such vehicle in a given lane behind the agent. There are also features to indiciate the current speed and lane. Discrete features again discretize the continuous features, with distances discretized in the same way as in the objectworld. In this section, we present results from synthetic and manmade demonstrations of a policy that drives as fast as possible, but avoids driving more than double the speed of traffic within two car-lengths of a police vehicle. Due to the connection between the police and speed features, the reward for this policy is nonlinear. We also evaluated a second policy that instead avoids driving more than double the speed of traffic in the rightmost lane. The results for this policy were similar to the first, and are included in the supplementary result tables. Figure 4 shows a comparison of GPIRL and prior algorithms on highways with varying numbers of 32-step synthetic demonstrations of the ?police? task. GPIRL only modestly outperformed prior methods on the training environments with discrete features, but achieved large improvement on the transfer experiment. This indicates that, while prior algorithms learned a reasonable reward, this reward was not expressed in terms of the correct features, and did not generalize correctly. With continuous features, the nonlinearity of the reward was further exacerbated, making it difficult for linear methods to represent it even on the training environment. In Figure 5, we also evaluate how GPIRL and prior methods were able to learn the ?police? behavior from human demonstrations. 7 MaxEnt/Lp True Reward FIRL GPIRL Figure 6: Highway reward functions learned from human demonstration. Road color indicates the reward at the highest speed, when the agent should be penalized for driving fast near police vehicles. The reward learned by GPIRL most closely resembles the true one. Although the human demonstrations were suboptimal, GPIRL was still able to learn a reward function that reflected the true policy more accurately than prior methods. Furthermore, the similarity of GPIRL?s performance with the human and synthetic demonstrations suggests that its model of suboptimal expert behavior is a reasonable reflection of actual human suboptimality. An example of rewards learned from human demonstrations is shown in Figure 6. 
Example videos of the learned policies and human demonstrations, as well as source code for our implementation of GPIRL, can be found at http://graphics.stanford.edu/projects/gpirl/index.htm 8 Discussion and Future Work We presented an algorithm for inverse reinforcement learning that represents nonlinear reward functions with Gaussian processes. Using a probabilistic model of a stochastic expert with a GP prior on reward values, our method is able to recover both a reward function and the hyperparameters of a kernel function that describes the structure of the reward. The learned GP can be used to predict a reward function consistent with the expert on any state space in the domain of the features. In experiments with nonlinear reward functions, GPIRL consistently outperformed prior methods, especially when generalizing the learned reward to new state spaces. However, like many GP models, the GPIRL log likelihood is multimodal. When using the warped kernel function, a random restart procedure was needed to consistently find a good optimum. More complex kernels might suffer more from local optima, potentially requiring more robust optimization methods. It should also be noted that our experiments were intentionally chosen to be challenging for algorithms that construct rewards as linear combinations. When good features that form a linear basis for the reward are already known, prior methods such as MaxEnt would be expected to perform comparably to GPIRL. However, it is often difficult to ensure this is the case in practice, and previous work on margin-based methods suggests that nonlinear methods often outperform linear ones [13, 14]. When presented with a novel state space, GPIRL currently uses the mean posterior of the GP to estimate the reward function. In principle, we could leverage the fact that GPs learn distributions over functions to account for the uncertainty about the reward in states that are different from any of the inducing points. For example, such an approach could be used to learn a ?conservative? policy that aims to achieve high rewards with some degree of certainty, avoiding regions where the reward distribution has high variance. In an interactive training setting, such a method could also inform the expert about states that have high reward variance and require additional demonstrations. More generally, by introducing Gaussian processes into inverse reinforcement learning, GPIRL can benefit from the wealth of prior work on Gaussian process regression. For instance, we apply ideas from sparse GP approximation in the use of a small set of inducing points to learn the reward function in time linear in the number of states. A substantial body of prior work discusses techniques for automatically choosing or optimizing these inducing points [8], and such methods could be incorporated into GPIRL to learn reward functions with even smaller active sets. We also demonstrate how different kernels can be used to learn different types of reward structure, and further investigation into the kinds of kernel functions that are useful for IRL is another exciting avenue for future work. Acknowledgments. We thank Andrew Y. Ng and Krishnamurthy Dvijotham for helpful feedback and discussion. This work was supported by NSF Graduate Research Fellowship DGE-0645962. 8 References [1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML ?04: Proceedings of the 21st International Conference on Machine Learning, 2004. [2] M. P. Deisenroth, C. E. 
[2] M. P. Deisenroth, C. E. Rasmussen, and J. Peters. Gaussian process dynamic programming. Neurocomputing, 72(7–9):1508–1524, 2009.
[3] K. Dvijotham and E. Todorov. Inverse optimal control with linearly-solvable MDPs. In ICML '10: Proceedings of the 27th International Conference on Machine Learning, pages 335–342, 2010.
[4] Y. Engel, S. Mannor, and R. Meir. Reinforcement learning with Gaussian processes. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 201–208, 2005.
[5] S. Levine, Z. Popović, and V. Koltun. Feature construction for inverse reinforcement learning. In Advances in Neural Information Processing Systems 23, 2010.
[6] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Uncertainty in Artificial Intelligence (UAI), 2007.
[7] A. Y. Ng and S. J. Russell. Algorithms for inverse reinforcement learning. In ICML '00: Proceedings of the 17th International Conference on Machine Learning, pages 663–670, 2000.
[8] J. Quiñonero Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, 2005.
[9] D. Ramachandran and E. Amir. Bayesian inverse reinforcement learning. In IJCAI '07: Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 2586–2591, 2007.
[10] C. E. Rasmussen and M. Kuss. Gaussian processes in reinforcement learning. In Advances in Neural Information Processing Systems 16, 2003.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2005.
[12] N. Ratliff, J. A. Bagnell, and M. A. Zinkevich. Maximum margin planning. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 729–736, 2006.
[13] N. Ratliff, D. Bradley, J. A. Bagnell, and J. Chestnutt. Boosting structured prediction for imitation learning. In Advances in Neural Information Processing Systems 19, 2007.
[14] N. Ratliff, D. Silver, and J. A. Bagnell. Learning to search: Functional gradient techniques for imitation learning. Autonomous Robots, 27(1):25–53, 2009.
[15] U. Syed and R. Schapire. A game-theoretic approach to apprenticeship learning. In Advances in Neural Information Processing Systems 20, 2008.
[16] B. D. Ziebart. Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy. PhD thesis, Carnegie Mellon University, 2010.
[17] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI Conference on Artificial Intelligence (AAAI 2008), pages 1433–1438, 2008.
A Convergence Analysis of Log-Linear Training

Hermann Ney, Computer Science Department, RWTH Aachen University, 52056 Aachen, Germany, [email protected]
Simon Wiesler, Computer Science Department, RWTH Aachen University, 52056 Aachen, Germany, [email protected]

Abstract

Log-linear models are widely used probability models for statistical pattern recognition. Typically, log-linear models are trained according to a convex criterion. In recent years, the interest in log-linear models has greatly increased. The optimization of log-linear model parameters is costly and therefore an important topic, in particular for large-scale applications. Different optimization algorithms have been evaluated empirically in many papers. In this work, we analyze the optimization problem analytically and show that the training of log-linear models can be highly ill-conditioned. We verify our findings on two handwriting tasks. By making use of our convergence analysis, we obtain good results on a large-scale continuous handwriting recognition task with a simple and generic approach.

1 Introduction

Log-linear models, also known as maximum entropy models or multiclass logistic regression, have found a wide range of applications in machine learning. Special cases of log-linear models include logistic regression for binary class problems and conditional random fields [10] for structured data, in particular sequential data. In recent years, the interest in log-linear models has increased greatly. Different models of log-linear form have been applied to natural language processing tasks, e.g. for segmentation [10], parsing [21], and information extraction [16], and many other tasks. The most frequently mentioned advantages of log-linear models are, first, their discriminative nature, and second, the possibility to use arbitrary and correlated features in log-linear models. Furthermore, the conventional training of log-linear models is a strictly convex optimization problem. Thus, the global optimum of the training criterion is unique and no other local optima exist. Steepest descent and other gradient-based optimization algorithms are guaranteed to converge to the unique global optimum from any initialization. The probabilistic approach of log-linear models is beneficial in many practical applications. For example, log-linear models are directly defined as multiclass models and can be integrated into more complex classifiers.

For large datasets, the costs of training log-linear models are very high and limit their application range. Therefore, the efficient optimization of log-linear models is of great interest. The most widely used algorithms for this problem can be divided into three categories. Bound optimization algorithms, as generalized iterative scaling (GIS) [4] and variants of GIS, have been used in earlier works. Later it has been found by several authors [17, 14, 21] that these algorithms converge very slowly and are inferior to gradient-based optimization algorithms. First-order optimization algorithms require the evaluation of the gradient of the objective function. The simplest algorithm of this category is steepest descent. The more sophisticated conjugate gradient (CG) and L-BFGS are now the standard choices for the training of log-linear models. Newton's method converges rapidly in a neighborhood of the optimum. For large-scale problems it is in general not applicable, because it requires the evaluation and storage of the Hessian matrix.
So far, a rigorous mathematical analysis of the optimization problem encountered in the training of log-linear models has been missing. From optimization theory it is known that the convergence rate of first-order optimization algorithms depends on the condition number of the Hessian matrix at the optimum. (Recall that the condition number of a positive definite matrix $A$ is the ratio of its largest and smallest eigenvalues: $\kappa(A) = \lambda_{\max}(A)/\lambda_{\min}(A)$.) The dependence of the convergence behavior on the condition number is strongest for steepest descent. For high condition numbers, steepest descent is useless in practice [3, Chapter 9.3]. It can be shown that more sophisticated gradient-based optimization algorithms such as CG and L-BFGS depend on the condition number as well [18, Chapter 5.1], [18, Chapter 9.1]. Apart from numerical reasons, the convergence behavior of Newton's method is completely independent of the condition number. In practice, it is not, because computing Newton's search direction requires solving a system of linear equations, which is more difficult for problems with high condition number [3, Chapter 9.5].

In this paper, we derive an estimate for the condition number of the objective function used for training of log-linear models. Our analysis shows that convergence can be accelerated by feature transformations. We verify our analytic results on two classification tasks. One is a small digit recognition task, the other a large-scale continuous handwriting recognition task with real-life data. The experiments show that in extreme cases, log-linear training can be so ill-conditioned that a usable model can only be found from a reasonable initialization. On the other hand, when care is taken, we obtain good results with a conceptually simple and generic approach.

The remaining paper is structured as follows: In the next section, we introduce the log-linear model and the training criterion. In Section 3, we give an overview of related work. Our novel convergence analysis is presented in Section 4. Experimental results are reported in Section 5. In the last section, we discuss our results.

2 Model Definition and Training Criterion

In this section, the log-linear model is defined and the necessary notation is introduced. Let $X \subset \mathbb{R}^d$ denote the observation space and $C = \{1, \ldots, C\}$ a finite set of classes. A log-linear model with parameters $\lambda = (\lambda_1; \ldots; \lambda_C) \in \mathbb{R}^{d \cdot C}$ is a model for class-posterior probabilities of the form

  $p_\lambda(c|x) = \dfrac{\exp(\lambda_c^T x)}{\sum_{c' \in C} \exp(\lambda_{c'}^T x)}$ .   (1)

A log-linear model induces a decision rule via

  $r : X \to C, \quad x \mapsto \operatorname{argmax}_{c \in C} p_\lambda(c|x) = \operatorname{argmax}_{c \in C} \lambda_c^T x$ .   (2)

The decision boundaries of log-linear models are linear. Non-linear decision boundaries can be achieved by embedding observations into a higher dimensional space. The penalized maximum likelihood criterion is regarded as the natural training criterion for log-linear models. Let $(x_n, c_n)_{n=1,\ldots,N}$ denote the training sample. Then the training criterion of log-linear models is an unconstrained optimization problem of the form

  $\hat{\lambda} = \operatorname{argmin}_{\lambda \in \mathbb{R}^{d \cdot C}} F(\lambda)$, with $F : \mathbb{R}^{d \cdot C} \to \mathbb{R}$, $F(\lambda) = -\dfrac{1}{N} \sum_{n=1}^N \log p_\lambda(c_n|x_n) + \dfrac{\alpha}{2} \|\lambda\|_2^2$ .   (3)

Here, $F$ is the objective function, and $\alpha \ge 0$ the regularization constant. In the following, we refer to the optimization of the parameters of log-linear models as log-linear training. The first and second partial derivatives of the objective function for $1 \le c, \bar{c} \le C$ and $1 \le j, \bar{j} \le d$ are:

  $\dfrac{\partial F}{\partial \lambda_{c,j}}(\lambda) = \dfrac{1}{N} \sum_{n=1}^N \left( p_\lambda(c|x_n) - \delta(c, c_n) \right) x_{n,j} + \alpha \lambda_{c,j}$ ,   (4)

  $\dfrac{\partial^2 F}{\partial \lambda_{c,j} \, \partial \lambda_{\bar{c},\bar{j}}}(\lambda) = \dfrac{1}{N} \sum_{n=1}^N p_\lambda(c|x_n) \left( \delta(c, \bar{c}) - p_\lambda(\bar{c}|x_n) \right) x_{n,j} \, x_{n,\bar{j}} + \alpha \, \delta(c, \bar{c}) \, \delta(j, \bar{j})$ .   (5)

Here, $\delta$ denotes the Kronecker delta. It can be shown that the Hessian matrix of $F$ is positive semidefinite, and strictly positive definite for $\alpha > 0$. Thus, the optimization problem (3) is convex, respectively strictly convex (see e.g. [22]).
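As a concrete reference point, the objective (3) and gradient (4) can be written in a few lines of NumPy. The following sketch is ours, not the authors' implementation; names are chosen for illustration, and the final assertion checks (4) against a finite difference.

```python
import numpy as np

def loglinear_objective(lam, X, labels, alpha):
    """F(lambda) of eq. (3) and its gradient, eq. (4).

    lam: (C, d) parameters, X: (N, d) observations,
    labels: (N,) class indices, alpha: regularization constant.
    """
    N = X.shape[0]
    scores = X @ lam.T
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)             # p_lambda(c | x_n)
    F = -np.log(p[np.arange(N), labels]).mean() + 0.5 * alpha * (lam ** 2).sum()
    p[np.arange(N), labels] -= 1.0                # p_lambda(c|x_n) - delta(c, c_n)
    grad = p.T @ X / N + alpha * lam              # eq. (4), one row per class
    return F, grad

# Finite-difference check of the gradient on random data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 3)), rng.integers(0, 4, size=50)
lam, d = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
eps = 1e-6
F0, g = loglinear_objective(lam, X, y, alpha=0.1)
F1, _ = loglinear_objective(lam + eps * d, X, y, alpha=0.1)
assert abs((F1 - F0) / eps - (g * d).sum()) < 1e-4
```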
3 Related Work

In earlier works, e.g. [16, 10], the optimization problem (3) has been solved with generalized iterative scaling (GIS) [4] or improved iterative scaling [10]. Since then, it has been shown in several works that gradient-based optimization algorithms are far superior to iterative scaling methods. Minka [17] showed for logistic regression that iterative scaling methods perform poorly in comparison to conjugate gradient (CG). Although Minka performed his experiments only on artificial data with quite low dimensional features and a small number of observations, other authors came to similar findings. Malouf [14] performed experiments with (multiclass) log-linear models on typical natural language processing tasks. Like Minka, he found that CG outperforms iterative scaling methods. Furthermore, he obtained best results with L-BFGS [12], which today is considered the best algorithm for log-linear training. One of the first applications of CRFs to large-scale problems is by Sha and Pereira [21]. They confirmed again that L-BFGS is superior to CG and far superior to GIS.

All of the above mentioned papers concentrated on the empirical comparison of the performance of various optimization algorithms. The theoretical analysis of the optimization problem is very limited. Salakhutdinov [20] derived a convergence analysis for bound optimization algorithms such as GIS and showed that GIS converges extremely slowly when features are highly correlated and far from the origin. The disadvantage of Salakhutdinov's analysis is that, for log-linear models, it concerns only GIS, which now is known to perform very badly in practice. The effect of correlation on the difficulty of the optimization problem has been noted by several authors, though not analyzed in detail, e.g. by Minka [17]. An interesting connection is the convergence analysis by LeCun et al. for neural network training [11]. Their analysis differs in a number of aspects from our analysis. Interestingly, we come to similar conclusions for the convergence behavior of log-linear training as LeCun et al. for neural network training. A comparison to their work is given in the discussion.

4 Convergence Analysis of Log-Linear Model Training

This section contains our theoretical result. We derive an estimate of the eigenvalues of the Hessian of log-linear training, which determine the convergence behavior of gradient-based optimization algorithms. First, we calculate the eigenvalues of the Hessian in terms of the eigenvalues of the uncentered covariance matrix. Our new Theorems 1 and 2 give lower and upper bounds for the condition number of the uncentered covariance matrix. The analysis of the case with regularization is based on the analysis of the unregularized case.

4.1 The Unregularized Case

Let $\hat{\lambda}$ be the limit of the optimization algorithm applied to problem (3) without regularization ($\alpha = 0$). The Hessian matrix of the objective function at the optimum depends on the posterior probabilities $p_{\hat{\lambda}}(c|x)$, which are of course unknown. In the following, we consider a simpler problem. We derive the eigenvalues of the Hessian at $\lambda_0 = 0$.
If the quadratic approximation of $F$ at $\lambda_0$ is good, the Hessian does not change strongly from $\lambda_0$ to $\hat{\lambda}$, and the eigenvalues of $H_F(\lambda_0)$ are close to those of $H_F(\hat{\lambda})$. This enables us to draw conclusions about the convergence behavior of gradient-based optimization algorithms. The experiments in Section 5 justify our assumption. All experimental results are in accordance with the theoretical results.

For $\lambda_0 = 0$, the posterior probabilities are uniform, i.e. $p_{\lambda_0}(c|x) = C^{-1}$. Hence,

  $\dfrac{\partial^2 F}{\partial \lambda_{c,j} \, \partial \lambda_{\bar{c},\bar{j}}}(\lambda_0) = \dfrac{1}{N} \sum_{n=1}^N C^{-1} \left( \delta(c, \bar{c}) - C^{-1} \right) x_{n,j} \, x_{n,\bar{j}}$ .   (6)

The Hessian matrix can be written as a Kronecker product (see e.g. [8]): $H_F(\lambda_0) = S \otimes X$. Here, $S \in \mathbb{R}^{C \times C}$ is defined by $S = C^{-1} \left( I_C - C^{-1} \mathbf{1}_C \right)$, where $I_C \in \mathbb{R}^{C \times C}$ is the identity matrix, and $\mathbf{1}_C \in \mathbb{R}^{C \times C}$ denotes the matrix where all entries are equal to one. The matrix $X \in \mathbb{R}^{d \times d}$ is the uncentered covariance matrix: $X = \frac{1}{N} \sum_{n=1}^N x_n x_n^T$. The eigenvalues of $S$ can be computed easily:

  $\lambda_1(S) = 0, \quad \lambda_2(S) = C^{-1}$ .   (7)

Let $0 \le \lambda_1(X) \le \ldots \le \lambda_d(X)$ denote the eigenvalues of $X$. The eigenvalues of the Kronecker product $S \otimes X$ are of the form $\lambda_i(S) \lambda_j(X)$ (see [8, Theorem 4.2.12]). Therefore, the spectrum of the Hessian is determined by the eigenvalues of $X$:

  $\sigma(H_F(\lambda_0)) = \{0\} \cup \{C^{-1}\lambda_1(X), \ldots, C^{-1}\lambda_d(X)\}$ .   (8)

A difficulty in the analysis of the unregularized case is that the objective function is only convex, but not strictly convex. This is caused by the invariances of log-linear models. For instance, shifting all parameter vectors by a constant does not change the posterior probabilities. In addition, singularities appear as a result of linear dependencies in the features. Thus, one of the eigenvalues of the Hessian at the optimum is zero and the condition number is not defined. Intuitively, the convergence rate should not depend on the eigenvalue zero, since the objective function is constant in the direction of the corresponding eigenvectors. The classic proof about the convergence rate of steepest descent for quadratic functions with the Kantorovich inequality (see [13, p. 218]) can be generalized directly to the singular case. The convergence rate depends on the ratio of the largest and the smallest non-zero eigenvalue. Because of space constraints we omit this proof here. An analogous result was shown by Notay [19] for the application of CG to solving systems of linear equations, which is equivalent to the minimization of quadratic functions. All results about the convergence behavior of conjugate gradient extend to the singular case if, instead of the complete spectrum, only the non-zero eigenvalues are considered. Therefore, Notay defines the condition number of a singular matrix as the ratio of its largest eigenvalue and its smallest non-zero eigenvalue. In the following, we adopt this definition of the condition number. The condition number of the Hessian is then:

  $\kappa(H_F(\lambda_0)) = \kappa(X) = \dfrac{\lambda_d(X)}{\min_{i : \lambda_i(X) \ne 0} \lambda_i(X)}$ .   (9)

In the following subsection, we analyze the condition number $\kappa(X)$.

4.2 The Eigenvalues of X

The dependence of the convergence behavior on the properties of $X$ is in accordance with experimental observations. Other researchers have noted before that the use of correlated features leads to slower convergence [21]. Minka [17] noted that convergence slows down when adding a constant to the features, because this "introduces correlation, in the sense that X has significant off-diagonals." How can we verify these findings formally? The following theorem concerns the case of uncorrelated features.
The proof is an application of Weyl's inequalities (see [9, Theorem 4.3.7]).

Theorem 1. Suppose the features $x_i$, $1 \le i \le d$, are uncorrelated with respect to the empirical distribution. Let $\mu_i$ and $\sigma_i^2$ denote the empirical mean and variance of $x_i$ for $1 \le i \le d$. Without loss of generality, we assume that the features are ordered such that $\sigma_1^2 \le \ldots \le \sigma_d^2$. Then the condition number of $X = \frac{1}{N} \sum_{n=1}^N x_n x_n^T$ is bounded by

  $\dfrac{\max\{\sigma_1^2 + \|\mu\|_2^2, \; \sigma_d^2 + \mu_d^2\}}{\min\{\sigma_2^2, \; \sigma_1^2 + \mu_1^2\}} \;\le\; \kappa(X) \;\le\; \dfrac{\sigma_d^2 + \|\mu\|_2^2}{\sigma_1^2}$ .   (10)

Proof of Theorem 1. Since the features are uncorrelated, we have

  $X = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_d^2) + \mu\mu^T \stackrel{\text{def}}{=} A + B$ .   (11)

The eigenvalues of the sum of two Hermitian matrices can be estimated with Weyl's inequalities. Let $\lambda_j(M)$ denote the $j$-th eigenvalue in ascending order of a Hermitian $d \times d$ matrix $M$. Weyl's inequalities state that for all Hermitian $d \times d$ matrices $A$, $B$ and all $j$, $k$:

  $\lambda_{j+k-d}(A + B) \ge \lambda_j(A) + \lambda_k(B)$ ,   (12)
  $\lambda_{j+k-1}(A + B) \le \lambda_j(A) + \lambda_k(B)$ .   (13)

The eigenvalues of $A$ are the diagonal elements $\lambda_j(A) = \sigma_j^2$. $B$ is a rank-one matrix with the eigenvalues $\lambda_d(B) = \|\mu\|_2^2$ and $\lambda_j(B) = 0$ for $1 \le j \le d - 1$. The bounds for $\kappa(X)$ follow with the application of (13) and (12) to the smallest and largest eigenvalue. For instance, the upper bound on the condition number follows from the application of (12) with $j = k = d$ to the largest eigenvalue and (13) with $j = k = 1$ to the lowest eigenvalue. The proof of the lower bound is analogous. The bound is sharpened by using the fact that every diagonal element of $X$ is an upper bound for the smallest eigenvalue and a lower bound for the largest eigenvalue (see [9, p. 181]).

Analyzing the general case of correlated and unnormalized features is more difficult. The idea of the following theorem is to regard the off-diagonals as a perturbation of the diagonal matrix. This case can be analyzed with Geršgorin's circle theorem [9, Theorem 6.1.1], which states that all eigenvalues lie in circles around the diagonal entries of the matrix.

Theorem 2. Let $\mu_i$ and $\sigma_i^2$ denote the empirical mean and variance of $x_i$ for $1 \le i \le d$ and assume that $\sigma_1^2 \le \ldots \le \sigma_d^2$. Let

  $R_i = \sum_{j, \, j \ne i} |\operatorname{Cov}(x_j, x_i)|$   (14)

denote the radius of the $i$-th Geršgorin circle. Then the largest and smallest eigenvalues of $X = \frac{1}{N} \sum_{n=1}^N x_n x_n^T$ are bounded by:

  $\sigma_1^2 - R_1 \;\le\; \lambda_1(X) \;\le\; \min\{\sigma_1^2 + \mu_1^2, \; \sigma_d^2 + R_d\}$ ,   (15)
  $\max\{\sigma_d^2 + \mu_d^2, \; \sigma_1^2 - R_1 + \|\mu\|_2^2\} \;\le\; \lambda_d(X) \;\le\; \sigma_d^2 + R_d + \|\mu\|_2^2$ .   (16)

The proof of Theorem 2 is a direct generalization of Theorem 1. In contrast to Theorem 1, only the bounds for the eigenvalues of $A$ obtained by Geršgorin's theorem are known instead of the exact eigenvalues. For strongly correlated features, the eigenvalues can be distributed almost arbitrarily according to the bounds (15) and (16). For weakly correlated features, the bounds are tighter. In particular, for normalized features and $R_1 < 1$, Theorem 2 implies:

  $1 \;\le\; \kappa(X) \;\le\; \dfrac{1 + R_d}{1 - R_1}$ .   (17)

This shows that the best conditioning of the optimization problem is obtained for uncorrelated and normalized features. Conversely, our analysis shows that log-linear training can be accelerated by decorrelating the features and normalizing their means and variances, i.e. after whitening of the data.
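The effect predicted by Theorems 1 and 2 is easy to observe numerically. The following sketch (ours, on assumed synthetic data) compares $\kappa(X)$ for raw, mean/variance-normalized, and whitened versions of the same correlated features with a large mean offset.

```python
import numpy as np

def cond_uncentered(Z):
    """kappa(X) for X = (1/N) sum_n x_n x_n^T, using the ratio of the
    largest to the smallest non-zero eigenvalue, as in eq. (9)."""
    lam = np.linalg.eigvalsh(Z.T @ Z / Z.shape[0])
    lam = lam[lam > 1e-10 * lam.max()]
    return lam.max() / lam.min()

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))                        # mixing -> correlated features
raw = rng.normal(size=(10000, 5)) @ A.T + 10.0     # features far from the origin

norm = (raw - raw.mean(0)) / raw.std(0)            # mean and variance normalization
evals, evecs = np.linalg.eigh(np.cov(raw, rowvar=False))
white = (raw - raw.mean(0)) @ evecs / np.sqrt(evals)   # whitening

for name, Z in [("raw", raw), ("normalized", norm), ("whitened", white)]:
    print(name, cond_uncentered(Z))
# kappa(raw) is huge, normalization shrinks it, whitening gives kappa close to 1.
```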
4.3 The Regularized Case

In the following, we investigate the regularized training criterion, i.e. the objective function (3) with $\alpha > 0$. Since the Hessian of the $\ell_2$-regularization term is a multiple of the identity, the eigenvalues of the regularization term and the loss term can be added. This has an important consequence. In the unregularized case, all non-zero eigenvalues depend on the eigenvalues of $X$. In the regularized case, the eigenvalue zero changes to $\alpha$, which is then the smallest non-zero eigenvalue of the Hessian. Therefore, the condition number depends only on the largest eigenvalue of $X$:

  $\kappa(H_F(\lambda_0)) = \dfrac{C^{-1}\lambda_d(X) + \alpha}{\alpha}$ .   (18)

This shows that for large regularization parameters, the condition number is close to one and convergence is fast. On the other hand, for small regularization parameters, the condition number gets very large, even if $X$ is well-conditioned. At first glance, it seems paradoxical that a small modification of the objective function can change the convergence behavior completely. But for a small regularization constant, the objective function has a very flat optimum instead of being constant in these directions. Finding the exact optimum is indeed very hard. On the other hand, the optimization is dominated by the unregularized part of the objective function. Therefore, the iterates of the optimization algorithm will be close to an optimum of the unregularized objective function. Since the regularization term is only small, the iterates already correspond to good models according to the objective function.

5 Experimental Results

In this section, we validate the theoretical results on two classification tasks. The first one is the well-known USPS task for handwritten digit recognition. The second task, IAM, is a large-scale continuous handwriting recognition task with real-life data. Our main interest is the large-scale task, since this is a task for which log-linear models are especially useful.

Table 1: Results on the USPS task for different feature transformations and regularization parameters α. The columns "separation" and "termination" list the number of passes through the dataset until separation of the training data, respectively the termination of the optimization algorithm.

Preprocessing              αN     Train error (%)   Separation   Termination
Whitening and mean norm.   0.0    0.0               21           66
Mean and variance norm.    0.0    0.0               61           116
None                       0.0    0.0               356          513
None                       0.01   0.03              -            731
None                       0.1    0.43              -            358
None                       1.0    2.08              -            174
None                       10.0   4.29              -            100

5.1 Handwritten Digit Recognition

The USPS dataset (available at ftp://ftp.kyb.tuebingen.mpg.de/pub/bs/data/) consists of 7291 training and 2007 test images from ten classes of handwritten digits. We trained a log-linear classifier directly on the whole image with 16 × 16 pixels. We used the L-BFGS algorithm for optimization, which is considered the best algorithm for log-linear training. For all experiments, we used a backtracking line search and a history length of ten, which is a standard value given in the literature [14, 21]. We stopped the optimization when the relative change in the objective was below $\epsilon = 10^{-5}$, i.e.

  $\left( F(\lambda_{k-1}) - F(\lambda_k) \right) / F(\lambda_k) < \epsilon$ .   (19)

Table 1 contains the results on the USPS task. The results reflect our analysis of the condition number. Without normalizing mean and variance, the optimization problem is not well-conditioned. It requires more than 500 passes through the dataset until the termination criterion is reached. The optimization takes even longer when a very small non-zero regularization constant is used. This is what we expected from analyzing the condition number: the objective function has a very flat optimum, which slows down convergence. On the other hand, for higher regularization parameters, the optimization is much faster. We applied the normalizations only to the unregularized models, because the feature transformations affect the regularization term. Therefore, results with regularization are not comparable when feature transformations are applied. The mean and variance normalization reduced the computational costs greatly, from 513 to 116 iterations. The application of the whitening transformation further reduced the number of iterations to 66.

Often, the classification error on the training data reaches its minimum before the optimization algorithm terminates, so one might argue that it is not necessary to run the optimization until the termination criterion is reached. The USPS training data is linearly separable, and for all unregularized trainings a zero classification error on the training set is reached. It turns out that the effect of the feature transformations is even stronger when the number of iterations until the training data is separated is compared (see Table 1).
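As a rough stand-in for this experimental setup (the original implementation is not included here), the following sketch trains an unregularized log-linear model with SciPy's L-BFGS-B and a relative-change criterion in the spirit of (19); SciPy's ftol option implements a closely related test. Data and names are assumed for illustration only; the intent is to reproduce the qualitative effect of normalization on the iteration count.

```python
import numpy as np
from scipy.optimize import minimize

def make_objective(X, y, C, alpha=0.0):
    """Return F(theta) and its gradient as in eqs. (3) and (4)."""
    N, d = X.shape
    def f(theta):
        lam = theta.reshape(C, d)
        s = X @ lam.T
        s -= s.max(1, keepdims=True)
        p = np.exp(s); p /= p.sum(1, keepdims=True)
        F = -np.log(p[np.arange(N), y]).mean() + 0.5 * alpha * theta @ theta
        p[np.arange(N), y] -= 1.0
        return F, (p.T @ X / N + alpha * lam).ravel()
    return f

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20)) + 5.0        # features far from the origin
y = rng.integers(0, 10, size=2000)
Xn = (X - X.mean(0)) / X.std(0)              # mean and variance normalization

for name, Z in [("raw", X), ("normalized", Xn)]:
    res = minimize(make_objective(Z, y, 10), np.zeros(10 * Z.shape[1]),
                   jac=True, method="L-BFGS-B",
                   options={"ftol": 1e-5, "maxiter": 1000})
    print(name, res.nit)                     # normalized should need fewer steps
```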
5.2 Handwritten Text Recognition

Our second task is the IAM handwriting database [15]. In contrast to USPS, where single images are classified into a small number of classes, IAM defines a continuous handwriting recognition task with unrestricted vocabulary and is therefore much harder. The corpus has a predefined subdivision into training, development, and testing folds. The training fold contains lines of handwritten text with 53k words in total. With our feature extraction, this corresponds to 3,592,006 observations. The development and test folds contain 9k respectively 25k words. The IAM database is a large-scale learning problem in the sense that it is not feasible to run the optimization until convergence [2] and the test error is strongly influenced by the optimization accuracy.

5.2.1 Baseline Model

For our baseline model, we use the conventional generative approach of a statistical classifier based on hidden Markov models (HMMs) with Gaussian mixture models (GMMs) as emission probabilities. The generative classifier maps an observation sequence $x_1^T = (x_1, \ldots, x_T) \in X$ to a word sequence $\hat{w}_1^N = (\hat{w}_1, \ldots, \hat{w}_N) \in W$ according to Bayes' rule:

  $r : X \to W, \quad x_1^T \mapsto \hat{w}_1^N = \operatorname{argmax}_{w_1^N \in W} \; p_\theta(w_1^N)^\gamma \, p_\theta(x_1^T | w_1^N)$ .   (20)

The prior probability $p_\theta(w_1^N)$ is a smoothed trigram language model trained on the reference of the training data and the three additional text corpora Lancaster-Oslo-Bergen, Brown, and Wellington, as proposed in [1]. The language model scale $\gamma > 0$ has been optimized on the development set. The visual model $p_\theta(x_1^T | w_1^N)$ is defined by an HMM, which is composed of submodels for each character in the word sequence. In total there are 78 characters, which are modeled by five-state left-to-right HMMs, resulting in 390 distinct states plus one state for the whitespace model. The emission probabilities of the HMM are modeled by GMMs with a single shared covariance matrix. The parameters of the visual model are optimized according to the maximum likelihood criterion with the expectation-maximization (EM) algorithm and a splitting procedure. We obtained best results with 25k mixture components in total. We only used basic deslanting and size normalization for feature preprocessing, as is commonly applied in handwriting recognition. An image slice was extracted at every position. Seven features in a sliding window were concatenated and projected to a thirty-dimensional vector by a principal component analysis (PCA). The recognition lexicon consists of the 50k most frequent words in the language model training data.
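The sliding-window feature extraction described above can be sketched as follows. This is our own reconstruction for illustration; the exact estimation details of the system's PCA are not specified here, and in practice the projection basis would be estimated once on the training corpus.

```python
import numpy as np

def window_pca_features(frames, context=3, out_dim=30):
    """Concatenate each frame with `context` neighbors on both sides
    (a window of 2*context+1 = 7 frames) and project the stacked
    vector to `out_dim` dimensions with PCA.

    frames: (T, d) sequence of per-position image-slice features.
    """
    T, d = frames.shape
    padded = np.pad(frames, ((context, context), (0, 0)), mode="edge")
    stacked = np.stack([padded[t:t + 2 * context + 1].ravel()
                        for t in range(T)])              # (T, 7*d)
    centered = stacked - stacked.mean(0)
    # PCA basis from the right singular vectors of the centered data.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:out_dim].T                     # (T, out_dim)

feats = window_pca_features(np.random.default_rng(0).normal(size=(100, 16)))
print(feats.shape)  # (100, 30)
```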
The generative baseline system achieves a word error rate (WER) of 32.8% on the development set and 39.4% on the test set, similar to the results of the GMM/HMM baseline systems of [1, 6, 5].

5.2.2 Hybrid LL/HMM Recognition System

The main component of the visual model of our baseline system is the GMM for the emission probabilities $p_\theta(x|s)$. Analogous to the use of neural network outputs by [6], we build a hybrid LL/HMM recognition system by deriving the emission probabilities via $p_\theta(x|s) = p_\theta(s|x) \, p(x) / p(s)$. The prior probability $p(s)$ can be estimated easily as the relative frequency, and $p(x)$ can be discarded in recognition without changing the maximizing word sequence. We used our baseline system for generating a state alignment, i.e. an assignment of the feature vectors to an HMM state, and then trained a log-linear model on the resulting training sample $(x_t, s_t)_{t=1,\ldots,T}$ analogous to the setup on USPS. Note that the training of the log-linear model is conceptually exactly the same as for USPS, and our convergence analysis applies.

On large-scale tasks such as IAM, it is not practicable to run the optimization until convergence as on USPS. Instead, we assume a limited training budget for all experiments, which allows for performing 200 iterations, and compare the resulting classifiers. This procedure corresponds to the characterization of large-scale learning tasks by Bottou and Bousquet [2]. The performance of a linear classifier on a complex task such as IAM is quite limited. Therefore, we used polynomial feature spaces of degree one (d = 30), two (d = 495), and three (d = 5455), corresponding to polynomial kernels. In contrast to USPS, where the classification error on the training data without regularization was zero, on IAM the state-classification error on the training data ranges from forty to sixty percent. Thus, the impact of regularization on the performance of the classifier is only minor. In preliminary experiments, we obtained almost no improvements from regularization. Therefore, we report only the results without regularization.

5.2.3 Results

The results on the IAM database (see Table 2) are again in accordance with our theoretical analysis. The first-order features are already decorrelated, but without mean and variance normalization, the convergence is slower, resulting in a worse WER on the development and test set. The difference is moderate when the parameters are initialized with zero, corresponding to a uniform distribution. In a next experiment we initialized all parameters randomly with plus or minus one. This results in a huge degradation for the unnormalized features and, with exactly the same random initialization, has only a minor impact when normalized features are used. The differences are even larger for the second-order experiments. This can be expected, since mean and variance take on more extreme values when the features are squared, and the features are correlated. For the zero initialization, the improvement from mean and variance normalization is only moderate in WER. For the unnormalized features and random initialization, the optimization did not lead to a usable model for recognition at all.

Table 2: Results on the IAM database for polynomial feature spaces of degree m ∈ {1, 2, 3} with different initializations and preprocessings.

m   Preprocessing              Initialization   WER / dev set (%)   WER / test set (%)
1   none                       zero / random    49.9 / 68.3         60.1 / 75.5
1   mean and var. norm.        zero / random    49.7 / 48.9         58.9 / 58.5
2   none                       zero / random    32.4 / >100.0       40.2 / >100.0
2   mean and var. norm.        zero / random    30.2 / 34.4         38.5 / 41.3
2   mean and var. norm.        1st order        26.8                33.1
2   whitening and mean norm.   zero / random    25.1 / 25.9         31.6 / 32.3
3   mean and var. norm.        2nd order        23.0                27.4
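The posterior-to-emission conversion of the hybrid system of Section 5.2.2 amounts to a log-domain division by the state prior. The following minimal sketch is ours (names are assumed); $p(x)$ is dropped since it does not change the maximizing word sequence in decoding.

```python
import numpy as np

def hybrid_emission_log_scores(log_posteriors, state_counts):
    """Convert log-linear state posteriors p(s|x) into scaled emission
    scores p(x|s) proportional to p(s|x) / p(s), for an HMM decoder.

    log_posteriors: (T, S) log p(s | x_t) from the log-linear model
    state_counts:   (S,) state occupancies from the forced alignment,
                    giving the prior p(s) as a relative frequency
    """
    log_prior = np.log(state_counts / state_counts.sum())
    return log_posteriors - log_prior[None, :]
```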
Fastest convergence and best results are obtained by the application of the whitening transformation to the features. In addition, the influence of the initialization is smallest in this case. Because of the high dimension of the third-order features, the estimation of the whitening transformation itself is already computationally very expensive. Therefore, we only performed a mean and variance normalization of the third-order features, but initialized the models incrementally from first- to second- to third-order features. In this manner, we obtain our best result of 27.4% WER, which is a drastic improvement over the generative baseline system (39.4% WER). Our hybrid LL/HMM system outperforms other systems based on HMMs with comparable preprocessing. Bertolami and Bunke [1] obtain 32.9% WER with an ensemble-based HMM approach. Dreuw et al. [5] obtain 30.0% WER with discriminatively trained GMMs and 29.0% WER with an additional discriminative adaptation method. The system of Graves [7], which has a completely different architecture based on recurrent neural networks, outperforms our system with 25.9% WER. The best published result of 21.2% WER on the IAM database is by España-Boquera et al. [6], who use several specialized neural networks for preprocessing.

6 Discussion

In this paper, we presented a novel convergence analysis for the optimization of the parameters of log-linear models. Our main results are, first, that the convergence of gradient-based optimization algorithms depends on the eigenvalues of the uncentered empirical covariance matrix. For this derivation we assumed that the quadratic term of the objective function at the optimum behaves similarly as at the initialization. Second, we analyzed the eigenvalues of the covariance matrix. According to this analysis, it is important to normalize the means and variances of the features. The best convergence behavior can be expected when, in addition, the features are decorrelated. Interestingly, the same result is obtained by LeCun et al. [11] for neural network training, but their analysis differs from ours in a number of aspects. First, LeCun et al. consider a simpler loss function. In contrast to our analysis, they assume that all components of the observations have identical mean and variance and that the components are independent. Furthermore, they fix the ratio of the number of model parameters and the number of training observations. The derivation of the spectrum of the Hessian is then performed in the limit of infinite training data, leading to a continuous spectrum. This approach is more suited for the analysis of online learning. In the case of batch learning, the training data as well as the model size is fixed.

We verified our findings on two handwriting recognition tasks and found that the theoretical analysis predicted the observed convergence behavior very well. On IAM, a real-life dataset for continuous handwriting recognition, our log-linear system outperforms other systems with comparable architecture and preprocessing. This is remarkable, because we use a generic and conceptually simple method, which is simple to implement and allows for reproducing experimental results easily. An interesting point for future work is the use of approximate decorrelation techniques, e.g. by assuming a structure for the covariance matrix. This will be useful for very high-dimensional features for which the estimation of the whitening transformation is not feasible.
References

[1] Bertolami, R., Bunke, H.: HMM-based Ensemble Methods for Offline Handwritten Text Line Recognition. Pattern Recogn. 41, 3452-3460 (2008)
[2] Bottou, L., Bousquet, O.: The Tradeoffs of Large Scale Learning. In: Advances in Neural Information Processing Systems, pp. 161-168 (2008)
[3] Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press (2004)
[4] Darroch, J., Ratcliff, D.: Generalized Iterative Scaling for Log-Linear Models. Ann. Math. Stat. 43(5), 1470-1480 (1972)
[5] Dreuw, P., Heigold, G., Ney, H.: Confidence- and Margin-Based MMI/MPE Discriminative Training for Off-Line Handwriting Recognition. Int. J. Doc. Anal. Recogn., pp. 1-16 (2011)
[6] España-Boquera, S., Castro-Bleda, M., Gorbe-Moya, J., Zamora-Martinez, F.: Improving Offline Handwritten Text Recognition with Hybrid HMM/ANN Models. IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 767-779 (April 2011)
[7] Graves, A., Liwicki, M., Fernandez, S., Bertolami, R., Bunke, H., Schmidhuber, J.: A Novel Connectionist System for Unconstrained Handwriting Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 31(5), 855-868 (May 2009)
[8] Horn, R., Johnson, C.: Topics in Matrix Analysis. Cambridge University Press (1994)
[9] Horn, R., Johnson, C.: Matrix Analysis. Cambridge University Press (2005)
[10] Lafferty, J., McCallum, A., Pereira, F.: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In: Proceedings of the 18th International Conference on Machine Learning, pp. 282-289 (2001)
[11] LeCun, Y., Kanter, I., Solla, S.: Second Order Properties of Error Surfaces: Learning Time and Generalization. In: Advances in Neural Information Processing Systems, pp. 918-924. Morgan Kaufmann Publishers Inc. (1990)
[12] Liu, D., Nocedal, J.: On the Limited Memory BFGS Method for Large-Scale Optimization. Math. Program. 45(1), 503-528 (1989)
[13] Luenberger, D., Ye, Y.: Linear and Nonlinear Programming. Springer Verlag (2008)
[14] Malouf, R.: A Comparison of Algorithms for Maximum Entropy Parameter Estimation. In: Proceedings of the Sixth Conference on Natural Language Learning, pp. 49-55 (2002)
[15] Marti, U., Bunke, H.: The IAM-Database: An English Sentence Database for Offline Handwriting Recognition. Int. J. Doc. Anal. Recogn. 5(1), 39-46 (2002)
[16] McCallum, A., Freitag, D., Pereira, F.: Maximum Entropy Markov Models for Information Extraction and Segmentation. In: Proceedings of the 17th International Conference on Machine Learning, pp. 591-598 (2000)
[17] Minka, T.: Algorithms for Maximum-Likelihood Logistic Regression. Tech. rep., Carnegie Mellon University (2001)
[18] Nocedal, J., Wright, S.: Numerical Optimization. Springer (1999)
[19] Notay, Y.: Solving Positive (Semi)Definite Linear Systems by Preconditioned Iterative Methods. In: Preconditioned Conjugate Gradient Methods, Lecture Notes in Mathematics, vol. 1457, pp. 105-125. Springer (1990)
[20] Salakhutdinov, R., Roweis, S., Ghahramani, Z.: On the Convergence of Bound Optimization Algorithms. In: Uncertainty in Artificial Intelligence, vol. 19, pp. 509-516 (2003)
[21] Sha, F., Pereira, F.: Shallow Parsing with Conditional Random Fields. In: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pp. 134-141 (2003)
[22] Sutton, C., McCallum, A.: An Introduction to Conditional Random Fields for Relational Learning. In: Getoor, L., Taskar, B. (eds.) Introduction to Statistical Relational Learning. MIT Press (2007)
Uniqueness of Belief Propagation on Signed Graphs

Yusuke Watanabe*
The Institute of Statistical Mathematics
10-3 Midori-cho, Tachikawa, Tokyo 190-8562, Japan
[email protected]

(* Current affiliation: SONY, Intelligent Systems Research Laboratory. [email protected])

Abstract

While loopy Belief Propagation (LBP) has been utilized in a wide variety of applications with empirical success, it comes with few theoretical guarantees. Especially, if the interactions of random variables in a graphical model are strong, the behaviors of the algorithm can be difficult to analyze due to underlying phase transitions. In this paper, we develop a novel approach to the uniqueness problem of the LBP fixed point; our new "necessary and sufficient" condition is stated in terms of graphs and signs, where the sign denotes the type (attractive/repulsive) of the interaction (i.e., compatibility function) on the edge. In all previous works, uniqueness is guaranteed only in situations where the strengths of the interactions are "sufficiently" small in certain senses. In contrast, our condition covers arbitrarily strong interactions on the specified class of signed graphs. The result of this paper is based on a recent theoretical advance on the LBP algorithm: the connection with the graph zeta function.

1 Introduction

The belief propagation algorithm [1] was originally proposed as an efficient method for exact computation in inference with graphical models associated to trees; the algorithm has been extended to general graphs with cycles and is then called the Loopy Belief Propagation (LBP) algorithm. It has shown empirical success in a wide class of problems including computer vision, compressed sensing, and error correcting codes [2, 3, 4]. In such applications, the existence of cycles and strong interactions between variables make the behaviors of the LBP algorithm difficult to analyze. In this paper we propose a novel approach to the uniqueness problem of the LBP fixed point.

Although a considerable amount of research has been done in this decade [5, 6], understanding of the LBP algorithm is not yet complete. An important step toward better understanding of the algorithm has been the variational interpretation by the Bethe free energy function; the fixed points of LBP correspond to the stationary points of the Bethe free energy function [7]. This view provides a number of algorithms that (provably) find a stationary point of the Bethe free energy function [8, 9, 10, 11]. For the uniqueness problem of the LBP fixed point, a number of conditions have been proposed [12, 13, 14, 15]. (Note that the convergence property implies uniqueness by definition.) In all previous works, uniqueness is guaranteed only in situations where the strengths of the interactions are "sufficiently" small in certain senses.

In this paper we propose a completely new approach to the uniqueness condition of the LBP algorithm; it should be emphasized that the strength of interactions on the specified class of signed graphs can be arbitrarily large in this condition. (The signs denote the attractive/repulsive types of the compatibility functions on the edges.) Generally speaking, the behavior of the algorithm is complex if the strengths of interactions are strong. In such regions, phase transition phenomena can occur in the underlying computation tree [15], making theoretical analyses difficult. To overcome such difficulties,
we utilize the connection between the Bethe free energy and the graph zeta function established in [16]: the determinant of the Hessian of the Bethe free energy equals the reciprocal of the graph zeta function up to a positive factor. Combined with the index formula [16], the uniqueness problem is reduced to a positivity property of the graph zeta function.

This paper is organized as follows. In Section 2 we introduce the background of LBP. In Section 3 we explain the condition for uniqueness, which is the main result of this paper. In Section 4 the proof of the main result is given by a graph theoretic approach. In Section 5 we remark on related research based on the new technique.

2 Loopy Belief Propagation, Bethe free energy and graph zeta function

In this section, we provide basic facts on LBP: the connection with the Bethe free energy and the graph zeta function. Throughout this paper, G = (V, E) is a connected undirected graph with V the vertices and E the undirected edges. We consider the binary pairwise model, which is given by the following factorization form with respect to G:

  $p(x) = \dfrac{1}{Z} \prod_{ij \in E} \psi_{ij}(x_i, x_j) \prod_{i \in V} \psi_i(x_i)$ ,   (1)

where $x = (x_i)_{i \in V}$ is a list of binary (i.e., $x_i \in \{\pm 1\}$) variables, $Z$ is the normalization constant, and $\psi_{ij}, \psi_i$ are positive functions called compatibility functions. Without loss of generality we assume that $\psi_{ij}(x_i, x_j) = \exp(J_{ij} x_i x_j)$ and $\psi_i(x_i) = \exp(h_i x_i)$. We refer to $J_{ij}$ as the interaction and to its absolute value as its "strength". In various applications, we would like to compute the marginal distributions

  $p_i(x_i) := \sum_{x \setminus \{x_i\}} p(x)$  and  $p_{ij}(x_i, x_j) := \sum_{x \setminus \{x_i, x_j\}} p(x)$ ,   (2)

though exact computations are often intractable due to the combinatorial complexity. If the graph is a tree, however, they are efficiently computed by the belief propagation algorithm [1]. Even if the graph has cycles, the direct application of the algorithm (Loopy Belief Propagation; LBP) often gives good approximations [6].

LBP is a message passing algorithm. For each directed edge, a message vector $\mu_{i \to j}(x_j)$ is assigned and initialized arbitrarily. The update rule for messages is given by

  $\mu^{\mathrm{new}}_{i \to j}(x_j) \propto \sum_{x_i} \psi_{ji}(x_j, x_i) \, \psi_i(x_i) \prod_{k \in N_i \setminus j} \mu_{k \to i}(x_i)$ ,   (3)

where $N_i$ is the neighborhood of $i \in V$. The order of edges in the update is arbitrary; the set of fixed points does not depend on the order. If the messages converge to some fixed point $\{\mu^{*}_{i \to j}(x_j)\}$, the approximations of $p_i(x_i)$ and $p_{ij}(x_i, x_j)$ are calculated as

  $b_i(x_i) \propto \psi_i(x_i) \prod_{k \in N_i} \mu^{*}_{k \to i}(x_i)$ ,   (4)

  $b_{ij}(x_i, x_j) \propto \psi_{ij}(x_i, x_j) \, \psi_i(x_i) \, \psi_j(x_j) \prod_{k \in N_i \setminus j} \mu^{*}_{k \to i}(x_i) \prod_{k \in N_j \setminus i} \mu^{*}_{k \to j}(x_j)$ ,   (5)

with normalizations $\sum_{x_i} b_i(x_i) = 1$ and $\sum_{x_i, x_j} b_{ij}(x_i, x_j) = 1$. From (3) and (5), the constraints $b_{ij}(x_i, x_j) > 0$ and $\sum_{x_j} b_{ij}(x_i, x_j) = b_i(x_i)$ are automatically satisfied.
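For reference, the update (3) and the beliefs (4) can be implemented directly. The sketch below is our own (it covers simple graphs represented by an edge dictionary, not multigraphs such as D_n), with sequential in-place updates in an arbitrary edge order.

```python
import numpy as np

def lbp(J, h, max_iter=2000, tol=1e-10):
    """Loopy belief propagation, update (3) and beliefs (4), for the
    binary pairwise model p(x) ~ exp(sum J_ij x_i x_j + sum h_i x_i).

    J : dict {(i, j): J_ij} with i < j, one entry per undirected edge
    h : array of local fields h_i.  Returns (beliefs, converged).
    """
    n = len(h)
    x = np.array([-1.0, 1.0])                       # state order: x = -1, +1
    nbrs = {i: [] for i in range(n)}
    for (i, j) in J:
        nbrs[i].append(j); nbrs[j].append(i)
    msg = {(i, j): np.ones(2) / 2 for (i, j) in J}  # messages mu_{i->j}(x_j)
    msg.update({(j, i): np.ones(2) / 2 for (i, j) in list(J)})
    Jsym = dict(J); Jsym.update({(j, i): w for (i, j), w in J.items()})
    diff = np.inf
    for _ in range(max_iter):
        diff = 0.0
        for (i, j), old in list(msg.items()):
            psi = np.exp(Jsym[(i, j)] * np.outer(x, x))   # psi(x_i, x_j)
            inc = np.exp(h[i] * x)
            for k in nbrs[i]:
                if k != j:
                    inc = inc * msg[(k, i)]
            new = psi.T @ inc                             # sum over x_i
            new /= new.sum()
            diff = max(diff, np.abs(new - old).max())
            msg[(i, j)] = new
        if diff < tol:
            break
    beliefs = np.zeros((n, 2))
    for i in range(n):
        b = np.exp(h[i] * x)
        for k in nbrs[i]:
            b = b * msg[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs, diff < tol

# A single cycle always has a unique fixed point (cf. Example 1 in
# Section 3.3), even with strong frustrated couplings:
b, ok = lbp({(0, 1): -2.0, (1, 2): -2.0, (0, 2): -2.0},
            np.array([0.3, -0.2, 0.1]))
print(ok, b)
```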
2.1 The Bethe free energy

The LBP algorithm can be interpreted as a variational problem for the Bethe free energy function [7]. In this formulation, the domain of the function is given by

  $L(G) = \Big\{ \{q_i, q_{ij}\} \; ; \; q_{ij}(x_i, x_j) > 0, \; \sum_{x_i, x_j} q_{ij}(x_i, x_j) = 1, \; \sum_{x_j} q_{ij}(x_i, x_j) = q_i(x_i) \Big\}$   (6)

and an element of this set is called a set of pseudomarginals, i.e., a set of locally consistent probability distributions. The closure of this set is called the local marginal polytope [6]. The objective function, called the Bethe free energy, is defined on L(G) by

  $F(q) := -\sum_{ij \in E} \sum_{x_i, x_j} q_{ij}(x_i, x_j) \log \psi_{ij}(x_i, x_j) - \sum_{i \in V} \sum_{x_i} q_i(x_i) \log \psi_i(x_i) + \sum_{ij \in E} \sum_{x_i, x_j} q_{ij}(x_i, x_j) \log q_{ij}(x_i, x_j) + \sum_{i \in V} (1 - d_i) \sum_{x_i} q_i(x_i) \log q_i(x_i)$ ,   (7)

where $d_i = |N_i|$. The outcome of this variational problem is the same as that of LBP. More precisely, there is a one-to-one correspondence between the set of stationary points of the Bethe free energy and the set of fixed points of LBP. The correspondence is given by (4, 5).

2.2 Zeta function and Ihara's formula

In this section, we explain the connection of LBP to the graph zeta function. We use the following terms for graphs [17, 16]. Let $\vec{E}$ be the set of directed edges obtained by duplicating undirected edges. For each directed edge $e \in \vec{E}$, $o(e) \in V$ is the origin of $e$ and $t(e) \in V$ is the terminus of $e$. For $e \in \vec{E}$, the inverse edge is denoted by $\bar{e}$, and the corresponding undirected edge by $[e] = [\bar{e}] \in E$. A closed geodesic in $G$ is a sequence $(e_1, \ldots, e_k)$ of directed edges such that $t(e_i) = o(e_{i+1})$ and $e_{i+1} \ne \bar{e}_i$ for $i \in \mathbb{Z}/k\mathbb{Z}$. For a closed geodesic $c$, we may form the $m$-multiple, $c^m$, by repeating it $m$ times. A closed geodesic $c$ is prime if there are no closed geodesic $d$ and natural number $m \, (\ge 2)$ such that $c = d^m$. For example, the closed geodesic $c = (e_1, e_2, e_3, e_1, e_2, e_3)$ is not prime and $c = (e_1, e_2, e_3, e_4, e_1, e_2, e_3)$ is prime. Two closed geodesics are said to be equivalent if one is obtained by cyclic permutation of the other. For example, the closed geodesics $(e_1, e_2, e_3)$, $(e_2, e_3, e_1)$ and $(e_3, e_1, e_2)$ are equivalent. An equivalence class of prime closed geodesics is called a prime cycle. Let $P$ be the set of prime cycles of $G$. For given (complex or real) weights $u = (u_e)_{e \in \vec{E}}$, Ihara's graph zeta function [18, 19] is given by

  $\zeta_G(u) := \prod_{p \in P} \left( 1 - g(p) \right)^{-1}, \quad g(p) := u_{e_1} \cdots u_{e_k}$ for $p = (e_1, \ldots, e_k)$,
  $\phantom{\zeta_G(u) :} = \det(I - \mathcal{U}\mathcal{M})^{-1}$,

where the second equality is the determinant representation [19] with matrices indexed by the directed edges. The definitions of $\mathcal{M}$ and $\mathcal{U}$ are

  $\mathcal{M}_{e,e'} := \begin{cases} 1 & \text{if } e \ne \bar{e}' \text{ and } o(e) = t(e'), \\ 0 & \text{otherwise,} \end{cases}$   (8)

and $\mathcal{U}_{e,e'} := u_e \, \delta_{e,e'}$, respectively. The following theorem gives the connection between the Bethe free energy and the zeta function. More precisely, the theorem asserts that the determinant of the Hessian of the Bethe free energy function is the reciprocal of the zeta function up to a positive factor.

Theorem 1 ([16, 20]). The following equality holds at any point of L(G):

  $\zeta_G(u)^{-1} = \det(\nabla^2 F) \prod_{ij \in E} \; \prod_{x_i, x_j = \pm 1} q_{ij}(x_i, x_j) \prod_{i \in V} \; \prod_{x_i = \pm 1} q_i(x_i)^{1 - d_i} \; 2^{2|V| + 4|E|}$   (9)

where the derivatives are taken over an affine coordinate of L(G): $m_i = E_{q_i}[x_i]$, $\chi_{ij} = E_{q_{ij}}[x_i x_j]$, and

  $u_{i \to j} = \dfrac{\operatorname{Cov}_{q_{ij}}[x_i, x_j]}{\{\operatorname{Var}_{q_i}[x_i] \operatorname{Var}_{q_j}[x_j]\}^{1/2}} = \dfrac{\chi_{ij} - m_i m_j}{\{(1 - m_i^2)(1 - m_j^2)\}^{1/2}} =: \beta_{ij}$ .   (10)

Note that, from (7), the Hessian $\nabla^2 F$ does not depend on $J_{ij}$ and $h_i$. Since the weight (10) in Theorem 1 is symmetric with respect to the inversion of edges, the zeta function can be reduced to undirected edge weights. To avoid confusion, we introduce a notation: the zeta function of undirected edge weights $\beta = (\beta_{ij})_{ij \in E}$ is denoted by $Z_G(\beta)$. Note also that, since $\beta_{ij}$ is the correlation coefficient of $q_{ij}$, we have $|\beta_{ij}| < 1$. Equality does not occur, by the positivity assumption on the probabilities.
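The determinant representation makes $\zeta_G$ easy to evaluate numerically. The following sketch (ours) builds the directed-edge matrix of (8), supports loops and parallel edges by tracking directed edges by position, and checks the closed form of $\det(I - \mathcal{U}\mathcal{M})$ for $B_2$ that reappears in the proof of Lemma 2 below.

```python
import numpy as np
from itertools import product

def zeta_inverse(edges, u):
    """Evaluate zeta_G(u)^{-1} = det(I - U M) with M from eq. (8).

    edges: list of undirected edges (i, j); loops (i, i) are allowed.
    u:     weight per directed edge, ordered [e_1, e_1_bar, e_2, ...]
           following `edges` (inverse pairs are tracked by position,
           so parallel edges and loops are handled correctly).
    """
    dir_edges = []
    for (i, j) in edges:
        dir_edges += [(i, j), (j, i)]           # e and its inverse
    m = len(dir_edges)
    M = np.zeros((m, m))
    for a, (oa, ta) in enumerate(dir_edges):
        for b, (ob, tb) in enumerate(dir_edges):
            inverse_pair = (a // 2 == b // 2) and (a != b)
            if not inverse_pair and oa == tb:   # o(e) = t(e'), e != e'_bar
                M[a, b] = 1.0
    return np.linalg.det(np.eye(m) - np.diag(u) @ M)

# B2: one vertex with two loops.  Check the factorization
# det(I - UM) = (1 - b1)(1 - b2)(1 - b1 - b2 - 3 b1 b2).
for b1, b2 in product(np.linspace(0.0, 0.99, 5), repeat=2):
    lhs = zeta_inverse([(0, 0), (0, 0)], [b1, b1, b2, b2])
    rhs = (1 - b1) * (1 - b2) * (1 - b1 - b2 - 3 * b1 * b2)
    assert abs(lhs - rhs) < 1e-9
```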
Figure 1: w1-reduction. Figure 2: Example of the complete w-reduction.

3 Signed graphs with unique solution

In this section, we state the main result of this paper, Theorem 3. The result shows a new type of approach towards uniqueness conditions. The proof of the theorem is given in the next section.

3.1 Existing conditions on uniqueness

There have been many works on the uniqueness and/or convergence of the LBP algorithm for discrete graphical models [12, 13, 14, 15] and Gaussian graphical models [21]. As we are discussing binary pairwise graphical models, we review some of the conditions for this model. The following condition is given by Mooij and Kappen:

Theorem 2 ([13]). Let $\rho(X)$ denote the spectral radius (i.e., the maximum of the absolute values of the eigenvalues) of a matrix $X$. If $\rho(\mathcal{J}\mathcal{M}) < 1$, then LBP converges to the unique fixed point, where $\mathcal{J}$ is a diagonal matrix defined by $\mathcal{J}_{e,e'} = \tanh(|J_e|) \, \delta_{e,e'}$.

This theorem gives the uniqueness property by bounding the strengths of the interactions, i.e., $\{|J_{ij}|\}_{ij \in E}$. Therefore, the condition does not depend on the signs of the interactions. The situation is the same in the other existing conditions [12, 13, 14, 15]. For example, Heskes's condition [12] is

  $\sum_{j \in N_i} |J_{ij}| < 1$ .   (11)

These conditions are unsatisfactory in the sense that they do not use the information of the signs, $\{\operatorname{sgn} J_{ij}\}_{ij \in E}$. In fact, the behaviors of the LBP algorithm can be dramatically different if the signs of the compatibility functions are changed. Note that each edge compatibility function $\psi_{ij}$ tends to force the variables $x_i, x_j$ to be equal if $J_{ij} > 0$ and not equal if $J_{ij} < 0$; the first case is called an attractive interaction and the latter repulsive.

In contrast to the above uniqueness conditions, we pursue another approach: we use the information of the signs, $\{\operatorname{sgn} J_{ij}\}_{ij \in E}$, rather than the strengths. In this paper, we characterize the signed graphs that guarantee the uniqueness of the solution; this result is stated in Theorem 3.

3.2 Statement of main theorem of this section

We introduce basic terms to state the main theorem. A signed graph, (G, s), is a graph equipped with a sign map, s, from the edges to {±1}. A compatibility function defines the sign function, s, by s(ij) = sgn J_ij. The sign function of all plus (resp. minus) signs is denoted by s₊ (resp. s₋). The deletion and subgraph of a signed graph are defined naturally by restricting the sign function.

Definition 1. A w-reduction of a signed graph (G, s) is a signed graph that is obtained by one of the following operations:

(w1) Erasure of a vertex of degree two. (Let j be a vertex of degree two and ij, jk (i ≠ k) be the connecting edges. Delete them and make a new edge ik with the sign s(ij)s(jk). See Figure 1.)
(w2) Deletion of a loop with minus sign. (An edge ij is called a loop if i = j.)
(w3) Contraction of a bridge. (An edge is a bridge if its deletion increases the number of connected components. The sign on the bridge can be either +1 or −1.)

Note that all the operations decrease the number of edges by one. A signed graph is w-reduced if no w-reduction is applicable. Any signed graph is reduced to the unique w-reduced signed graph called the complete w-reduction. An example of a complete w-reduction is given in Figure 2. From the viewpoint of computational complexity, finding the complete w-reduction is easy (see the supplementary material for further discussion); a sketch is given below.
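The following sketch computes the complete w-reduction of a signed multigraph by repeatedly applying (w1)-(w3). It is our own reading of Definition 1: in particular, (w1) is applied only when the two far endpoints differ, exactly as stated.

```python
def _reachable(u, edges):
    """Vertices reachable from u using the given (multi)edge list."""
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        for (i, j, _) in edges:
            if x in (i, j):
                for y in (i, j):
                    if y not in seen:
                        seen.add(y); stack.append(y)
    return seen

def complete_w_reduction(edges):
    """Complete w-reduction per Definition 1 (our own sketch).

    edges: list of (i, j, sign) with sign in {+1, -1}; i == j is a loop,
    and parallel edges are allowed (the graph is a multigraph).
    """
    edges = list(edges)
    while True:
        # (w2): delete a loop with minus sign.
        k = next((k for k, (i, j, s) in enumerate(edges)
                  if i == j and s == -1), None)
        if k is not None:
            del edges[k]; continue
        # (w1): erase a degree-two vertex with edges ij, jk (i != k).
        deg = {}
        for (i, j, s) in edges:
            deg[i] = deg.get(i, 0) + 1 + (i == j)
            deg[j] = deg.get(j, 0) + (i != j)
        found = False
        for v, d in deg.items():
            if d != 2:
                continue
            inc = [k for k, (i, j, s) in enumerate(edges)
                   if v in (i, j) and i != j]
            if len(inc) != 2:
                continue
            ends = [e[0] if e[1] == v else e[1]
                    for e in (edges[inc[0]], edges[inc[1]])]
            if ends[0] == ends[1]:
                continue
            sign = edges[inc[0]][2] * edges[inc[1]][2]
            for k in sorted(inc, reverse=True):
                del edges[k]
            edges.append((ends[0], ends[1], sign))
            found = True
            break
        if found: continue
        # (w3): contract a bridge (removing it disconnects its endpoints).
        for k, (i, j, s) in enumerate(edges):
            rest = edges[:k] + edges[k + 1:]
            if i != j and j not in _reachable(i, rest):
                edges = [(i if a == j else a, i if b == j else b, t)
                         for (a, b, t) in rest]
                found = True
                break
        if found: continue
        return edges

# A triangle with one repulsive edge reduces to (P2, +, -),
# a w-reduced subgraph of D2 (case (v) of Theorem 3):
print(complete_w_reduction([(0, 1, 1), (1, 2, 1), (0, 2, -1)]))
```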
Definition 2. Two signed graphs (G, s) and (G, s0 ) are said to be gauge equivalent if there exists a map g : V ?? {?1} such that s0 (ij) = s(ij)g(i)g(j). The map g is called gauge transformation. Theorem 3. For a signed graph (G, s) the following conditions are equivalent. 1. LBP algorithm on G has the unique fixed point for any compatibility functions with sign s. 2. The complete w-reduction of (G, s) is one of the followings: (i) B0 (ii) (B1 , +) (iii) (P3 , +, ?, ?) and (P3 , +, +, ?). (iv) (K4 , s? ) and its gauge equivalent signed graphs. (v) Dn and its w-reduced subgraphs (n ? 2). The proof of this theorem is given in the next section. 3.3 Examples and experiments In this subsection we present concrete examples of signed graphs which do or do not satisfy the condition of Theorem 3. (Ex.1) Trees and graphs with a single cycle: In these cases it is well known that LBP has the unique fixed point irrespective of the compatibility functions [1, 22]. This fact is easily derived by Theorem 3 since the complete w-reduction of them are B0 or (B1 , +). (Ex.2) Complete graph Kn : (Kn , s) is w-reduced as we can not apply w-reduction. For n = 4, the condition of sign is given in 2.(iv). If n ? 5 it does not satisfy the condition for any sign. (Ex.3) 2 ? 2 grid graph: This graph does not satisfy the condition for any sign because its complete w-reduction is different from the signed graphs in the item 2 of Theorem 3. (Ex.4) Consider a signed graph in Figure 6. Notice that the products of signs along the five cycles are all minus. Applying (w2) and (w3), we see that the complete w-reduction is B0 . Therefore the signed graph satisfies the condition. We experimentally check convergence behaviors of the LBP algorithm on D4 , which satisfies the condition of Theorem 3. Since the LBP fixed point is unique, it is the absolute minimum of the Bethe free energy function. We set the compatibility functions Jij = ?J, hi = h and initialized messages randomly. We judged convergence if average message update is less than 10?3 after 50 iterations. The result is shown in Figure 7. LBP is not convergent in the right white region and convergent in the rest of gray region. Convergence is theoretically guaranteed for tanh(|J|) < 1/3 (|J| / 0.347) by Theorem 2. In the non-convergent region LBP appears to be unstable around the fixed point. 4 Proofs: conditions in terms of graph zeta function The aim of this subsection is to prove Theorem 3. For the proof, Lemma 2, which is purely a result of the graph zeta function, is utilized. 5 Figure 8: X1 and X2 . Figure 7: Convergence region of LBP. 4.1 Graph theoretic results We denote by G ?  the deletion of an undirected edge  from a graph G and by G/ the contraction. A minor of a graph is obtained by the repeated applications of the deletion, contraction and removal of isolated vertices. The Deletion and contraction operations have natural meaning in the context of the graph zeta function as follows: Lemma 1. ?1 ?1 1. Let ij be an edge, then ?G?ij (u) = ?G (? u), where u ?e is equal to ue if [e] 6= ij and 0 otherwise. ?1 ?1 2. Let ij be a non-loop edge, then ?G/ij (u) = ?G (? u), where u ?e is equal to ue if [e] 6= ij and 1 otherwise. Proof. From the prime cycle representation of zeta functions, both of the assertions are trivial. Next, to prove Theorem 3, we formally define the notion of deletions, contractions and minors on signed graphs [23]. For a signed graph the signed-deletion of an edge is just the deletion of the edge along with the sign on it. 
The signed-contraction of a non-loop edge ij ? E is defined up to gauge equivalence as follows. For any non-loop edge ij, there is a gauge equivalent signed graph that has the sign + on ij. The signed-contraction is obtained by contracting the edge. The resulting signed graph is determined up to gauge equivalence. A signed minor of a signed graph is obtained by repeated applications of the signed-deletion, signed-contraction, and removal of isolated vertices. Lemma 2. For a signed graph, (G, s), the following conditions are equivalent. ?1 1. (G, s) is U-type. That is, if ?ij ? Is(ij) for all ij ? E then ZG (?) > 0, where ? = (?ij )ij?E , I+ = [0, 1) and I? = (?1, 0]. ?1 2. (G, s) is weakly U-type. That is, if ?ij ? Is(ij) for all ij ? E then ZG (?) ? 0 3. (B2 , s+ ) is not contained as a signed minor. 4. The complete w-reduction of (G, s) is one of the followings: (i) B0 (ii) (B1 , s+ ) (iii) (P3 , +, ?, ?) and (P3 , +, +, ?). (iv) (K4 , s? ) and its gauge equivalent signed graphs. (v) Dn and its w-reduced subgraphs (n ? 2). The uniqueness condition in Theorem 3 is equivalent to all the conditions in this lemma. Here, we remark properties of this condition (the proof is straightforward from definition and Lemma 2): (1) (G, s) is U-type iff its gauge equivalents are U-type. (2) If (G, s) is U-type then its signed minors are U-type. We prove the equivalence cyclic manner. Here we give a sketch of the proof (Detail is given in the supplementary material.) 6 Proof of 1 ? 2. Trivial. Proof of 2 ? 3. If (G, s) is weakly U-type, then its signed minors are weakly U-type; this is obvious from Lemma 1. However, direct computation of the zeta of (B2 , s+ ) shows that this signed graph is not weakly U-type. In fact, the directed edge matrix with weight of B2 is ? ? ?1 ?1 0 ? 1 ?2 ?2 0 ? ? ? BM = ? 2 0 ?1 ?1 ?1 ? ?2 0 ?2 ?2 and det(I ? BM) = (1 ? ?1 )(1 ? ?2 )(1 ? ?1 ? ?2 ? 3?1 ?2 ). This value can be negative in the region 0 ? ?1 , ?2 < 1. Proof of 3 ? 4. Note that if (G, s) does not contain (B2 , s+ ) as a signed minor then any wreductions of (G, s) also do not contain (B2 , s+ ) as a signed minor; we can check this property for each type of w-reductions, (w,1,2,3). Therefore, it is sufficient to show that if a w-reduced signed graph (G, s) does not contain (B2 , +, +) as a signed minor then it is one of the five types. Notice that G has no vertex of degree less than three. First, if the nullity of G is less than three, it is not hard to see that the signed graph is type (i), (ii) or (iii). Secondly, we consider the case that the graph G has nullity three. Note that all w-reduced signed graphs of nullity two have the signed minor (B1 , +). Therefore, we can assume that G does not have (plus) loop. Since (G, s) is w-reduced, G must be one of the following graphs: K4 , P4 , X1 and X2 , where X1 and X2 are defined in Figure 8. It is easy to check that possible way of assigning signs on these graphs are one of the types, (iii-v). Finally, we consider the case of the nullity, n, is more than three. In this case, we can show that (G, s) must be Dn or its subgraph. (Details are found in the supplementary material.) Proof of 4 ? 1. First we claim the following statement: if Y ?1 ?G (u) ? 0 ?u = (ue ) ? {0, s([e])}, (12) ~ e?E ?1 then (G, s) is U-type. This claim can be proved using the property that ?G (u) = det(I ? U M) ?1 is linear for each variable, ue . (That is, if we fix u except for one variable, say ue1 , then ?G = ~ C1 + C2 ue1 .) Take the product of the closed intervals from 0 to s(e) (e ? 
Proof of 3 ⇒ 4. Note that if (G, s) does not contain $(B_2, s_+)$ as a signed minor then any w-reduction of (G, s) also does not contain $(B_2, s_+)$ as a signed minor; we can check this property for each type of w-reduction, (w1), (w2) and (w3). Therefore, it is sufficient to show that if a w-reduced signed graph (G, s) does not contain $(B_2, +, +)$ as a signed minor then it is one of the five types. Notice that G has no vertex of degree less than three. First, if the nullity of G is less than three, it is not hard to see that the signed graph is of type (i), (ii) or (iii). Secondly, we consider the case where the graph G has nullity three. Note that all w-reduced signed graphs of nullity two have the signed minor $(B_1, +)$. Therefore, we can assume that G does not have a (plus) loop. Since (G, s) is w-reduced, G must be one of the following graphs: $K_4$, $P_4$, $X_1$ and $X_2$, where $X_1$ and $X_2$ are defined in Figure 8. It is easy to check that the possible ways of assigning signs to these graphs are among the types (iii)-(v). Finally, we consider the case where the nullity $n$ is more than three. In this case, we can show that (G, s) must be $D_n$ or one of its subgraphs. (Details are found in the supplementary material.)

Proof of 4 ⇒ 1. First we claim the following statement: if
$$\zeta_G^{-1}(u) \geq 0 \quad \text{for all } u = (u_e) \in \prod_{e \in \tilde{E}} \{0, s([e])\}, \qquad (12)$$
then (G, s) is U-type. This claim can be proved using the property that $\zeta_G^{-1}(u) = \det(I - U\mathcal{M})$ is linear in each variable $u_e$. (That is, if we fix $u$ except for one variable, say $u_{e_1}$, then $\zeta_G^{-1} = C_1 + C_2 u_{e_1}$.) Take the product of the closed intervals from 0 to $s(e)$ ($e \in \tilde{E}$) and form a hypercube. If there is a non-positive point in the hypercube then there must be a non-positive point on a face; we can repeat this argument until we arrive at a vertex. We check condition (12) for all four classes. Notice that if (G, s) satisfies (12) then its gauge equivalents, signed-deletions and signed-contractions have the same property.

So far, we have proven the assertion for w-reduced graphs; we now extend the proof to arbitrary signed graphs. For any signed graph, the complete w-reduction can be obtained by first applying the reductions (w1), (w2) and then reducing the bridges with (w3), because (w3) always makes degrees bigger and does not create loops. Therefore, the following two claims complete the proof.

Claim 1. Let (G', s') be a (w3)-reduction of a signed graph (G, s), i.e., obtained by contraction of a bridge $\varepsilon$. If (G', s') has property (12) then (G, s) also has the property.

Proof of Claim 1. Let $b$ and $\bar{b}$ be the directed edges corresponding to $\varepsilon$. Since any prime cycle passes $b$ and $\bar{b}$ the same number of times,
$$\zeta_G^{-1}(u) = \zeta_{G \setminus \varepsilon}^{-1}(\tilde{u}) + u_b u_{\bar{b}} f(\tilde{u}), \qquad (13)$$
where $\tilde{u}$ is the restriction of $u$ to $G \setminus \varepsilon$ and $f$ is some function. Assume that $s(\varepsilon) = 1$. (The case where $s(\varepsilon) = -1$ is completely analogous.) Since (G', s') has property (12), (G, s) has the property for $(u_b, u_{\bar{b}}) = (1, 1)$. For the cases $(u_b, u_{\bar{b}}) = (0, 0), (1, 0), (0, 1)$, we can deduce it from the property of $G \setminus \varepsilon$.

Claim 2. Let (G', s') be a (w1)- or (w2)-reduction of a signed graph (G, s). If (G', s') is U-type then (G, s) is U-type.

Proof of Claim 2. The case of (w1) is trivial. We prove the case of (w2). From the multivariate Ihara formula, the positivity of $Z_{G'}^{-1}(\beta)$ on the set $\prod_{ij \in E} I_{s(ij)}$ implies the positive definiteness of the matrix $I + \hat{D} - \hat{A}$ on the set. Adding a minus loop corresponds to adding $2\beta^2(1 - \beta^2)^{-1} - 2\beta(1 - \beta^2)^{-1} = -2\beta/(1 + \beta)$ on the diagonal, where $-1 < \beta \leq 0$. Therefore the new matrix is also positive definite and (G, s) is U-type.

4.2 Proof of Theorem 3
Proof of 2 ⇒ 1. The basic strategy is to use the following theorem.

Theorem 4 (Index sum theorem [16]). As usual, consider the Bethe free energy function $F$, defined on $L(G)$. Assume that $\det \nabla^2 F(q) \neq 0$ for all LBP fixed points $q$. Then the sum of the indices at the LBP fixed points is equal to one:
$$\sum_{q : \nabla F(q) = 0} \operatorname{sgn}\big(\det \nabla^2 F(q)\big) = 1, \quad \text{where } \operatorname{sgn}(x) := \begin{cases} 1 & \text{if } x > 0, \\ -1 & \text{if } x < 0. \end{cases}$$
(We call each summand, which is +1 or -1, the index of $F$ at $q$.)

At each LBP fixed point, the beta values for a solution can be computed using (10). Since the signs of $\beta_{ij}$ and $J_{ij}$ are equal [16], $\beta = (\beta_{ij}) \in \prod_{ij \in E} I_{s(ij)}$ is satisfied. Therefore, from the assumption and Lemma 2, the index of the solution is positive. We conclude the uniqueness of the solution from the index sum theorem above.

Proof of 1 ⇒ 2. We show the contraposition. From Lemma 2, (G, s) is not weakly U-type; there is $\beta = (\beta_{ij}) \in \prod_{ij \in E} I_{s(ij)}$ such that $\zeta_G^{-1}(\beta) < 0$. Take pseudomarginals $q = \{q_{ij}\}_{ij \in E} \cup \{q_i\}_{i \in V}$ whose correlation coefficients of $q_{ij}$ are equal to $\beta_{ij}$. (For example, set $\chi_{ij} = \beta_{ij}$, $m_i = 0$.) We can choose $J_{ij}$ and $h_i$ such that
$$\prod_{ij \in E} q_{ij}(x_i, x_j) \prod_{i \in V} q_i^{1 - d_i}(x_i) \propto \exp\Big( \sum_{ij \in E} J_{ij} x_i x_j + \sum_{i \in V} h_i x_i \Big). \qquad (14)$$
This construction implies that $q$ corresponds to an LBP fixed point with compatibility functions $\{J_{ij}, h_i\}$. This solution has index -1 by definition. If this were the unique solution, it would contradict the index sum formula. Therefore, there must be other solutions.
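The pseudomarginal construction in this last proof is easy to instantiate. Below is a small Python sketch (illustrative; the parametrization is the standard one for binary ±1 pairwise models, not code from the paper) that builds a pairwise pseudomarginal $q_{ij}$ with means $m_i = 0$ and a prescribed correlation coefficient, and checks that the singleton marginals stay uniform.

    import itertools

    def pairwise_pseudomarginal(chi, m_i=0.0, m_j=0.0):
        """q_ij(x_i, x_j) = (1 + m_i x_i + m_j x_j + chi x_i x_j) / 4 on {-1, +1}^2."""
        return {(xi, xj): (1 + m_i * xi + m_j * xj + chi * xi * xj) / 4.0
                for xi, xj in itertools.product([-1, 1], repeat=2)}

    q = pairwise_pseudomarginal(chi=-0.8)  # strongly anti-correlated pair
    # Marginals are uniform (m_i = 0) and the correlation coefficient equals chi:
    print(sum(q[(1, xj)] for xj in [-1, 1]))          # 0.5
    print(sum(xi * xj * p for (xi, xj), p in q.items()))  # -0.8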
5 Concluding remarks
In this paper we have developed a new approach to the uniqueness problem of the LBP algorithm. As a result, we have obtained a new class of LBPs that are guaranteed to have a unique solution. The uniqueness problem is reduced to properties of graph zeta functions (Lemma 2) using the index sum formula. In contrast to existing conditions, our uniqueness guarantee covers graphical models with strong interactions. Though our result is shown for binary pairwise models, the idea can be extended to factor graph models with many states. In fact, Theorem 1 has been extended to the general setting of the LBP algorithm on factor graphs [20]. One direction for future research is to combine the information of the signs and the strengths of the interactions to show uniqueness. The uniqueness problem is then reduced to the positivity of the graph zeta function on a restricted set, rather than on the hypercube of size one. If we can check the positivity of graph zeta functions theoretically or algorithmically, the result can be used for a better uniqueness guarantee.

References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Mateo, CA, 1988.
[2] P.F. Felzenszwalb and D.P. Huttenlocher. Efficient belief propagation for early vision. International Journal of Computer Vision, 70(1):41–54, 2006.
[3] D. Baron, S. Sarvotham, and R.G. Baraniuk. Bayesian compressive sensing via belief propagation. IEEE Transactions on Signal Processing, 58(1):269–280, 2010.
[4] R.J. McEliece, D.J.C. MacKay, and J.F. Cheng. Turbo decoding as an instance of Pearl's "belief propagation" algorithm. IEEE J. Sel. Areas Commun., 16(2):140–152, 1998.
[5] S. Ikeda, T. Tanaka, and S. Amari. Stochastic reasoning, free energy, and information geometry. Neural Computation, 16(9):1779–1810, 2004.
[6] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[7] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Generalized belief propagation. Advances in Neural Information Processing Systems, 13:689–695, 2001.
[8] A.L. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation. Neural Computation, 14(7):1691–1722, 2002.
[9] A.L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15(4):915–936, 2003.
[10] Y.W. Teh, M. Welling, et al. The unified propagation and scaling algorithm. Advances in Neural Information Processing Systems, 2:953–960, 2002.
[11] T. Heskes. Convexity arguments for efficient minimization of the Bethe and Kikuchi free energies. Journal of Artificial Intelligence Research, 26(1):153–190, 2006.
[12] T. Heskes. On the uniqueness of loopy belief propagation fixed points. Neural Computation, 16(11):2379–2413, 2004.
[13] J.M. Mooij and H.J. Kappen. Sufficient conditions for convergence of the sum-product algorithm. IEEE Transactions on Information Theory, 53(12):4422–4437, 2007.
[14] A.T. Ihler, J.W. Fisher, and A.S. Willsky. Loopy belief propagation: Convergence and effects of message errors. Journal of Machine Learning Research, 6(1):905–936, 2006.
[15] S. Tatikonda and M.I. Jordan. Loopy belief propagation and Gibbs measures. Uncertainty in AI, 18:493–500, 2002.
[16] Y. Watanabe and K. Fukumizu. Graph zeta function in the Bethe free energy and loopy belief propagation. Advances in Neural Information Processing Systems, 22:2017–2025, 2009.
[17] M. Kotani and T. Sunada. Zeta functions of finite graphs. J. Math. Sci. Univ. Tokyo, 7(1):7–25, 2000.
[18] K. Hashimoto. Zeta functions of finite graphs and representations of p-adic groups. Automorphic Forms and Geometry of Arithmetic Varieties, 15:211–280, 1989.
[19] H.M. Stark and A.A. Terras. Zeta functions of finite graphs and coverings. Advances in Mathematics, 121(1):124–165, 1996.
[20] Y. Watanabe and K. Fukumizu. Loopy belief propagation, Bethe free energy and graph zeta function. arXiv:1103.0605.
[21] D.M. Malioutov, J.K. Johnson, and A.S. Willsky. Walk-sums and belief propagation in Gaussian graphical models. Journal of Machine Learning Research, 7:2031–2064, 2006.
[22] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1):1–41, 2000.
[23] Thomas Zaslavsky. Characterizations of signed graphs. Journal of Graph Theory, 5(4):401–406, 1981.
Inductive reasoning about chimeric creatures
Charles Kemp
Department of Psychology, Carnegie Mellon University
[email protected]

Abstract
Given one feature of a novel animal, humans readily make inferences about other features of the animal. For example, winged creatures often fly, and creatures that eat fish often live in the water. We explore the knowledge that supports these inferences and compare two approaches. The first approach proposes that humans rely on abstract representations of dependency relationships between features, and is formalized here as a graphical model. The second approach proposes that humans rely on specific knowledge of previously encountered animals, and is formalized here as a family of exemplar models. We evaluate these models using a task where participants reason about chimeras, or animals with pairs of features that have not previously been observed to co-occur. The results support the hypothesis that humans rely on explicit representations of relationships between features.

Suppose that an eighteenth-century naturalist learns about a new kind of animal that has fur and a duck's bill. Even though the naturalist has never encountered an animal with this pair of features, he should be able to make predictions about other features of the animal: for example, the animal could well live in water but probably does not have feathers. Although the platypus exists in reality, from an eighteenth-century perspective it qualifies as a chimera, or an animal that combines two or more features that have not previously been observed to co-occur. Here we describe a probabilistic account of inductive reasoning and use it to account for human inferences about chimeras. The inductive problems we consider are special cases of the more general problem in Figure 1a where a reasoner is given a partially observed matrix of animals by features then asked to infer the values of the missing entries. This general problem has been previously studied and is addressed by computational models of property induction, categorization, and generalization [1–7].

A challenge faced by all of these models is to capture the background knowledge that guides inductive inferences. Some accounts rely on similarity relationships between animals [6, 8], others rely on causal relationships between features [9, 10], and others incorporate relationships between animals and relationships between features [11]. We will evaluate graphical models that capture both kinds of relationships (Figure 1a), but will focus in particular on relationships between features. Psychologists have previously suggested that humans rely on explicit mental representations of relationships between features [12–16]. Often these representations are described as theories: for example, theories that specify a causal relationship between having wings and flying, or living in the sea and eating fish. Relationships between features may take several forms: for example, one feature may cause, enable, prevent, be inconsistent with, or be a special case of another feature. For simplicity, we will treat all of these relationships as instances of dependency relationships between features, and will capture them using an undirected graphical model. Previous studies have used graphical models to account for human inferences about features but typically these studies consider toy problems involving a handful of novel features such as "has gene X14" or "has enzyme Y132" [9, 11].
Participants might be told, for example, that gene X14 leads to the production of enzyme Y132, then asked to use this information when reasoning about novel animals. Here we explore whether a graphical model approach can account for inferences about familiar features.

[Figure 1: Inductive reasoning about animals and features. (a) A partially observed matrix of animals by features (over the features slow, heavy, flies, and wings: hippo 1 1 0 0; rhino 1 1 0 0; sparrow 0 0 1 1; robin 0 0 1 1; new ? ? 1 ?). Inferences about the features of a new animal o_new that flies may draw on similarity relationships between animals (the new animal is similar to sparrows and robins but not hippos and rhinos), and on dependency relationships between features (flying and having wings are linked). (b) A graph product produced by combining the two graph structures in (a).]

Working with familiar features raises a methodological challenge since participants have a substantial amount of knowledge about these features and can reason about them in multiple ways. Suppose, for example, that you learn that a novel animal can fly (Figure 1a). To conclude that the animal probably has wings, you might consult a mental representation similar to the graph at the top of Figure 1a that specifies a dependency relationship between flying and having wings. On the other hand, you might reach the same conclusion by thinking about flying creatures that you have previously encountered (e.g. sparrows and robins) and noticing that these creatures have wings. Since the same conclusion can be reached in two different ways, judgments about arguments of this kind provide little evidence about the mental representations involved.

The challenge of working with familiar features directly motivates our focus on chimeras. Inferences about chimeras draw on rich background knowledge but require the reasoner to go beyond past experience in a fundamental way. For example, if you learn that an animal flies and has no legs, you cannot make predictions about the animal by thinking of flying, no-legged creatures that you have previously encountered. You may, however, still be able to infer that the novel animal has wings if you understand the relationship between flying and having wings. We propose that graphical models over features can help to explain how humans make inferences of this kind, and evaluate our approach by comparing it to a family of exemplar models. The next section introduces these models, and we then describe two experiments designed to distinguish between the models.

1 Reasoning about objects and features
Our models make use of a binary matrix D where the rows $\{o_1, \ldots, o_{129}\}$ correspond to objects, and the columns $\{f^1, \ldots, f^{56}\}$ correspond to features. A subset of the objects is shown in Figure 2a, and the full set of features is shown in Figure 2b and its caption. Matrix D was extracted from the Leuven natural concept database [17], which includes 129 animals and 757 features in total. We chose a subset of these features that includes a mix of perceptual and behavioral features, and that includes many pairs of features that depend on each other. For example, animals that "live in water" typically "can swim," and animals that have "no legs" cannot "jump far." Matrix D can be used to formulate problems where a reasoner observes one or two features of a new object (i.e. animal $o_{130}$) and must make inferences about the remaining features of the animal. The next two sections describe graphical models that can be used to address this problem.
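The toy problem in Figure 1a is small enough to write down directly. The sketch below (illustrative Python, not the authors' code; the matrix values are read off the figure) fills in the new animal's missing features by averaging over the known animals that match on the observed feature, which is the similarity-based route described above.

    import numpy as np

    features = ["slow", "heavy", "flies", "wings"]
    animals = ["hippo", "rhino", "sparrow", "robin"]
    D = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]])
    new = {"flies": 1}
    # Match on the observed feature and average the remaining columns:
    mask = D[:, features.index("flies")] == new["flies"]
    print(dict(zip(features, D[mask].mean(axis=0))))  # wings -> 1.0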
The first graphical model O captures relationships between objects, and the second model F captures relationships between features. We then discuss how these models can be combined, and introduce a family of exemplar-style models that will be compared with our graphical models.

A graphical model over objects
Many accounts of inductive reasoning focus on similarity relationships between objects [6, 8]. Here we describe a tree-structured graphical model O that captures these relationships. The tree was constructed from matrix D using average linkage clustering and the Jaccard similarity measure, and part of the resulting structure is shown in Figure 2a.

[Figure 2: Graph structures used to define graphical models O and F. (a) A tree that captures similarity relationships between animals (the portion shown runs from alligator, caiman, and crocodile through pig and sheep). The full tree includes 129 animals, and only part of the tree is shown here. The grey points along the branches indicate locations where a novel animal $o_{130}$ could be attached to the tree. (b) A network capturing pairwise dependency relationships between features (can swim, lives in water, eats fish, has wings, can fly, and so on). The edges capture both positive and negative dependencies. All edges in the network are shown, and the network also includes 20 isolated nodes for the following features: is black, is blue, is green, is grey, is pink, is red, is white, is yellow, is a pet, has a beak, stings, stinks, has a long neck, has feelers, sucks blood, lays eggs, makes a web, has a hump, has a trunk, and is cold-blooded.]
i and variability parameters ?i are set to their maximum likelihood values given the observed values of the features {f i } in the data matrix D. The conditional distributions in Equation 1 induce a joint distribution over all of the nodes in graph O, and the distribution P (f i |O, ? i , ?i ) is computed by marginalizing out the values of the internal nodes. Although we described O as a directed graphical model, the model can be converted into an equivalent undirected model with a potential for each edge in the tree and a potential for the root node. Here we use the undirected version of the model, which is a natural counterpart to the undirected model F described in the next section. The full version of structure O in Figure 2a includes 129 familiar animals, and our task requires inferences about a novel animal o130 that must be slotted into the structure. Let D? be an expanded version of D that includes a row for o130 , and let O? be an expanded version of O that includes a node for o130 . The edges in Figure 2a are marked with evenly spaced gray points, and we use a 3 uniform prior P (O? ) over all trees that can be created by attaching o130 to one of these points. Some of these trees have identical topologies, since some edges in Figure 2a have multiple gray points. Predictions about o130 can be computed using: X X P (D? |D) = P (D? |O? , D)P (O? |D) ? P (D? |O? , D)P (D|O? )P (O? ). (2) O? O? Equation 2 captures the basic intuition that the distribution of features for o130 is expected to be consistent with the distribution observed for previous animals. For example, if o130 is known to fly then the trees with high posterior probability P (O? |D) will be those where o130 is near other flying creatures (Figure 1a), and since these creatures have wings Equation 2 predicts that o130 probably also has wings. As this example suggests, model O captures dependency relationships between features implicitly, and therefore stands in contrast to models like F that rely on explicit representations of relationships between features. A graphical model over features Model F is an undirected graphical model defined over features. The graph shown in Figure 2b was created by identifying pairs where one feature depends directly on another. The author and a research assistant both independently identified candidate sets of pairwise dependencies, and Figure 2b was created by merging these sets and reaching agreement about how to handle any discrepancies. As previous researchers have suggested [13, 15], feature dependencies can capture several kinds of relationships. For example, wings enable flying, living in the sea leads to eating fish, and having no legs rules out jumping far. We work with an undirected graph because some pairs of features depend on each other but there is no clear direction of causal influence. For example, there is clearly a dependency relationship between being nocturnal and seeing in the dark, but no obvious sense in which one of these features causes the other. We assume that the rows of the object-feature matrix D are generated independently from an undirected graphical model F defined over the feature structure in Figure 2b: Y P (D|F) = P (oi |F). i Model F includes potential functions for each node and for each edge in the graph. These potentials were learned from matrix D using the UGM toolbox for undirected graphical models [19]. 
The learned potentials capture both positive and negative relationships: for example, animals that live in the sea tend to eat fish, and tend not to eat berries. Some pairs of feature values never occur together in matrix D (there are no creatures that fly but do not have wings). We therefore chose to compute maximum a posteriori values of the potential functions rather than maximum likelihood values, and used a diffuse Gaussian prior with a variance of 100 on the entries in each potential. After learning the potentials for model F, we can make predictions about a new object o130 using the distribution P (o130 |F). For example, if o130 is known to fly (Figure 1a), model F predicts that o130 probably has wings because the learned potentials capture a positive dependency between flying and having wings. Combining object and feature relationships There are two simple ways to combine models O and F in order to develop an approach that incorporates both relationships between features and relationships between objects. The output combination model computes the predictions of both models in isolation, then combines these predictions using a weighted sum. The resulting model is similar to a mixture-of-experts model, and to avoid free parameters we use a mixing weight of 0.5. The structure combination model combines the graph structures used by the two models and relies on a set of potentials defined over the resulting graph product. An example of a graph product is shown in Figure 1b, and the potential functions for this graph are inherited from the component models in the natural way. Kemp et al. [11] use a similar approach to combine a functional causal model with an object model O, but note that our structure combination model uses an undirected model F rather than a functional causal model over features. Both combination models capture the intuition that inductive inferences rely on relationships between features and relationships between objects. The output combination model has the virtue of 4 simplicity, and the structure combination model is appealing because it relies on a single integrated representation that captures both relationships between features and relationships between objects. To preview our results, our data suggest that the combination models perform better overall than either O or F in isolation, and that both combination models perform about equally well. Exemplar models We will compare the family of graphical models already described with a family of exemplar models. The key difference between these model families is that the exemplar models do not rely on explicit representations of relationships between objects and relationships between features. Comparing the model families can therefore help to establish whether human inferences rely on representations of this sort. Consider first a problem where a reasoner must predict whether object o130 has feature k after observing that it has feature i. An exemplar model addresses the problem by retrieving all previouslyobserved objects with feature i and computing the proportion that have feature k: P (ok = 1|oi = 1) = |f k & f i | |f i | (3) where |f k | is the number of objects in matrix D that have feature k, and |f k & f i | is the number that have both feature k and feature i. Note that we have streamlined our notation by using ok instead of o130 to refer to the kth feature value for object o130 . k Suppose now that the reasoner observes that object o130 has features i and j. 
The natural generalization of Equation 3 is: P (ok = 1|oi = 1, oj = 1) = |f k & f i & f j | |f i & f j | (4) Because we focus on chimeras, |f i & f j | = 0 and Equation 4 is not well defined. We therefore evaluate an exemplar model that computes predictions for the two observed features separately then computes the weighted sum of these predictions: P (ok = 1|oi = 1, oj = 1) = wi |f k & f i | |f k & f j | + wj . i |f | |f j | (5) where the weights wi and wj must sum to one. We consider four ways in which the weights could be set. The first strategy sets wi = wj = 0.5. The second strategy sets wi ? |f i |, and is consistent with an approach where the reasoner retrieves all exemplars in D that are most similar to the novel animal and reports the proportion of these exemplars that have feature k. The third strategy sets wi ? |f1i | , and captures the idea that features should be weighted by their distinctiveness [20]. The final strategy sets weights according to the coherence of each feature [21]. A feature is coherent if objects with that feature tend to resemble each other overall, and we define the coherence of feature i as the expected Jaccard similarity between two randomly chosen objects from matrix D that both have feature i. Note that the final three strategies are all consistent with previous proposals from the psychological literature, and each one might be expected to perform well. Because exemplar models and prototype models are often compared, it is natural to consider a prototype model [22] as an additional baseline. A standard prototype model would partition the 129 animals into categories and would use summary statistics for these categories to make predictions about the novel animal o130 . We will not evaluate this model because it corresponds to a coarser version of model O, which organizes the animals into a hierarchy of categories. The key characteristic shared by both models is that they explicitly capture relationships between objects but not features. 2 Experiment 1: Chimeras Our first experiment explores how people make inferences about chimeras, or novel animals with features that have not previously been observed to co-occur. Inferences about chimeras raise challenges for exemplar models, and therefore help to establish whether humans rely on explicit representations of relationships between features. Each argument can be represented as f i , f j ? f k 5 exemplar r = 0.42 7 feature F exemplar (wi = |f i |) (wi = 0.5) r = 0.44 7 object O r = 0.69 7 output combination r = 0.31 7 structure combination r = 0.59 7 r = 0.60 7 5 5 5 5 5 3 3 3 3 3 3 all 5 1 1 0 1 r = 0.06 7 conflict 0.5 1 1 0 0.5 1 r = 0.71 7 1 0 0.5 1 r = ?0.02 7 1 0 0.5 1 r = 0.49 7 0 5 5 5 5 5 3 3 3 3 3 3 1 0.5 1 r = 0.51 7 1 0 0.5 1 r = 0.64 7 1 0 0.5 1 r = 0.83 7 1 0 0.5 1 r = 0.45 7 0.5 1 r = 0.76 7 0 5 5 5 5 3 3 3 3 3 3 1 0.5 1 r = 0.26 7 1 0 0.5 1 r = 0.25 7 1 0 0.5 1 r = 0.19 7 1 0 0.5 1 r = 0.25 7 0.5 1 r = 0.24 7 0 5 5 5 5 3 3 3 3 3 3 1 0.5 1 1 0 0.5 1 1 0 0.5 1 1 0 0.5 1 0.5 1 r = 0.33 7 5 0 1 1 0 5 1 0.5 r = 0.79 7 5 0 1 1 0 5 1 0.5 r = 0.57 7 5 0 edge 0.5 r = 0.17 7 1 other 1 0 1 0 0.5 1 0 0.5 1 Figure 3: Argument ratings for Experiment 1 plotted against the predictions of six models. The y-axis in each panel shows human ratings on a seven point scale, and the x-axis shows probabilities according to one of the models. Correlation coefficients are shown for each plot. where f i and f k are the premises (e.g. ?has no legs? and ?can fly?) and f k is the conclusion (e.g. ?has wings?). 
We are especially interested in conflict cases where the premises f i and f j lead to opposite conclusions when taken individually: for example, most animals with no legs do not have wings, but most animals that fly do have wings. Our models that incorporate feature structure F can resolve this conflict since F includes a dependency between ?wings? and ?can fly? but not between ?wings? and ?has no legs.? Our models that do not include F cannot resolve the conflict and predict that humans will be uncertain about whether the novel animal has wings. Materials. The object-feature matrix D includes 447 feature pairs {f i , f j } such that none of the 129 animals has both f i and f j . We selected 40 pairs (see the supporting material) and created 400 arguments in total by choosing 10 conclusion features for each pair. The arguments can be assigned to three categories. Conflict cases are arguments f i , f j ? f k such that the single-premise arguments f i ? f k and f j ? f k lead to incompatible predictions. For our purposes, two singlepremise arguments with the same conclusion are deemed incompatible if one leads to a probability greater than 0.9 according to Equation 3, and the other leads to a probability less than 0.1. Edge cases are arguments f i , f j ? f k such that the feature network in Figure 2b includes an edge between f k and either f i or f j . Note that some arguments are both conflict cases and edge cases. All arguments that do not fall into either one of these categories will be referred to as other cases. The 400 arguments for the experiment include 154 conflict cases, 153 edge cases, and 120 other cases. 34 arguments are both conflict cases and edge cases. We chose these arguments based on three criteria. First, we avoided premise pairs that did not co-occur in matrix D but that co-occur in familiar animals that do not belong to D. For example, ?is pink? and ?has wings? do not co-occur in D but ?flamingo? is a familiar animal that has both features. Second, we avoided premise pairs that specified two different numbers of legs?for example, {?has four legs,? ?has six legs?}. Finally, we aimed to include roughly equal numbers of conflict cases, edge cases, and other cases. Method. 16 undergraduates participated for course credit. The experiment was carried out using a custom-built computer interface, and one argument was presented on screen at a time. Participants 6 rated the probability of the conclusion on seven point scale where the endpoints were labeled ?very unlikely? and ?very likely.? The ten arguments for each pair of premises were presented in a block, but the order of these blocks and the order of the arguments within these blocks were randomized across participants. Results. Figure 3 shows average human judgments plotted against the predictions of six models. The plots in the first row include all 400 arguments in the experiment, and the remaining rows show results for conflict cases, edge cases, and other cases. The previous section described four exemplar models, and the two shown in Figure 3 are the best performers overall. Even though the graphical models include more numerical parameters than the exemplar models, recall that these parameters are learned from matrix D rather than fit to the experimental data. Matrix D also serves as the basis for the exemplar models, which means that all of the models can be compared on equal terms. The first row of Figure 3 suggests that the three models which include feature structure F perform better than the alternatives. 
The output combination model is the worst of the three models that incorporate F, and the correlation achieved by this model is significantly greater than the correlation achieved by the best exemplar model (p < 0.001, using the Fisher transformation to convert correlation coefficients to z scores). Our data therefore suggest that explicit representations of relationships between features are needed to account for inductive inferences about chimeras. The model that includes the feature structure F alone performs better than the two models that combine F with the object structure O, which may not be surprising since Experiment 1 focuses specifically on novel animals that do not slot naturally into structure O. Rows two through four suggest that the conflict arguments in particular raise challenges for the models which do not include feature structure F. Since these conflict cases are arguments f i , f j ? f k where f i ? f k has strength greater than 0.9 and f j ? f k has strength less than 0.1, the first exemplar model averages these strengths and assigns an overall strength of around 0.5 to each argument. The second exemplar model is better able to differentiate between the conflict arguments, but still performs substantially worse than the three models that include structure F. The exemplar models perform better on the edge arguments, but are outperformed by the models that include F. Finally, all models achieve roughly the same level of performance on the other arguments. Although the feature model F performs best overall, the predictions of this model still leave room for improvement. The two most obvious outliers in the third plot in the top row represent the arguments {is blue, lives in desert ? lives in woods} and {is pink, lives in desert ? lives in woods}. Our participants sensibly infer that any animal which lives in the desert cannot simultaneously live in the woods. In contrast, the Leuven database indicates that eight of the twelve animals that live in the desert also live in the woods, and the edge in Figure 2b between ?lives in the desert? and ?lives in the woods? therefore represents a positive dependency relationship according to model F. This discrepancy between model and participants reflects the fact that participants made inferences about individual animals but the Leuven database is based on features of animal categories. Note, for example, that any individual animal is unlikely to live in the desert and the woods, but that some animal categories (including snakes, salamanders, and lizards) are found in both environments. 3 Experiment 2: Single-premise arguments Our results so far suggest that inferences about chimeras rely on explicit representations of relationships between features but provide no evidence that relationships between objects are important. It would be a mistake, however, to conclude that relationships between objects play no role in inductive reasoning. Previous studies have used object structures like the example in Figure 2a to account for inferences about novel features [11]?for example, given that alligators have enzyme Y132 in their blood, it seems likely that crocodiles also have this enzyme. Inferences about novel objects can also draw on relationships between objects rather than relationships between features. For example, given that a novel animal has a beak you will probably predict that it has feathers, not because there is any direct dependency between these two features, but because the beaked animals that you know tend to have feathers. 
Our second experiment explores inferences of this kind. Materials and Method. 32 undergraduates participated for course credit. The task was identical to Experiment 1 with the following exceptions. Each two-premise argument f i , f j ? f k from Experiment 1 was converted into two one-premise arguments f i ? f k and f j ? f k , and these 7 feature F exemplar r = 0.78 all 7 output combination r = 0.75 7 structure combination r = 0.75 7 5 5 5 5 3 3 3 3 3 1 0.5 1 r = 0.87 7 1 0 0.5 1 r = 0.87 7 1 0 0.5 1 r = 0.84 7 1 0 0.5 1 r = 0.86 7 0 5 5 5 5 3 3 3 3 3 1 0 0.5 1 r = 0.79 7 1 0 0.5 1 r = 0.21 7 1 0 0.5 1 r = 0.74 7 0.5 1 r = 0.66 7 0 5 5 5 3 3 3 3 3 1 0.5 1 1 0 0.5 1 1 0 0.5 1 0.5 1 r = 0.73 7 5 0 1 1 0 5 1 0.5 r = 0.85 7 5 1 r = 0.77 7 5 0 edge r = 0.54 7 1 other object O 1 0 0.5 1 0 0.5 1 Figure 4: Argument ratings and model predictions for Experiment 2. one-premise arguments were randomly assigned to two sets. 16 participants rated the 400 arguments in the first set, and the other 16 rated the 400 arguments in the second set. Results. Figure 4 shows average human ratings for the 800 arguments plotted against the predictions of five models. Unlike Figure 3, Figure 4 includes a single exemplar model since there is no need to consider different feature weightings in this case. Unlike Experiment 1, the feature model F performs worse than the other alternatives (p < 0.001 in all cases). Not surprisingly, this model performs relatively well for edge cases f j ? f k where f j and f k are linked in Figure 2b, but the final row shows that the model performs poorly across the remaining set of arguments. Taken together, Experiments 1 and 2 suggest that relationships between objects and relationships between features are both needed to account for human inferences. Experiment 1 rules out an exemplar approach but models that combine graph structures over objects and features perform relatively well in both experiments. We considered two methods for combining these structures and both performed equally well. Combining the knowledge captured by these structures appears to be important, and future studies can explore in detail how humans achieve this combination. 4 Conclusion This paper proposed that graphical models are useful for capturing knowledge about animals and their features and showed that a graphical model over features can account for human inferences about chimeras. A family of exemplar models and a graphical model defined over objects were unable to account for our data, which suggests that humans rely on mental representations that explicitly capture dependency relationships between features. Psychologists have previously used graphical models to capture relationships between features, but our work is the first to focus on chimeras and to explore models defined over a large set of familiar features. Although a simple undirected model accounted relatively well for our data, this model is only a starting point. The model incorporates dependency relationships between features, but people know about many specific kinds of dependencies, including cases where one feature causes, enables, prevents, or is inconsistent with another. An undirected graph with only one class of edges cannot capture this knowledge in full, and richer representations will ultimately be needed in order to provide a more complete account of human reasoning. Acknowledgments I thank Madeleine Clute for assisting with this research. 
This work was supported in part by the Pittsburgh Life Sciences Greenhouse Opportunity Fund and by NSF grant CDI-0835797. 8 References [1] R. N. Shepard. Towards a universal law of generalization for psychological science. Science, 237:1317? 1323, 1987. [2] J. R. Anderson. The adaptive nature of human categorization. Psychological Review, 98(3):409?429, 1991. [3] E. Heit. A Bayesian analysis of some forms of inductive reasoning. In M. Oaksford and N. Chater, editors, Rational models of cognition, pages 248?274. Oxford University Press, Oxford, 1998. [4] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24:629?641, 2001. [5] C. Kemp and J. B. Tenenbaum. Structured statistical models of inductive reasoning. Psychological Review, 116(1):20?58, 2009. [6] D. N. Osherson, E. E. Smith, O. Wilkie, A. Lopez, and E. Shafir. Category-based induction. Psychological Review, 97(2):185?200, 1990. [7] D. J. Navarro. Learning the context of a category. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1795?1803. 2010. [8] C. Kemp, T. L. Griffiths, S. Stromsten, and J. B. Tenenbaum. Semi-supervised learning with trees. In Advances in Neural Information Processing Systems 16, pages 257?264. MIT Press, Cambridge, MA, 2004. [9] B. Rehder. A causal-model theory of conceptual representation and categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29:1141?1159, 2003. [10] B. Rehder and R. Burnett. Feature inference and the causal structure of categories. Cognitive Psychology, 50:264?314, 2005. [11] C. Kemp, P. Shafto, and J. B. Tenenbaum. An integrated account of generalization across objects and features. Cognitive Psychology, in press. [12] S. E. Barrett, H. Abdi, G. L. Murphy, and J. McCarthy Gallagher. Theory-based correlations and their role in children?s concepts. Child Development, 64:1595?1616, 1993. [13] S. A. Sloman, B. C. Love, and W. Ahn. Feature centrality and conceptual coherence. Cognitive Science, 22(2):189?228, 1998. [14] D. Yarlett and M. Ramscar. A quantitative model of counterfactual reasoning. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 123?130. MIT Press, Cambridge, MA, 2002. [15] W. Ahn, J. K. Marsh, C. C. Luhmann, and K. Lee. Effect of theory-based feature correlations on typicality judgments. Memory and Cognition, 30(1):107?118, 2002. [16] D. C. Meehan C. McNorgan, R. A. Kotack and K. McRae. Feature-feature causal relations and statistical co-occurrences in object concepts. Memory and Cognition, 35(3):418?431, 2007. [17] S. De Deyne, S. Verheyen, E. Ameel, W. Vanpaemel, M. J. Dry, W. Voorspoels, and G. Storms. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods, 40(4):1030?1048, 2008. [18] J. P. Huelsenbeck and F. Ronquist. MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics, 17(8):754?755, 2001. [19] M. Schmidt. UGM: A Matlab toolbox for probabilistic undirected graphical models. 2007. Available at http://people.cs.ubc.ca/?schmidtm/Software/UGM.html. [20] L. J. Nelson and D. T. Miller. The distinctiveness effect in social categorization: you are what makes you unusual. Psychological Science, 6:246?249, 1995. [21] A. L. Patalano, S. Chin-Parker, and B. H. Ross. 
The importance of being coherent: category coherence, cross-classification and reasoning. Journal of memory and language, 54:407?424, 2006. [22] S. K. Reed. Pattern recognition and categorization. Cognitive Psychology, 3:393?407, 1972. 9
On Causal Discovery with Cyclic Additive Noise Models
Joris M. Mooij, Radboud University Nijmegen, The Netherlands, [email protected]
Tom Heskes, Radboud University Nijmegen, The Netherlands, [email protected]
Dominik Janzing, Max Planck Institute for Intelligent Systems, Tübingen, Germany, [email protected]
Bernhard Schölkopf, Max Planck Institute for Intelligent Systems, Tübingen, Germany, [email protected]

Abstract
We study a particular class of cyclic causal models, where each variable is a (possibly nonlinear) function of its parents and additive noise. We prove that the causal graph of such models is generically identifiable in the bivariate, Gaussian-noise case. We also propose a method to learn such models from observational data. In the acyclic case, the method reduces to ordinary regression, but in the more challenging cyclic case, an additional term arises in the loss function, which makes it a special case of nonlinear independent component analysis. We illustrate the proposed method on synthetic data.

1 Introduction
Causal discovery refers to a special class of statistical and machine learning methods that infer causal relationships between variables from data and prior knowledge [1, 2, 3]. Whereas in machine learning, one traditionally concentrates on the task of predicting the values of variables given observations of other variables (for example in regression or classification tasks), causal discovery focuses on predicting the results of interventions on the system: if one forces one (or more) of the variables into a particular state, how will the probability distribution of the other variables be affected? In this sense, causal discovery concentrates more on inferring the underlying mechanism that generated the data than on modeling the data itself.

An important assumption often made in causal discovery is that the causal mechanism is acyclic, i.e., that no feedback loops are present in the system. For example, if A causes B, and B causes C, then the possibility that C also causes A is usually excluded from the outset. This acyclicity assumption is useful because it simplifies the theoretical analysis and often is also a reasonable assumption to make. Nevertheless, causal cycles are known to occur frequently in biological systems such as gene regulatory networks and protein interaction networks. One would expect that taking such feedback loops into account during data analysis should therefore significantly improve the quality of the inferred causal structure.

Essentially two strategies for dealing with cycles in causal models can be distinguished. The first one is to perform repeated measurements in time, and to infer a causal model for the dynamics of the underlying system. The fact that causes always precede their effects provides additional prior knowledge that simplifies causal discovery, which is exploited in methods based on Granger causality [4]. Additionally, under certain assumptions, "unrolling" the model in time effectively removes the cycles, which is used in methods such as vector auto-regressive models, which are popular in econometrics, or more generally, Dynamic Bayesian Networks [5] and ordinary differential equation models. However, all these methods need time series data where the temporal resolution of the measurements is high relative to the characteristic time scale of the feedback loops in order to rule out instantaneous cyclic relationships.
Therefore, a significant practical drawback of this strategy is that obtaining time series data with sufficiently high temporal resolution is often costly, or even impossible, using current technology. The second strategy is based on the assumption that the system is in equilibrium, and that the data have been gathered from an equilibrium distribution (in the ergodic case, the data can also consist of snapshots of the dynamical system, taken at different points in time). The equilibrium distribution is then used to draw conclusions about the underlying dynamic system, and to predict the results of interventions. This is the approach taken in the current paper. We assume the equilibrium to be described by fixed point equations, where each variable is a function of some other variables, plus noise. This noise models unobserved causes and is assumed to be different for each independent realization of the system, but constant during equilibration. In the simplest case (assuming causal sufficiency), the noise terms are jointly independent. Together, these assumptions define an interesting model class that forms a direct generalization of Structural Equation Models (SEMs) [2] to the nonlinear (and cyclic) case. An important novel aspect of our work is that we consider continuous-valued variables and nonlinear causal mechanisms. Although the linear case has been studied in considerable detail already [6, 7, 8], as far as we know, nobody has yet investigated the (more realistic) case of nonlinear causal mechanisms. The basic assumption made in [7] is the so-called Global Directed Markov Condition, which relates (conditional) independences between the variables with the structure of the causal graph. In the cyclic case, however, it is not obvious what the relationship is with the class of nonlinear causal models that we consider here. Therefore, direct generalization of the algorithm proposed in [7] to the nonlinear case seems difficult. Furthermore, conditional independences only allow identification of the graph up to Markov equivalence classes. For instance, in the bivariate case, one cannot distinguish between X → Y, Y → X and X ⇄ Y using conditional independences alone. Researchers have also studied cyclic causal models with discrete variables [9, 10]. However, if the measured variables are intrinsically continuous-valued, it is desirable to avoid discretization as a preprocessing step, as this throws away information that is useful for causal discovery.

2 Cyclic additive noise models

Let V be a finite index set. Let (X_i)_{i \in V} be random variables modeling measurable properties of the system of interest and let (E_i)_{i \in V} be other random variables modeling unobservable noise sources. We assume that all random variables take values in the real numbers. We also assume that the noise variables (E_i)_{i \in V} have densities and are jointly independent:

p(e_V) = \prod_{i \in V} p_{E_i}(e_i).   (1)

For each i, let pa(i) \subseteq V \setminus \{i\} be a set defining the parents of i and f_i : R^{|pa(i)|} \to R be a continuously differentiable function. Under certain assumptions (see below), the following equations specify a unique probability distribution on the observable variables (X_i)_{i \in V}:

X_i = f_i(X_{pa(i)}) + E_i, \quad i \in V.   (2)

Using vector notation, we can write the fixed point equations (2) in a more compact manner as

X = f(X) + E.   (3)

The probability distribution p(X) induced by these equations is interpreted as the equilibrium distribution of an underlying dynamic system.
Each function f_i represents a causal mechanism which determines X_i as a function of its parents X_{pa(i)}, which model its direct causes. The noise variables can be interpreted as other, unobserved causes for their corresponding variables. By assuming independence of the noise variables, we are assuming causal sufficiency, or in other words, absence of confounders (hidden common causes). We call a model specified by (1) and (2) an additive noise model. With any additive noise model we can associate a directed graph with vertices V and directed edges i → j if i ∈ pa(j), i.e., from causes to their direct effects.¹ If this graph is acyclic, we call the model an acyclic additive noise model. If the graph contains (directed) cycles, we call the model a cyclic additive noise model.²

Interpretation in the cyclic case. Note that the presence of cycles increases the complexity of the model, because the equations (2) become recursive. The interpretation of these equations also becomes less straightforward in the cyclic case. In general, for a fixed noise value E = e, the fixed point equations x = f(x) + e can have any number of fixed points between 0 and ∞. For simplicity, however, we will assume that for each noise value e there exists a unique fixed point x = F(e). Later, in Section 3.1, we will give a sufficient condition for this to be the case. Under this assumption, the joint probability distribution p(E) induces a unique joint probability distribution p(X). This interpretation also shows a way to sample from the joint distribution: First, one samples a joint value of the noise e. Then, one iterates the fixed point equations (2) to find the corresponding fixed point x = F(e). This yields one sample x. Different independent samples are obtained by repeating this process. Thus, the equations can be interpreted as the equilibrium distribution of a dynamic system in the presence of noise which is constant during equilibration, but differs across measurements (data points). If in reality the noise does change over time, but on a slow time scale relative to the time scale of the equilibration, then this model can be considered as the first-order approximation.

The induced density. Although the mapping F : e ↦ x that maps noise values to their corresponding fixed points under (3) is nontrivial in most cases, a crucial observation is that its inverse G = F^{-1} = I − f has a very simple form (here, I is the identity mapping). Under the change of variables e ↦ x, the transformation rule of the densities reads:

p_X(x) = p_E(x - f(x)) \, |I - \nabla f(x)| = |I - \nabla f(x)| \prod_{i \in V} p_{E_i}\!\left(x_i - f_i(x_{pa(i)})\right)   (4)

where ∇f(x) is the Jacobian of f evaluated at x and |·| denotes the absolute value of the determinant of a matrix. Note that although sampling from the distribution p_X is elaborate (as it typically involves many iterations of the fixed point equations), the corresponding density can be easily expressed analytically in terms of the noise distributions and partial derivatives of the causal mechanisms. Later we will see that the fact that the model has a simple structure in the "backwards" direction allows us to efficiently learn it from data, which may be surprising considering the fact that the model is complex in the "forward" direction.

¹ If some causal mechanism f_j does not depend on one of its parents i ∈ pa(j), i.e., if ∂f_j/∂X_i (X_{pa(j)}) = 0 everywhere, then we discard the edge i → j.
² Cyclic additive noise models are also known as "non-recursive" (nonlinear) structural equation models, whereas the acyclic versions are known as "recursive" (nonlinear) SEMs. This terminology is common usage but confusing, as it is precisely in the cyclic case that one needs a recursive procedure to calculate the solutions of equations (2), and not the other way around.
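To make the fixed-point sampling procedure and the change-of-variables density (4) concrete, the following minimal sketch samples from a bivariate cyclic additive noise model and evaluates its log-density. This is our own illustration, not code from the paper; the choice of mechanisms (0.9 tanh, which satisfies the contraction condition introduced in Section 3.1) and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mechanisms; sup |f_X'(y) f_Y'(x)| = 0.81 < 1, so the fixed point is unique.
f_X = lambda y: 0.9 * np.tanh(y)   # mechanism for X given Y
f_Y = lambda x: 0.9 * np.tanh(x)   # mechanism for Y given X
sigma_X, sigma_Y = 1.0, 0.5

def sample(n, n_iter=200):
    """Draw n samples: fix the noise, then iterate x = f_X(y)+e_X, y = f_Y(x)+e_Y."""
    e_X = sigma_X * rng.standard_normal(n)
    e_Y = sigma_Y * rng.standard_normal(n)
    x = np.zeros(n); y = np.zeros(n)
    for _ in range(n_iter):            # converges by Banach's fixed point theorem
        x, y = f_X(y) + e_X, f_Y(x) + e_Y
    return x, y

def log_density(x, y, eps=1e-4):
    """log p(x, y) via the change of variables e = (I - f)(x): equation (4)."""
    dfX = (f_X(y + eps) - f_X(y - eps)) / (2 * eps)   # numerical f_X'(y)
    dfY = (f_Y(x + eps) - f_Y(x - eps)) / (2 * eps)   # numerical f_Y'(x)
    log_det = np.log(np.abs(1.0 - dfX * dfY))          # |det(I - grad f)| in 2D
    log_pE = (-0.5 * ((x - f_X(y)) / sigma_X) ** 2 - np.log(sigma_X)
              - 0.5 * ((y - f_Y(x)) / sigma_Y) ** 2 - np.log(sigma_Y)
              - np.log(2 * np.pi))
    return log_det + log_pE

x, y = sample(500)
print(log_density(x, y).mean())
```

Note how the density is cheap to evaluate even though sampling requires many fixed-point iterations, exactly the asymmetry pointed out above.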
Causal interpretation. An additive noise model can be used for ordinary prediction tasks (i.e., predict some of the variables conditioned on observations of some other variables), but can also be used to predict the results of interventions: if we force some of the variables to certain values, what will happen with the others? Such an intervention can be modeled by replacing the equations for the intervened variables by simple equations X_i = C_i, with C_i the value set by the intervention. This procedure results in another additive noise model. If the altered fixed point equations induce a unique probability distribution on X, then this is the predicted distribution on X under the intervention. In this sense, additive noise models are given a causal interpretation. Hereafter, we will therefore refer to the graph associated with the additive noise model as the causal graph.

3 Identifiability

An interesting and important question for causal discovery is under which conditions the causal graph is identifiable given only the joint distribution p(X). Lacerda et al. [8] have shown that under the additional assumption of linearity (i.e., all functions f_i are linear), the causal graph is completely identifiable if at most one of the noise sources has a Gaussian distribution. The proof is based on Independent Component Analysis. Our aim here is to deal with the more difficult nonlinear case. In this work, we focus our attention on the bivariate case. Our main result, Theorem 1, can be seen as an extension of the identifiability result for acyclic nonlinear additive noise models derived in [11], although we make the additional simplifying assumption that the noise variables are Gaussian. We believe that similar identifiability results can be derived in the multivariate case (|V| > 2) and for non-Gaussian noise distributions. However, proving such results seems to be significantly harder as the calculations become very cumbersome, and we leave this as an open problem for future work.

3.1 The bivariate case

Before we state our identifiability result, we first give a sufficient condition for existence of a unique equilibrium distribution for the bivariate case.

Lemma 1. Consider the fixed point equations x = f_X(y) + c_X, y = f_Y(x) + c_Y, parameterized by constants (c_X, c_Y). If sup_{x,y} |f'_X(y) f'_Y(x)| = r < 1, then for any (c_X, c_Y), the fixed point equations converge to a unique fixed point that does not depend on the initial conditions.

Proof. Consider the mapping defined by applying the fixed point equations twice. Its Jacobian is diagonal and the absolute values of the entries are bounded from above by r < 1 under the assumption above. According to Banach's fixed point theorem, it is a contraction (e.g., with respect to the Euclidean norm on R²) and therefore has a fixed point that is unique. Independent of the initial conditions, under repeated application of this mapping, one converges to this fixed point. Lemma 1 in the supplement then shows that the same conclusion must hold for the mapping that applies the fixed point equations only once. □
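A quick numerical illustration of Lemma 1 (our own sketch; the specific mechanisms, grid, and noise offsets are assumptions) checks the contraction constant r on a grid and verifies that the fixed-point iteration reaches the same point from different initializations:

```python
import numpy as np

f_X = lambda y: 0.9 * np.tanh(y)   # assumed mechanisms with r < 1
f_Y = lambda x: 0.9 * np.tanh(x)

# Contraction condition of Lemma 1: r = sup |f_X'(y) f_Y'(x)| < 1.
g = np.linspace(-10, 10, 2001)
dfX = np.gradient(f_X(g), g)
dfY = np.gradient(f_Y(g), g)
r = dfX.max() * dfY.max()          # both derivatives are positive here
print("r =", r)                    # about 0.81 < 1: a unique fixed point exists

# Independence of the initial condition, for one fixed noise value (c_X, c_Y).
c_X, c_Y = 0.3, -1.2
for x0, y0 in [(5.0, -5.0), (-3.0, 7.0)]:
    x, y = x0, y0
    for _ in range(100):
        x, y = f_X(y) + c_X, f_Y(x) + c_Y
    print(x, y)                    # both initializations converge to the same point
```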
This lemma provides a sufficient condition for an additive noise model to be well-defined in the bivariate case. Also, the result of any intervention will be well-defined under this condition. Now suppose we are given the joint distribution p_{X,Y} of two real-valued random variables X, Y which is induced by an additive noise model. The question is whether we can identify the causal graph corresponding with the true model out of the four possibilities (X ⊥ Y, X → Y, Y → X, X ⇄ Y). Hoyer et al. [11] have shown that if one excludes the cyclic case X ⇄ Y, then in the generic case, the causal structure is identifiable. Our aim is to prove a stronger identifiability result where the cyclic case is not excluded a priori. As a first step in this direction, we consider here the case of Gaussian noise.

Theorem 1. Let p_{X,Y} be induced by two additive Gaussian noise models, M and M̃:

(M): X = f_X(Y) + E_X, Y = f_Y(X) + E_Y, E_X ⊥ E_Y, E_X ~ N(0, \tau_X^{-1}), E_Y ~ N(0, \tau_Y^{-1})
(M̃): X = \tilde f_X(Y) + \tilde E_X, Y = \tilde f_Y(X) + \tilde E_Y, \tilde E_X ⊥ \tilde E_Y, \tilde E_X ~ N(0, \tilde\tau_X^{-1}), \tilde E_Y ~ N(0, \tilde\tau_Y^{-1})

Assuming that sup_{x,y} |f'_X(y) f'_Y(x)| < 1 and similarly sup_{x,y} |\tilde f'_X(y) \tilde f'_Y(x)| < 1, the two corresponding causal graphs coincide: G_M = G_{M̃}, i.e.:

f_X is constant ⟺ f̃_X is constant, and f_Y is constant ⟺ f̃_Y is constant,

or the models are of the following very special form:
- either: f_X, f̃_X, f_Y, f̃_Y are all affine,
- or: one model (say M̃) is acyclic, the other is cyclic, and the following equations hold:

f_Y(x) = Cx + D \text{ with } C \neq 0, \quad \tilde f_X(y) = f_X(y) - \frac{\tau_Y}{\tilde\tau_X} C y + \frac{\tau_Y}{\tilde\tau_X} C D, \quad \tilde f_Y(x) = D,   (5)

and f̃_X satisfies the following differential equation:³

\frac{1}{\tau_X}\left(\tilde\tau_X \tilde f_X - \tau_Y C y + \tau_Y C D\right)\left(\tilde\tau_X \tilde f'_X - \tau_Y C\right) + \tilde\tau_X \tilde f_X \tilde f'_X + C \, \frac{\tilde\tau_X \tilde f''_X}{(\tilde\tau_X \tilde f'_X - C)\, C \, \tau_Y} = \tau_Y (y - D) - \tilde\tau_Y (y - D).   (6)

³ Or similar equations with the roles of X and Y reversed.

We will only sketch the proof here, and refer to the supplementary material for the details. What the theorem shows is that, apart from a small class of exceptions, bivariate additive Gaussian-noise models induce densities that allow a perfect reconstruction of the causal graph. In a certain sense, the situation can be seen as similar to the well-known "faithfulness assumption" [3]: the latter assumption is often made in order to exclude the highly special cases of causal models which would spoil identifiability of the Markov equivalence class. The usual reasoning is that these cases are so rare that they can be ignored in practice. A similar reasoning can be made in our case. Although our main identifiability result, Theorem 1, may seem rather restricted as it only considers two variables, it may be possible to use this two-variable identifiability result as a key building block for deriving more general identifiability results for many variables, similar to how [12] generalized the (acyclic) identifiability result of [11] from two to many variables.

3.2 Proof sketch

Writing \nu_{\cdot}(\cdot) := \log p_{\cdot}(\cdot) for logarithms of densities, we re-express (4) for the bivariate case:

\nu_{X,Y}(x, y) = \nu_{E_X}(x - f_X(y)) + \nu_{E_Y}(y - f_Y(x)) + \log\left|1 - f'_X(y) f'_Y(x)\right|   (7)

Partial differentiation with respect to x and y yields the following equation, which will be the equation on which we base our identifiability proof:
\frac{\partial^2 \nu_{X,Y}}{\partial x \, \partial y} = -\nu''_{E_X}(x - f_X(y)) \, f'_X(y) - \nu''_{E_Y}(y - f_Y(x)) \, f'_Y(x) - \frac{f''_X(y) f''_Y(x)}{\left(1 - f'_X(y) f'_Y(x)\right)^2}   (8)

We will now specialize to Gaussian noise and give a sketch of how to prove identifiability of the causal graph. We assume E_X ~ N(0, τ_X^{-1}) and E_Y ~ N(0, τ_Y^{-1}), where τ_X = σ_X^{-2}, τ_Y = σ_Y^{-2} are the precisions (inverse variances) of the Gaussian noise variables. Equation (8) simplifies to:

\frac{\partial^2 \nu_{X,Y}}{\partial x \, \partial y} = \tau_X f'_X(y) + \tau_Y f'_Y(x) - \frac{f''_X(y) f''_Y(x)}{\left(1 - f'_X(y) f'_Y(x)\right)^2}   (9)

A similar equation holds for the other model:

\frac{\partial^2 \nu_{X,Y}}{\partial x \, \partial y} = \tilde\tau_X \tilde f'_X(y) + \tilde\tau_Y \tilde f'_Y(x) - \frac{\tilde f''_X(y) \tilde f''_Y(x)}{\left(1 - \tilde f'_X(y) \tilde f'_Y(x)\right)^2}   (10)

The general idea of the identifiability proof is as follows. We consider two cases: (i) model M̃ has zero "arrows", i.e., f̃'_X = 0 and f̃'_Y = 0; (ii) model M̃ has one "arrow", say, f̃'_X ≠ 0, f̃'_Y = 0. By equating the right-hand sides of (9) and (10), we show in both cases that generically (i.e., except for very special choices of the model parameters), model M must equal model M̃. This then implies that the causal graphs of M and M̃ must be the same in the generic case. For example, in the first case, because f̃'_X = f̃'_Y = 0, we obtain the following equation:

0 = \left(\tau_X f'_X(y) + \tau_Y f'_Y(x)\right)\left(1 - f'_X(y) f'_Y(x)\right)^2 - f''_X(y) f''_Y(x)   (11)

This is a nonlinear partial differential equation in ξ(x) := f'_Y(x) and η(y) := f'_X(y). Inspired by the identifiability proof in [13], we adopt the solution method from [14, Supplement S.4.3] that gives a general method for solving functional-differential equations of the form

\Phi_1(x) \Psi_1(y) + \Phi_2(x) \Psi_2(y) + \dots + \Phi_k(x) \Psi_k(y) = 0   (12)

where the functionals Φ_i(x) and Ψ_i(y) depend only on x and y, respectively: Φ_i(x) = Φ_i(x, ξ, ξ'), Ψ_i(y) = Ψ_i(y, η, η'). The idea behind the solution method is to repeatedly divide by one of the functionals and differentiate with respect to the corresponding variable. For example, dividing by Φ_1 and differentiating with respect to x, we obtain:

\frac{\partial}{\partial x}\!\left(\frac{\Phi_2(x)}{\Phi_1(x)}\right) \Psi_2(y) + \dots + \frac{\partial}{\partial x}\!\left(\frac{\Phi_k(x)}{\Phi_1(x)}\right) \Psi_k(y) = 0,

which is again of the form (12), but with one fewer term. This process is repeated until an equation of the form (12) remains with only 2 terms. That equation is easily solved, as its general solution can be written as C_1 \Phi_1(x) + C_2 \Phi_2(x) = 0, C_2 \Psi_1(y) - C_1 \Psi_2(y) = 0 for arbitrary constants C_1, C_2 ∈ R, and there are also two degenerate solutions Φ_1 = Φ_2 = 0 (with Ψ_1, Ψ_2 arbitrary) and Ψ_1 = Ψ_2 = 0 (with Φ_1, Φ_2 arbitrary). These equations, which are now ordinary differential equations, can be solved by standard methods. The solutions are then substituted into the original equation (12) in order to remove redundant constants of integration. Applying this method to the case at hand, one obtains equations for f'_X and f'_Y. Solving these equations, one finds that either M = M̃, or that f'_X = f'_Y = f̃'_X = f̃'_Y = 0. In the second case (where M̃ has one arrow) the equations show that either M = M̃, or the model parameters should satisfy equations (5) and (6).
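The elimination step in this solution method can be checked mechanically. In the following sketch (our own illustration, not from the paper), sympy confirms that dividing a two-term instance of (12) by Φ₁ and differentiating with respect to x removes the Ψ₁ term:

```python
import sympy as sp

x, y = sp.symbols('x y')
Phi1 = sp.Function('Phi1')(x)   # functionals depending on x only
Phi2 = sp.Function('Phi2')(x)
Psi1 = sp.Function('Psi1')(y)   # functionals depending on y only
Psi2 = sp.Function('Psi2')(y)

# A two-term instance of equation (12): Phi1(x) Psi1(y) + Phi2(x) Psi2(y) = 0.
F = Phi1 * Psi1 + Phi2 * Psi2

# Divide by Phi1 and differentiate w.r.t. x: the Psi1 term drops out, leaving
# the single-term equation Psi2(y) * d/dx (Phi2/Phi1) = 0.
reduced = sp.simplify(sp.diff(F / Phi1, x))
print(reduced)   # Psi2(y) * Derivative(Phi2(x)/Phi1(x), x), up to rewriting
```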
4 Learning additive noise models from observational data

In this section, we propose a method to learn an additive noise model from a finite data set D := {x^{(n)}}_{n=1}^N. We will only describe the bivariate case in detail, although the method can be extended to more than two variables in a straightforward way. We first consider how we can learn the causal mechanisms {f_i}_{i ∈ V} for a fixed causal structure. This can be done efficiently by a MAP estimate with respect to (the parameters of) the causal mechanisms. Using (4), the MAP problem can be written as:

\arg\max_{\hat f} \; p(\hat f) \prod_{n=1}^{N} \left| I - \nabla \hat f\!\left(x^{(n)}\right) \right| \prod_{i \in V} p_{E_i}\!\left( x_i^{(n)} - \hat f_i\!\left(x_{pa(i)}^{(n)}\right) \right)   (13)

where p(f̂) specifies the prior distribution of the causal mechanisms. Note the presence of the determinant; in the acyclic case, this term becomes 1, and the method reduces to standard regression. In the cyclic case, however, the determinant is necessary in order to penalize dependencies between the estimated noise variables. One can consider this as a special case of nonlinear independent component analysis, as the MAP estimate (13) can also be interpreted as the minimizer of the mutual information between the noise variables. If the estimated functions lead to noise estimates Ê_i = X_i − f̂_i(X_{pa(i)}) which are mutually independent according to some independence test, then we accept the model. One can try all possible causal graph structures and test which ones fit the data. The models that lead to independent estimated noise values are possible causal explanations of the data. If multiple models with different causal graphs lead to independent estimated noise values, we prefer models with fewer arrows in the graph.⁴ If the number of data points is large enough, Theorem 1 suggests that for two variables with Gaussian noise, in the generic case, a unique causal structure will be identified in this way. For more than two variables, and for other noise distributions, the method can still be applied, but we do not know whether (in general and asymptotically) there will be a unique causal structure that explains the data.

We now work out the bivariate Gaussian case in more detail. The prior for the functions f̂ can be chosen arbitrarily, for example using some parametric approach. Here, we will use a nonparametric approach using Gaussian processes. The negative log-likelihood L := −ln p(D | f̂_X, f̂_Y) can be written in terms of the observational data D := {(x^{(i)}, y^{(i)})}_{i=1}^N as:

L = -\sum_{i=1}^{N} \nu_{E_Y}\!\left(y^{(i)} - \hat f_Y(x^{(i)})\right) - \sum_{i=1}^{N} \nu_{E_X}\!\left(x^{(i)} - \hat f_X(y^{(i)})\right) - \sum_{i=1}^{N} \log\left|1 - \hat f'_Y(x^{(i)}) \hat f'_X(y^{(i)})\right|.

Assuming Gaussian noise E_X ~ N(0, σ_X²), E_Y ~ N(0, σ_Y²) and using Gaussian Process priors for the causal mechanisms f_X and f_Y, i.e., taking x̄ := f_X(y) ~ N(0, K_X(y)) and ȳ := f_Y(x) ~ N(0, K_Y(x)), where K_X is the Gram matrix with entries K_{X;ij} = k_X(y^{(i)}, y^{(j)}) for some covariance function k_X : R × R → R, and similarly for K_Y, we obtain:

\min_{\bar x, \bar y} L = N \log \sigma_X + N \log \sigma_Y + \tfrac{1}{2} \log |K_X| + \tfrac{1}{2} \log |K_Y| + \min_{\bar x, \bar y} \Bigg( \frac{1}{2\sigma_Y^2} \|y - \bar y\|^2 + \frac{1}{2\sigma_X^2} \|x - \bar x\|^2 + \tfrac{1}{2} \bar x^T K_X^{-1} \bar x + \tfrac{1}{2} \bar y^T K_Y^{-1} \bar y - \sum_{i=1}^{N} \log\left(1 - \frac{\partial k_X}{\partial y}(y^{(i)}, y)\, K_X^{-1} \bar x \cdot \frac{\partial k_Y}{\partial x}(x^{(i)}, x)\, K_Y^{-1} \bar y \right) \Bigg),

where we used the expected derivatives of the Gaussian Processes for approximating the determinant term. In our experiments, we used Gaussian covariance kernels

k_X(y, y') = \kappa_X^2 \exp\left(-\frac{(y - y')^2}{2\lambda_X^2}\right) + \epsilon \, \delta_{y,y'},

and likewise for k_Y. Note that we added a small constant (ε = 10^{-4}) to the diagonal to allow for small, independent measurement errors or rounding errors (which occur because the Gram matrices are very ill-conditioned). The optimization problem can be solved numerically, e.g., using standard methods such as conjugate gradient or L-BFGS. We optimize simultaneously with respect to the noise values x̄, ȳ and the hyperparameters log σ_X, log κ_X, log λ_X, log σ_Y, log κ_Y, log λ_Y.

⁴ Note that if a certain model leads to independent noise terms, then adding more arrows will still allow independent noise terms, by setting some functions to 0; see also Figure 1 below.
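A stripped-down version of this estimator for a simple parametric family makes the role of the determinant term tangible. The sketch below is our own illustration, not the paper's GP implementation: the tanh parameterization f_X(y) = a tanh(y), f_Y(x) = b tanh(x), the use of scipy, and all names are assumptions. The −log|1 − f'_X f'_Y| term is exactly what distinguishes the cyclic loss from two independent regressions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x, y):
    """Negative log-likelihood of a bivariate cyclic additive noise model
    X = a*tanh(Y) + E_X, Y = b*tanh(X) + E_Y with Gaussian noise; cf. (13)."""
    a, b, log_sx, log_sy = theta
    sx, sy = np.exp(log_sx), np.exp(log_sy)
    n = len(x)
    rx = x - a * np.tanh(y)                    # estimated noise E_X
    ry = y - b * np.tanh(x)                    # estimated noise E_Y
    jac = 1.0 - (a / np.cosh(y) ** 2) * (b / np.cosh(x) ** 2)  # det(I - grad f)
    return (0.5 * np.sum(rx ** 2) / sx ** 2 + n * log_sx
            + 0.5 * np.sum(ry ** 2) / sy ** 2 + n * log_sy
            - np.sum(np.log(np.abs(jac))))     # the extra cyclic term

# Synthetic data: fix the noise, iterate the fixed-point equations to equilibrium.
rng = np.random.default_rng(1)
a_true, b_true = 0.9, -0.5                     # sup|f_X' f_Y'| = 0.45 < 1
e_x = 0.5 * rng.standard_normal(2000)
e_y = 1.0 * rng.standard_normal(2000)
x = np.zeros(2000); y = np.zeros(2000)
for _ in range(200):
    x, y = a_true * np.tanh(y) + e_x, b_true * np.tanh(x) + e_y

res = minimize(neg_log_lik, x0=np.array([0.1, 0.1, 0.0, 0.0]),
               args=(x, y), method="Nelder-Mead")
print(res.x[:2])   # roughly (0.9, -0.5) if the optimizer finds the right minimum
```

As the experiments below illustrate, the loss surface can have several local minima, so the recovered parameters should always be followed by the independence check on the estimated noises.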
5 Experiments

We illustrate the method on several synthetic data sets in Figure 1. Each row shows a data set with N = 500 data points. Because of space constraints, we only show the learned cyclic additive noise models, omitting the acyclic ones. In each case, we calculated the p-value for independence of the two noise variables using the HSIC (Hilbert-Schmidt Independence Criterion) test [15]; for p-values substantially above 0 (say larger than 1%), we do not reject the null hypothesis of independence and hence accept the model as a possible causal explanation of the data. This happens in four out of six cases, the exceptions being the cases displayed in rows 1b and 3b, which are rejected. Rows 1a and 1b concern the same data, generated from a nonlinear and acyclic model. We found two different local minima, one of which is accepted (the one more closely resembling the true model), and one is rejected. Even though we learned a causal model with cyclic structure, in the accepted solution, one of the learned causal mechanisms becomes (almost) constant. Rows 3a and 3b show again two different solutions for the same data, now generated from a nonlinear cyclic model. Note that the solution in row 3b could be preferred over that in row 3a based upon its likelihood, but is actually rejected because its estimated noises are highly dependent. Row 4 shows data from a linear, cyclic model, where the ratio of the noise sources equals the ratio of the slopes of the causal mechanisms. This makes this linear model part of the special class of unidentifiable additive noise models. In this case, the MAP estimates for the causal mechanisms are quite different from the true ones.

6 Discussion and Conclusion

We have studied a particular class of cyclic causal models given by nonlinear SEMs with additive noise. We have discussed how these models can be interpreted to describe the equilibrium distribution of a dynamic system with noise that is constant in time. We have looked in detail at the bivariate Gaussian-noise case and shown generic identifiability of the causal graph. We have also proposed a method to learn such models from observational data and illustrated it on synthetic data. Even though we have shown that in this "laboratory setting", the method can be made to work on purely observational data when enough data is available, it includes several assumptions that make it challenging to apply in real-world scenarios. Also, from our experiments, it appears that the method often finds other solutions (local minima of the log likelihood) which differ from the expected true data generating model but which have dependent estimated noises. Thus there is ample opportunity for future work: for example, improving the robustness of the learning method, and generalizing the results to many variables and non-Gaussian noise.
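The acceptance step in these experiments relies on an independence measure between the estimated noises. A minimal biased-estimator HSIC sketch (our own illustration; the Gaussian kernels and median-heuristic bandwidth are assumed choices, whereas the paper uses the test of [15]) looks as follows:

```python
import numpy as np

def _gram(z):
    """Gaussian-kernel Gram matrix with median-heuristic bandwidth."""
    d2 = (z[:, None] - z[None, :]) ** 2
    med = np.median(d2[d2 > 0])
    return np.exp(-d2 / med)

def hsic(ex, ey):
    """Biased HSIC statistic (1/n^2) tr(K H L H); larger means more dependent."""
    n = len(ex)
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    K, L = _gram(ex), _gram(ey)
    return np.trace(K @ H @ L @ H) / n ** 2

# Dependent residuals score much higher than independent ones.
rng = np.random.default_rng(0)
e1, e2 = rng.standard_normal(300), rng.standard_normal(300)
print(hsic(e1, e2))            # small: independent
print(hsic(e1, e1 ** 2 + e2))  # larger: dependent
```

Turning the statistic into a p-value additionally requires the null distribution (e.g., via permutation of one of the noise vectors), which is omitted here for brevity.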
[Figure 1 panels omitted: six rows (1a, 1b, 2, 3a, 3b, 4), each showing, from left to right, the observed data, the fitted mechanisms x → f_Y(x) and y → f_X(y), the estimated noise (e_X, e_Y), and the reconstructed data.]

Figure 1: From left to right: observed data pairs (x, y), true (blue) and estimated (red) functions f_Y and f_X, respectively, estimated noise values (e_X, e_Y) and reconstructed data (x, y) based on the estimated noise. Rows 1a and 1b show two different solutions (minima of the log likelihood) for the same data, as do rows 3a and 3b. The true models used to generate the data, the p-values for independence of the estimated residuals, and the negative log-likelihoods are, from top to bottom:

#    Identifiable?  Linear?  Cyclic?  f_Y(x)         f_X(y)         sigma_X  sigma_Y  p(E_X indep. E_Y)  L
1a   +              -        -        0.9 tanh(2x)   0              1        0.5      0.76               -2.56 x 10^3
1b   +              -        -        0.9 tanh(2x)   0              1        0.5      7 x 10^-3          -2.51 x 10^3
2    +              -        -        0              0.9 tanh(2y)   0.5      1        0.74               -2.57 x 10^3
3a   +              -        +        0.9 cos(x)     0.9 tanh(y)    1        1        0.78               -2.24 x 10^3
3b   +              -        +        0.9 cos(x)     0.9 tanh(y)    1        1        3 x 10^-58         -2.26 x 10^3
4    -              +        +        -0.4x          0.8y           0.5      1        0.61               -2.73 x 10^3

Acknowledgments

We thank Stefan Maubach and Wieb Bosma for their help with the computer algebra. DJ was supported by DFG, the German Research Foundation (SPP 1395). TH and JM were supported by NWO, the Netherlands Organization for Scientific Research (VICI grant 639.023.604 and VENI grant 639.031.036, respectively).

References

[1] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[2] K. A. Bollen. Structural Equations with Latent Variables. John Wiley & Sons, 1989.
[3] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. Springer-Verlag, 1993. (2nd ed., MIT Press, 2000).
[4] C. W. J. Granger. Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37:424-438, 1969.
[5] N. Friedman, K. Murphy, and S. Russell. Learning the structure of dynamic probabilistic networks. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI-98), pages 139-147, 1998.
[6] P. Spirtes. Directed cyclic graphical representations of feedback models. In Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence (UAI-95), pages 491-499, 1995.
[7] T. Richardson. A discovery algorithm for directed cyclic graphs. In Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI-1996), 1996.
[8] G. Lacerda, P. Spirtes, J. Ramsey, and P. O. Hoyer. Discovering cyclic causal models by independent components analysis. In Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence (UAI-2008), 2008.
[9] M. Schmidt and K. Murphy. Modeling discrete interventional data using directed cyclic graphical models. In Proceedings of the 25th Annual Conference on Uncertainty in Artificial Intelligence (UAI-09), 2009.
[10] S. Itani, M. Ohannessian, K. Sachs, G. P. Nolan, and M. A. Dahleh. Structure learning in causal cyclic networks. In JMLR Workshop and Conference Proceedings, volume 6, pages 165-176, 2010.
[11] P. O. Hoyer, D. Janzing, J. M. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise models. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21 (NIPS 2008), pages 689-696, 2009.
[12] Jonas Peters, Joris M. Mooij, Dominik Janzing, and Bernhard Schölkopf. Identifiability of causal graphs using functional models. In Proceedings of the 27th Annual Conference on Uncertainty in Artificial Intelligence (UAI-11), 2011.
[13] K. Zhang and A. Hyvärinen. On the identifiability of the post-nonlinear causal model. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI-09), Montreal, Canada, 2009.
[14] A. D. Polyanin and V. F. Zaitsev. Handbook of Nonlinear Partial Differential Equations. Chapman & Hall / CRC, 2004.
[15] A. Gretton, R. Herbrich, A. Smola, O. Bousquet, and B. Schölkopf. Kernel methods for measuring independence. Journal of Machine Learning Research, 6:2075-2129, 2005.
Nearest Neighbor based Greedy Coordinate Descent

Inderjit S. Dhillon (Department of Computer Science, University of Texas at Austin, [email protected]), Pradeep Ravikumar (Department of Computer Science, University of Texas at Austin, [email protected]), Ambuj Tewari (Department of Computer Science, University of Texas at Austin, [email protected])

Abstract

Increasingly, optimization problems in machine learning, especially those arising from high-dimensional statistical estimation, have a large number of variables. Modern statistical estimators developed over the past decade have statistical or sample complexity that depends only weakly on the number of parameters when there is some structure to the problem, such as sparsity. A central question is whether similar advances can be made in their computational complexity as well. In this paper, we propose strategies that indicate that such advances can indeed be made. In particular, we investigate the greedy coordinate descent algorithm, and note that performing the greedy step efficiently weakens the costly dependence on the problem size, provided the solution is sparse. We then propose a suite of methods that perform these greedy steps efficiently by a reduction to nearest neighbor search. We also devise a more amenable form of greedy descent for composite non-smooth objectives, as well as several approximate variants of such greedy descent. We develop a practical implementation of our algorithm that combines greedy coordinate descent with locality sensitive hashing. Without tuning the latter data structure, we are not only able to significantly speed up the vanilla greedy method, but also outperform cyclic descent when the problem size becomes large. Our results indicate the effectiveness of our nearest neighbor strategies, and also point to many open questions regarding the development of computational geometric techniques tailored towards first-order optimization methods.

1 Introduction

Increasingly, optimization problems in machine learning are very high-dimensional, where the number of variables is very large. This has led to a renewed interest in iterative algorithms that require bounded time per iteration. Such iterative methods take various forms such as so-called row-action methods [6], which enforce constraints in the optimization problem sequentially, or first-order methods [4], which only compute the gradient or a coordinate of the gradient per step. But the overall time complexity of these methods still has a high polynomial dependence on the number of parameters. Modern statistical estimators developed over the past decade have statistical or sample complexity that depends only weakly on the number of parameters [5, 15, 18]. Can similar advances be made in their computational complexity? Towards this, we investigate one of the simplest classes of first-order methods: coordinate descent, which only updates a single coordinate of the iterate at every step. The coordinate descent class of algorithms has seen a renewed interest after recent papers [8, 10, 19] have shown considerable empirical success in application to large problems. Saha and Tewari [13] even show that under certain conditions, the convergence rate of cyclic coordinate descent is at least as fast as that of gradient descent. In this paper, we focus on high-dimensional optimization problems where the solution is sparse.
We were motivated to investigate coordinate descent algorithms by the intuition that they could leverage the sparsity structure of the solution by judiciously choosing the coordinate to be updated. In particular, we show that a greedy selection of the coordinates succeeds in weakening the costly dependence on problem size, with the caveat that we must be able to perform the greedy step efficiently. The naive greedy updates would however take time that scales at least linearly in the problem dimension, O(p), since one has to compute the coordinate with the maximum gradient. We thus come to the other main question of this paper: Can the greedy steps in a greedy coordinate scheme be performed efficiently? Surprisingly, we are able to answer in the affirmative, and we show this by a reduction to nearest neighbor search. This allows us to leverage the significant amount of recent research on sublinear methods for nearest neighbor search, provided it suffices to have approximate nearest neighbors. The upshot of our results is a suite of methods that depend weakly on the problem size or number of parameters. We also investigate several notions of approximate greedy coordinate descent for which we are able to derive similar rates. For the composite objective case, where the objective is the sum of a smooth component and a separable non-smooth component, we propose and analyze a "look-ahead" variant of greedy coordinate descent. The development in this paper thus raises a new line of research on connections between computational geometry and first-order optimization methods. For instance, given our results, it would be of interest to develop approximate nearest neighbor methods tuned to greedy coordinate descent. As an instance of such a connection, we show that if the covariates underlying the optimization objective satisfy a mutual incoherence condition, then a very simple nearest neighbor data structure suffices to yield a good approximation. Finally, we provide simulations that not only show that greedy coordinate descent with approximate nearest neighbor search performs overwhelmingly better than vanilla greedy coordinate descent, but also that it starts outperforming cyclic descent when the problem size increases: the larger the number of variables, the greater the relative improvement in performance. The results of this paper naturally lead to several open problems: can effective computational geometric data structures be tailored towards greedy coordinate descent? Can these be adapted to (a) other first-order methods, perhaps based on sampling, and (b) different regularized variants that uncover structured sparsity? We hope this paper fosters further research and cross-fertilization of ideas in computational geometry and optimization.

2 Setup and Notation

We start our treatment with differentiable objective functions, and then extend this to encompass non-differentiable functions which arise as the sum of a smooth component and a separable non-smooth component. Let C : R^p → R be a convex differentiable function. We do not assume that the function is strongly convex: indeed, most optimizations arising out of high-dimensional machine learning problems are convex but typically not strongly so. Our analysis requires that the function satisfy the following coordinate-wise Lipschitz condition:

Assumption A1. The loss function C satisfies \|\nabla C(w) - \nabla C(v)\|_\infty \le \kappa_1 \|w - v\|_1, for some κ₁ > 0.

We note that this condition is weaker than the standard Lipschitz conditions on the gradients.
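The gap between this coordinate-wise constant κ₁ and the usual gradient Lipschitz constant κ₂ (defined next) can be as large as a factor of p. A quick numerical check of the all-ones example discussed below (our own sketch) makes this concrete:

```python
import numpy as np

p = 1000
A = np.ones((p, p))                 # C(w) = 0.5 * w^T A w with A the all-1s matrix
kappa1 = A.diagonal().max()         # coordinate-wise constant: max_i A_ii = 1
kappa2 = np.linalg.eigvalsh(A)[-1]  # gradient Lipschitz constant: lambda_max(A) = p
print(kappa1, kappa2)               # 1.0 vs 1000.0: kappa2 = p * kappa1
```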
In particular, we say that C has κ₂-Lipschitz continuous gradient w.r.t. ‖·‖₂ when \|\nabla C(w) - \nabla C(v)\|_2 \le \kappa_2 \|w - v\|_2. Note that κ₁ ≤ κ₂; indeed, κ₁ could be up to p times smaller than κ₂. E.g., when C(w) = (1/2) wᵀAw with a positive semi-definite matrix A, we have κ₁ = max_i A_{ii}, the maximum entry on the diagonal, while κ₂ = max_i λ_i(A), the maximum eigenvalue of A. It is thus possible for κ₂ to be much larger than κ₁: for instance, κ₂ = p κ₁ when A is the all-1s matrix. We are interested in the general optimization problem

\min_{w \in R^p} C(w).   (1)

We will focus on the case where the solution is bounded and sparse. We thus assume:

Assumption A2. The solution w* of (1) satisfies ‖w*‖_∞ ≤ B for some constant B < ∞ independent of p, and ‖w*‖₀ = s, i.e., the solution is s-sparse.

2.1 Coordinate Descent

Coordinate descent solves (1) iteratively by optimizing over a single coordinate while holding others fixed. Typically, the choice of the coordinate to be updated is cyclic. One caveat with this scheme,
For instance, consider linear regression, with response y = (w, x) + E, where E ~ N(O, 1). Then given observations {(xi, yi)}:'~I' the maximum likelihood problem has the form of (2), with i(u, v) = (u - v)'. LetJing g( u, v) = V ui( u, v) denote the gradient of the sample loss with respect to its first argument, and ri(w) = g(wT Xi, yi), the gradient of the objective (2) may be written as VjC(w) = L~~I x~ r'(w) = (Xj, r(w)) . It then follows that the greedy coordinate descent step in Algorithm 1 reduces to the following simple problem: maxi , (xj,r(w)) I? (3) We can now see why the greedy step (3) cao be performed efficiently: it cao be cast as a nearness problem. Iodeed, assume that the data is standardized so that IIxj I = 1 for j = 1, ... ,po Let x = {XI, ... , xp, -xp} include the negated data vectors. Then, it cao be seen that -X" ... , argmax I (Xj, r) I == arg ,Ej'pj min IIxj ,E[Pj rll~? (4) Thus, the greedy step amounts to a nearest neighbor problem of computing the nearest point to r in the set {Xj} ~~I' While this would take O(pn) time via brute force, the hope is to leverage the state of 3 the art in nearest neighbor search [II] to perform this greedy selection in sublinear time. Regarding the time taken to compute the gradient r(w), note that after any coordinate descent update, we can update r' in 0(1) time if we cache the values {(w, x')}, so that r can be updated in O(n) time. The reduction to nearest neighbor search however comes with a caveat: nearest neighbor search variants that run in sublinear time only compute approximate nearest neighbors. This in turn aroounts to performing the greedy step approximately. In the next few subsections, we investigate the consequences of such approximations. 3.1 Multiplicative Greedy We first consider a variant where the greedy step is performed under a mnltiplicative approximation, where we choose a coordinate it such that, for some c E (0,1], IIV.c(w')];, I 2: c?IIV.c(w')lloo. (5) As the following lemma shows, the approximate greedy steps have little qualitative effect (proof in Supplementary Material). Lemma 2. The greedy coordinate descent iterates, with the greedy step computed as in (5), satisfy: ~ . ""lwO; w*ll~ . .c(w') _ .c(w*) :0; The price for the approximate greedy updates is thus just a constant factor 1/c 2: I reduction in the convergence rate. Note that the equivalence of (4) need not hold under multiplicative approximations. That is, approximate nearest neighbor techuiques that obtain a nearest neighbor upto a multiplicative factor, do not guarantee a mnltiplicative approximation for the inner product in the greedy step in turn. As the next lemma shows this still achieves the required qualitative rate. Lemma 3. Suppose the greedy step is performed as in (5) with a multiplicative approximation factor of (I + ,=) (due to approximate nearest neighbor search for instance). Then, at any iteration t, the greedy coordinate descent iterates satisfy either of the following two conditions, for any' > 0: (a) V.c(w') is small (i.e. the iterate is near-stationary): (b) .c(w') - .c(w*) < - 1+'00 EIIII(l/f)+l IIV.c(w')lloo :0; C::::<:) Ilr(w')1I2' or . ~,lIwo_w'll: t 3.2 Additive Greedy Another natural variant is the following additive approximate greedy coordioate descent, where we choose the coordinate i, such that (6) for some 'odd. As the lemma below shows, the approximate greedy steps have little qualitative effect Lemma 4. 
The greedy coordinate descent iterates, with the greedy step computed as in (6), satisfy: .c(w') - .c(w*) :0; ""lwO; w*ll~ + 'odd. Note that we need obtain an additive approximation in the greedy step only upto the order of the final precision desired of the optimization problem. In particular, for statistical estimation problems the desired optimization accuracy need not be lower than the statisical precision, which is typically of the order of slog(P) /..;n. Indeed, given the conoections elucidated above to greedy coordinate descent, it is an interesting futore problem to develop approximate nearest neighbor methods with additive approximations. 4 Tailored Nearest Neighbor Data Structures In this section, we show that one could develop approximate nearest neighbor methods tailored to the statistical estimation setting. 4 4.1 Qnadtree nnder Mntnallncoherence We will show that just a vanilla quadtree yields a good approximation when the covariates satisfY a technical statistical condition of mutual coherence. A quadtree is a tree data structure which partitions the space. Each internal node u in the quadtree has a representative point, denoted by rep(u), and a list of children nodes, denoted by children(u), which partition the space under u. For further details, we refer to Har-Peled [II]. The spread <li(D) of the set of points D is defined as <li(D) = m~;~; 1IIIx;-x'llll, and is the mtio between the diameter of D and the closest pair distance of mIDi~J Xi :1:; points in D. Following Har-Peled [II], we can show that the depth of the quadtree by the standard construction is bounded by O(log <li( D) + log n) and can be constructed in time O(p log( n<li (D))). Here, we show that a standard nearest neighbor algorithm using quadtrees Har-Peled [II], Arya and Mount [2], rewritten below to allow for arbitrary approximation factor (1 + <=), suffices under appropriate statistical conditions. Input: quadtree T, approx. factor (1 + <nn), query r. Initialize: ; = 0; Ao = {root(T)}. while Ai oF {} do for each node v E Ai do Uonn = nu(T, { ..... } u rep( children(v))). for each node u E children(v) do if Ilr - rep(u)II - diam(u) < Ilr - u onn ll/(1 + <=), then AHl = Am u {u}. end for end for i+-;+1 end while Return U ann . Lemma S. Let (1 + <nn) be the approximation factor for the approximate nearest neighbor search. Let nn(T) be the true nearest neighbor to r. Then the output ..... of Algorithm 4.1 satisfies Ilr - ..... 112 =:; (1 + <=)IIT - nn(r) 112. Proof Let u be the last node in the quadtree containing nn(T) thrown away by the algorithm. Then, Ilr - nn(T)11 2: liT - rep(u)II-llrep(u) - nn(T) II 2: liT - rep(u)ll- diam(u) 2: IIr 1 ~::11, D whence the statement in the theorem follows. The next lemma shows the time taken by the algorithm. Again, we rewrite the analysis ofHar-Peled [II], Arya and Mount [2] to allow for arbitrary approximation factors. Lemma 6. The time taken by algorithm 4.1 to compute a (1 D = {Xl,'" ,Xp} is 0 (IOg(<li(D)) + <nn)-nearest neighbor to T from + (1 + ,;"f). As the next lemma shows, the spread is controlled when the mutual coherence of the covariates is small. In particular, define f.'(D) = ma.x;"j (Xi, Xj). We require that the mutual coherence f.'(D) be small and in particular be bounded away from I. Such a condition is typically imposed as sufficient condition for sparse parameter recovery [5, 15]. Intriguingly, this very condition allows us to provide guarantees for optimization. 
This thus adds to the burgeoning set of recent papers that are finding that conditions imposed for strong statistical guarantees are useful in torn for obtaining faster mtes in the corresponding optimization problems. Under this condition, the closest pair distance can be bounded as, Ilxi - Xj 112 = 2 - 2 (Xi, Xj) 2: 2(1 - f.'), which in torn allows us to control the spread: <li(D) =:; ~ = Jl~~' which thus yields the corollary: Lemma 7. Suppose the mutual coherence of the covariates D = {Xl, ... ,xp} is bounded so that f.'(D) < 1. Then the time taken by algorithm 4.1 to compute a (1 + <nn)-nearest neighbor to r from is 0 (log (,~~) + (1+ ,;,,)} 5 While this data structure is quite useful in most settings, it requires that the mutual coherence of the covariates be bounded, and further the time required is exponential (but weakly so) in the number of samples. However, following [I, II], we can use random projections to bring the runtime down to O(P,,;;'), and the preprocessing time to O(np logpf.;;.2). 5 Overall TIme Complexity In the previous sections, we saw that the greedy step for generalized linear models is equivalent to nearest neighbor search: given any query r, we want to find its nearest neighbor among the p points D = {X" ... , x p } each in Standard data structures include quadtrees which spatially partition the data, and KD trees which partition the data according to their point mass. IRn. Approximate nearest neighbor search [11] estimates an approximate nearest neighbor, upto a multiplicative approximation say f=: so that if the nearest neighbor to r is x; and the algorithm outputs Xk, then it guarantees that Ilx> - rl12 ~ (1 + f=)llx; - rll. Any such nearest neighbor algorithm, given a query r, incurs time depends on the number of points p (typically sublinearly), their dimension n, and the approxinlation factor (1 + f=). Let us denote this cost by C,(n,p, f=). From our analysis of multiplicative approxinlate greedy (see Lemma 3), given a multiplicative approximation factor (1 + f=) in the approximate nearest neighbor method, the approximate greedy coordinate descent has the convergence rate: K . 1\:1,,,3 for some constant K > O. Thus, the num- '- ber of iterations required to obtain a solution with accuracy fopt is given by, T greedy = ~:~. Since each of these greedy steps have cost C,(n,p, f=), the overall cost CG is given as: CG = C, (n, p, f=) . !,"::~. Of course these approxinlate nearest neighbor methods also require some pre-processing time C _ (p, n, f=), but this can typically be amortized across multiple runs of the optimization problem with the same covariates (for a regularization path for instance). It could also be reused across different models, and for other forms of data analysis. Examples include: (a). Locality Sensitive Hashing [12] uses random shifting windows and random projections to hash the data points such that distant points do not collide with high probability. Let p = 1/(1 + f=) < 1. Then here, C_(p,n,f=) = 0 (np1+p.,;;,2) while C,(n,p,f=) = O(npp). Thus, for sparse solutions B = o(y'P), the runtime cost scales as CG = 0 (npp.,;;,'f;;pi). (b). Allon and Chazelle [1] use multiple lookup tables after random projections to obtain a nearest neighbor data structore with costs and C_(p,n,f=) = O(P,,;;'), and C,(p,n,f=) = O(nlogn + .,;;,3 log" p). Thus the runtime cost here scales as CG = 0 (nlogn.:::-;;; 10" p) . (0). 
5 Overall Time Complexity

In the previous sections, we saw that the greedy step for generalized linear models is equivalent to nearest neighbor search: given any query r, we want to find its nearest neighbor among the p points D = {x_1, ..., x_p}, each in R^n. Standard data structures include quadtrees, which spatially partition the data, and KD-trees, which partition the data according to their point mass. Approximate nearest neighbor search [11] estimates an approximate nearest neighbor up to a multiplicative approximation, say ε_nn: if the nearest neighbor to r is x_{j*} and the algorithm outputs x_k, then it guarantees that ‖x_k − r‖₂ ≤ (1 + ε_nn) ‖x_{j*} − r‖₂. Any such nearest neighbor algorithm, given a query r, incurs time that depends on the number of points p (typically sublinearly), their dimension n, and the approximation factor (1 + ε_nn). Let us denote this cost by C_q(n, p, ε_nn). From our analysis of multiplicative approximate greedy (see Lemma 3), given a multiplicative approximation factor (1 + ε_nn) in the approximate nearest neighbor method, the approximate greedy coordinate descent has the convergence rate K κ₁ s² B² / t for some constant K > 0. Thus, the number of iterations required to obtain a solution with accuracy ε_opt is given by T_greedy = K κ₁ s² B² / ε_opt. Since each of these greedy steps has cost C_q(n, p, ε_nn), the overall cost C_G is given as C_G = C_q(n, p, ε_nn) · K κ₁ s² B² / ε_opt. Of course, these approximate nearest neighbor methods also require some pre-processing time C_pre(p, n, ε_nn), but this can typically be amortized across multiple runs of the optimization problem with the same covariates (for a regularization path, for instance). It could also be reused across different models, and for other forms of data analysis. Examples include:

(a) Locality Sensitive Hashing [12] uses random shifting windows and random projections to hash the data points such that distant points do not collide with high probability. Let ρ = 1/(1 + ε_nn) < 1. Then here, C_pre(p, n, ε_nn) = O(n p^{1+ρ} ε_nn^{-2}) while C_q(n, p, ε_nn) = O(n p^ρ). Thus, for sparse solutions B = o(√p), the runtime cost scales as C_G = O(n p^ρ · κ₁ s² B² / ε_opt).

(b) Ailon and Chazelle [1] use multiple lookup tables after random projections to obtain a nearest neighbor data structure with pre-processing cost C_pre(p, n, ε_nn) = p^{O(ε_nn^{-2})} and query cost C_q(p, n, ε_nn) = O(n log n + ε_nn^{-3} log² p). Thus the runtime cost here scales as C_G = O((n log n + ε_nn^{-3} log² p) · κ₁ s² B² / ε_opt).

(c) In Section 4, we showed that when the covariates are mutually incoherent, we can use a simple quadtree together with random Gaussian projections to obtain C_pre(p, n, ε_nn) = O(np log p · ε_nn^{-2}) and C_q(p, n, ε_nn) = p^{O(ε_nn^{-2})}. Thus the runtime cost here scales as C_G = O(p^{O(ε_nn^{-2})} · κ₁ s² B² / ε_opt).

6 Non-Smooth Objectives

Now we consider the more general composite objective case, where the objective is the sum of a differentiable and a separable non-differentiable function:

\min_{w \in R^p} C(w) + R(w),   (7)

where we assume C is convex and differentiable and satisfies the Lipschitz condition in Assumption A1, and R(w) = \sum_j R_j(w_j), where R_j : R → R could be non-differentiable. Again, we assume that Assumption A2 holds. The natural counterpart of the greedy algorithms in the previous sections would be to pick the coordinate with the maximum absolute value of the subgradient. However, we did not observe good performance for this variant, either theoretically or in simulations. Thus, we now study a look-ahead variant that picks the coordinate with the maximum absolute value of the step to the next one-dimensional iterate. Denote [∇C(wᵗ)]_j by g_j, and compute the next iterate

\bar w_j^{t+1} = \arg\min_{w} \; g_j (w - w_j^t) + \frac{\kappa_1}{2} (w - w_j^t)^2 + R_j(w).

Let ρ_j ∈ ∂R_j(\bar w_j^{t+1}) denote the subgradient at this next iterate, and let

\eta_j = \bar w_j^{t+1} - w_j^t.   (8)

Algorithm 2 A Greedy Coordinate Descent Algorithm for Composite Objectives
1: Initialize: w⁰ ← 0
2: for t = 1, 2, 3, ... do
3:   j_t ← argmax_{j ∈ [p]} |η_j| (with η_j as defined in (8))
4:   w^{t+1} ← w^t + η_{j_t} e_{j_t}
5: end for

The next lemma states that this variant performs qualitatively similarly to its smooth counterpart in Algorithm 1.

Lemma 8. The greedy coordinate descent iterates of Algorithm 2 satisfy:

C(wᵗ) + R(wᵗ) − C(w*) − R(w*) ≤ κ₁ ‖w⁰ − w*‖₁² / t.

The greedy step for composite objectives in Algorithm 2 at any iteration t entails solving the maximization problem max_j |η_j|, where η_j is as defined in (8). Let us focus on the case where the regularizer R is the ℓ₁ norm, so that R(w) = λ \sum_{j=1}^p |w_j| for some λ > 0. Using the notation from above, we thus have the following objective: \min_w \sum_{i=1}^n \ell(w^T x^i, y^i) + λ ‖w‖₁. Then η_j from (8) can be written in this case as

\eta_j = S_{\lambda/\kappa_1}\!\left( w_j^t - \langle x_j, r(w^t) \rangle / \kappa_1 \right) - w_j^t,

where S_τ(u) = sign(u) max{|u| − τ, 0} is the soft-thresholding function. So the greedy step reduces to maximizing |S_{λ/κ₁}(w_j^t − ⟨x_j, r(wᵗ)⟩/κ₁) − w_j^t| over j. The next lemma shows that by focusing the maximization on the inner products ⟨x_j, r(w)⟩ we lose at most λ/κ₁:

Lemma 9. |⟨x_j, r(wᵗ)⟩ / κ₁| − |η_j| ≤ λ/κ₁.

The lemma in turn implies that if j' ∈ argmax_{j ∈ [p]} |⟨x_j, r(wᵗ)⟩|, then

max_{j ∈ [p]} |η_j| ≤ |⟨x_{j'}, r(wᵗ)⟩| / κ₁ + λ/κ₁ ≤ |η_{j'}| + 2λ/κ₁.

The typical setting of λ for statistical estimation is at the level of the statistical precision of the problem (and indeed of the order of O(1/√n) even for low-dimensional problems). Thus, as in the previous section, we estimate the coordinate j that maximizes the inner product |⟨x_j, r(w)⟩|, which in turn can be approximated using approximate nearest neighbor search. So, even for composite objectives, we can reduce the greedy step to performing a nearest neighbor search. Note however that this can be performed sublinearly only at the cost of recovering an approximate nearest neighbor, which in turn entails that we would be performing each greedy step in coordinate descent approximately.
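A direct implementation of this composite greedy step is short. The sketch below is our own (the squared loss, unit-norm columns, and κ₁ = 1 normalization are assumptions); it runs the look-ahead rule of Algorithm 2 with a brute-force maximization:

```python
import numpy as np

def soft_threshold(u, tau):
    """S_tau(u) = sign(u) * max(|u| - tau, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def composite_greedy_step(X, w, r, lam, kappa1=1.0):
    """Look-ahead greedy step of Algorithm 2 for 0.5*||y - Xw||^2 + lam*||w||_1.
    Here r = y - Xw, so grad_j = -<x_j, r>; the step is
    eta_j = S_{lam/kappa1}(w_j - grad_j/kappa1) - w_j, pick j = argmax |eta_j|."""
    grad = -(X.T @ r)
    eta = soft_threshold(w - grad / kappa1, lam / kappa1) - w
    j = int(np.argmax(np.abs(eta)))
    return j, eta[j]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20)); X /= np.linalg.norm(X, axis=0)
y = X[:, 3] * 2.0 + 0.1 * rng.standard_normal(100)
w = np.zeros(20); r = y - X @ w
for _ in range(50):
    j, step = composite_greedy_step(X, w, r, lam=0.05)
    w[j] += step
    r -= step * X[:, j]                 # O(n) residual update
print(np.nonzero(w)[0])                 # sparse support, should include coordinate 3
```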
7 Experimental Results

We conducted speed trials in MATLAB comparing 3 algorithms: greedy (Algorithm 2), greedy.LSH (coordinate to update chosen by LSH) and cyclic, on ℓ₁-regularized problems Σ_{i=1}^n ℓ(wᵀx_i, y_i) + λ‖w‖₁, where ℓ(y, t) was either (y − t)²/2 (squared loss) or log(1 + exp(−ty)) (logistic loss), and we chose λ = 0.01. All these algorithms, after selecting a coordinate to update, minimize the function fully along that coordinate. For squared loss, this minimum can be obtained in closed form, while for logistic loss we performed 6 steps of the (1-dimensional) Newton method. The data was generated as follows: a matrix X ∈ R^{n×p} was chosen with i.i.d. standard normal entries, and each column was normalized to ℓ₂-norm 1. Then we set ȳ = Xw* for a k-sparse vector w* ∈ R^p (with nonzero entries placed randomly). The labels y_i were chosen to be either ȳ_i or sign(ȳ_i), depending on whether the squared or the logistic loss was being optimized. The rows of X became the instances x_i. Figure 1 shows the objective function value versus CPU time plots for the logistic loss with p = 10⁴, 10⁵, 10⁶. As p grows we keep k = 100 constant and scale n as ⌊4k log(p)⌋. In this case, greedy.LSH not only speeds up naive greedy significantly but also beats cyclic coordinate descent. In fact, cyclic appears to stall, especially for p = 10⁵, 10⁶. The reason for this is that cyclic, in the time allotted, was only able to complete 52%, 40% and 27% of a full sweep through the p coordinates for p = 10⁴, 10⁵ and 10⁶ respectively. Furthermore, cyclic had generated far less sparse final iterates than greedy.LSH in all 3 cases. Figure 2 shows the same plots but for squared loss. Here, since each coordinate minimization is closed form and thus very quick, greedy.LSH has a harder time competing with it. Greedy.LSH is still much faster than naive greedy and starts to beat cyclic at p = 10⁶. The trend of greedy.LSH catching up with cyclic as p grows is clearly demonstrated by these plots.

Figure 1: (best viewed in color) Objective vs. CPU time plots for logistic loss using p = 10⁴, 10⁵, 10⁶.

Figure 2: (best viewed in color) Objective vs. CPU time plots for squared loss using p = 10⁴, 10⁵, 10⁶.

In contrast with the logistic case, here cyclic was able to finish several full sweeps through the p coordinates, namely 13.4, 10.5 and 7.9 sweeps for p = 10⁴, 10⁵ and 10⁶ respectively. Even though cyclic attained lower objective values, it was at the expense of sparsity: cyclic's final iterates were usually 10 times denser than those of greedy.LSH. Figure 3 shows the plots of objective versus number of coordinate descent steps. We clearly see that cyclic is wasteful in terms of number of coordinate updates, and greedy achieves much greater descent in the objective per coordinate update. Moreover, greedy.LSH is much closer to greedy in its per-coordinate-update performance (to the extent that it is hard to tell them apart in some of these plots). This plot thus suggests the improvements possible with better nearest-neighbor implementations that perform the greedy step even faster than our non-optimized greedy.LSH implementation. Cyclic coordinate descent is one of the most competitive methods for large-scale ℓ₁-regularized problems [9]. We are able to outperform it for large problems using a homegrown implementation that was not optimized for performance.
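For concreteness, the synthetic data described above could be generated roughly as follows (a sketch under the stated setup; the paper does not specify the magnitudes of the nonzero entries of w*, so standard normal values are our assumption):

```python
import numpy as np

def make_sparse_regression(n, p, k, loss="squared", seed=0):
    """Gaussian design with unit-norm columns, a k-sparse w*, and labels
    y or sign(y) depending on whether the squared or logistic loss is used."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    X /= np.linalg.norm(X, axis=0)             # each column has l2-norm 1
    w_star = np.zeros(p)
    support = rng.choice(p, size=k, replace=False)
    w_star[support] = rng.standard_normal(k)   # nonzeros placed randomly
    y = X @ w_star
    if loss == "logistic":
        y = np.sign(y)                         # binary labels for logistic loss
    return X, y, w_star
```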
This provides strong reasons to believe that with a careful, well-tuned LSH implementation, and indeed with better data structures than LSH, nearest neighbor based greedy methods should be able to scale to problems beyond the reach of current methods.

Acknowledgments

We gratefully acknowledge the support of NSF under grants IIS-1018426 & CCF-0728879. ISD acknowledges support from the Moncrief Grand Challenge Award.

Figure 3: (best viewed in color) Objective vs. number of coordinate updates for squared loss using p = 10⁴, 10⁵, 10⁶.

References

[1] N. Ailon and B. Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In Proc. 38th STOC, pages 557-563, 2006.
[2] S. Arya and D. M. Mount. Approximate nearest neighbor queries in fixed dimensions. In Proc. 4th ACM-SIAM SODA, pages 271-280, 1993.
[3] S. Arya, T. Malamatos, and D. M. Mount. Space-time tradeoffs for approximate nearest neighbor searching. Journal of the ACM, 57(1), 2009.
[4] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[5] E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 2006.
[6] Y. Censor and S. A. Zenios. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press, 1997.
[7] I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math., 57(11):1413-1457, 2004.
[8] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. 2007.
[9] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1-22, 2010.
[10] A. Genkin, D. D. Lewis, and D. Madigan. Large-scale Bayesian logistic regression for text categorization. Technometrics, 49(3):291-304, 2007.
[11] S. Har-Peled. Lecture notes on geometric approximation algorithms. 2009. URL http://valis.cs.uiuc.edu/~sariel/teach/notes/aprx/lec/.
[12] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proc. 30th STOC, pages 604-613, 1998.
[13] A. Saha and A. Tewari. On the finite time convergence of cyclic coordinate descent methods. Preprint, 2010.
[14] S. Shalev-Shwartz and A. Tewari. Stochastic methods for ℓ₁-regularized loss minimization. In ICML, 2009.
[15] J. Tropp. Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Trans. Info. Theory, 52(3):1030-1051, March 2006.
[16] P. Tseng and S. Yun. A block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization. Journal of Optimization Theory and Applications, 140(3):513-535, 2009.
[17] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Math. Prog. B, 117:387-423, 2009.
[18] M. J. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using ℓ₁-constrained quadratic programming (Lasso). IEEE Transactions on Info. Theory, 55:2183-2202, 2009.
[19] T. T. Wu and K. Lange. Coordinate descent algorithms for lasso penalized regression. Annals of Applied Statistics, 2:224-244, 2008.
Structural equations and divisive normalization for energy-dependent component analysis

Jun-ichiro Hirayama
Dept. of Systems Science
Graduate School of Informatics
Kyoto University
611-0011 Uji, Kyoto, Japan

Aapo Hyvärinen
Dept. of Mathematics and Statistics
Dept. of Computer Science and HIIT
University of Helsinki
00560 Helsinki, Finland

Abstract

Components estimated by independent component analysis and related methods are typically not independent in real data. A very common form of nonlinear dependency between the components is correlation in their variances or energies. Here, we propose a principled probabilistic model of the energy-correlations between the latent variables. Our two-stage model includes a linear mixing of latent signals into the observed ones, as in ICA. The main new feature is a model of the energy-correlations based on the structural equation model (SEM), in particular a Linear Non-Gaussian SEM. The SEM is closely related to divisive normalization, which effectively reduces energy correlation. Our new two-stage model enables estimation of both the linear mixing and the interactions related to energy-correlations, without resorting to approximations of the likelihood function or other non-principled approaches. We demonstrate the applicability of our method with a synthetic dataset, natural images and brain signals.

1 Introduction

Statistical models of natural signals have provided a rich framework to describe how sensory neurons process and adapt to ecologically-valid stimuli [28, 12]. In early studies, independent component analysis (ICA) [2, 31, 13] and sparse coding [22] successfully showed that V1 simple-cell-like edge filters, or receptive fields, emerge as optimal inference on latent quantities under linear generative models trained on natural image patches. In the subsequent developments over the last decade, many studies (e.g. [10, 32, 11, 14, 23, 17]) have focused explicitly or implicitly on modeling a particular type of nonlinear dependency between the responses of the linear filters, namely correlations in their variances or energies. Some of them showed that models of energy-correlation could account for, e.g., response properties of V1 complex cells [10, 15], cortical topography [11, 23], and contrast gain control [26]. Interestingly, such energy correlations are also prominent in other kinds of data, including brain signals [33] and presumably even financial time series, which have strong heteroscedasticity. Thus, developing a general model for energy-correlations of linear latent variables is an important problem in the theory of machine learning, and such models are likely to have a wide domain of applicability. Here, we propose a new statistical model incorporating energy-correlations within the latent variables. Our two-stage model includes a linear mixing of latent signals into the observed ones, as in ICA, and a model of the energy-correlations based on the structural equation model (SEM) [3], in particular the Linear Non-Gaussian (LiNG) SEM [27, 18] developed recently. As a model of natural signals, an important feature of our model is its connection to "divisive normalization" (DN) [7, 4, 26], which effectively reduces the energy-correlations of linearly-transformed natural signals [32, 26, 29, 19, 21] and is now part of a well-accepted model of V1 single-cell responses [12]. We provide a new generative interpretation of DN based on the SEM, which is an important contribution of this work.
Also, from a machine learning perspective, causal analysis using SEMs has recently become very popular; our model could extend the applicability of LiNG-SEM to blindly mixed signals. As a two-stage extension of ICA, our model is also closely related to both the scale-mixture-based models, e.g. [11, 30, 14] (see also [32]), and the energy-based models, e.g. [23, 17]. An advantage of our new model is its tractability: it requires neither an approximation of the likelihood function nor non-canonical principles for modeling and estimation, as previous models do.

2 Structural equation model and divisive normalization

A structural equation model (SEM) [3] of a random vector y = (y_1, y_2, ..., y_d)ᵀ is formulated as simultaneous equations of random variables, such that

y_i = λ_i(y_i, y_{−i}, r_i),  i = 1, 2, ..., d,    (1)

or y = λ(y, r), where the function λ_i describes how each single variable y_i is related to the other variables y_{−i}, possibly including itself, and a corresponding stochastic disturbance or external input r_i which is independent of y. These equations, called structural equations, specify the distribution of y, as y is an implicit function (assuming the system is invertible) of the random vector r = (r_1, r_2, ..., r_d)ᵀ. If there exists a permutation y ↦ y′ such that each y_i′ depends only on the preceding ones {y_j′ | j < i}, the SEM is called recursive or acyclic, and is associated with a directed acyclic graph (DAG); the model is then a cascade of (possibly) nonlinear regressions of the y_i's on the preceding variables on the graph, and can also be seen as a Bayesian network. Otherwise, the SEM is called non-recursive or cyclic, and the structural equations cannot simply be decomposed into regressive models. In a standard interpretation, a cyclic SEM rather describes the distribution of equilibrium points of a dynamical system, y(t) = λ(y(t − 1), r) (t = 0, 1, ...), where every realized input r is held fixed until y(t) converges to y [24, 18]; some conditions are usually needed to make this interpretation valid.

2.1 Divisive normalization as a nonlinear SEM

Now, we briefly point out the connection of SEMs to DN, which strongly motivated us to explore the application of SEMs to natural signal statistics. Let s_1, s_2, ..., s_d be scalar-valued outputs of d linear filters applied to a multivariate input, collectively written as s = (s_1, s_2, ..., s_d)ᵀ. The linear filters may either be derived/designed from mathematical principles (e.g. wavelets) or be learned from data (e.g. by ICA). The outputs of linear filters often have the property that their energies φ(|s_i|) (i = 1, 2, ..., d) have non-negligible dependencies or correlations with each other, even when the outputs themselves are linearly uncorrelated. The nonlinear function φ is any appropriate measure of energy, typically the squaring function, i.e. φ(|s|) = s² [26, 12], though other choices are not excluded; we assume φ is continuously differentiable and strictly increasing over [0, ∞), with φ(0) = 0. Divisive normalization (DN) [26] is an effective nonlinear transformation for eliminating the energy-dependencies remaining in the filtered outputs. Although several variants have been proposed, a basic form can be formulated as follows: given the d outputs, their energies are normalized (divided) by a linear combination of the energies of the other signals, such that

z_i = φ(|s_i|) / (Σ_j h_ij φ(|s_j|) + h_i0),  i = 1, 2, ..., d,    (2)

where h_ij and h_i0 are real-valued parameters of the transform.
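The basic DN transform of Eq. (2) is simple to state in code. The following is a minimal sketch of our own (the function and argument names are ours), using the squaring energy measure mentioned above as a default:

```python
import numpy as np

def divisive_normalization(s, H, h0, phi=np.square):
    """DN transform of Eq. (2): z_i = phi(|s_i|) / (sum_j H_ij phi(|s_j|) + h0_i).

    s:  (d,) filter outputs; phi: energy measure (squaring by default);
    H:  (d, d) and h0: (d,) are the real-valued parameters of the transform.
    """
    energies = phi(np.abs(s))
    return energies / (H @ energies + h0)
```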
Now, it is straightforward to see that the following structural equations in the log-energy domain,

y_i := ln φ(|s_i|) = ln(Σ_j h_ij exp(y_j) + h_i0) + r_i,  i = 1, 2, ..., d,    (3)

correspond to Eq. (2), where z_i = exp(r_i) is another representation of the disturbance. The SEM will typically be cyclic, since the coefficients h_ij in Eq. (2) are seldom constrained to satisfy acyclicity; Eq. (3) thus implies a nonlinear dynamical system, and this can be interpreted as the data-generating process underlying DN. Interestingly, Eq. (3) also implies a linear system with multiplicative input, ỹ_i = (Σ_j h_ij ỹ_j + h_i0) z_i, in the energy domain, where ỹ_i := φ(|s_i|). The DN transform of Eq. (2) gives the optimal mapping under the SEM to infer the disturbances from the given s_i's; if the true disturbances are independent, it optimally reduces the energy-dependencies. This is consistent with the redundancy-reduction view of DN [29, 19]. Note also that the SEM above implies ỹ = (I − diag(z)H)^{-1} diag(h_0) z, with H = (h_ij) and h_0 = (h_i0), as shown in [20] in the context of DN¹. Although mathematically equivalent, such a complicated dependence [20] on the disturbance z does not provide an elegant model of the underlying data-generating process, compared to the relatively simple form of Eq. (3).

3 Energy-dependent ICA using a structural equation model

We now define a new generative model that captures the energy-dependencies of linear latent components using an SEM.

3.1 Scale-mixture model

Let s now be a random vector of d source signals underlying an observation x = (x_1, x_2, ..., x_d)ᵀ, which has the same dimensionality for simplicity. They follow a standard linear generative model:

x = As,    (4)

where A is a square mixing matrix. We assume E[x] = E[s] = 0 without loss of generality, by always subtracting the sample mean from every observation. Then, assuming A is invertible, each transposed row w_i of the demixing (filtering) matrix W = A^{-1} gives the optimal filter to recover s_i from x, and is constrained to have unit norm, ‖w_i‖₂² = 1, to fix the scaling ambiguity. To introduce energy-correlations into the sources, a classic approach is to use a scale-mixture representation of the sources, s_i = u_i σ_i, where u_i is a normalized signal with zero mean and constant variance, and σ_i is a positive factor, independent of u_i, that modulates the variance (energy) of s_i [32, 11, 30, 14, 16]. In vector notation, we write

s = u ⊙ σ,    (5)

where ⊙ denotes component-wise multiplication. Here, u and σ are mutually independent, and the u_i's are also independent of each other. Then E[s | σ] = 0 and E[ssᵀ | σ] = diag(σ₁², σ₂², ..., σ_d²) for any given σ, where the σ_i's may be dependent on each other and thus introduce energy-correlations. A drawback of this approach is that learning the model effectively from the likelihood usually requires some approximation to deal with the marginalization over u.

3.2 Linear Non-Gaussian SEM

Here, we simplify the above scale-mixture model by restricting u_i to be binary, i.e. u_i ∈ {−1, 1}, and uniformly distributed. Although this simplification reduces the flexibility of the source distribution, the resulting model is tractable: no approximation is needed for likelihood computation, as will be shown below. It also implies that u_i = sign(s_i) and σ_i = |s_i|, and hence the log-energy above now has a simple deterministic relation to σ_i, namely y_i = ln φ(σ_i), which can be inverted to σ_i = φ^{-1}(exp(y_i)).
We specifically assume the log-energies y_i follow the Linear Non-Gaussian (LiNG) [27, 18] SEM:

y_i = Σ_j h_ij y_j + h_i0 + r_i,  i = 1, 2, ..., d,    (6)

where the disturbances are zero-mean and in particular assumed to be non-Gaussian and independent of each other, which has been shown to greatly improve the identifiability of linear SEMs [27]; the interaction structure in Eq. (6) can be represented by a directed graph for which the matrix H = (h_ij) serves as the weighted adjacency matrix. In the energy domain, Eq. (6) is equivalent to ỹ_i = e^{h_i0} (Π_j ỹ_j^{h_ij}) z_i (i = 1, 2, ..., d), and interestingly, these SEMs further imply a novel form of DN transform, given by

z_i = φ(|s_i|) / (e^{h_i0} Π_j φ(|s_j|)^{h_ij}),  i = 1, 2, ..., d,    (7)

where the denominator is now multiplicative rather than additive. This provides an interesting alternative to the original DN. To recapitulate the new generative model proposed here: 1) the log-energies y are generated according to the SEM in Eq. (6); 2) the sources are generated according to Eq. (5) with σ_i = φ^{-1}(exp(y_i)) and random signs u_i; and 3) the observation x is obtained by linearly mixing the sources as in Eq. (4). In our model, the optimal mapping to infer z_i = exp(r_i) from x is the linear filtering W followed by the new DN transform, Eq. (7). On the other hand, it would also be possible to define an energy-dependent ICA by using the nonlinear SEM in Eq. (3) instead. The optimal inference would then be given by the divisive normalization in Eq. (2). However, estimation and other theoretical issues (e.g. identifiability) related to nonlinear SEMs, particularly with non-Gaussian disturbances, are quite involved and still under development, e.g. [8].

3.3 Identifiability issues

Both the theory and the algorithms related to LiNG coincide largely with those of ICA, since Eq. (6) with non-Gaussian r implies the generative model of ICA, y = Br + b_0, where B = (I − H)^{-1} and b_0 = Bh_0 with h_0 = (h_i0). Like ICA [13], Eq. (6) is not completely identifiable, due to the ambiguities related to scaling (with signs) and permutation [27, 18]. To fix the scaling, we set E[rrᵀ] = I here. The permutation ambiguity is more serious than in the case of ICA, because a row-permutation of H completely changes the structure of the corresponding directed graph; it is typically addressed by constraining the graph structure, as discussed next. Two classes of LiNG-SEM have been proposed, corresponding to different constraints on the graph structure. One is LiNGAM [27], which ensures full identifiability through the DAG constraint. The other is generally referred to as LiNG [18], which allows general cyclic graphs; the "LiNG discovery" algorithm in [18] dealt with the non-identifiability of cyclic SEMs by finding the multiple solutions that give the same distribution. Here we define two variants of our model. One is the acyclic model, using LiNGAM. In contrast to the original LiNGAM, our target is the (linear) latent variables, not the observed ones. The ordering of the latent variables is not meaningful, because the rows of the filter matrix W can be arbitrarily permuted. The acyclic constraint can thus be simplified to a lower-triangular constraint on H. The other is the symmetric model, which uses a special case of cyclic SEM, namely one with a symmetric constraint on H.

¹To be precise, [20] showed the invertibility of the entire mapping s ↦ z in the case of a "signed" DN transform that keeps the signs of z_i and s_i the same.
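The two-stage generative model just recapped can be sampled directly. The sketch below is ours: the paper only requires the disturbances to be non-Gaussian with E[rrᵀ] = I, so the Laplace choice here is one convenient assumption, and we take φ(|s|) = |s| so that σ = exp(y):

```python
import numpy as np

def sample_energy_dependent_ica(A, H, h0, n_samples, rng=None):
    """Sample x = As with log-energies y from the linear SEM (6).

    r: i.i.d. unit-variance Laplace disturbances (an assumed non-Gaussian
       choice); y solves y = Hy + h0 + r, i.e. y = (I - H)^{-1}(r + h0);
    s = u * sigma with uniform random signs u and sigma = exp(y).
    """
    rng = np.random.default_rng(rng)
    d = A.shape[0]
    r = rng.laplace(scale=1 / np.sqrt(2), size=(d, n_samples))  # unit variance
    y = np.linalg.solve(np.eye(d) - H, r + h0[:, None])
    u = rng.choice([-1.0, 1.0], size=(d, n_samples))            # random signs
    s = u * np.exp(y)                                           # sigma = e^y
    return A @ s
```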
The symmetric constraint is relatively new in the context of SEMs, although it is a well-known setting in the ICA literature (e.g. [5]). The SEM is then identifiable using only first- and second-order statistics, based on the relations h_0 = V E[y] and V := I − H = Cov[y]^{-1/2} [5], provided that V is positive definite². This implies that non-Gaussianity is not essential for identifiability here, in contrast to the acyclic model, which is not identifiable without non-Gaussianity [27]. The above relations also suggest moment-based estimators of h_0 and V, which can be used either as the final estimates or as the initial conditions for the maximum likelihood algorithm below.

²Under the dynamical-system interpretation, the matrix H should have absolute eigenvalues smaller than one for stability [18]; V = I − H is then naturally positive definite because its eigenvalues are all positive.
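A minimal sketch of the moment-based estimator just described (our code; the symmetric matrix square root is computed via an eigendecomposition):

```python
import numpy as np

def moment_estimator_symmetric(y):
    """Moment-based estimator for the symmetric model, following the
    relations h0 = V E[y] and V = I - H = Cov[y]^{-1/2} stated above.

    y: (d, N) array of log-energies. Returns (H, h0).
    """
    cov = np.cov(y)                       # sample covariance of y
    w, U = np.linalg.eigh(cov)            # eigendecomposition (cov symmetric)
    V = U @ np.diag(w ** -0.5) @ U.T      # symmetric inverse square root
    h0 = V @ y.mean(axis=1)
    H = np.eye(y.shape[0]) - V
    return H, h0
```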
4 Simulations To demonstrate the applicability of our method, we conducted the following simulation experiments. In all experiments below, we set ?(|s|) = |s|, and ?(r) = (1/2)sech(?r/2) corresponding to the standard tanh nonlinearity in ICA: f ? (r) = (?/2) tanh((?/2)r). In our projected gradient algorithm, the matrix W was first initialized by FastICA [9]; the SEM parameters, H and h0 , were initialized by the moment-based estimator described above (symmetric model) or by the LiNGAM algorithm [27] (acyclic model). The algorithm was terminated when the decrease of objective value was smaller than 10?6 ; the learning rate was adjusted in each step by simply multiplying it by the factor 0.9 until the new point did not increase the objective value. 4.1 Synthetic dataset First, we examined how the energy-dependence learned in the SEM affects the estimation of linear filters. We artificially sampled the dataset with d = 10 from our generative model by setting the matrix V to be tridiagonal, where all the main and the first diagonals were set at 10 and 10?, respectively. Figure 1 shows the ?Amari Index? [1] of estimated W by three methods, at several 5 0 0 0.5 1 Pairwise Distance 0.06 0.04 0.02 0 ?1 0 1 Pairwise Difference (mod ??/2) 0.06 Phase Connection Weight 0.02 Connection Weight 0.04 Frequency Orientation Connection Weight Connection Weight Position 0.06 0.04 0.02 0 ?0.2 0 0.2 Pairwise Difference 0.06 0.04 0.02 0 ?2 0 2 Pairwise Difference (mod ? ?) Figure 2: Connection weights versus pairwise differences of four properties of linear basis functions, estimated by fitting 2D Gabor functions. The curves were fit by local Gaussian smoothing. factors ? and sample sizes, with ten runs for every condition. In each run, the true mixing matrix was given by inverting W randomly generated from standard Gaussian and then row-normalized to have unit norms. The three methods were: 1) FastICA 3 with the tanh nonlinearity, 2) Our method (symmetric model) without energy-dependence (NoDep) initialized by FastICA, and 3) Our full method (symmetric model) initialized by NoDep. NoDep was the same as the full method except that the off-diagonal elements of H was kept zero. Note that our two algorithms used exactly the same criterion for termination of algorithm, while FastICA used a different one. This could cause the relatively poor performance of FastICA in this figure. The comparison between the full method and NoDep showed that energy-dependence learned in the SEM could improve the estimation of filter matrix, especially when the dependence was relatively strong. 4.2 Natural images The dataset consisted of 50, 000 image patches of 16 ? 16 pixels randomly taken from the original gray-scale pictures of natural scenes 4 . As a preprocessing, the sample mean was subtracted and the dimensionality was reduced to 160 by the principal component analysis (PCA) where 99% of the variance was retained. We constrained the SEM to be symmetric. Both of the obtained basis functions and filters were qualitatively very similar to those reported in many previous studies, and given in the Supplementary Material. Figure 2 shows the values of connection weights hij (after a row-wise re-scaling of V to set any hii = 1 ? vii to be zero, as a standard convention in SEM [18]) for every d(d ? 1) pairs, compared with the pairwise difference of four properties of learned features (i.e. basis functions), estimated by fitting 2D Gabor functions: spatial positions, frequencies, orientations and phases. 
As is clearly seen, the connection weights tended to be large if the features were similar to each other, except for their phases; the phases were not strongly correlated with the weights as suggested by the fitted curve, while they exhibited a weak tendency to be the same or the opposite (shifted ??) to each other. We can also see a weak tendency for the negative weights to have large magnitudes when the pairs have near-orthogonal directions or different frequencies. Figure 3 illustrates how the learned features are associated with the other ones, using iconified representations. We can see: 1) associations with positive weights between features were quite spatially-localized and occur particularly with similar orientations, and 2) those with negative weights especially occur from cross-oriented features to a target, which were sometimes non-localized and overlapped to the target feature. Notice that in the DN transform (7), these positive weights learned in the SEM perform as inhibitory and will suppress the energies of the filters having similar properties. 4.3 Magnetoencephalography (MEG) Brain activity was recorded in a single healthy subject who received alternating visual, auditory, and tactile stimulation interspersed with rest periods [25]. The original signals were measured in 204 channels (sensors) for several minutes with sampling rate (75Hz); the total number of measurements, i.e. sample size, was N = 73, 760. As a preprocessing, we applied a band-pass filter (8-30Hz) and remove some outliers. Also, we subtracted the sample mean and then reduced the dimensionality by PCA to d = 24, with 90% of variance still retained. 3 Matlab package is available at http://research.ics.tkk.fi/ica/fastica/. We used the following options: g=tanh, approach=symm, epsilon=10?6 , MaxNumIterations=104 , finetune=tanh. 4 Available in Imageica Toolbox by Patrik Hoyer, at http://www.cs.helsinki.fi/u/phoyer/software.html 6 Figure 3: Depiction of connection properties between learned basis functions in a similar manner to that has used in e.g. [6]. In each small panel, the black bar depicts the position, orientation and length of a single Gabor-like basis function obtained by our method; the red (resp. blue) pattern of superimposed bars is a linear combination of the bars of the other basis functions according to the absolute values of positive (resp. negative) connection weights to the target one. The intensities of red and blue colors were adjusted separately from each other in each panel; the ratio of the maximum positive and negative connection strengths is depicted at the bottom of each small panel by the relative length of horizontal color bars. Figure 4: Estimated interaction graph (DAG) for MEG data. The red and blue edges respectively denotes the positive and negative connections. Only the edges with strong connections are drawn, where the absolute threshold value was the same for positive and negative weights. The two manually-inserted contours denote possible clusters of sources (see text). 7 Figure 4 shows an interaction graph under the DAG constraint. One cluster of components, highlighted in the figure by the manually inserted yellow contour, seems to consist of components related to auditory processing. The components are located in the temporal cortex, and all but one in the left hemisphere. The direction of influence, which we can estimate in the acyclic model, seems to be from the anterior areas to posterior ones. 
This may be related to top-down influence, since the primary auditory cortex seems to be included in the posterior areas of the left hemisphere; at the end of the chain, the signal goes to the right hemisphere. Such temporal components are typically quite difficult to find because the modulation of their energies is quite weak. Our method may help in grouping such components together by analyzing the energy correlations. Another cluster of components consists of low-level visual areas, highlighted by the green contour. It is more difficult to interpret these interactions because the areas corresponding to the components are very close to each other. It seems, however, that here the influences are mainly from the primary visual areas to the higher-order visual areas.

5 Conclusion

We proposed a new statistical model that uses an SEM to model the energy-dependencies of latent variables in a standard linear generative model. In particular, with a simplified form of scale-mixture model, the likelihood function was derived without any approximation. The SEM has both acyclic and cyclic variants. In the acyclic case, non-Gaussianity is essential for identifiability, while in the cyclic case we introduce the constraint of symmetry, which also guarantees identifiability. We also provided a new generative interpretation of the DN transform based on a nonlinear SEM. Our method exhibited high applicability in three simulations, with a synthetic dataset, natural images, and brain signals.

Appendix: Derivation of Eq. (8)

From the uniformity of the signs, we have p_s(s) = p_s(Ds) for any D = diag(±1, ..., ±1); in particular, let D_k correspond to the signs of the k-th orthant S_k of R^d, with S_1 = (0, ∞)^d. Then the relation ∫_{S_1} p_σ(σ) dσ = Σ_{k=1}^{2^d} ∫_{S_1} p_s(D_k σ) dσ = 2^d ∫_{S_1} p_s(s) ds implies p_s(s) = (1/2^d) p_σ(s) for any s ∈ S_1; thus p_s(s) = (1/2^d) p_σ(|s|) for any s ∈ R^d. Now, y = ln φ(σ) (component-wise), and thus p_σ(σ) = p_y(y) Π_i |(ln φ)'(σ_i)|, where we assume φ is differentiable. Recall ψ(s) := ln φ(|s|) and ψ'(s) := sign(s)(ln φ)'(|s|). It then follows that p_s(s) = (1/2^d) p_y(ψ(s)) Π_i |ψ'(s_i)|, where ψ(s) applies component-wise. Since y maps linearly to r with absolute Jacobian |det V|, we have p_y(y) = |det V| Π_i ρ(r_i); combining this with p_s above, we obtain Eq. (8).

Acknowledgements

We would like to thank Jesús Malo and Valero Laparra for inspiring this work, and Michael Gutmann and Patrik Hoyer for helpful discussions and for providing code for fitting Gabor functions and for visualization. The MEG data was kindly provided by Pavan Ramkumar and Riitta Hari. J.H. was partially supported by JSPS Research Fellowships for Young Scientists.

References

[1] S. Amari, A. Cichocki, and H. H. Yang. A new learning algorithm for blind signal separation. In Advances in Neural Information Processing Systems, volume 8, 1996.
[2] A. J. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Res., 37:3327-3338, 1997.
[3] K. A. Bollen. Structural Equations with Latent Variables. Wiley, New York, 1989.
[4] M. Carandini, D. J. Heeger, and J. A. Movshon. Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17:8621-8644, 1997.
[5] A. Cichocki and P. Georgiev. Blind source separation algorithms with matrix constraints. IEICE Trans. Fundamentals, E86-A(3):522-531, 2003.
[6] P. Garrigues and B. A. Olshausen. Learning horizontal connections in a sparse coding model of natural images.
In Advances in Neural Information Processing Systems, volume 20, pages 505-512, 2008.
[7] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9:181-197, 1992.
[8] P. O. Hoyer, D. Janzing, J. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise models. In Advances in Neural Information Processing Systems, volume 21, pages 689-696, 2009.
[9] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626-634, 1999.
[10] A. Hyvärinen and P. O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Comput., 12(7):1705-1720, 2000.
[11] A. Hyvärinen, P. O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Comput., 13(7):1527-1558, 2001.
[12] A. Hyvärinen, J. Hurri, and P. O. Hoyer. Natural Image Statistics - A Probabilistic Approach to Early Computational Vision. Springer-Verlag, 2009.
[13] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001.
[14] Y. Karklin and M. S. Lewicki. A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Comput., 17:397-423, 2005.
[15] Y. Karklin and M. S. Lewicki. Emergence of complex cell properties by learning to generalize in natural scenes. Nature, 457:83-86, January 2009.
[16] M. Kawanabe and K.-R. Müller. Estimating functions for blind separation when sources have variance dependencies. Journal of Machine Learning Research, 6:453-482, 2005.
[17] U. Köster and A. Hyvärinen. A two-layer model of natural stimuli estimated with score matching. Neural Comput., 22:2308-2333, 2010.
[18] G. Lacerda, P. Spirtes, J. Ramsey, and P. Hoyer. Discovering cyclic causal models by independent components analysis. In Proceedings of the Twenty-Fourth Annual Conference on Uncertainty in Artificial Intelligence (UAI 2008), pages 366-374, 2008.
[19] S. Lyu. Divisive normalization: Justification and effectiveness as efficient coding transform. In Advances in Neural Information Processing Systems 23, pages 1522-1530, 2010.
[20] J. Malo, I. Epifanio, R. Navarro, and E. P. Simoncelli. Nonlinear image representation for efficient perceptual coding. IEEE Trans. Image Process., 15(1):68-80, 2006.
[21] J. Malo and V. Laparra. Psychophysically tuned divisive normalization approximately factorizes the PDF of natural images. Neural Comput., 22(12):3179-3206, 2010.
[22] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[23] S. Osindero, M. Welling, and G. E. Hinton. Topographic product models applied to natural scene statistics. Neural Comput., 18:381-414, 2006.
[24] J. Pearl. On the statistical interpretation of structural equations. Technical Report R-200, UCLA Cognitive Systems Laboratory, 1993.
[25] P. Ramkumar, L. Parkkonen, R. Hari, and A. Hyvärinen. Characterization of neuromagnetic brain rhythms over time scales of minutes using spatial independent component analysis. Human Brain Mapping, 2011. In press.
[26] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8), 2001.
[27] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003-2030, 2006.
[28] E. P. Simoncelli and B.
A. Olshausen. Natural image statistics and neural representation. Annu. Rev. Neurosci., 24:1193-1216, 2001.
[29] R. Valerio and R. Navarro. Optimal coding through divisive normalization models of V1 neurons. Network: Computation in Neural Systems, 14:579-593, 2003.
[30] H. Valpola, M. Harva, and J. Karhunen. Hierarchical models of variance sources. Signal Processing, 84(2):267-282, 2004.
[31] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359-366, 1998.
[32] M. J. Wainwright and E. P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Advances in Neural Information Processing Systems, volume 12, pages 855-861, 2000.
[33] K. Zhang and A. Hyvärinen. Source separation and higher-order causal analysis of MEG and EEG. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI 2010), pages 709-716, 2010.
Active Ranking using Pairwise Comparisons

Kevin G. Jamieson
University of Wisconsin
Madison, WI 53706, USA
kgjamieson@wisc.edu

Robert D. Nowak
University of Wisconsin
Madison, WI 53706, USA
nowak@engr.wisc.edu

Abstract

This paper examines the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects). In general, the ranking of n objects can be identified by standard sorting methods using n log₂ n pairwise comparisons. We are interested in natural situations in which relationships among the objects may allow for ranking using far fewer pairwise comparisons. Specifically, we assume that the objects can be embedded into a d-dimensional Euclidean space and that the rankings reflect their relative distances from a common reference point in R^d. We show that under this assumption the number of possible rankings grows like n^{2d}, and we demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than d log n adaptively selected pairwise comparisons, on average. If instead the comparisons are chosen at random, then almost all pairwise comparisons must be made in order to identify any ranking. In addition, we propose a robust, error-tolerant algorithm that only requires that the pairwise comparisons are probably correct. Experimental studies with synthetic and real datasets support the conclusions of our theoretical analysis.

1 Introduction

This paper addresses the problem of ranking a set of objects based on a limited number of pairwise comparisons (rankings between pairs of the objects). A ranking over a set of n objects Θ = (θ_1, θ_2, ..., θ_n) is a mapping σ : {1, ..., n} → {1, ..., n} that prescribes an order

σ(Θ) := θ_{σ(1)} ≺ θ_{σ(2)} ≺ ··· ≺ θ_{σ(n−1)} ≺ θ_{σ(n)},    (1)

where θ_i ≺ θ_j means θ_i precedes θ_j in the ranking. A ranking uniquely determines the collection of pairwise comparisons between all pairs of objects. The primary objective here is to bound the number of pairwise comparisons needed to correctly determine the ranking when the objects (and hence the rankings) satisfy certain known structural constraints. Specifically, we suppose that the objects may be embedded into a low-dimensional Euclidean space such that the ranking is consistent with distances in the space. We wish to exploit such structure in order to discover the ranking using a very small number of pairwise comparisons. To the best of our knowledge, this is a previously open and unsolved problem. There are practical and theoretical motivations for restricting our attention to pairwise rankings, which are discussed in Section 2. We begin by assuming that every pairwise comparison is consistent with an unknown ranking. Each pairwise comparison can be viewed as a query: is θ_i before θ_j? Each query provides 1 bit of information about the underlying ranking. Since the number of rankings is n!, in general, specifying a ranking requires Θ(n log n) bits of information. This implies that at least this many pairwise comparisons are required without additional assumptions about the ranking. In fact, this lower bound can be achieved with a standard adaptive sorting algorithm like binary sort [1]. In large-scale problems or when humans are queried for pairwise comparisons, obtaining this many pairwise comparisons may be impractical, and therefore we consider situations in which the space of rankings is structured and thereby less complex.
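The n log₂ n baseline mentioned above is easy to realize in code. A minimal sketch of ours, using binary insertion sort with a comparison oracle (the helper names are ours, and every oracle answer is assumed consistent with a single underlying ranking, as in assumption A2 below):

```python
import bisect

def sort_with_oracle(items, precedes):
    """Recover a ranking with O(n log2 n) pairwise-comparison queries.

    precedes(a, b) answers the query {a < b}; bisect.insort places each
    item with O(log n) comparisons, so n items cost O(n log n) queries.
    """
    class Keyed:
        def __init__(self, x):
            self.x = x
        def __lt__(self, other):          # each '<' is one oracle query
            return precedes(self.x, other.x)

    ranked = []
    for item in items:
        bisect.insort(ranked, Keyed(item))
    return [k.x for k in ranked]
```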
A natural way to induce a structure on the space of rankings is to suppose that the objects can be embedded into a d-dimensional Euclidean space so that the distances between objects are consistent with the ranking. This may be a reasonable assumption in many applications, and for instance the audio dataset used in our experiments is believed to have a 2 or 3 dimensional embedding [2]. We further discuss motivations for this assumption in Section 2. It is not difficult to show (see Section 3) that the number of full rankings that could arise from n objects embedded in R^d grows like n^{2d}, and so specifying a ranking from this class requires only O(d log n) bits. The main results of the paper show that under this assumption a randomly selected ranking can be determined using O(d log n) pairwise comparisons selected in an adaptive and sequential fashion, but almost all (n choose 2) pairwise comparisons are needed if they are picked randomly rather than selectively. In other words, actively selecting the most informative queries has a tremendous impact on the complexity of learning the correct ranking.

1.1 Problem statement
Let σ denote the ranking to be learned. The objective is to learn the ranking by querying the reference for pairwise comparisons of the form
q_{i,j} := {θ_i ≺ θ_j}.   (2)
The response or label of q_{i,j} is binary and denoted as y_{i,j} := 1{q_{i,j}}, where 1 is the indicator function; ties are not allowed. The main results quantify the minimum number of queries or labels required to determine the reference's ranking, and they are based on two key assumptions.

A1 Embedding: The set of n objects is embedded in R^d (in general position), and we will also use θ_1,...,θ_n to refer to their (known) locations in R^d. Every ranking σ can be specified by a reference point r_σ ∈ R^d, as follows. The Euclidean distances between the reference and objects are consistent with the ranking in the following sense: if σ ranks θ_i ≺ θ_j, then ‖θ_i − r_σ‖ < ‖θ_j − r_σ‖. Let Σ_{n,d} denote the set of all possible rankings of the n objects that satisfy this embedding condition. The interpretation of this assumption is that we know how the objects are related (in the embedding), which limits the space of possible rankings. The ranking to be learned, specified by the reference (e.g., preferences of a human subject), is unknown. Many have studied the problem of finding an embedding of objects from data [3, 4, 5]. This is not the focus here, but it could certainly play a supporting role in our methodology (e.g., the embedding could be determined from known similarities between the n objects, as is done in our experiments with the audio dataset). We assume the embedding is given, and our interest is minimizing the number of queries needed to learn the ranking; for this we require a second assumption.

A2 Consistency: Every pairwise comparison is consistent with the ranking to be learned. That is, if the reference ranks θ_i ≺ θ_j, then θ_i must precede θ_j in the (full) ranking.

As we will discuss later in Section 3.2, these two assumptions alone are not enough to rule out pathological arrangements of objects in the embedding for which at least Ω(n) queries must be made to recover the ranking. However, because such situations are not representative of what is typically encountered, we analyze the problem in the framework of the average-case analysis [6].
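To make Assumption A1 concrete, here is a short sketch of how a reference point induces a ranking and pairwise labels; the names are illustrative, and the sketch assumes objects are given as rows of a NumPy array.

```python
import numpy as np

def ranking_from_reference(points, r):
    """Rank objects by Euclidean distance to a reference point (A1).

    points: (n, d) array of object locations theta_1..theta_n in R^d.
    r:      (d,) reference point r_sigma.
    Returns the ranking as an index array: position -> object index.
    """
    dists = np.linalg.norm(points - r, axis=1)
    return np.argsort(dists)          # closest object is ranked first

def pairwise_label(points, r, i, j):
    """Label y_ij = 1 iff theta_i precedes theta_j under reference r."""
    return float(np.linalg.norm(points[i] - r) < np.linalg.norm(points[j] - r))
```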
Definition 1. With each ranking σ ∈ Σ_{n,d} we associate a probability π_σ such that Σ_{σ∈Σ_{n,d}} π_σ = 1. Let π denote these probabilities and write σ ∼ π for shorthand. The uniform distribution corresponds to π_σ = |Σ_{n,d}|^{-1} for all σ ∈ Σ_{n,d}, and we write σ ∼ U for this special case.

Definition 2. If M_n(σ) denotes the number of pairwise comparisons requested by an algorithm to identify the ranking σ, then the average query complexity with respect to π is denoted by E_π[M_n].

The main results are proven for the special case of π = U, the uniform distribution, to make the analysis more transparent and intuitive. However, the results can easily be extended to general distributions π that satisfy certain mild conditions [7]. All results henceforth, unless otherwise noted, will be given in terms of (uniform) average query complexity, and we will say such results hold "on average." Our main results can be summarized as follows. If the queries are chosen deterministically or randomly in advance of collecting the corresponding pairwise comparisons, then we show that almost all (n choose 2) pairwise comparison queries are needed to identify a ranking under the assumptions above. However, if the queries are selected in an adaptive and sequential fashion according to the algorithm in Figure 1, then we show that the number of pairwise comparisons required to identify a ranking is no more than a constant multiple of d log n, on average.

Query Selection Algorithm (Figure 1)
  input: n objects in R^d
  initialize: objects θ_1,...,θ_n in uniformly random order
  for j = 2,...,n:
    for i = 1,...,j−1:
      if q_{i,j} is ambiguous, request q_{i,j}'s label from the reference;
      else impute q_{i,j}'s label from previously labeled queries.
  output: ranking of n objects

Figure 1: Sequential algorithm for selecting queries. See Figure 2 and Section 4.2 for the definition of an ambiguous query.
Figure 2 (diagram not reproduced): Objects θ_1, θ_2, θ_3 and the queries q_{1,2}, q_{1,3}, q_{2,3}. The reference r_σ lies in the shaded region (consistent with the labels of q_{1,2}, q_{1,3}, q_{2,3}). The dotted (dashed) lines represent new queries whose labels are (are not) ambiguous given those labels.

The algorithm requests a query if and only if the corresponding pairwise ranking is ambiguous (see Section 4.2), meaning that it cannot be determined from previously collected pairwise comparisons and the locations of the objects in R^d. The efficiency of the algorithm is due to the fact that most of the queries are unambiguous when considered in a sequential fashion. For this very same reason, picking queries in a non-adaptive or random fashion is very inefficient. It is also noteworthy that the algorithm is computationally efficient, with an overall complexity no greater than O(n poly(d) poly(log n)) [7]. In Section 5 we present a robust version of the algorithm of Figure 1 that is tolerant to a fraction of errors in the pairwise comparison queries. In the case of persistent errors (see Section 5) we show that at least O(n/log n) objects can be correctly ranked in a partial ranking with high probability by requesting just O(d log^2 n) pairwise comparisons. This allows us to handle situations in which either or both of the assumptions A1 and A2 are reasonable approximations to the situation at hand but do not hold strictly (which is the case in our experiments with the audio dataset).
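The following sketch mirrors the loop of Figure 1; the ambiguity test itself is deferred to the geometric characterization of Section 4.2 (a feasibility-test sketch appears there), so here it is an injected callback. All names are illustrative.

```python
import random

def sequential_ranking(n, request_label, resolve):
    """Sketch of the sequential query-selection loop of Figure 1.

    request_label(i, j): queries the reference, returns 1 iff object i
    precedes object j.
    resolve(i, j, labels): returns the label of q_ij if it is implied by
    the labels collected so far together with the known embedding, or
    None if the query is ambiguous (see Section 4.2).
    """
    order = list(range(n))
    random.shuffle(order)                 # uniformly random initial order
    labels = {}
    for b in range(1, n):                 # insert objects one at a time
        j = order[b]
        for a in range(b):
            i = order[a]
            implied = resolve(i, j, labels)
            labels[(i, j)] = request_label(i, j) if implied is None else implied
    return labels
```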
Proving the main results involves an uncommon marriage of ideas from the ranking and statistical learning literatures. Geometrical interpretations of our problem derive from the seminal works of [8] in ranking and [9] in learning. From this perspective our problem bears a strong resemblance to the halfspace learning problem, with two crucial distinctions. In the ranking problem, the underlying halfspaces are not in general position and have strong dependencies with each other. These dependencies invalidate many of the typical analyses of such problems [10, 11]. One popular method of analysis in exact learning involves the use of something called the extended teaching dimension [12]. However, because of the possible pathological situations alluded to earlier, it is easy to show that the extended teaching dimension must be at least Ω(n), making that sort of worst-case analysis uninteresting. These differences present unique challenges to learning.

2 Motivation and related work
The problem of learning a ranking from few pairwise comparisons is motivated by what we perceive as a significant gap in the theory of ranking and permutation learning. Most work in ranking assumes a passive approach to learning; pairwise comparisons or partial rankings are collected in a random or non-adaptive fashion and then aggregated to obtain a full ranking (cf. [13, 14, 15, 16]). However, this may be quite inefficient in terms of the number of pairwise comparisons or partial rankings needed to learn the (full) ranking. This inefficiency was recently noted in the related area of social choice theory [17]. Furthermore, empirical evidence suggests that, even under complex ranking models, adaptively selecting pairwise comparisons can reduce the number needed to learn the ranking [18]. This is cause for concern since in many applications it is expensive and time-consuming to obtain pairwise comparisons. For example, psychologists and market researchers collect pairwise comparisons to gauge human preferences over a set of objects, for scientific understanding or product placement. The scope of these experiments is often very limited simply due to the time and expense required to collect the data. This suggests the consideration of more selective and judicious approaches to gathering inputs for ranking. We are interested in taking advantage of underlying structure in the set of objects in order to choose more informative pairwise comparison queries. From a learning perspective, our work adds an active learning component to a problem domain that has primarily been treated from a passive learning mindset. We focus on pairwise comparison queries for two reasons. First, pairwise comparisons admit a halfspace representation in embedding spaces, which allows for a geometrical approach to learning in such structured ranking spaces. Second, pairwise comparisons are the most common form of queries in many applications, especially those involving human subjects. For example, consider the problem of finding the most highly ranked object, as illustrated by the following familiar task. Suppose a patient needs a new pair of prescription eye lenses. Faced with literally millions of possible prescriptions, the doctor will present candidate prescriptions in a sequential fashion followed by the query: better or worse? Even if certain queries are repeated to account for possible inaccurate answers, the doctor can locate an accurate prescription with just a handful of queries. This is possible presumably because the doctor understands (at least intuitively) the intrinsic space of prescriptions and can efficiently search through it using only binary responses from the patient. We assume that the objects can be embedded in R^d and that the distances between objects and the reference are consistent with the ranking (Assumption A1).
The problem of learning a general function f : R^d → R using just pairwise comparisons that correctly ranks the objects embedded in R^d has previously been studied in the passive setting [13, 14, 15, 16]. The main contributions of this paper are theoretical bounds for the specific case when f(x) = ‖x − r_σ‖, where r_σ ∈ R^d is the reference point. This is a standard model used in multidimensional unfolding and psychometrics [8, 19]. We are unaware of any existing query-complexity bounds for this problem. We do not assume a generative model is responsible for the relationship between rankings and embeddings, but one could. For example, the objects might have an embedding (in a feature space) and the ranking is generated by distances in this space. Or, alternatively, structural constraints on the space of rankings could be used to generate a consistent embedding. Assumption A1, while arguably quite natural/reasonable in many situations, significantly constrains the set of possible rankings.

3 Geometry of rankings from pairwise comparisons
The embedding assumption A1 gives rise to geometrical interpretations of the ranking problem, which are developed in this section. The pairwise comparison q_{i,j} can be viewed as the membership query: is θ_i ranked before θ_j in the (full) ranking σ? The geometrical interpretation is that q_{i,j} asks whether the reference r_σ is closer to object θ_i or object θ_j in R^d. Consider the line connecting θ_i and θ_j in R^d. The hyperplane that bisects this line and is orthogonal to it defines two halfspaces: one containing points closer to θ_i and the other the points closer to θ_j. Thus, q_{i,j} is a membership query about which halfspace r_σ is in, and there is an equivalence between each query, each pair of objects, and the corresponding bisecting hyperplane. The set of all possible pairwise comparison queries can be represented as (n choose 2) distinct halfspaces in R^d. The intersections of these halfspaces partition R^d into a number of cells, and each one corresponds to a unique ranking of Θ. Arbitrary rankings are not possible due to the embedding assumption A1, and recall that the set of rankings possible under A1 is denoted by Σ_{n,d}. The cardinality of Σ_{n,d} is equal to the number of cells in the partition. We will refer to these cells as d-cells (to indicate they are subsets of d-dimensional space) since at times we will also refer to lower-dimensional cells, e.g., (d−1)-cells.

3.1 Counting the number of possible rankings
The following lemma determines the cardinality of the set of rankings, Σ_{n,d}, under assumption A1.
Lemma 1. [8] Assume A1-2. Let Q(n,d) denote the number of d-cells defined by the hyperplane arrangement of pairwise comparisons between these objects (i.e., Q(n,d) = |Σ_{n,d}|). Q(n,d) satisfies the recursion
Q(n,d) = Q(n−1,d) + (n−1) Q(n−1,d−1),   (3)
where Q(1,d) = 1 and Q(n,0) = 1.
In the hyperplane arrangement induced by the n objects in d dimensions, each hyperplane is intersected by every other and is partitioned into Q(n−1,d−1) subsets or (d−1)-cells. The recursion above arises by considering the addition of one object at a time. Using this lemma in a straightforward fashion, we prove the following corollary in [7].
Corollary 1. Assume A1-2. There exist positive real numbers k_1 and k_2 such that
k_1 n^{2d}/(2^d d!) < Q(n,d) < k_2 n^{2d}/(2^d d!)
for n > d + 1. If n ≤ d + 1 then Q(n,d) = n!. For n sufficiently large, k_1 = 1 and k_2 = 2 suffice.
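The recursion of Lemma 1 is easy to evaluate exactly; the following short sketch computes Q(n,d) and checks the growth rate asserted by Corollary 1.

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def Q(n, d):
    """Number of rankings realizable under A1 (recursion of Lemma 1)."""
    if d == 0 or n == 1:
        return 1
    return Q(n - 1, d) + (n - 1) * Q(n - 1, d - 1)

# Corollary 1: Q(n, d) grows like n^(2d) / (2^d d!) for n > d + 1.
for n, d in [(10, 2), (100, 2), (100, 3)]:
    approx = n ** (2 * d) / (2 ** d * factorial(d))
    print(n, d, Q(n, d), Q(n, d) / approx)   # ratio should be Theta(1)
```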
3.2 Lower bounds on query complexity
Since the cardinality of the set of possible rankings is |Σ_{n,d}| = Q(n,d), we have a simple lower bound on the number of queries needed to determine the ranking.
Theorem 1. Assume A1-2. To reconstruct an arbitrary ranking σ ∈ Σ_{n,d} any algorithm will require at least log_2 |Σ_{n,d}| = Ω(2d log_2 n) pairwise comparisons.
Proof. By Corollary 1, |Σ_{n,d}| = Θ(n^{2d}), and so at least 2d log n bits are needed to specify a ranking. Each pairwise comparison provides at most one bit.
If each query provides a full bit of information about the ranking, then we achieve this lower bound. For example, in the one-dimensional case (d = 1) the objects can be ordered and binary search can be used to select pairwise comparison queries, achieving the lower bound. This is generally impossible in higher dimensions. Even in two dimensions there are placements of the objects (still in general position) that produce d-cells in the partition induced by queries that have n − 1 faces (i.e., bounded by n − 1 hyperplanes), as shown in [7]. It follows that the worst-case situation may require at least n − 1 queries in dimensions d ≥ 2. In light of this, we conclude that worst-case bounds may be overly pessimistic indications of the typical situation, and so we instead consider the average-case performance introduced in Section 1.1.

3.3 Inefficiency of random queries
The geometrical representation of the ranking problem reveals that randomly choosing pairwise comparison queries is inefficient relative to the lower bound above. To see this, suppose m queries were chosen uniformly at random from the possible (n choose 2). The answers to m queries narrow the set of possible rankings to a d-cell in R^d. This d-cell may consist of one or more of the d-cells in the partition induced by all queries. If it contains more than one of the partition cells, then the underlying ranking is ambiguous.
Theorem 2. Assume A1-2. Let N = (n choose 2). Suppose m pairwise comparisons are chosen uniformly at random without replacement from the possible (n choose 2). Then for all positive integers N ≥ m ≥ d, the probability that the m queries yield a unique ranking is at most (m choose d)/(N choose d) ≤ (em/N)^d.
Proof. No fewer than d hyperplanes bound each d-cell in the partition of R^d induced by all possible queries. The probability of selecting d specific queries in a random draw of m is equal to
(N−d choose m−d) / (N choose m) = (m choose d) / (N choose d) ≤ (m^d / d!) · (d^d / N^d) ≤ (em/N)^d.
Note that (m choose d)/(N choose d) < 1/2 unless m = Ω(n^2). Therefore, if the queries are randomly chosen, then we will need to ask almost all queries to guarantee that the inferred ranking is probably correct.

4 Analysis of sequential algorithm for query selection
Now consider the basic sequential process of the algorithm in Figure 1. Suppose we have ranked k − 1 of the n objects. Call these objects 1 through k − 1. This places the reference r_σ within a d-cell (defined by the labels of the comparison queries between objects 1,...,k−1). Call this d-cell C_{k−1}. Now suppose we pick another object at random and call it object k. A comparison query between object k and one of objects 1,...,k−1 can only be informative (i.e., ambiguous) if the associated hyperplane intersects this d-cell C_{k−1} (see Figure 2). If k is significantly larger than d, then it turns out that the cell C_{k−1} is probably quite small and the probability that one of the queries intersects C_{k−1} is very small; in fact the probability is on the order of 1/k^2.
4.1 Hyperplane-point duality
Consider a hyperplane h = (h_0, h_1,...,h_d) with (d+1) parameters in R^d and a point p = (p_1,...,p_d) ∈ R^d that does not lie on the hyperplane. Checking which halfspace p falls in, i.e., the sign of h_1 p_1 + h_2 p_2 + ··· + h_d p_d + h_0, has a dual interpretation: h is a point in R^{d+1} and p is a hyperplane in R^{d+1} passing through the origin (i.e., with d free parameters). Recall that each possible ranking can be represented by a reference point r_σ ∈ R^d. Our problem is to determine the ranking, or equivalently the vector of responses to the (n choose 2) queries represented by hyperplanes in R^d. Using the above observation, we see that our problem is equivalent to finding a labeling over (n choose 2) points in R^{d+1} with as few queries as possible. We will refer to this alternative representation as the dual and the former as the primal.

4.2 Characterization of an ambiguous query
The characterization of an ambiguous query has interpretations in both the primal and dual spaces. We will now describe the interpretation in the dual, which will be critical to our analysis of the sequential algorithm of Figure 1.
Definition 3. [9] Let S be a finite subset of R^d, let S+ ⊂ S be points labeled +1 and S− = S \ S+ be the points labeled −1, and let x be any other point except the origin. If there exist two homogeneous linear separators of S+ and S− that assign different labels to the point x, then the label of x is said to be ambiguous with respect to S.
Lemma 2. [9, Lemma 1] The label of x is ambiguous with respect to S if and only if S+ and S− are homogeneously linearly separable by a (d−1)-dimensional subspace containing x.
Let us consider the implications of this lemma for our scenario. Assume that we have labels for all the pairwise comparisons of k − 1 objects. Next consider a new object, called object k. In the dual, the pairwise comparison between object k and object i, for some i ∈ {1,...,k−1}, is ambiguous if and only if there exists a hyperplane that still separates the original points and also passes through this new point. In the primal, this separating hyperplane corresponds to a point lying on the hyperplane defined by the associated pairwise comparison.
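Lemma 2 reduces the ambiguity test to a linear feasibility question, which can be checked with an LP solver. The following is a minimal sketch using scipy.optimize.linprog; the margin of 1 is a standard normalization for strict separation, and the function name is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def is_ambiguous_dual(S_plus, S_minus, x):
    """Lemma 2 as a feasibility LP: the label of dual point x is ambiguous
    iff some homogeneous separator of S_plus / S_minus passes through x.

    S_plus, S_minus: arrays of already-labeled dual points in R^(d+1).
    x: the new dual point (the candidate query).
    We look for h with h.s >= 1 on S_plus, h.s <= -1 on S_minus, h.x = 0.
    """
    A_ub = np.vstack([-np.asarray(S_plus), np.asarray(S_minus)])
    b_ub = -np.ones(len(S_plus) + len(S_minus))
    res = linprog(c=np.zeros(len(x)),          # pure feasibility problem
                  A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.asarray(x)[None, :], b_eq=[0.0],
                  bounds=[(None, None)] * len(x))
    return res.status == 0                     # feasible => ambiguous
```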
4.3 The probability that a query is ambiguous
An essential component of the sequential algorithm of Figure 1 is the initial random order of the objects; every sequence in which it could consider objects is equally probable. This allows us to state a nontrivial fact about the partial rankings of the first k objects observed in this sequence.
Lemma 3. Assume A1-2 and σ ∼ U. Consider the subset S ⊂ Θ with |S| = k that is randomly selected from Θ such that all (n choose k) subsets are equally probable. If Σ_{k,d} denotes the set of possible rankings of these k objects, then every σ ∈ Σ_{k,d} is equally probable.
Proof. Let a k-partition denote the partition of R^d into Q(k,d) d-cells induced by k objects, for 1 ≤ k ≤ n. In the n-partition, each d-cell is weighted uniformly with probability 1/Q(n,d). If we select k objects uniformly at random from the possible n and consider the k-partition, each d-cell in the k-partition will contain one or more d-cells of the n-partition. If we select one of these d-cells from the k-partition, on average there will be Q(n,d)/Q(k,d) d-cells from the n-partition contained in this cell. Therefore the probability mass in each d-cell of the k-partition is equal to the number of cells from the n-partition in this cell multiplied by the probability of each of those cells from the n-partition: Q(n,d)/Q(k,d) · 1/Q(n,d) = 1/Q(k,d), and |Σ_{k,d}| = Q(k,d).
As described above, for 1 ≤ i ≤ k some of the pairwise comparisons q_{i,k+1} may be ambiguous. The algorithm chooses a random sequence of the n objects in its initialization and does not use the labels of q_{1,k+1},...,q_{j−1,k+1}, q_{j+1,k+1},...,q_{k,k+1} to determine whether or not q_{j,k+1} is ambiguous. It follows that the events of requesting the label of q_{i,k+1} for i = 1, 2,...,k are independent and identically distributed (conditionally on the results of queries from previous steps). Therefore it makes sense to talk about the probability of requesting any one of them.
Lemma 4. Assume A1-2 and σ ∼ U. Let A(k,d,U) denote the probability of the event that the pairwise comparison q_{i,k+1} is ambiguous, for i = 1, 2,...,k. Then there exist positive real constants a_1 and a_2, independent of k, such that for k > 2d,
a_1 · 2d/k^2 ≤ A(k,d,U) ≤ a_2 · 2d/k^2.
Proof. By Lemma 2, a point in the dual (pairwise comparison) is ambiguous if and only if there exists a separating hyperplane that passes through this point. This implies that the hyperplane representation of the pairwise comparison in the primal intersects the cell containing r_σ (see Figure 2 for an illustration of this concept). Consider the partition of R^d generated by the hyperplanes corresponding to pairwise comparisons between objects 1,...,k. Let P(k,d) denote the number of d-cells in this partition that are intersected by a hyperplane corresponding to one of the queries q_{i,k+1}, i ∈ {1,...,k}. Then it is not difficult to show that P(k,d) is bounded above and below by constants independent of n and k times k^{2(d−1)}/(2^{d−1}(d−1)!) [7]. By Lemma 3, every d-cell in the partition induced by the k objects corresponds to an equally probable ranking of those objects. Therefore, the probability that a query is ambiguous is the number of cells intersected by the corresponding hyperplane divided by the total number of d-cells, so A(k,d,U) = P(k,d)/Q(k,d). The result follows immediately from the bounds on P(k,d) and Corollary 1.
Because the individual events of requesting each query are conditionally independent, the total number of queries requested by the algorithm is just M_n = Σ_{k=1}^{n−1} Σ_{i=1}^{k} 1{request q_{i,k+1}}. Using the results above, it is straightforward to prove the main theorem below (see [7]).
Theorem 3. Assume A1-2 and σ ∼ U. Let the random variable M_n denote the number of pairwise comparisons that are requested in the algorithm of Figure 1; then E_U[M_n] ≤ 2d log_2 2d + 2d a_2 log n. Furthermore, if σ ∼ π and max_{σ∈Σ_{n,d}} π_σ ≤ c|Σ_{n,d}|^{-1} for some c > 0, then E_π[M_n] ≤ c E_U[M_n].
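To make the bookkeeping behind Theorem 3 concrete, the sketch below sums the per-stage bound k · A(k,d,U): stage k+1 examines k candidate queries, each ambiguous with probability at most a_2 · 2d/k^2 once k > 2d. The constant a_2 is unspecified in the lemma; setting it to 1 here is an arbitrary placeholder for illustration only.

```python
import math

def expected_query_bound(n, d, a2=1.0):
    # min(1, .) caps the probability for the early stages k <= 2d.
    return sum(k * min(1.0, a2 * 2 * d / k ** 2) for k in range(1, n))

for n in (100, 1000, 10000):
    print(n, round(expected_query_bound(n, d=3), 1), round(6 * math.log(n), 1))
```

The printed totals track 2d·log n, illustrating the logarithmic growth the theorem asserts.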
5 Robust sequential algorithm for query selection
We now extend the algorithm of Figure 1 to situations in which the response to each query is only probably correct. If the correct label of a query q_{i,j} is y_{i,j}, we denote the possibly incorrect response by Y_{i,j}. The probability that Y_{i,j} = y_{i,j} is at least 1 − p, p < 1/2. The robust algorithm operates in the same fashion as the algorithm in Figure 1, with the exception that when an ambiguous query is encountered, several (equivalent) queries are made and a decision is based on the majority vote. This voting procedure allows us to construct a ranking (or partial ranking) that is correct with high probability by requesting just O(d log^2 n) queries, where the extra log factor comes from voting. First consider the case in which each query can be repeated to obtain multiple independent responses (votes) for each comparison query. This random noise model arises, for example, in social choice theory where the "reference" is a group of people, each casting a vote. The elementary proof of the next theorem is given in [7].
Theorem 4. Assume A1-2 and σ ∼ U, but that each query response is a realization of an i.i.d. Bernoulli random variable Y_{i,j} with P(Y_{i,j} ≠ y_{i,j}) ≤ p < 1/2. If all ambiguous queries are decided by the majority vote of R independent responses to each such query, then with probability greater than 1 − 2n log_2(n) exp(−(1/2)(1−2p)^2 R) this procedure correctly identifies the correct ranking and requests no more than O(Rd log n) queries on average.
In other situations, if we ask the same query multiple times we may get the same, possibly incorrect, response each time. This persistent noise model is natural, for example, if the reference is a single human. Under this model, if two rankings differ by only a single pairwise comparison, then they cannot be distinguished with probability greater than 1 − p. So, in general, exact recovery of the ranking cannot be guaranteed with high probability. The best we can hope for is to exactly recover a partial ranking of the objects (i.e., the ranking over a subset of the objects). Henceforth, we will assume the noise is persistent and aim to exactly recover a partial ranking of the objects. The key ingredient in the persistent noise setting is the design of a voting set for each ambiguous query encountered. Suppose that at the jth object in the algorithm in Figure 1 the query q_{i,j} is ambiguous. In principle, a voting set could be constructed using objects ranked between i and j. If object k is between i and j, then note that y_{i,j} = y_{i,k} = y_{k,j}. In practice, we cannot identify the subset of objects ranked between i and j, but it is contained within the set T_{i,j}, defined to be the subset of objects θ_k such that q_{i,k}, q_{k,j}, or both are ambiguous. Furthermore, Lemma 3 implies that each object in T_{i,j} is ranked between i and j with probability at least 1/3 [7]. T_{i,j} will be our voting set. Note, however, that if objects i and j are closely ranked, then T_{i,j} may be rather small, and so it is not always possible to find a sufficiently large voting set. Therefore, we must specify a size-threshold R ≥ 1. If the size of T_{i,j} is at least R, then we decide the label for q_{i,j} by voting over the responses to {q_{i,k}, q_{k,j} : k ∈ T_{i,j}} and q_{i,j}; otherwise we pass over object j and move on to the next object in the list. This allows us to construct a probably correct ranking of the objects that are not passed over.
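Under the random noise model of Theorem 4, the majority vote is a one-liner; the following sketch (names illustrative) also runs a toy simulation of the error amplification.

```python
import random

def vote_label(noisy_query, i, j, R):
    """Majority vote over R independent responses (random noise model).

    noisy_query(i, j) returns a {0,1} response equal to the true label
    y_ij with probability at least 1 - p, p < 1/2.
    """
    votes = sum(noisy_query(i, j) for _ in range(R))
    return 1 if votes > R / 2 else 0

# Toy check: with p = 0.3 and R = 15 the voted label is wrong far less
# often than a single response would be.
p, R, true_label = 0.3, 15, 1
oracle = lambda i, j: true_label if random.random() > p else 1 - true_label
errs = sum(vote_label(oracle, 0, 1, R) != true_label for _ in range(10000))
print("empirical error rate after voting:", errs / 10000)
```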
The theorem below proves that a large portion of objects will not be passed over. At the end of the process, some objects that were passed over may then be unambiguously ranked (based on queries made after they were passed over), or they can be ranked without voting (and without guarantees). The proof of the next theorem is provided in the longer version of this paper [7].
Theorem 5. Assume A1-2, σ ∼ U, and P(Y_{i,j} ≠ y_{i,j}) = p. For any size-threshold R ≥ 1, with probability greater than 1 − 2n log_2(n) exp(−(2/9)(1−2p)^2 R) the procedure above correctly ranks at least n/(2R+1) objects and requests no more than O(Rd log n) queries on average.

6 Empirical results
In this section we present empirical results for both the noiseless algorithm of Figure 1 and the robust algorithm of Section 5. For the noiseless algorithm, n = 100 points, representing the objects to be ranked, were simulated uniformly at random from the unit hypercube [0,1]^d for d = 1, 10, 20,...,100. The reference was simulated from the same distribution. For each value of d the experiment was repeated 25 times using a new simulation of points and the reference. Because responses are noiseless, exact identification of the ranking is guaranteed. The number of requested queries is plotted in Figure 3 with the lower bound of Theorem 1 for reference. The number of requested queries never exceeds twice the lower bound, which agrees with the result of Theorem 3.

Figure 3 (plot not reproduced): Mean and standard deviation of requested queries (solid) in the noiseless case for n = 100, as a function of dimension (x-axis: 0-100; y-axis: number of query requests, up to 600); log_2 |Σ_{n,d}| is a lower bound (dashed), and 2 log_2 |Σ_{n,d}| is also shown.

The robust algorithm of Section 5 was evaluated using a symmetric similarity matrix dataset available at [20] whose (i,j)th entry, denoted s_{i,j}, represents the human-judged similarity between audio signals i and j for all i ≠ j ∈ {1,...,100}. If we consider the kth row of this matrix, we can rank the other signals with respect to their similarity to the kth signal; we define q^{(k)}_{i,j} := {s_{k,i} > s_{k,j}} and y^{(k)}_{i,j} := 1{q^{(k)}_{i,j}}. Since the similarities were derived from human subjects, the derived labels may be erroneous. Moreover, there is no possibility of repeating queries here, and so the noise is persistent. The analysis of this dataset in [2] suggests that the relationship between signals can be well approximated by an embedding in 2 or 3 dimensions. We used non-metric multidimensional scaling [5] to find an embedding of the signals θ_1,...,θ_100 ∈ R^d for d = 2 and 3. For each object θ_k, we use the embedding to derive pairwise comparison labels between all other objects as follows: ỹ^{(k)}_{i,j} := 1{‖θ_k − θ_i‖ < ‖θ_k − θ_j‖}, which can be considered the best approximation to the labels y^{(k)}_{i,j} (defined above) in this embedding. The output of the robust sequential algorithm, which uses only a small fraction of the similarities, is denoted by ŷ^{(k)}_{i,j}. We set R = 15 using Theorem 5 as a rough guide. Using the popular Kendall-Tau distance
d(y^{(k)}, ŷ^{(k)}) = (n choose 2)^{-1} Σ_{i<j} 1{y^{(k)}_{i,j} ≠ ŷ^{(k)}_{i,j}}
[21] for each object k, we denote the average of this metric over all objects by d(y, ŷ) and report this statistic and the number of requested queries in Table 1. Because the average error of ŷ is only 0.07 higher than that of ỹ, this suggests that the algorithm is doing almost as well as we could hope. Also, note that 2R · 2d log n / (n choose 2) is equal to 11.4% and 17.1% for d = 2 and 3, respectively, which agrees well with the experimental values.

Table 1: Statistics for the robust algorithm of Section 5 under persistent noise, with respect to all (n choose 2) pairwise comparisons. Recall y is the noisy response vector, ỹ is the embedding's solution, and ŷ is the output of the robust algorithm.

  Dimension | % of queries requested (mean) | (std) | d(y, ỹ) | Average error d(y, ŷ)
  2         | 14.5                          | 5.3   | 0.23    | 0.31
  3         | 18.5                          | 6.0   | 0.21    | 0.29

References
[1] D. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley, 1998.
[2] Scott Philips, James Pitton, and Les Atlas.
Perceptual feature identification for active sonar echoes. In OCEANS 2006, 2006.
[3] B. McFee and G. Lanckriet. Partial order embedding with multiple kernels. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 721-728. ACM, 2009.
[4] I. Gormley and T. Murphy. A latent space model for rank data. Statistical Network Analysis: Models, Issues, and New Directions, pages 90-102, 2007.
[5] M.A.A. Cox and T.F. Cox. Multidimensional scaling. Handbook of Data Visualization, pages 315-347, 2008.
[6] J.F. Traub. Information-Based Complexity. John Wiley and Sons Ltd., 2003.
[7] Kevin G. Jamieson and Robert D. Nowak. Active ranking using pairwise comparisons. arXiv:1109.3701v1, 2011.
[8] C.H. Coombs. A theory of data. Psychological Review, 67(3):143-159, 1960.
[9] T.M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, 14(3):326-334, 1965.
[10] S. Dasgupta, A.T. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. The Journal of Machine Learning Research, 10:281-299, 2009.
[11] S. Hanneke. Theoretical Foundations of Active Learning. PhD thesis, Carnegie Mellon University, 2009.
[12] Tibor Hegedus. Generalized teaching dimensions and the query complexity of learning. In Proceedings of the Eighth Annual Conference on Computational Learning Theory, COLT '95, pages 108-117, New York, NY, USA, 1995. ACM.
[13] Y. Freund, R. Iyer, R.E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. The Journal of Machine Learning Research, 4:933-969, 2003.
[14] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, pages 89-96. ACM, 2005.
[15] Z. Zheng, K. Chen, G. Sun, and H. Zha. A regression framework for learning ranking functions using relative relevance judgments. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 287-294. ACM, 2007.
[16] R. Herbrich, T. Graepel, and K. Obermayer. Support vector learning for ordinal regression. In Artificial Neural Networks, 1999. ICANN 99. Ninth International Conference on (Conf. Publ. No. 470), volume 1, pages 97-102. IET, 1999.
[17] T. Lu and C. Boutilier. Robust approximation and incremental elicitation in voting protocols. IJCAI-11, Barcelona, 2011.
[18] W. Chu and Z. Ghahramani. Extensions of Gaussian processes for ranking: semi-supervised and active learning. Learning to Rank, page 29, 2005.
[19] J.F. Bennett and W.L. Hays. Multidimensional unfolding: Determining the dimensionality of ranked preference data. Psychometrika, 25(1):27-43, 1960.
[20] University of Washington Information Design Lab. Aural Sonar dataset. [http://idl.ee.washington.edu/SimilarityLearning], 2011.
[21] J.I. Marden. Analyzing and Modeling Rank Data. Chapman & Hall/CRC, 1995.
Active Learning Ranking from Pairwise Preferences with Almost Optimal Query Complexity
Nir Ailon* (Technion, Haifa, Israel, [email protected])

Abstract
Given a set V of n elements we wish to linearly order them using pairwise preference labels which may be non-transitive (due to irrationality or arbitrary noise). The goal is to linearly order the elements while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: the number of disagreements (loss) and the query complexity (number of pairwise preference labels). Our algorithm adaptively queries at most O(n poly(log n, ε^{-1})) preference labels for a regret of ε times the optimal loss. This is strictly better, and often significantly better, than what non-adaptive sampling could achieve. Our main result helps settle an open problem posed by learning-to-rank (from pairwise information) theoreticians and practitioners: What is a provably correct way to sample preference labels?

1 Introduction
We study the problem of learning to rank from pairwise preferences, and solve an open problem that has led to development of many heuristics but no provable results. The input is a set V of n elements from some universe, and we wish to linearly order them given pairwise preference labels, given as responses to the question: which is preferred, u or v? for pairs u, v ∈ V. The goal is to linearly order the elements from the most preferred to the least preferred, while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: the loss (number of disagreements) and the query complexity (number of preference responses we need). This is a learning problem with a finite sample space of only (n choose 2) possibilities (hence a transductive learning problem). The loss minimization problem given the entire n × n preference matrix is a well known NP-hard problem called MFAST (minimum feedback arc-set in tournaments) [5]. Recently, Kenyon and Schudy [23] devised a PTAS for it, namely, a polynomial (in n) time algorithm computing a solution with loss at most (1 + ε) times the optimal, for any ε > 0 (the degree of the polynomial may depend on ε). In our case each edge from the input graph is given for a unit cost, hence we seek query efficiency. Our algorithm samples preference labels non-uniformly and adaptively, hence we obtain an active learning algorithm. Our output is not a solution to MFAST, but rather a reduction of the original learning problem to a simpler one, decomposed into small instances in which the optimal loss is high; consequently, uniform sampling of preferences can be shown to be sufficiently good.

Our setting vs. the usual "learning to rank" problem. Our setting differs from much of the learning-to-rank (LTR) literature. Usually, the labels used in LTR problems are responses to individual elements, and not to pairs of elements. A typical example is the 1..5 scale rating for restaurants, or 0, 1 rating (irrelevant/relevant) for candidate documents retrieved for a query (known as the binary ranking problem). The preference graph induced from these labels is transitive, hence no combinatorial problems arise due to nontransitivity. We do not discuss this version of LTR. Some LTR literature does consider the pairwise preference label approach, and there is much justification for it (see [11, 22] and references therein). Other works (e.g. [26]) discuss pairwise or higher order
(listwise) approaches, but a close inspection reveals that they do not use pairwise (or listwise) labels, only pairwise (or listwise) loss functions. (*Supported by a Marie Curie International Reintegration Grant PIRG07-GA-2010-268403.)

Using Kenyon and Schudy's PTAS as a starting point. As mentioned above, our main algorithm is derived from the PTAS of [23], but with a significant difference. We use their algorithm to obtain a certain decomposition of the input. A key change to their algorithm, which is not query efficient, involves careful sampling followed by iterated sample refreshing steps. Our work can be studied in various contexts, aside from LTR. Machine learning reductions: our main algorithm reduces a given instance to smaller subproblems decomposing it; we mention other work in this vein: [6, 3, 9]. Active learning: an important field of statistical learning theory and practice ([8, 21, 15, 14, 24, 17, 13, 20, 16, 13]). In the most general setting, one wishes to improve on standard statistical learning theoretical complexity bounds by actively choosing instances for labels. Many heuristics have been developed, while algorithms with provable bounds (especially in the agnostic case) are known for few problems, often toys. General bounds are difficult to use: [8] provides general purpose active learning bounds which are quite difficult to use in actual specific problems; the A^2 algorithm [7], analyzed in [21] using the disagreement coefficient, is not useful here. It can be shown that the disagreement coefficient here is trivial (omitted due to lack of space). Noisy sorting: there is much literature in theoretical computer science on sorting noisy data. [10] work in a Bayesian setting; in [19], the input preference graph is transitive, and labels are nondeterministic. In other work, elements from the set of alternatives are assumed to have a latent value. In this work the input is worst case and not Bayesian, query responses are deterministic and elements do not necessarily have a latent value.

Paper organization: Section 2 presents basic definitions and lemmata, and in particular defines what a good decomposition is and how it can be used in learning permutations from pairwise preferences. Section 3 presents our main active learning algorithm, which is, in fact, an algorithm for producing a good decomposition query-efficiently. The main result is presented in Theorem 3.1. Section 4 discusses future work and follow-up work appearing in the full version of this paper.

2 Notation and Basic Lemmata
Let V denote a finite set of size n that we wish to rank.(1) We assume an unknown preference function W on pairs of elements in V. For any pair u, v ∈ V, W(u,v) is 1 if u is deemed preferred over v, and 0 otherwise. We enforce W(u,v) + W(v,u) = 1 (no abstention); hence (V, W) is a tournament. We assume that W is agnostic: it is not necessarily transitive and may contain errors and inconsistencies. For convenience, for any two real numbers a, b we will let [a,b] denote the interval {x : a ≤ x ≤ b} if a ≤ b and {x : b ≤ x ≤ a} otherwise. We wish to predict W using a hypothesis h from the concept class H = Π(V), where Π(V) is the set of permutations π over V, viewed equivalently as binary functions over V × V satisfying, for all u, v, w ∈ V, π(u,v) = 1 − π(v,u) and π(u,w) = 1 whenever π(u,v) = π(v,w) = 1. For π ∈ Π(V) we also use the notation π(u,v) = 1 if and only if u ≺_π v, namely, if u precedes v in π.
((1) In a more general setting we are given a sequence V^1, V^2, ... of sets, but there is enough structure and interest in the single-set case, which we focus on in this work.)
Abusing notation, we also view permutations as injective functions from [n] to V, so that the element π(1) ∈ V is in the first, most preferred position and π(n) is the least preferred one. We also define the function ρ_π inverse to π as the unique function satisfying π(ρ_π(v)) = v for all v ∈ V. Hence, u ≺_π v is equivalent to ρ_π(u) < ρ_π(v). As in standard ERM, we define a risk function C_{u,v} penalizing the error of π with respect to the pair u, v, namely, C_{u,v}(π, V, W) = 1_{π(u,v) ≠ W(u,v)}. The total loss, C(π, V, W), is defined as C_{u,v} summed over all unordered u, v ∈ V. Our goal is to devise an active learning algorithm for the purpose of minimizing this loss. In this paper we show an improved, almost optimal statistical learning theoretical bound using recent important breakthroughs in combinatorial optimization of a related problem called minimum feedback arc-set in tournaments (MFAST). The relation between this NP-hard problem and our learning problem has been noted before (e.g. in [12]), when these breakthroughs were yet to be known. MFAST is more precisely defined as follows: V and W are given in their entirety (we pay no price for reading W), and we seek π ∈ Π(V) minimizing the MFAST cost C(π, V, W). A PTAS has been discovered for this NP-hard problem very recently in groundbreaking work by Kenyon and Schudy [23]. This PTAS is not useful, however, for the purpose of learning to rank from pairwise preferences because it is not query efficient: it may require reading all quadratically many entries in W. In this work we fix this drawback, and use the PTAS to obtain a certain useful decomposition.
Definition 2.1. Given a set V of size n, an ordered decomposition is a list of pairwise disjoint subsets V_1,...,V_k ⊆ V such that ∪_{i=1}^{k} V_i = V. We let W|_{V_i} denote the restriction of W to V_i × V_i for i = 1,...,k. For a permutation π ∈ Π(V) we let π|_{V_i} denote its restriction to the elements of V_i (hence π|_{V_i} ∈ Π(V_i)). We say that π ∈ Π(V) respects V_1,...,V_k if for all u ∈ V_i, v ∈ V_j, i < j, we have u ≺_π v. We denote the set of permutations π ∈ Π(V) respecting the decomposition V_1,...,V_k by Π(V_1,...,V_k). We say that a subset U of V is small in V if |U| ≤ log n / log log n; otherwise we say that U is big in V. A decomposition V_1,...,V_k, with n_i = |V_i|, is ε-good with respect to W if:(2)

Local chaos:
min_{π∈Π(V)} Σ_{i: V_i big in V} C(π|_{V_i}, V_i, W|_{V_i}) ≥ ε^2 Σ_{i: V_i big in V} (n_i choose 2).   (2.1)

Approximate optimality:
min_{π∈Π(V_1,...,V_k)} C(π, V, W) ≤ (1 + ε) min_{π∈Π(V)} C(π, V, W).   (2.2)

We will show how to use an ε-good decomposition, and how to obtain one query-efficiently.
((2) We will just say ε-good if W is clear from the context.)

Basic (suboptimal) results from statistical learning theory: Viewing pairs of V-elements as data points, the loss C(π, V, W) is, up to normalization, an expected cost over a random draw of a data point. A sample E of unordered pairs gives rise to a partial cost C_E, defined as
C_E(π, V, W) = (n choose 2) |E|^{-1} Σ_{(u,v)∈E : u ≺_π v} W(v,u).
(We assume throughout that E is chosen with repetitions and is hence a multiset; the accounting of parallel edges is clear.) C_E(·,·,·) is an empirical unbiased estimator of C(π, V, W) if E is chosen uniformly at random among all (multi)subsets of (V choose 2) of a given size. The basic question in statistical learning theory is: how good is the minimizer π of C_E, in terms of C?
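The two costs just defined translate directly into code; the following is a minimal sketch assuming V = {0,...,n−1} and W given as an n × n 0/1 array with W[u][v] = 1 iff u is preferred over v (names illustrative).

```python
import itertools

def mfast_cost(perm, W):
    """C(pi, V, W): for each pair with u ahead of v in perm, the
    permutation errs exactly when W prefers v over u (W[v][u] == 1)."""
    return sum(W[v][u] for u, v in itertools.combinations(perm, 2))

def empirical_cost(perm, W, E):
    """C_E: unbiased estimate of C from a multiset E of unordered pairs
    sampled uniformly with repetitions from (V choose 2)."""
    pos = {v: i for i, v in enumerate(perm)}
    n = len(perm)
    s = sum(W[v][u] if pos[u] < pos[v] else W[u][v] for u, v in E)
    return (n * (n - 1) / 2) / len(E) * s
```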
The notion of VC dimension [25] gives us a nontrivial (albeit suboptimal; see below) bound.
Lemma 2.2. The VC dimension of the set of permutations on V, viewed as binary classifiers on pairs of elements, is n − 1.
It is easy to show that the VC dimension is at most O(n log n), which is the logarithm of the number of permutations. See [4] for a linear bound. The implications are:
Proposition 2.3. If E is chosen uniformly at random (with repetitions) as a sample of m elements from (V choose 2), where m > n, then with probability at least 1 − δ over the sample, all permutations π satisfy:
|C_E(π, V, W) − C(π, V, W)| = (n choose 2) · O(sqrt((n log m + log(1/δ)) / m)).
Hence, if we want to minimize C(π, V, W) over π to within an additive error of εn^2 with probability at least 1 − δ, it suffices to choose a sample E of m = O(ε^{-2}(n log n + log δ^{-1})) elements from (V choose 2) uniformly at random (with repetitions), and optimize C_E(π, V, W) instead.(3) Assume δ ≥ e^{-n}, so that we get a more manageable sample bound of O(ε^{-2} n log n). Is this bound at all interesting? For two permutations π, σ, the Kendall-Tau metric d_τ(π,σ) is defined as d_τ(π,σ) = Σ_{u≠v} 1[(u ≺_π v) ∧ (v ≺_σ u)]. The Spearman Footrule metric d_foot(π,σ) is defined as d_foot(π,σ) = Σ_u |ρ_π(u) − ρ_σ(u)|. The following is well known [18]:
d_τ(π,σ) ≤ d_foot(π,σ) ≤ 2 d_τ(π,σ).   (2.3)
Clearly C(·, V, ·) extends d_τ(·,·) to distances between permutations and binary tournaments, with the triangle inequality d_τ(π,σ) ≤ C(π, V, W) + C(σ, V, W) satisfied for all W and π, σ ∈ Π(V). Assume we use Proposition 2.3 to find π ∈ Π(V) with an additive regret of O(εn^2) with respect to an optimal solution π* for some ε > 0. The triangle inequality implies d_τ(π, π*) = O(εn^2), and hence, by (2.3), d_foot(π, π*) = O(εn^2). By definition of d_foot, this means that the average element v ∈ V is translated O(εn) positions away from its position in π*. In some applications (e.g. IR), one may want elements to be at most a constant β positions off. This translates to a sought regret of O(βn) for constant β and, using our notation, to ε = β/n. Proposition 2.3 cannot guarantee less than a quadratic sample size for such a regret, which is tantamount to querying all of W. We can do better: for any ε > 0 we achieve an additive regret of O(εC(π*, V, W)) using O(n poly(log n, ε^{-1})) W-queries, for arbitrarily small optimal loss C(π*, V, W). This is not achievable using Proposition 2.3. One may argue that the VC bound may be too pessimistic, and other arguments may work for the uniform sample case. A simple extremal case (omitted from this abstract) shows that this is false.
((3) (V choose 2) denotes the set of unordered pairs of distinct elements in V.)
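The two metrics above, and inequality (2.3), are easy to verify directly; a short sketch (rankings given as lists from most to least preferred):

```python
def d_tau(pi, sigma):
    """Kendall-Tau distance: number of pairs ordered oppositely."""
    pos_s = {v: i for i, v in enumerate(sigma)}
    return sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi))
               if pos_s[pi[i]] > pos_s[pi[j]])

def d_foot(pi, sigma):
    """Spearman Footrule distance: total displacement of elements."""
    pos_p = {v: i for i, v in enumerate(pi)}
    pos_s = {v: i for i, v in enumerate(sigma)}
    return sum(abs(pos_p[v] - pos_s[v]) for v in pi)

# Sanity check of (2.3): d_tau <= d_foot <= 2 * d_tau.
pi, sigma = [0, 1, 2, 3], [2, 0, 3, 1]
assert d_tau(pi, sigma) <= d_foot(pi, sigma) <= 2 * d_tau(pi, sigma)
```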
Proposition 2.4. Let V_1,...,V_k be an ordered decomposition of V. Let B denote the set of indices i ∈ [k] such that V_i is big in V. Assume E is chosen uniformly at random (with repetitions) as a sample of m elements from ∪_{i∈B} (V_i choose 2), where m > n. For each i = 1,...,k, let E_i = E ∩ (V_i choose 2). Define C_E(π, {V_1,...,V_k}, W) to be
C_E(π, {V_1,...,V_k}, W) = (Σ_{i∈B} (n_i choose 2)) |E|^{-1} Σ_{i∈B} (n_i choose 2)^{-1} |E_i| C_{E_i}(π|_{V_i}, V_i, W|_{V_i}).
(The normalization is defined so that the expression is an unbiased estimator of Σ_{i∈B} C(π|_{V_i}, V_i, W|_{V_i}). If |E_i| = 0 for some i, formally define (n_i choose 2)^{-1} |E_i| C_{E_i}(π|_{V_i}, V_i, W|_{V_i}) = 0.) Then with probability at least 1 − e^{-n} over the sample, all permutations π ∈ Π(V) satisfy:
|C_E(π, {V_1,...,V_k}, W) − Σ_{i∈B} C(π|_{V_i}, V_i, W|_{V_i})| = (Σ_{i∈B} (n_i choose 2)) · O(sqrt((n log m + n) / m)).
The proof (omitted from this abstract) uses simple VC dimension arithmetic. Now, why is ε-goodness good?
Lemma 2.5. Fix ε > 0 and assume we have an ε-good partition (Definition 2.1) V_1,...,V_k of V. Let B denote the set of i ∈ [k] such that V_i is big in V, and let B' = [k] \ B. Let n_i = |V_i| for i = 1,...,k, and let E denote a random sample of O(ε^{-6} n log n) elements from ∪_{i∈B} (V_i choose 2), each element chosen uniformly at random with repetitions. Let E_i denote E ∩ (V_i choose 2). Let C_E(π, {V_1,...,V_k}, W) be defined as in Proposition 2.4. For any π ∈ Π(V_1,...,V_k) define:
C'(π) := C_E(π, {V_1,...,V_k}, W) + Σ_{i∈B'} C(π|_{V_i}, V_i, W|_{V_i}) + Σ_{1≤i<j≤k} Σ_{(u,v)∈V_i×V_j} 1_{v ≺_π u}.   (2.4)
Then the following event occurs with probability at least 1 − e^{-n}: for any minimizer π* of C'(π) over Π(V_1,...,V_k), C(π*, V, W) ≤ (1 + 2ε) min_{π∈Π(V)} C(π, V, W).
(Proof omitted from abstract.) The consequence: given an ε-good decomposition V_1,...,V_k, optimizing C'(π) over π ∈ Π(V_1,...,V_k) would give a solution with relative regret of 2ε w.r.t. the optimum. The first and last terms in the RHS of (2.4) require no more than O(ε^{-6} n log n) W-queries to compute (by definition of E, and given the decomposition). The middle term runs over small V_i's, and can be computed from O(n log n / log log n) W-queries. If we now assume that a good decomposition can be efficiently computed using O(n polylog(n, ε^{-1})) W-queries (as we indeed show), then we would beat the VC bound whenever the optimal loss is at most O(n^{2−ν}) for some ν > 0.

3 A Query-Efficient Algorithm for ε-Good Decompositions
Theorem 3.1. Given a set V of size n, a preference oracle W and an error tolerance parameter 0 < ε < 1, there exists a poly(n, ε^{-1})-time algorithm returning, with constant probability, an ε-good partition of V, querying at most O(ε^{-6} n log^5 n) locations in W on expectation.
Before describing the algorithm and its analysis, we need some definitions.
Definition 3.2. Let π denote a permutation over V. Let v ∈ V and i ∈ [n]. We define π_{v→i} to be the permutation obtained by moving the rank of v to i in π, and leaving the rest of the elements in the same order.(4)
Definition 3.3. Fix π ∈ Π(V), v ∈ V and i ∈ [n]. We define TestMove(π, V, W, v, i) := C(π, V, W) − C(π_{v→i}, V, W). Equivalently, if i ≥ ρ_π(v) then
((4) For example, if V = {x, y, z} and (π(1), π(2), π(3)) = (x, y, z), then (π_{x→3}(1), π_{x→3}(2), π_{x→3}(3)) = (y, z, x).)
The lemma is easily proven using Hoeffding tail bounds, using the fact that |W(u, v)| ≤ 1 for all u, v. Our decomposition algorithm SampleAndRank is detailed in Algorithm 1, with subroutines in Algorithms 2 and 3. It is a query efficient improvement of the PTAS in [23], with the following difference: here we are not interested in an approximation algorithm for MFAST, but just in an ε-good decomposition. Whenever we reach a small block (line 3) or a big block with a probably approximately sufficiently high cost (line 8) in our recursion of Algorithm 2, we simply output it as a block in our partition. Denote the resulting outputted partition by V₁, …, V_k. Denote by π* the minimizer of C(π, V, W) over Π(V₁, …, V_k). We need to show that C(π*, V, W) ≤ (1 + ε) min_{π∈Π(V)} C(π, V, W), thus establishing (2.2). The analysis closely follows [23]. Due to space limitations, we focus on the differences, and specifically on Procedure ApproxLocalImprove (Algorithm 3), replacing a greedy local improvement step in [23] which is not query efficient.

SampleAndRank (Algorithm 1) takes the following arguments: the set V, the preference matrix W and an accuracy argument ε. It is implicitly understood that the argument W passed to SampleAndRank is given as a query oracle, incurring a unit cost upon each access. The first warm start step in SampleAndRank computes an expected constant factor approximation π to MFAST on V, W using QuickSort [2]. The query complexity of this step is O(n log n) on expectation (see [3]). Before continuing, we make the following assumption, which holds with constant probability using Markov probability bounds.

Assumption 3.5. The cost C(π, V, W) of π computed in line 2 of SampleAndRank is O(1) times that of the optimal π*, and the query cost incurred in the computation is O(n log n).

Next, a recursive procedure SampleAndDecompose is called, running a divide-and-conquer algorithm. Before branching, it executes the following: Lines 5 to 9 identify local chaos (2.1) (with high probability). Line 10 calls ApproxLocalImprove (Algorithm 3), responsible for performing query-efficient approximate greedy steps, as we now explain.

Approximate local improvement steps. ApproxLocalImprove takes a set V of size N, W, a permutation π on V, two numbers C₀, ε and an integer n.⁵ The number n is always the size of the input in the root call to SampleAndDecompose, passed down in the recursion, and used for the purpose of controlling success probabilities. The goal is to repeatedly identify w.h.p. single vertex moves that considerably decrease the cost. The procedure starts by creating a sample ensemble S = {E_{v,i} : v ∈ V, i ∈ [B, L]}, where B = ⌊log Θ(εN / log n)⌋ and L = ⌈log N⌉. The size of each E_{v,i} ∈ S is Θ(ε⁻² log² n), and each element (v, x) ∈ E_{v,i} was added (with possible multiplicity) by uniformly at random selecting, with repetitions, an element x ∈ V positioned at distance at most 2^i from the position of v in π. Let D_π denote the distribution space from which S was drawn, and let Pr_{X∼D_π}[X = S] denote the probability of obtaining a given sample ensemble S. S will enable us to approximate the improvement in cost obtained by moving a single element u to position j.

Definition 3.6. Fix u ∈ V and j ∈ [n], and assume log |j − ρ_π(u)| ≥ B. Let ℓ = ⌈log |j − ρ_π(u)|⌉. We say that S is successful at u, j if |{x : (u, x) ∈ E_{u,ℓ}} ∩ {x : ρ_π(x) ∈ [ρ_π(u), j]}| = Ω(ε⁻² log² n).

⁵ Notation abuse: V here is a subset of the original input.
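A concrete sketch (ours) of building the sample ensemble S just described, mirroring lines 5-14 of Algorithm 3 below; the constant hidden in the Θ(·) terms is a placeholder of our choosing.

```python
import math
import random

def build_ensemble(pi, eps, n, c=1.0, rng=random):
    """E[(v, i)]: multiset of pairs (v, x) with x drawn within distance 2^i of v in pi."""
    N = len(pi)
    B = max(0, math.floor(math.log2(max(1.0, eps * N / math.log(n)))))
    L = math.ceil(math.log2(N))
    size = max(1, int(c * eps ** -2 * math.log(n) ** 2))  # Theta(eps^-2 log^2 n)
    E = {}
    for r, v in enumerate(pi):
        for i in range(B, L + 1):
            lo, hi = max(0, r - 2 ** i), min(N - 1, r + 2 ** i)
            E[(v, i)] = [(v, pi[rng.randint(lo, hi)]) for _ in range(size)]
    return E, B, L
```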
Success of S at u, j means that sufficiently many samples x ∈ V such that ρ_π(x) is between ρ_π(u) and j are represented in E_{u,ℓ}. Conditioned on S being successful at u, j, note that the denominator from the definition of TestMove_E does not vanish, and we can thereby define:

Definition 3.7. S is a good approximation at u, j if (defining ℓ as in Definition 3.6)

|TestMove_{E_{u,ℓ}}(π, V, W, u, j) − TestMove(π, V, W, u, j)| ≤ (1/2) ε |j − ρ_π(u)| / log n.

S is a good approximation if it is successful and a good approximation at all u ∈ V, j ∈ [n] satisfying ⌈log |j − ρ_π(u)|⌉ ∈ [B, L].

Using Chernoff to ensure success and Hoeffding to ensure good approximation, union bounding:

Lemma 3.8. With probability at least 1 − O(n⁻⁴), S is a good approximation.

Algorithm 1 SampleAndRank(V, W, ε)
1: n ← |V|
2: π ← expected O(1)-approximate solution to MFAST, using O(n log n) W-queries on expectation, via QuickSort [2]
3: return SampleAndDecompose(V, W, ε, n, π)

Algorithm 2 SampleAndDecompose(V, W, ε, n, π)
1: N ← |V|
2: if N ≤ log n / log log n then
3:   return trivial partition {V}
4: end if
5: E ← random subset of O(ε⁻⁴ log n) elements from (V choose 2) (with repetitions)
6: C ← C_E(π, V, W)   (C is an additive O(ε²N²) approximation of C(π, V, W) w.p. ≥ 1 − n⁻⁴)
7: if C = Ω(ε²N²) then
8:   return trivial partition {V}
9: end if
10: π₁ ← ApproxLocalImprove(V, W, π, ε, n)
11: k ← random integer in the range [N/3, 2N/3]
12: V_L ← {v ∈ V : ρ_{π₁}(v) ≤ k}, π_L ← restriction of π₁ to V_L
13: V_R ← V \ V_L, π_R ← restriction of π₁ to V_R
14: return concatenation of SampleAndDecompose(V_L, W, ε, n, π_L), SampleAndDecompose(V_R, W, ε, n, π_R)

Mutating the Pair Sample To Reflect a Single Element Move. Line 16 in ApproxLocalImprove requires elaboration. In lines 15-18 we sought (using S) an element u and position j, such that moving u to j (giving rise to π_{u→j}) would considerably improve the cost w.h.p. If such an element u existed, we executed the exchange π ← π_{u→j}. Unfortunately the sample ensemble S becomes stale: even if S was a good approximation, it is no longer necessarily so w.r.t. the new value of π. We refresh it in line 16 by applying a transformation φ_{u→j} on S, resulting in a new sample ensemble φ_{u→j}(S) approximately distributed by D_{π_{u→j}}. More precisely, φ (defined below) is such that

φ_{u→j}(D_π) = D_{π_{u→j}},   (3.1)

where the left hand side denotes the distribution obtained by drawing from D_π and applying φ_{u→j} to the result. We now define φ_{u→j}. Denoting φ_{u→j}(S) = S′ = {E′_{v,i} : v ∈ V, i ∈ [B, L]}, we need to define each E′_{v,i}.

Definition 3.9. E_{v,i} is interesting in the context of π and π_{u→j} if the two sets T₁ = {x ∈ V : |ρ_π(x) − ρ_π(v)| ≤ 2^i} and T₂ = {x ∈ V : |ρ_{π_{u→j}}(x) − ρ_{π_{u→j}}(v)| ≤ 2^i} differ.

We set E′_{v,i} = E_{v,i} for all v, i for which E_{v,i} is not interesting. Fix one interesting choice v, i. Let T₁, T₂ be as in Definition 3.9. It can be easily shown that each of T₁ and T₂ contains O(1) elements that are not contained in the other, and it can be assumed (using a simple clipping argument, omitted) that this number is exactly 1, hence |T₁| = |T₂|. Let X₁ = T₁ \ T₂ and X₂ = T₂ \ T₁. Fix any injection φ : X₁ → X₂, and extend φ : T₁ → T₂ so that φ(x) = x for all x ∈ T₁ ∩ T₂. Finally, define E′_{v,i} = {(v, φ(x)) : (v, x) ∈ E_{v,i}}.
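The refresh can be sketched directly from Definition 3.9. This literal version (ours) recomputes the window sets, so it is not query- or time-efficient, and the degenerate-case branch stands in for the clipping argument; all helper names are our own scaffolding.

```python
def move(pi, u, j):
    """pi_{u->j}: move u to position j, keeping the order of the other elements."""
    rest = [x for x in pi if x != u]
    return rest[:j] + [u] + rest[j:]

def refresh(E, pi, u, j, resample):
    """phi_{u->j}(S): relabel interesting E_{v,i}; rebuild E_{u,i} from scratch."""
    new_pi = move(pi, u, j)
    r_old = {x: k for k, x in enumerate(pi)}
    r_new = {x: k for k, x in enumerate(new_pi)}
    out = {}
    for (v, i), sample in E.items():
        if v == u:
            out[(v, i)] = resample(new_pi, v, i)       # created from scratch
            continue
        T1 = {x for x in pi if abs(r_old[x] - r_old[v]) <= 2 ** i}
        T2 = {x for x in pi if abs(r_new[x] - r_new[v]) <= 2 ** i}
        if T1 == T2:                                   # not interesting: keep as is
            out[(v, i)] = sample
        elif not (T1 - T2) or not (T2 - T1):           # degenerate case: just resample
            out[(v, i)] = resample(new_pi, v, i)
        else:
            x_out, x_in = (T1 - T2).pop(), (T2 - T1).pop()   # exactly one swap each way
            out[(v, i)] = [(v, x_in if x == x_out else x) for (_, x) in sample]
    return out, new_pi
```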
Algorithm 3 ApproxLocalImprove(V, W, π, ε, n) (Note: π is used as both input and output)
1: N ← |V|, B ← ⌈log Θ(εN / log n)⌉, L ← ⌈log N⌉
2: if N = O(ε⁻³ log³ n) then
3:   return
4: end if
5: for v ∈ V do
6:   r ← ρ_π(v)
7:   for i = B … L do
8:     E_{v,i} ← ∅
9:     for m = 1 … Θ(ε⁻² log² n) do
10:      j ← integer chosen uniformly at random from [max{1, r − 2^i}, min{n, r + 2^i}]
11:      E_{v,i} ← E_{v,i} ∪ {(v, π(j))}
12:    end for
13:  end for
14: end for
15: while ∃ u ∈ V and j ∈ [n] s.t. (setting ℓ := ⌈log |j − ρ_π(u)|⌉): ℓ ∈ [B, L] and TestMove_{E_{u,ℓ}}(π, V, W, u, j) > ε |j − ρ_π(u)| / log n do
16:   for v ∈ V, i ∈ [B, L]: refresh E_{v,i} w.r.t. the move u → j using φ_{u→j} (Section 3)
17:   π ← π_{u→j}
18: end while

For v = u we create E′_{v,i} from scratch by repeating the loop in line 7 for that v. It is easy to see that (3.1) holds. By Lemma 3.8, the total variation distance between (D_π | good approximation) and D_{π_{u→j}} is O(n⁻⁴). Using a simple chain rule argument:

Lemma 3.10. Fix π⁰ on V of size N, and fix u₁, …, u_k ∈ V and j₁, …, j_k ∈ [n]. Draw S⁰ from D_{π⁰}, and define S¹ = φ_{u₁→j₁}(S⁰), S² = φ_{u₂→j₂}(S¹), …, S^k = φ_{u_k→j_k}(S^{k−1}), and π¹ = π⁰_{u₁→j₁}, π² = π¹_{u₂→j₂}, …, π^k = π^{k−1}_{u_k→j_k}. Consider the random variable S^k conditioned on S⁰, S¹, …, S^{k−1} being good approximations for π⁰, …, π^{k−1}, respectively. Then the total variation distance between the distribution of S^k and the distribution (D_{π^k} | π^k) (corresponding to the process of obtaining π^k and drawing from D_{π^k} "from scratch") is at most O(kn⁻⁴).

The difference between S and S′, defined as dist(S, S′) := Σ_{v,i} |E_{v,i} Δ E′_{v,i}|, bounds the query complexity of computing mutations. The proof of the following has been omitted from this abstract.

Lemma 3.11. Assume S ∼ D_π for some π, and S′ = φ_{u→j}(S). Then E[dist(S, S′)] = O(ε⁻³ log³ n).

Analysis of SampleAndDecompose. Various high probability events must occur in order for the algorithm guarantees to hold. Let E₁ denote the event that the first Θ(n⁴) sample ensembles S₁, S₂, … used by ApproxLocalImprove, whether created in lines 5-14 or obtained via mutations, are good approximations. By Lemmas 3.8 and 3.10, using a union bound, with constant probability (say, 0.99) this happens. Let E₂ denote the event that the cost approximations obtained in line 5 of SampleAndDecompose are successful at all recursive calls. By Hoeffding tail bounds, this happens with probability 1 − O(n⁻⁴) for each call; there are O(n log n) calls, hence we can lower bound the probability of success of all executions by 0.99. Concluding, the following holds with probability at least 0.97:

Assumption 3.12. Events E₁ and E₂ hold true.

We condition what follows on this assumption.⁶ Let π* denote the optimal permutation for the root call to SampleAndDecompose with V, W, ε. The permutation π is, by Assumption 3.5, a constant factor approximation for π*. By the triangle inequality, d_τ(π, π*) ≤ C(π, V, W) + C(π*, V, W), hence E[d_τ(π, π*)] = O(C(π*, V, W)). From this, using (2.3), E[d_foot(π, π*)] = O(C(π*, V, W)). Now consider the recursion tree T of SampleAndDecompose. Denote by I the set of internal nodes, and by L the set of leaves (i.e. executions exiting from line 8). For a call to SampleAndDecompose corresponding to a node X, denote the input arguments by (V_X, W, ε, n, π_X). Let L[X], R[X] denote the left and right children of X, respectively.

⁶ This may bias some expectation upper bounds derived earlier and in what follows. This bias can multiply the estimates by at most 1/0.97, which can be absorbed in our O-notations.
Let k_X denote the integer k in line 11 in the context of X ∈ I. Hence, by our definitions, V_{L[X]}, V_{R[X]}, π_{L[X]} and π_{R[X]} are precisely V_L, V_R, π_L, π_R from lines 12-13 in the context of node X. Take, as in line 1, N_X = |V_X|. Let π*_X denote the optimal MFAST solution for instance (V_X, W|_{V_X}). By E₁ we conclude that the cost of (π_X)_{u→j} is always an actual improvement compared to π_X (for the current value of π_X, u and j in the iteration), and the improvement in cost is of magnitude at least Ω(ε |ρ_{π_X}(u) − j| / log n), which is Ω(ε² N_X / log² n) due to the use of B defined in line 1.⁷ But then the number of iterations of the while loop in line 15 of ApproxLocalImprove is O(ε⁻² C(π_X, V_X, W|_{V_X}) log² n / N_X) (otherwise the true cost of the running solution would go below 0). Since C(π_X, V_X, W|_{V_X}) ≤ (N_X choose 2), the number of iterations is hence at most O(ε⁻² N_X log² n). By Lemma 3.11 the expected query complexity incurred by the call to ApproxLocalImprove is therefore O(ε⁻⁵ N_X log⁵ n). Summing over the recursion tree, the total query complexity incurred by calls to ApproxLocalImprove is, on expectation, at most O(ε⁻⁵ n log⁶ n).

Now consider the moment at which the while loop of ApproxLocalImprove terminates. Let π_{1X} denote the permutation obtained at that point, returned to SampleAndDecompose in line 10. We classify the elements v ∈ V_X into two families: V_X^short denotes all u ∈ V_X s.t. |ρ_{π_{1X}}(u) − ρ_{π*_X}(u)| = O(ε N_X / log n), and V_X^long denotes V_X \ V_X^short. We know by assumption that the last sample ensemble S used in ApproxLocalImprove was a good approximation, hence for all u ∈ V_X^long:

(*) TestMove(π_{1X}, V_X, W|_{V_X}, u, ρ_{π*_X}(u)) = O(ε |ρ_{π_{1X}}(u) − ρ_{π*_X}(u)| / log n).

Following [23], we say for u ∈ V_X that u crosses k_X if [ρ_{π_{1X}}(u), ρ_{π*_X}(u)] contains k_X. Let V_X^cross denote the (random) set of elements u ∈ V_X that cross k_X. We define a key quantity as in [23]:

T_X := Σ_{u ∈ V_X^cross} TestMove(π_{1X}, V_X, W|_{V_X}, u, ρ_{π*_X}(u)).

Following (*), the elements u ∈ V_X^long can contribute at most O(ε Σ_{u ∈ V_X^long} |ρ_{π_{1X}}(u) − ρ_{π*_X}(u)| / log n) to T_X. This latter bound is, by definition, O(ε d_foot(π_{1X}, π*_X) / log n), which is, using (2.3), at most O(ε d_τ(π_{1X}, π*_X) / log n). By the triangle inequality and the definition of π*_X, the last expression is O(ε C(π_{1X}, V_X, W|_{V_X}) / log n). How much can elements in V_X^short contribute to T_X? The probability of each such element to cross k_X is O(|ρ_{π_{1X}}(u) − ρ_{π*_X}(u)| / N_X). Hence, the total expected contribution is O(Σ_{u ∈ V_X^short} |ρ_{π_{1X}}(u) − ρ_{π*_X}(u)|² / N_X). Under the constraints Σ_{u ∈ V_X^short} |ρ_{π_{1X}}(u) − ρ_{π*_X}(u)| ≤ d_foot(π_{1X}, π*_X) and |ρ_{π_{1X}}(u) − ρ_{π*_X}(u)| = O(ε N_X / log n), this is O(d_foot(π_{1X}, π*_X) ε N_X / (N_X log n)) = O(d_foot(π_{1X}, π*_X) ε / log n). Again using (2.3) and the triangle inequality, the bound becomes O(ε C(π_{1X}, V_X, W|_{V_X}) / log n). Combining for V_X^long and V_X^short, we conclude:

(**) E_{k_X}[T_X] = O(ε C(π_{1X}, V_X, W|_{V_X}) / log n)

(the expectation is over the choice of k_X). The bound (**) is the main improvement over [23], and should be compared with Lemma 3.2 there, stating (in our notation) T_X = O(ε C* N_X / (4n log n)). The latter bound is more restrictive than ours in certain cases, and obtaining it relies on a procedure that cannot be performed without having access to W in its entirety. (**), however, can be achieved using efficient querying of W, as we have shown. The remainder of the arguments leading to the proof of Theorem 3.1 closely follows those in Section 4 of [23]. The details have been omitted from this abstract.

4 Future Work

We presented a statistical learning theoretical active learning result for pairwise ranking.
The main vehicle was a query (and time) efficient decomposition procedure, reducing the problem to smaller ones in which the optimal loss is high and uniform sampling suffices. The main drawback of our result is the inability to use it in order to search in a limited subspace of permutations. A typical example of such a subspace is the case in which each element v ∈ V has a corresponding feature vector in a real vector space, and we only seek permutations induced by linear score functions. In followup work, Ailon, Begleiter and Ezra [1] show a novel technique achieving a slightly better query complexity than here with a simpler proof, while also admitting search in restricted spaces.

Acknowledgements

The author gratefully acknowledges the help of Warren Schudy with derivation of some of the bounds in this work. Special thanks to Ron Begleiter for helpful comments. Apologies for omitting references to much relevant work that could not fit in this version's bibliography.

⁷ This also bounds the number of times a sample ensemble is created by O(n⁴), as required by E₁.

References

[1] Nir Ailon, Ron Begleiter, and Esther Ezra, A new active learning scheme with applications to learning to rank from pairwise preferences, arxiv.org/abs/1110.2136 (2011).
[2] Nir Ailon, Moses Charikar, and Alantha Newman, Aggregating inconsistent information: Ranking and clustering, J. ACM 55 (2008), no. 5.
[3] Nir Ailon and Mehryar Mohri, Preference based learning to rank, vol. 80, 2010, pp. 189-212.
[4] Nir Ailon and Kira Radinsky, Ranking from pairs and triplets: Information quality, evaluation methods and query complexity, WSDM, 2011.
[5] Noga Alon, Ranking tournaments, SIAM J. Discret. Math. 20 (2006), no. 1, 137-142.
[6] M. F. Balcan, N. Bansal, A. Beygelzimer, D. Coppersmith, J. Langford, and G. B. Sorkin, Robust reductions from ranking to classification, Machine Learning 72 (2008), no. 1-2, 139-153.
[7] Maria-Florina Balcan, Alina Beygelzimer, and John Langford, Agnostic active learning, J. Comput. Syst. Sci. 75 (2009), no. 1, 78-89.
[8] Maria-Florina Balcan, Steve Hanneke, and Jennifer Vaughan, The true sample complexity of active learning, Machine Learning 80 (2010), 111-139.
[9] A. Beygelzimer, J. Langford, and P. Ravikumar, Error-correcting tournaments, ALT, 2009, pp. 247-262.
[10] M. Braverman and E. Mossel, Noisy sorting without resampling, SODA: Proceedings of the 19th annual ACM-SIAM symposium on Discrete algorithms, 2008, pp. 268-276.
[11] B. Carterette, P. N. Bennett, D. Maxwell Chickering, and S. T. Dumais, Here or there: Preference judgments for relevance, ECIR, 2008.
[12] William W. Cohen, Robert E. Schapire, and Yoram Singer, Learning to order things, NIPS '97, 1998, pp. 451-457.
[13] D. Cohn, L. Atlas, and R. Ladner, Improving generalization with active learning, Machine Learning 15 (1994), no. 2, 201-221.
[14] A. Culotta and A. McCallum, Reducing labeling effort for structured prediction tasks, AAAI: Proceedings of the 20th national conference on Artificial intelligence, 2005, pp. 746-751.
[15] S. Dasgupta, Coarse sample complexity bounds for active learning, Advances in Neural Information Processing Systems 18, 2005, pp. 235-242.
[16] S. Dasgupta, A. Tauman Kalai, and C. Monteleoni, Analysis of perceptron-based active learning, Journal of Machine Learning Research 10 (2009), 281-299.
[17] Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni, A general agnostic active learning algorithm, NIPS, 2007.
[18] Persi Diaconis and R. L. Graham, Spearman's footrule as a measure of disarray, Journal of the Royal Statistical Society, Series B (Methodological) 39 (1977), no. 2, pp. 262-268.
[19] U. Feige, D. Peleg, P. Raghavan, and E. Upfal, Computing with unreliable information, STOC: Proceedings of the 22nd annual ACM symposium on Theory of computing, 1990, pp. 128-137.
[20] Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby, Selective sampling using the query by committee algorithm, Mach. Learn. 28 (1997), no. 2-3, 133-168.
[21] Steve Hanneke, A bound on the label complexity of agnostic active learning, ICML, 2007, pp. 353-360.
[22] Eyke Hüllermeier, Johannes Fürnkranz, Weiwei Cheng, and Klaus Brinker, Label ranking by learning pairwise preferences, Artif. Intell. 172 (2008), no. 16-17, 1897-1916.
[23] Claire Kenyon-Mathieu and Warren Schudy, How to rank with few errors, STOC, 2007, pp. 95-103.
[24] Dan Roth and Kevin Small, Margin-based active learning for structured output spaces, 2006.
[25] V. N. Vapnik and A. Ya. Chervonenkis, On the uniform convergence of relative frequencies of events to their probabilities, Theory of Prob. and its Applications 16 (1971), no. 2, 264-280.
[26] F. Xia, T-Y. Liu, J. Wang, W. Zhang, and H. Li, Listwise approach to learning to rank: theory and algorithm, ICML '08, 2008, pp. 1192-1199.
Efficient Learning of Generalized Linear and Single Index Models with Isotonic Regression

Sham M. Kakade
Microsoft Research and Wharton, U Penn
[email protected]

Adam Tauman Kalai
Microsoft Research
[email protected]

Ohad Shamir
Microsoft Research
[email protected]

Varun Kanade
SEAS, Harvard University
[email protected]

Abstract

Generalized Linear Models (GLMs) and Single Index Models (SIMs) provide powerful generalizations of linear regression, where the target variable is assumed to be a (possibly unknown) 1-dimensional function of a linear predictor. In general, these problems entail non-convex estimation procedures, and, in practice, iterative local search heuristics are often used. Kalai and Sastry (2009) provided the first provably efficient method, the Isotron algorithm, for learning SIMs and GLMs, under the assumption that the data is in fact generated under a GLM and under certain monotonicity and Lipschitz (bounded slope) constraints. The Isotron algorithm interleaves steps of perceptron-like updates with isotonic regression (fitting a one-dimensional non-decreasing function). However, to obtain provable performance, the method requires a fresh sample every iteration. In this paper, we provide algorithms for learning GLMs and SIMs, which are both computationally and statistically efficient. We modify the isotonic regression step in Isotron to fit a Lipschitz monotonic function, and also provide an efficient O(n log(n)) algorithm for this step, improving upon the previous O(n²) algorithm. We provide a brief empirical study, demonstrating the feasibility of our algorithms in practice.

1 Introduction

The oft-used linear regression paradigm models a dependent variable Y as a linear function of a vector-valued independent variable X. Namely, for some vector w, we assume that E[Y|X] = w · X. Generalized linear models (GLMs) provide a flexible extension of linear regression, by assuming that the dependent variable Y is of the form E[Y|X] = u(w · X); u is referred to as the inverse link function or transfer function (see [1] for a review). Generalized linear models include commonly used regression techniques such as logistic regression, where u(z) = 1/(1 + e^{−z}) is the logistic function. The class of perceptrons also falls in this category, where u is a simple piecewise linear ramp function, with the slope of the middle piece being the inverse of the margin.

In the case of linear regression, the least-squares method is a highly efficient procedure for parameter estimation. Unfortunately, in the case of GLMs, even in the setting when u is known, the problem of fitting a model that minimizes squared error is typically not convex. We are not aware of any classical estimation procedure for GLMs which is both computationally and statistically efficient, and with provable guarantees. The standard procedure is iteratively reweighted least squares, based on Newton-Raphson (see [1]).
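A toy illustration of the model (our own example, not from the paper): draw x in the unit ball, fix a monotone 1-Lipschitz transfer function u, and sample y ∈ [0, 1] with E[y|x] = u(w · x).

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, W = 10, 1000, 3.0
w = rng.normal(size=d)
w *= W / np.linalg.norm(w)                                      # ||w|| = W
X = rng.normal(size=(m, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # project into the unit ball

u = lambda z: np.clip(0.5 + 0.5 * z, 0.0, 1.0)                  # monotone, 1-Lipschitz ramp
y = rng.binomial(1, u(X @ w)).astype(float)                     # any y in [0,1] with this mean works
```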
With regards to the former, under mild Lipschitz-continuity restrictions on u, it is possible to characterize the effectiveness of an (appropriately constrained) joint empirical risk minimization procedure. This suggests that, from a purely statistical viewpoint, it may be worthwhile to attempt jointly optimizing u and w on empirical data. However, the issue of computationally efficiently estimating both u and w (and still achieving a good statistical rate) is more delicate, and is the focus of this work. We note that this is not a trivial problem: in general, the joint estimation problem is highly non-convex, and despite a significant body of literature on the problem, existing methods are usually based on heuristics, which are not guaranteed to converge to a global optimum (see for instance [2, 3, 4, 5, 6]). The Isotron algorithm of Kalai and Sastry [7] provides the first provably efficient method for learning GLMs and SIMs, under the common assumption that u is monotonic and Lipschitz, and assuming that the data corresponds to the model.1 The sample and computational complexity of this algorithm is polynomial, and the sample complexity does not explicitly depend on the dimension. The algorithm is a variant of the ?gradient-like? perceptron algorithm, where apart from the perceptronlike updates, an isotonic regression procedure is performed on the linear predictions using the Pool Adjacent Violators (PAV) algorithm, on every iteration. While the Isotron algorithm is appealing due to its ease of implementation (it has no parameters other than the number of iterations to run) and theoretical guarantees (it works for any u, w), there is one principal drawback. It is a batch algorithm, but the analysis given requires the algorithm to be run on fresh samples each batch. In fact, as we show in experiments, this is not just an artifact of the analysis ? if the algorithm loops over the same data in each update step, it really does overfit in very high dimensions (such as when the number of dimensions exceeds the number of examples). Our Contributions: We show that the overfitting problem in Isotron stems from the fact that although it uses a slope (Lipschitz) condition as an assumption in the analysis, it does not constrain the output hypothesis to be of this form. To address this issue, we introduce the S L I SOTRON algorithm (pronounced slice-o-tron, combining slope and Isotron). The algorithm replaces the isotonic regression step of the Isotron by finding the best non-decreasing function with a bounded Lipschitz parameter - this constraint plays here a similar role as the margin in classification algorithms. We also note S L I SOTRON (like Isotron) has a significant advantage over standard regression techniques, since it does not require knowing the transfer function. Our two main contributions are: 1. We show that the new algorithm, like Isotron, has theoretical guarantees, and significant new analysis is required for this step. 2. We provide an efficient O(n log(n)) time algorithm for finding the best non-decreasing function with a bounded Lipschitz parameter, improving on the previous O(n2 ) algorithm [10]. This makes S L I SOTRON practical even on large datasets. We begin with a simple perceptron-like algorithm for fitting GLMs, with a known transfer function u which is monotone and Lipschitz. Somewhat surprisingly, prior to this work (and Isotron [7]) a computationally efficient procedure that guarantees to learn GLMs was not known. 
Section 4 contains the more challenging SLISOTRON algorithm and also the efficient O(n log(n)) algorithm for Lipschitz isotonic regression. We conclude with a brief empirical analysis.

2 Setting

We assume the data (x, y) are sampled i.i.d. from a distribution supported on B_d × [0, 1], where B_d = {x ∈ ℝ^d : ‖x‖ ≤ 1} is the unit ball in d-dimensional Euclidean space.

¹ In the more challenging agnostic setting, the data is not required to be distributed according to a true u and w, but it is required to find the best u, w which minimize the empirical squared error. Similar to observations of Kalai et al. [8], it is straightforward to show that this problem is likely to be computationally intractable in the agnostic setting. In particular, it is at least as hard as the problem of "learning parity with noise," whose hardness has been used as the basis for designing multiple cryptographic systems. Shalev-Shwartz et al. [9] present a kernel-based algorithm for learning certain types of GLMs and SIMs in the agnostic setting. However, their worst-case guarantees are exponential in the norm of w (or equivalently the Lipschitz parameter).

Algorithm 1 GLM-TRON
Input: data ⟨(x_i, y_i)⟩_{i=1}^m ∈ ℝ^d × [0, 1], u : ℝ → [0, 1], held-out data ⟨(x_{m+j}, y_{m+j})⟩_{j=1}^s
w¹ := 0
for t = 1, 2, … do
  h^t(x) := u(w^t · x)
  w^{t+1} := w^t + (1/m) Σ_{i=1}^m (y_i − u(w^t · x_i)) x_i
end for
Output: arg min_{h^t} Σ_{j=1}^s (h^t(x_{m+j}) − y_{m+j})²
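A direct NumPy rendering of Algorithm 1 (a sketch of ours, not the authors' code); the fixed iteration budget T and the held-out selection follow the pseudo-code above.

```python
import numpy as np

def glmtron(X, y, u, X_hold, y_hold, T=100):
    """GLM-TRON: perceptron-like updates; return the iterate best on held-out data."""
    m, d = X.shape
    w = np.zeros(d)
    best_err, best_w = np.inf, w.copy()
    for _ in range(T):
        err = np.mean((u(X_hold @ w) - y_hold) ** 2)   # held-out squared error of h^t
        if err < best_err:
            best_err, best_w = err, w.copy()
        w = w + (X.T @ (y - u(X @ w))) / m             # w^{t+1} update from Algorithm 1
    return best_w

# e.g. w_hat = glmtron(X[:800], y[:800], u, X[800:], y[800:]) with the toy data above
```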
To analyze the performance of the algorithm, we show that if we run the algorithm for sufficiently many iterations, one of the predictors ht obtained must be nearly-optimal, compared to the Bayesoptimal predictor. Theorem 1. Suppose (x1 , y1 ), . . . , (xm , ym ) are drawn independently from a distribution supported on Bd ? [0, 1], such that E[y|x] = u(w ? x), where kwk ? W , and u : R ? [0, 1] is a known nondecreasing 1-Lipschitz function. Then for any ? ? p (0, 1), the following holds with probability at least 1 ? ?: there exists some iteration t < O(W m/ log(1/?)) of GLM- TRON such that the hypothesis ht (x) = u(wt ? x) satisfies ! r W 2 log(m/?) t t max{? ?(h ), ?(h )} ? O . m 3 Algorithm 2 S L I SOTRON d s Input: data h(xi , yi )im i=1 ? R ? [0, 1], held-out data h(xm+j , ym+j )ij=1 1 w := 0; for t = 1, 2, . . . do ut := LIR ((wt ? x1 , y1 ), . . . , (wt ? xm , ym )) // Fit 1-d function along wt m 1 X wt+1 := wt + (yi ? ut (wt ? xi ))xi m i=1 end for Ps Output: arg minht j=1 (ht (xm+j ) ? ym+j )2 In particular, the theorem implies that some ht has small enough ?(ht ). Since ?(ht ) equals err(ht ) up to a constant, we can easily find an appropriate ht by picking the one that has least err(h c t ) on a held-out set. The main idea of the proof is showing that at each iteration, if ??(ht ) is not small, then the squared dis 2 2 tance wt+1 ? w is substantially smaller than kwt ? wk . Since the squared distance is bounded 0 2 below by 0, and w ? w ? W 2 , there is an iteration (arrived at within reasonable time) such that the hypothesis ht at that iteration is highly accurate. Although the algorithm minimizes empirical squared error, we can bound the true error using a uniform convergence argument. The complete proofs are provided in the full version of the paper ([11] Appendix A). 4 The S L I SOTRON algorithm In this section, we present S L I SOTRON (Alg. 2), which is applicable to the harder setting where the transfer function u is unknown, except for it being non-decreasing and 1-Lipschitz. S L I SOTRON does have one parameter, the Lipschitz constant; however, in theory we show that this can simply be set to 1. The main difference between S L I SOTRON and GLM- TRON is that now the transfer function must also be learned, and the algorithm keeps track of a transfer function ut which changes from iteration to iteration. The algorithm is inspired by the Isotron algorithm [7], with the main difference being that at each iteration, instead of applying the PAV procedure to fit an arbitrary monotonic function along the direction wt , we use a different procedure, (Lipschitz Isotonic Regression) LIR, to fit a Lipschitz monotonic function, ut , along wt . This key difference allows for an analysis that does not require a fresh sample each iteration. We also provide an efficient O(m log(m)) time algorithm for LIR (see Section 4.1), making S L I SOTRON an extremely efficient algorithm. We now turn to the formal theorem about our algorithm. The formal guarantees parallel those of the GLM- TRON algorithm. However, the rates achieved are somewhat worse, due to the additional difficulty of simultaneously estimating both u and w. Theorem 2. Suppose (x1 , y1 ), . . . , (xm , ym ) are drawn independently from a distribution supported on Bd ? [0, 1], such that E[y|x] = u(w ? x), where kwk ? W , and u : R ? [0, 1] is an unknown non-decreasing 1-Lipschitz function. Then the following two bounds hold: 1. (Dimension-dependent) With probability at least 1 ? 
?, there exists some iteration t <  1/3  Wm O of S L I SOTRON such that d log(W m/?) max{? ?(ht ), ?(ht )} ? O  dW 2 log(W m/?) m 1/3 ! . 2. (Dimension-independent) With probability at least 1 ? ?, there exists some iteration t <  1/4  Wm O of S L I SOTRON such that log(m/?) t  t max{? ?(h ), ?(h )} ? O 4 W 2 log(m/?) m 1/4 ! As in the case of Thm. 1, one can easily find ht which satisfies the theorem?s conditions, by running the S L I SOTRON algorithm for sufficiently many iterations, and choosing the hypothesis ht which minimizes err(h c t ) on a held-out set. The algorithm minimizes empirical error and generalization bounds are obtained using a uniform convergence argument. The proofs are somewhat involved and appear in the full paper ([11] Appendix B). 4.1 Lipschitz isotonic regression The S L I SOTRON algorithm (Alg. 2) performs Lipschitz Isotonic Regression (LIR) at each iteration. The goal is to find the best fit (least squared error) non-decreasing 1-Lipschitz function that fits the data in one dimension. Let (z1 , y1 ), . . . (zm , ym ) be such that zi ? R, yi ? [0, 1] and z1 ? z2 ? ? ? ? ? zm . The Lipschitz Isotonic Regression (LIR) problem is defined as the following quadratic program: m Minimize w.r.t y?i : 1X (yi ? y?i )2 2 i=1 (1) subject to: y?i ? y?i+1 y?i+1 ? y?i ? (zi+1 ? zi ) 1 ? i ? m ? 1 (Monotonicity) 1 ? i ? m ? 1 (Lipschitz) (2) (3) Once the values y?i are obtained at the data points, the actual function can be constructed by interpolating linearly between the data points. Prior to this work, the best known algorithm for this problem wass due to Yeganova and Wilbur [10] and required O(m2 ) time for m points. In this work, we present an algorithm that performs the task in O(m log(m)) time. The actual algorithm is fairly complex and relies on designing a clever data structure. We provide a high-level view here; the details are provided in the full version ([11] Appendix D). Algorithm Sketch: We define functions Gi (?), where Gi (s) is the minimum squared loss that can be attained if y?i is fixed to be s, and y?i+1 , . . . y?m are then chosen to be the best fit 1-Lipschitz non-decreasing function to the points (zi , yi ), . . . , (zm , ym ). Formally, for i = 1, . . . , m, define the functions, Gi (s) = min y?i+1 ,...,? ym m 1 X 1 2 (s ? yi ) + (? yj ? yj )2 2 2 j=i+1 (4) subject to the constraints (where s = y?i ), y?j ? y?j+1 y?j+1 ? y?j ? zj+1 ? zj i ? j ? m ? 1 (Monotonic) i ? j ? m ? 1 (Lipschitz) Furthermore, define: s?i = mins Gi (s). The functions Gi are piecewise quadratic, differentiable everywhere and strictly convex, a fact we prove in full paper [11]. Thus, Gi is minimized at s?i and it is strictly increasing on both sides of s?i . Note that Gm (s) = (1/2)(s ? ym )2 and hence is piecewise quadratic, differentiable everywhere and strictly convex. Let ?i = zi+1 ? zi . The remaining Gi obey the following recursive relation. ( Gi (s + ?i?1 ) If s ? s?i ? ?i?1 1 2 Gi (s?i ) If s?i ? ?i?1 < s ? s?i Gi?1 (s) = (s ? yi?1 ) + (5) 2 G (s) If s? < s i i As intuition for the above relation, note that Gi?1 (s) is obtained fixing y?i?1 = s and then by choosing y?i as close to s?i (since Gi is strictly increasing on both sides of s?i ) as possible without violating either the monotonicity or Lipschitz constraints. The above argument can be immediately translated into an algorithm, if the values s?i are known. 
Since s?1 minimizes G1 (s), which is the same as the objective of (1), start with y?1 = s?1 , and then successively chose values for y?i to be as close to s?i as possible without violating the Lipschitz or monotonicity constraints. This will produce an assignment for y?i which achieves loss equal to G1 (s?1 ) and hence is optimal. 5 b (a) (b) Figure 1: (a) Finding the zero of G0i . (b) Update step to transform representation of G0i to G0i?1 The harder part of the algorithm is finding the values s?i . Notice that G0i are all piecewise linear, continuous and strictly increasing, and obey a similar recursive relation (G0m (s) = s ? ym ): ( 0 Gi (s + ?i?1 ) If s ? s?i ? ?i?1 0 0 If s?i ? ?i?1 < s ? s?i Gi?1 (s) = (s ? yi?1 ) + (6) G0i (s) If s?i < s The algorithm then finds s?i by finding zeros of G0i . Starting from m, G0m = s ? ym , and s?m = ym . We design a special data structure, called notable red-black trees, for representing piecewise linear, continuous, strictly increasing functions. We initialize such a tree T to represent G0m (s) = s ? ym . Assuming that at some time it represents G0i , we need to support two operations: 1. Find the zero of G0i to get s?i . Such an operation can be done efficiently O(log(m)) time using a tree-like structure (Fig. 1 (a)). 2. Update T to represent G0i?1 . This operation is more complicated, but using the relation (6), we do the following: Split the interval containing s0i . Move the left half of the piecewise linear function G0i by ?i?1 (Fig. 1(b)), adding the constant zero function in between. Finally, we add the linear function s ? yi?1 to every interval, to get G0i?1 , which is again piecewise linear, continuous and strictly increasing. To perform the operations in step (2) above, we cannot na??vely apply the transformations, shift-by(?i?1 ) and add(s ? yi?1 ) to every node in the tree, as it may take O(m) operations. Instead, we simply leave a note (hence the name notable red-black trees) that such a transformation should be applied before the function is evaluated at that node or at any of its descendants. To prevent a large number of such notes accumulating at any given node we show that these notes satisfy certain commutative and additive relations, thus requiring us to keep track of no more than 2 notes at any given node. This lazy evaluation of notes allows us to perform all of the above operations in O(log(m)) time. The details of the construction are provided in the full paper ([11] Appendix D). 5 Experiments In this section, we present an empirical study of the S L I SOTRON and GLM- TRON algorithms. We perform two evaluations using synthetic data. The first one compares S L I SOTRON and Isotron [7] and illustrates the importance of imposing a Lipschitz constraint. The second one demonstrates the advantage of using S L I SOTRON over standard regression techniques, in the sense that S L I SOTRON can learn any monotonic Lipschitz function. We also report results of an evaluation of S L I SOTRON, GLM- TRON and several competing approaches on 5 UCI[12] datasets. All errors are reported in terms of average root mean squared error (RMSE) using 10 fold cross validation along with the standard deviation. 5.1 Synthetic Experiments Although, the theoretical guarantees for Isotron are under the assumption that we get a fresh sample each round, one may still attempt to run Isotron on the same sample each iteration and evaluate the 6 1 1 0.9 0.9 0.8 0.8 0.7 0.7 0.6 0.5 0.6 0.4 0.5 0.3 0.4 0.2 Slisotron Isotron 0.1 0 ?1 ?0.8 ?0.6 S L I SOTRON 0.289 ? 
0.014 ?0.4 ?0.2 0 0.2 Isotron 0.334 ? 0.026 0.4 0.6 0.8 0.3 Slisotron 0.2 ?0.6 1 ? 0.045 ? 0.018 ?0.4 S L I SOTRON 0.058 ? 0.003 (a) Synthetic Experiment 1 ?0.2 0 0.2 Logistic 0.073 ? 0.006 0.4 0.6 0.8 ? 0.015 ? 0.004 (b) Synthetic Experiment 2 Figure 2: (a) The figure shows the transfer functions as predicted by S L I SOTRON and Isotron. The table shows the average RMSE using 10 fold cross validation. The ? column shows the average difference between the RMSE values of the two algorithms across the folds. (b) The figure shows the transfer function as predicted by S L I SOTRON. Table shows the average RMSE using 10 fold cross validation for S L I SOTRON and Logistic Regression. The ? column shows the average difference between the RMSE values of the two algorithms across folds. empirical performance. Then, the main difference between S L I SOTRON and Isotron is that while S L I SOTRON fits the best Lipschitz monotonic function using LIR each iteration, Isotron merely finds the best monotonic fit using PAV. This difference is analogous to finding a large margin classifier vs. just a consistent one. We believe this difference will be particularly relevant when the data is sparse and lies in a high dimensional space. Our first synthetic dataset is the following: The dataset is of size m = 1500 in d = 500 dimensions. The first co-ordinate of each point is chosen uniformly at random from {?1, 0, 1}. The remaining co-ordinates are all 0, except that for each data point one of the remaining co-ordinates is randomly set to 1. The true direction is w = (1, 0, . . . , 0) and the transfer function is u(z) = (1 + z)/2. Both S L I SOTRON and Isotron put weight on the first co-ordinate (the true direction). However, Isotron overfits the data using the remaining (irrelevant) co-ordinates, which S L I SOTRON is prevented from doing because of the Lipschitz constraint. Figure 2(a) shows the transfer functions as predicted by the two algorithms, and the table below the plot shows the average RMSE using 10 fold cross validation. The ? column shows the average difference between the RMSE values of the two algorithms across the folds. A principle advantage of S L I SOTRON over standard regression techniques is that it is not necessary to know the transfer function in advance. The second synthetic experiment is designed as a sanity check to verify this claim. The dataset is of size m = 1000 in d = 4 dimensions. We chose a random direction as the ?true? w and used a piecewise linear function as the ?true? u. We then added random noise (? = 0.1) to the y values. We compared S L I SOTRON to Logistic Regression on this dataset. S L I SOTRON correctly recovers the true function (up to some scaling). Fig. 2(b) shows the actual transfer function as predicted by S L I SOTRON, which is essentially the function we used. The table below the figure shows the performance comparison between S L I SOTRON and logistic regression. 5.2 Real World Datasets We now turn to describe the results of experiments performed on the following 5 UCI datasets: communities, concrete, housing, parkinsons, and wine-quality. We compared the performance of S L I SOTRON (Sl-Iso) and GLM- TRON with logistic transfer function (GLM-t) against Isotron (Iso), as well as standard logistic regression (Log-R), linear regression (Lin-R) and a simple heuristic algorithm (SIM) for single index models, along the lines of standard iterative maximum-likelihood procedures for these types of problems (e.g., [13]). 
The SIM algorithm works by iteratively fixing the direction w and finding the best transfer function u, and then fixing u and 7 optimizing w via gradient descent. For each of the algorithms we performed 10-fold cross validation, using 1 fold each time as the test set, and we report averaged results across the folds. Table 1 shows average RMSE values of all the algorithms across 10 folds. The first column shows the mean Y value (with standard deviation) of the dataset for comparison. Table 2 shows the average difference between RMSE values of S L I SOTRON and the other algorithms across the folds. Negative values indicate that the algorithm performed better than S L I SOTRON. The results suggest that the performance of S L I SOTRON (and even Isotron) is comparable to other regression techniques and in many cases also slightly better. The performance of GLM- TRON is similar to standard implementations of logistic regression on these datasets. This suggests that these algorithms should work well in practice, while providing non-trivial theoretical guarantees. It is also illustrative to see how the transfer functions found by S L I SOTRON and Isotron compare. In Figure 3, we plot the transfer functions for concrete and communities. We see that the fits found by S L I SOTRON tend to be smoother because of the Lipschitz constraint. We also observe that concrete is the only dataset where S L I SOTRON performs noticeably better than logistic regression, and the transfer function is indeed somewhat far from the logistic function. Table 1: Average RMSE values using 10 fold cross validation. The Y? column shows the mean Y value and standard deviation. dataset communities concrete housing parkinsons winequality Y? 0.24 ? 0.23 35.8 ? 16.7 22.5 ? 9.2 29 ? 10.7 5.9 ? 0.9 Sl-Iso 0.13 ? 0.01 9.9 ? 0.9 4.65 ? 1.00 10.1 ? 0.2 0.78 ? 0.04 GLM-t 0.14 ? 0.01 10.5 ? 1.0 4.85 ? 0.95 10.3 ? 0.2 0.79 ? 0.04 Iso 0.14 ? 0.01 9.9 ? 0.8 4.68 ? 0.98 10.1 ? 0.2 0.78 ? 0.04 Lin-R 0.14 ? 0.01 10.4 ? 1.1 4.81 ? 0.99 10.2 ? 0.2 0.75 ? 0.04 Log-R 0.14 ? 0.01 10.4 ? 1.0 4.70 ? 0.98 10.2 ? 0.2 0.75 ? 0.04 SIM 0.14 ? 0.01 9.9 ? 0.9 4.63 ? 0.78 10.3 ? 0.2 0.78 ? 0.03 Table 2: Performance comparison of S L I SOTRON with the other algorithms. The values reported are the average difference between RMSE values of the algorithm and S L I SOTRON across the folds. Negative values indicate better performance than S L I SOTRON. dataset communities concrete housing parkinsons winequality GLM-t 0.00 ? 0.00 0.56 ? 0.35 0.20 ? 0.48 0.19 ? 0.09 0.01 ? 0.01 Iso 0.00 ? 0.00 0.04 ? 0.17 0.03 ? 0.55 0.01 ? 0.03 0.00 ? 0.00 Lin-R 0.00 ? 0.00 0.52 ? 0.35 0.16 ? 0.49 0.11 ? 0.07 -0.03 ? 0.02 1 1 0.9 0.9 0.8 0.8 0.7 0.7 0.6 0.6 0.5 0.5 0.4 0.4 0.3 0.3 0.2 Log-R 0.00 ? 0.00 0.55 ? 0.32 0.05 ? 0.43 0.09 ? 0.07 -0.03 ? 0.02 SIM 0.00 ? 0.00 -0.03 ? 0.26 -0.02 ? 0.53 0.21 ? 0.20 0.01 ? 0.01 0.2 Slisotron Isotron 0.1 0 ?1 ?0.8 ?0.6 ?0.4 ?0.2 0 0.2 0.4 0.6 0.8 Slisotron Isotron 0.1 0 1 ?1 (a) concrete ?0.8 ?0.6 ?0.4 ?0.2 0 0.2 0.4 0.6 0.8 1 (b) communities Figure 3: The transfer function u as predicted by S L I SOTRON (blue) and Isotron (red) for the concrete and communities datasets. The domain of both functions was normalized to [?1, 1]. 8 References [1] P. McCullagh and J. A. Nelder. Generalized Linear Models (2nd ed.). Chapman and Hall, 1989. [2] P. Hall W. H?ardle and H. Ichimura. Optimal smoothing in single-index models. Annals of Statistics, 21(1):157?178, 1993. [3] J. Horowitz and W. H?ardle. 
Direct semiparametric estimation of single-index models with discrete covariates, 1994. [4] A. Juditsky M. Hristache and V. Spokoiny. Direct estimation of the index coefficients in a single-index model. Technical Report 3433, INRIA, May 1998. [5] P. Naik and C. Tsai. Isotonic single-index model for high-dimensional database marketing. Computational Statistics and Data Analysis, 47:775?790, 2004. [6] P. Ravikumar, M. Wainwright, and B. Yu. Single index convex experts: Efficient estimation via adapted bregman losses. Snowbird Workshop, 2008. [7] A. T. Kalai and R. Sastry. The isotron algorithm: High-dimensional isotonic regression. In COLT ?09, 2009. [8] A. T. Kalai, A. R. Klivans, Y. Mansour, and R. A. Servedio. Agnostically learning halfspaces. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, FOCS ?05, pages 11?20, Washington, DC, USA, 2005. IEEE Computer Society. [9] S. Shalev-Shwartz, O. Shamir, and K. Sridharan. Learning kernel-based halfspaces with the zero-one loss. In COLT, 2010. [10] L. Yeganova and W. J. Wilbur. Isotonic regression under lipschitz constraint. Journal of Optimization Theory and Applications, 141(2):429?443, 2009. [11] S. M. Kakade, A. T. Kalai, V. Kanade, and O. Shamir. Efficient learning of generalized linear and single index models with isotonic regression. arxiv.org/abs/1104.2018. [12] UCI. University of california, irvine: http://archive.ics.uci.edu/ml/. [13] S. Cosslett. Distribution-free maximum-likelihood estimator of the binary choice model. Econometrica, 51(3), May 1983. 9
3,788
443
Connectionist Optimisation of Tied Mixture Hidden Markov Models

Steve Renals, Nelson Morgan (ICSI, Berkeley CA 94704, USA); Herve Bourlard (L&H Speech Products, Ieper B-9800, Belgium); Horacio Franco, Michael Cohen (SRI International, Menlo Park CA 94025, USA)

Abstract

Issues relating to the estimation of hidden Markov model (HMM) local probabilities are discussed. In particular we note the isomorphism of radial basis function (RBF) networks to tied mixture density modelling; additionally we highlight the differences between these methods arising from the different training criteria employed. We present a method in which connectionist training can be modified to resolve these differences and discuss some preliminary experiments. Finally, we discuss some outstanding problems with discriminative training.

1 INTRODUCTION

In a statistical approach to continuous speech recognition the desired quantity is the posterior probability P(W_1^W | X_1^T, Θ) of a word sequence W_1^W = w_1, ..., w_W given the acoustic evidence X_1^T = x_1, ..., x_T and the parameters Θ of the speech model used. Typically a set of models is used, to separately model different units of speech. This probability may be re-expressed using Bayes' rule:

(1)  P(W_1^W | X_1^T, Θ) = P(X_1^T | W_1^W, Θ) P(W_1^W | Θ) / P(X_1^T | Θ),  where  P(X_1^T | Θ) = Σ_{W'} P(X_1^T | W', Θ) P(W' | Θ).

P(X_1^T | W_1^W, Θ) / P(X_1^T | Θ) is the acoustic model. This is the ratio of the likelihood of the acoustic evidence given the sequence of word models to the probability of the acoustic data being generated by the complete set of models. P(X_1^T | Θ) may be regarded as a normalising term that is constant (across models) at recognition time. However, at training time the parameters Θ are being adapted, so P(X_1^T | Θ) is no longer constant. The prior, P(W_1^W | Θ), is obtained from a language model.

The basic unit of speech, typically smaller than a word (here we use phones), is modelled by a hidden Markov model (HMM). Word models consist of concatenations of phone HMMs (constrained by pronunciations stored in a lexicon), and sentence models consist of concatenations of word HMMs (constrained by a grammar). The lexicon and grammar together make up a language model, specifying prior probabilities for sentences, words and phones.

A HMM is a stochastic automaton defined by a set of states q_i, a topology specifying allowed state transitions and a set of local probability density functions (PDFs) P(x_t, q_i | q_j, X_1^{t-1}). Making the further assumptions that the output at time t is independent of previous outputs and depends only on the current state, we may separate the local probabilities into state transition probabilities P(q_i | q_j) and output PDFs P(x_t | q_i). A set of initial state probabilities must also be specified.

The parameters of a HMM are usually set via a maximum likelihood procedure that optimally estimates the joint density P(q, x | Θ). The forward-backward algorithm, a provably convergent algorithm for this task, is extremely efficient in practice. However, in speech recognition we do not wish to make the best model of the data {x, q} given the model parameters; we want to make the optimal discrimination between classes at each time. This can be better achieved by computing a discriminant P(q | x, Θ). Note that in this case we do not model the input density P(x | Θ). We may estimate P(q | x, Θ) using a feed-forward network trained to an entropy criterion (Bourlard & Wellekens, 1989). However, we require likelihoods of the form P(x | q, Θ) as HMM output probabilities.

We may convert posterior probabilities to scaled likelihoods P(x | q, Θ) / P(x | Θ) by dividing the network outputs by the relative frequencies of each class¹. Note that we are not using connectionist training to obtain density estimates here; we are obtaining a ratio and not modelling P(x | Θ). This ratio is the quantity that we wish to maximise: this corresponds to maximising P(x | q_c, Θ) and minimising P(x | q_i, Θ), i ≠ c, where q_c is the correct class. We have used discriminatively trained networks to estimate the output PDFs (Bourlard & Morgan, 1991; Renals et al., 1991, 1992), and have obtained superior results to maximum likelihood training on continuous speech recognition tasks.

¹ These are the estimates of P(q_i) implicitly used during classifier training.
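The posterior-to-scaled-likelihood conversion is a single division. A minimal sketch in Python/NumPy, under the assumption that the network outputs are softmax posteriors and that the class priors are estimated by relative frequencies from the training alignment (array names and shapes are our own illustration, not from the paper):

    import numpy as np

    def scaled_likelihoods(posteriors, class_counts):
        """Convert network outputs P(q | x) into scaled likelihoods P(x | q) / P(x).

        posteriors   : (T, K) array, softmax outputs for T frames and K states.
        class_counts : (K,) array of state occupancy counts from the training
                       alignment, used as estimates of the priors P(q).
        """
        priors = class_counts / class_counts.sum()  # relative frequencies P(q)
        # Bayes' rule: P(x | q) / P(x) = P(q | x) / P(q)
        return posteriors / priors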
In this paper, we are mainly concerned with radial basis function (RBF) networks. A RBF network generally has a single hidden layer, whose units may be regarded as computing local (or approximately local) densities, rather than global decision surfaces. The resultant posteriors are obtained by output units that combine these local densities. We are interested in using RBF networks for various reasons:

- A RBF network is isomorphic to a tied mixture density model, although the training criterion is typically different. The relationship between the two is explored in this paper.
- The locality of RBFs makes them suitable for situations in which the input distribution may change (e.g. speaker adaptation). Surplus RBFs in a region of the input space where data no longer occur will not affect the final classification. This is not so for sigmoidal hidden units in a multi-layer perceptron (MLP), which have a global effect.
- RBFs are potentially more computationally efficient than MLPs at both training and recognition time.

2 TIED MIXTURE HMM

Tied mixtures of Gaussians have proven to be powerful PDF estimators in HMM speech recognition systems (Huang & Jack, 1989; Bellegarda & Nahamoo, 1990). The resulting systems are also known as semi-continuous HMMs. Tied mixture density estimation may be regarded as an interpolation between discrete and continuous density modelling. Essentially, tied mixture modelling has a single "codebook" of Gaussians shared by all output PDFs. Each of these PDFs has its own set of mixture coefficients used to combine the individual Gaussians. If h(x | q_k) is the output PDF of state q_k, and N_j(x | μ_j, Σ_j) are the component Gaussians, then:

(2)  h(x | q_k, Θ) = Σ_j a_kj N_j(x | μ_j, Σ_j),   Σ_j a_kj = 1,

where a_kj is an element of the matrix of mixture coefficients (which may be interpreted as the prior probability P(μ_j, Σ_j | q_k)) defining how much component density N_j(x | μ_j, Σ_j) contributes to output PDF h(x | q_k, Θ). Alternatively this may be regarded as "fuzzy" vector quantisation.

3 RADIAL BASIS FUNCTIONS

The radial basis functions (RBF) network was originally introduced as a means of function interpolation (Powell, 1985; Broomhead & Lowe, 1988). A set of K approximating functions f_k(x) is constructed from a set of J basis functions φ_j(x):

(3)  f_k(x) = Σ_{j=1}^{J} a_kj φ_j(x).

This equation defines a RBF network with J RBFs (hidden units) and K outputs. The output units here are linear, with weights a_kj. The RBFs are typically Gaussians, with means μ_j and covariance matrices Σ_j:

(4)  φ_j(x) = (1/R) exp( -(1/2) (x - μ_j)ᵀ Σ_j⁻¹ (x - μ_j) ),

where R is a normalising constant. The covariance matrix is frequently assumed to be diagonal².

² This is often reasonable for speech applications, since mel or PLP cepstral coefficients are orthogonal.
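Equations (2) through (4) are cheap to evaluate precisely because the Gaussian codebook is shared: the basis is computed once per frame and then mixed per state. A sketch under the diagonal-covariance assumption (the helper names and array layouts are hypothetical):

    import numpy as np

    def gaussian_basis(x, means, variances):
        # phi_j(x) = (1/R) exp(-0.5 (x - mu_j)^T Sigma_j^{-1} (x - mu_j)),
        # with diagonal Sigma_j, so R = prod_i sqrt(2 pi sigma^2_{ji}).
        diff = x[None, :] - means                           # (J, D)
        expo = -0.5 * np.sum(diff**2 / variances, axis=1)   # (J,)
        log_R = 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
        return np.exp(expo - log_R)                         # (J,)

    def tied_mixture_likelihoods(x, means, variances, A):
        # h(x | q_k) = sum_j a_kj phi_j(x); the rows of A are the per-state
        # mixture coefficients, the Gaussian codebook is shared across states.
        phi = gaussian_basis(x, means, variances)           # (J,)
        return A @ phi                                      # (K,)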
Such a network has been used for HMM output probability estimation in continuous speech recognition (Renals et al., 1991) and an isomorphism to tied-mixture HMMs was noted. However, there is a mismatch between the posterior probabilities estimated by the network and the likelihoods required for the HMM decoding. Previously this was resolved by dividing the outputs by the relative frequencies of each state. It would be desirable, though, to retain the isomorphism to tied mixtures: specifically we wish to interpret the hidden-to-output weights of an RBF network as the mixture coefficients of a tied mixture likelihood function. This can be achieved by defining the transfer functions of the output units to implement Bayes' rule, which relates the posterior g_k(x) to the likelihood f_k(x):

(5)  g_k(x) = f_k(x) P(q_k) / Σ_l f_l(x) P(q_l).

Such a transfer function ensures the outputs sum to 1; if f_k(x) is guaranteed non-negative, then the outputs are formally probabilities. The output of such a network is a probability distribution and we are using '1-from-K' training: thus the relative entropy E is simply

(6)  E = -log g_c(x),

where q_c is the desired output class (HMM distribution). Bridle (1990) has demonstrated that minimising this error function is equivalent to maximising the mutual information between the acoustic evidence and HMM state sequence.

If we wish to interpret the weights as mixture coefficients, then we must ensure that they are non-negative and sum to 1. This may be achieved using a normalised exponential (softmax) transformation:

(7)  a_kj = exp(w_kj) / Σ_{j'} exp(w_{kj'}).

The mixture coefficients a_kj are used to compute the likelihood estimates, but it is the derived variables w_kj that are used in the unconstrained optimisation.

3.1 TRAINING

Steepest descent training specifies that:

(8)  Δw_kj ∝ -∂E/∂w_kj.

Here E is the relative entropy objective function (6). We may decompose the right hand side by a careful application of the chain rule of differentiation:

(9)  ∂E/∂w_kj = Σ_l (∂E/∂g_l(x)) (∂g_l(x)/∂f_k(x)) Σ_{j'} (∂f_k(x)/∂a_{kj'}) (∂a_{kj'}/∂w_kj).

We may write down expressions for each of these partials (where δ_ab is the Kronecker delta and q_c is the desired state):

(10)  ∂E/∂g_l(x) = -δ_lc / g_c(x)
(11)  ∂g_l(x)/∂f_k(x) = (g_k(x) / f_k(x)) (δ_lk - g_l(x))
(12)  ∂f_k(x)/∂a_kj = φ_j(x)
(13)  ∂a_kj/∂w_{kj'} = a_kj (δ_jj' - a_{kj'}).

Substituting (10), (11), (12) and (13) into (9) we obtain:

(14)  ∂E/∂w_kj = (1 / f_k(x)) (g_k(x) - δ_kc) a_kj (φ_j(x) - f_k(x)).

Apart from the added terms due to the normalisation of the weights, the major difference in the gradient compared with using a sigmoid or softmax transfer function is the 1/f_k(x) factor. To some extent we may regard this as a dimensional term. The required gradient is simpler if we construct the network to estimate log likelihoods, replacing f_k(x) with z_k(x) = log f_k(x):

(15)  z_k(x) = Σ_j w_kj φ_j(x)
(16)  g_k(x) = P(q_k) exp(z_k(x)) / Σ_l P(q_l) exp(z_l(x)).

Since this is in the log domain, no constraints on the weights are required. The new gradient we need is:

(17)  ∂z_k(x)/∂w_kj = φ_j(x).

Thus the gradient of the error is:

(18)  ∂E/∂w_kj = (g_k(x) - δ_kc) φ_j(x).

Since we are in the log domain, the 1/f_k(x) factor is additive and thus disappears from the gradient. This network is similar to Bridle's softmax, except here uniform priors are not assumed; the gradient is of identical form, though. In this case the weights do not have a simple relationship with the mixture coefficients obtained in tied mixture density modelling.
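In the log-domain formulation, the forward pass of (15)-(16) and the weight gradient (18) reduce to a few lines. A sketch, assuming the priors are log relative frequencies and a single frame is processed at a time (function and variable names are illustrative only):

    import numpy as np

    def forward_log_domain(phi, W, log_priors):
        # z_k(x) = sum_j w_kj phi_j(x);  g_k(x) proportional to P(q_k) exp(z_k(x))
        z = W @ phi                             # (K,)
        logits = log_priors + z
        g = np.exp(logits - logits.max())       # subtract max for stability
        return g / g.sum()

    def weight_gradient(g, phi, correct_class):
        # dE/dw_kj = (g_k(x) - delta_kc) phi_j(x)   -- equation (18)
        delta = np.zeros_like(g)
        delta[correct_class] = 1.0
        return np.outer(g - delta, phi)         # (K, J)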
We may also train the means and variances of the RBFs by back-propagation of error; the gradients are straightforward.

3.2 PRELIMINARY EXPERIMENTS

We have experimented with both the Bayes' rule transfer function (5) and the variant in the log domain (16). We used a phoneme classification task, with a database consisting of 160,000 frames of continuous speech. We typically computed the parameters of the RBFs by a k-means clustering process. We found that the gradient resulting from the first transfer function (14) had a tendency to numerical instability, due to the 1/f_k(x) term; thus most of our experiments have used the log domain transfer function. In experiments using 1000 RBFs, we have obtained frame classification rates of 52%. This is somewhat poorer than the frame classification we obtain using a 512 hidden unit MLP (59%). We are investigating improvements to our procedure, including variations to the learning schedule, the use of the EM algorithm to set RBF parameters and the use of priors on the weight matrix.

4 PROBLEMS WITH DISCRIMINATIVE TRAINING

4.1 UNLABELLED DATA

A problem arises from the use of unlabelled or partially labelled data. When training a speech recogniser, we typically know the word sequence for an utterance, but we do not have a time-aligned phonetic transcription. This is a case of partially labelled data: a training set of data pairs {x_t, q_t} is unavailable, but we do not have purely unlabelled data {x_t}. Instead, we have the constraining information of the word sequence W. Thus P(q_i | x_t) may be decomposed as:

(19)  P(q_i | x_t) = Σ_W P(q_i | x_t, W) P(W | x_t).

We usually make the further approximation that the optimal state sequence is much more likely than any competing state sequence. Thus, P(q_c | x_t) = 1, and the probabilities of all other states at time t are 0. This most likely state sequence (which may be computed using a forced Viterbi alignment) is often used as the desired outputs for a discriminatively trained network. Using this alignment implicitly assumes model correctness; however, we use discriminative training because we believe the HMMs are an inadequate speech model. Hence there is a mismatch between the maximum likelihood labelling and alignment, and the discriminative training used for the networks.

It may be that this mismatch is responsible for the lack of robustness of discriminative training (compared with pure maximum likelihood training) in vocabulary independent speech recognition tasks (Paul et al., 1991). The assumption of model correctness used to generate the labels may have the effect of further embedding specifics of the training data into the final models. A solution to this problem may be to use a probabilistic alignment, with a distribution over labels at each timestep. This could be computed using the forward-backward algorithm, rather than the Viterbi approximation. This maximum likelihood approach still assumes model correctness, of course.
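The probabilistic alignment just mentioned replaces one-hot Viterbi targets by state posteriors γ_t(i) = P(q_t = i | X, W), computed with the forward-backward algorithm on the HMM composed for the known word sequence. A compact sketch, assuming the composed model's transition matrix, initial probabilities, and per-frame output likelihoods are given (per-step rescaling is used for numerical stability):

    import numpy as np

    def state_posteriors(log_b, A, pi):
        """gamma[t, i] = P(q_t = i | x_1..x_T) for one composed utterance HMM.

        log_b : (T, N) log output likelihoods log P(x_t | q_i)
        A     : (N, N) transition matrix;  pi : (N,) initial probabilities
        """
        T, N = log_b.shape
        b = np.exp(log_b - log_b.max(axis=1, keepdims=True))  # per-frame rescaling
        alpha = np.zeros((T, N)); beta = np.ones((T, N))
        alpha[0] = pi * b[0]; alpha[0] /= alpha[0].sum()
        for t in range(1, T):                                 # forward pass
            alpha[t] = (alpha[t - 1] @ A) * b[t]
            alpha[t] /= alpha[t].sum()
        for t in range(T - 2, -1, -1):                        # backward pass
            beta[t] = A @ (b[t + 1] * beta[t + 1])
            beta[t] /= beta[t].sum()
        # the per-t scaling constants cancel after row normalization
        gamma = alpha * beta
        return gamma / gamma.sum(axis=1, keepdims=True)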
A discriminative approach to this problem would also attempt to infer distributions over labels. A basic goal might be to sharpen the distribution toward the maximum likelihood estimate. An example of such a method is the 'phantom targets' algorithm introduced by Bridle & Cox (1991). These optimisations are local: the error is not propagated through time. Algorithms for globally optimising discriminative training have been proposed (e.g. Bengio et al., these proceedings), but are not without problems when used with a constraining language model. The problem is that to compute the posterior, the ratio of the probabilities of generating the correct utterance and generating all allowable utterances must be computed.

4.2 THE PRIORS

It has been shown, both theoretically and in practice, that the training and recognition procedures used with standard HMMs remain valid for posterior probabilities (Bourlard & Wellekens, 1989). Why then do we replace these posterior probabilities with likelihoods? The answer to this problem lies in a mismatch between the prior probabilities given by the training data and those imposed by the topology of the HMMs. Choosing the HMM topology also amounts to fixing the priors. For instance, if classes q_k represent phones, prior probabilities P(q_k) are fixed when word models are defined as particular sequences of phone models. This discussion can be extended to different levels of processing: if q_k represents sub-phonemic states and recognition is constrained by a language model, prior probabilities P(q_k) are fixed by (and can be calculated from) the phone models, word models and the language model.

Ideally, the topologies of these models would be inferred directly from the training data, by using a discriminative criterion which implicitly contains the priors. Here, at least in theory, it would be possible to start from fully-connected models and to determine their topology according to the priors observed on the training data. Unfortunately this results in a huge number of parameters that would require an unrealistic amount of training data to estimate them significantly. This problem has also been raised in the context of language modelling (Paul et al., 1991). Since the ideal theoretical solution is not accessible in practice, it is usually better to dispose of the poor estimate of the priors obtained using the training data, replacing them with "prior" phonological or syntactic knowledge.

5 CONCLUSION

Having discussed the similarities and differences between RBF networks and tied mixture density estimators, we present a method that attempts to resolve a mismatch between discriminative training and density estimation. Some preliminary experiments relating to this approach were discussed; we are currently performing further speech recognition experiments using these methods. Finally we raised some important issues pertaining to discriminative training.

Acknowledgement

This work was partially funded by DARPA contract MDA904-90-C-5253.

References

Bellegarda, J. R. & Nahamoo, D. (1990). Tied mixture continuous parameter modeling for speech recognition. IEEE Transactions on Acoustics, Speech and Signal Processing, 38, 2033-2045.

Bourlard, H. & Morgan, N. (1991). Connectionist approaches to the use of Markov models for continuous speech recognition. In Lippmann, R. P., Moody, J. E., & Touretzky, D. S. (Eds.), Advances in Neural Information Processing Systems, Vol. 3, pp. 213-219. Morgan Kaufmann, San Mateo CA.

Bourlard, H. & Wellekens, C. J. (1989). Links between Markov models and multilayer perceptrons. In Touretzky, D. S. (Ed.), Advances in Neural Information Processing Systems, Vol. 1, pp. 502-510. Morgan Kaufmann, San Mateo CA.
Bridle, J. S. & Cox, S. J. (1991). RecNorm: Simultaneous normalisation and classification applied to speech recognition. In Lippmann, R. P., Moody, J. E., & Touretzky, D. S. (Eds.), Advances in Neural Information Processing Systems, Vol. 3, pp. 234-240. Morgan Kaufmann, San Mateo CA.

Bridle, J. S. (1990). Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. In Touretzky, D. S. (Ed.), Advances in Neural Information Processing Systems, Vol. 2, pp. 211-217. Morgan Kaufmann, San Mateo CA.

Broomhead, D. S. & Lowe, D. (1988). Multi-variable functional interpolation and adaptive networks. Complex Systems, 2, 321-355.

Huang, X. D. & Jack, M. A. (1989). Semi-continuous hidden Markov models for speech signals. Computer Speech and Language, 3, 239-251.

Paul, D. B., Baker, J. K., & Baker, J. M. (1991). On the interaction between true source, training and testing language models. In Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 569-572, Toronto.

Powell, M. J. D. (1985). Radial basis functions for multi-variable interpolation: a review. Tech. rep. DAMTP/NA12, Dept. of Applied Mathematics and Theoretical Physics, University of Cambridge.

Renals, S., McKelvie, D., & McInnes, F. (1991). A comparative study of continuous speech recognition using neural networks and hidden Markov models. In Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 369-372, Toronto.

Renals, S., Morgan, N., Cohen, M., & Franco, H. (1992). Connectionist probability estimation in the DECIPHER speech recognition system. In Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, San Francisco. In press.
A concave regularization technique for sparse mixture models

Martin Larsson (School of Operations Research and Information Engineering, Cornell University, [email protected]); Johan Ugander (Center for Applied Mathematics, Cornell University, [email protected])

Abstract

Latent variable mixture models are a powerful tool for exploring the structure in large datasets. A common challenge for interpreting such models is a desire to impose sparsity, the natural assumption that each data point only contains few latent features. Since mixture distributions are constrained in their L1 norm, typical sparsity techniques based on L1 regularization become toothless, and concave regularization becomes necessary. Unfortunately concave regularization typically results in EM algorithms that must perform problematic non-concave M-step maximizations. In this work, we introduce a technique for circumventing this difficulty, using the so-called Mountain Pass Theorem to provide easily verifiable conditions under which the M-step is well-behaved despite the lacking concavity. We also develop a correspondence between logarithmic regularization and what we term the pseudo-Dirichlet distribution, a generalization of the ordinary Dirichlet distribution well-suited for inducing sparsity. We demonstrate our approach on a text corpus, inferring a sparse topic mixture model for 2,406 weblogs.

1 Introduction

The current trend towards "big data" has created a strong demand for techniques to efficiently extract structure from ever-accumulating unstructured datasets. Specific contexts for this demand include latent semantic models for organizing text corpora, image feature extraction models for navigating large photo datasets, and community detection in social networks for optimizing content delivery. Mixture models identify such latent structure, helping to categorize unstructured data.

Mixture models approach datasets as a set D of elements d ∈ D, for example images or text documents. Each element consists of a collection of words w ∈ W drawn with replacement from a vocabulary W. Each element-word pair observation is further assumed to be associated with an unobserved class z ∈ Z, where Z is the set of classes. Ordinarily it is assumed that |Z| ≪ |D|, namely that the number of classes is much less than the number of elements. In this work we explore an additional sparsity assumption, namely that individual elements only incorporate a small subset of the |Z| classes, so that each element arises as a mixture of only ℓ ≪ |Z| classes. We develop a framework to overcome mathematical difficulties in how this assumption can be harnessed to improve the performance of mixture models.

Our primary context for mixture modeling in this work will be latent semantic models of text data, where elements d are documents, words w are literal words, and classes z are vocabulary topics. We apply our framework to models based on Probabilistic Latent Semantic Analysis (PLSA) [1]. While PLSA is often outperformed within text applications by techniques such as Latent Dirichlet Allocation (LDA) [2], it forms the foundation of many mixture model techniques, from computer vision [3] to network community detection [4], and we emphasize that our contribution is an optimization technique intended for broad application outside merely topic models for text corpora. The near-equivalence between PLSA and Nonnegative Matrix Factorization (NMF) [5, 6] implies that our technique is equally applicable to NMF problems as well.
Sparse inference as a rule targets point estimation, which makes PLSA-style models appropriate since they are inherently frequentist, deriving point-estimated models via likelihood maximization. In contrast, fully Bayesian frameworks such as Latent Dirichlet Allocation (LDA) output a posterior distribution across the model space. Sparse inference is commonly achieved through two largely equivalent techniques: regularization or MAP inference. Regularization modifies ordinary likelihood maximization with a penalty on the magnitudes of the parameters. Maximum a posteriori (MAP) inference employs priors concentrated towards small parameter values. MAP PLSA is an established technique [7], but earlier work has been limited to log-concave prior distributions (corresponding to convex regularization functions) that make a concave contribution to the posterior log-likelihood. While such priors allow for tractable EM algorithms, they have the effect of promoting smoothing rather than sparsity. In contrast, sparsity-inducing priors are invariably convex in their contribution. In this work we resolve this difficulty by showing how, even though concavity fails in general, we are able to derive simple checkable conditions that guarantee a unique stationary point to the M-step objective function that serves as the unique global maximum. This rather surprising result, using the so-called Mountain Pass Theorem, is a noteworthy contribution to the theory of learning algorithms which we expect has many applications outside merely PLSA. Section 2 briefly outlines the structure of MAP inference for PLSA. Section 3 discusses priors appropriate for inducing sparsity, and introduces a generalization of the Dirichlet distribution which we term the pseudo-Dirichlet distribution. Section 4 contains our main result, a tractable EM algorithm for PLSA under sparse pseudo-Dirichlet priors using the Mountain Pass Theorem. Section 5 presents empirical results for a corpus of 2,406 weblogs, and section 6 concludes with a discussion. 2 2.1 Background and preliminaries Standard PLSA Within the PLSA framework, word-document-topic triplets (w, d, z) are assumed to be i.i.d. draws from a joint distribution on W ? D ? Z of the form P (w, d, z | ?) = P (w | z)P (z | d)P (d), (1) where ? consists of the model parameters P (w | z), P (z | d) and P (d) for (w, d, z) ranging over W ? D ? Z. Following [1], the corresponding data log-likelihood can be written i X hX X P (w | z)P (z | d) + n(d) log P (d), (2) `0 (?) = n(w, d) log w,d z d P where n(w, d) is the number of occurrences of word w in document d, and n(d) = w n(w, d) is the total number of words in d. The goal is to maximize the likelihood over the set of admissible ?. This is accomplished using the EM algorithm, iterating between the following two steps: E-step: Find P (z | w, d, ?0 ), the posterior distribution of the latent variable z, given (w, d) and a current parameter estimate ?0 . M-step: Maximize Q0 (? | ?0 ) over ?, where h i X X Q0 (? | ?0 ) = n(d) log P (d) + n(w, d)P (z | w, d, ?0 ) log P (w | z)P (z | d) . d w,d,z We refer to [1] for details on the derivations, as well as extensions using so-called tempered EM. The resulting updates corresponding to the E-step and M-step are, respectively, P (w | z)P (z | d) P (z | w, d, ?) = P (3) 0 0 z 0 P (w | z )P (z | d) and P P (z | w, d, ?0 )n(w, d) P (w | z) = P d , 0 0 0 w0 ,d P (z | w , d, ? )n(w , d) (4) P 0 n(d) w P (z | w, d, ? )n(w, d) P P (z | d) = , P (d) = . 
Note that PLSA has an alternative parameterization, where (1) is replaced by P(w, d, z | θ) = P(w | z) P(d | z) P(z). This formulation is less interesting in our context, since our sparsity assumption is intended as a statement about the vectors (P(z | d) : z ∈ Z), d ∈ D.

2.2 MAP PLSA

The standard MAP extension of PLSA is to introduce a prior density P(θ) on the parameter vector, and then maximize the posterior data log-likelihood ℓ(θ) = ℓ₀(θ) + log P(θ) via the EM algorithm. In order to simplify the optimization problem, we impose the reasonable restriction that the vectors (P(w | z) : w ∈ W) for z ∈ Z, (P(z | d) : z ∈ Z) for d ∈ D, and (P(d) : d ∈ D) be mutually independent under the prior P(θ). That is,

P(θ) = Π_{z∈Z} f_z(P(w | z) : w ∈ W) · Π_{d∈D} g_d(P(z | d) : z ∈ Z) · h(P(d) : d ∈ D),

for densities f_z, g_d and h on the simplexes in R^{|W|}, R^{|Z|} and R^{|D|}, respectively. With this structure on P(θ) one readily verifies that the M-step objective function for the MAP likelihood problem, Q(θ | θ') = Q₀(θ | θ') + log P(θ), is given by

Q(θ | θ') = Σ_z F_z(θ | θ') + Σ_d G_d(θ | θ') + H(θ | θ'),

where

F_z(θ | θ') = Σ_{w,d} P(z | w, d, θ') n(w, d) log P(w | z) + log f_z(P(w | z) : w ∈ W),
G_d(θ | θ') = Σ_{w,z} P(z | w, d, θ') n(w, d) log P(z | d) + log g_d(P(z | d) : z ∈ Z),
H(θ | θ') = Σ_d n(d) log P(d) + log h(P(d) : d ∈ D).

As a comment, notice that if the densities f_z, g_d, or h are log-concave then F_z, G_d, and H are concave in θ. Furthermore, the functions F_z, G_d, and H can be maximized independently, since the corresponding non-negativity and normalization constraints are decoupled. In particular, the |Z| + |D| + 1 optimization problems can be solved in parallel.

3 The pseudo-Dirichlet prior

The parameters for PLSA models consist of |Z| + |D| + 1 probability distributions taking their values on |Z| + |D| + 1 simplexes. The most well-known family of distributions on the simplex is the Dirichlet family, which has many properties that make it useful in Bayesian statistics [8]. Unfortunately the Dirichlet distribution is not a suitable prior for modeling sparsity for PLSA, as we shall see, and to address this we introduce a generalization of the Dirichlet distribution which we call the pseudo-Dirichlet distribution.

To illustrate why the Dirichlet distribution is unsuitable in the present context, consider placing a symmetric Dirichlet prior on (P(z | d) : z ∈ Z) for each document d. That is, for each d ∈ D,

g_d(P(z | d) : z ∈ Z) ∝ Π_{z∈Z} P(z | d)^{α−1},

where α > 0 is the concentration parameter. Let f_z and h be constant. The relevant case for sparsity is when α < 1, which concentrates the density toward the (relative) boundary of the simplex. It is easy to see that the distribution is in this case log-convex, which means that the contribution to the log-likelihood and M-step objective function G_d(θ | θ') will be convex. We address this problem in Section 4. A bigger problem, however, is that for α < 1 the density of the symmetric Dirichlet distribution is unbounded and the MAP likelihood problem does not have a well-defined solution, as the following result shows.

Proposition 1  Under the above assumptions on f_z, g_d and h there are infinitely many sequences (θ_m)_{m≥1}, converging to distinct limits, such that lim_{m→∞} Q(θ_m | θ_m) = ∞. As a consequence, ℓ(θ_m) tends to infinity as well.

Proof. Choose θ_m as follows: P(d) = |D|⁻¹ and P(w | z) = |W|⁻¹ for all w, d and z. Fix d₀ ∈ D and z₀ ∈ Z, and set P(z₀ | d₀) = m⁻¹, P(z | d₀) = (1 − m⁻¹)/(|Z| − 1) for z ≠ z₀, and P(z | d) = |Z|⁻¹ for all z and d ≠ d₀. It is then straightforward to verify that Q(θ_m | θ_m) tends to infinity. The choice of d₀ and z₀ was arbitrary, so by choosing two other points we get a different sequence with a different limit. Taking convex combinations yields the claimed infinity of sequences. The second statement follows from the well-known fact that Q(θ | θ') ≤ ℓ(θ) for all θ and θ'. □
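The divergence in Proposition 1 is easy to see numerically: for α < 1 the symmetric Dirichlet log-density grows without bound as any single coordinate approaches zero. A small illustration following the sequence used in the proof (the choices α = 0.5 and K = 8 are arbitrary):

    import numpy as np

    alpha, K = 0.5, 8
    for m in [10, 100, 1000, 10000]:
        x = np.full(K, (1 - 1.0 / m) / (K - 1))
        x[0] = 1.0 / m                      # push one coordinate toward zero
        log_prior = np.sum((alpha - 1) * np.log(x))
        print(m, log_prior)                 # grows like (1 - alpha) * log(m)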
This proposition is a formal statement of the observation that when the Dirichlet prior is unbounded, any single zero element in P(z | d) leads to an infinite posterior likelihood, and so the optimization problem is not well-posed. To overcome these unbounded Dirichlet priors while retaining their sparsity-inducing properties, we introduce the following class of distributions on the simplex.

Definition 1  A random vector confined to the simplex in R^p is said to follow a pseudo-Dirichlet distribution with concentration parameter α = (α₁, ..., α_p) ∈ R^p and perturbation parameter ε = (ε₁, ..., ε_p) ∈ R₊^p if it has a density on the simplex given by

(5)  P(x₁, ..., x_p | α, ε) = C Π_{i=1}^{p} (ε_i + x_i)^{α_i − 1}

for a normalizing constant C depending on α and ε. If α_i = α and ε_i = ε for all i and some fixed α ∈ R, ε ≥ 0, we call the resulting distribution symmetric pseudo-Dirichlet.

Notice that if ε_i > 0 for all i, the pseudo-Dirichlet density is indeed bounded for all α. If ε_i = 0 and α_i > 0 for all i, we recover the standard Dirichlet distribution. If ε_i = 0 and α_i ≤ 0 for some i then the density is not integrable, but can still be used as an improper prior. Like the Dirichlet distribution, when α < 1 the pseudo-Dirichlet distribution is log-convex, and it will make a convex contribution to the M-step objective function of any EM algorithm. The pseudo-Dirichlet distribution can be viewed as a bounded perturbation of the Dirichlet distribution, and for small values of the perturbation parameter ε, many of the properties of the original Dirichlet distribution hold approximately. In our discussion section we offer a justification for allowing α ≤ 0, framed within a regularization approach.

4 EM under pseudo-Dirichlet priors

We now derive an EM algorithm for PLSA under sparse pseudo-Dirichlet priors. The E-step is the same as for standard PLSA, and is given by (3). The M-step consists in optimizing each F_z, G_d and H individually. While our M-step will not offer a closed-form maximization, we are able to derive simple checkable conditions under which the M-step has a stationary point that is also the global maximum. Once the conditions are satisfied, the M-step optimum can be found via a practitioner's favorite root-finding algorithm. For consideration, we propose an iteration scheme that in practice we find converges rapidly and well. Because our sparsity assumption focuses on the parameters P(z | d), we perform our main analysis on G_d, but for completeness we state the corresponding result for F_z. The less applicable treatment of H is omitted.

Consider the problem of maximizing G_d(θ | θ') over (P(z | d) : z ∈ Z) subject to Σ_z P(z | d) = 1 and P(z | d) ≥ 0 for all z. We use symmetric pseudo-Dirichlet priors with parameters α_d = (α_d, ..., α_d) and ε_d = (ε_d, ..., ε_d) for α_d ∈ R and ε_d > 0. Since each G_d is treated separately, let us fix d and write

x_z = P(z | d),   c_z = Σ_w P(z | w, d, θ') n(w, d),

where the dependence on d is suppressed in the notation. For x = (x_z : z ∈ Z) and a fixed θ', we write G_d(x) = G_d(θ | θ'), which yields, up to an additive constant,

G_d(x) = Σ_z [ (α_d − 1) log(ε_d + x_z) + c_z log x_z ].
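The prior's contribution to G_d is just the log of (5) up to the constant log C, which plays no role in MAP inference, and it stays bounded whenever ε_i > 0. A two-line sketch makes this explicit:

    import numpy as np

    def pseudo_dirichlet_log_density(x, alpha, eps):
        # log of equation (5) up to the constant log C:
        # sum_i (alpha_i - 1) log(eps_i + x_i), finite for all x when eps_i > 0
        return np.sum((alpha - 1) * np.log(eps + x))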
The task is to maximize G_d, subject to Σ_z x_z = 1 and x_z ≥ 0 for all z. Assuming that every word w is observed in at least one document d and that all components of θ' are strictly positive, Lemma 1 below implies that any M-step optimizer must have strictly positive components. The non-negativity constraint is therefore never binding, so the appropriate Lagrangian for this problem is

L_d(x; λ) = G_d(x) + λ (1 − Σ_z x_z),

and it suffices to consider its stationary points.

Lemma 1  Assume that every word w has been observed in at least one document d, and that P(z | w, d, θ') > 0 for all (w, d, z). If x_z → 0 for some z, and the nonnegativity and normalization constraints are maintained, then G_d(x) → −∞.

Proof. The assumption implies that c_z > 0 for all z. Therefore, since log(ε_d + x_z) and log x_z are bounded from above for all z when x stays in the feasible region, x_z → 0 leads to G_d(x) → −∞. □

The next lemma establishes a property of the stationary points of the Lagrangian L_d which will be the key to proving our main result.

Lemma 2  Let (x, λ) be any stationary point of L_d such that x_z > 0 for all z. Then λ ≥ n(d) − (1 − α_d)|Z|. If in addition to the assumptions of Lemma 1 we have n(d) ≥ (1 − α_d)|Z|, then

∂²G_d/∂x_z²(x, λ) < 0  for all z ∈ Z.

Proof. We have ∂L_d/∂x_z = c_z/x_z − (1 − α_d)/(ε_d + x_z) − λ. Since ∂L_d/∂x_z(x, λ) = 0 at the stationary point, we get λ x_z = c_z − (1 − α_d) x_z/(ε_d + x_z) ≥ c_z − (1 − α_d), which, after summing over z and using that Σ_z x_z = 1, yields

λ ≥ Σ_z c_z − (1 − α_d)|Z|.

Furthermore, Σ_z c_z = Σ_w n(w, d) Σ_z P(z | w, d, θ') = n(d), so λ ≥ n(d) − (1 − α_d)|Z|.

For the second assertion, using once again that ∂L_d/∂x_z(x, λ) = 0 at the stationary point, a calculation shows that

∂²G_d/∂x_z²(x, λ) = − (1 / (x_z² (ε_d + x_z))) [ x_z² λ + c_z ε_d ].

The assumptions imply that c_z > 0, so it suffices to prove that λ ≥ 0. This follows from our hypothesis and the first part of the lemma. □

This allows us to obtain our main result concerning the structure of the optimization problem associated with the M-step.

Theorem 1  Assume that (i) every word w has been observed in at least one document d, (ii) P(z | w, d, θ') > 0 for all (w, d, z), and (iii) n(d) > (1 − α_d)|Z| for each d. Then each Lagrangian L_d has a unique stationary point, which is the global maximum of the corresponding optimization problem, and whose components are strictly positive.

The proof relies on the following version of the so-called Mountain Pass Theorem.

Lemma 3 (Mountain Pass Theorem)  Let O ⊂ R^n be open, and consider a continuously differentiable function Φ : O → R such that Φ(x) → −∞ whenever x tends to the boundary of O. If Φ has two distinct strict local maxima, it must have a third stationary point that is not a strict local maximum.

Proof. See p. 223 in [9], or Theorem 5.2 in [10]. □

Proof of Theorem 1. Consider a fixed d. We first prove that the corresponding Lagrangian L_d can have at most one stationary point. To simplify notation, assume without loss of generality that Z = {1, ..., K}, and define

G̃_d(x₁, ..., x_{K−1}) = G_d(x₁, ..., x_{K−1}, 1 − Σ_{k=1}^{K−1} x_k).

The constrained maximization of G_d is then equivalent to maximizing G̃_d over the open set O = {(x₁, ..., x_{K−1}) ∈ R₊₊^{K−1} : Σ_k x_k < 1}. The following facts are readily verified:

(i) If (x, λ) is a stationary point of L_d, then (x₁, ..., x_{K−1}) is a stationary point of G̃_d.
(ii) If (x₁, ..., x_{K−1}) is a stationary point of G̃_d, then (x, λ) is a stationary point of L_d, where x_K = 1 − Σ_{k=1}^{K−1} x_k and λ = c_K/x_K − (1 − α_d)/(ε_d + x_K).
(iii) For any y = (y₁, ..., y_{K−1}) with ỹ = (y₁, ..., y_{K−1}, −Σ_{k=1}^{K−1} y_k), we have yᵀ ∇²G̃_d y = Σ_{k=1}^{K} ỹ_k² ∂²G_d/∂x_k².
Now, suppose that (x, λ) is a stationary point of L_d. Property (i) and property (iii) in conjunction with Lemma 2 imply that (x₁, ..., x_{K−1}) is a stationary point of G̃_d and that ∇²G̃_d is negative definite there. Hence it is a strict local maximum.

Next, suppose for contradiction that there are two distinct such points. By Lemma 1, G̃_d tends to −∞ near the boundary of O, so we may apply the Mountain Pass Theorem to get the existence of a third point (x̃₁, ..., x̃_{K−1}), stationary for G̃_d, that is not a strict local maximum. But by (ii), this yields a corresponding stationary point (x̃, λ̃) for L_d. The same reasoning as above then shows that (x̃₁, ..., x̃_{K−1}) has to be a strict local maximum for G̃_d, which is a contradiction. We deduce that L_d has at most one stationary point.

Finally, the continuity of G_d together with its boundary behavior (Lemma 1) implies that a maximizer exists and has strictly positive components. But the maximizer must be a stationary point of L_d, so together with the previously established uniqueness, the result follows. □

Condition (i) in Theorem 1 is not a real restriction, since a word that does not appear in any document typically will be removed from the vocabulary. Moreover, if the EM algorithm is initialized such that P(z | w, d, θ') > 0 for all (w, d, z), Theorem 1 ensures that this will be the case for all future iterates as well. The critical assumption is Condition (iii). It can be thought of as ensuring that the prior does not drown the data. Indeed, sufficiently large negative values of α_d, corresponding to strong prior beliefs, will cause the condition to fail.

While there are various methods available for finding the stationary point of L_d, we have found that the following fixed-point type iterative scheme produces satisfactory results:

(6)  x_z ← c_z / [ n(d) + (1 − α_d) ( 1/(ε_d + x_z) − Σ_{z'} x_{z'}/(ε_d + x_{z'}) ) ],   x_z ← x_z / Σ_{z'} x_{z'}.

To motivate this particular update rule, recall that

∂L_d/∂x_z = c_z/x_z − (1 − α_d)/(ε_d + x_z) − λ,   ∂L_d/∂λ = 1 − Σ_z x_z.

At the stationary point, λ x_z = c_z − (1 − α_d) x_z/(ε_d + x_z), so by summing over z and using that Σ_z x_z = 1 and Σ_z c_z = n(d), we get λ = n(d) − (1 − α_d) Σ_z x_z/(ε_d + x_z). Substituting this for λ in ∂L_d/∂x_z = 0 and rearranging terms yields the first part of (6). Notice that Lemma 2 ensures that the denominator stays strictly positive. Further, the normalization is a classic technique to restrict x to the simplex. Note that (6) reduces to the standard PLSA update (4) if α_d = 1.

For completeness we also consider the topic-vocabulary distribution (P(w | z) : w ∈ W). We impose a symmetric pseudo-Dirichlet prior on the vector (P(w | z) : w ∈ W) for each z ∈ Z. The corresponding parameters are denoted by α_z and ε_z. Each F_z is optimized individually, so we fix z ∈ Z and write y_w = P(w | z). The objective function F_z(y) = F_z(θ | θ') is then given by

(7)  F_z(y) = Σ_w [ (α_z − 1) log(ε_z + y_w) + b_w log y_w ],   b_w = Σ_d P(z | w, d, θ') n(w, d).

The following is an analog of Theorem 1, whose proof is essentially the same and therefore omitted.

Theorem 2  Assume conditions (i) and (ii) of Theorem 1 are satisfied, and that for each z ∈ Z, Σ_w b_w ≥ (1 − α_z)|W|. Then each F_z has a unique local optimum on the simplex, which is also a global maximum and whose components are strictly positive.

Unfortunately there is no simple expression for Σ_w b_w in terms of the inputs to the problem. On the other hand, the sum can be evaluated at the beginning of each M-step, which makes it possible to verify that α_z is not too negative.
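A direct implementation of the fixed-point update (6) for one document is only a few lines. A sketch, assuming condition (iii) of Theorem 1 holds so that the denominator stays positive; the tolerance and iteration cap are our own choices:

    import numpy as np

    def mstep_pzd(c, n_d, alpha_d, eps_d, n_iter=100, tol=1e-10):
        """Maximize G_d via the fixed-point update (6).

        c : (Z,) expected counts c_z = sum_w P(z|w,d) n(w,d), so c.sum() == n_d
        """
        c = np.asarray(c, dtype=float)
        x = c / n_d                              # standard PLSA starting point
        for _ in range(n_iter):
            denom = n_d + (1 - alpha_d) * (1.0 / (eps_d + x)
                                           - np.sum(x / (eps_d + x)))
            x_new = c / denom
            x_new /= x_new.sum()                 # project back onto the simplex
            if np.max(np.abs(x_new - x)) < tol:  # stationarity reached
                return x_new
            x = x_new
        return x

As a sanity check, setting alpha_d = 1 makes the denominator collapse to n_d, and the routine returns the ordinary PLSA update c / n(d) after one pass, matching the remark following (6).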
5 Empirical evaluation

To evaluate our framework for sparse mixture model inference, we develop a MAP PLSA topic model for a corpus of 2,406 blogger.com blogs, a dataset originally analyzed by Schler et al. [11] for the role of gender in language. Unigram frequencies for the blogs were built using the Python NLTK toolkit [12]. Inference was run on the document-word distribution of the 2,406 blogs and the 2,000 most common words, as determined by the aggregate frequencies across the entire corpus. The implication of Section 4 is that in order to adapt PLSA for sparse MAP inference, we simply need to replace equation (4) from PLSA's ordinary M-step with an iteration of (6). The corpus also contains a user-provided "category" for each blog, indicating one of 28 categories. We focused our analysis on 8 varied but representative topics, while the complete corpus contained over 19,000 blogs.

The user-provided topic labels are quite noisy, and so in order to have cleaner ground truth data for evaluating our model we chose to also construct a synthetic, sparse dataset. This synthetic dataset is employed to evaluate parameter choices within the model. To generate our synthetic data, we ran PLSA on our text corpus and extracted the inferred P(w | z) and P(d) distributions, while creating 2,406 synthetic P(z | d) distributions where each synthetic blog was a uniform mixture of between 1 and 4 topics. These distributions were then used to construct a ground-truth word-document distribution Q(w, d), which we then sampled N times, where N is the total number of words in our true corpus. In this way we were able to generate a realistic synthetic dataset with a sparse and known document-topic distribution.

We evaluate the quality of each model by calculating the model perplexity of the reconstructed word-document distribution as compared to the underlying ground truth distribution used to generate the synthetic data. Here model perplexity is given by

P(P(w, d)) = 2^{−Σ_{w,d} Q(w,d) log₂ P(w,d)},

where Q(w, d) is the true document-word distribution used to generate the synthetic dataset and P(w, d) is the reconstructed matrix inferred by the model.

Using this synthetic dataset we are able to evaluate the roles of α and ε in our algorithm, as seen in Figure 1. From Figure 1 we can conclude that α should in practice be chosen close to the algorithm's feasible lower bound, and ε can be almost arbitrarily small. Choosing α = ⌈1 − max_d n(d)/k⌉ and ε = 10⁻⁶, we return to our blog data with its user-provided labels. In Figure 2 we see that sparse inference indeed results in P(z | d) distributions with significantly sparser support. Furthermore, we can more easily see how certain categories of blogs cluster in their usage of certain topics. For example, a majority of the blogs self-categorized as pertaining to "religion" employ almost exclusively the second topic vocabulary of the model. The five most exceptional unigrams for this topic are "prayer", "christ", "jesus", "god", and "church".

[Figure 1: Model perplexity for inferred models with k = 8 topics as a function of the concentration parameter α of the pseudo-Dirichlet prior, shown from the algorithm's lower bound α = 1 − n(d)/k to the uniform prior case of α = 1. Three different choices of ε are shown (ε = 10⁻⁶, ε = 0.1, ε = 1), as well as the baseline PLSA perplexity corresponding to a uniform prior. The dashed line indicates the perplexity P(Q(w, d)), which should be interpreted as a lower bound.]

[Figure 2: Document-topic distributions P(z | d) for the 8 different categories of blogs studied (Technology, Fashion, Engineering, Media, Internet, Tourism, Religion, Law), for PLSA and pseudo-Dirichlet MAP. All distributions share the same color scale.]
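The perplexity measure used above is a one-liner given the two distributions. A sketch, under the assumption that both matrices are proper joint distributions; zero entries in the reconstructed P(w, d) would need smoothing in practice:

    import numpy as np

    def model_perplexity(Q, P):
        # P(P(w,d)) = 2 ** ( - sum_{w,d} Q(w,d) log2 P(w,d) )
        mask = Q > 0                   # terms with Q(w,d) = 0 contribute nothing
        return 2.0 ** (-np.sum(Q[mask] * np.log2(P[mask])))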
Our main theoretical result shows that the resulting M-step maximization problem is well-behaved despite the lack of concavity, and empirical findings indicate that the approach is indeed effective. Our use of the Mountain Pass Theorem to prove that all local optima coincide is to the best of our knowledge new in the literature, and we find it intriguing and surprising that the global properties of maximizers, which are very rarely susceptible to analysis in the absence of concavity, can be studied using this tool. The use of log-convex priors (equivalently, concave regularization functions) to encourage sparsity is particularly relevant when the parameters of the model correspond to probability distributions. Since each distribution has a fixed L1 norm equal to one, the use of L1 -regularization, which otherwise would be the natural choice for inducing sparsity, becomes toothless. P The pseudo-Dirichlet prior we introduce corresponds to a concave regularization of the form i log(xi + ). We mention in 7 5 2.95 x 10 ?6 model perplexity ?=10 ?=0.1 PLSA ?=1 2.9 2.85 ?50 ?40 ?30 ? ? ?20 ?10 0 Figure 1: Model perplexity for inferred models with k = 8 topics as a function of the concentration parameter ? of the pseudo-Dirichlet prior, shown from the algorithm?s lower bound ? = 1 ? n(d)/k to the uniform prior case of ? = 1. Three different choices of  are shown, as well as the baseline PLSA perplexity corresponding to a uniform prior. The dashed line indicates the perplexity P(Q(w, d)), which should be interpreted as a lower bound. P(z|d), PLSA, Technology P(z|d), Ps-Dir MAP, Technology P(z|d), PLSA, Fashion P(z|d), Ps-Dir MAP, Fashion P(z|d), PLSA, Engineering P(z|d), Ps-Dir MAP, Engineering P(z|d), PLSA, Media P(z|d), Ps-Dir MAP, Media P(z|d), PLSA, Internet P(z|d), Ps-Dir MAP, Internet P(z|d), PLSA, Tourism P(z|d), Ps-Dir MAP, Tourism P(z|d), PLSA, Religion P(z|d), Ps-Dir MAP, Religion P(z|d), PLSA, Law P(z|d), Ps-Dir MAP, Law Figure 2: Document-topic distributions P (z|d) for the 8 different categories of blogs studied. All distributions share the same color scale. passing that the same sum-log regularization has also been used for sparse signal recovery in [13]. It should be emphasized that the notion of sparsity we discuss in this work is not in the formal sense of a small L0 norm. Indeed, Theorem 1 shows that, no different from ordinary PLSA, the estimated parameters for MAP PLSA will all be strictly positive. Instead, we seek sparsity in the sense that most parameters should be almost zero. Next, let us comment on the possibility to allow the concentration parameter ?d to be negative, assuming for simplicity that fz and h are constant. Consider the normalized likelihood, where clearly `(?) may be replaced by `(?)/N , `(?) `0 (?) X 1 ? ?d X = ? log(d + P (z | d)), N N N z d which by (2) we deduce only depends on the data through the normalized quantities n(w, d)/N . This indicates that the quantity (1 ? ?d )/N , which plays the role of a regularization ?gain? in the normalized problem, must be non-negligible in order for the regularization to have an effect. For realistic sizes of N , allowing ?d < 0 therefore becomes crucial. Finally, while we have chosen to present our methodology as applied to topic models, we expect the same techniques to be useful in a notably broader context. 
In particular, our methodology is directly applicable to problems solved through Nonnegative Matrix Factorization (NMF), a close relative of PLSA where matrix columns or rows are often similarly constrained in their L1 norm.

Acknowledgments: This work is supported in part by NSF grant IIS-0910664.

References

[1] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42:177-196, 2001.
[2] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[3] A. Bosch, A. Zisserman, and X. Munoz. Scene classification via pLSA. In European Conference on Computer Vision, 2006.
[4] I. Psorakis and B. Sheldon. Soft partitioning in networks via Bayesian non-negative matrix factorization. In NIPS, 2010.
[5] C. Ding, T. Li, and W. Peng. Nonnegative matrix factorization and probabilistic latent semantic indexing: equivalence, chi-square statistic, and a hybrid method. In Proceedings of AAAI '06, volume 21, page 342, 2006.
[6] E. Gaussier and C. Goutte. Relation between PLSA and NMF and implications. In Proceedings of ACM SIGIR, pages 601-602. ACM, 2005.
[7] A. Asuncion, M. Welling, P. Smyth, and Y.W. Teh. On smoothing and inference for topic models. In Proc. of the 25th Conference on Uncertainty in Artificial Intelligence, pages 27-34, 2009.
[8] A. Gelman. Bayesian Data Analysis. CRC Press, 2004.
[9] R. Courant. Dirichlet's Principle, Conformal Mapping, and Minimal Surfaces. Interscience, New York, 1950.
[10] Y. Jabri. The Mountain Pass Theorem: Variants, Generalizations and Some Applications. Cambridge University Press, 2003.
[11] J. Schler, M. Koppel, S. Argamon, and J. Pennebaker. Effects of age and gender on blogging. In Proc. of the AAAI Spring Symposium on Computational Approaches for Analyzing Weblogs, pages 191-197, 2006.
[12] S. Bird, E. Klein, and E. Loper. Natural Language Processing with Python. O'Reilly Media, 2009.
[13] E.J. Candès, M.B. Wakin, and S.P. Boyd. Enhancing sparsity by reweighted L1 minimization. Journal of Fourier Analysis and Applications, 14:877-905, 2008.
Hierarchical Topic Modeling for Analysis of Time-Evolving Personal Choices

XianXing Zhang, Duke University, [email protected]
David B. Dunson, Duke University, [email protected]
Lawrence Carin, Duke University, [email protected]

Abstract

The nested Chinese restaurant process is extended to design a nonparametric topic-model tree for representation of human choices. Each tree path corresponds to a type of person, and each node (topic) has a corresponding probability vector over items that may be selected. The observed data are assumed to have associated temporal covariates (corresponding to the time at which choices are made), and we wish to impose that with increasing time it is more probable that topics deeper in the tree are utilized. This structure is imposed by developing a new "change-point" stick-breaking model that is coupled with a Poisson and product-of-gammas construction. To share topics across the tree nodes, topic distributions are drawn from a Dirichlet process. As a demonstration of this concept, we analyze real data on course selections of undergraduate students at Duke University, with the goal of uncovering and concisely representing structure in the curriculum and in the characteristics of the student body.

1 Introduction

As time progresses, the choices humans make often change. For example, the types of products one purchases, as well as the types of people one befriends, often change or evolve with time. However, the choices one makes later in life are typically statistically related to choices made earlier. Such behavior is of interest when considering marketing to particular groups of people, at different stages of their lives, and it is also relevant for analysis of time-evolving social networks. In this paper we seek to develop a hierarchical tree structure for representation of this phenomenon, with each tree path characteristic of a type of person, and a tree node defined by a distribution over choices (characterizing a type of person at a "stage of life"). As time proceeds, each person moves along the layers of a tree branch, making choices based on the node at a given layer, thereby yielding a hierarchical representation of behavior with time. Note that as one moves deeper in the tree, the number of nodes at a given tree layer increases as a result of sequential branching; this appears to be well matched to the modeling of choices made by individuals, who often become more distinctive and specialized with increasing time. The number of paths (types of people), nodes (stages of development) and the statistics of the time dependence are to be inferred nonparametrically based on observed data, which are typically characterized by a very sparse binary matrix (most individuals only select a tiny fraction of the choices that are available to them).

To demonstrate this concept using real data, we consider selections of classes made by undergraduate students at Duke University, with the goal of uncovering structure in the students and classes, as inferred by time-evolving student choices. For each student, the data presented to the model are a set of indices of selected classes (but not class names or subject matter), as well as the academic year in which each class was selected (e.g., sophomore year). While the student majors and class names are not used by the model, they are known, and this information provides "truth" with which model-inferred structure may be assessed.
This study therefore also provides the opportunity to examine the quality of the inferred hierarchical-tree structure in models of the type considered in [1, 4, 5, 8, 7, 12, 13, 6, 21, 15, 22] (such structure is difficult to validate with documents, for which there is no "truth"). We seek to impose that, as time progresses, it is more probable that an individual's choices are based on nodes deeper in the tree, so that as one moves from the tree root to the leaves, we observe the evolution of choices as people age. Such temporal tree structure could be meaningful by itself; e.g., in our particular case it will allow university administrators and faculty to examine whether objectives in curriculum design are manifested in the actual usage/choices of students. Further, the results of such an analysis may be of interest to applicants at a given school (e.g., high-school students), as the inferred structure concisely describes both the student body and the curriculum. The uncovered structure may also be used to aid downstream applications [17, 2, 11].

The basic form of the nonparametric tree developed here is based on the nested Chinese restaurant process (nCRP) topic model [4, 5]. However, to achieve the goals of the unique problem considered, we make the following new modeling contributions. We develop a new "change-point" stick-breaking process (cpSBP), which is a stick-breaking process that induces probabilities that stochastically increase up to an unknown change point and then decrease. This construction is conceptually related to the "umbrella" placed on dose-response curves [9]. In the proposed model each individual has a unique cpSBP, which evolves with time such that choices at later times are encouraged to be associated with deeper nodes in the tree. Time is a covariate, and within the change-point model a new product-of-gammas construction is developed, and coupled to the Poisson distribution. Another novel aspect of the proposed model concerns drawing the node-dependent topics from a Dirichlet process, sharing topics across the tree. This is motivated by the idea that different types of people (paths) may be characterized by similar choices at different nodes in their respective paths (e.g., person Type A may make certain types of choices early in life, while person Type B may make similar choices later in life). Such sharing of topics allows the inference of relationships between the choices different people make over time.

2 Model Formulation

2.1 Nested Chinese Restaurant Process

The nested Chinese restaurant process (nCRP) [4, 5] is a generative probabilistic model that defines a prior distribution over a tree-structured hierarchy with infinitely many paths. In an nCRP model of personal choice, each individual picks a tree path by walking from the root node down the tree, from node to node. Specifically, when situated at a particular parent node, the child node c_i that individual i chooses is modeled as a random variable that can be either an existing node or a new node: (i) the probability that c_i is an existing child node k is proportional to the number of persons who already chose node k from the same parent, and (ii) a new node can be created and chosen with probability proportional to γ₀ > 0, which is the nCRP concentration parameter. This process is defined recursively, so that each individual is allocated to one specific path of the tree hierarchy through a sequence of probabilistic parent-child steps.
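As a concrete illustration of this parent-to-child step, the following minimal Python sketch (ours, not the authors' code; names and toy counts are hypothetical) draws one child index, with index len(child_counts) signalling the creation of a new node:

    import numpy as np

    def sample_ncrp_child(child_counts, gamma0, rng):
        # Existing child k is chosen with probability proportional to its
        # occupancy count; a new child is created with probability
        # proportional to the concentration parameter gamma0.
        weights = np.append(np.asarray(child_counts, dtype=float), gamma0)
        return rng.choice(len(weights), p=weights / weights.sum())

    # A full tree path is drawn by applying this step recursively, root to leaf:
    rng = np.random.default_rng(0)
    for counts in [[5, 2], [3], []]:           # toy occupancy counts per level
        k = sample_ncrp_child(counts, gamma0=1.0, rng=rng)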
The tree hierarchy implied by the nCRP provides a natural framework to capture the structure of personal choices, where each node is characterized by a distribution on the items that may be selected (e.g., each node is a "choice topic"). Similar constructions have been considered for document analysis [5, 21, 4], in which the model captures the structure of word usage in documents. However, there are unique aspects of the time-evolving personal-choice problem, particularly the goal motivated above that one should select topics deeper in the tree as time evolves, to capture the specialized characteristics of people as they age. Hierarchical topic models proposed previously [4, 7] have employed a stick-breaking process (SBP) to guide selection of the tree depth at which a node/topic is selected, with an unbounded number of path layers, but these models do not provide a means of imposing the above temporal dynamics (which were not relevant for the document problems considered there).

2.2 Change Point Stick Breaking Process

In the proposed model, instead of constraining the SBP construction to start at the root node, we model the starting-point depth of the SBP as a random variable and infer it from data, while still maintaining a valid distribution over each layer of any path. To do this we replace the single SBP over path layers with two statistically dependent SBPs: one begins from layer p + 1 and moves down the tree away from the root, and the other begins from layer p and moves upward to the root; the latter SBP is truncated when it hits the root, while the former is in principle of infinite length. The tree depth p which relates these two SBPs is modeled as a random variable, drawn from a Poisson distribution, and is denoted the change point. In this way we encourage the principal stick weight to be placed heavily around the change point p, instead of restricting it to the top layers as in [4, 7].

Figure 1: Illustrative comparison of the stick lengths between the change point stick breaking process (cpSBP) and the stick breaking process (SBP) for different values of γ (γ = 1, 5, 10); typical draws from cpSBP and SBP are depicted. a and b are both set to 1, the change point is set to p = 30, and the truncation of both stick-breaking constructions is set to 60.

To model the time dependence, and encourage use of greater tree depths with increasing time, we seek a formulation that imposes that the Poisson parameter grows (statistically) with increasing time. The temporal information is represented as a covariate t(i, n), denoting the time at which the nth selection/choice is made by individual i; in many applications t(i, n) ∈ {1, . . . , T}, and for the student class-selection problem T = 4, corresponding to the freshman through senior years; below we drop the indices (i, n) on the time, for notational simplicity. When individual i makes selections at time t, she employs a corresponding change point p_{i,t}. To integrate the temporal covariate into the model, we develop a product-of-gammas and Poisson conjugate pair to model p_{i,t}, which encourages the p_{i,t} associated with larger t to locate deeper in the tree. Specifically, consider

$$\tau_{i,l} \sim \mathrm{Ga}(\tau_{i,l} \mid a_{i,l}, b_{i,l}), \qquad \lambda_{i,t} = \prod_{l=1}^{t} \tau_{i,l}, \qquad p_{i,t} \sim \mathrm{Poi}(p_{i,t} \mid \lambda_{i,t}) \qquad (1)$$

The product-of-gammas construction in (1) is inspired by the multiplicative-gamma process (MGP) developed in [3] for sparse factor analysis.
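A minimal Python sketch of the construction in (1) follows (ours; the symbols τ and λ match our rendering of the equation, and the hyperparameter values are toy choices). It draws the change points p_{i,1}, ..., p_{i,T} for one individual, illustrating how the cumulative product of gamma variables makes the Poisson means, and hence the change points, tend to grow with t.

    import numpy as np

    def sample_change_points(a, b, T, rng):
        # tau_l ~ Ga(a, b); lambda_t = prod_{l<=t} tau_l; p_t ~ Poi(lambda_t)
        tau = rng.gamma(shape=a, scale=1.0 / b, size=T)
        lam = np.cumprod(tau)                  # lambda_1, ..., lambda_T
        return rng.poisson(lam)                # one change point per interval t

    rng = np.random.default_rng(0)
    print(sample_change_points(a=3.0, b=1.0, T=4, rng=rng))  # deeper with t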
Although each draw of τ_{i,l} from a gamma distribution is not guaranteed to be greater than one, and thus λ_{i,t} will not increase with probability one, in practice we find τ_{i,l} is often inferred to be greater than one when (a_{i,l} − 1)/b_{i,l} > 1. However, an MGP based on a left-truncated gamma distribution can be readily derived.

Given the change point p_{i,t} = p, the cpSBP constructs the stick-weight vector π^p_{i,t} over the layers of path b_i by dividing it into two parts, π^{p↑}_{i,t} and π^{p↓}_{i,t}, modeling them separately as two SBPs, where π^{p↑}_{i,t} = {π^{p↑}_{i,t}(p), π^{p↑}_{i,t}(p−1), ..., π^{p↑}_{i,t}(1)} and π^{p↓}_{i,t} = {π^{p↓}_{i,t}(p+1), π^{p↓}_{i,t}(p+2), ...}. For notational simplicity, we denote V_h = V_{i,t}(h) when constructing π^p_{i,t}, yielding

$$\pi^{p\uparrow}_{i,t}(u) = V_u \prod_{h=u+1}^{p} (1 - V_h), \qquad \pi^{p\downarrow}_{i,t}(d) = V_d \prod_{h=p+1}^{d-1} (1 - V_h), \qquad V_h \sim \mathrm{Beta}(V_h \mid 1, \gamma) \qquad (2)$$

Note that the above SBP contains two constructions: when d > p the stick weight π^{p↓}_{i,t}(d) is constructed as a classic SBP, but with the stick-breaking construction starting at layer p + 1. On the other hand, when u ≤ p the stick weight π^{p↑}_{i,t}(u) is constructed "backwards" from p to the root node, which is a truncated stick-breaking process with truncation level set to p. A thorough discussion of the truncated stick-breaking process is found in [10]. We further use a beta-distributed latent variable ω_{i,t} to combine the two stick-breaking processes together while ensuring that the elements of π^p_{i,t} = {π^{p↑}_{i,t}, π^{p↓}_{i,t}} sum to one. Thus we have the following distribution over the layers of a given path, from which the layer allocation variables {l_{i,n} : t(i, n) = t} for a selection at time t by individual i are sampled:

$$l_{i,n} \sim \omega_{i,t} \sum_{l=1}^{p} \pi^{p\uparrow}_{i,t}(l)\,\delta_l + (1 - \omega_{i,t}) \sum_{l=p+1}^{\infty} \pi^{p\downarrow}_{i,t}(l)\,\delta_l, \qquad \omega_{i,t} \sim \mathrm{Beta}(\omega_{i,t} \mid a_\omega, b_\omega) \qquad (3)$$

Note that the change point stick breaking process (cpSBP) can be treated as a generalization of the stick-breaking process for the Dirichlet process, since when p_{i,t} = 0 the cpSBP corresponds to the SBP. From the simulation studied in Figure 1, we observe that the change point, which is modeled through the temporal covariate t as in (1), corresponds to the layer with large stick weight, and thus the layer at which topic draws are most probable. Also note that one might alternatively suggest simply using p_{i,t} directly as the layer from which a topic is drawn, without the subsequent use of a cpSBP. We examined this in the course of the experiments, and it did not work well, likely as a result of the inflexibility of the single-parameter Poisson (with its equal mean and variance). The cpSBP provided the additional necessary modeling flexibility.
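The following Python sketch (ours, under a finite truncation; names such as cpsbp_weights are hypothetical) computes the cpSBP weights of (2)-(3) for a fixed change point p, stick variables V_h, and mixing weight ω, showing how the stick mass concentrates around layer p:

    import numpy as np

    def cpsbp_weights(p, V, omega):
        # Layers 1..p: stick-breaking run backwards from the change point p
        # toward the root; layers p+1..len(V): ordinary SBP started at p + 1.
        # omega mixes the two halves so the weights sum to (nearly) one.
        L = len(V)
        pi = np.zeros(L)
        for u in range(p, 0, -1):        # pi_up(u) = V_u prod_{h=u+1}^{p} (1 - V_h)
            pi[u - 1] = omega * V[u - 1] * np.prod(1.0 - V[u:p])
        for d in range(p + 1, L + 1):    # pi_down(d) = V_d prod_{h=p+1}^{d-1} (1 - V_h)
            pi[d - 1] = (1 - omega) * V[d - 1] * np.prod(1.0 - V[p:d - 1])
        return pi

    rng = np.random.default_rng(0)
    V = rng.beta(1.0, 2.0, size=12)      # V_h ~ Beta(1, gamma), here gamma = 2
    w = cpsbp_weights(p=5, V=V, omega=0.5)   # mass concentrates near layer 5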
2.3 Sharing topics among different nodes

One problem with the nCRP-based topic model, implied by the tree structure, is that all descendant sub-topics of parent node pa₁ are distinct from the descendants of parent pa₂ if pa₁ ≠ pa₂. Some of these distinct sets of children from different parents may be redundant, and this redundancy could be removed if a child could have more than one parent [7, 13, 6]. In addition to the above problem, in our application there are other potential problems brought on by the cpSBP. Since we encourage the later choice selections to be drawn from topics deeper in the tree, redundant topics at multiple layers may be manifested if two types of people tend to make similar choices at different time points (e.g., at different stages of life). Thus it is likely that similar (redundant) topics may be learned on different layers of the tree, and the inability of the original nCRP construction to share these topics misses another opportunity to share statistical strength. In [7, 13, 6] the authors addressed related challenges by replacing the tree structure with a directed acyclic graph (DAG), demonstrating success for document modeling. However, those solutions do not have the flexibility of sharing topics among nodes on different layers. Here we propose a new and simpler approach, so that the nCRP-based tree hierarchy is retained while different nodes in the whole tree may share the same topic, resolving the two problems discussed above. To achieve this we draw a set of "global" topics {φ̂_k}, and a stick-breaking process is employed to allocate one of these global topics as φ_j, the topic of the jth node in the tree (this corresponds to drawing the {φ_j} from a Dirichlet process [16] with a Dirichlet-distribution base). The SBP defined over the global topics is represented as follows:

$$\beta_k = \tilde\beta_k \prod_{i=1}^{k-1}(1-\tilde\beta_i), \qquad \tilde\beta_i \sim \mathrm{Beta}(\tilde\beta_i \mid 1, \alpha), \qquad \hat\phi_k \sim \mathrm{Dir}(\hat\phi_k \mid \eta), \qquad \phi_j \sim \sum_{k=1}^{\infty} \beta_k \,\delta_{\hat\phi_k} \qquad (4)$$

Within the generative process, let z_{i,n} denote the assignment of the nth choice of individual i to global topic φ̂_{z_{i,n}}; the corresponding chosen item is then drawn from Mult(1, φ̂_{z_{i,n}}).

3 Model Inference

In the proposed model, we sample the per-individual tree path indicator b_i, the layer allocations l_{i,n} of choice topics in those paths, the change point p_{i,t} for each time interval, the parameters associated with the cpSBP construction π^p_{i,t}, ω_{i,t}, λ_{i,t}, the stick-breaking weights β over the global topics φ̂_k, and the global topic-assignment indicators z_{i,n}. Similar to [4], the per-node topic parameters are marginalized out. We provide update equations cycling through {l_{i,n}, p_{i,t}, π^p_{i,t}, ω_{i,t}, λ_{i,t}} that are unique to this model. The update equations for b_i and {β, z_{i,n}} are similar to the ones in [4] and [18], respectively, which we do not reproduce here for brevity.

Sampling the change point p_{i,t}. Due to the non-conjugacy between the Poisson and multinomial distributions, the exact form of its posterior distribution is difficult to compute. Additionally, in order to sample p_{i,t}, we require imputation of an infinite-dimensional process. The implementation of the sampling algorithm either relies on finite approximations [10], which lead to straightforward update equations, or requires an additional Metropolis-Hastings (M-H) step, which allows us to obtain samples from the exact posterior distribution of p_{i,t} with no approximation, e.g., the retrospective sampler [14] proposed for Dirichlet process hierarchical models. In this section we first introduce the finite-approximation-based sampler; the retrospective-sampling-based method is described in the supplemental material.

Denote by P the truncated maximum value of the change point; then, given the samples of all other latent variables, p_{i,t} can be sampled from the following equation:

$$q(p_{i,t} = p \mid \lambda_{i,t}, \pi^p_{i,t}, \omega_{i,t}, l_{i,t}) \propto p(p_{i,t} = p \mid \lambda_{i,t}, P)\; p(l_{i,t} \mid \pi^p_{i,t}, \omega_{i,t}), \qquad 0 \le p \le P \qquad (5)$$

where l_{i,t} = {l_{i,n} : t(i, n) = t} are all layer allocations of choices made by individual i at time t, $p(p_{i,t} = p \mid \lambda_{i,t}, P) = \frac{\lambda_{i,t}^{p} e^{-\lambda_{i,t}}}{p!\, C_P}$ is the Poisson density function truncated with p ≤ P, with $C_P = \sum_{p=1}^{P} \frac{\lambda_{i,t}^{p} e^{-\lambda_{i,t}}}{p!}$, and $p(l_{i,t} \mid \pi^p_{i,t}, \omega_{i,t}) = \mathrm{Mult}(l_{i,t} \mid \{\omega_{i,t}\,\pi^{p\uparrow}_{i,t}, (1-\omega_{i,t})\,\pi^{p\downarrow}_{i,t}\})$ is the multinomial density function over the layer allocations l_{i,t}.
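As an illustration of the finite-truncation update (5), the following Python sketch (ours) scores each candidate change point by the Poisson prior times the multinomial likelihood of the observed layer counts, reusing the cpsbp_weights helper sketched above; because the scores are renormalized over 0, ..., P, the truncation constant C_P cancels and scipy's untruncated poisson.logpmf suffices.

    import numpy as np
    from scipy.stats import poisson

    def sample_change_point(lam, layer_counts, V, omega, P, rng):
        # log q(p) = log Poi(p | lam) + sum_l N_l log pi_l(p), up to a constant
        log_post = np.empty(P + 1)
        for p in range(P + 1):
            pi = cpsbp_weights(p, V, omega)
            log_post[p] = poisson.logpmf(p, lam) \
                          + np.sum(layer_counts * np.log(pi + 1e-300))
        post = np.exp(log_post - log_post.max())   # normalize in a stable way
        return rng.choice(P + 1, p=post / post.sum())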
Sampling the choice layer allocation l_{i,n}. Given all the other variables, we now sample the layer allocation l_{i,n} for the nth choice made by individual i. Denote by c_{i,n} the nth choice made by individual i, and by M_{z_{i,n}, c_{i,n}} = #[z_{−(i,n)} = z_{i,n}, c_{−(i,n)} = c_{i,n}] + η the smoothed count of seeing choice c_{i,n} allocated to global topic z_{i,n}, excluding the current choice. The parameter l_{i,n} can be sampled from the following equation:

$$p(l_{i,n} = l \mid p_{i,t} = p, z, \pi^p_{i,t}, \omega_{i,t}, c) \propto \begin{cases} \omega_{i,t}\,\pi^{p\uparrow}_{i,t}(l)\, M_{z_{i,n},c_{i,n}}, & 0 < l \le p \\ (1-\omega_{i,t})\,\pi^{p\downarrow}_{i,t}(l)\, M_{z_{i,n},c_{i,n}}, & p < l \le P \end{cases}$$

Sampling the product-of-gammas construction τ_{i,t}. From (1), note that the temporally dependent intensity parameter λ_{i,t} can be reconstructed from the gamma-distributed variables τ_{i,t}, which in turn can be sampled directly from their posterior distribution given all other variables, due to the conjugacy of the product-of-gammas and Poisson construction. Denoting $\lambda^{(t)}_{i,l} = \prod_{j=1, j \ne t}^{l} \tau_{i,j}$, we have:

$$p(\tau_{i,t} \mid \{p_{i,t}\}_{t=1}^{T}, a_{i,t}, b_{i,t}) = \mathrm{Ga}\!\left(\tau_{i,t} \,\Big|\, a_{i,t} + \sum_{l=t}^{T} p_{i,l},\; b_{i,t} + \sum_{l=t}^{T} \lambda^{(t)}_{i,l}\right)$$

Sampling the cpSBP parameters {π^p_{i,t}, ω_{i,t}}. Given the change points p_{i,t} and choice layer allocations l_{i,t} = {l_{i,n} : t(i, n) = t}, the cpSBP parameters π^p_{i,t} = {π^{p↑}_{i,t}, π^{p↓}_{i,t}} can be reconstructed based on samples of V_h as defined in (2). Specifically, we have

$$p(V_h \mid p_{i,t} = p, l_{i,t}) = \begin{cases} \mathrm{Beta}\!\left(V_h \,\big|\, a_h + N_{h,t},\; b_h + \sum_{l=1}^{h-1} N_{l,t}\right) & \text{if } h \le p \\ \mathrm{Beta}\!\left(V_h \,\big|\, a_h + N_{h,t},\; b_h + \sum_{l=h+1}^{\max l_{i,t}} N_{l,t}\right) & \text{if } h > p \end{cases}$$

where N_{l,t} = #[l_{i,n} = l, t(i, n) = t] records the number of times a choice made by individual i in time interval t is allocated to path layer l. Given the samples of the other variables, ω_{i,t} is sampled from its full conditional posterior distribution:

$$p(\omega_{i,t} \mid p_{i,t} = p, \{l_{i,n} : t(i,n) = t\}) = \mathrm{Beta}\!\left(\omega_{i,t} \,\Big|\, 1 + \sum_{l=1}^{p} N_{l,t},\; 1 + \sum_{l=p+1}^{\max l_{i,t}} N_{l,t}\right)$$

Sampling the hyperparameters. Concerning the hyperparameters γ, α, γ₀, and η, related to the stick-breaking processes and the hierarchical topic model construction, we sample them within the inference process by placing prior distributions over them, similar to the methods in [4]. One may also consider other alternatives for learning the hyperparameters within topic models [19]. For the hyperparameters a_{i,l}, b_{i,l} in the product-of-gammas construction, we sample them as proposed in [3]. Finally, we fix a_ω = 1 and sample b_ω by placing a gamma prior distribution on it. All these steps are done with an M-H step between iterations of the Gibbs sampler.

4 Analysis of student course selection

4.1 Data description and computations

We demonstrate the proposed model on real data by considering selections of classes made by undergraduate students at Duke University, for students in graduating classes 2009 to 2013; the data consist of class selections of all students from Fall 2005 to Spring 2011. For computational reasons, the cpSBP and SBP employed over the tree-path depth are truncated to a maximum of 10 layers (beyond this depth the number of topics employed by the data was minimal), while the number of children of each parent node is allowed to be unbounded. Within the sampler, we ran the model based on the class-selection records of students from the classes of 2009 and 2010 (a total of 3382 students and 2756 unique classes), and collected 200 samples after burn-in, taking every fifth sample, to approximate the posterior distribution over the latent tree structure as well as the topic on each node of the tree. We analyze the quality of the learned models using the remaining data (classes of 2011-2013), characterized by 5171 students and 2972 unique classes.
Each topic is a probability vector defined over the 3015 classes offered across all years. Within the MCMC inference procedure we trained our model as follows: first, we fixed the change point p_{i,t} = t and ran the sampler for 100 iterations; we then burned in the inference for 5000 iterations with p_{i,t} updated, before drawing 5000 samples from the full posterior.

4.2 Quantitative assessment

| Model       | # Topics | # Nodes | Predictive LL (11) | Predictive LL (11-13) |
| nCRP        | 492±11   | 492±11  | -293226.8399       | -471736.8876          |
| cpSBP-nCRP  | 973±37   | 973±37  | -290271.3576       | -469912.1120          |
| DP-nCRP     | 318±26   | 521±41  | -292311.3971       | -471951.3452          |
| Full model  | 367±32   | 961±44  | -288511.4298       | -468331.2990          |

Table 1: Predictive log-likelihood comparison on two datasets, given the mean number of topics and nodes learned, with rounded standard deviations. nCRP is the model proposed in [4]. Compared to nCRP, cpSBP-nCRP replaces the SBP with the proposed cpSBP, while in DP-nCRP the topic for each node is drawn from a Dirichlet process (DP) instead of a Dirichlet distribution, retaining the SBP construction of the nCRP. The full model uses both the cpSBP and the DP. Results are shown for the class of 2011, and for the classes of 2011-2013.

Figure 2: Histograms of class layer allocations according to their time covariates (counts in units of 10⁴; legend: freshman, sophomore, junior, senior; horizontal axis: layer 1-10). Left: stick-breaking process. Right: change point stick-breaking process.

In this section we examine the model's ability to explain unseen data. For consistency of comparison we computed the predictive log-likelihood based on samples collected in the same way as [4] (alternative means of evaluating topic models are discussed in [20]). We test the model using two different compositions of the data. The first is based on the class-selection history of students from the class of 2011 (1696 students), where all 4 years of records are available. The second is based on the class-selection history of students from the classes of 2011 to 2013 (3475 students), where for the later two years only partial course-selection information is available; e.g., for students from the class of 2013 only class selections made in freshman year are available. Additionally, we compare the different models with respect to the learned number of topics and the learned number of tree nodes. This comparison is an indicator of the level of "parsimony" of the proposed model, introduced by replacing independent draws of topics from a Dirichlet distribution with draws from a Dirichlet process (with a Dirichlet distribution base), as explained in Section 2.3. Since the number of tree nodes grows exponentially with the number of tree layers, from a practical viewpoint sharing topics among the nodes saves memory used to store the topic vectors, whose dimension is typically large (here, the number of classes: 3015). In addition to this insight, as the experimental results indicate, sharing topics among different nodes can enhance the sharing of statistical strength, which leads to better predictive performance. The results are summarized in Table 1.
We hypothesize that the enhanced performance of the proposed model in explaining the unseen data is also due to its improved ability to capture the latent predictive statistical structure, e.g., to capture the latent temporal dynamics within the data via the change point stick breaking process (cpSBP). To demonstrate this point, in Figure 2 we compare how the cpSBP and the SBP guided the class layer allocations, which have associated time covariates (e.g., the academic year of each student). From Figure 2 we observe that under the cpSBP, as the students' academic careers advance, they are more likely to choose classes from topics deeper in the tree, while such a pattern is less obvious in the SBP case. Further, the cpSBP encouraged the data to utilize more layers of the tree than the SBP.

Figure 3: Change of the proportion of important majors along the layers of two paths which share their nodes up to the second layer. These two paths correspond to the full versions (all 7 layers) of the top two paths in Figure 4. BME: Biomedical Engineering, POLI: Political Science, ECON: Economics, CS: Computer Science, BIO: Biology, PPS: Public Policy Science, ME: Mechanical Engineering, OTHER: other 73 majors.

4.3 Analyzing the learned tree

With the incorporation of time covariates, we examine whether the uncovered hierarchical structure is consistent with the actual curriculum of students from their freshman to senior year, and we consider two analyses here. The first is a visualization of a subtree learned from the class-selection history, based on students of the class of 2009, as shown in Figure 4; shown are the most-probable classes in each topic, as well as a histogram of the covariates (1 to 4, for freshman through senior) of the students who employed the topic. For example, the topics on the top two layers correspond to the most popular classes selected by mechanical engineering and computer science students, respectively, while topics located to the right correspond to more advanced classes; at the left-most position, the root topic corresponds to classes required of all students (e.g., academic writing). The tree-structured hierarchy captured the general trend of class selection within and across different majors. In Figure 4 we also highlight, in red, a topic shared by two nodes. This topic corresponds to a set of general introductory classes which are popular (high attendance) for two types of students: (i) young students who take these classes early in preparation for future advanced studies, and (ii) students who need to fill elective requirements later in their academic career ("ideally" of an easy/elementary nature, so as not to "distract" from the required classes of the major). It is therefore deemed interesting that these same classes seem to be preferred by young and old students, for apparently very different reasons. Note that the sharing of topics between nodes of different layers is a unique aspect of this model, not possible in [7, 13, 6]. In the second analysis we examine how the majors of students are distributed in the learned tree; the ideal case would be that each tree path corresponds to an academic major, and the nodes shared by paths manifest sharing of topics between different but related majors.
In Figure 3 we show the change in the proportions of different majors along the layers of the top two paths in Figure 4 (this is a zoom-in of a much larger tree). For a clear illustration, we show the seven most popular majors for these paths as a function of time (out of a total of 80 majors), and the remaining 73 majors are grouped together. We observe that students with mechanical engineering (ME) majors share the node on the second layer with students with a computer science (CS) major, and the layers deeper in the tree begin to be exclusive to students with CS and ME majors, respectively. This corresponds to the process of a student determining her major by choosing courses as she walks down a tree path. It also matches the fact that at this university students declare their major during the sophomore year.
Figure 4: A subtree of topics learned from courses chosen by undergraduate students of the class of 2009; the whole tree has 372 nodes and 252 topics, and a maximum of 7 layers. Each node shows two aggregated statistics: the eight most common classes of the topic on that node, and a histogram of the academic years in which the topic was selected by students (the columns in each histogram correspond to freshman through senior, left to right). The two highlighted red nodes share the same topic. These results correspond to one (maximum-likelihood) collection sample.

5 Discussion

We have extended hierarchical topic models to an important problem that has received limited attention to date: the evolution of personal choices over time. The proposed approach builds upon the nCRP [4], but introduces novel modeling components to address the problem of interest. Specifically, we develop a change-point stick-breaking process, coupled with a product-of-gammas and Poisson construction, that encourages individuals to be represented by nodes deeper in the tree as time evolves. The Dirichlet process has also been used to design the node-dependent topics, sharing strength and inferring relationships between the choices of different people over time. The framework has been successfully demonstrated on a real-world data set: selections of courses over many years by students at Duke University. Although we worked on only one specific real-world data set, there are many other examples for which such a model may be of interest, especially when the data correspond to a sparse set of choices over time. For example, it could be useful for companies attempting to understand the purchases (choices) of customers as a function of time (e.g., the clothing choices of people as they advance from their teen years to adulthood). This may be of interest in marketing and targeted advertisement.

Acknowledgements. We would like to thank the anonymous reviewers for their insightful comments. The research reported here was supported by AFOSR, ARO, DARPA, DOE, NGA and ONR.

References
[1] R. P. Adams, Z. Ghahramani, and M. I. Jordan. Tree-structured stick breaking for hierarchical data. In Neural Information Processing Systems (NIPS), 2010.
[2] E. Bart, I. Porteous, P. Perona, and M. Welling. Unsupervised learning of visual taxonomies. In CVPR, 2008.
[3] A. Bhattacharya and D. B. Dunson. Sparse Bayesian infinite factor models. Biometrika, 2011.
[4] D. M. Blei, T. L. Griffiths, and M. I. Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 57(2), 2010.
[5] D. M. Blei, T. L. Griffiths, M. I. Jordan, and J. B. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In Neural Information Processing Systems (NIPS), 2004.
[6] A. Chambers, P. Smyth, and M. Steyvers.
Learning concept graphs from text with stick-breaking priors. In Advances in Neural Information Processing Systems (NIPS), 2010.
[7] H. Chen, D. B. Dunson, and L. Carin. Topic modeling with nonparametric Markov tree. In Proc. Int. Conf. Machine Learning (ICML), 2011.
[8] T. L. Griffiths, M. Steyvers, and J. B. Tenenbaum. Topics in semantic representation. Psychological Review, 114(2):211–244, 2007.
[9] C. Hans and D. B. Dunson. Bayesian inferences on umbrella orderings. Biometrics, 61:1018–1026, 2005.
[10] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161–173, 2001.
[11] L. Li, C. Wang, Y. Lim, D. Blei, and L. Fei-Fei. Building and using a semantivisual image hierarchy. In CVPR, 2010.
[12] W. Li and A. McCallum. Pachinko allocation: DAG-structured mixture models of topic correlations. In Proc. Int. Conf. Machine Learning (ICML), 2006.
[13] D. Mimno, W. Li, and A. McCallum. Mixtures of hierarchical topics with Pachinko allocation. In Proc. Int. Conf. Machine Learning (ICML), 2007.
[14] O. Papaspiliopoulos and G. O. Roberts. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95(1):169–186, 2008.
[15] R. Salakhutdinov, J. Tenenbaum, and A. Torralba. One-shot learning with a hierarchical nonparametric Bayesian model. MIT Technical Report, 2011.
[16] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[17] J. Sivic, B. C. Russell, A. Zisserman, W. T. Freeman, and A. A. Efros. Unsupervised discovery of visual object class hierarchies. In CVPR, 2008.
[18] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[19] H. M. Wallach, D. Mimno, and A. McCallum. Rethinking LDA: Why priors matter. In Neural Information Processing Systems (NIPS), 2009.
[20] H. M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In Proc. Int. Conf. Machine Learning (ICML), 2009.
[21] C. Wang and D. M. Blei. Variational inference for the nested Chinese restaurant process. In Neural Information Processing Systems (NIPS), 2009.
[22] X. Zhang, D. B. Dunson, and L. Carin. Tree-structured infinite sparse factor model. In Proc. Int. Conf. Machine Learning (ICML), 2011.
Better Mini-Batch Algorithms via Accelerated Gradient Methods

Andrew Cotter, Toyota Technological Institute at Chicago, [email protected]
Ohad Shamir, Microsoft Research, NE, [email protected]
Nathan Srebro, Toyota Technological Institute at Chicago, [email protected]
Karthik Sridharan, Toyota Technological Institute at Chicago, [email protected]

Abstract

Mini-batch algorithms have been proposed as a way to speed up stochastic convex optimization problems. We study how such algorithms can be improved using accelerated gradient methods. We provide a novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a significant speed-up, and propose a novel accelerated gradient algorithm, which deals with this deficiency, enjoys a uniformly superior guarantee and works well in practice.

1 Introduction

We consider a stochastic convex optimization problem of the form min_{w∈W} L(w), where L(w) = E_z[ℓ(w, z)], based on an empirical sample of instances z₁, ..., z_m. We assume that W is a convex subset of some Hilbert space (which in this paper, we will take to be Euclidean space), and ℓ is non-negative, convex and smooth in its first argument (i.e., has a Lipschitz-continuous gradient). The classical learning application is when z = (x, y) and ℓ(w, (x, y)) is a prediction loss. In recent years, there has been much interest in developing efficient first-order stochastic optimization methods for these problems, such as mirror descent [2, 6] and dual averaging [9, 16]. These methods are characterized by incremental updates based on subgradients ∂ℓ(w, z_i) of individual instances, and enjoy the advantages of being highly scalable and simple to implement.

An important limitation of these methods is that they are inherently sequential, and so problematic to parallelize. A popular way to speed up these algorithms, especially in a parallel setting, is via mini-batching, where the incremental update is performed on an average of the subgradients with respect to several instances at a time, rather than a single instance (i.e., $\frac{1}{b}\sum_{j=1}^{b}\partial\ell(w, z_{i+j})$). The gradient computations for each mini-batch can be parallelized, allowing these methods to perform faster in a distributed framework (see for instance [11]). Recently, [10] has shown that a mini-batching distributed framework is capable of attaining asymptotically optimal speed-up in general (see also [1]).

A parallel development has been the popularization of accelerated gradient descent methods [7, 8, 15, 5]. In a deterministic optimization setting and for general smooth convex functions, these methods enjoy a rate of O(1/n²) (where n is the number of iterations), as opposed to O(1/n) using standard methods. However, in a stochastic setting (which is the relevant one for learning problems), the rates of both approaches have an O(1/√n) dominant term in general, so the benefit of using accelerated methods for learning problems is not obvious.

Algorithm 1 Stochastic Gradient Descent with Mini-Batching (SGD)
  Parameters: Step size η, mini-batch size b.
  Input: Sample z₁, ..., z_m
  w₁ = 0
  for i = 1 to n = m/b do
    Let ℓ̂_i(w) := (1/b) Σ_{t=b(i−1)+1}^{bi} ℓ(w, z_t)
    w'_{i+1} := w_i − η ∇ℓ̂_i(w_i)
    w_{i+1} := P_W(w'_{i+1})
  end for
  Return w̄ = (1/n) Σ_{i=1}^{n} w_i

Algorithm 2 Accelerated Gradient Method (AG)
  Parameters: Step sizes (β_i, γ_i), mini-batch size b
  Input: Sample z₁, ..., z_m
  w₁ = w₁^ag = 0
  for i = 1 to n = m/b do
    Let ℓ̂_i(w) := (1/b) Σ_{t=b(i−1)+1}^{bi} ℓ(w, z_t)
    w_i^md := β_i^{−1} w_i + (1 − β_i^{−1}) w_i^ag
    w'_{i+1} := w_i^md − γ_i ∇ℓ̂_i(w_i^md)
    w_{i+1} := P_W(w'_{i+1})
    w_{i+1}^ag := β_i^{−1} w_{i+1} + (1 − β_i^{−1}) w_i^ag
  end for
  Return w_n^ag

In this paper, we study the application of accelerated methods to mini-batch algorithms, and provide theoretical results, a novel algorithm, and empirical experiments. The main resulting message is that by using an appropriate accelerated method, we obtain significantly better stochastic optimization algorithms in terms of convergence speed. Moreover, in certain regimes acceleration is actually necessary in order to allow significant speed-ups. The potential benefit of acceleration to mini-batching has been briefly noted in [4], but here we study this issue in much more depth. In particular, we make the following contributions:

• We develop novel convergence bounds for the standard gradient method, which refine the result of [10, 4] by being dependent on L(w*), the expected loss of the best predictor in our class. For example, we show that in the regime where the desired suboptimality is comparable to or larger than L(w*), including in the separable case L(w*) = 0, mini-batching does not lead to significant speed-ups with standard gradient methods.
• We develop a novel variant of the stochastic accelerated gradient method [5], which is optimized for a mini-batch framework and implicitly adaptive to L(w*).
• We provide an analysis of our accelerated algorithm, refining the analysis of [5] by being dependent on L(w*), and show how it always allows for significant speed-ups via mini-batching, in contrast to standard gradient methods. Moreover, its performance is uniformly superior, at least in terms of theoretical upper bounds.
• We provide an empirical study, validating our theoretical observations and the efficacy of our new method.

2 Preliminaries

As discussed in the introduction, we focus on a stochastic convex optimization problem, where we wish to minimize L(w) = E_z[ℓ(w, z)] over some convex domain W, using an i.i.d. sample z₁, ..., z_m. Throughout this paper we assume that the instantaneous loss ℓ(·, z) is convex, non-negative and H-smooth for each z ∈ Z. Also in this paper, we take W to be the set W = {w : ‖w‖ ≤ D}, although our results can be generalized.
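Algorithm 1 can be instantiated in a few lines; the following Python sketch (ours, not the authors' code) assumes grad(w, z) returns the gradient of the instantaneous loss at a single instance z, and all other names are hypothetical.

    import numpy as np

    def project_ball(w, D):
        # P_W: rescale w to have norm at most D
        norm = np.linalg.norm(w)
        return w if norm <= D else (D / norm) * w

    def minibatch_sgd(grad, samples, b, eta, D, dim):
        # Each iteration averages the instance gradients over one mini-batch
        # of size b, takes a step of size eta, and projects back onto the
        # ball; the average of the iterates w_1, ..., w_n is returned.
        w, w_sum = np.zeros(dim), np.zeros(dim)
        n = len(samples) // b
        for i in range(n):
            w_sum += w
            batch = samples[i * b:(i + 1) * b]
            g = np.mean([grad(w, z) for z in batch], axis=0)
            w = project_ball(w - eta * g, D)
        return w_sum / n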
To understand the differences between these two methods better, we need a more refined analysis, to which we now turn.

3 Convergence Guarantees

The following theorems provide refined convergence guarantees for the SGD and AG algorithms, improving on the analyses of [10, 4, 5] by being explicitly dependent on $L(w^*)$, the expected loss of the best predictor $w^*$ in $\mathcal{W}$. Due to lack of space, the proofs are only sketched; the full proofs are deferred to the supplementary material.

Theorem 1. For the Stochastic Gradient Descent algorithm with
$$\eta \;=\; \min\left\{\frac{1}{2H},\; \sqrt{\frac{bD^2}{L(w^*)Hn}}\cdot\frac{1}{1+\sqrt{HD^2/(L(w^*)bn)}}\right\},$$
assuming $L(0)\le HD^2$, we get that
$$\mathbb{E}[L(\bar w)] - L(w^*) \;\le\; 2\sqrt{\frac{HD^2L(w^*)}{bn}} + \frac{2HD^2}{n} + \frac{9HD^2}{bn}.$$

Theorem 2. For the Accelerated Gradient algorithm with $\beta_i = \frac{i+1}{2}$ and $\gamma_i = \gamma\, i^p$, where
$$\gamma \;=\; \min\left\{\frac{1}{4H},\;\left(\frac{bD^2}{412\,HL(w^*)(n-1)^{2p+1}}\right)^{\frac{p+1}{2p+1}},\;\left(\frac{b\tilde D^2}{1044\,H(n-1)^{2p}}\right)^{\frac{p}{2p+1}}\right\},\qquad \tilde D^2 = \frac{D^2}{4HD^2 + 4\sqrt{HD^2L(w^*)}},$$
and
$$p \;=\; \min\left\{\max\left\{\frac{\log b}{2\log(n-1)},\;\frac{\log\log n}{2\left(\log(b(n-1)) - \log\log n\right)}\right\},\, 1\right\}, \qquad (1)$$
then, as long as $n \ge 904$, we have
$$\mathbb{E}[L(w_n^{ag})] - L(w^*) \;\le\; 358\sqrt{\frac{HD^2L(w^*)}{b(n-1)}} + \frac{1545HD^2}{b(n-1)} + \frac{1428HD^2\log n}{b(n-1)} + \frac{4HD^2}{(n-1)^2}. \qquad (2)$$

We emphasize that Theorem 2 gives more than a theoretical bound: it actually specifies a novel accelerated gradient strategy, in which the step size $\gamma_i$ scales polynomially in $i$, in a way that depends on the mini-batch size $b$ and on $L(w^*)$. While $L(w^*)$ may not be known in advance, the practical implication is that choosing $\gamma_i \propto i^p$ for some $p < 1$, as opposed to simply choosing $\gamma_i \propto i$ as in [5], might yield superior results.
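The exponent of eq. (1) is easy to compute; below is a small helper, a sketch rather than the authors' code. The base constant gamma is left free, since its theoretical value in Theorem 2 depends on H, D and the usually unknown L(w*) (the experiments in Section 5 tune it on a validation set instead).

    import math

    def theorem2_p(b, n):
        # Step-size exponent p from eq. (1); Theorem 2 assumes n >= 904.
        a = math.log(b) / (2.0 * math.log(n - 1))
        c = math.log(math.log(n)) / (
            2.0 * (math.log(b * (n - 1)) - math.log(math.log(n))))
        return min(max(a, c), 1.0)

    def ag_schedule(b, n, gamma):
        # (gamma_i, beta_i) pairs for i = 1..n-1, with gamma_i = gamma * i**p
        # and beta_i = (i + 1) / 2, as prescribed by Theorem 2.
        p = theorem2_p(b, n)
        return [(gamma * i ** p, (i + 1) / 2.0) for i in range(1, n)]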
The key observation used for analyzing the dependence on $L(w^*)$ is that for any non-negative $H$-smooth convex function $f:\mathcal{W}\to\mathbb{R}$, we have [13]
$$\|\nabla f(w)\| \;\le\; \sqrt{4Hf(w)}. \qquad (3)$$
This self-bounding property tells us that the norm of the gradient is small at a point if the loss itself is small at that point. The self-bounding property has been used in [14] in the online setting and in [13] in the stochastic setting to obtain better (faster) rates of convergence for non-negative smooth losses. The implications of this observation are that for any $w\in\mathcal{W}$, $\|\nabla L(w)\|\le\sqrt{4HL(w)}$, and that for all $z\in\mathcal{Z}$, $\|\nabla\ell(w,z)\|\le\sqrt{4H\ell(w,z)}$.

Proof sketch for Theorem 1. The proof of the stochastic gradient descent bound is mainly based on the proof techniques of [5] and their extension to the mini-batch case in [10]. Following the line of analysis in [5], one can show that
$$\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}L(w_i)\right] - L(w^*) \;\le\; \frac{\eta}{n-1}\sum_{i=1}^{n-1}\mathbb{E}\left[\left\|\nabla L(w_i)-\hat\nabla_i(w_i)\right\|^2\right] + \frac{D^2}{2\eta(n-1)}.$$
In [5], $\mathbb{E}[\|\nabla L(w_i)-\hat\nabla_i(w_i)\|^2]$ is bounded by the variance, and that leads to the final bound provided in [5] (by setting $\eta$ appropriately). As noticed in [10], in the mini-batch setting we have $\hat\nabla_i(w_i)=\frac{1}{b}\sum_{t=b(i-1)+1}^{bi}\nabla\ell(w_i,z_t)$, and so one can further show that
$$\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}L(w_i)\right] - L(w^*) \;\le\; \frac{\eta}{b^2(n-1)}\sum_{i=1}^{n-1}\sum_{t=(i-1)b+1}^{ib}\mathbb{E}\left[\left\|\nabla L(w_i)-\nabla\ell(w_i,z_t)\right\|^2\right] + \frac{D^2}{2\eta(n-1)}. \qquad (4)$$
In [10], each $\|\nabla L(w_i)-\nabla\ell(w_i,z_t)\|$ is bounded by a constant, and setting $\eta$ accordingly yields the mini-batch bound provided there. In our analysis, we instead apply the self-bounding property to (4) and get
$$\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}L(w_i)\right] - L(w^*) \;\le\; \frac{16H\eta}{b(n-1)}\sum_{i=1}^{n-1}\mathbb{E}[L(w_i)] + \frac{D^2}{2\eta(n-1)};$$
rearranging and setting $\eta$ appropriately gives the final bound.

Proof sketch for Theorem 2. The proof for the accelerated method starts in a similar way as in [5]. For the $\gamma_i$'s and $\beta_i$'s given in the theorem, following similar lines of analysis as in [5] we get the preliminary bound
$$\mathbb{E}[L(w_n^{ag})] - L(w^*) \;\le\; \frac{2\gamma}{(n-1)^{p+1}}\sum_{i=1}^{n-1} i^{2p}\,\mathbb{E}\left[\left\|\nabla L(w_i^{md})-\hat\nabla_i(w_i^{md})\right\|^2\right] + \frac{D^2}{\gamma(n-1)^{p+1}}.$$
In [5], the step sizes are $\gamma_i=\gamma(i+1)/2$ and $\beta_i=(i+1)/2$, which effectively amounts to $p=1$, and the analysis then proceeds similarly to the stochastic gradient descent analysis. Furthermore, each $\mathbb{E}[\|\nabla L(w_i^{md})-\hat\nabla_i(w_i^{md})\|^2]$ is assumed to be bounded by some constant, and this leads to the final bound provided in [5] by setting $\gamma$ appropriately. On the other hand, we first notice that, due to the mini-batch setting, just as in the proof for stochastic gradient descent,
$$\mathbb{E}[L(w_n^{ag})] - L(w^*) \;\le\; \frac{2\gamma}{b^2(n-1)^{p+1}}\sum_{i=1}^{n-1} i^{2p}\sum_{t=b(i-1)+1}^{ib}\mathbb{E}\left[\left\|\nabla L(w_i^{md})-\nabla\ell(w_i^{md},z_t)\right\|^2\right] + \frac{D^2}{\gamma(n-1)^{p+1}}.$$
Using smoothness, the self-bounding property and some manipulations, we can further get the bound
$$\mathbb{E}[L(w_n^{ag})] - L(w^*) \;\le\; \frac{64H\gamma}{b(n-1)^{1-p}}\sum_{i=1}^{n-1}\left(\mathbb{E}[L(w_i^{ag})] - L(w^*)\right) + \frac{D^2}{\gamma(n-1)^{p+1}} + \frac{64H\gamma L(w^*)(n-1)^p}{b} + \frac{32HD^2}{b(n-1)}.$$
Notice that the above recursively bounds $\mathbb{E}[L(w_n^{ag})] - L(w^*)$ in terms of $\sum_{i=1}^{n-1}(\mathbb{E}[L(w_i^{ag})] - L(w^*))$. While unrolling the recursion all the way down does not help, we notice that for any $w\in\mathcal{W}$, $L(w) - L(w^*) \le 12HD^2 + 3L(w^*)$. Hence we unroll the recursion for $M$ steps and use this inequality for the remaining sum. Optimizing over the number of steps up to which we unroll, and also over the choice of $\gamma$, we get the bound
$$\mathbb{E}[L(w_n^{ag})] - L(w^*) \;\le\; \sqrt{\frac{1648HD^2L(w^*)}{b(n-1)}} + \frac{4HD^2}{(n-1)^{p+1}} + \frac{348\,(6HD^2+2L(w^*))}{(b(n-1))^{\frac{p}{p+1}}} + \frac{36HD^2\log n}{b(n-1)} + \frac{32HD^2}{(b(n-1))^{\frac{p+1}{2p+1}}}.$$
Using the $p$ given in the theorem statement, and a few simple manipulations, gives the final bound.

4 Optimizing with Mini-Batches

To compare our two theorems and understand their implications, it will be convenient to treat $H$ and $D$ as constants, and to focus on the more interesting parameters: the sample size $m$, the mini-batch size $b$, and the optimal expected loss $L(w^*)$. Also, we will ignore the logarithmic factor in Theorem 2, since we are mostly interested in significant (i.e. polynomial) differences between the two algorithms, and it is quite possible that this logarithmic factor is merely an artifact of our analysis. Using $m = nb$, the bound for the SGD algorithm becomes
$$\mathbb{E}[L(\bar w)] - L(w^*) \;\le\; O\!\left(\sqrt{\frac{L(w^*)}{bn}} + \frac{1}{n}\right) \;=\; O\!\left(\sqrt{\frac{L(w^*)}{m}} + \frac{b}{m}\right), \qquad (5)$$
and the bound for the accelerated gradient method we propose is
$$\mathbb{E}[L(w_n^{ag})] - L(w^*) \;\le\; O\!\left(\sqrt{\frac{L(w^*)}{bn}} + \frac{1}{\sqrt{b}\,n} + \frac{1}{n^2}\right) \;=\; O\!\left(\sqrt{\frac{L(w^*)}{m}} + \frac{\sqrt{b}}{m} + \frac{b^2}{m^2}\right). \qquad (6)$$
To understand the implications of these bounds, we follow the approach described in [3, 12] for analyzing large-scale learning algorithms. First, we fix a desired suboptimality parameter ε, which measures how close to $L(w^*)$ we want to get. Then, we assume that both algorithms are run until the suboptimality of their outputs is at most ε. Our goal is to understand the runtime each algorithm needs to attain suboptimality ε, as a function of $L(w^*)$, ε and $b$. To measure this runtime, we need to discern two settings: a parallel setting, where we assume that the mini-batch gradient computations are performed in parallel, and a serial setting, where the gradient computations are performed one after the other. In the parallel setting, we can take the number of iterations $n$ as a rough measure of the runtime (note that in both algorithms, the runtime of a single iteration is comparable). In the serial setting, the relevant parameter is $m$, the number of data accesses. To analyze the dependence on $m$ and $n$, we upper bound (5) and (6) by ε and invert them to get bounds on $m$ and $n$. Ignoring logarithmic factors, for the SGD algorithm we get
$$n \;\le\; O\!\left(\frac{1}{\epsilon}\left(\frac{L(w^*)}{b\epsilon} + 1\right)\right) \qquad m \;\le\; O\!\left(\frac{1}{\epsilon}\left(\frac{L(w^*)}{\epsilon} + b\right)\right), \qquad (7)$$
and for the AG algorithm we get
$$n \;\le\; O\!\left(\frac{1}{\epsilon}\left(\frac{L(w^*)}{b\epsilon} + \frac{1}{\sqrt{b}} + \sqrt{\epsilon}\right)\right) \qquad m \;\le\; O\!\left(\frac{1}{\epsilon}\left(\frac{L(w^*)}{\epsilon} + \sqrt{b} + b\sqrt{\epsilon}\right)\right). \qquad (8)$$
First, let us compare the performance of the two algorithms in the parallel setting, where the relevant parameter for measuring runtime is $n$. Analyzing which of the terms in each bound dominates, we find that for the SGD algorithm there are 2 regimes, while for the AG algorithm there are 2-3 regimes, depending on the relationship between $L(w^*)$ and ε. The following tables summarize the situation (again, ignoring constants):

  SGD Algorithm
    Regime                                n
    b ≤ √(L(w*) m)                        L(w*) / (ε² b)
    b ≥ √(L(w*) m)                        1/ε

  AG Algorithm (case ε ≥ L(w*)²)
    Regime                                n
    b ≤ L(w*) m                           L(w*) / (ε² b)
    L(w*) m ≤ b ≤ m^{2/3}                 1 / (ε √b)
    b ≥ m^{2/3}                           1 / √ε

  AG Algorithm (case ε ≤ L(w*)²)
    Regime                                n
    b ≤ L(w*)^{1/4} m^{3/4}               L(w*) / (ε² b)
    b ≥ L(w*)^{1/4} m^{3/4}               1 / √ε

From the tables, we see that for both methods there is an initial linear speedup as a function of the mini-batch size $b$. However, for the AG algorithm this linear-speedup regime holds for much larger mini-batch sizes (it is easily verified that √(L(w*) m) is generally smaller than both L(w*)^{1/4} m^{3/4} and L(w*) m). Even beyond the linear-speedup regime, the AG algorithm still maintains a √b speedup, for the reasonable case where ε ≥ L(w*)². Finally, in all regimes, the runtime bound of the AG algorithm is equal to or significantly smaller than that of the SGD algorithm.

We now turn to discuss the serial setting, where the runtime is measured in terms of $m$. Inspecting (7) and (8), we see that a larger mini-batch size $b$ actually requires $m$ to increase for both algorithms. This is to be expected, since mini-batching does not lead to large gains in a serial setting. However, using mini-batching in a serial setting might still be beneficial for implementation reasons, resulting in constant-factor improvements in runtime (e.g. saving overhead and loop control, and via pipelining, concurrent memory accesses, etc.). In that case, we can at least ask what is the largest mini-batch size that won't degrade the runtime guarantee by more than a constant. Using our bounds, the mini-batch size $b$ for the SGD algorithm can scale as much as L(w*)/ε, vs. a larger value of L(w*)/ε^{3/2} for the AG algorithm.

Finally, an interesting point is that the AG algorithm is sometimes actually necessary in order to obtain significant speed-ups via a mini-batch framework (according to our bounds). Based on the tables above, this happens when the desired suboptimality ε is not much bigger than $L(w^*)$, i.e. ε = Θ(L(w*)). This includes the "separable" case $L(w^*)=0$, and in general the regime where the "estimation error" ε and the "approximation error" $L(w^*)$ are roughly the same, an arguably very relevant one in machine learning. For the SGD algorithm, the critical mini-batch value √(L(w*) m) can be shown to equal L(w*)/ε, which is O(1) in this case, so with SGD we get no non-constant parallel speedup. With AG, however, we still enjoy a speedup of at least Ω(√b), all the way up to mini-batch size b = m^{2/3}.

5 Experiments

We implemented both the SGD algorithm (Algorithm 1) and the AG algorithm (Algorithm 2, using step sizes of the form $\gamma_i = \gamma\, i^p$ as suggested by Theorem 2) on two publicly available binary classification problems, astro-physics and CCAT. We used the smoothed hinge loss $\ell(w;x,y)$, defined as $0.5 - yw^\top x$ if $yw^\top x \le 0$; $0$ if $yw^\top x > 1$; and $0.5(1-yw^\top x)^2$ otherwise.
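For reference, a sketch of this loss and its gradient; note that on the middle segment the Hessian is $y^2 x x^\top$, so the loss is H-smooth with H = ||x||², which is what makes it fit the assumptions of Section 2.

    import numpy as np

    def smoothed_hinge(w, x, y):
        # Smoothed hinge loss from Section 5: convex, non-negative, smooth.
        margin = y * np.dot(w, x)
        if margin <= 0:
            return 0.5 - margin
        if margin > 1:
            return 0.0
        return 0.5 * (1.0 - margin) ** 2

    def smoothed_hinge_grad(w, x, y):
        # Gradient in w; Lipschitz-continuous with constant ||x||^2.
        margin = y * np.dot(w, x)
        if margin <= 0:
            return -y * x
        if margin > 1:
            return np.zeros_like(w)
        return -(1.0 - margin) * y * x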
While both datasets are relatively easy to classify, we also wished to understand the algorithms' performance in the "separable" case $L(w^*)=0$, to see if the theory in Section 4 holds in practice. To this end, we created an additional version of each dataset, where $L(w^*)=0$, by training a classifier on the entire dataset and removing margin violations. In all of our experiments, we used up to half of the data for training, and one-quarter each for validation and testing. The validation set was used to determine the step sizes $\eta$ and $\gamma_i$. We justify this by noting that our goal is to compare the performance of the SGD and AG algorithms, independently of the difficulties in choosing their step sizes. In the implementation, we neglected the projection step, as we found it does not significantly affect performance when the step sizes are properly selected.

Figure 1: Left: test smoothed hinge loss, as a function of p, after training using the AG algorithm on 6361 examples from astro-physics, for various batch sizes (b = 4, 16, 64). Right: the same, for 18578 examples from CCAT (b = 16, 64, 256). In both datasets, margin violations were removed before training so that L(w*) = 0. The circled points are the theoretically-derived values p = ln b / (2 ln(n - 1)) (see Theorem 2).

In our first set of experiments, we attempted to determine the relationship between the performance of the AG algorithm and the parameter p, which determines the rate of increase of the step sizes $\gamma_i$. These experiments are summarized in Figure 1. Perhaps the most important conclusion to draw from these plots is that neither the "traditional" choice p = 1 nor the constant-step-size choice p = 0 gives the best performance in all circumstances. Instead, there is a complicated data-dependent relationship between p and the final classifier's performance. Furthermore, there appears to be a weak trend towards higher p performing better for larger mini-batch sizes b, which corresponds neatly with our theoretical predictions.

Figure 2: Test loss on astro-physics and CCAT as a function of mini-batch size b (in log scale), where the total amount of training data m = nb is held fixed. Curves correspond to different total sample sizes (T = 1949, 7796, 31884 on astro-physics; T = 1571, 25137, 402207 on CCAT). Solid lines and dashed lines are for SGD and AG respectively (for AG, we used p = ln b / (2 ln(n - 1)) as in Theorem 2). The upper row shows the smoothed hinge loss on the test set, using the original (uncensored) data. The bottom rows show the smoothed hinge loss and misclassification rate on the test set, using the modified data where L(w*) = 0. All curves are averaged over three runs.

In our next experiment, we directly compared the performance of SGD and AG. To do so, we varied the mini-batch size b while holding the total amount of training data (m = nb) fixed. When $L(w^*) > 0$ (top row of Figure 2), and the total sample size m is high while the suboptimality ε is low (the red and black plots), we see that for small mini-batch sizes both methods do not degrade as we increase b, corresponding to a linear parallel speedup. In fact, SGD is actually better overall, but as b increases its performance degrades more quickly, eventually becoming worse than AG. That is, even in the least favorable scenario for AG (high L(w*) and small ε; see the tables in Sec. 4), it does give benefits with large enough mini-batch sizes. Further, we see that once the suboptimality ε is roughly equal to L(w*), AG significantly outperforms SGD, even with small mini-batches, agreeing with the theory. Turning to the case $L(w^*)=0$ (bottom two rows of Figure 2), which is theoretically more favorable to AG, we see that it is indeed mostly better, in terms of retaining linear parallel speedups for larger mini-batch sizes, even for large data set sizes corresponding to small suboptimality values, and it might even be advantageous with small mini-batch sizes.
6 Summary

In this paper, we presented novel contributions to the theory of first-order stochastic convex optimization (Theorems 1 and 2, generalizing the results of [4] and [5] to be sensitive to L(w*)), developed a novel step-size strategy for the accelerated method that we used to obtain our results and saw works well in practice, and provided a more refined analysis of the effects of mini-batching, which paints a different picture than previous analyses [4, 1] and highlights the benefit of accelerated methods.

A remaining open practical and theoretical question is whether the bound of Theorem 2 is tight. Following [5], the bound is tight for b = 1 and as b → ∞, i.e. the first and third terms are tight, but it is not clear whether the 1/(√b n) dependence is indeed necessary. It would be interesting to understand whether with a more refined analysis, or perhaps different step sizes, we can avoid this term, whether an altogether different algorithm is needed, or whether this term does represent the optimal behavior for any method based on b-aggregated stochastic gradient estimates.

References

[1] A. Agarwal and J. Duchi. Distributed delayed stochastic optimization. Technical report, arXiv, 2011.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175, 2003.
[3] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In NIPS, 2007.
[4] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Technical report, arXiv, 2010.
[5] G. Lan. An optimal method for stochastic convex optimization. Technical report, Georgia Institute of Technology, 2009.
[6] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[7] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Doklady AN SSSR, 269:543-547, 1983.
[8] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Program., 103(1):127-152, 2005.
[9] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221-259, August 2009.
[10] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction. In ICML, 2011.
[11] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: primal estimated sub-gradient solver for SVM. Math. Program., 127(1):3-30, 2011.
[12] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In ICML, 2008.
[13] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In NIPS, 2010.
[14] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, Hebrew University of Jerusalem, 2007.
[15] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Submitted to SIAM Journal on Optimization, 2008.
[16] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543-2596, 2010.
Co-Training for Domain Adaptation

Minmin Chen, Kilian Q. Weinberger
Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO 63130
mc15,[email protected]

John C. Blitzer
Google Research, 1600 Amphitheatre Parkway, Mountain View, CA 94043
[email protected]

Abstract

Domain adaptation algorithms seek to generalize a model trained in a source domain to a new target domain. In many practical cases, the source and target distributions can differ substantially, and in some cases crucial target features may not have support in the source domain. In this paper we introduce an algorithm that bridges the gap between source and target domains by slowly adding to the training set both the target features and the instances in which the current algorithm is most confident. Our algorithm is a variant of co-training [7], and we name it CODA (Co-training for Domain Adaptation). Unlike the original co-training work, we do not assume a particular feature split. Instead, for each iteration of co-training, we formulate a single optimization problem which simultaneously learns a target predictor, a split of the feature space into views, and a subset of source and target features to include in the predictor. CODA significantly outperforms the state-of-the-art on the 12-domain benchmark data set of Blitzer et al. [4]. Indeed, over a wide range (65 of 84 comparisons) of target supervision, CODA achieves the best performance.

1 Introduction

Domain adaptation addresses the problem of generalizing from a source distribution for which we have ample labeled training data to a target distribution for which we have little or no labels [3, 14, 28]. Domain adaptation is of practical importance in many areas of applied machine learning, ranging from computational biology [17] to natural language processing [11, 19] to computer vision [23].

In this work, we focus primarily on domain adaptation problems that are characterized by missing features. This is often the case in natural language processing, where different genres often use very different vocabulary to describe similar concepts. For example, in our experiments we use the sentiment data of Blitzer et al. [4], where "a breeze to use" is a way to express positive sentiment about kitchen appliances, but not about books. In this situation, most domain adaptation algorithms seek to eliminate the difference between source and target distributions, either by re-weighting source instances [14, 18] or by learning a new feature representation [6, 28].

We present an algorithm which differs from both of these approaches. Our method seeks to slowly adapt its training set from the source to the target domain, using ideas from co-training. We accomplish this in two ways: First, we train on our own output in rounds, where at each round we include in our training data the target instances we are most confident of. Second, we select a subset of shared source and target features based on their compatibility. Different from most previous work on selecting features for domain adaptation, compatibility is measured between the training set and the unlabeled set, instead of across the two domains. As more target instances are added to the training set, target-specific features become compatible across the two sets and are therefore included in the predictor. Finally, we use the pseudo multi-view co-training algorithm of Chen et al. [10] to exploit the unlabeled data efficiently. These three intuitive ideas can be combined in a single optimization problem.
We name our algorithm CODA (Co-Training for Domain Adaptation). By allowing us to slowly change our training data from source to target, CODA has an advantage over representation-learning algorithms [6, 28], since they must decide a priori what the best representation is. In contrast, each iteration of CODA can choose exactly those few target features which can be related to the current (source and pseudo-labeled target) training set. We find that on the sentiment prediction data set of Blitzer et al. [4], CODA improves the state-of-the-art across widely varying amounts of target labeled data, in 65 out of 84 settings.

2 Notation and Setting

We assume our data originates from two domains, Source (S) and Target (T). The source data is fully labeled, $D_S = \{(x_1,y_1),\ldots,(x_{n_s},y_{n_s})\} \subset \mathbb{R}^d\times\mathcal{Y}$, and sampled from some distribution $P_S(X,Y)$. The target data is sampled from $P_T(X,Y)$ and is divided into labeled $D_T^l = \{(x_1,y_1),\ldots,(x_{n_t},y_{n_t})\} \subset \mathbb{R}^d\times\mathcal{Y}$ and unlabeled $D_T^u = \{(x_1,\cdot),\ldots,(x_{m_t},\cdot)\} \subset \mathbb{R}^d\times\mathcal{Y}$ parts, where in the latter the labels are unknown at training time. Both domains have the same dimensionality $d$. Our goal is to learn a classifier $h\in\mathcal{H}$ that accurately predicts the labels on the unlabeled portion of $D_T$, but also extends to out-of-sample test points, so that for any $(x,y)$ sampled from $P_T$ we have $h(x)=y$ with high probability. For simplicity we assume that $\mathcal{Y}=\{+1,-1\}$, although our method can easily be adapted to multi-class or regression settings.

We assume the existence of a base classifier, which determines the set $\mathcal{H}$. Throughout this paper we simply use logistic regression, i.e. our classifier is parameterized by a weight vector $w\in\mathbb{R}^d$ and defined as $h_w(x) = (1+e^{-w^\top x})^{-1}$. The weights $w$ are set to minimize the loss function
$$\ell(w;D) = \frac{1}{|D|}\sum_{(x,y)\in D}\log\left(1+\exp(-y\,w^\top x)\right). \qquad (1)$$
If trained on data sampled from $P_S(X,Y)$, logistic regression models the distribution $P_S(Y|X)$ [13] through $P_h(Y=y\,|\,X=x;w) = (1+e^{-y\,w^\top x})^{-1}$. In this paper, our goal is to adapt this classifier to the target distribution $P_T(Y|X)$.

3 Method

In this section, we begin with a semi-supervised approach and describe the rote-learning procedure used to automatically annotate target-domain inputs. The algorithm maintains and grows a training set that is iteratively adapted to the target domain. We then incorporate feature selection into the optimization, a crucial element of our domain-adaptation algorithm. The feature selection addresses the change in distribution and support from $P_S$ to $P_T$. Further, we introduce pseudo multi-view co-training [7, 10], which improves the rote-learning procedure by adding inputs with features that are still not used effectively by the current classifier. We use automated feature decomposition to artificially split our data into multiple views, explicitly to enable successful co-training.

3.1 Self-training for Domain Adaptation

First, we assume we are given a loss function ℓ (in our case the log-loss from eq. (1)) which provides some estimate of confidence in its predictions. In logistic regression, if $\hat y = \mathrm{sign}(h(x))$ is the prediction for an input $x$, the probability $P_h(Y=\hat y\,|\,X=x;w)$ is a natural metric of certainty (as $h(x)$ can be interpreted as the probability that $x$ has label +1), but other methods [22] can be used. Self-training [19] is a simple and intuitive iterative algorithm for leveraging unlabeled data. During training one maintains a labeled training set L and an unlabeled test set U, initialized as $L = D_S\cup D_T^l$ and $U = D_T^u$.
Each iteration, a classifier $h_w$ is trained to minimize the loss function ℓ over L and is evaluated on all elements of U. The c most confident predictions on U are moved to L for the next iteration, labeled by the prediction $\mathrm{sign}(h_w)$. The algorithm terminates when U is empty or all predictions are below a pre-defined confidence threshold (and are considered unreliable). Algorithm 1 summarizes self-training in pseudo-code, with the use of feature selection as described in the following section.

Algorithm 1 SEDA pseudo-code.
  1: Inputs: L and U
  2: repeat
  3:   $w^* = \arg\min_w\; \ell(w;L) + \lambda\, s(L,U,w)$
  4:   Apply $h_{w^*}$ to all elements of U.
  5:   Move up to c confident inputs $x_i$ from U to L, labeled as $\mathrm{sign}(h(x_i))$.
  6: until no more predictions are confident
  7: Return $h_{w^*}$
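For concreteness, a minimal Python sketch of this loop, under assumed interfaces that are ours rather than the paper's: fit_weighted_l1(L_X, L_y, U_X) stands in for the regularized fit of line 3 (eq. (4) below), confidence is the distance of the logistic output from 1/2 as suggested in Section 3.1, and the threshold 0.9 is illustrative.

    import numpy as np

    def self_train(fit_weighted_l1, L_X, L_y, U_X, c=50, threshold=0.9):
        # Rote-learning loop of Algorithm 1. fit_weighted_l1 is an assumed
        # interface returning the w* of line 3; confidence of h_w(x) is
        # max(p, 1 - p) for p = sigmoid(w . x).
        while True:
            w = fit_weighted_l1(L_X, L_y, U_X)
            if len(U_X) == 0:
                return w
            p = 1.0 / (1.0 + np.exp(-(U_X @ w)))          # P(Y = +1 | x)
            conf = np.maximum(p, 1.0 - p)
            move = np.argsort(-conf)[:c]                  # up to c most confident
            move = move[conf[move] > threshold]
            if len(move) == 0:                            # nothing reliable left
                return w
            L_X = np.vstack([L_X, U_X[move]])
            L_y = np.concatenate([L_y, np.where(p[move] > 0.5, 1.0, -1.0)])
            U_X = np.delete(U_X, move, axis=0)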
3.2 Feature Selection

So far, we have not addressed the fact that the two data sets U and L are not sampled from the same distribution. In domain adaptation, the training data is no longer representative of the test data. More explicitly, $P_S(Y|X=x)$ is different from $P_T(Y|X=x)$. For illustration, consider the sentiment analysis problem in Section 4, where the data consists of unigram and bigram bag-of-words features and the task is to classify whether a book review (source domain) or DVD review (target domain) is positive or negative. Here, the bigram feature "must read" is indicative of a positive opinion within the source ("books") domain, but rarely appears in the target ("dvd") domain. A classifier, trained on the source-dominated set L, that relies too heavily on such features will not make enough high-confidence predictions on the set $U = D_T^u$.

To address this issue, we extend the classifier with a weighted $\ell_1$ regularization for feature selection. The weights are assigned to encourage the classifier to only use features that behave similarly in both L and U. Different from previous work on feature selection for domain adaptation [25], where the goal is to find a new representation that minimizes the difference between the distributions of the source and target domains, we propose to minimize the difference between the distributions of the labeled training set L and the unlabeled set U (which coincides with the testing set in our setting). This difference is crucial, as it makes the empirical distributions of L and U align gradually. For example, after some iterations, the classifier can pick features that are never present in the source domain, but which have entered L through the rote-learning procedure.

We perform the feature selection implicitly through $w$. For a feature φ, let us denote the Pearson correlation coefficient (PCC) between the feature value $x_\varphi$ and the label $y$ over all pairs $(x,y)\in L$ as $\rho_L(x_\varphi,y)$. (The PCC of two random variables X, Y is defined as $\rho = \frac{E[(X-\mu_X)(Y-\mu_Y)]}{\sigma_X\sigma_Y}$, where $\mu_X$ denotes the mean and $\sigma_X$ the standard deviation of X.) It can be shown that $\rho_L(x_\varphi,y)\in[-1,1]$, with a value of +1 if a feature is perfectly aligned with the label (i.e. the feature is the label), 0 if it has no correlation, and -1 if it is of opposite polarity (i.e. the inverted label). Similarly, let us define the PCC over all pairs in U as $\rho_{U;w}(x_\varphi,Y)$, where the unknown label Y is a random variable drawn from the conditional probability $P_h(Y|X;w)$. The two PCC values indicate how predictive a feature is of the (estimated) class label in the two respective data sets. Ideally, we would like to choose features that are similarly predictive across the two sets. We measure how similarly a feature behaves across L and U with the product $\rho_L(x_\varphi,y)\,\rho_{U;w}(x_\varphi,Y)$.

With this notation, we define the feature weight that reflects the cross-domain incompatibility of a feature as
$$\alpha_{L,U,w}(\varphi) = 1 - \rho_L(x_\varphi,y)\,\rho_{U;w}(x_\varphi,Y). \qquad (2)$$
It is straightforward to show that $\alpha_{L,U,w}\in[0,2]$. Intuitively, $\alpha_{L,U,w}$ expresses to what degree we would like to remove a feature. A perfect feature, that is the label itself (and the prediction in U), results in a score of 0. A feature that is not correlated with the class label in at least one of the two domains (and therefore is too domain-specific) obtains a score of 1. A feature that switches polarity across domains (and therefore is "malicious") has a score $\alpha_{L,U,w}(\varphi) > 1$ (in the extreme case, if it is the label in L and the inverted label in U, its score would be 2). We incorporate (2) into a weighted $\ell_1$ regularization
$$s(L,U,w) = \sum_{\varphi=1}^{d}\alpha_{L,U,w}(\varphi)\,|w_\varphi|. \qquad (3)$$
Intuitively, (3) encourages feature sparsity with a strong emphasis on features with little or opposite correlation across the domains, whereas good features that are consistently predictive in both domains become cheap. We refer to this version of the algorithm as Self-training for Domain Adaptation (SEDA). The optimization with feature selection, used in Algorithm 1, becomes
$$w = \arg\min_w\; \ell(w;L) + \lambda\, s(L,U,w). \qquad (4)$$
Here, $\lambda\ge 0$ denotes the loss-regularization trade-off parameter. As we have very few labeled inputs from the target domain in the early iterations, stronger regularization is imposed, so that only features shared across the two domains are used. As more and more inputs from the target domain are included in the training set, we gradually decrease the regularization to accommodate target-specific features. The algorithm is very insensitive to the exact initial choice of λ. The guideline is to start with a relatively large number and decrease it until the selected feature set is not empty. In our implementation, we set it to $\lambda_0 = 0.1$ and divide it by a factor of 1.1 in each iteration.
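For reference, the quantities in (2) and (3) can be computed as below. Treating the unknown label on U as the random variable Y ~ P_h(Y|X; w) gives E[Y|x] = 2 h_w(x) - 1 and E[Y^2] = 1, which is what the second function uses; this is one consistent way to evaluate the correlation, and the numerical-stability floors are our own additions.

    import numpy as np

    def pcc_with_labels(X, y):
        # rho_L: Pearson correlation of each feature with the observed labels.
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        denom = X.std(axis=0) * y.std() * len(y)
        return (Xc.T @ yc) / np.maximum(denom, 1e-12)

    def pcc_with_predicted_labels(X, w):
        # rho_{U;w}: correlation with the random label Y ~ P_h(Y | X; w).
        ey = 2.0 / (1.0 + np.exp(-(X @ w))) - 1.0      # E[Y | x]
        cov = (X * ey[:, None]).mean(axis=0) - X.mean(axis=0) * ey.mean()
        sigma_y = np.sqrt(max(1.0 - ey.mean() ** 2, 1e-12))
        return cov / np.maximum(X.std(axis=0) * sigma_y, 1e-12)

    def coda_feature_weights(L_X, L_y, U_X, w):
        # alpha_{L,U,w} from eq. (2); the regularizer s(L,U,w) of eq. (3)
        # is then np.dot(alpha, np.abs(w)).
        return 1.0 - pcc_with_labels(L_X, L_y) * pcc_with_predicted_labels(U_X, w)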
3.3 Co-training for Domain Adaptation

For rote-learning to be effective, we need to move test inputs from U to L that 1) are correctly classified (with high probability) and 2) have the potential to improve the classifier in future iterations. The former is addressed by the feature-selecting regularization from the previous section: restricting the classifier to a sub-set of features that are known to be compatible across the two data sets reduces the generalization error on U. In this section we address the second requirement.

We want to add inputs $x_i$ that contain additional features which were not used to obtain the prediction $h_w(x_i)$ and would enrich the training set L. If the exact labels of the inputs in U were known, a good active learning [26] strategy would be to move inputs to L on which the current classifier $h_w$ is most uncertain. In our setting, this would be clearly ill advised, as the uncertain prediction is also used as the label. A natural solution to this dilemma is co-training [7]. Co-training assumes the data set is presented in two separate views, and two classifiers are trained, one in each view. Each iteration, only inputs that are confident according to exactly one of the two classifiers are moved to the training set. This way, one classifier provides the (estimated) labels for the inputs on which the other classifier is uncertain.

In our setting we do not have multiple views, and which features are selected varies in each iteration. Hence, co-training does not apply out-of-the-box. We can, however, split our features into two mutually exclusive views such that co-training is effective. To this end we follow the pseudo-multiview regularization introduced by Chen et al. [10]. The main intuition is to train two classifiers on a single view X such that: (1) both perform well on the labeled data; (2) both are trained on strictly different features; (3) together they are likely to satisfy Balcan's condition of ε-expandability [2], a necessary and sufficient pre-condition for co-training to work (provided that the classifiers are never confident and wrong, which can be violated in practice). These three aspects can be formulated explicitly as three modifications of our optimization problem (4). We discuss each of them in detail in the following.

Loss. Two classifiers are required for co-training, whose weight vectors we denote by $u$ and $v$. The performance of each classifier is measured by the log-loss $\ell(\cdot;L)$ in eq. (1). To ensure that both classifiers perform well on the training set L, i.e. both have a small training loss, we train them jointly while minimizing the soft-maximum of the two losses (the soft-max of a set S is a differentiable approximation of max(S), namely $\log(\sum_{s\in S}e^s)$):
$$\log\left(e^{\ell(u;L)} + e^{\ell(v;L)}\right). \qquad (5)$$

Feature Decomposition. Co-training requires the two classifiers to be trained on different feature spaces. We create those by splitting the feature space into two mutually exclusive sub-sets. More precisely, for each feature φ, at least one of the two classifiers must have a zero weight in the φ-th dimension. We can enforce this across all features with the equality constraint
$$\sum_{\varphi=1}^{d} u_\varphi^2\, v_\varphi^2 = 0. \qquad (6)$$

ε-Expandability. In the original co-training formulation [7], it is assumed that the two views of the data are class-conditionally independent. This assumption is very strong and can easily be violated in practice [20]. Recent work [2] weakens this requirement significantly to a condition of ε-expandability. Loosely phrased, for the two classifiers to be able to teach each other, they must make confident predictions on different subsets of the unlabeled set U. For the classifier $h_u$, let $\hat y = \mathrm{sign}(u^\top x)\in\{\pm 1\}$ denote the class prediction and $P_h(\hat y\,|\,x;u)$ its confidence. Define $c_u(x)$ as a confidence indicator function, for some confidence threshold τ > 0 (in our implementation, the 0-1 indicator is replaced by a very steep differentiable sigmoid, and τ is set to 0.8 across all experiments):
$$c_u(x) = \begin{cases} 1 & \text{if } P_h(\hat y\,|\,x;u) > \tau \\ 0 & \text{otherwise,}\end{cases} \qquad (7)$$
and $c_v$ respectively. Then the ε-expanding condition translates to
$$\sum_{x\in U}\left[c_u(x)\,\bar c_v(x) + \bar c_u(x)\,c_v(x)\right] \;\ge\; \epsilon\;\min\left[\sum_{x\in U}c_u(x)\,c_v(x),\;\sum_{x\in U}\bar c_u(x)\,\bar c_v(x)\right] \qquad (8)$$
for some ε > 0. Here, $\bar c_u(x) = 1 - c_u(x)$ indicates that classifier $h_u$ is not confident about input $x$. Intuitively, the constraint in eq. (8) ensures that the total number of inputs in U that can be used for rote-learning because exactly one classifier is confident (LHS) is larger than the set of inputs that cannot be used because both classifiers are already confident or both are not confident (RHS).

In summary, the framework splits the feature space into two mutually exclusive sub-sets. This representation enables us to train two logistic regression classifiers, both with small loss on the labeled data set, while satisfying two constraints that ensure feature decomposition and ε-expandability. Our final classifier has the weight vector $w = u + v$.
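A short sketch of the confidence indicators (7) and the ε-expandability check (8), with the hard 0-1 indicator kept for clarity (as noted above, the paper's implementation substitutes a steep sigmoid):

    import numpy as np

    def confident(X, w, tau=0.8):
        # c_w(x) from eq. (7): 1 if the predicted class probability exceeds tau.
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        return np.maximum(p, 1.0 - p) > tau

    def is_eps_expanding(U_X, u, v, eps, tau=0.8):
        # Checks the epsilon-expanding condition (8) on the unlabeled set.
        cu, cv = confident(U_X, u, tau), confident(U_X, v, tau)
        exactly_one = np.sum(cu & ~cv) + np.sum(~cu & cv)   # teachable inputs
        both = np.sum(cu & cv)
        neither = np.sum(~cu & ~cv)
        return exactly_one >= eps * min(both, neither)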
We refer to the resulting algorithm as CODA (Co-training for Domain Adaptation), which can be stated concisely as the following optimization problem:
$$\min_{w,u,v}\;\; \log\left(e^{\ell(u;L)} + e^{\ell(v;L)}\right) + \lambda\, s(L,U,w)$$
subject to:
(1) $\sum_{i=1}^{d} u_i^2 v_i^2 = 0$
(2) $\sum_{x\in U}\left[c_u(x)\,\bar c_v(x) + \bar c_u(x)\,c_v(x)\right] \ge \epsilon\,\min\left[\sum_{x\in U}c_u(x)\,c_v(x),\;\sum_{x\in U}\bar c_u(x)\,\bar c_v(x)\right]$
(3) $w = u + v$

The optimization is non-convex. However, as it is not particularly sensitive to initialization, we set $u, v$ randomly and optimize with standard conjugate gradient descent (we use minimize.m, http://tinyurl.com/minimize-m). Due to space constraints we do not include a pseudo-code implementation of CODA. The implementation is essentially identical to that of SEDA (Algorithm 1), where the above optimization problem is solved instead of eq. (4) in line 3. In line 5, we move inputs that one classifier is confident about while the other one is uncertain to the training set L, to improve the classifier in future iterations.
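A hedged sketch of the objective follows. It is our own penalty-based rendering, not the paper's exact scheme: the feature-split constraint (6) is added as a quadratic penalty with an assumed weight mu, and the expandability constraint (8) is omitted for brevity.

    import numpy as np

    def coda_objective(u, v, L_X, L_y, alpha, lam, mu):
        # Soft-max of the two logistic losses, eq. (5), plus the weighted l1
        # term (3) on w = u + v, plus a penalty standing in for eq. (6).
        def logloss(wv):
            return np.mean(np.log1p(np.exp(-L_y * (L_X @ wv))))
        w = u + v
        loss = np.logaddexp(logloss(u), logloss(v))   # log(e^a + e^b)
        return (loss
                + lam * np.dot(alpha, np.abs(w))      # s(L, U, w), eq. (3)
                + mu * np.sum(u**2 * v**2))           # eq. (6) as a penalty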
4 Results

We evaluate our algorithm together with several other domain adaptation algorithms on the "Amazon reviews" benchmark data sets [6]. The data set contains reviews of four different types of products: books, DVDs, electronics, and kitchen appliances from Amazon.com. In the original dataset, each review is associated with a rating of 1-5 stars. For simplicity, we are only concerned with whether or not a review is positive (higher than 3 stars) or negative (3 stars or lower). That is, $y_i\in\{+1,-1\}$, where $y_i=1$ indicates a positive review, and -1 otherwise. The data from the four domains results in 12 directed adaptation tasks (e.g. books → dvds). Each domain adaptation task consists of 2,000 labeled source inputs and around 4,000 unlabeled target test inputs (varying slightly between tasks). We let the amount of labeled target data vary from 0 to 1600. For each setting with target labels we ran 10 experiments with different, randomly chosen, labeled instances. The original feature space of unigrams and bigrams has on average approximately 100,000 dimensions across the different domains. To reduce the dimensionality, we only use features that appear at least 10 times in a particular domain adaptation task (leaving approximately 40,000 features). Further, we pre-process the data set with standard tf-idf [24] feature re-weighting.

Figure 1: Relative test-error reduction over logistic regression, averaged across all 12 domain adaptation tasks, as a function of the target training set size. Left: a comparison of the three algorithms from Section 3 (Logistic Regression, Self-training, SEDA, CODA). The graph shows clearly that self-training (Self-training vs. Logistic Regression), feature selection (SEDA vs. Self-training) and co-training (CODA vs. SEDA) each improve the accuracy substantially. Right: a comparison of CODA with four state-of-the-art domain adaptation algorithms (Logistic Regression, Coupled, EasyAdapt, EasyAdapt++). CODA leads to particularly strong improvements under little target supervision.

As a first experiment, we compare the three algorithms from Section 3 against logistic regression as a baseline. The results are in the left plot of Figure 1. For logistic regression, we ignore the difference between source and target distribution, and train a classifier on the union of both labeled data sets. We use $\ell_2$ regularization, and set the regularization constant with 5-fold cross-validation. In Figure 1, all classification errors are shown relative to this baseline. Our second baseline is self-training, which adds self-training to logistic regression, as described in Section 3.1. We start with the set of labeled instances from the source and target domains, and gradually add confident predictions to the training set from the unlabeled target domain (without regularization). SEDA adds feature selection to the self-training procedure, as described in Section 3.2. We optimize over 100 iterations of self-training, at which stage the regularization was effectively zero and the classifier had converged. For CODA, we replace self-training with pseudo multi-view co-training, as described in Section 3.3.

The left plot in Figure 1 shows the relative classification errors of these four algorithms averaged over all 12 domain adaptation tasks, under varying amounts of target labels. We observe two trends: First, there are clear gaps between logistic regression, self-training, SEDA, and CODA. From these three gaps one can conclude that self-training, feature selection and co-training each lead to substantial improvements in classification error. A second trend is that the relative improvement over logistic regression shrinks as more labeled target data becomes available. This is not surprising, as with sufficient target labels the task turns into a classical supervised learning problem and the source data becomes irrelevant.

As a second experiment, we compare CODA against three state-of-the-art domain adaptation algorithms. We refer to these as Coupled, the coupled-subspaces approach [6], EasyAdapt [11], and EasyAdapt++ [15]. Details about the respective algorithms are provided in Section 5. Coupled subspaces, as described in [6], does not utilize labeled target data and its result is depicted as a single point. The right plot in Figure 1 compares these algorithms, relative to logistic regression. Figure 3 shows the individual results on all 12 adaptation tasks with absolute classification error rates. The error bars show the standard deviation across the 10 runs with different labeled instances. EasyAdapt and EasyAdapt++ both consistently improve over logistic regression once sufficient target data is available. It is noteworthy that, on average, CODA outperforms the other algorithms in almost all settings when 800 labeled target points or fewer are present. With 1600 labeled target points, all algorithms perform similarly to the baseline and additional source data is irrelevant. All hyper-parameters of competing algorithms were carefully set by 5-fold cross-validation. Concerning computational requirements, it is fair to say that CODA is significantly slower than the other algorithms, as each iteration is of comparable complexity to logistic regression or EasyAdapt.
In typical domain adaptation settings this is generally not a problem, as training sets tend to be small. In our experiments, the average training time for CODA was about 20 minutes (using a straightforward Matlab implementation).

Finally, we investigate the feature-selection process during CODA training. Let us define the indicator function $\delta(a)\in\{0,1\}$ to be $\delta(a)=0$ if and only if $a=0$, which operates element-wise on vectors. The vector $\delta(w)\in\{0,1\}^d$ indicates which features are used by the classifier, and $\delta(x_i)$ indicates which features are present in input $x_i$. We denote the ratio between the average number of used features in source training inputs and that in unlabeled target inputs as
$$r(w) = \frac{\frac{1}{|D_S|}\sum_{x_s\in D_S}\delta(w)^\top\delta(x_s)}{\frac{1}{|D_T^u|}\sum_{x_t\in D_T^u}\delta(w)^\top\delta(x_t)}. \qquad (9)$$

Figure 2: The ratio (9) of the average number of used features between source and target inputs, tracked throughout the CODA optimization. The three panels show the same statistic for 0, 400, and 1600 target labels. Initially, an input from the source domain has on average 10-35% more features that are used by the classifier than a target input. At around iteration 40, this relation flips and the classifier uses more target-typical features. The graph shows the geometric mean across all adaptation tasks. With no target data available (left panel), the early spike in source dominance is more pronounced; it decreases when more target labels are available (middle and right panels).

Figure 2 shows the plot of $r(w)$ for all weight vectors during the 100 iterations of CODA, averaged across all 12 data sets. The three panels show the same statistic under varying amounts of target labels. Two trends can be observed: First, during CODA training, the classifier initially selects more source-specific features. For example, in the case with zero labeled target data, during early iterations the average source input contains 20-35% more used features relative to target inputs. This source-heavy feature distribution changes and eventually turns into a target-heavy distribution as the classifier adapts to the target domain. As a second trend, we observe that with more target labels (right panel), this spike in source features is much less pronounced, whereas the final target-heavy ratio is unchanged but starts earlier. This indicates that as the target labels increase, the classifier makes less use of the source data, and relies sooner and more directly on the target signal.
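Eq. (9) is straightforward to compute. A sketch, under the assumption (natural for bag-of-words data) that a feature is "present" in an input when its value is nonzero:

    import numpy as np

    def used_feature_ratio(w, X_source, X_target):
        # r(w) of eq. (9): average number of classifier-used features per
        # source input, divided by the same average over target inputs.
        used = (w != 0).astype(float)                     # delta(w)
        src = ((X_source != 0).astype(float) @ used).mean()
        tgt = ((X_target != 0).astype(float) @ used).mean()
        return src / tgt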
By jointly optimizing the feature selection, the multi-view split and the prediction, CODA allows us to do both. The second type of algorithm attempts to directly minimize the divergence between domains, typically by weighting individual instances [14, 16, 18]. These algorithms do not assume highly divergent domains (e.g. those with unique target features), but they have the advantage over both CODA and representation-learning of learning asymptotically optimal target predictors from only 6 We used a straight-forward MatlabT M implementation. 7 0.25 0.2 0.3 Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.25 0.2 0.25 50 100 200 400 800 1600 Number of target labeled data Books ?> Electronics Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.3 Test Error Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.3 0.25 0.2 0.35 Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.3 0.25 0.2 Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.3 0.25 0.2 0.2 0 0.3 50 100 200 400 800 1600 Number of target labeled data Kitchen ?> Electronics Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.25 0.2 0.15 0.1 0 0.05 50 100 200 400 800 1600 Number of target labeled data Dvd ?> Kitchen 0.25 Test Error 0.35 0.1 50 100 200 400 800 1600 Number of target labeled data Books ?> Kitchen 0.25 0.1 0.15 0 50 100 200 400 800 1600 Number of target labeled data Kitchen ?> Dvd Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.3 50 100 200 400 800 1600 Number of target labeled data Dvd ?> Electronics 0.35 0.15 0 0.15 0 Test Error 0 0.2 0.1 50 100 200 400 800 1600 Number of target labeled data Electronics ?> Dvd 0.2 0.35 Test Error 0 0.35 0.15 0.25 0.15 Test Error Test Error 0.1 50 100 200 400 800 1600 Number of target labeled data Books ?> Dvd Test Error 0 0.3 0.1 0.2 Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.3 0.15 0.35 0.1 0.25 Kitchen ?> Books 0.35 Test Error 0.15 0.1 Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.2 0 0.18 50 100 200 400 800 1600 Number of target labeled data Electronics ?> Kitchen Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA 0.16 Test Error Test Error 0.3 Electronics ?> Books 0.35 Test Error Logistic Regression Coupled EasyAdapt EasyAdapt++ CODA Test Error Dvd ?> Books 0.35 0.14 0.12 0.15 0.15 0.1 0.1 0 50 100 200 400 800 1600 Number of target labeled data 0.1 0 50 100 200 400 800 1600 Number of target labeled data 0.08 0 50 100 200 400 800 1600 Number of target labeled data Figure 3: The individual results on all domain adaptation tasks under varying amounts of labeled target data. The graphs show the absolute classification error rates. All settings with existing labeled target data were averaged over 10 runs (with randomly selected labeled instances). The vertical bars indicate the standard deviation in these cases. source training data (when their assumptions hold). We did not explore them here because their assumptions are clearly violated for this data set. In natural language processing, a final type of very successful algorithm self-trains on its own target predictions to automatically annotate new target domain features [19]. These methods are most closely related, in spirit, to our own CODA algorithm. Indeed, our self-training baseline is intended to mimic this style of algorithm. 
The final set of domain adaptation algorithms, which we compared against but did not describe, are those which actively seek to minimize the labeling divergence between domains using multi-task techniques [1, 8, 9, 12, 21, 27]. Most prominently, Daumé [11] trains separate source and target models, but regularizes these models to be close to one another. The EasyAdapt++ variant of this algorithm, which we compared against, generalizes this to the semi-supervised setting by making the assumption that, for unlabeled target instances, the tasks should be similar. Although these methods did not significantly outperform our baselines on the sentiment data set, we note that there do exist data sets on which such multi-task techniques are especially important [11], and we hope soon to explore combinations of CODA with multi-task learning on those data sets.

References

[1] R.K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817-1853, 2005.
[2] M.F. Balcan, A. Blum, and K. Yang. Co-training and expansion: Towards bridging theory and practice. NIPS, 17:89-96, 2004.
[3] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. A theory of learning from different domains. Machine Learning, 2009.
[4] J. Blitzer, M. Dredze, and F. Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Association for Computational Linguistics, Prague, Czech Republic, 2007.
[5] J. Blitzer, D. Foster, and S. Kakade. Domain adaptation with coupled subspaces. In Conference on Artificial Intelligence and Statistics, Fort Lauderdale, 2011.
[6] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120-128. Association for Computational Linguistics, 2006.
[7] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory, page 100. ACM, 1998.
[8] R. Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.
[9] O. Chapelle, P. Shivaswamy, S. Vadrevu, K.Q. Weinberger, Y. Zhang, and B. Tseng. Multi-task learning for boosting with application to web search ranking. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '10, pages 1189-1198, New York, NY, USA, 2010. ACM.
[10] M. Chen, K.Q. Weinberger, and Y. Chen. Automatic feature decomposition for single view co-training. In International Conference on Machine Learning, 2011.
[11] H. Daumé III. Frustratingly easy domain adaptation. In Association for Computational Linguistics, 2007.
[12] T. Evgeniou, C.A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6(1):615, 2006.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Verlag, New York, 2009.
[14] J. Huang, A.J. Smola, A. Gretton, K.M. Borgwardt, and B. Schölkopf. Correcting sample selection bias by unlabeled data. In NIPS 19, pages 601-608. MIT Press, Cambridge, MA, 2007.
[15] H. Daumé III, A. Kumar, and A. Saha. Co-regularization based semi-supervised domain adaptation. In NIPS 23, pages 478-486. MIT Press, 2010.
[16] J. Jiang and C.X. Zhai. Instance weighting for domain adaptation in NLP.
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 264?271, Prague, Czech Republic, June 2007. Association for Computational Linguistics. [17] Qian Liu, Aaron Mackey, David Roos, and Fernando Pereira. Evigan: a hidden variable model for integrating gene evidence for eukaryotic gene prediction. Bioinformatics, 2008. [18] T. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation with multiple sources. In NIPS 21, pages 1041?1048. MIT Press, 2009. [19] D. McClosky, E. Charniak, and M. Johnson. Reranking and self-training for parser adaptation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 337?344. Association for Computational Linguistics, 2006. [20] K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In Proceedings of the ninth international conference on Information and knowledge management, pages 86?93. ACM, 2000. [21] S. Parameswaran and K.Q. Weinberger. Large margin multi-task metric learning. In NIPS 23, pages 1867?1875. 2010. [22] J.C. Platt et al. Probabilities for sv machines. NIPS, pages 61?74, 1999. [23] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. Computer Vision?ECCV 2010, pages 213?226, 2010. [24] G. Salton and C. Buckley. Term-weighting approaches in automatic text retrieval. Information processing & management, 24(5):513?523, 1988. [25] S. Satpal and S. Sarawagi. Domain adaptation of conditional probability models via feature subsetting. Knowledge Discovery in Databases: PKDD 2007, pages 224?235, 2007. [26] B. Settles. Active learning literature survey. Machine Learning, 15(2):201?221, 1994. [27] K.Q. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. Feature hashing for large scale multitask learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1113?1120. ACM, 2009. [28] G. Xue, W. Dai, Q. Yang, and Y. Yu. Topic-bridged plsa for cross-domain text classication. In SIGIR, 2008. 9
Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation
Zhouchen Lin, Visual Computing Group, Microsoft Research Asia
Risheng Liu, Zhixun Su, School of Mathematical Sciences, Dalian University of Technology
Abstract
Many machine learning and signal processing problems can be formulated as linearly constrained convex programs, which could be efficiently solved by the alternating direction method (ADM). However, usually the subproblems in ADM are easily solvable only when the linear mappings in the constraints are identities. To address this issue, we propose a linearized ADM (LADM) method by linearizing the quadratic penalty term and adding a proximal term when solving the subproblems. For fast convergence, we also allow the penalty to change adaptively according to a novel update rule. We prove the global convergence of LADM with adaptive penalty (LADMAP). As an example, we apply LADMAP to solve low-rank representation (LRR), which is an important subspace clustering technique yet suffers from high computation cost. By combining LADMAP with a skinny SVD representation technique, we are able to reduce the complexity O(n^3) of the original ADM based method to O(rn^2), where r and n are the rank and size of the representation matrix, respectively, hence making LRR possible for large scale applications. Numerical experiments verify that for LRR our LADMAP based methods are much faster than state-of-the-art algorithms.
1 Introduction
Recently, compressive sensing [5] and sparse representation [19] have been hot research topics and have found abundant applications in signal processing and machine learning. Many of the problems in these fields can be formulated as the following linearly constrained convex program:
min_{x,y} f(x) + g(y),  s.t.  A(x) + B(y) = c,   (1)
where x, y and c could be either vectors or matrices, f and g are convex functions (e.g., the nuclear norm ‖·‖_* [2], Frobenius norm ‖·‖, ℓ_{2,1} norm ‖·‖_{2,1} [13], and ℓ_1 norm ‖·‖_1), and A and B are linear mappings. Although the interior point method can be used to solve many convex programs, it may suffer from unbearably high computation cost when handling large scale problems. For example, when using CVX, an interior point based toolbox, to solve nuclear norm minimization (namely, f(X) = ‖X‖_* in (1)) problems, such as matrix completion [4], robust principal component analysis [18] and their combination [3], the complexity of each iteration is O(n^6), where n × n is the matrix size. To overcome this issue, first-order methods are often preferred. The accelerated proximal gradient (APG) algorithm [16] is a popular technique due to its guaranteed O(k^{-2}) convergence rate, where k is the iteration number. The alternating direction method (ADM) has also regained a lot of attention [11, 15]. It updates the variables alternately by minimizing the augmented Lagrangian function with respect to the variables in a Gauss-Seidel manner. While APG has to convert (1) into an approximate unconstrained problem by adding the linear constraints to the objective function as a penalty, hence only producing an approximate solution to (1), ADM can solve (1) exactly. However, when A or B is not the identity mapping, the subproblems in ADM may not have closed form solutions, so solving them is cumbersome. In this paper, we propose a linearized version of ADM (LADM) to overcome the difficulty in solving the subproblems: we replace the quadratic penalty term by its linearization plus a proximal term.
We also allow the penalty parameter to change adaptively and propose a novel and simple rule to update it. Linearization makes the auxiliary variables unnecessary, hence saving memory and avoiding the expensive matrix inversions needed to update the auxiliary variables. Moreover, without the extra constraints introduced by the auxiliary variables, the convergence is also faster. Using a variable penalty parameter further speeds up the convergence. The global convergence of LADM with adaptive penalty (LADMAP) is also proven. As an example, we apply our LADMAP to solve the low-rank representation (LRR) problem [12]¹:
min_{Z,E} ‖Z‖_* + μ‖E‖_{2,1},  s.t.  X = XZ + E,   (2)
where X is the data matrix. LRR is an important robust subspace clustering technique and has found wide applications in machine learning and computer vision, e.g., motion segmentation, face clustering, and temporal segmentation [12, 14, 6]. However, the existing LRR solver [12] is based on ADM, which suffers from O(n^3) computation complexity due to the matrix-matrix multiplications and matrix inversions. Moreover, introducing auxiliary variables also slows down the convergence, as there are more variables and constraints. Such a heavy computation load prevents LRR from large scale applications. It is LRR that motivated us to develop LADMAP. We show that LADMAP can be successfully applied to LRR, obtaining faster convergence than the original solver. By further representing Z as its skinny SVD and utilizing an advanced functionality of the PROPACK [9] package, the complexity of solving LRR by LADMAP becomes only O(rn^2), as there are no full sized matrix-matrix multiplications, where r is the rank of the optimal Z. Numerical experiments show the great speed advantage of our LADMAP based methods for solving LRR.
Our work is inspired by Yang et al. [20]. Nonetheless, our work differs from theirs in several distinct ways. First, they only proved the convergence of LADM for a specific problem, namely nuclear norm regularization. Their proof utilized some special properties of the nuclear norm, while we prove the convergence of LADM for general problems in (1). Second, they only proved convergence in the case of a fixed penalty, while we prove it in the case of a variable penalty. Although they mentioned the dynamic updating rule proposed in [8], their proof cannot be straightforwardly applied to the case of a variable penalty. Moreover, that rule is for ADM only. Third, the convergence speed of LADM heavily depends on the choice of penalty, so it is difficult to choose an optimal fixed penalty that fits different problems and problem sizes, while our novel updating rule for the penalty, although simple, is effective for different problems and problem sizes.
The linearization technique has also been used in other optimization methods. For example, Yin [22] applied this technique to the Bregman iteration for solving compressive sensing problems and proved that the linearized Bregman method converges to an exact solution conditionally. In comparison, LADM (and LADMAP) always converges to an exact solution.
2 Linearized Alternating Direction Method with Adaptive Penalty
2.1 The Alternating Direction Method
ADM is now very popular in solving large scale machine learning problems [1]. When solving (1) by ADM, one operates on the following augmented Lagrangian function:
L(x, y, λ) = f(x) + g(y) + ⟨λ, A(x) + B(y) − c⟩ + (β/2)‖A(x) + B(y) − c‖²,   (3)
where λ is the Lagrange multiplier, ⟨·,·⟩ is the inner product, and β > 0 is the penalty parameter.
¹ Here we switch to bold capital letters in order to emphasize that the variables are matrices.
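To make the correspondence between LRR (2) and the generic program (1) explicit (this identification is implicit in the paper and spelled out here for concreteness):
x := Z,  y := E,  f(Z) := ‖Z‖_*,  g(E) := μ‖E‖_{2,1},  A(Z) := XZ,  B(E) := E,  c := X,
so that the constraint A(x) + B(y) = c reads XZ + E = X, i.e., X = XZ + E, and the augmented Lagrangian (3) becomes
L(Z, E, λ) = ‖Z‖_* + μ‖E‖_{2,1} + ⟨λ, XZ + E − X⟩ + (β/2)‖XZ + E − X‖².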
The usual augmented Lagrange multiplier method is to minimize L w.r.t. x and y simultaneously. This is usually difficult and does not exploit the fact that the objective function is separable. To remedy this issue, ADM decomposes the minimization of L w.r.t. (x, y) into two subproblems that minimize w.r.t. x and y, respectively. More specifically, the iterations of ADM go as follows:
x_{k+1} = arg min_x L(x, y_k, λ_k) = arg min_x f(x) + (β/2)‖A(x) + B(y_k) − c + λ_k/β‖²,   (4)
y_{k+1} = arg min_y L(x_{k+1}, y, λ_k) = arg min_y g(y) + (β/2)‖B(y) + A(x_{k+1}) − c + λ_k/β‖²,   (5)
λ_{k+1} = λ_k + β[A(x_{k+1}) + B(y_{k+1}) − c].   (6)
In many machine learning problems, as f and g are matrix or vector norms, the subproblems (4) and (5) usually have closed form solutions when A and B are identities [2, 12, 21]. In this case, ADM is appealing. However, in many problems A and B are not identities. For example, in matrix completion A can be a selection matrix, and in LRR and 1D sparse representation A can be a general matrix. In this case, there are no closed form solutions to (4) and (5). Then (4) and (5) have to be solved iteratively. To overcome this difficulty, a common strategy is to introduce auxiliary variables [12, 1] u and v and reformulate problem (1) into an equivalent one:
min_{x,y,u,v} f(x) + g(y),  s.t.  A(u) + B(v) = c,  x = u,  y = v,   (7)
and the corresponding ADM iterations analogous to (4)-(6) can be deduced. With more variables and more constraints, more memory is required and the convergence of ADM also becomes slower. Moreover, to update u and v, whose subproblems are least squares problems, expensive matrix inversions are often necessary. Even worse, the convergence of ADM with more than two variables is not guaranteed [7]. To avoid introducing auxiliary variables and still solve subproblems (4) and (5) efficiently, inspired by Yang et al. [20], we propose a linearization technique for (4) and (5). To further accelerate the convergence of the algorithm, we also propose an adaptive rule for updating the penalty parameter.
2.2 Linearized ADM
By linearizing the quadratic term in (4) at x_k and adding a proximal term, we have the following approximation:
x_{k+1} = arg min_x f(x) + ⟨A*(λ_k) + βA*(A(x_k) + B(y_k) − c), x − x_k⟩ + (βη_A/2)‖x − x_k‖²
        = arg min_x f(x) + (βη_A/2)‖x − x_k + A*(λ_k + β(A(x_k) + B(y_k) − c))/(βη_A)‖²,   (8)
where A* is the adjoint of A and η_A > 0 is a parameter whose proper value will be analyzed later. The above approximation resembles that of APG [16], but we do not use APG to solve (4) iteratively. Similarly, subproblem (5) can be approximated by
y_{k+1} = arg min_y g(y) + (βη_B/2)‖y − y_k + B*(λ_k + β(A(x_{k+1}) + B(y_k) − c))/(βη_B)‖².   (9)
The update of the Lagrange multiplier still goes as (6)².
2.3 Adaptive Penalty
In previous ADM and LADM approaches [15, 21, 20], the penalty parameter β is fixed. Some scholars have observed that ADM with a fixed β can converge very slowly and that it is nontrivial to choose an optimal fixed β. The same holds for LADM. Thus a dynamic β is preferred in real applications. Although Tao et al. [15] and Yang et al. [20] mentioned He et al.'s adaptive updating rule [8] in their papers, that rule is for ADM only. We propose the following adaptive updating strategy for the penalty parameter β:
β_{k+1} = min(β_max, ρβ_k),   (10)
² As in [20], we can also introduce a parameter γ and update λ as λ_{k+1} = λ_k + γβ[A(x_{k+1}) + B(y_{k+1}) − c].
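For concreteness, the adaptive rule (10)-(11) amounts to a one-line update; a minimal Python sketch (the function and argument names are ours, not from the paper's code):

import math

def update_penalty(beta, rho0, beta_max, eps2, eta_A, eta_B, dx, dy, norm_c):
    """One application of the adaptive penalty rule (10)-(11).

    dx = ||x_{k+1} - x_k||, dy = ||y_{k+1} - y_k||, norm_c = ||c||.
    """
    crit = beta * max(math.sqrt(eta_A) * dx, math.sqrt(eta_B) * dy) / norm_c
    rho = rho0 if crit < eps2 else 1.0   # rule (11)
    return min(beta_max, rho * beta)     # rule (10)

Intuitively, the penalty is allowed to grow (by the factor ρ₀) only while the iterates are still moving slowly relative to the tolerance ε₂; otherwise it is frozen.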
We choose not to do so in this paper in order not to make the exposition of LADMAP too complex; the reader can refer to the Supplementary Material for full details.
Here β_max is an upper bound of {β_k}. The value of ρ is defined as
ρ = ρ₀, if β_k · max(√η_A ‖x_{k+1} − x_k‖, √η_B ‖y_{k+1} − y_k‖)/‖c‖ < ε₂;  ρ = 1, otherwise,   (11)
where ρ₀ ≥ 1 is a constant. The condition for assigning ρ = ρ₀ comes from the analysis of the stopping criteria (see Section 2.5). We recommend β₀ = αε₂, where α depends on the size of c. Our updating rule is fundamentally different from He et al.'s for ADM [8], which aims at balancing the errors in the stopping criteria and involves several parameters.
2.4 Convergence of LADMAP
To prove the convergence of LADMAP, we first state the following propositions.
Proposition 1.
−β_k η_A (x_{k+1} − x_k) − A*(λ̃_{k+1}) ∈ ∂f(x_{k+1}),  −β_k η_B (y_{k+1} − y_k) − B*(λ̂_{k+1}) ∈ ∂g(y_{k+1}),   (12)
where λ̃_{k+1} = λ_k + β_k[A(x_k) + B(y_k) − c], λ̂_{k+1} = λ_k + β_k[A(x_{k+1}) + B(y_k) − c], and ∂f and ∂g are the subgradients of f and g, respectively. This can be easily proved by checking the optimality conditions of (8) and (9).
Proposition 2. Denote the operator norms of A and B as ‖A‖ and ‖B‖, respectively. If {β_k} is non-decreasing and upper bounded, η_A > ‖A‖², η_B > ‖B‖², and (x*, y*, λ*) is any Karush-Kuhn-Tucker (KKT) point of problem (1) (see (13)-(14)), then:
(1) {η_A‖x_k − x*‖² − ‖A(x_k − x*)‖² + η_B‖y_k − y*‖² + β_k^{−2}‖λ_k − λ*‖²} is non-increasing.
(2) ‖x_{k+1} − x_k‖ → 0, ‖y_{k+1} − y_k‖ → 0, ‖λ_{k+1} − λ_k‖ → 0.
The proof can be found in the Supplementary Material. We can then prove the convergence of LADMAP, as stated in the following theorem.
Theorem 3. If {β_k} is non-decreasing and upper bounded, η_A > ‖A‖², and η_B > ‖B‖², then the sequence {(x_k, y_k, λ_k)} generated by LADMAP converges to a KKT point of problem (1).
The proof can be found in Appendix A.
2.5 Stopping Criteria
The KKT conditions of problem (1) are that there exists a triple (x*, y*, λ*) such that
A(x*) + B(y*) − c = 0,   (13)
−A*(λ*) ∈ ∂f(x*),  −B*(λ*) ∈ ∂g(y*).   (14)
The triple (x*, y*, λ*) is called a KKT point. So the first stopping criterion is the feasibility:
‖A(x_{k+1}) + B(y_{k+1}) − c‖/‖c‖ < ε₁.   (15)
As for the second KKT condition, we rewrite the second part of Proposition 1 as follows:
−β_k[η_B(y_{k+1} − y_k) + B*(A(x_{k+1} − x_k))] − B*(λ̃_{k+1}) ∈ ∂g(y_{k+1}).   (16)
So for λ̃_{k+1} to satisfy the second KKT condition, both β_k η_A‖x_{k+1} − x_k‖ and β_k‖η_B(y_{k+1} − y_k) + B*(A(x_{k+1} − x_k))‖ should be small enough. This leads to the second stopping criterion:
β_k max(η_A‖x_{k+1} − x_k‖/‖A*(c)‖, η_B‖y_{k+1} − y_k‖/‖B*(c)‖) ≤ ε₂.   (17)
By estimating ‖A*(c)‖ and ‖B*(c)‖ by √η_A‖c‖ and √η_B‖c‖, respectively, we arrive at the second stopping criterion in use:
β_k max(√η_A‖x_{k+1} − x_k‖, √η_B‖y_{k+1} − y_k‖)/‖c‖ ≤ ε₂.   (18)
Finally, we summarize our LADMAP algorithm in Algorithm 1.
Algorithm 1 LADMAP for Problem (1)
Initialize: Set ε₁ > 0, ε₂ > 0, β_max ≫ β₀ > 0, η_A > ‖A‖², η_B > ‖B‖², x₀, y₀, λ₀, and k ← 0.
while (15) or (18) is not satisfied do
  Step 1: Update x by solving (8).
  Step 2: Update y by solving (9).
  Step 3: Update λ by (6).
  Step 4: Update β by (10) and (11).
  Step 5: k ← k + 1.
end while
3 Applying LADMAP to LRR
In this section, we apply LADMAP to solve the LRR problem (2). We further introduce acceleration tricks to reduce the computation complexity of each iteration.
3.1 Solving LRR by LADMAP
As the LRR problem (2) is a special case of problem (1), LADMAP can be directly applied to it.
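A minimal Python sketch of the generic loop in Algorithm 1, under the assumption that the proximal mappings of f and g are available in closed form (all names are ours; this is a sketch, not the authors' implementation):

import numpy as np

def ladmap(prox_f, prox_g, A, At, B, Bt, c, x, y,
           eta_A, eta_B, beta=1e-2, beta_max=1e10, rho0=1.9,
           eps1=1e-4, eps2=1e-5, max_iter=1000):
    """prox_f(v, t) must return argmin_x f(x) + (t/2)||x - v||^2 (same for
    prox_g); A/At and B/Bt are the linear maps and their adjoints."""
    lam = np.zeros_like(c)
    norm_c = np.linalg.norm(c)
    for _ in range(max_iter):
        r = A(x) + B(y) - c                                   # residual
        x_new = prox_f(x - At(lam + beta * r) / (beta * eta_A),
                       beta * eta_A)                          # update (8)
        r = A(x_new) + B(y) - c
        y_new = prox_g(y - Bt(lam + beta * r) / (beta * eta_B),
                       beta * eta_B)                          # update (9)
        r = A(x_new) + B(y_new) - c
        lam = lam + beta * r                                  # update (6)
        crit1 = np.linalg.norm(r) / norm_c                    # criterion (15)
        crit2 = beta * max(np.sqrt(eta_A) * np.linalg.norm(x_new - x),
                           np.sqrt(eta_B) * np.linalg.norm(y_new - y)) / norm_c
        x, y = x_new, y_new                                   # criterion (18)
        if crit1 < eps1 and crit2 < eps2:
            break
        beta = min(beta_max, (rho0 if crit2 < eps2 else 1.0) * beta)  # (10)-(11)
    return x, y, lam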
The two subproblems both have closed form solutions. In the subproblem for updating E, one may apply the ℓ_{2,1}-norm shrinkage operator [12], with a threshold μβ_k^{−1}, to the matrix M_k = −XZ_k + X − λ_k/β_k. In the subproblem for updating Z, one has to apply the singular value shrinkage operator [2], with a threshold (β_k η_X)^{−1}, to the matrix N_k = Z_k − η_X^{−1} Xᵀ(XZ_k + E_{k+1} − X + λ_k/β_k), where η_X > σ²_max(X). If N_k is formed explicitly, the usual technique of partial SVD, using PROPACK [9] and rank prediction³, can be utilized to compute the leading r singular values and associated vectors of N_k efficiently, making the complexity of the SVD computation O(rn²), where r is the predicted rank of Z_{k+1} and n is the column number of X. Note that as β_k is non-decreasing, the predicted rank is almost non-decreasing, making the iterations computationally efficient.
3.2 Acceleration Tricks for LRR
Up to now, LADMAP for LRR is still of complexity O(n³), although partial SVD is already used. This is because forming M_k and N_k requires full sized matrix-matrix multiplications, e.g., XZ_k. To break this complexity bound, we introduce a decomposition technique to further accelerate LADMAP for LRR. By representing Z_k as its skinny SVD, Z_k = U_k Σ_k V_kᵀ, some of the full sized matrix-matrix multiplications are gone: they are replaced by successive reduced sized matrix-matrix multiplications. For example, when updating E, XZ_k is computed as ((XU_k)Σ_k)V_kᵀ, reducing the complexity to O(rn²). When computing the partial SVD of N_k, things are more complicated. If we form N_k explicitly, we will be faced with computing Xᵀ(X + λ_k/β_k), which is neither low-rank nor sparse⁴. Fortunately, in PROPACK the bi-diagonalizing process of N_k is done by the Lanczos procedure [9], which only requires computing the matrix-vector multiplications N_k v and uᵀN_k, where u and v are some vectors in the Lanczos procedure. So we may compute N_k v and uᵀN_k by multiplying the vectors u and v successively with the component matrices in N_k, rather than forming N_k explicitly, and the computation complexity of the partial SVD of N_k is still O(rn²). Consequently, with our acceleration techniques, the complexity of our accelerated LADMAP (denoted LADMAP(A) for short) for LRR is O(rn²). LADMAP(A) is summarized in Algorithm 2.
³ The current PROPACK can only output a given number of singular values and vectors, so one has to predict the number of singular values that are greater than a threshold [11, 20, 16]; see Step 3 of Algorithm 2. Recently, we have modified PROPACK so that it can output the singular values that are greater than a threshold and their corresponding singular vectors. See [10].
⁴ When forming N_k explicitly, XᵀXZ_k can be computed as ((Xᵀ(XU_k))Σ_k)V_kᵀ, whose complexity is still O(rn²), while XᵀE_{k+1} can also be accelerated because E_{k+1} is a column-sparse matrix.
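The two closed-form operators can be sketched in Python as follows (our own function names; the ℓ_{2,1} shrinkage acts column-wise as in Lemma 3.2 of [12], and the singular value shrinkage is the operator of [2]):

import numpy as np

def l21_shrink(M, tau):
    """Column-wise l2,1 shrinkage: each column M[:, i] is scaled by
    max(0, 1 - tau / ||M[:, i]||_2)."""
    out = np.zeros_like(M)
    norms = np.linalg.norm(M, axis=0)
    keep = norms > tau
    out[:, keep] = M[:, keep] * (1.0 - tau / norms[keep])
    return out

def svt(N, tau):
    """Singular value shrinkage (thresholding): U max(S - tau, 0) V^T."""
    U, S, Vt = np.linalg.svd(N, full_matrices=False)
    S = np.maximum(S - tau, 0.0)
    r = int((S > 0).sum())
    return (U[:, :r] * S[:r]) @ Vt[:r]    # U[:, :r], S[:r], Vt[:r] are the skinny SVD factors

In the E-update one would call l21_shrink(M_k, mu / beta_k); in the Z-update, svt(N_k, 1.0 / (beta_k * eta_X)) — matching the thresholds μβ_k^{−1} and (β_k η_X)^{−1} stated above.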
Algorithm 2 Accelerated LADMAP for LRR (2)
Input: Observation matrix X and parameter μ > 0.
Initialize: Set E₀, Z₀ and λ₀ to zero matrices, where Z₀ is represented as (U₀, Σ₀, V₀) ← (0, 0, 0). Set ε₁ > 0, ε₂ > 0, β_max ≫ β₀ > 0, η_X > σ²_max(X), r = 5, and k ← 0.
while (15) or (18) is not satisfied do
  Step 1: Update E_{k+1} = arg min_E μ‖E‖_{2,1} + (β_k/2)‖E + (XU_k)Σ_k V_kᵀ − X + λ_k/β_k‖². This subproblem can be solved by using Lemma 3.2 in [12].
  Step 2: Update the skinny SVD (U_{k+1}, Σ_{k+1}, V_{k+1}) of Z_{k+1}. First, compute the partial SVD Ũ_r Σ̃_r Ṽ_rᵀ of the implicit matrix N_k, which is bi-diagonalized by the successive matrix-vector multiplication technique described in Section 3.2. Second, set U_{k+1} = Ũ_r(:, 1:r'), Σ_{k+1} = Σ̃_r(1:r', 1:r') − (β_k η_X)^{−1} I, V_{k+1} = Ṽ_r(:, 1:r'), where r' is the number of singular values in Σ̃_r that are greater than (β_k η_X)^{−1}.
  Step 3: Update the predicted rank r: if r' < r, then r = min(r' + 1, n); otherwise, r = min(r' + round(0.05n), n).
  Step 4: Update λ_{k+1} = λ_k + β_k((XU_{k+1})Σ_{k+1}V_{k+1}ᵀ + E_{k+1} − X).
  Step 5: Update β_{k+1} by (10)-(11).
  Step 6: k ← k + 1.
end while
4 Experimental Results
In this section, we report numerical results on LADMAP, LADMAP(A) and other state-of-the-art algorithms, including APG⁵, ADM⁶ and LADM, for LRR based data clustering problems. APG, ADM, LADM and LADMAP all utilize the Matlab version of PROPACK [9]. For LADMAP(A), we provide two function handles to PROPACK which fulfil the successive matrix-vector multiplications. All experiments are run and timed on a PC with an Intel Core i5 CPU at 2.67GHz and with 4GB of memory, running Windows 7 and Matlab version 7.10. We test and compare these solvers on both synthetic multiple-subspace data and real world motion data (the Hopkins155 motion segmentation database [17]).
For APG, we set the parameters β₀ = 0.01, β_min = 10^{−10}, and θ = 0.9 in its continuation technique, and the Lipschitz constant τ = σ²_max(X). The parameters of ADM and LADM are the same as those in [12] and [20], respectively. In particular, for LADM the penalty is fixed at β = 2.5/min(m, n), where m × n is the size of X. For LADMAP, we set ε₁ = 10^{−4}, ε₂ = 10^{−5}, β₀ = min(m, n)ε₂, β_max = 10^{10}, ρ₀ = 1.9, and η_X = 1.02σ²_max(X). As the code of ADM was downloaded, its stopping criteria, ‖XZ_k + E_k − X‖/‖X‖ ≤ ε₁ and max(‖E_k − E_{k−1}‖/‖X‖, ‖Z_k − Z_{k−1}‖/‖X‖) ≤ ε₂, are used in all our experiments⁷.
4.1 On Synthetic Data
The synthetic test data, parameterized as (s, p, d, r̄), is created by the same procedure as in [12]. s independent subspaces {S_i}_{i=1}^s are constructed, whose bases {U_i}_{i=1}^s are generated by U_{i+1} = TU_i, 1 ≤ i ≤ s − 1, where T is a random rotation and U₁ is a d × r̄ random orthogonal matrix. So each subspace has a rank of r̄ and the data has an ambient dimension of d. Then p data points are sampled from each subspace by X_i = U_i Q_i, 1 ≤ i ≤ s, with Q_i being an r̄ × p i.i.d. zero mean unit variance Gaussian matrix N(0, 1). 20% of the samples are randomly chosen to be corrupted by adding Gaussian noise with zero mean and standard deviation 0.1‖x‖. We empirically find that LRR achieves the best clustering performance on this data set when μ = 0.1, so we test all algorithms with μ = 0.1 in this experiment. To measure the relative errors in the solutions, we run LADMAP for 2000 iterations with β_max = 10³ to establish the ground truth solution (E₀, Z₀).
The computational comparison is summarized in Table 1. We can see that the iteration numbers and the CPU times of both LADMAP and LADMAP(A) are much less than those of the other methods, and LADMAP(A) is further much faster than LADMAP. Moreover, the advantage of LADMAP(A) is even greater when the ratio r̄/p, which is roughly the ratio of the rank of Z₀ to the size of Z₀, is smaller, which testifies to the complexity estimations on LADMAP and LADMAP(A) for LRR.
⁵ Please see the Supplementary Material for the details of solving LRR by APG.
⁶ We use the Matlab code provided online by the authors of [12].
⁷ Note that the second criterion differs from that in (18). However, this does not harm the convergence of LADMAP because (18) is always checked when updating β_{k+1} (see (11)).
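The "two function handles" mentioned above can be sketched as follows: they apply N_k and N_kᵀ to vectors using only the component matrices of N_k (a sketch under our own naming; X is m × n and Z_k = U diag(s) Vᵀ is its skinny SVD):

import numpy as np

def make_nk_handles(X, U, s, V, E, Lam, beta, eta):
    """Handles computing N_k @ v and N_k.T @ u without forming N_k.

    N_k = Z_k - X^T (X Z_k + E - X + Lam/beta) / eta  (cf. Section 3.1).
    Each call costs O(mn + rn) flops, so the Lanczos-based partial SVD
    remains O(rn^2) overall when m ~ n.
    """
    G = E - X + Lam / beta                      # m x n, formed once

    def matvec(v):                              # v in R^n  ->  N_k v
        zv = U @ (s * (V.T @ v))                # Z_k v via the skinny SVD
        w = X @ zv + G @ v                      # (X Z_k + G) v
        return zv - (X.T @ w) / eta

    def rmatvec(u):                             # u in R^n  ->  N_k^T u
        ztu = V @ (s * (U.T @ u))               # Z_k^T u
        xu = X @ u
        mtxu = V @ (s * (U.T @ (X.T @ xu))) + G.T @ xu   # (X Z_k + G)^T X u
        return ztu - mtxu / eta

    return matvec, rmatvec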
It is noteworthy that the iteration numbers of ADM and LADM seem to grow with the problem sizes, while that of LADMAP is rather constant. Moreover, LADM is not faster than ADM. In particular, on the last data set we were unable to wait until LADM stopped. Finally, as APG converges to an approximate solution to (2), its relative errors are larger and its clustering accuracy is lower than those of the ADM and LADM based methods.

Table 1: Comparison among APG, ADM, LADM, LADMAP and LADMAP(A) on the synthetic data. For each quadruple (s, p, d, r̄), the LRR problem, with μ = 0.1, was solved for the same data using different algorithms. We present typical running time (in ×10³ seconds), iteration number, relative error (%) of the output solution (Ê, Ẑ), and clustering accuracy (%) of the tested algorithms, respectively.

Size (s, p, d, r̄) | Method    | Time    | Iter. | ‖Ẑ−Z₀‖/‖Z₀‖ | ‖Ê−E₀‖/‖E₀‖ | Acc.
(10, 20, 200, 5)  | APG       | 0.0332  | 110   | 2.2079 | 1.5096 | 81.5
                  | ADM       | 0.0529  | 176   | 0.5491 | 0.5093 | 90.0
                  | LADM      | 0.0603  | 194   | 0.5480 | 0.5024 | 90.0
                  | LADMAP    | 0.0145  | 46    | 0.5480 | 0.5024 | 90.0
                  | LADMAP(A) | 0.0010  | 46    | 0.5480 | 0.5024 | 90.0
(15, 20, 300, 5)  | APG       | 0.0869  | 106   | 2.4824 | 1.0341 | 80.0
                  | ADM       | 0.1526  | 185   | 0.6519 | 0.4078 | 83.7
                  | LADM      | 0.2943  | 363   | 0.6518 | 0.4076 | 86.7
                  | LADMAP    | 0.0336  | 41    | 0.6518 | 0.4076 | 86.7
                  | LADMAP(A) | 0.0015  | 41    | 0.6518 | 0.4076 | 86.7
(20, 25, 500, 5)  | APG       | 1.8837  | 117   | 2.8905 | 2.4017 | 72.4
                  | ADM       | 3.7139  | 225   | 1.1191 | 1.0170 | 80.0
                  | LADM      | 8.1574  | 508   | 0.6379 | 0.4268 | 80.0
                  | LADMAP    | 0.7762  | 40    | 0.6379 | 0.4268 | 84.6
                  | LADMAP(A) | 0.0053  | 40    | 0.6379 | 0.4268 | 84.6
(30, 30, 900, 5)  | APG       | 6.1252  | 116   | 3.0667 | 0.9199 | 69.4
                  | ADM       | 11.7185 | 220   | 0.6865 | 0.4866 | 76.0
                  | LADM      | N.A.    | N.A.  | N.A.   | N.A.   | N.A.
                  | LADMAP    | 2.3891  | 44    | 0.6864 | 0.4294 | 80.1
                  | LADMAP(A) | 0.0058  | 44    | 0.6864 | 0.4294 | 80.1

Table 2: Comparison among APG, ADM, LADM, LADMAP and LADMAP(A) on the Hopkins155 database. We present their average computing time (in seconds), average number of iterations and average classification errors (%) on all 156 sequences.

Method    | Two Motion: Time / Iter. / CErr. | Three Motion: Time / Iter. / CErr. | All: Time / Iter. / CErr.
APG       | 15.7836 / 90 / 5.77   | 46.4970 / 90 / 16.52    | 22.6277 / 90 / 8.36
ADM       | 53.3470 / 281 / 5.72  | 159.8644 / 284 / 16.52  | 77.0864 / 282 / 8.33
LADM      | 9.6701 / 110 / 5.77   | 22.1467 / 64 / 16.52    | 12.4520 / 99 / 8.36
LADMAP    | 3.6964 / 22 / 5.72    | 10.9438 / 22 / 16.52    | 5.3114 / 22 / 8.33
LADMAP(A) | 2.1348 / 22 / 5.72    | 6.1098 / 22 / 16.52     | 3.0202 / 22 / 8.33

4.2 On Real World Data
We further test the performance of these algorithms on the Hopkins155 database [17]. This database consists of 156 sequences, each of which has 39 to 550 data vectors drawn from two or three motions. For computational efficiency, we preprocess the data by projecting it to be 5-dimensional using PCA. As μ = 2.4 is the best parameter for this database [12], we test all algorithms with μ = 2.4. Table 2 shows the comparison among APG, ADM, LADM, LADMAP and LADMAP(A) on this database. We can also see that LADMAP and LADMAP(A) are much faster than APG, ADM, and LADM, and LADMAP(A) is also faster than LADMAP. However, in this experiment the advantage of LADMAP(A) over LADMAP is not as dramatic as that in Table 1. This is because on this data μ is chosen as 2.4, which cannot make the rank of the ground truth solution Z₀ much smaller than the size of Z₀.
5 Conclusions
In this paper, we propose a linearized alternating direction method with adaptive penalty that solves the subproblems in ADM conveniently. With LADMAP, no auxiliary variables are required and the convergence is also much faster.
We further apply it to solve the LRR problem and combine it with an acceleration trick so that the computation complexity is reduced from O(n³) to O(rn²), which is highly advantageous over the existing LRR solvers. Although we only present results on LRR, LADMAP is actually a general method that can be applied to other convex programs.
Acknowledgments
The authors would like to thank Dr. Xiaoming Yuan for pointing us to [20]. This work is partially supported by the grants of the NSFC-Guangdong Joint Fund (No. U0935004) and the NSFC Fund (No. 60873181, 61173103). R. Liu also thanks the support from CSC.
A Proof of Theorem 3
Proof. By Proposition 2 (1), {(x_k, y_k, λ_k)} is bounded, hence has an accumulation point, say (x_{k_j}, y_{k_j}, λ_{k_j}) → (x^∞, y^∞, λ^∞). We accomplish the proof in two steps.
1. We first prove that (x^∞, y^∞, λ^∞) is a KKT point of problem (1). By Proposition 2 (2), A(x_{k+1}) + B(y_{k+1}) − c = β_k^{−1}(λ_{k+1} − λ_k) → 0. This shows that any accumulation point of {(x_k, y_k)} is a feasible solution.
By letting k = k_j − 1 in Proposition 1 and using the definition of the subgradient, we have
f(x_{k_j}) + g(y_{k_j}) ≤ f(x*) + g(y*) + ⟨x_{k_j} − x*, −β_{k_j−1}η_A(x_{k_j} − x_{k_j−1}) − A*(λ̃_{k_j})⟩ + ⟨y_{k_j} − y*, −β_{k_j−1}η_B(y_{k_j} − y_{k_j−1}) − B*(λ̂_{k_j})⟩.
Let j → +∞. By observing Proposition 2 (2), we have
f(x^∞) + g(y^∞) ≤ f(x*) + g(y*) + ⟨x^∞ − x*, −A*(λ^∞)⟩ + ⟨y^∞ − y*, −B*(λ^∞)⟩
= f(x*) + g(y*) − ⟨A(x^∞ − x*), λ^∞⟩ − ⟨B(y^∞ − y*), λ^∞⟩
= f(x*) + g(y*) − ⟨A(x^∞) + B(y^∞) − A(x*) − B(y*), λ^∞⟩
= f(x*) + g(y*),
where we have used the fact that both (x^∞, y^∞) and (x*, y*) are feasible solutions. So we conclude that (x^∞, y^∞) is an optimal solution to (1).
Again, letting k = k_j − 1 in Proposition 1 and by the definition of the subgradient, we have
f(x) ≥ f(x_{k_j}) + ⟨x − x_{k_j}, −β_{k_j−1}η_A(x_{k_j} − x_{k_j−1}) − A*(λ̃_{k_j})⟩, ∀x.   (19)
Fixing x and letting j → +∞, we see that f(x) ≥ f(x^∞) + ⟨x − x^∞, −A*(λ^∞)⟩, ∀x. So −A*(λ^∞) ∈ ∂f(x^∞). Similarly, −B*(λ^∞) ∈ ∂g(y^∞). Therefore, (x^∞, y^∞, λ^∞) is a KKT point of problem (1).
2. We next prove that the whole sequence {(x_k, y_k, λ_k)} converges to (x^∞, y^∞, λ^∞). By choosing (x*, y*, λ*) = (x^∞, y^∞, λ^∞) in Proposition 2, we have η_A‖x_{k_j} − x^∞‖² − ‖A(x_{k_j} − x^∞)‖² + η_B‖y_{k_j} − y^∞‖² + β_{k_j}^{−2}‖λ_{k_j} − λ^∞‖² → 0. By Proposition 2 (1), we readily have η_A‖x_k − x^∞‖² − ‖A(x_k − x^∞)‖² + η_B‖y_k − y^∞‖² + β_k^{−2}‖λ_k − λ^∞‖² → 0. So (x_k, y_k, λ_k) → (x^∞, y^∞, λ^∞).
As (x^∞, y^∞, λ^∞) can be an arbitrary accumulation point of {(x_k, y_k, λ_k)}, we may conclude that {(x_k, y_k, λ_k)} converges to a KKT point of problem (1). ∎
References
[1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. In Michael Jordan, editor, Foundations and Trends in Machine Learning, 2010.
[2] J. Cai, E. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. preprint, 2008.
[3] E. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 2011.
[4] E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 2009.
[5] E.J. Candès and M. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 2008.
[6] P. Favaro, R. Vidal, and A. Ravichandran. A closed form solution to robust subspace estimation and clustering. In CVPR, 2011.
[7] B. He, M. Tao, and X. Yuan.
Alternating direction method with Gaussian back substitution for separable convex programming. SIAM Journal on Optimization, accepted.
[8] B. He, H. Yang, and S. Wang. Alternating direction method with self-adaptive penalty parameters for monotone variational inequality. J. Optimization Theory and Applications, 106:337-356, 2000.
[9] R. Larsen. Lanczos bidiagonalization with partial reorthogonalization. Department of Computer Science, Aarhus University, Technical report, DAIMI PB-357, 1998.
[10] Z. Lin. Some software packages for partial SVD computation. arXiv:1108.1548.
[11] Z. Lin, M. Chen, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. UIUC Technical Report UILU-ENG-09-2215, 2009, arXiv:1009.5055.
[12] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In ICML, 2010.
[13] J. Liu, S. Ji, and J. Ye. Multi-task feature learning via efficient l2,1 norm minimization. In UAI, 2009.
[14] Y. Ni, J. Sun, X. Yuan, S. Yan, and L. Cheong. Robust low-rank subspace segmentation with semidefinite guarantees. In ICDM Workshop, 2010.
[15] M. Tao and X.M. Yuan. Recovering low-rank and sparse components of matrices from incomplete and noisy observations. SIAM Journal on Optimization, 21(1):57-81, 2011.
[16] K. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pacific J. Optimization, 6:615-640, 2010.
[17] R. Tron and R. Vidal. A benchmark for the comparison of 3-D motion segmentation algorithms. In CVPR, 2007.
[18] J. Wright, A. Ganesh, S. Rao, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In NIPS, 2009.
[19] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, and S. Yan. Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE, 2010.
[20] J. Yang and X. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. submitted, 2011.
[21] J. Yang and Y. Zhang. Alternating direction algorithms for l1 problems in compressive sensing. SIAM J. Scientific Computing, 2010.
[22] W. Yin. Analysis and generalizations of the linearized Bregman method. SIAM Journal on Imaging Sciences, 2010.
Transfer from Multiple MDPs
Alessandro Lazaric, INRIA Lille - Nord Europe, Team SequeL, France ([email protected])
Marcello Restelli, Department of Electronics and Informatics, Politecnico di Milano, Italy ([email protected])
Abstract
Transfer reinforcement learning (RL) methods leverage the experience collected on a set of source tasks to speed up RL algorithms. A simple and effective approach is to transfer samples from source tasks and include them in the training set used to solve a target task. In this paper, we investigate the theoretical properties of this transfer method and we introduce novel algorithms adapting the transfer process on the basis of the similarity between source and target tasks. Finally, we report illustrative experimental results in a continuous chain problem.
1 Introduction
The objective of transfer in reinforcement learning (RL) [10] is to speed up RL algorithms by reusing knowledge (e.g., samples, value function, features, parameters) obtained from a set of source tasks. The underlying assumption of transfer methods is that the source tasks (or a suitable combination of these) are somehow similar to the target task, so that the transferred knowledge can be useful in learning its solution. A wide range of scenarios and methods for transfer in RL have been studied in the last decade (see [12, 6] for a thorough survey). In this paper, we focus on the simple transfer approach where trajectory samples are transferred from source MDPs to increase the size of the training set used to solve the target MDP. This approach is particularly suited to problems (e.g., robotics, applications involving human interaction) where it is not possible to interact with the environment long enough to collect samples to solve the task at hand. If samples are available from other sources (e.g., simulators in the case of robotic applications), the solution of the target task can benefit from a larger training set that also includes some source samples. This approach has already been investigated in the case of transfer between tasks with different state-action spaces in [11], where the source samples are used to build a model of the target task whenever the number of target samples is not large enough. A more sophisticated sample-transfer method is proposed in [5]. The authors introduce an algorithm which estimates the similarity between source and target tasks and selectively transfers from the source tasks which are more likely to provide samples similar to those generated by the target MDP. Although the empirical results are encouraging, the proposed method is based on heuristic measures and no theoretical analysis of its performance is provided. On the other hand, in supervised learning a number of theoretical works have investigated the effectiveness of transfer in reducing the sample complexity of the learning process. In domain adaptation, a solution learned on a source task is transferred to a target task and its performance depends on how similar the two tasks are. In [1] and [8] different distance measures are proposed and are shown to be connected to the performance of the transferred solution. The case of transfer of samples from multiple source tasks is studied in [2]. The most interesting finding is that the transfer performance benefits from using a larger training set at the cost of an additional error due to the average distance between source and target tasks.
This implies the existence of a transfer tradeoff between transferring as many samples as possible and limiting the transfer to sources which are similar to the target task. As a result, the transfer of samples is expected to outperform single-task learning whenever negative transfer (i.e., transfer from source tasks far from the target task) is limited w.r.t. the advantage of increasing the size of the training set. This also opens the question whether it is possible to design methods able to automatically detect the similarity between tasks and adapt the transfer process accordingly. In this paper, we investigate the transfer of samples in RL from a more theoretical perspective than previous works. The main contributions of this paper can be summarized as follows:
• Algorithmic contribution. We introduce three sample-transfer algorithms based on fitted Q-iteration [3]. The first algorithm (AST in Sec. 3) simply transfers all the source samples. We also design two adaptive methods (BAT and BTT in Sec. 4 and 5) whose objective is to solve the transfer tradeoff by identifying the best combination of source tasks.
• Theoretical contribution. We formalize the setting of transfer of samples and we derive a finite-sample analysis of AST which highlights the importance of the average MDP obtained by the combination of the source tasks. We also report the analysis for BAT which shows both the advantage of identifying the best combination of source tasks and the additional cost in terms of auxiliary samples needed to compute the similarity between tasks.
• Empirical contribution. We report results (in Sec. 6) on a simple chain problem which confirm the main theoretical findings and support the idea that sample transfer can significantly speed up the learning process and that adaptive methods are able to solve the transfer tradeoff and avoid negative transfer effects.
The proofs and additional experiments are available in [7].
2 Preliminaries
In this section we introduce the notation and the transfer problem considered in the rest of the paper. We define a discounted Markov decision process (MDP) as a tuple M = ⟨X, A, R, P, γ⟩ where the state space X is a bounded closed subset of the Euclidean space, A is a finite (|A| < ∞) action space, the reward function R : X × A → R is uniformly bounded by R_max, the transition kernel P is such that for all x ∈ X and a ∈ A, P(·|x, a) is a distribution over X, and γ ∈ (0, 1) is a discount factor. We denote by S(X × A) the set of probability measures over X × A and by B(X × A; V_max = R_max/(1 − γ)) the space of bounded measurable functions with domain X × A and bounded in [−V_max, V_max]. We define the optimal action-value function Q* as the unique fixed point of the optimal Bellman operator T : B(X × A; V_max) → B(X × A; V_max) defined as
(TQ)(x, a) = R(x, a) + γ ∫_X max_{a'∈A} Q(y, a') P(dy|x, a).
For any measure μ ∈ S(X × A) obtained from the combination of a distribution ρ ∈ S(X) and a uniform distribution over the discrete set A, and for a measurable function f : X × A → R, we define the L₂(μ)-norm of f as ‖f‖²_μ = (1/|A|) Σ_{a∈A} ∫_X f(x, a)² ρ(dx). The supremum norm of f is defined as ‖f‖_∞ = sup_{x∈X} |f(x)|. Finally, we define the standard L₂-norm for a vector α ∈ R^d as ‖α‖² = Σ_{i=1}^d α_i². We denote by φ(·,·) = (φ₁(·,·), ..., φ_d(·,·)) a feature vector with features φ_i : X × A → [−C, C], and by F = {f_α(·,·) = φ(·,·)ᵀα} the linear space of action-value functions spanned by the basis functions in φ.
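Since all the function classes below are linear in α, a minimal Python sketch of how f_α and the L₂(μ)-norm are evaluated may help fix the notation (the helper names and the Monte Carlo estimate are our own):

import numpy as np

def f_alpha(phi, alpha, x, a):
    """Evaluate f_alpha(x, a) = phi(x, a)^T alpha for a linear Q-function.
    phi(x, a) must return a length-d feature vector."""
    return phi(x, a) @ alpha

def empirical_l2_mu_norm(f, states, actions):
    """Monte Carlo estimate of ||f||_mu with x ~ rho and a uniform on A:
    ||f||_mu^2 = (1/|A|) * sum_a E_rho[f(x, a)^2]."""
    vals = np.array([[f(x, a) ** 2 for a in actions] for x in states])
    return np.sqrt(vals.mean())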
Given a set of state-action pairs {(X_l, A_l)}_{l=1}^L, let Φ = [φ(X₁, A₁)ᵀ; ...; φ(X_L, A_L)ᵀ] be the corresponding feature matrix. We define the orthogonal projection operator Π : B(X × A; V_max) → F as ΠQ = arg min_{f∈F} ‖Q − f‖_μ. Finally, by T(Q) we denote the truncation of a function Q to the range [−V_max, V_max].
We consider the transfer problem in which M tasks {M_m}_{m=1}^M are available and the objective is to learn the solution for the target task M₁ by transferring samples from the source tasks {M_m}_{m=2}^M. We define an assumption on how the training sets are generated.
Definition 1. (Random Tasks Design) An input set {(X_l, A_l)}_{l=1}^L is built with samples drawn from an arbitrary sampling distribution μ ∈ S(X × A), i.e. (X_l, A_l) ~ μ. For each task m, one transition and reward sample is generated in each of the state-action pairs in the input set, i.e. Y_l^m ~ P_m(·|X_l, A_l) and R_l^m = R_m(X_l, A_l). Finally, we define the random sequence {M_l}_{l=1}^L where the indexes M_l are drawn i.i.d. from a multinomial distribution with parameters (λ₁, ..., λ_M). The training set available to the learner is {(X_l, A_l, Y_l, R_l)}_{l=1}^L where Y_l = Y_{l,M_l} and R_l = R_{l,M_l}.
This is an assumption on how the samples are generated, but in practice a single realization of the samples and task indexes M_l is available. We consider the case in which λ₁ ≪ λ_m (m = 2, ..., M). This condition implies that (on average) the number of target samples is much less than the number of source samples, and it is usually not enough to learn an accurate solution for the target task. We will also consider the pure transfer case in which λ₁ = 0 (i.e., no target sample is available). Finally, we notice that Def. 1 implies the existence of a generative model for all the MDPs, since the state-action pairs are generated according to an arbitrary sampling distribution μ.
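A minimal Python sketch of the random tasks design of Definition 1; the environment interface (step, reward) and all names are our own assumptions, not from the paper:

import numpy as np

def random_tasks_design(sample_mu, envs, lam, L, rng):
    """Generate a transfer training set {(X_l, A_l, Y_l, R_l)}.

    sample_mu(rng) -> (x, a) drawn from mu; envs[m] exposes
    step(x, a, rng) -> Y ~ P_m(.|x, a) and reward(x, a) -> R_m(x, a);
    lam is the task-proportion vector (lam[0] = target task M_1).
    """
    data = []
    for _ in range(L):
        x, a = sample_mu(rng)
        m = rng.choice(len(envs), p=lam)   # task index M_l ~ Multinomial(lam)
        y = envs[m].step(x, a, rng)        # Y_l ~ P_m(.|x, a)
        r = envs[m].reward(x, a)           # R_l = R_m(x, a)
        data.append((x, a, y, r))
    return data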
of the multinomial distribuP PM tion in the definition of the random tasks design (i.e., R? = M m=1 ?m Rm , P ? = m=1 ?m Pm ). We also denote by T ? its optimal Bellman operator. In the random tasks design, the average MDP plays a crucial role since the implicit target function of the minimization of the empirical loss in e k?1 . At each iteration k, we prove the following performance bound for AST. Eq. 1 is indeed T ? Q Theorem 1. Let M be the number of tasks {Mm }M m=1 , with M1 the target task. Let the training set {(Xl , Al , Yl , Rl )}L l=1 be generated as in Def. 1, with a proportion vector ? = (?1 , . . . , ?M ). Let e k?1 ||? , then for any 0 < ? ? 1, Q b k (Eq. 1) satisfies e k?1 = arg inf f ?F ||f ? T1 Q f?k? = ?T1 Q q k?1 k k?1 e b e k?1 ) e ||? ? 4||f?k? ? T1 Q ||T (Q ) ? T1 Q ||? + 5 E? (Q s r   9 27(12Le2)2(d+1) 2 2 k . log + 32Vmax log + 24(Vmax + C||?? ||) L ? L ? e k?1 ) = k(T1 ? T ? )Q e k?1 k2 . with probability 1 ? ? (w.r.t. samples), where ||?i ||? ? C and E? (Q ? Remark 1 (Analysis of the bound). We first notice that the previous bound reduces (up to constants) to the standard bound for FQI when M = 1 [7]. The bound is composed by three main terms: (i) approximation error, (ii) estimation error, and (iii) transfer error. The approximation error ||f?k? ? e k?1 and it e k?1 ||? is the smallest error of functions in F in approximating the target function T1 Q T1 Q does not depend on the transfer algorithm. The estimation error (third and fourth terms in the bound) b k and it depends on the dimensionality d of the is due to the finite random samples used to learn Q function space and it decreases with the total number of samples L with the fast rate of linear spaces 3 p (O(d/L) instead of O( d/L)). Finally, the transfer error E? accounts for the difference between bk source and target tasks. In fact, samples from source tasks different from the target might bias Q k?1 e . It towards a wrong solution, thus resulting in a poor approximation of the target function T1 Q is interesting to notice that the transfer error depends on the difference between the target task and the average MDP M? obtained by taking a linear combination of the source tasks weighted by the parameters ?. This means that even when each of the source tasks is very different from the target, if there exists a suitable combination which is similar to the target task, then the transfer process is still likely to be effective. Furthermore, E? considers the difference in the result of the application e k?1 . As a result, when the two operators T1 and of the two Bellman operators to a given function Q T ? have the same reward functions, even if the transition distributions are different (e.g., the total e k?1 might still be variation ||P1 (?|x, a) ? P ? (?|x, a)||TV is large), their corresponding averages of Q R R ? ? e a )P1 (dy|x, a) similar to maxa? Q(y, e a )P ? (dy|x, a)). similar (i.e., maxa? Q(y, b ks be the solution obtained by solving one Remark 2 (Comparison to single-task learning). Let Q b k and Q b ks can iteration of FQI with only samples from the source task, the performance bounds of Q be written as (up to constants and logarithmic factors) r r p e k?1 ||? + (Vmax + C||?k ||) 1 + Vmax d + E? , e k?1 k? ? ||f?k ? T1 Q b k ) ? T1 Q kT (Q ? ? L L r r d e k?1 ||? + (Vmax + C||?k? ||) 1 + Vmax e k?1 k? ? ||f?k ? T1 Q b ks ) ? T1 Q , kT (Q ? N1 N1 with N1 = ?1 L (on average). Both bounds have the same approximation error. 
3.2 Propagation Finite-Sample Analysis

We now study how the previous error is propagated through iterations. Let ρ be the evaluation norm (i.e., in general different from the sampling distribution μ). We first report two assumptions.¹

Assumption 1. [9] Given ρ, μ, and an arbitrary sequence of policies {π_p}_{p≥1}, we assume that the future-state distribution ρP^1_{π_1} ··· P^1_{π_p} is absolutely continuous w.r.t. μ, and that c(p) = sup_{π_1···π_p} ||d(ρP^1_{π_1} ··· P^1_{π_p})/dμ||_∞ satisfies C_{ρ,μ} = (1 − γ)² Σ_{p≥1} p γ^{p−1} c(p) < ∞.

Assumption 2. Let G ∈ R^{d×d} be the Gram matrix with [G]_{ij} = ∫ φ_i(x, a) φ_j(x, a) μ(dx, da). We assume that its smallest eigenvalue ω is strictly positive (i.e., ω > 0).

Theorem 2. Let Assumptions 1 and 2 hold and the setting be as in Thm. 1. After K iterations, AST returns an action-value function Q̃^K whose corresponding greedy policy π_K satisfies

$$\|Q^* - Q^{\pi_K}\|_\rho \le \frac{2\gamma}{(1-\gamma)^{3/2}} \sqrt{C_{\rho,\mu}} \bigg[ 4 \sup_{g\in\mathcal{F}} \inf_{f\in\mathcal{F}} \|f - \mathcal{T}_1 g\|_\mu + 5 \sup_{\alpha} \|(\mathcal{T}_1 - \mathcal{T}_\lambda) T(f_\alpha)\|_\mu + 56\Big(V_{\max} + \frac{V_{\max}}{\sqrt{\omega}}\Big)\sqrt{\frac{2}{L}\log\frac{27K(12Le^2)^{2(d+1)}}{\delta}} + 32 V_{\max}\sqrt{\frac{2}{L}\log\frac{9K}{\delta}} + \frac{2V_{\max}}{\sqrt{C_{\rho,\mu}}}\, \gamma^{K/2} \bigg].$$

Remark (Analysis of the bound). The bound reported in the previous theorem displays few differences w.r.t. the single-iteration bound (see [7] for further discussion). The transfer error sup_α ||(T_1 − T_λ)T(f_α)||_μ characterizes the difference between the target and average Bellman operators through the space F. As a result, even MDPs with significantly different rewards and transitions might have a small transfer error because of the functions in F. This introduces a tradeoff in the design of F between a "large" enough space containing functions able to approximate T_1 Q (i.e., small approximation error) and a small function space where the Q-functions induced by T_1 and T_λ can be closer (i.e., small transfer error). This term also displays interesting similarities with the notion of discrepancy introduced in [8] for domain adaptation.

¹ We refer to [9] for a thorough explanation of the concentrability terms.

  Input: Space F = span{φ_i, 1 ≤ i ≤ d}, initial function Q̃^0 ∈ F, number of samples L
  Build the auxiliary sets {(X_s, A_s, R_{s,1}, ..., R_{s,M})}_{s=1}^S and {Y^t_{s,1}, ..., Y^t_{s,M}}_{t=1}^T for each s
  for k = 1, 2, ... do
    Compute λ̂^k = arg min_{λ∈Λ} Ê_λ(Q̃^{k−1})
    Run one iteration of AST (Fig. 1) using L samples generated according to λ̂^k
  end for
Figure 2: A pseudo-code for the Best Average Transfer (BAT) algorithm.
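A minimal Python sketch of the loop in Fig. 2 is given below. It assumes three hypothetical helpers supplied by the user: an estimator of Ê_λ (implementing Eq. (2) of the next section), a sampler that draws L training samples under Def. 1 with given proportions, and the AST iteration of Fig. 1; the crude random search over the simplex is ours, not the authors' optimizer.

```python
import numpy as np

def bat_loop(estimate_transfer_error, sample_with_proportions, ast_iteration,
             n_tasks, n_iters, alpha0, n_candidates=500, seed=0):
    """Sketch of the BAT loop (Fig. 2); the three callables are assumed helpers."""
    rng = np.random.default_rng(seed)
    alpha = alpha0
    for k in range(n_iters):
        # Crude minimization of E-hat over the simplex Lambda (lambda_1 = 0):
        # sample candidate proportion vectors from a flat Dirichlet
        cands = rng.dirichlet(np.ones(n_tasks - 1), size=n_candidates)
        cands = np.hstack([np.zeros((n_candidates, 1)), cands])
        errs = [estimate_transfer_error(alpha, lam) for lam in cands]
        lam_hat = cands[int(np.argmin(errs))]
        # One AST iteration (Fig. 1) on L samples drawn according to lam_hat
        phi, rewards, phi_next = sample_with_proportions(lam_hat)
        alpha = ast_iteration(phi, rewards, phi_next, alpha)
    return alpha
```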
4 Best Average Transfer Algorithm

As discussed in the previous section, the transfer error E_λ plays a crucial role in the comparison with single-task learning. In particular, E_λ is related to the proportions λ inducing the average Bellman operator T_λ, which defines the target function approximated at each iteration. We now consider the case where the designer has direct access to the source tasks (i.e., it is possible to choose how many samples to draw from each source) and can define an arbitrary proportion λ. In particular, we propose a method that adapts λ at each iteration so as to minimize the transfer error E_λ. We consider the case in which L is fixed as a parameter of the algorithm and λ_1 = 0 (i.e., no target samples are used in the learning training set). At each iteration k, we need to estimate the quantity E_λ(Q̃^{k−1}). We assume that additional samples are available for each task. Let {(X_s, A_s, R_{s,1}, ..., R_{s,M})}_{s=1}^S be an auxiliary training set where (X_s, A_s) ~ μ and R_{s,m} = R_m(X_s, A_s). In each state-action pair, we generate T next states for each task, that is, Y^t_{s,m} ~ P_m(·|X_s, A_s) with t = 1, ..., T. Thus, for any function Q we define the estimated transfer error as

$$\hat{E}_\lambda(Q) = \frac{1}{S} \sum_{s=1}^{S} \bigg[ R_{s,1} - \sum_{m=2}^{M} \lambda_m R_{s,m} + \frac{\gamma}{T} \sum_{t=1}^{T} \Big( \max_{a'} Q(Y^t_{s,1}, a') - \sum_{m=2}^{M} \lambda_m \max_{a'} Q(Y^t_{s,m}, a') \Big) \bigg]^2. \qquad (2)$$

At each iteration, the Best Average Transfer (BAT) algorithm (Fig. 2) first computes λ̂^k = arg min_{λ∈Λ} Ê_λ(Q̃^{k−1}), where Λ is the (M−2)-dimensional simplex, and then runs an iteration of AST with samples generated according to the proportions λ̂^k. We denote by λ_*^k = arg min_{λ∈Λ} E_λ(Q̃^{k−1}) the best combination at iteration k.

Theorem 3. Let Q̃^{k−1} be the function returned at the previous iteration and Q̂^k_BAT the function returned by the BAT algorithm (Fig. 2). Then for any 0 < δ ≤ 1, Q̂^k_BAT satisfies

$$\|T(\hat{Q}^k_{\mathrm{BAT}}) - \mathcal{T}_1 \tilde{Q}^{k-1}\|_\mu \le 4\|f_{\alpha_*^k} - \mathcal{T}_1 \tilde{Q}^{k-1}\|_\mu + 5\sqrt{E_{\lambda_*^k}(\tilde{Q}^{k-1})} + 5\sqrt{2}\, V_{\max} \bigg( \frac{(M-2)\log(8S/\delta)}{S} \bigg)^{1/4} + 20 V_{\max} \sqrt{\frac{\log(8SM/\delta)}{T}} + 24\big(V_{\max} + C\|\alpha_*^k\|\big)\sqrt{\frac{2}{L}\log\frac{54(12Le^2)^{2(d+1)}}{\delta}} + 32 V_{\max}\sqrt{\frac{2}{L}\log\frac{18}{\delta}}$$

with probability 1 − δ.

Remark 1 (Comparison with AST and single-task learning). The bound shows that BAT outperforms AST whenever the advantage of achieving the smallest possible transfer error E_{λ_*^k} is larger than the additional estimation error due to the auxiliary training set. When compared to single-task learning, BAT performs better whenever the best combination of source tasks has a small transfer error and the additional auxiliary estimation error is smaller than the estimation error in single-task learning. In particular, this means that O((M/S)^{1/4}) + O((1/T)^{1/2}) should be smaller than O((d/N)^{1/2}) (with N the number of target samples). The number of calls to the generative model for BAT is ST. In order to have a fair comparison with single-task learning, we set S = N^{2/3} and T = N^{1/3}; we then obtain the condition M ≤ d² N^{−4/3}, which constrains the number of tasks to be smaller than the dimensionality of F. We remark that the dependency of the auxiliary estimation error on M is due to the fact that the λ vectors (over which the transfer error is optimized) belong to the simplex Λ of dimensionality M−2. Hence, the previous condition suggests that, in general, adaptive transfer methods may significantly improve the transfer performance (i.e., in this case achieve a smaller transfer error) at the cost of additional sources of error which depend on the dimensionality of the search space used to adapt the transfer process (in this case Λ).
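The estimator in Eq. (2) is straightforward to vectorize. Below is a sketch (names are ours) assuming the rewards and next-state action-values have already been evaluated on the auxiliary set.

```python
import numpy as np

def estimated_transfer_error(lam, rewards, q_next_max, gamma):
    """E-hat_lambda(Q) of Eq. (2), computed on the auxiliary set.

    lam        : (M,) proportions with lam[0] = lambda_1 = 0
    rewards    : (S, M) array, rewards[s, m-1] = R_{s,m}
    q_next_max : (S, T, M) array, q_next_max[s, t, m-1] = max_a' Q(Y^t_{s,m}, a')
    """
    # R_{s,1} - sum_{m>=2} lam_m R_{s,m}
    reward_gap = rewards[:, 0] - rewards[:, 1:] @ lam[1:]
    # (gamma/T) sum_t ( max_a' Q(Y^t_{s,1},a') - sum_{m>=2} lam_m max_a' Q(Y^t_{s,m},a') )
    value_gap = gamma * (q_next_max[:, :, 0] - q_next_max[:, :, 1:] @ lam[1:]).mean(axis=1)
    return float(np.mean((reward_gap + value_gap) ** 2))
```

This function is exactly the objective minimized over Λ at each BAT iteration (Fig. 2).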
5 Best Transfer Trade-off Algorithm

The previous algorithm is proved to successfully estimate the combination of source tasks which best approximates the Bellman operator of the target task. Nonetheless, BAT relies on the implicit assumption that L samples can always be generated from any source task,² and it cannot be applied to the case where the number of source samples is limited. Here we consider the more challenging case where the designer still has access to the source tasks, but only a limited number of samples is available in each of them. In this case, an adaptive transfer algorithm should solve a tradeoff between selecting as many samples as possible, so as to reduce the estimation error, and choosing the proportion of source samples properly, so as to control the transfer error. The solution of this tradeoff may be non-trivial: source tasks similar to the target task but with few samples may be discarded in favor of a pool of tasks whose average only roughly approximates the target task but which can provide a larger number of samples.

Here we introduce the Best Tradeoff Transfer (BTT) algorithm. Similar to BAT, it relies on an auxiliary training set to solve the tradeoff. We denote by N_m the maximum number of samples available for source task m. Let β ∈ [0, 1]^M be a weight vector, where β_m is the fraction of samples from task m used in the transfer process. We denote by E_λ (resp. Ê_λ) the transfer error (resp. the estimated transfer error) with proportions λ, where λ_m = (β_m N_m) / Σ_{m'} β_{m'} N_{m'}. At each iteration k, BTT returns the vector β which optimizes the tradeoff between estimation and transfer errors, that is,

$$\hat{\beta}^k = \arg\min_{\beta \in [0,1]^M} \bigg[ \hat{E}_\lambda(\tilde{Q}^{k-1}) + \tau \sqrt{\frac{d}{\sum_{m=1}^{M} \beta_m N_m}} \bigg], \qquad (3)$$

where τ is a parameter. While the first term accounts for the transfer error induced by β, the second term is the estimation error due to the total amount of samples used by the algorithm. Unlike AST and BAT, BTT is a heuristic algorithm motivated by the bound in Thm. 1, and we do not provide any theoretical guarantee for it. The main technical difficulty is that the setting considered here does not match the random tasks design assumption (see Def. 1), since the number of source samples is constrained by N_m. As a result, given a proportion λ, we cannot assume samples to be drawn at random according to a multinomial with parameters λ. Without this assumption, it is an open question whether a bound similar to those for AST and BAT could be derived.

² If λ_m = 1 for task m, then the algorithm would generate all the L training samples from task m.
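A sketch of the BTT objective in Eq. (3), again optimized by crude random search over β ∈ [0, 1]^M (the search strategy and helper names are our assumptions, not the authors' implementation):

```python
import numpy as np

def btt_objective(beta, n_max, est_transfer_error, d, tau):
    """Objective of Eq. (3): estimated transfer error plus estimation penalty.

    beta : (M,) fractions of the available samples used from each task
    n_max: (M,) maximum numbers of samples N_m per task
    est_transfer_error: callable lam -> E-hat_lambda(Q-tilde^{k-1})
    """
    n_used = beta * n_max
    lam = n_used / n_used.sum()           # induced proportions lambda_m
    return est_transfer_error(lam) + tau * np.sqrt(d / n_used.sum())

def btt_select(n_max, est_transfer_error, d, tau, n_candidates=2000, seed=0):
    rng = np.random.default_rng(seed)
    betas = rng.uniform(0.0, 1.0, size=(n_candidates, len(n_max)))
    scores = [btt_objective(b, n_max, est_transfer_error, d, tau) for b in betas]
    return betas[int(np.argmin(scores))]
```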
6 Experiments

In this section, we report preliminary experimental results for the transfer algorithms. The main objective is to illustrate the functioning of the algorithms and to compare their results with the theoretical findings. We consider a continuous extension of the chain walk problem proposed in [4]. The state is described by a continuous variable x and two actions are available: one that moves toward the left and one that moves toward the right. With probability p, each action makes a step of length l, affected by a noise ε, in the intended direction, while with probability 1−p it moves in the opposite direction. In the target task M_1, the state-transition model is defined by the following parameters: p = 0.9, l = 1, and ε uniform in the interval [−0.1, 0.1]. The reward function provides +1 when the system state reaches the regions [−11, −9] and [9, 11], and 0 elsewhere. Furthermore, to evaluate the performance of the transfer algorithms previously described, we considered eight source tasks {M_2, ..., M_9} whose state-transition model parameters and reward functions are reported in Tab. 1 and 2.

Table 1: Parameters for the first set of tasks
  task | p   | l | ε   | Reward
  M1   | 0.9 | 1 | 0.1 | +1 in [−11, −9] ∪ [9, 11]
  M2   | 0.9 | 2 | 0.1 | −5 in [−11, −9] ∪ [9, 11]
  M3   | 0.9 | 1 | 0.1 | +5 in [−11, −9] ∪ [9, 11]
  M4   | 0.9 | 1 | 0.1 | +1 in [−6, −4] ∪ [4, 6]
  M5   | 0.9 | 1 | 0.1 | −1 in [−6, −4] ∪ [4, 6]

Table 2: Parameters for the second set of tasks
  task | p   | l | ε   | Reward
  M1   | 0.9 | 1 | 0.1 | +1 in [−11, −9] ∪ [9, 11]
  M6   | 0.7 | 1 | 0.1 | +1 in [−11, −9] ∪ [9, 11]
  M7   | 0.1 | 1 | 0.1 | +1 in [−11, −9] ∪ [9, 11]
  M8   | 0.9 | 1 | 0.1 | −5 in [−11, −9] ∪ [9, 11]
  M9   | 0.7 | 1 | 0.5 | +5 in [−11, −9] ∪ [9, 11]

To approximate the Q-functions, we use a linear combination of 20 radial basis functions. In particular, for each action we consider 9 Gaussians with means uniformly spread in the interval [−20, 20] and variance equal to 16, plus a constant feature. The number of iterations of the FQI algorithm was empirically fixed to 13. Samples are collected starting from the state x_0 = 0 with actions chosen uniformly at random. All the results are averaged over 100 runs and we report standard-deviation error bars.

We first consider the pure transfer problem, where no target samples are actually used in the learning training set (i.e., λ_1 = 0). The objective is to study the impact of the transfer error due to the use of source samples and the effectiveness of BAT in finding a suitable combination of source tasks. The left plot in Fig. 3 compares the performance of FQI with and without the transfer of samples from the first four tasks listed in Tab. 1. In the case of single-task learning, the number of target samples refers to the samples used at learning time, while for BAT it represents the size S of the auxiliary training set used to estimate the transfer error. Thus, while in single-task learning the performance increases with the target samples, in BAT they only make the estimation of E_λ more accurate. The number of source samples added to the auxiliary set for each target sample was empirically fixed to one (T = 1).

Figure 3: Transfer from M_2, M_3, M_4, M_5. Left: average reward per step vs. number of target samples for single-task learning, AST with L = 10000, and BAT with L = 1000, 5000, 10000. Right: source task probabilities (λ_2, λ_3, λ_4, λ_5) estimated by the BAT algorithm as a function of FQI iterations.

We first run AST with L = 10000 and λ_2 = λ_3 = λ_4 = λ_5 = 0.25 (which on average corresponds to 2500 samples from each source). As can be noticed by looking at the models in Tab. 1, this combination is very different from the target model and AST does not learn any good policy. On the other hand, even with a small set of auxiliary target samples, BAT is able to learn good policies. This result is due to the existence of linear combinations of the source tasks which closely approximate the target task M_1 at each iteration of FQI. An example of the proportion coefficients computed at each iteration of BAT is shown in the right plot of Fig. 3. At the first iteration, FQI produces an approximation of the reward function. Given the first four source tasks, BAT finds a combination (λ ≈ (0.2, 0.4, 0.2, 0.2)) that produces the same reward function as R_1.
However, after a few FQI iterations, such a combination is no longer able to accurately approximate the functions T_1 Q̃. In fact, the state-transition model of task M_2 is different from all the other ones (the step length is doubled). As a result, the coefficient λ_2 drops to zero, while a new combination among the other source tasks is found. Note that BAT significantly improves over single-task learning, in particular when very few target samples are available.

In the general case, the target task cannot be obtained as a combination of the source tasks, as happens when considering the second set of source tasks (M_6, M_7, M_8, M_9). The impact of this situation on the learning performance of BAT is shown in the left plot of Fig. 4. Note that, when few target samples are available, the transfer of samples from a combination of the source tasks using the BAT algorithm is still beneficial. On the other hand, the performance attainable by BAT is bounded by the transfer error corresponding to the best source task combination (which in this case is large). As a result, single-task FQI quickly achieves a better performance.

Figure 4: Transfer from M_6, M_7, M_8, M_9 (average reward per step vs. number of target samples). Left: comparison between single-task learning and BAT with L = 1000, 5000, 10000. Right: comparison between single-task learning, BAT with L = 1000, 10000 in addition to the target samples, and BTT (τ = 0.75) with 5000 and 10000 samples for each source task. To improve readability, the plot is truncated at 5000 target samples.

The results presented so far for the BAT transfer algorithm assume that FQI is trained only with the samples obtained through combinations of source tasks. Since a number of target samples is already available in the auxiliary training set, a trivial improvement is to include them in the training set together with the source samples (selected according to the proportions computed by BAT). As shown in the plot on the right side of Fig. 4, this leads to a significant improvement. From the behavior of BAT it is clear that, with a small set of target samples, it is better to transfer as many samples as possible from the source tasks, while as the number of target samples increases, it is preferable to reduce the number of samples obtained from a combination of source tasks that does not actually match the target task. In fact, for L = 10000, BAT has a much better performance at the beginning, but it is then outperformed by single-task learning. On the other hand, for L = 1000 the initial advantage is small, but the performance remains close to single-task FQI for a large number of target samples. This experiment highlights the tradeoff between the need for samples to reduce the estimation error and the resulting transfer error when the target task cannot be expressed as a combination of source tasks (see Sec. 5). The BTT algorithm provides a principled way to address this tradeoff and, as shown by the right plot in Fig. 4, it exploits the advantage of transferring source samples when few target samples are available, and it reduces the weight of the source tasks (so as to avoid large transfer errors) when samples from the target task are sufficient. It is interesting to notice that increasing the number of samples available for each source task from 5000 to 10000 improves the performance in the first part of the graph, while leaving the final performance unchanged. This is due to the capability of the BTT algorithm to avoid the transfer of source samples when there is no need for them, thus avoiding negative transfer effects.
7 Conclusions

In this paper, we formalized and studied the sample-transfer problem. We first derived a finite-sample analysis of the performance of a simple transfer algorithm which includes all the source samples in the training set used to solve a given target task. To the best of our knowledge, this is the first theoretical result for a transfer algorithm in RL showing the potential benefit of transfer over single-task learning. When the designer has direct access to the source tasks, we introduced an adaptive algorithm which selects the proportions of source tasks so as to minimize the bias due to the use of source samples. Finally, we considered a more challenging setting where the number of samples available in each source task is limited and a tradeoff between the amount of transferred samples and the similarity between source and target tasks must be solved. For this setting, we proposed a principled adaptive algorithm. Finally, we reported a detailed experimental analysis on a simple problem which confirms and supports the theoretical findings.

Acknowledgments This work was supported by the French National Research Agency through the project EXPLO-RA n° ANR-08-COSI-004, by the Ministry of Higher Education and Research, Nord-Pas de Calais Regional Council and FEDER through the "contrat de projets état région 2007-2013", and by the PASCAL2 European Network of Excellence. The research leading to these results has also received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 231495.

References
[1] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Vaughan. A theory of learning from different domains. Machine Learning, 79:151-175, 2010.
[2] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. Journal of Machine Learning Research, 9:1757-1774, 2008.
[3] Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503-556, 2005.
[4] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107-1149, 2003.
[5] A. Lazaric, M. Restelli, and A. Bonarini. Transfer of samples in batch reinforcement learning. In Proceedings of the Twenty-Fifth Annual International Conference on Machine Learning (ICML'08), pages 544-551, 2008.
[6] Alessandro Lazaric. Knowledge Transfer in Reinforcement Learning. PhD thesis, Politecnico di Milano, 2008.
[7] Alessandro Lazaric and Marcello Restelli. Transfer from multiple MDPs. Technical Report 00618037, INRIA, 2011.
[8] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In Proceedings of the 22nd Conference on Learning Theory (COLT'09), 2009.
[9] R. Munos and Cs. Szepesvári. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9:815-857, 2008.
[10] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[11] Matthew E. Taylor, Nicholas K. Jong, and Peter Stone. Transferring instances for model-based reinforcement learning. In Proceedings of the European Conference on Machine Learning (ECML'08), pages 488-505, 2008.
[12] Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(1):1633-1685, 2009.
Continuous-Time Regression Models for Longitudinal Networks

Duy Q. Vu, Department of Statistics, Pennsylvania State University, University Park, PA 16802, [email protected]
Arthur U. Asuncion,* Department of Computer Science, University of California, Irvine, Irvine, CA 92697, [email protected]
David R. Hunter, Department of Statistics, Pennsylvania State University, University Park, PA 16802, [email protected]
Padhraic Smyth, Department of Computer Science, University of California, Irvine, Irvine, CA 92697, [email protected]

Abstract

The development of statistical models for continuous-time longitudinal network data is of increasing interest in machine learning and social science. Leveraging ideas from survival and event history analysis, we introduce a continuous-time regression modeling framework for network event data that can incorporate both time-dependent network statistics and time-varying regression coefficients. We also develop an efficient inference scheme that allows our approach to scale to large networks. On synthetic and real-world data, empirical results demonstrate that the proposed inference approach can accurately estimate the coefficients of the regression model, which is useful for interpreting the evolution of the network; furthermore, the learned model has systematically better predictive performance compared to standard baseline methods.

1 Introduction

The analysis of the structure and evolution of network data is an increasingly important task in a variety of disciplines, including biology and engineering. The emergence and growth of large-scale online social networks also provides motivation for the development of longitudinal models for networks over time. While in many cases the data for an evolving network are recorded on a continuous time scale, a common approach is to analyze "snapshot" data (also known as collapsed panel data), where multiple cross-sectional snapshots of the network are recorded at discrete time points. Various statistical frameworks have been previously proposed for discrete snapshot data, including dynamic versions of exponential random graph models [1, 2, 3] as well as dynamic block models and matrix factorization methods [4, 5]. In contrast, there is relatively little work to date on continuous-time models for large-scale longitudinal networks.

In this paper, we propose a general regression-based modeling framework for continuous-time network event data. Our methods are inspired by survival and event history analysis [6, 7]; specifically, we employ multivariate counting processes to model the edge dynamics of the network. Building on recent work in this context [8, 9], we use both multiplicative and additive intensity functions that allow for the incorporation of arbitrary time-dependent network statistics; furthermore, we consider time-varying regression coefficients for the additive approach. The additive form in particular enables us to develop an efficient online inference scheme for estimating the time-varying coefficients of the model, allowing the approach to scale to large networks. On synthetic and real-world data, we show that the proposed scheme accurately estimates these coefficients and that the learned model is useful for both interpreting the evolution of the network and predicting future network events.

* Current affiliation: Google Inc.
The specific contributions of this paper are: (1) we formulate a continuous-time regression model for longitudinal network data with time-dependent statistics (and time-varying coefficients for the additive form); (2) we develop an accurate and efficient inference scheme for estimating the regression coefficients; and (3) we perform an experimental analysis on real-world longitudinal networks and demonstrate that the proposed framework is useful in terms of prediction and interpretability. The next section introduces the general regression framework, and the associated inference scheme is described in detail in Section 3. Section 4 describes the experimental results on synthetic and real-world networks. Finally, we discuss related work and conclude with future research directions.

2 Regression models for continuous-time network data

Below we introduce multiplicative and additive regression models for the edge formation process in a longitudinal network. We also describe non-recurrent event models and give examples of time-dependent statistics in this context.

2.1 General framework

Assume in our network that nodes arrive according to some stochastic process and directed edges among these nodes are created over time. Given the ordered pair (i, j) of nodes in the network at time t, let N_ij(t) be a counting process denoting the number of edges from i to j up to time t. In this paper, each N_ij(t) will equal zero or one, though this can be generalized. Combining the individual counting processes of all potential edges gives a multivariate counting process N(t) = (N_ij(t) : i, j ∈ {1, ..., n}, i ≠ j); we make no assumption about the independence of individual edge counting processes. (See [7] for an overview of counting processes.) We do not consider an edge dissolution process in this paper, although in theory it is possible to do so by placing a second counting process on each edge for dissolution events. (See [10, 3] for different examples of formation-dissolution process models.)

As proposed in [9], we model the multivariate counting process via the Doob-Meyer decomposition [7],

$$\mathbf{N}(t) = \int_0^t \boldsymbol{\lambda}(s)\, ds + \mathbf{M}(t), \qquad (1)$$

where essentially λ(t) and M(t) may be viewed as the (deterministic) signal and (martingale) noise, respectively. To model the so-called intensity process λ(t), we denote the entire past of the network, up to but not including time t, by H_{t−} and consider for each potential directed edge (i, j) two possible intensity forms, the multiplicative Cox and the additive Aalen functions [7], respectively:

$$\lambda_{ij}(t \mid H_{t-}) = Y_{ij}(t)\, \lambda_0(t)\, \exp\!\big( \beta^T s(i, j, t) \big); \qquad (2)$$
$$\lambda_{ij}(t \mid H_{t-}) = Y_{ij}(t)\, \big( \beta_0(t) + \beta(t)^T s(i, j, t) \big), \qquad (3)$$

where the "at risk" indicator function Y_ij(t) equals one if and only if (i, j) could form an edge at time t, a concept whose interpretation is determined by the context (e.g., see Section 2.2). In equations (2) and (3), s(i, j, t) is a vector of p statistics for the directed edge (i, j) constructed from H_{t−}; examples of these statistics are given in Section 2.2. In each of the two models, the intensity process depends on a linear combination of the coefficients β, which can be time-varying in the additive Aalen formulation. When all elements of s_k(i, j, t) equal zero, we obtain the baseline hazards λ_0(t) and β_0(t).

The two intensity forms above, the Cox and Aalen, each have their respective strengths (e.g., see [7, chapter 4]). In particular, the coefficients of the Aalen model are quite easy to estimate via linear regression, unlike those of the Cox model. We leverage this computational advantage to develop an efficient inference algorithm for the Aalen model later in this paper. On the other hand, the Cox model forces the hazard function to be non-negative, while the Aalen model does not; however, in our experiments on both simulated and real-world data we did not encounter any issues with negative hazard functions when using the Aalen model.
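As a concrete illustration, the two intensity forms in (2) and (3) can be evaluated for a single dyad as follows (a minimal sketch; the function and variable names are ours):

```python
import numpy as np

def cox_intensity(at_risk, baseline, beta, stats):
    """Multiplicative Cox intensity of Eq. (2) for one dyad (i, j) at time t.

    at_risk : 0/1 indicator Y_ij(t)
    baseline: lambda_0(t)
    beta    : (p,) coefficient vector
    stats   : (p,) statistics s(i, j, t)
    """
    return at_risk * baseline * np.exp(beta @ stats)

def aalen_intensity(at_risk, beta0_t, beta_t, stats):
    """Additive Aalen intensity of Eq. (3); beta0_t and beta_t are the
    time-varying baseline and coefficient vector evaluated at time t."""
    return at_risk * (beta0_t + beta_t @ stats)
```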
2.2 Non-recurrent event models for network formation processes

If t_i^arr and t_j^arr are the arrival times of nodes i and j, then the risk indicator of equations (2) and (3) is Y_ij(t) = I( max(t_i^arr, t_j^arr) < t ≤ t_ij^e ). The time t_ij^e of directed edge (i, j) is taken to be +∞ if the edge is never formed during the observation time. The reason for the upper bound t_ij^e is that the counting process is non-recurrent; i.e., formation of an edge means that it can never occur again.

The network statistics s(i, j, t) of equations (2) and (3), corresponding to the ordered pair (i, j), can be time-invariant (such as gender match) or time-dependent (such as the number of two-paths from i to j just before time t). Since it has been found empirically that most new edges in social networks are created between nodes separated by two hops [11], we limit our statistics to the following (a code sketch computing them appears after this list):

1. Out-degree of sender i: s_1(i, j, t) = Σ_{h∈V, h≠i} N_ih(t−)
2. In-degree of sender i: s_2(i, j, t) = Σ_{h∈V, h≠i} N_hi(t−)
3. Out-degree of receiver j: s_3(i, j, t) = Σ_{h∈V, h≠j} N_jh(t−)
4. In-degree of receiver j: s_4(i, j, t) = Σ_{h∈V, h≠j} N_hj(t−)
5. Reciprocity: s_5(i, j, t) = N_ji(t−)
6. Transitivity: s_6(i, j, t) = Σ_{h∈V, h≠i,j} N_ih(t−) N_hj(t−)
7. Shared contactees: s_7(i, j, t) = Σ_{h∈V, h≠i,j} N_ih(t−) N_jh(t−)
8. Triangle closure: s_8(i, j, t) = Σ_{h∈V, h≠i,j} N_hi(t−) N_jh(t−)
9. Shared contacters: s_9(i, j, t) = Σ_{h∈V, h≠i,j} N_hi(t−) N_hj(t−)

Here N_ji(t−) denotes the value of the counting process for the ordered pair (j, i) just before time t. While this paper focuses on the non-recurrent setting for simplicity, one can also develop recurrent models in this framework by capturing an alternative set of statistics specialized for the recurrent case [8, 12, 9]. Such models are useful for data where interaction edges occur multiple times (e.g., email data).
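The sketch below computes the nine statistics for a single dyad from the current 0/1 adjacency matrix (our naming; an O(n) computation per dyad, which is what the caching schemes of the next section exploit):

```python
import numpy as np

def dyad_statistics(adj, i, j):
    """Compute s(i, j, t) of Section 2.2 from the 0/1 adjacency matrix adj,
    where adj[a, b] = N_ab(t-) and the diagonal is zero (no self-loops)."""
    out_i, in_i = adj[i].sum(), adj[:, i].sum()
    out_j, in_j = adj[j].sum(), adj[:, j].sum()
    mask = np.ones(adj.shape[0], dtype=bool)
    mask[[i, j]] = False                      # exclude h = i and h = j
    return np.array([
        out_i, in_i, out_j, in_j,
        adj[j, i],                            # reciprocity
        (adj[i] * adj[:, j])[mask].sum(),     # transitivity: i->h and h->j
        (adj[i] * adj[j])[mask].sum(),        # shared contactees: i->h and j->h
        (adj[:, i] * adj[j])[mask].sum(),     # triangle closure: h->i and j->h
        (adj[:, i] * adj[:, j])[mask].sum(),  # shared contacters: h->i and h->j
    ], dtype=float)
```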
3 Inference techniques

In this section, we describe algorithms for estimating the coefficients of the multiplicative Cox and additive Aalen models. We also discuss an efficient online inference technique for the Aalen model.

3.1 Estimation for the Cox model

Recent work has posited Cox models similar to (2) with the goal of estimating general network effects [8, 12] or citation network effects [9]. Typically, λ_0(t) is considered a nuisance parameter, and estimation for β proceeds by maximization of the so-called partial likelihood of Cox [13]:

$$L(\beta) = \prod_{e=1}^{m} \frac{\exp\!\big( \beta^T s(i_e, j_e, t_e) \big)}{\sum_{i=1}^{n} \sum_{j \ne i} Y_{ij}(t_e) \exp\!\big( \beta^T s(i, j, t_e) \big)}, \qquad (4)$$

where m is the number of edge formation events, and t_e, i_e, and j_e are the time, sender, and receiver of the e-th event. In this paper, maximization is performed via the Newton-Raphson algorithm. The covariance matrix of β̂ is estimated as the inverse of the negative Hessian matrix at the last iteration. We use the caching method of [9] to compute the likelihood, the score vector, and the Hessian matrix more efficiently. We illustrate this method through the computation of the likelihood, where the most expensive computation is the denominator

$$\bar{\lambda}(t_e) = \sum_{i=1}^{n} \sum_{j \ne i} Y_{ij}(t_e) \exp\!\big( \beta^T s(i, j, t_e) \big). \qquad (5)$$

For models such as the one in Section 2.2, a naive update for λ̄(t_e) needs O(pn²) operations, where n is the current number of nodes. A naive calculation of log L(β) needs O(mpn²) operations (where m is the number of edge events), which is costly since m and n may be large. Calculations of the score vector and Hessian matrix are similar, though they involve higher exponents of p.

Alternatively, as in [9], we may simply write λ̄(t_e) = λ̄(t_{e−1}) + Δλ̄(t_e), where Δλ̄(t_e) entails all of the possible changes that occur during the time interval [t_{e−1}, t_e). Since we assume in this paper that edges do not dissolve, it is necessary to keep track only of the group of edges whose covariates change during this interval, which we call U_{e−1}, and those that first become at risk during this interval, which we call C_{e−1}. These groups of edges may be cached in memory during an initialization step; then, subsequent calculations of Δλ̄(t_e) are simple functions of the values of s(i, j, t_{e−1}) and s(i, j, t_e) for (i, j) in these two groups (for C_{e−1}, only the time-t_e statistic is relevant). The number of edges cached at each time step tends to be small, generally O(n), because our network statistics s are limited to those based on node degrees and two-paths. This leads to substantial computational savings; since we must still initialize λ̄(t_1), the total computational complexity of each Newton-Raphson iteration is O(p²n² + m(p²n + p³)).

3.2 Estimation for the Aalen model

Inference in model (3) proceeds not for the β_k parameters directly, but rather for their time-integrals

$$B_k(t) = \int_0^t \beta_k(s)\, ds. \qquad (6)$$

The reason for this is that B(t) = [B_1(t), ..., B_p(t)] may be estimated straightforwardly using a procedure akin to simple least squares [7]: first, let us impose some ordering on the n(n−1) possible ordered pairs (i, j) of nodes. Take W(t) to be the n(n−1) × p matrix whose (i, j)-th row equals Y_ij(t) s(i, j, t)^T. Then

$$\hat{B}(t) = \int_0^t J(s)\, W^-(s)\, d\mathbf{N}(s) = \sum_{t_e \le t} J(t_e)\, W^-(t_e)\, \Delta \mathbf{N}(t_e) \qquad (7)$$

is the estimator of B(t), where the multivariate counting process N(t_e) uses the same ordering of its n(n−1) entries as the W(t) matrix,

$$W^-(t) = \big( W(t)^T W(t) \big)^{-1} W(t)^T,$$

and J(t) is the indicator that W(t) has full column rank, where we take J(t)W^−(t) = 0 whenever W(t) does not have full column rank. As with typical least squares, a covariance matrix for the estimates B̂(t) may also be estimated [7]; we give a formula for this matrix in equation (11). If estimates of β_k(t) are desired for the sake of interpretability, a kernel smoothing method may be used:

$$\hat{\beta}_k(t) = \frac{1}{b} \sum_e K\Big( \frac{t - t_e}{b} \Big)\, \Delta \hat{B}_k(t_e), \qquad (8)$$

where b is the bandwidth parameter, ΔB̂_k(t_e) = B̂_k(t_e) − B̂_k(t_{e−1}), and K is a bounded kernel function with compact support [−1, 1], such as the Epanechnikov kernel.
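A minimal sketch of the batch estimator (7) and the smoother (8) is given below; since ΔN(t_e) has a single nonzero entry, each increment reduces to one linear solve (interfaces and names are our assumptions):

```python
import numpy as np

def aalen_increments(W_list, event_rows):
    """Least-squares increments of Eq. (7): for each event e, returns
    W^-(t_e) dN(t_e), where W_list[e] is the at-risk design matrix at t_e
    and event_rows[e] is the row index of the dyad that formed an edge."""
    increments = []
    for W, r in zip(W_list, event_rows):
        # W^-(t_e) dN(t_e) is the solution of (W^T W) x = W(t_e)^T e_r
        A = W.T @ W
        increments.append(np.linalg.solve(A, W[r]))
    return np.array(increments)              # shape (m, p): the Delta B-hat's

def smoothed_beta(times, increments, t, bandwidth):
    """Kernel-smoothed beta-hat(t) of Eq. (8) with an Epanechnikov kernel."""
    u = (t - np.asarray(times)) / bandwidth
    k = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
    return (k[:, None] * increments).sum(axis=0) / bandwidth
```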
3.3 Online inference for the Aalen model

Similar to the caching method for the Cox model in Section 3.1, it is possible to streamline the computations for estimating the integrated Aalen model coefficients B(t). First, we rewrite (7) as

$$\hat{B}(t) = \sum_{t_e \le t} J(t_e) \big( W(t_e)^T W(t_e) \big)^{-1} W(t_e)^T \Delta \mathbf{N}(t_e) = \sum_{t_e \le t} A^{-1}(t_e)\, W(t_e)^T \Delta \mathbf{N}(t_e), \qquad (9)$$

where A(t_e) = W(t_e)^T W(t_e) and J(t_e) is omitted because, for large network data sets and for reasonable choices of starting observation times, the covariate matrix is always of full rank. The computation of W(t_e)^T ΔN(t_e) is simple because ΔN(t_e) consists of all zeros except for a single entry equal to one. The most expensive computation is to update the (p+1) × (p+1) matrix A(t_e) at every event time t_e; inverting A(t_e) is not expensive since p is relatively small. Using U_{e−1} and C_{e−1} as in Section 3.1, the component (k, l) of the matrix A(t_e) corresponding to covariates k and l can be written as A_kl(t_e) = A_kl(t_{e−1}) + ΔA_kl(t_{e−1}), where

$$\Delta A_{kl}(t_{e-1}) = - \sum_{(i,j) \in U_{e-1}} W_{ijk}(t_{e-1}) W_{ijl}(t_{e-1}) + \sum_{(i,j) \in U_{e-1} \cup C_{e-1}} W_{ijk}(t_e) W_{ijl}(t_e). \qquad (10)$$

For models such as the one presented in Section 2.2, if n is the current number of nodes, the cost of naively calculating A_kl(t_e) by iterating through all "at-risk" edges is nearly n². As in Section 3.1, the cost will be O(n) if we instead use caching together with equation (10). In other cases, there may be restrictions on the set of edges at risk at a particular time. Here the computational burden of the naive calculation can be substantially smaller than O(n²); yet it is generally the case that using (10) will still provide a substantial reduction in computing effort. Our online inference algorithm during the time interval [t_{e−1}, t_e) may be summarized as follows:

1. Update A(t_{e−1}) using equation (10).
2. Compute B̂(t_{e−1}) = B̂(t_{e−2}) + A^{−1}(t_{e−1}) W(t_{e−1})^T ΔN(t_{e−1}).
3. Compute and cache the network statistics changed by event e−1, then initialize U_{e−1} with a list of those at-risk edges whose network statistics are changed by this event.
4. Compute and cache all values of network statistics changed during the time interval [t_{e−1}, t_e). Define C_{e−1} as the set of edges that switch to at-risk during this interval.
5. Before considering event e:
(a) Compute look-ahead summations at time t_{e−1} indexed by U_{e−1}.
(b) Update the covariate matrix W(t_{e−1}) based on the cache.
(c) Compute forward summations at time t_e indexed by U_{e−1} and C_{e−1}.

For the first event, A(t_1) must be initialized by naive summation over all current at-risk edges, which requires O(p²n²) calculations. Assuming that the number n of nodes stays roughly the same over each of the m edge events, the overall computational complexity of this online inference algorithm is thus O(p²n² + m(p²n + p³)). If a covariance matrix estimate for B̂(t) is desired, it can also be derived online using the ideas above, since we may write it as

$$\hat{\Sigma}(t) = \sum_{t_e \le t} W^-(t_e)\, \mathrm{diag}\{\Delta \mathbf{N}(t_e)\}\, W^-(t_e)^T = \sum_{t_e \le t} A^{-1}(t_e) \big( W_{i_e j_e}(t_e) \otimes W_{i_e j_e}(t_e) \big) A^{-1}(t_e), \qquad (11)$$

where W_{i_e j_e}(t_e) denotes the vector W(t_e)^T ΔN(t_e) and ⊗ is the outer product.
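The key incremental step (Eq. (10)) can be written compactly as two rank-|U| and rank-|U ∪ C| Gram updates; a sketch, with our variable names:

```python
import numpy as np

def update_gram(A, W_prev_rows, W_new_rows):
    """Incremental update of A(t_e) = W(t_e)^T W(t_e) per Eq. (10).

    A           : (p, p) Gram matrix A(t_{e-1})
    W_prev_rows : rows W_ij(t_{e-1}) for edges in U_{e-1} (their stale covariates)
    W_new_rows  : rows W_ij(t_e) for edges in U_{e-1} union C_{e-1}
    """
    A = A - W_prev_rows.T @ W_prev_rows   # remove stale contributions
    A = A + W_new_rows.T @ W_new_rows     # add updated and newly-at-risk rows
    return A
```

After the update, step 2 of the algorithm amounts to `B_hat += np.linalg.solve(A, w_event)`, where `w_event = W(t_e)^T ΔN(t_e)` is the covariate row of the dyad that formed an edge.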
4 Experimental analysis

In this section, we empirically analyze the ability of our inference methods to estimate the regression coefficients, as well as the predictive power of the learned models. Before discussing the experimental results, we briefly describe the synthetic and real-world data sets that we use for evaluation.

We simulate two data sets, SIM-1 and SIM-2, from ground-truth regression coefficients. In particular, we simulate a network formation process starting from time unit 0 until time 1200, where nodes arrive in the network at a constant rate λ_0 = 10 (i.e., on average, 10 nodes join the network at each time unit); the resulting simulated networks have 11,997 nodes. The edge formation process is simulated via Ogata's modified thinning algorithm [14] with an additive conditional intensity function. From time 0 to 1000, the baseline coefficient is set to β_0 = 10^{−6}; the coefficients for sender out-degree and receiver in-degree are set to β_1 = β_4 = 10^{−7}; the coefficients for reciprocity, transitivity, and shared contacters are set to β_5 = β_6 = β_9 = 10^{−5}; and the coefficients for sender in-degree, receiver out-degree, shared contactees, and triangle closure are set to 0. For SIM-1, these coefficients are kept constant and 118,672 edges are created. For SIM-2, between times 1000 and 1150 we increase the coefficients for transitivity and shared contacters to β_6 = β_9 = 4 × 10^{−5}, and after 1150 the coefficients return to their original values; in this case, 127,590 edges are created.

We also evaluate our approach on two real-world data sets, IRVINE and METAFILTER. IRVINE is a longitudinal data set derived from an online social network of students at UC Irvine [15]. This dataset has 1,899 users and 20,296 directed contact edges between users, with timestamps for each node arrival and edge creation event. This longitudinal network spans April to October of 2004. The METAFILTER data set is from a community weblog where users can share links and discuss Web content.¹ This dataset has 51,362 users and 76,791 directed contact edges between users. The continuous-time observation spans 8/31/2007 to 2/5/2011. Note that both data sets are non-recurrent, in that an edge between two nodes is created at most once.

¹ The METAFILTER data are available at http://mssv.net/wiki/index.php/Infodump
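A minimal sketch of Ogata-style thinning for a single bounded conditional intensity is shown below (the bound and the example intensity are illustrative assumptions, not the authors' simulation code):

```python
import numpy as np

def simulate_next_event(intensity, t0, t_max, lam_bar, rng):
    """Ogata's modified thinning for one conditional intensity process.

    intensity: callable t -> lambda(t), assumed bounded above by lam_bar
    Returns the next event time in (t0, t_max], or None if no event occurs.
    """
    t = t0
    while t < t_max:
        t += rng.exponential(1.0 / lam_bar)      # candidate from rate lam_bar
        if t >= t_max:
            return None
        if rng.uniform() <= intensity(t) / lam_bar:
            return t                             # accept with prob lambda(t)/lam_bar
    return None

rng = np.random.default_rng(0)
print(simulate_next_event(lambda t: 0.5 + 0.4 * np.sin(t), 0.0, 50.0, 1.0, rng))
```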
Likewise, Figures 1(c,d) show the same estimated and groundtruth coefficients for SIM-2. These results demonstrate that our inference algorithm can accurately recover the ground-truth coefficients in cases where the coefficients are fixed (SIM-1) and modulated (SIM-2). We also tried other settings for the ground-truth coefficients (e.g., multiple sinusoidal-like bumps) and found that our approach can accurately recover the coefficients in those cases as well. On the IRVINE and METAFILTER data, we also learn time-varying coefficients which are useful for interpreting network evolution. Figure 2 shows several of the estimated coefficients for the IRVINE data, using an Epanechnikov kernel (with a bandwidth of 30 days). These coefficients suggest the existence of two distinct phases in the evolution of the network. In the first phase of network formation, the network grows at an accelerated rate. Positive coefficients for sender outdegree, reciprocity, and transitivity in these plots imply that users with a high numbers of friends tend to make more friends, tend to reciprocate their relations, and tend to make friends with their friends? friends, respectively. However, these coefficients decrease towards zero (the blue line) and enter a second phase where the network is structurally stable. Both of these phases have also been observed in an independent study of the data [15]. Figure 3 shows the estimated coefficients for METAFILTER, using an Epanechnikov kernel (with a bandwidth of 30). Interestingly, the coefficients suggest that there is a marked change in the edge formation process around 7/10/10. Unlike the IRVINE coefficients, the estimated METAFILTER coefficients continue to vary over time. 6 Table 1: Lengths of building, training, and test periods. The number of events are in parentheses. IRVINE METAFILTER Building 4/15/04 ? 5/11/04 (7073) 6/15/04 ? 12/21/09 (60376) Training 5/12/04 ? 5/31/04 (7646) 12/22/09 ? 7/9/10 (8763) Test 6/1/04 ? 10/19/04 (5507) 7/10/10 ? 2/5/11 (7620) 4.2 Predicting future links We perform rolling prediction experiments over the real-world data sets to evaluate the predictive power of the learned regression models. Following the evaluation methodology of [9], we split each longitudinal data set into three periods: a statistics-building period, a training period, and a test period (Table 1). The statistics-building period is used solely to build up the network statistics, while the training period is used to learn the coefficients and the test period is used to make predictions. Throughout the training and test periods, the time-dependent statistics are continuously updated. Furthermore, for the additive Aalen model, we use the online inference technique from Section 3.3. When we predict an event in the test period, all the previous events from the test period are used as training data as well. Meanwhile, for the multiplicative Cox model, we adaptively learn the model in batch-online fashion; during the test period, for every 10 days, we retrain the model (using the Newton-Raphson technique described in Section 3.1) with additional training examples coming from the test set. Our Newton-Raphson implementation uses a step-halving procedure, halving the length of each step if necessary until log L(?) increases. The iterations continue until every element in ? log L(?) is smaller that 10?3 in absolute value, or until the relative increase in log L(?) is less than 10?100 , or until 100 Newton-Raphson iterations are reached, whichever occurs first. 
The baseline that we consider is logistic regression (LR) with the same time-dependent statistics used in the Aalen and Cox models. Note that logistic regression is a competitive baseline that has been used in previous link prediction studies (e.g., [11]). We learn the LR model in the same adaptive batch-online fashion as the Cox model. We also use case control sampling to address the imbalance between positive and negative cases (since at each ?positive? edge event there are order of n2 ?negative? training cases). At each event, we sample K negative training examples for that same time point. We use two settings for K in the experiments: K = 10 and K = 50. To make predictions using the additive Aalen model, one would need to extrapolate the time-varying coefficients to future time points. For simplicity, we use a uniform smoothing kernel (weighting all observations equally), with a window size of 1 or 10 days. A more advanced extrapolation technique could yield even better predictive performance for the Aalen model. Each model can provide us with the probability of an edge formation event between two nodes at a given point in time, and so we can calculate an accumulative recall metric across all test events: P (i?j,t)?TestSet I[j ? Top(i, t, K)] Recall = , (12) |TestSet| where Top(i, t, K) is the top-K list of i?s potential ?friends? ranked based on intensity ?ij (t). We evaluate the predictive performance of the Aalen model (with smoothing windows of 1 and 10), the Cox model, and the LR baseline (with case control ratios 1:10 and 1:50). Figure 4(a) shows the recall results on IRVINE. In this case, both the Aalen and Cox models outperform the LR baseline; furthermore, it is interesting to note that the Aalen model with time-varying coefficients does not outperform the Cox model. One explanation for this result is that the IRVINE coefficients are pretty stable (apart from the initial phase as shown in Figure 2), and thus time-varying coefficients do not provide additional predictive power in this case. Also note that LR with ratio 1:10 outperforms 1:50. We also tried an LR ratio of 1:3 (not shown) but found that it performed nearly identically to LR 1:10; thus, both the Aalen and Cox models outperform the baseline substantially on these data. Figure 4(b) shows the recall results on METAFILTER. As in the previous case, both the Aalen and Cox models significantly outperform the LR baseline. However, the Aalen model with time-varying coefficients also substantially outperforms the Cox model with time-fixed coefficients. In this case, estimating time-varying coefficients improves predictive performance, which makes sense because we have seen in Figure 3 that METAFILTER?s coefficients tend to vary more over time. We also calculated precision results (not shown) on these data sets which confirm these conclusions. 7 0.3 0.4 0.3 Recall 0.2 Recall 1 5 10 Cut?Point K 15 Adaptive LR (1:10) Adaptive LR (1:50) Adaptive Cox Aalen (Uniform?1) Aalen (Uniform?10) 0.1 0.2 Adaptive LR (1:10) Adaptive LR (1:50) Adaptive Cox Aalen (Uniform?1) Aalen (Uniform?10) 20 1 (a) IRVINE 5 10 Cut?Point K 15 20 (b) METAFILTER Figure 4: Predictive performance of the additive Aalen model, multiplicative Cox model, and logistic regression baseline on the IRVINE and METAFILTER data sets, using recall as the metric. 5 Related Work and Conclusions Evolving networks have been descriptively analyzed in exploratory fashion in a variety of domains, including email data [16], citation graphs [17], and online social networks [18]. 
On the modeling side, temporal versions of exponential random graph models [1, 2, 3] and latent space models [19, 4, 5, 20] have been developed. Such methods operate on cross-sectional snapshot data, while our framework models continuous-time network event data. It is worth noting that continuous-time Markov process models for longitudinal networks have been proposed previously [21]; however, these approaches have only been applied to very small networks, while our regression-based approach can scale to large networks. Recently, there has also been work on inferring unobserved time-varying networks from evolving nodal attributes which are observed [22, 23, 24]. In this paper, the main focus is the statistical modeling of observed continuous-time networks. More recently, survival and event history models based on the Cox model have been applied to network data [8, 12, 9]. A significant difference between our previous work [9] and this paper is that scalability is achieved in our earlier work by restricting the approach to ?egocentric? modeling, in which counting processes are placed only on nodes. In contrast, here we formulate scalable inference techniques for the general ?relational? setting where counting processes are placed on edges. Prior work also assumed static regression coefficients, while here we develop a framework for time-varying coefficients for the additive Aalen model. Regression models with varying coefficients have been previously proposed in other contexts [25], including a time-varying version of the Cox model [26], although to the best of our knowledge such models have not been developed or fitted on longitudinal networks. A variety of link prediction techniques have also been investigated by the machine learning community over the past decade (e.g., [27, 28, 29]). Many of these methods use standard classifiers (such as logistic regression) and take advantage of key features (such as similarity measures among nodes) to make accurate predictions. While our focus is not on feature engineering, we note that arbitrary network and nodal features such as those developed for link prediction can be incorporated into our continuous-time regression framework. Other link prediction techniques based on matrix factorization [30] and random walks [11] have also been studied. While these link prediction techniques mainly focus on making accurate predictions, our proposed approach here not only gives accurate predictions but also provides a statistical model (with time-varying coefficient estimates) which can be useful in evaluating scientific hypotheses. In summary, we have developed multiplicative and additive regression models for large-scale continuous-time longitudinal networks. On simulated and real-world data, we have shown that the proposed inference approach can accurately estimate regression coefficients and that the learned model can be used for interpreting network evolution and predicting future network events. An interesting direction for future work would be to incorporate time-dependent nodal attributes (such as textual content) into this framework and to investigate regularization methods for these models. Acknowledgments This work is supported by ONR under the MURI program, Award Number N00014-08-1-1015. 8 References [1] S. Hanneke and E. P. Xing. Discrete temporal models of social networks. In Proc. 2006 Conf. on Statistical Network Analysis, pages 115?125. Springer-Verlag, 2006. [2] D. Wyatt, T. Choudhury, and J. Bilmes. 
Discovering long range properties of social networks with multivalued time-inhomogeneous models. In Proc. 24th AAAI Conf. on AI, 2010.
[3] P. N. Krivitsky and M. S. Handcock. A separable model for dynamic networks. Under review, November 2010. http://arxiv.org/abs/1011.1937.
[4] W. Fu, L. Song, and E. P. Xing. Dynamic mixed membership blockmodel for evolving networks. In Proc. 26th Intl. Conf. on Machine Learning, pages 329–336. ACM, 2009.
[5] J. Foulds, C. DuBois, A. Asuncion, C. Butts, and P. Smyth. A dynamic relational infinite feature model for longitudinal social networks. In AI and Statistics, volume 15 of JMLR W&C Proceedings, pages 287–295, 2011.
[6] P. K. Andersen, O. Borgan, R. D. Gill, and N. Keiding. Statistical Models Based on Counting Processes. Springer, 1993.
[7] O. O. Aalen, O. Borgan, and H. K. Gjessing. Survival and Event History Analysis: A Process Point of View. Springer, 2008.
[8] C. T. Butts. A relational event framework for social action. Soc. Meth., 38(1):155–200, 2008.
[9] D. Q. Vu, A. U. Asuncion, D. R. Hunter, and P. Smyth. Dynamic egocentric models for citation networks. In Proc. 28th Intl. Conf. on Machine Learning, pages 857–864, 2011.
[10] P. Holland and S. Leinhardt. A dynamic model for social networks. J. Math. Soc., 5:5–20, 1977.
[11] L. Backstrom and J. Leskovec. Supervised random walks: Predicting and recommending links in social networks. In Proceedings of the 4th ACM International Conference on Web Search and Data Mining, pages 635–644. ACM, 2011.
[12] P. O. Perry and P. J. Wolfe. Point process modeling for directed interaction networks. Under review, October 2011. http://arxiv.org/abs/1011.1703.
[13] D. R. Cox. Regression models and life-tables. J. Roy. Stat. Soc., Series B, 34:187–220, 1972.
[14] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes, Volume 1. Probability and its Applications (New York). Springer, New York, 2nd edition, 2008.
[15] P. Panzarasa, T. Opsahl, and K. M. Carley. Patterns and dynamics of users' behavior and interaction: Network analysis of an online community. J. Amer. Soc. for Inf. Sci. and Tech., 60(5):911–932, 2009.
[16] G. Kossinets and D. J. Watts. Empirical analysis of an evolving social network. Science, 311(5757):88–90, 2006.
[17] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graphs over time: densification laws, shrinking diameters and possible explanations. In Proc. 11th ACM SIGKDD Intl. Conf. on Knowledge Discovery in Data Mining, pages 177–187. ACM, 2005.
[18] B. Viswanath, A. Mislove, M. Cha, and K. P. Gummadi. On the evolution of user interaction in Facebook. In Proc. 2nd ACM SIGCOMM Wkshp. on Social Networks, pages 37–42. ACM, 2009.
[19] P. Sarkar and A. Moore. Dynamic social network analysis using latent space models. SIGKDD Explorations, 7(2):31–40, 2005.
[20] Q. Ho, L. Song, and E. Xing. Evolving cluster mixed-membership blockmodel for time-varying networks. In AI and Statistics, volume 15 of JMLR W&C Proceedings, pages 342–350, 2011.
[21] T. A. B. Snijders. Models for longitudinal network data. Mod. Meth. in Soc. Ntwk. Anal., pages 215–247, 2005.
[22] S. Zhou, J. Lafferty, and L. Wasserman. Time varying undirected graphs. Machine Learning, 80:295–319, 2010.
[23] A. Ahmed and E. P. Xing. Recovering time-varying networks of dependencies in social and biological studies. Proc. Natl. Acad. Scien., 106(29):11878–11883, 2009.
[24] M. Kolar, L. Song, A. Ahmed, and E. P. Xing. Estimating time-varying networks. Ann. Appl. Stat., 4(1):94–123, 2010.
[25] Z. Cai, J. Fan, and R. Li.
Efficient estimation and inferences for varying-coefficient models. J. Amer. Stat. Assn., 95(451):888–902, 2000.
[26] T. Martinussen and T. H. Scheike. Dynamic Regression Models for Survival Data. Springer, 2006.
[27] D. Liben-Nowell and J. Kleinberg. The link-prediction problem for social networks. J. Amer. Soc. for Inf. Sci. and Tech., 58(7):1019–1031, 2007.
[28] M. Al Hasan, V. Chaoji, S. Salem, and M. Zaki. Link prediction using supervised learning. In SDM '06: Workshop on Link Analysis, Counter-terrorism and Security, 2006.
[29] J. Leskovec, D. Huttenlocher, and J. Kleinberg. Predicting positive and negative links in online social networks. In Proc. 19th Intl. World Wide Web Conference, pages 641–650. ACM, 2010.
[30] D. M. Dunlavy, T. G. Kolda, and E. Acar. Temporal link prediction using matrix and tensor factorizations. ACM Transactions on Knowledge Discovery from Data, 5(2):10, February 2011.
A Reinforcement Learning Theory for Homeostatic Regulation

Mehdi Keramati
Group for Neural Theory, LNC, ENS
Paris, France
[email protected]

Boris Gutkin
Group for Neural Theory, LNC, ENS
Paris, France
[email protected]

Abstract

Reinforcement learning models address an animal's behavioral adaptation to its changing "external" environment, and are based on the assumption that Pavlovian, habitual and goal-directed responses seek to maximize reward acquisition. Negative-feedback models of homeostatic regulation, on the other hand, are concerned with behavioral adaptation in response to the "internal" state of the animal, and assume that animals' behavioral objective is to minimize deviations of some key physiological variables from their hypothetical setpoints. Building upon the drive-reduction theory of reward, we propose a new analytical framework that integrates learning and regulatory systems, such that the two seemingly unrelated objectives of reward maximization and physiological stability prove to be identical. The proposed theory shows behavioral adaptation to both internal and external states in a disciplined way. We further show that the proposed framework allows for a unified explanation of several behavioral patterns, like the motivational sensitivity of different associative learning mechanisms, anticipatory responses, interaction among competing motivational systems, and risk aversion.

1 Introduction

"Reinforcement learning" and "negative-feedback models of homeostatic regulation" are two control theory-based computational frameworks that have had major contributions in the learning and motivation literatures in behavioral psychology, respectively. Proposing neurobiologically plausible algorithms, the computational theory of reinforcement learning (RL) explains how animals adapt their behavior to varying contingencies in their "external" environment, through persistently updating their estimates of the rewarding value of feasible choices [1]. The teaching signal required for these updates is suggested to be carried by phasic activity of midbrain dopamine neurons [2] projecting to the striatum, where stimulus-response associations are supposed to be encoded. Critically, this theory is built upon one hypothetical assumption: animals' behavioral objective is to maximize reward acquisition. In this respect, without addressing the question of how the brain constructs reward, the RL theory gives a normative explanation for how the brain's decision-making circuitry shapes instrumental responses as a function of external cues, so that the animal satisfies its reward-maximization objective.

Negative-feedback models of homeostatic regulation (HR), on the other hand, seek to explain regular variabilities in behavior as a function of the internal state of the animal, when the external stimuli are fixed [3, 4]. Homeostasis means maintaining the stability of some physiological variables. Correspondingly, behavioral homeostasis refers to corrective responses that are triggered by deviation of a regulated variable from its hypothetical setpoint. In fact, regulatory systems operate "as if" they aim at defending variables against perturbations. In this sense, the existence of an "error signal" (also known as "negative feedback"), defined by the discrepancy between the current and desired internal state, is essential for any regulatory mechanism.
Even though the presence of this negative feedback in controlling motivated behaviors is argued to be indisputable for explaining some behavioral and physiological facts [3], three major difficulties have raised criticism against negative-feedback models of behavioral homeostasis [3, 5]: (1) Anticipatory eating and drinking can be elicited in the absence of physiological depletion [6], supposedly in order to prevent anticipated deviations in the future. In more general terms, motivated behaviors are observed to be elicited even when no negative feedback is detectable. (2) Intravenous (and even intragastric, in some cases) injection of food is not motivating, even though it alleviates the deviation of the motivational state from its homeostatic setpoint. For example, rats do not show motivation, after several learning trials, to run down an alley for intragastric feeding by milk, whereas they quickly learn to run for normal drinking of milk [7]. These behavioral patterns are used as evidence to argue that maintaining homeostasis (reducing the negative feedback) is not a behavioral objective. In contrast, the taste (and other sensory) information of stimuli is argued to be the key factor for their reinforcing properties [5]. (3) The traditional homeostatic regulation theory simply assumes that the animal knows how to translate its various physiological deficits into the appropriate behaviors. In fact, without taking into account the contextual state of the animal, negative-feedback models only address the question of whether or not the animal should have motivation toward a certain outcome, without answering how the outcome can be achieved through a series of instrumental actions.

The existence of these shortcomings calls for rethinking the traditional view of homeostatic regulation theory. We believe that these shortcomings, as well as the weak spot of RL models in not taking into account the internal drive state of the animal, all arise from the lack of an integrated view of learning and motivation. We show in this paper that a simple unified computational theory for RL and HR allows for explaining a wide range of behavioral patterns, including those mentioned above, without decreasing the explanatory power of the two original theories. We further show that such a unified theory can satisfy the two objectives of reward maximization and deviation minimization at the same time.

2 The model

The term "reward" (with many equivalents like reinforcer, motivational salience, utility, etc.) has been at the very heart of behavioral psychology since its foundation. In purely behavioral terms, it refers to a stimulus delivered to the animal after a response is made, which increases the probability of making that response in the future. RL theory proposes algorithms for how different learning systems can adapt an agent's responses to varying external conditions in order to maximize the acquisition of reward. For this purpose, RL algorithms try to learn, via experience, the sum of discounted rewards expected to be received after taking a specific action (a_t), in a specific state (s_t), onward:

    V(s_t, a_t) = E\big[ r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots \,\big|\, s_t, a_t \big] = E\Big[ \sum_{i=t}^{\infty} \gamma^{i-t} r_i \,\Big|\, s_t, a_t \Big]    (1)

Here 0 ≤ γ ≤ 1 discounts future rewards. r_t denotes the rewarding value of the outcome the animal receives at time t, which is often set to a "fixed" value that can be either positive or negative, depending on whether the corresponding stimulus is appetitive or aversive, respectively.
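To make Eq. (1) concrete, here is a minimal sketch that estimates the value by Monte Carlo, assuming rewards observed along sampled trajectories; the function names and the estimator itself are ours, not part of the model.

    import numpy as np

    def discounted_return(rewards, gamma):
        # Eq. (1) for one trajectory: sum_{i >= t} gamma**(i - t) * r_i,
        # with rewards[0] = r_t.
        return float(np.sum(gamma ** np.arange(len(rewards)) * np.asarray(rewards)))

    def value_estimate(trajectories, gamma=0.9):
        # Monte Carlo estimate of V(s_t, a_t): average the discounted
        # return over trajectories that all start with (s_t, a_t).
        return float(np.mean([discounted_return(r, gamma) for r in trajectories]))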
However, animals' motivation for outcomes is not fixed, but a function of their internal state: a food pellet is more rewarding to a hungry than to a sated rat. In fact, the internal state (also referred to as the drive, or motivational, state) of the animal affects the reinforcing value of a constant external outcome. As an attempt to identify the nature of reward and its relation to drive states, neo-behaviorists like Hull [8], Spence, and Mowrer proposed the "drive reduction" theory of motivation. According to this theory, one primary mechanism underlying reward is drive reduction. In terms of homeostatic regulation theory, reward is defined as a stimulus that reduces the discrepancy between the current and the desired drive state of the animal; i.e., a food pellet is rewarding to a hungry rat because it fulfills a physiological need.

To capture this idea formally, let H_t = {h_{1,t}, h_{2,t}, ..., h_{N,t}} denote the physiological state of the animal at time t, and H* = {h*_1, h*_2, ..., h*_N} the homeostatic setpoint. As a special case, Figure 1 shows a simplified system where food and water constitute all the physiological needs. This model can obviously be extended, without loss of generality, to cover other homeostatically regulated drives, as well as more detailed aspects of feeding like differential drives for carbohydrate, sodium, calcium, etc. A drive function can then be defined on this space as a mapping from physiological state to motivation:

    d(H_t) = \sqrt[m]{ \sum_{i=1}^{N} |h_i^* - h_{i,t}|^n }    (2)

[Figure 1: An exemplary homeostatic space, with food and water as regulated physiological needs.]

This drive function is in fact a distance function (the Euclidean distance when m = n = 2) that creates quasi-circular iso-drive curves centered around the homeostatic setpoint. The homeostatic space, as a multidimensional metric space, is a hypothetical construct that allows us to explain a wide range of behavioral and physiological evidence in a unified framework. Since animals' physiological states are most often below the setpoint, our focus in this paper is mostly on the quarter of the homeostatic space that is below the homeostatic setpoint.

Having defined drive, we can now provide a formal definition for reward, based on the drive-reduction theory. Assume that, as the result of some actions, the animal receives an outcome at time t that contains k_{i,t} units of the constituent h_i, for each i ∈ {1, 2, ..., N}. K_t can be defined as a row vector with entries (k_{i,t} : i ∈ {1, 2, ..., N}). Consumption of that outcome will result in a transition of the physiological state from H_t to H_{t+1} = H_t + K_t and, consequently, a transition of the drive state from d(H_t) to d(H_t + K_t). For example, Figure 1 shows the transition resulting from taking k_1 units of food. Accordingly, the rewarding value of that outcome can be defined as the consequent reduction in the drive function:

    r(H_t, K_t) = d(H_t) - d(H_{t+1}) = d(H_t) - d(H_t + K_t)    (3)

This drive-reduction definition of reward is the central element in our proposed framework that will bridge the gap between regulatory and reward learning systems.

3 Behavioral plausibility of the reward function

Before discussing how the reward defined in equation 3 can be used by associative learning mechanisms (RL theory), we are interested in this section in showing that the functional form proposed for the reward function allows several behavioral phenomena to be explained in a unified framework.
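As a minimal numerical sketch of Eqs. (2) and (3): the exponents n = 4, m = 3 and the setpoint values below are arbitrary choices of ours, used only for illustration.

    import numpy as np

    def drive(H, H_star, m=3.0, n=4.0):
        # Eq. (2): m-th root of the summed n-th powers of the deviations
        # of the physiological state H from the setpoint H_star.
        H, H_star = np.asarray(H, float), np.asarray(H_star, float)
        return np.sum(np.abs(H_star - H) ** n) ** (1.0 / m)

    def reward(H, K, H_star):
        # Eq. (3): r(H_t, K_t) = d(H_t) - d(H_t + K_t).
        return drive(H, H_star) - drive(np.asarray(H, float) + np.asarray(K), H_star)

    H_star = [100.0, 100.0]                            # setpoint for (food, water)
    print(reward([40.0, 80.0], [10.0, 0.0], H_star))   # hungry: large reward
    print(reward([90.0, 80.0], [10.0, 0.0], H_star))   # nearly sated: small reward

The same ten units of food come out roughly fifty times more rewarding in the "hungry" internal state than in the nearly sated one, which is the drive-reduction intuition in quantitative form.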
For all n > m > 2, the rewarding value of an outcome consisting of only one constituent, K_t = (0, 0, ..., k_{j,t}, ..., 0), when the animal is in the motivational state H_t, will have the four properties below. Even though these properties are written for the cases in which the need state remains below the setpoint, since the drive function is symmetric with respect to the setpoint, similar results can be derived for the other three quarters.

3.1 Reward value increases with the dose of outcome

The reward function is an increasing function of the magnitude of the outcome (e.g., the number of food pellets); i.e., the bigger the outcome is, the more rewarding value it will have. It is straightforward to show that:

    \frac{dr(H_t, K_t)}{dk_{j,t}} > 0, \quad \text{for } k_{j,t} > 0    (4)

Supporting this property of the drive function, it is shown in progressive-ratio schedules that rats maintain higher breakpoints when reinforced with bigger appetitive outcomes, reflecting higher motivation toward them [9, 10].

3.2 Excitatory effect of deprivation level

Increasing the deprivation level of the animal will increase the reinforcing strength of a constant dose of the corresponding outcome; i.e., a single pellet of food is more rewarding to a hungry than to a sated rat:

    \frac{dr(H_t, K_t)}{d|h_j^* - h_{j,t}|} > 0, \quad \text{for } k_{j,t} > 0    (5)

Consistently, the level of food deprivation in rats is demonstrated to increase the breakpoint in progressive-ratio schedules [10].

3.3 Inhibitory effect of the irrelevant drive

A large body of behavioral experiments shows that the deprivation level for one need has an inhibitory effect on the reinforcing value of outcomes that satisfy irrelevant needs (see [11] for a review). It is well known that a high level of irrelevant thirst impairs Pavlovian as well as instrumental responses for food, during both acquisition and extinction. Reciprocally, food deprivation is demonstrated to suppress Pavlovian and instrumental water-related responses. As some other examples, increased calcium appetite is shown to reduce appetite for phosphorus, and an increased level of hunger is demonstrated to inhibit sexual behavior. It is straightforward to show that the specific functional form proposed for the drive function can capture this inhibitory-like interaction between irrelevant drives:

    \frac{dr(H_t, K_t)}{d|h_i^* - h_{i,t}|} < 0, \quad \text{for all } i \neq j \text{ and } k_{j,t} > 0    (6)

Intuitively, one does not play chess, or even search for sex, with an empty stomach. This behavioral pattern can be interpreted as competition among different motivational systems (different dimensions of the homeostatic space), and is consistent with the notion of a "complementary" relation between some goods in economics, like between cake and coffee. Each of two complementary goods is highly rewarding only when the other one is also available; taking one good when the other one is lacking is not that rewarding.

3.4 Risk aversion

Risk aversion, a fundamental concept in both psychology and economics supported by a wealth of behavioral experiments, is defined by the reluctance of individuals to select choices with uncertain outcomes over choices with more certain payoffs, even when the expected payoff of the former is higher than that of the latter. It is easy to show that a concave reward (utility) function (with respect to the quantity of the outcome) is equivalent to risk aversion. It is due to this feature that concavity of the utility function has become a fundamental assumption in microeconomic theory.
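The properties in Eqs. (4)-(6), together with the concavity condition formalized in Eq. (7) just below, can be spot-checked numerically. The following sketch uses the drive function of Eq. (2) with our illustrative choices n = 4, m = 3 (satisfying n > m) and arbitrary setpoints:

    import numpy as np

    def d(H, H_star, m=3.0, n=4.0):
        # Eq. (2) with illustrative exponents.
        return np.sum(np.abs(np.asarray(H_star) - np.asarray(H)) ** n) ** (1.0 / m)

    def r(H, K, H_star):
        # Eq. (3): drive-reduction reward.
        return d(H, H_star) - d(np.asarray(H, float) + np.asarray(K), H_star)

    H_star, H = [100.0, 100.0], [40.0, 80.0]
    food = lambda k: [k, 0.0]

    rs = [r(H, food(k), H_star) for k in (5.0, 10.0, 15.0)]
    assert rs[0] < rs[1] < rs[2]           # Eq. (4): reward grows with the dose
    assert rs[1] - rs[0] > rs[2] - rs[1]   # Eq. (7): with diminishing increments
    # Eq. (5): the same dose is more rewarding at higher food deprivation:
    assert r([30.0, 80.0], food(10.0), H_star) > r(H, food(10.0), H_star)
    # Eq. (6): raising the irrelevant (water) deprivation lowers the food reward:
    assert r([40.0, 60.0], food(10.0), H_star) < r(H, food(10.0), H_star)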
The proposed form of the drive function can capture risk aversion:

    \frac{d^2 r(H_t, K_t)}{dk_{j,t}^2} < 0, \quad \text{for } k_{j,t} > 0    (7)

It is noteworthy that, as K_t consists of substances that fulfil physiological needs, one should be careful in extending this and other features of the model to the case of monetary rewards, or any other type of stimulus that does not seem to have a corresponding regulatory system (like social rewards, novelty-induced reward, etc.).

It is clear that the four mentioned properties of the reward function depend on the specific functional form adopted for the drive function (equation 2). In fact, the drive-reduction definition of reward allows the validity of the form of the drive function to be experimentally tested in behavioral tasks.

4 Homeostatic control through learning mechanisms

Despite significant day-to-day variations in energy expenditure and food availability, the homeostatic regulation system involved in the control of food intake and energy balance is remarkably precise. This adaptive nature of the homeostatic regulation system can be achieved by employing the animal's learning mechanisms, which are capable of adopting flexible behavioral strategies to cope with changes in environmental conditions. In this section we are interested in the behavioral and theoretical implications of the interaction between homeostatic regulation and reward learning systems.

One theoretical implication of the proposed definition of reward is that it reconciles the RL and homeostatic regulation theories in terms of the normative assumptions behind them. In more precise words, if any reward learning system, like an RL algorithm, uses the drive-reduction definition of reward proposed in equation 3, then a behavioral policy that maximizes the sum of discounted rewards is at the same time minimizing the sum of discounted drives (or the sum of discounted deviations from the setpoint), and vice versa (see Supplementary Methods: Proposition 1). In this respect, reward maximization can be seen as just a means of guiding the animal's behavior toward satisfying the more basic objective of maintaining homeostasis.

Since the reward function defined by equation 3 depends on the internal state and is thus non-stationary, some appropriate adjustments to the classical RL algorithms that seek an optimal policy must be thought of. One straightforward solution is to define an augmented Markov decision process (MDP) implied by the cross-product of internal and external states, and then use a variant of RL algorithms to learn action-values in that problem space. Consistent with this view, some behavioral studies show that the internal state can work in the same way that the external state works. That is, animals are able to acquire responses conditioned upon certain motivational states (e.g., a motivational state induced by a benzodiazepine agonist) [11]. Although this approach, in theory, guarantees convergence to the optimal policy, the high dimensionality of the resulting state space makes learning rather impossible in practice. Moreover, since the next external state only depends on the current external but not internal state, such an augmented MDP will have significant redundant structure. From a machine learning point of view, as argued in [12], an appropriate function approximator specifically designed to take advantage of such structure can be used to reduce the state-space dimensionality.
Besides this algorithm-independent expansion of the state space argued above, the proposed definition of reward provides an opportunity for discussing how different associative learning systems in the brain take the animal's internal state into account. Here we discuss motivational sensitivity in the habitual (hypothesized to use a model-free RL, like the temporal difference algorithm [13]), goal-directed (hypothesized to use a model-based RL [13]), and Pavlovian learning systems.

A model-based RL algorithm learns the state-transition matrix (action-outcome contingencies), as well as the reward function (the incentive value of each outcome), and then uses them to compute the value of possible actions in a given state, using the Bellman optimality equation [1]. In our framework, as long as a model-based algorithm is involved in decision making, all that the animal needs to do when its internal state shifts is to update the incentive value (reward function) of each potential outcome. The reward function being updated, the animal will be able to take the optimal policy, given that the state-transition function is learned completely. In order to update the reward function, one way is for the animal to re-experience outcomes in its new motivational state. This way seems to be how the goal-directed system works, since re-exposure is demonstrated to be necessary in order for the animal's behavior to be sensitive to changes in motivational state [11]. The second way is to update the reward function without re-exposure, but through directly estimating the drive-reduction effect of outcomes in the current motivational state, using equation 3. This way seems to be how the Pavlovian system works, since numerous experiments show that Pavlovian conditioned or unconditioned responses are sensitive to the internal state in an outcome-specific way, without re-exposure being required [11]. This Pavlovian-type behavioral adaptation is observed even when the animal has never experienced the outcome in its current motivational state during its whole life. This shows that animals are able to directly estimate the drive-reduction effect of at least some certain types of outcomes. Furthermore, the fact that this motivational sensitivity is outcome-specific shows that the underlying associative learning structure is model-based (i.e., based on learning the causal contingencies between events).

The habitual system (hypothesized to use a model-free RL [13]) has been demonstrated, through outcome-devaluation experiments, to be unable to directly (i.e., without new learning) adapt the animal's behavioral response after shifts in the motivational state [11]. It might be concluded from this observation that action-values in the habitual system only depend on the internal state of the animal during the course of learning (past experiences), but not during performance. This is consistent with the information storage scheme in model-free RL, where cached action-values lose their connection with the identity of the outcomes. Thus, the habitual system doesn't know which specific action-values should be updated when a novel internal state is being experienced. However, it has been argued by some authors [14] that even though habitual responses are insensitive to sensory-specific satiety, they are sensitive to motivational manipulations in a general, outcome-independent way.
Note that the previous form of motivational insensitivity concerns the lack of behavioral adaptation in an outcome-specific way (for example, the lack of a greater preference for food-seeking behavior, compared to water-seeking behavior, after a shift from satiety to hunger). Borrowing from the Hullian concept of "generalized drive" [8], it has been proposed that a transition from an outcome-sated to an outcome-deprived motivational state will energize all pre-potent habitual responses, irrespective of whether or not those actions result in that certain outcome [14]. For example, a motivational shift from satiety to hunger will result in the energization of both food-seeking and water-seeking habitual responses.

Taking a normative perspective, we argue here that this outcome-independent energizing effect is an approximate way of updating the values of state-action pairs when the motivational state shifts instantaneously. Assuming that the animal is trained under the fixed internal state H_0, and then tested in a novel internal state H_1, the cached values in the habitual system can be approximated in the new motivational state by:

    Q_1(s, a) = \frac{d(H_1)}{d(H_0)} \, Q_0(s, a), \quad \text{for all state-action pairs}    (8)

where Q_0(s, a) represents the action-values learned by the habitual system after the training period. According to this update rule, all the pre-potent actions will get energized if the deviation from the homeostatic setpoint increases in the new internal state, whether or not the outcome of those actions is more desired in the new state. Proposition 2 (see Supplementary Methods) shows that this update rule is a perfect approximation only when H_1 = c·H_0, where c ≥ 0. This means that the energizing effect will result in rational behavior after a motivational shift only when the new internal state is an amplification or abridgement of the old internal state in all dimensions of the homeostatic space, with equal magnitude. Since the model-free system does not learn the causal model of the environment, it cannot show motivational sensitivity in an outcome-specific way, and this general energizing effect is the best approximate way to react to motivational manipulations.
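A minimal sketch of the outcome-independent energizing update of Eq. (8); representing the cached habitual values as a dictionary, and the exponent choices n = 4, m = 3, are ours, for illustration only:

    import numpy as np

    def energize(Q0, H0, H1, H_star, m=3.0, n=4.0):
        # Eq. (8): rescale all cached action-values by the ratio of drives
        # in the novel and training internal states.
        d = lambda H: np.sum(np.abs(np.asarray(H_star) - np.asarray(H)) ** n) ** (1.0 / m)
        scale = d(H1) / d(H0)
        return {sa: scale * q for sa, q in Q0.items()}

    Q0 = {("cage", "food lever"): 2.0, ("cage", "water lever"): 1.0}
    # A shift from mild to strong hunger energizes *both* habits, whether or
    # not their outcomes are more desired in the new state:
    print(energize(Q0, H0=[80.0, 80.0], H1=[40.0, 80.0], H_star=[100.0, 100.0]))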
4.1 Anticipatory responses

The notion of predictive homeostasis [3, 4], as opposed to classical reactive homeostasis where the existence of negative feedback (physiological depletion) is essential, suggests that, through anticipating future needs, individuals make anticipatory responses in order to prevent deviations of regulated variables from their setpoints. Anticipatory eating and drinking, as two examples, are defined by taking food and water in the absence of any detectable decrease in the corresponding physiological signals. Although it is quite clear that explaining such behavioral patterns requires integrating homeostatic mechanisms and (predictive) learning processes, to our knowledge no well-defined mechanism has been proposed for it so far. We show here, through an example, how the proposed model can reconcile anticipatory responses with homeostatic regulation theory.

Temperature regulation provides a clear example of predictive homeostasis. Body temperature, which has long been a topic of interest in the homeostatic regulation literature, is shown to increase back to its normal level by shivering, after the animal is placed in a cold place. Interestingly, cues that predict being placed in a cold place induce anticipatory shivering and cause the body temperature to go above the normal temperature of the animal [15]. Similarly, cues that predict receiving a drug that decreases body temperature are shown to have the same effect. This behavior is interpreted as an attempt by the animal to alleviate the severity of the deviation from the setpoint.

To model this example, let us assume that x* is the normal temperature of the animal, and that putting the animal in the cold place will result in a decrease of l_x units in body temperature. Furthermore, assume that when presented with the coldness-predicting cue (S_C), the animal chooses how much to shiver and thus increases its body temperature by k_x units. In a scenario like this, after observing the coldness-predicting cue, the animal's temperature will shift from x* to x* + k_x, as a result of anticipatory shivering. Assuming that the animal will then experience coldness after a delay of one time unit, its temperature will transit from x* + k_x to x* + k_x - l_x. The rewarding value of this transition will be discounted at the rate γ. Finally, assuming that after one more time unit the body temperature goes back to the normal level x*, the animal will receive another reward, discounted at the rate γ². The sum of discounted rewards can be written as below:

    V(S_C, k_x) = [d(x^*) - d(x^* + k_x)] + \gamma \, [d(x^* + k_x) - d(x^* + k_x - l_x)] + \gamma^2 \, [d(x^* + k_x - l_x) - d(x^*)]    (9)

Proposition 3 (see Supplementary Methods) shows that the optimal strategy for maximizing the sum of discounted rewards in this scenario is k_x = l_x/2, assuming that the discount factor is not equal to one but is sufficiently close to it. In fact, the model predicts that the best strategy is to perform anticipatory shivering to an extent that keeps the temperature as close as possible to the setpoint: turning around near the setpoint is preferred to moving far from it and coming back. This example, which can easily be generalized to anticipatory eating and drinking, shows that when learning mechanisms play a regulatory role, it is not only the initial and final motivational states of a policy that matter, but also the trajectory of the motivational state through that policy. This is in fact due to the discounting of future rewards. If the discount factor is one, then regardless of the trajectory of the motivational state, the sum of rewards for all policies that start from a certain homeostatic point and finish at another point will be equal. In that case, the sum of rewards for all values of k_x in the anticipatory shivering example will be zero, and thus anticipatory strategies will not be preferred to a reactive-homeostasis strategy. In fact, the model predicts that decreasing the animal's discount rate (e.g., through pharmacological agents known to modulate the discount rate) should increase the probability that the animal shows an anticipatory response, and vice versa.
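The shivering example admits a short numerical check. The sketch below uses a one-dimensional instance of the drive in Eq. (2), with the deviation raised to the power p = n/m > 1 (here 4/3, our illustrative choice); the grid search is ours, standing in for the analytical argument of Proposition 3:

    import numpy as np

    def V(k, l, gamma=0.99, p=4.0 / 3.0):
        # Eq. (9) with drive d(x) = |x - x*|**p, written as a function of
        # the deviation from the setpoint x*.
        d = lambda dev: abs(dev) ** p
        return (d(0) - d(k)) + gamma * (d(k) - d(k - l)) + gamma**2 * (d(k - l) - d(0))

    l = 10.0                                     # temperature drop in the cold place
    ks = np.linspace(0.0, l, 1001)
    print(ks[np.argmax([V(k, l) for k in ks])])  # ~4.92; -> l/2 as gamma -> 1

As the discount factor approaches one, the optimal anticipatory shivering approaches k_x = l_x/2; with gamma = 1 exactly, V is zero for every k_x, so anticipation loses its advantage, both as derived in the text.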
5 Discussion

Despite considerable differences, some common principles are argued to govern all motivational systems. We have proposed a model that captures some of these commonalities between homeostatically regulated motivational systems. We showed how the rewarding value of an outcome can be computed as a function of the animal's internal state and the constituting elements of the outcome, through the drive-reduction equation. We further demonstrated how this computed reward can be used by different learning mechanisms to form appropriate Pavlovian or instrumental associations.

However, it should be noted that, concerning food (and other ingestible) outcomes, the basic form of the drive-reduction definition of reward (equation 3) has four interrelated problems, some of them used traditionally to criticize the drive-reduction theory: (1) Post-digestive nutritive feedback of food, defined by the drive-reduction equation, might occur hours after ingestion. Such a long delay between an instrumental response and its consequent drive-reduction effect (reward) would make it difficult for an appropriate association to be established. In fact, according to the temporal contiguity rule of associative learning, unconditioned stimuli must immediately follow the conditioned stimuli for an association to be established between them. (2) Dopamine neurons, which are supposed to carry the reward learning signal, are demonstrated to show instantaneous burst activity in response to unexpected food rewards, without waiting for the food to be digested and the drive to be reduced [16]. (3) Intravenous injection (and intragastric intubation, in some cases) of food is not rewarding, even though its drive-reduction effect is equal to when that food is ingested orally. As mentioned before, oral ingestion of the same outcome is shown to have a significant reinforcing effect [7]. (4) Palatable foods have a reinforcing effect even when they do not have any nutritional value (i.e., they do not reduce any physiological need) [16]. Making the story even more complicated, this taste-dependent rewarding value of food is demonstrated to be modulated not only by the approximated nutritional content of the food, but also by the internal physiological state of the animal [16].

However, assuming that the taste and other sensory properties of a food outcome give an estimate, \hat{k}_{1,t}, of its true nutritional content, k_{1,t}, the rewarding effect of food can be approximated by equation 3 as soon as the food is sensed or taken. This association between sensory information and post-ingestive effects of food might have been established through learning, or through evolution. This simple, plausible assumption clearly resolves the four problems listed above for the classical notion of drive reduction. It explains that gastric, olfactory, or visual information about food is necessary for its reinforcing effect to be induced, and thus intravenous injection of food is not reinforcing due to the lack of appropriate sensory information. Moreover, there is no delay in this mechanism between food intake and its drive-reduction effect, and therefore dopamine neurons can respond instantaneously. Finally, as equation 3 predicts, this taste-dependent rewarding value is modulated by the motivational state of the animal.

Previous computational accounts in the psychological literature attempting to incorporate internal-state dependence of motivation into RL models use ad hoc addition or multiplication of the drive state with the a priori reward magnitude [17, 13]. In the machine learning literature, among others, Bersini [18] uses an RL model where the agent gets punished if its internal state transgresses a predefined viability zone. Simulation results show that such a setting motivates the agent to maintain its internal variables in a bounded zone. A more recent work [12] also uses an RL framework where reward is generated by drive differences. It is demonstrated that this design allows the agent to balance the satisfaction of different drives.
Apart from physiological and behavioral plausibility, the theoretical novelty of our proposed framework is in formalizing the hypothetical concept of drive as a mapping from physiological to motivational state. This has allowed the model to show analytically that reward maximization and deviation minimization can be seen as two sides of the same coin.

6 Acknowledgements

MK and BG are supported by grants from Frontiers du Vivant, the French MESR, CNRS, INSERM, ANR, ENP and NERF.

References

[1] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, 1998.
[2] W. Schultz, P. Dayan, and P. R. Montague. A neural substrate of prediction and reward. Science, 275(5306):1593–1599, 1997.
[3] F. M. Toates. Motivational Systems. Problems in the behavioral sciences. Cambridge University Press, New York, 1986.
[4] J. E. R. Staddon. Adaptive behavior and learning. Cambridge University Press, New York, 1983.
[5] K. C. Berridge. Motivation concepts in behavioral neuroscience. Physiol Behav, 81(2):179–209, 2004.
[6] S. C. Woods and R. J. Seeley. Hunger and energy homeostasis. In C. R. Gallistel, editor, Volume 3 of Steven's Handbook of Experimental Psychology: Learning, Motivation, and Emotion, pages 633–68. Wiley, New York, third edition, 2002.
[7] N. E. Miller and M. L. Kessen. Reward effects of food via stomach fistula compared with those of food via mouth. J Comp Physiol Psychol, 45(6):555–564, 1952.
[8] C. L. Hull. Principles of behavior: an introduction to behavior theory. The Century psychology series. Appleton-Century-Crofts, New York, 1943.
[9] P. Skjoldager, P. J. Pierre, and G. Mittleman. Reinforcer magnitude and progressive ratio responding in the rat: Effects of increased effort, prefeeding, and extinction. Learn Motiv, 24(3):303–343, 1993.
[10] W. Hodos. Progressive ratio as a measure of reward strength. Science, 134:943–944, 1961.
[11] A. Dickinson and B. W. Balleine. The role of learning in motivation. In C. R. Gallistel, editor, Volume 3 of Steven's Handbook of Experimental Psychology: Learning, Motivation, and Emotion, pages 497–533. Wiley, New York, third edition, 2002.
[12] George Konidaris and Andrew Barto. An adaptive robot motivational system. In Proceedings of the 9th international conference on Simulation of adaptive behavior: from animals to animats 9, pages 346–356, 2006.
[13] N. D. Daw, Y. Niv, and P. Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat Neurosci, 8(12):1704–11, 2005.
[14] Y. Niv, D. Joel, and P. Dayan. A normative perspective on motivation. Trends Cogn Sci, 10(8):375–381, 2006.
[15] J. G. Mansfield, R. S. Benedict, and S. C. Woods. Response specificity of behaviorally augmented tolerance to ethanol supports a learning interpretation. Psychopharmacology, 79(23):94–98, 1983.
[16] L. H. Schneider. Orosensory self-stimulation by sucrose involves brain dopaminergic mechanisms. Ann. N. Y. Acad. Sci, 575:307–319, 1989.
[17] J. Zhang, K. C. Berridge, A. J. Tindell, K. S. Smith, and J. W. Aldridge. A neural computational model of incentive salience. PLoS Comp Biol, 5(7), 2009.
[18] Hugues Bersini. Reinforcement learning for homeostatic endogenous variables. In Proceedings of the third international conference on Simulation of adaptive behavior: from animals to animats 3, pages 325–333, Brighton, United Kingdom, 1994. MIT Press. ACM ID: 189936.
SpaRCS: Recovering Low-Rank and Sparse Matrices from Compressive Measurements

Andrew E. Waters, Aswin C. Sankaranarayanan, Richard G. Baraniuk
Rice University
{andrew.e.waters, saswin, richb}@rice.edu

Abstract

We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) = A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. Empirically, SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm.

1 Introduction

The explosion of digital sensing technology has unleashed a veritable data deluge that has pushed current signal processing algorithms to their limits. Not only are traditional sensing and processing algorithms increasingly overwhelmed by the sheer volume of sensor data, but storage and transmission of the data itself is also increasingly prohibitive without first employing costly compression techniques. This reality has driven much of the recent research on compressive data acquisition, in which data is acquired directly in a compressed format [1]. Recovery of the data typically requires finding a solution to an underdetermined linear system, which becomes feasible when the underlying data possesses special structure. Within this general paradigm, three important problem classes have received significant recent attention: compressive sensing, affine rank minimization, and robust principal component analysis (PCA).

Compressive sensing (CS): CS is concerned with the recovery of a vector x that is sparse in some transform domain [1]. Data measurements take the form y = A(x), where A is an underdetermined linear operator. To recover x, one would ideally solve

    \min \|x\|_0 \ \text{subject to}\ y = A(x),    (1)

where ‖x‖₀ is the number of non-zero components in x. Since this problem formulation is non-convex, CS recovery is typically accomplished via either convex relaxation or greedy approaches.

Affine rank minimization: The CS concept extends naturally to low-rank matrices. In the affine rank minimization problem [14, 23], we observe the linear measurements y = A(L), where L is a low-rank matrix. One important sub-problem is that of matrix completion [3, 5, 22], where A takes the form of a sampling operator. To recover L, one would ideally solve

    \min \ \mathrm{rank}(L) \ \text{subject to}\ y = A(L).    (2)

As with CS, this problem is non-convex, and so several algorithms based on convex relaxation and greedy methods have been developed for finding solutions.

Robust PCA: In the robust PCA problem [2, 8], we wish to decompose a matrix M into a low-rank matrix L and a sparse matrix S such that M = L + S. This problem is known to have a stable solution provided L and S are sufficiently incoherent [2]. To date, this problem has been studied only in the non-compressive setting, i.e., when M is fully available. A variety of convex relaxation methods have been proposed for solving this case.

The work of this paper stands at the intersection of these three problems.
Specifically, we aim to recover the entries of a matrix $M$ in terms of a low-rank matrix $L$ and a sparse matrix $S$ from a small set of compressive measurements $y = A(L + S)$. This problem is relevant in several application settings. A first application is the recovery of a video sequence obtained from a static camera observing a dynamic scene under changing illumination. Here, each column of $M$ corresponds to a vectorized image frame of the video. The changing illumination has low-rank properties, while the foreground innovations exhibit sparse structures [2]. In such a scenario, neither sparse nor low-rank models are individually sufficient for capturing the underlying information of the signal. Models that combine low-rank and sparse components, however, are well suited for capturing such phenomena. A second application is hyperspectral imaging, where each column of $M$ is the vectorized image of a particular spectral band; a low-rank plus sparse model arises naturally due to material properties [7]. A third application is robust matrix completion [11], which can be cast as a compressive low-rank and sparse recovery problem.

The natural optimization problem that unites the above three problem classes is
$$\text{(P1)} \quad \min \|y - A(L + S)\|_2 \quad \text{subject to} \quad \operatorname{rank}(L) \le r, \; \|\operatorname{vec}(S)\|_0 \le K. \qquad (3)$$
The main contribution of this paper is a novel greedy algorithm for solving (P1), which we dub SpaRCS, for SPArse and low Rank decomposition via Compressive Sensing. To the best of our knowledge, we are the first to propose a computationally efficient algorithm for solving a problem like (P1). SpaRCS combines the best aspects of CoSaMP [20] for sparse vector recovery and ADMiRA [17] for low-rank matrix recovery.

2 Background

Here we introduce the relevant background information regarding signal recovery from CS measurements, where our definition of signal is broadened to include both vectors and matrices. We further provide background on incoherency between low-rank and sparse matrices.

Restricted isometry and rank-restricted isometry properties: Signal recovery for a $K$-sparse vector from CS measurements is possible when the measurement operator $A$ obeys the so-called restricted isometry property (RIP) [4] with constant $\delta_{2K}$:
$$(1 - \delta_{2K})\|x\|_2^2 \le \|A(x)\|_2^2 \le (1 + \delta_{2K})\|x\|_2^2, \quad \forall \, \|x\|_0 \le K. \qquad (4)$$
This property implies that the information in $x$ is nearly preserved after being measured by $A$. Analogous to CS, it has been shown that a low-rank matrix can be recovered from a set of CS measurements when the measurement operator $A$ obeys the rank-restricted isometry property (RRIP) [23] with constant $\delta_{2r}$:
$$(1 - \delta_{2r})\|L\|_F^2 \le \|A(L)\|_F^2 \le (1 + \delta_{2r})\|L\|_F^2, \quad \forall \, \operatorname{rank}(L) \le r. \qquad (5)$$

Recovery algorithms: Recovery of sparse vectors and low-rank matrices can be accomplished when the measurement operator $A$ satisfies the appropriate RIP or RRIP condition. Recovery algorithms typically fall into one of two broad classes: convex optimization and greedy iteration. Convex optimization techniques recast (1) or (2) in a form that can be solved efficiently using convex programming [2, 27]. In the case of CS, the $\ell_0$ norm is relaxed to the $\ell_1$ norm; for low-rank matrices, the rank operator is relaxed to the nuclear norm. In contrast, greedy algorithms [17, 20] operate iteratively on the signal measurements, constructing a basis for the signal and attempting signal recovery restricted to that basis. Compared to convex approaches, these algorithms often have superior speed and scale better to large problems.
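As a quick empirical illustration of (4) (a sanity check of ours, not a substitute for a RIP proof), one can draw random $K$-sparse vectors and record how far $\|A(x)\|_2^2 / \|x\|_2^2$ strays from 1 for a scaled Gaussian operator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, K = 1024, 400, 10
A = rng.standard_normal((p, n)) / np.sqrt(p)  # scaling gives E||A x||^2 = ||x||^2

ratios = []
for _ in range(1000):
    x = np.zeros(n)
    x[rng.choice(n, size=K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.sum((A @ x) ** 2) / np.sum(x ** 2))

# For an operator with small delta_2K these ratios concentrate tightly around 1.
print(min(ratios), max(ratios))
```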
We highlight the CoSaMP algorithm [20] for sparse vector recovery and the ADMiRA algorithm [17] for low-rank matrix recovery in this paper. Both algorithms have strong convergence guarantees when the measurement operator $A$ satisfies the appropriate RIP or RRIP condition, most notably exponential convergence to the true signal.

Matrix incoherency: For matrix decomposition problems such as the robust PCA problem or the problem defined in (3) to have unique solutions, there must exist a degree of incoherence between the low-rank matrix $L$ and the sparse matrix $S$. It is known that the decomposition of a matrix into its low-rank and sparse components makes sense only when the low-rank matrix is not sparse and, similarly, when the sparse matrix is not low-rank. A simple deterministic condition can be found in the work by Chandrasekaran et al. [9]. For our purposes, we assume the following model for non-sparse low-rank matrices.

Definition 2.1 (Uniformly bounded matrix [5]) An $N \times N$ matrix $L$ of rank $r$ is uniformly bounded if its singular vectors $\{u_j, v_j, 1 \le j \le r\}$ obey
$$\|u_j\|_\infty, \|v_j\|_\infty \le \sqrt{\mu_B / N}, \quad \text{with } \mu_B = O(1),$$
where $\|x\|_\infty$ denotes the largest entry in magnitude of $x$.

When $\mu_B$ is small (note that $\mu_B \ge 1$), this model for the low-rank matrix $L$ ensures that its singular vectors are not sparse. This can be seen in the case of a singular vector $u$ by noting that $1 = \|u\|_2^2 = \sum_{k=1}^{N} u_k^2 \le \|u\|_0 \|u\|_\infty^2$. Rearranging terms enables us to write $\|u\|_0 \ge \|u\|_\infty^{-2} \ge N/\mu_B$. Thus, $\mu_B$ controls the sparsity of the matrix $L$ by bounding the sparsity of its singular vectors.

A sufficient model for a sparse matrix that is not low-rank is to assume that the support set $\Omega$ is uniform. As shown in the work of Candès et al. [2], this model is equivalent to defining the sparse support set $\Omega = \{(i, j) : \delta_{i,j} = 1\}$ with each $\delta_{i,j}$ being an i.i.d. Bernoulli variable with sufficiently small parameter $\rho_S$.

3 SpaRCS: CS recovery of low-rank and sparse matrices

We now present the SpaRCS algorithm to solve (P1) and discuss its empirical properties. Assume that we are interested in a matrix $M \in \mathbb{R}^{N_1 \times N_2}$ such that $M = L + S$, with $\operatorname{rank}(L) \le r$, $L$ uniformly bounded with constant $\mu_B$, and $\|S\|_0 \le K$ with support distributed uniformly. Further assume that a known linear operator $A : \mathbb{R}^{N_1 \times N_2} \to \mathbb{R}^p$ provides us with $p$ compressive measurements $y$ of $M$. Let $A^*$ denote the adjoint of the operator and, given the index set $T \subseteq \{1, \ldots, N_1 N_2\}$, let $A|_T$ denote the restriction of the operator to $T$. Given $y = A(M) + e$, where $e$ denotes measurement noise, our goal is to estimate a low-rank matrix $\widehat{L}$ and a sparse matrix $\widehat{S}$ such that $y \approx A(\widehat{L} + \widehat{S})$.

3.1 Algorithm

SpaRCS iteratively estimates $L$ and $S$; the estimation of $L$ is closely related to ADMiRA [17], while the estimation of $S$ is closely related to CoSaMP [20]. At each iteration, SpaRCS computes a signal proxy and then proceeds through four steps to update its estimates of $L$ and $S$. These steps are laid out in Algorithm 1. We use the notation $\operatorname{supp}(X; K)$ to denote the largest $K$-term support set of the matrix $X$. This forms a natural basis for sparse signal approximation. We further use the notation $\operatorname{svd}(X; r)$ to denote computation of the rank-$r$ singular value decomposition (SVD) of $X$ and the arrangement of its singular vectors into a set of up to $r$ rank-1 matrices. This set of rank-1 matrices serves as a natural basis for approximating uniformly bounded low-rank matrices.
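The two pruning primitives just defined translate directly into code for dense matrices; a sketch (ours) of $\operatorname{supp}(X; K)$ and $\operatorname{svd}(X; r)$:

```python
import numpy as np

def supp(X, K):
    """Best K-term approximation: keep the K largest-magnitude entries of X."""
    out = np.zeros_like(X)
    idx = np.argsort(np.abs(X).ravel())[-K:]
    out.ravel()[idx] = X.ravel()[idx]
    return out

def svd_r(X, r):
    """Best rank-r approximation of X via the truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```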
Algorithm 1: $(\widehat{L}, \widehat{S}) = \text{SpaRCS}(y, A, A^*, K, r, \epsilon)$
Initialization: $k \leftarrow 1$; $\widehat{L}_0 \leftarrow 0$; $\widehat{S}_0 \leftarrow 0$; $\Psi_L \leftarrow \emptyset$; $\Psi_S \leftarrow \emptyset$; $w_0 \leftarrow y$
while $\|w_{k-1}\|_2 \ge \epsilon$ do
  Compute signal proxy: $P \leftarrow A^*(w_{k-1})$
  Support identification: $\widehat{\Psi}_L \leftarrow \operatorname{svd}(P; 2r)$; $\widehat{\Psi}_S \leftarrow \operatorname{supp}(P; 2K)$
  Support merger: $\widetilde{\Psi}_L \leftarrow \widehat{\Psi}_L \cup \Psi_L$; $\widetilde{\Psi}_S \leftarrow \widehat{\Psi}_S \cup \Psi_S$
  Least squares estimation: $B_L \leftarrow \arg\min_{Z \in \operatorname{span}(\widetilde{\Psi}_L)} \|y - A(\widehat{S}_{k-1}) - A(Z)\|_2$; $B_S \leftarrow \arg\min_{Z:\,\operatorname{supp}(Z) \subseteq \widetilde{\Psi}_S} \|y - A(\widehat{L}_{k-1}) - A(Z)\|_2$
  Support pruning: $(\widehat{L}_k, \Psi_L) \leftarrow \operatorname{svd}(B_L; r)$; $(\widehat{S}_k, \Psi_S) \leftarrow \operatorname{supp}(B_S; K)$
  Update residue: $w_k \leftarrow y - A(\widehat{L}_k + \widehat{S}_k)$
  $k \leftarrow k + 1$
end
$\widehat{L} = \widehat{L}_{k-1}$; $\widehat{S} = \widehat{S}_{k-1}$

3.2 Performance characterization

Empirically, SpaRCS produces a series of estimates $\widehat{L}_k$ and $\widehat{S}_k$ that converge exponentially towards the true values $L$ and $S$. This performance is inherited largely from the behavior of the CoSaMP and ADMiRA algorithms, with one noteworthy modification. The key difference is that, for SpaRCS, the sparse and low-rank estimation problems are coupled. While CoSaMP and ADMiRA operate solely in the presence of the measurement noise, SpaRCS must estimate $L$ in the presence of the residual error of $S$, and vice versa. Proving convergence for the algorithm in the presence of the additional residual terms is non-trivial; simply lumping these additional residual errors together with the measurement noise $e$ is insufficient for analysis. As a concrete example, consider the support identification step $\widehat{\Psi}_S \leftarrow \operatorname{supp}(P; 2K)$, with
$$P = A^*(w_{k-1}) = A^*\big(A(S - \widehat{S}_{k-1}) + A(L - \widehat{L}_{k-1}) + e\big),$$
which estimates the support set of $S$. CoSaMP relies on high correlation between $\operatorname{supp}(P; 2K)$ and $\operatorname{supp}(S - \widehat{S}_{k-1}; 2K)$; to achieve the same in SpaRCS, $(L - \widehat{L}_{k-1})$ must be well behaved.

We are currently preparing a full theoretical characterization of the SpaRCS algorithm along with the necessary conditions that guarantee this exponential convergence property. We reserve the presentation of the convergence proof for an extended version of this work.

Phase transition: The empirical performance of SpaRCS can be charted using phase transition plots, which predict sufficient and necessary conditions on its success or failure. Figure 1 shows phase transition results on a problem of size $N_1 = N_2 = 512$ for various values of $p$, $r$, and $K$. As expected, SpaRCS degrades gracefully as we decrease $p$ or increase $r$ and $K$.

Figure 1 (panels for $r$ = 5, 10, 15, 20, 25): Phase transitions for a recovery problem of size $N_1 = N_2 = N = 512$. Shown are aggregate results over 20 Monte Carlo runs at each specification of $r$, $K$, and $p$. Black indicates recovery failure, while white indicates recovery success.

Computational cost: SpaRCS is highly computationally efficient and scales well as $N_1, N_2$ grow large. The largest computational cost is that of computing the two truncated SVDs per iteration. The SVDs can be performed efficiently via the Lanczos algorithm or a similar method. The least squares estimation can be solved efficiently using conjugate gradient or Richardson iterations. Support estimation for the sparse component merely entails sorting the signal proxy magnitudes and choosing the largest $2K$ elements.

Figure 2 compares the performance of SpaRCS with two alternate recovery algorithms. We implement CS versions of the IT [18] and APG [19] algorithms, which solve the problems
$$\min \; \tau\big(\|L\|_* + \|\operatorname{vec}(S)\|_1\big) + \tfrac{1}{2}\|L\|_F^2 + \tfrac{1}{2}\|S\|_F^2 \quad \text{s.t.} \quad y = A(L + S)$$
and
$$\min \; \|L\|_* + \|\operatorname{vec}(S)\|_1 \quad \text{s.t.} \quad y = A(L + S),$$
respectively. We endeavor to tune the parameters of these algorithms (which we refer to as CS IT and CS APG, respectively) to optimize their performance. Details of our implementation can be found in [26].
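A self-contained, simplified rendering of Algorithm 1 follows (our sketch, not the authors' reference implementation): the exact CoSaMP/ADMiRA least-squares and support-merger steps are replaced by proxy updates followed by rank-$r$ and $K$-term pruning, which makes the loop closer in spirit to iterative hard thresholding while keeping the alternating low-rank/sparse structure. A dense matrix stands in for $A$:

```python
import numpy as np

def sparcs_sketch(y, A, shape, r, K, n_iter=50):
    """Simplified SpaRCS-style recovery of (L, S) from y = A @ vec(L + S)."""
    N1, N2 = shape
    L_hat = np.zeros(shape)
    S_hat = np.zeros(shape)
    for _ in range(n_iter):
        # Low-rank step: proxy on the residual with S fixed, then rank-r prune.
        w = y - A @ (L_hat + S_hat).ravel()
        BL = L_hat + (A.T @ w).reshape(shape)
        U, s, Vt = np.linalg.svd(BL, full_matrices=False)
        L_hat = (U[:, :r] * s[:r]) @ Vt[:r, :]
        # Sparse step: proxy on the updated residual, then K-term prune.
        w = y - A @ (L_hat + S_hat).ravel()
        BS = (S_hat + (A.T @ w).reshape(shape)).ravel()
        keep = np.argsort(np.abs(BS))[-K:]
        S_hat = np.zeros(N1 * N2)
        S_hat[keep] = BS[keep]
        S_hat = S_hat.reshape(shape)
    return L_hat, S_hat
```

Run on the synthetic $(y, A)$ from the earlier sketch, `sparcs_sketch(y, A, (N1, N2), r=2, K=50)` alternates the two prunings until the residue stabilizes; the real algorithm replaces the proxy updates with least squares over the merged bases and enjoys the convergence behavior discussed above.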
In all experiments, we consider matrices of size $N \times N$ with $\operatorname{rank}(L) = 2$ and $\|S\|_0 = 0.02N^2$, and we use permuted noiselets [12] for the measurement operator $A$. As a first experiment, we generate convergence plots for matrices with $N = 128$ and vary the measurement ratio $p/N^2$ from 0.05 to 0.5. We then recover $\widehat{L}$ and $\widehat{S}$ and measure the recovered signal-to-noise ratio (RSNR) for $\widehat{M} = \widehat{L} + \widehat{S}$ via
$$\text{RSNR} = 20 \log_{10} \left( \frac{\|M\|_F}{\|M - \widehat{L} - \widehat{S}\|_F} \right).$$
These results are displayed in Figure 2(a), where we see that SpaRCS provides the best recovery. As a second experiment, we vary the problem size $N \in \{128, 256, 512, 1024\}$ while holding the number of measurements constant at $p = 0.2N^2$. We measure the recovery time required by each algorithm to reach a residual error $\|y - A(\widehat{L} + \widehat{S})\|_2 / \|y\|_2 \le 5 \times 10^{-4}$. These results are displayed in Figure 2(b), which demonstrates that SpaRCS converges significantly faster than the two other recovery methods.

Figure 2 (panels: (a) Performance, RSNR (dB) vs. $p/N^2$; (b) Timing, convergence time (sec) vs. $\log_2(N)$; curves for SpaRCS, CS APG, CS IT): Performance and run-time comparisons between SpaRCS, CS IT, and CS APG. Shown are average results over 10 Monte Carlo runs for problems of size $N_1 = N_2 = N$ with $\operatorname{rank}(L) = 2$ and $\|S\|_0 = 0.02N^2$. (a) Performance for a problem with $N = 128$ for various values of the measurement ratio $p/N^2$. SpaRCS exhibits superior recovery over the alternate approaches. (b) Timing plot for problems of various sizes $N$. SpaRCS converges in time several orders of magnitude faster than the alternate approaches.

4 Applications

We now present several experiments that validate SpaRCS and showcase its performance in several applications. In all experiments, we use permuted noiselets for the measurement operator $A$; these provide a fast transform as well as memory savings, since we do not have to store $A$ explicitly.

Video compressive sensing: The video CS problem is concerned with recovering multiple image frames of a video sequence from CS measurements [6, 21, 24]. We consider a $128 \times 128 \times 201$ video sequence consisting of a static background with a number of people moving in the foreground. We aim not only to recover the original video but also to separate the background and foreground. We resize the data cube into a $128^2 \times 201$ matrix $M$, where each column corresponds to a (vectorized) image frame. The measurement operator $A$ operates on each column of $M$ independently, simulating acquisition using a single pixel camera [13]. We acquire $p = 0.15 \times 128^2$ measurements per image frame. We recover with SpaRCS using $r = 1$ and $K = 20{,}000$. The results are displayed in Figure 3, where it can be seen that SpaRCS accurately estimates and separates the low-rank background and the sparse foreground. Figure 4 shows recovery results on a more challenging sequence with changing illumination.

Figure 3: SpaRCS recovery results on a $128 \times 128 \times 201$ video sequence. The video sequence is reshaped into an $N_1 \times N_2$ matrix with $N_1 = 128^2$ and $N_2 = 201$. (a) Ground truth for several frames. (b) Estimated low-rank component $L$. (c) Estimated sparse component $S$. The recovery SNR is 31.2 dB at the measurement ratio $p/(N_1 N_2) = 0.15$. The recovery is accurate in spite of the measurement operator $A$ working independently on each frame.

Figure 4: SpaRCS recovery results on a $64 \times 64 \times 234$ video sequence. The video sequence is reshaped into an $N_1 \times N_2$ matrix with $N_1 = 64^2$ and $N_2 = 234$. (a) Ground truth for several frames. (b) Recovered frames. The recovery SNR is 23.9 dB at the measurement ratio of $p/(N_1 N_2) = 0.33$. The recovery is accurate in spite of the changing illumination conditions.
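The RSNR quoted in these captions is simple to compute; a one-function helper (ours):

```python
import numpy as np

def rsnr_db(M, M_hat):
    """RSNR = 20 log10( ||M||_F / ||M - M_hat||_F ), in dB."""
    return 20 * np.log10(np.linalg.norm(M) / np.linalg.norm(M - M_hat))
```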
In contrast to SpaRCS, existing video CS algorithms do not work well with dramatically changing illumination.

Hyperspectral compressive sensing: Low-rank/sparse decomposition has an important physical relevance in hyperspectral imaging [7]. Here we consider a hyperspectral cube, which contains a vector of spectral information at each image pixel. A measurement device such as [25] can provide compressive measurements of such a hyperspectral cube. We employ SpaRCS on a hyperspectral cube of size $128 \times 128 \times 128$ rearranged as a matrix of size $128^2 \times 128$ such that each column corresponds to a different spectral band. Figure 5 demonstrates recovery using $p = 0.15 \times 128^2 \times 128$ total measurements of the entire data cube with $r = 8$, $K = 3000$. SpaRCS performs well in terms of residual error (Figure 5(c)) despite the number of rows being much larger than the number of columns. Figure 5(d) emphasizes the utility of the sparse component: using only a low-rank approximation (corresponding to traditional PCA) causes a significant increase in residual error over what is achieved by SpaRCS.

Parameter mismatch: In Figure 6, we analyze the influence of incorrect selection of the parameter $r$ using the hyperspectral data as an example. We plot the recovered SNR that can be obtained at various levels of the measurement ratio $p/(N_1 N_2)$ for both the case of $r = 8$ and $r = 4$. There are interesting tradeoffs associated with the choice of parameters. Larger values of $r$ and $K$ enable better approximation to the unknown signals. However, by increasing $r$ and $K$, we also increase the number of independent parameters in the problem, which is given by $(2\max(N_1, N_2)r - r^2 + 2K)$. An empirical rule of thumb for greedy recovery algorithms is that the number of measurements $p$ should be 2 to 5 times the number of independent parameters. Consequently, there exists a tradeoff between the values of $r$, $K$, and $p$ to ensure stable recovery.

Figure 5: SpaRCS recovery results on a $128 \times 128 \times 128$ hyperspectral data cube. The hyperspectral data is reshaped into an $N_1 \times N_2$ matrix with $N_1 = 128^2$ and $N_2 = 128$. Each image pane corresponds to a different spectral band. (a) Ground truth. (b) Recovered images. (c) Residual error using both the low-rank and sparse components. (d) Residual error using only the low-rank component. The measurement ratio is $p/(N_1 N_2) = 0.15$; the reconstruction SNR is 27.3 dB using both components versus 21.9 dB using the low-rank component alone.

Figure 6: Hyperspectral data recovery for various values of the rank $r$ of the low-rank matrix $L$. The data used is the same as in Figure 5. (a) $r = 1$, SNR = 12.81 dB. (b) $r = 2$, SNR = 19.42 dB. (c) $r = 4$, SNR = 27.46 dB. (d) Comparison of compression ratio $(N_1 N_2)/p$ and recovery SNR using $r = 4$ and $r = 8$. All results were obtained with $K = 3000$.

Robust matrix completion: We apply SpaRCS to the robust matrix completion problem [11]
$$\min \|L\|_* + \|s\|_1 \quad \text{subject to} \quad L_\Omega + s = y, \qquad (6)$$
where $s$ models outlier noise and $\Omega$ denotes the set of observed entries.
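In this setting the measurement operator is simply a sampling operator; a minimal sketch (ours) of $A$ and its adjoint for a vectorized $N \times N$ matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
N, frac = 128, 0.2
omega = rng.choice(N * N, size=int(frac * N * N), replace=False)  # observed set

def A_sample(m_vec):
    """Sampling operator: keep the entries of vec(L + S) indexed by omega."""
    return m_vec[omega]

def A_sample_adj(w):
    """Adjoint: zero-fill the unobserved entries."""
    out = np.zeros(N * N)
    out[omega] = w
    return out

# y = A_sample((L + S).ravel()) then plays the role of the measurements in (6).
```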
This problem can be cast as a compressive low-rank and sparse matrix recovery problem by using a sparse matrix $S$ in place of the outlier noise $s$ and realizing that the support of $S$ is a subset of $\Omega$. This enables recovery of both $L$ and $S$ from samples of their sum $L + S$. Matrix completion under outlier noise [10, 11] has received some attention and, in many ways, is the work that is closest to this paper. There are, however, several important distinctions. Chen et al. [11] analyze the convex problem (6) to provide performance guarantees. Yet convex optimization methods often do not scale well with the size of the problem. SpaRCS, by contrast, is computationally efficient and does scale well as the problem size increases. Furthermore, [10] is tied to the case where $A$ is a sampling operator; it is not immediately clear whether this analysis can extend to the more general case of (P1), where the sparse component cannot be modeled as outlier noise in the measurements.

Figure 7 (panels: (a) Performance, RSNR (dB) vs. $K/p$; (b) Timing, log execution time vs. $K/p$; curves for CVX, SpaRCS, CS IT, OptSpace): Comparison of several algorithms for the robust matrix completion problem. (a) RSNR averaged over 10 Monte Carlo runs for an $N \times N$ matrix completion problem with $N = 128$, $r = 1$, and $p/N^2 = 0.2$. Non-robust formulations, such as OptSpace, fail. SpaRCS achieves performance close to that of the convex solver (CVX). (b) Comparison of convergence times for the various algorithms. SpaRCS converges in only a fraction of the time required by the other algorithms.

In our robust matrix completion experiments we compare SpaRCS with CS SVT, OptSpace [16] (a non-robust matrix completion algorithm), and a convex solution using CVX [15]. Figure 7 shows the performance of these algorithms. OptSpace, being non-robust, fails as expected. The accuracy of SpaRCS is closest to that of CVX, although the convergence time of SpaRCS is several orders of magnitude faster.

5 Conclusion

We have considered the problem of recovering low-rank and sparse matrices given only a few linear measurements. Our proposed greedy algorithm, SpaRCS, is both fast and accurate even for large matrix sizes and enjoys strong empirical performance in its convergence to the true solution. We have demonstrated the applicability of SpaRCS to video compressive sensing, hyperspectral imaging, and robust matrix completion. There are many avenues for future work. Model-based extensions of SpaRCS are important directions: both low-rank and sparse matrices exhibit rich structure in practice, including low-rank Hankel matrices in system identification and group sparsity in background subtraction. The use of models could significantly enhance the performance of the algorithm. This would be especially useful in applications such as video CS, where the measurement operator is typically constrained to operate on each image frame individually.

Acknowledgements

This work was partially supported by the grants NSF CCF-0431150, CCF-0728867, CCF-0926127, CCF-1117939, ARO MURI W911NF-09-1-0383, W911NF-07-1-0185, DARPA N66001-11-14090, N66001-11-C-4092, N66001-08-1-2065, AFOSR FA9550-09-1-0432, and LLNL B593154. Additionally, the authors wish to thank Prof. John Wright for his helpful comments and corrections to a previous version of this manuscript.

References

[1] E. J. Candès. Compressive sampling. In Intl. Cong. of Math., Madrid, Spain, Aug. 2006.
[2] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(1):1-37, 2009.
[3] E. J. Candès and Y. Plan. Matrix completion with noise. Proc. IEEE, 98(6):925-936, 2010.
[4] E. J. Candès and J. Romberg. Quantitative robust uncertainty principles and optimally sparse decompositions. Found. Comput. Math., 6(2):227-254, 2006.
[5] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. on Info. Theory, 56(5):2053-2080, 2010.
[6] V. Cevher, A. C. Sankaranarayanan, M. Duarte, D. Reddy, R. G. Baraniuk, and R. Chellappa. Compressive sensing for background subtraction. In European Conf. Comp. Vision, Marseilles, France, Oct. 2008.
[7] A. Chakrabarti and T. Zickler. Statistics of real-world hyperspectral images. In IEEE Int. Conf. Comp. Vis., Colorado Springs, CO, June 2011.
[8] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Sparse and low-rank matrix decompositions. In Allerton Conf. on Comm., Contr., and Comp., Monticello, IL, Sep. 2009.
[9] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. arXiv preprint arXiv:0906.2220, 2009.
[10] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis. Low-rank matrix recovery from errors and erasures. arXiv preprint arXiv:1104.0354, 2011.
[11] Y. Chen, H. Xu, C. Caramanis, and S. Sanghavi. Robust matrix completion with corrupted columns. arXiv preprint arXiv:1102.2254, 2011.
[12] R. Coifman, F. Geshwind, and Y. Meyer. Noiselets. Appl. Comput. Harmon. Anal., 10:27-44, 2001.
[13] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk. Single pixel imaging via compressive sampling. IEEE Signal Processing Mag., 25(2):83-91, 2008.
[14] M. Fazel, E. Candès, B. Recht, and P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2008.
[15] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, Apr. 2011.
[16] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. J. Mach. Learn. Res., 11:2057-2078, 2010.
[17] K. Lee and Y. Bresler. ADMiRA: Atomic decomposition for minimum rank approximation. IEEE Trans. on Info. Theory, 56(9):4402-4416, 2010.
[18] Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical report, University of Illinois at Urbana-Champaign, Urbana-Champaign, IL.
[19] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. In Intl. Workshop on Comp. Adv. in Multi-Sensor Adapt. Processing, Aruba, Dutch Antilles, Dec. 2009.
[20] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal., 26(3):301-321, 2009.
[21] J. Y. Park and M. B. Wakin. A multiscale framework for compressive sensing of video. In Picture Coding Symp., Chicago, IL, May 2009.
[22] B. Recht. A simpler approach to matrix completion. J. Mach. Learn. Res., posted Oct. 2009, to appear.
[23] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471-501, 2010.
[24] A. C. Sankaranarayanan, P. Turaga, R. G. Baraniuk, and R. Chellappa. Compressive acquisition of dynamic scenes. In European Conf. Comp. Vision, Crete, Greece, Sep. 2010.
[25] T. Sun and K. Kelly. Compressive sensing hyperspectral imager. In Comput. Opt. Sensing and Imaging, San Jose, CA, Oct. 2009.
[26] A. E. Waters, A. C. Sankaranarayanan, and R. G. Baraniuk. SpaRCS: Recovering low-rank and sparse matrices from compressive measurements. Technical report, Rice University, Houston, TX, 2011.
[27] W. Yin, S. Osher, D. Goldfarb, and J. Darbon. Bregman iterative algorithms for $\ell_1$-minimization with applications to compressed sensing. SIAM J. Imag. Sci., 1(1):143-168, 2008.
Generalized Beta Mixtures of Gaussians

Artin Armagan, Dept. of Statistical Science, Duke University, Durham, NC 27708, [email protected]
David B. Dunson, Dept. of Statistical Science, Duke University, Durham, NC 27708, [email protected]
Merlise Clyde, Dept. of Statistical Science, Duke University, Durham, NC 27708, [email protected]

Abstract

In recent years, a rich variety of shrinkage priors have been proposed that have great promise in addressing massive regression problems. In general, these new priors can be expressed as scale mixtures of normals, but have more complex forms and better properties than traditional Cauchy and double exponential priors. We first propose a new class of normal scale mixtures through a novel generalized beta distribution that encompasses many interesting priors as special cases. This encompassing framework should prove useful in comparing competing priors, considering properties and revealing close connections. We then develop a class of variational Bayes approximations through the new hierarchy presented that will scale more efficiently to the types of truly massive data sets that are now encountered routinely.

1 Introduction

Penalized likelihood estimation has evolved into a major area of research, with $\ell_1$ [22] and other regularization penalties now used routinely in a rich variety of domains. Often minimizing a loss function subject to a regularization penalty leads to an estimator that has a Bayesian interpretation as the mode of a posterior distribution [8, 11, 1, 2], with different prior distributions inducing different penalties. For example, it is well known that Gaussian priors induce $\ell_2$ penalties, while double exponential priors induce $\ell_1$ penalties [8, 19, 13, 1]. Viewing massive-dimensional parameter learning and prediction problems from a Bayesian perspective naturally leads one to design new priors that have substantial advantages over the simple normal or double exponential choices and that induce rich new families of penalties. For example, in high-dimensional settings it is often appealing to have a prior that is concentrated at zero, favoring strong shrinkage of small signals and potentially a sparse estimator, while having heavy tails to avoid over-shrinkage of the larger signals. The Gaussian and double exponential priors are insufficiently flexible in having a single scale parameter and relatively light tails; in order to shrink many small signals strongly towards zero, the double exponential must be concentrated near zero and hence will over-shrink signals not close to zero. This phenomenon has motivated a rich variety of new priors such as the normal-exponential-gamma, the horseshoe and the generalized double Pareto [11, 14, 1, 6, 20, 7, 12, 2].

An alternative and widely applied Bayesian framework relies on variable selection priors and Bayesian model selection/averaging [18, 9, 16, 15]. Under such approaches the prior is a mixture of a mass at zero, corresponding to the coefficients to be set equal to zero and hence excluded from the model, and a continuous distribution, providing a prior for the size of the non-zero signals. This paradigm is very appealing in fully accounting for uncertainty in parameter learning and the unknown sparsity structure through a probabilistic framework.
One obtains a posterior distribution over the model space corresponding to all possible subsets of predictors, and one can use this posterior for model-averaged predictions that take into account uncertainty in subset selection and to obtain marginal inclusion probabilities for each predictor, providing a weight of evidence that a specific signal is non-zero while allowing for uncertainty in the other signals to be included. Unfortunately, the computational complexity is exponential in the number of candidate predictors ($2^p$, with $p$ the number of predictors).

Some recently proposed continuous shrinkage priors may be considered competitors to the conventional mixture priors [15, 6, 7, 12], yielding computationally attractive alternatives to Bayesian model averaging. Continuous shrinkage priors lead to several advantages. The ones represented as scale mixtures of Gaussians allow conjugate block updating of the regression coefficients in linear models and hence lead to substantial improvements in Markov chain Monte Carlo (MCMC) efficiency through more rapid mixing and convergence rates. Under certain conditions these will also yield sparse estimates, if desired, via maximum a posteriori (MAP) estimation and approximate inferences via variational approaches [17, 24, 5, 8, 11, 1, 2].

The class of priors that we consider in this paper encompasses many interesting priors as special cases and reveals interesting connections among different hierarchical formulations. Exploiting an equivalent conjugate hierarchy of this class of priors, we develop a class of variational Bayes approximations that can scale up to truly massive data sets. This conjugate hierarchy also allows for conjugate modeling of some previously proposed priors which have rather complex yet advantageous forms, and it facilitates straightforward computation via Gibbs sampling. We also argue intuitively that by adjusting a global shrinkage parameter that controls the overall sparsity level, we may control the number of non-zero parameters to be estimated, enhancing results if there is an underlying sparse structure. This global shrinkage parameter is inherent to the structure of the priors we discuss, as in [6, 7], with close connections to the conventional variable selection priors.

2 Background

We provide a brief background on shrinkage priors, focusing primarily on the priors studied by [6, 7] and [11, 12] as well as the Strawderman-Berger (SB) prior [7]. These priors possess some very appealing properties in contrast to the double exponential prior, which leads to the Bayesian lasso [19, 13]. They may be much heavier-tailed, biasing large signals less drastically while shrinking noise-like signals heavily towards zero. In particular, the priors by [6, 7], along with the Strawderman-Berger prior [7], have a very interesting and intuitive representation given later in (2), yet are not formed in a conjugate manner, potentially leading to analytical and computational complexity.

[6, 7] propose a useful class of priors for the estimation of multiple means. Suppose a $p$-dimensional vector $y|\theta \sim \mathcal{N}(\theta, I)$ is observed. The independent hierarchical prior for $\theta_j$ is given by
$$\theta_j \mid \tau_j \sim \mathcal{N}(0, \tau_j), \qquad \tau_j^{1/2} \sim C^+(0, \phi^{1/2}), \qquad (1)$$
for $j = 1, \ldots, p$, where $\mathcal{N}(\mu, \nu)$ denotes a normal distribution with mean $\mu$ and variance $\nu$, and $C^+(0, s)$ denotes a half-Cauchy distribution on $\Re^+$ with scale parameter $s$. With an appropriate transformation $\rho_j = 1/(1 + \tau_j)$, this hierarchy also can be represented as
$$\theta_j \mid \rho_j \sim \mathcal{N}(0, 1/\rho_j - 1), \qquad \pi(\rho_j \mid \phi) \propto \rho_j^{-1/2} (1 - \rho_j)^{-1/2} \frac{1}{1 + (\phi - 1)\rho_j}. \qquad (2)$$
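Drawing from the hierarchy in (1) and mapping to the shrinkage-coefficient view in (2) takes only a few lines; a sketch (ours), with $\phi$ treated as a fixed constant:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 1.0, 100_000

# (1): tau_j^{1/2} ~ C+(0, phi^{1/2}),  theta_j | tau_j ~ N(0, tau_j).
tau = phi * rng.standard_cauchy(n) ** 2    # square of a half-Cauchy draw
theta = rng.normal(0.0, np.sqrt(tau))

# (2): the implied shrinkage coefficients rho_j = 1 / (1 + tau_j).
rho = 1.0 / (1.0 + tau)
print(np.mean(rho))  # for phi = 1, rho_j ~ Beta(1/2, 1/2), so the mean is 1/2
```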
A special case where $\phi = 1$ leads to $\rho_j \sim \mathcal{B}(1/2, 1/2)$ (a beta distribution), whence the name of the prior, horseshoe (HS) [6, 7]. Here the $\rho_j$ are referred to as the shrinkage coefficients, as they determine the magnitude with which the $\theta_j$ are pulled toward zero. A prior of the form $\rho_j \sim \mathcal{B}(1/2, 1/2)$ is natural to consider in the estimation of a signal $\theta_j$, as this yields a very desirable behavior both at the tails and in the neighborhood of zero. That is, the resulting prior has heavy tails as well as being unbounded at zero, which creates a strong pull towards zero for those values close to zero. [7] further discuss priors of the form $\rho_j \sim \mathcal{B}(a, b)$ for $a > 0$, $b > 0$ to elaborate on their focus on the choice $a = b = 1/2$. A similar formulation dates back to [21]. [7] refer to the prior of the form $\rho_j \sim \mathcal{B}(1, 1/2)$ as the Strawderman-Berger prior due to [21] and [4]. The same hierarchical prior is also referred to as the quasi-Cauchy prior in [16]. Hence the tail behavior of the Strawderman-Berger prior remains similar to the horseshoe (when $\phi = 1$), while the behavior around the origin changes.

The hierarchy in (2) is much more intuitive than the one in (1), as it explicitly reveals the behavior of the resulting marginal prior on $\theta_j$. This intuitive representation makes these hierarchical priors interesting despite their relatively complex forms. On the other hand, what the prior in (1) or (2) lacks is a more trivial hierarchy that yields recognizable conditional posteriors in linear models.

[11, 12] consider the normal-exponential-gamma (NEG) and normal-gamma (NG) priors, respectively, which are formed in a conjugate manner yet lack the intuition that the Strawderman-Berger and horseshoe priors provide in terms of the behavior of the density around the origin and at the tails. Hence the implementation of these priors may be more user-friendly, but they are very implicit in how they behave. In what follows we will see that these two forms are not far from one another. In fact, we may unite these two distinct hierarchical formulations under the same class of priors through a generalized beta distribution and the proposed equivalence of hierarchies in the following section. This is rather important for comparing the behavior of priors emerging from different hierarchical formulations. Furthermore, this equivalence of the hierarchies will allow for a straightforward Gibbs sampling update in posterior inference, as well as making variational approximations possible in linear models.

3 Equivalence of Hierarchies via a Generalized Beta Distribution

In this section we propose a generalization of the beta distribution to form a flexible class of scale mixtures of normals with very appealing behavior. We then formulate our hierarchical prior in a conjugate manner and reveal similarities and connections to the priors given in [16, 11, 12, 6, 7]. As the name generalized beta has previously been used, we refer to our generalization as the three-parameter beta (TPB) distribution. In the forthcoming text, $\Gamma(\cdot)$ denotes the gamma function, $\mathcal{G}(\alpha, \beta)$ denotes a gamma distribution with shape and rate parameters $\alpha$ and $\beta$, $\mathcal{W}(\nu, S)$ denotes a Wishart distribution with $\nu$ degrees of freedom and scale matrix $S$, $\mathcal{U}(u_1, u_2)$ denotes a uniform distribution over $(u_1, u_2)$, $\mathrm{GIG}(\lambda, \psi, \chi)$ denotes a generalized inverse Gaussian distribution with density function $(\psi/\chi)^{\lambda/2} \{2 K_\lambda(\sqrt{\psi\chi})\}^{-1} x^{\lambda - 1} \exp\{-(\psi x + \chi/x)/2\}$, and $K_\lambda(\cdot)$ is a modified Bessel function of the second kind.
Definition 1. The three-parameter beta (TPB) distribution for a random variable $X$ is defined by the density function
$$f(x; a, b, \phi) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \, \phi^b x^{b-1} (1-x)^{a-1} \{1 + (\phi - 1)x\}^{-(a+b)}, \qquad (3)$$
for $0 < x < 1$, $a > 0$, $b > 0$ and $\phi > 0$, and is denoted by $\mathcal{TPB}(a, b, \phi)$.

It can be easily shown by a change of variable $x = 1/(y+1)$ that the above density integrates to 1. The $k$th moment of the TPB distribution is given by
$$E(X^k) = \frac{\Gamma(a+b)\Gamma(b+k)}{\Gamma(b)\Gamma(a+b+k)} \, {}_2F_1(a+b, b+k; a+b+k; 1-\phi), \qquad (4)$$
where ${}_2F_1$ denotes the hypergeometric function. In fact it can be shown that TPB is a subclass of the Gauss hypergeometric (GH) distribution proposed in [3] and the compound confluent hypergeometric (CCH) distribution proposed in [10]. The density functions of the GH and CCH distributions are given by
$$f_{GH}(x; a, b, r, \nu) = \frac{x^{b-1}(1-x)^{a-1}(1+\nu x)^{-r}}{B(b, a)\, {}_2F_1(r, b; a+b; -\nu)}, \qquad (5)$$
$$f_{CCH}(x; a, b, r, s, \nu, \theta) = \frac{\nu^b x^{b-1}(1-x)^{a-1}\{\theta + (1-\theta)\nu x\}^{-r}}{B(b, a) \exp(-s/\nu)\, \Phi_1(a, r, a+b, s/\nu, 1-\theta)}, \qquad (6)$$
for $0 < x < 1$ and $0 < x < 1/\nu$, respectively, where $B(b, a) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$ denotes the beta function and $\Phi_1$ is the degenerate hypergeometric function of two variables [10]. Letting $\nu = \phi - 1$, $r = a + b$ and noting that ${}_2F_1(a+b, b; a+b; 1-\phi) = \phi^{-b}$, (5) becomes a TPB density. Also note that (6) reduces to (5) under suitable choices of $s$, $\nu$ and $\theta$ [10]. [20] considered an alternative special case of the CCH distribution for the shrinkage coefficients $\rho_j$, obtained by fixing two of the parameters in (6); [20] refer to this special case as the hypergeometric-beta (HB) distribution. TPB and HB generalize the beta distribution in two distinct directions, with one practical advantage of the TPB being that it allows for a straightforward conjugate hierarchy, leading to potentially substantial analytical and computational gains.

Now we move on to the hierarchical modeling of a flexible class of shrinkage priors for the estimation of a potentially sparse $p$-vector. Suppose a $p$-dimensional vector $y|\theta \sim \mathcal{N}(\theta, I)$ is observed, where $\theta = (\theta_1, \ldots, \theta_p)'$ is of interest. Now we define a shrinkage prior that is obtained by mixing a normal distribution over its scale parameter with the TPB distribution.
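Equation (3) and the moment formula (4) can be verified numerically; a SciPy-based sketch (ours; the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import gamma as G, hyp2f1
from scipy.integrate import quad

def tpb_pdf(x, a, b, phi):
    """Three-parameter beta density, equation (3)."""
    return (G(a + b) / (G(a) * G(b)) * phi ** b * x ** (b - 1)
            * (1.0 - x) ** (a - 1) * (1.0 + (phi - 1.0) * x) ** (-(a + b)))

a, b, phi = 2.0, 1.0, 3.0
total, _ = quad(tpb_pdf, 0.0, 1.0, args=(a, b, phi))

# First moment via (4), checked against direct numerical integration.
m4 = (G(a + b) * G(b + 1) / (G(b) * G(a + b + 1))
      * hyp2f1(a + b, b + 1, a + b + 1, 1.0 - phi))
m_num, _ = quad(lambda x: x * tpb_pdf(x, a, b, phi), 0.0, 1.0)
print(total, m4, m_num)  # ~1.0, and the two moment values should agree
```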
The equivalence given in Proposition 1 is significant as it makes the work in Section 4 possible under the TPB normal scale mixtures as well as further revealing connections among previously proposed shrinkage priors. It provides a rich class of priors leading to great flexibility in terms of the induced shrinkage and makes it clear that this new class of priors can be considered simultaneous extensions to the work by [11, 12] and [6, 7]. It is worth mentioning that the hierarchical prior(s) given in Proposition 1 are different than the approach taken by [12] in how we handle the mixing. In particular, the first hierarchy presented in Proposition 1 is identical to the NG prior up to the first stage mixing. While fixing the values of a and b, we further mix over ?j (rather than a global ?) and further over ? if desired as will be discussed later. ? acts as a global shrinkage parameter in the hierarchy. On the other hand, [12] choose to further mix over a and a global ? while fixing the values of b and ?. By doing so, they forfeit a complete conjugate structure and an explicit control over the tail behavior of ?(?j ). As a direct corollary to Proposition 1, we observe a possible equivalence between the SB and the NEG priors. Corollary 1. If a = 1 in Proposition 1, then TPBN ? NEG. If (a, b, ?) = (1, 1/2, 1) in Proposition 1, then TPBN ? SB ? NEG. An interesting, yet expected, observation on Proposition 1 is that a half-Cauchy prior can be represented as a scale mixture of gamma distributions, i.e. if ?j ? G(1/2, ?j ) and ?j ? G(1/2, ?), then 1/2 ?j ? C + (0, ?1/2 ). This makes sense as ? 1/2 |?j has a half-Normal distribution and the mixing distribution on the precision parameter is gamma with shape parameter 1/2. [7] further place a half-Cauchy prior on ?1/2 to complete the hierarchy. The aforementioned observation helps us formulate the complete hierarchy proposed in [7] in a conjugate manner. This should bring analytical and computational advantages as well as making the application of the procedure much easier for the average user without the need for a relatively more complex sampling scheme. 1/2 Corollary 2. If ?j ? N (0, ?j ), ?j ? C + (0, ?1/2 ) and ?1/2 ? C + (0, 1), then ?j ? T PBN (1/2, 1/2, ?), ? ? G(1/2, ?) and ? ? G(1/2, 1). Hence disregarding the different treatments of the higher-level hyper-parameters, we have shown that the class of priors given in Definition 1 unites the priors in [16, 11, 12, 6, 7] under one family and reveals their close connections through the equivalence of hierarchies given in Proposition 1. The first hierarchy in Proposition 1 makes much of the work possible in the following sections. 4 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 (a) 0.0 0.2 0.4 0.2 0.6 0.4 0.8 1.0 0.6 0.8 1.0 0.6 0.8 1.0 (b) 0.8 1.0 0.0 0.2 0.4 (c) 0.0 0.6 (d) 0.6 0.8 1.0 0.0 (e) 0.2 0.4 (f) Figure 1: (a, b) = {(1/2, 1/2), (1, 1/2), (1, 1), (1/2, 2), (2, 2), (5, 2)} for (a)-(f) respectively. ? = {1/10, 1/9, 1/8, 1/7, 1/6, 1/5, 1/4, 1/3, 1/2, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10} considered for all pairs of a and b. The line corresponding to the lowest value of ? is drawn with a dashed line. 4 4.1 Estimation and Posterior Inference in Regression Models Fully Bayes and Approximate Inference Consider the linear regression model, y = X?+, where y is an n-dimensional vector of responses, X is the n ? p design matrix and  is an n-dimensional vector of independent residuals which are normally distributed, N (0, ? 2 In ) with variance ? 2 . 
4 Estimation and Posterior Inference in Regression Models

4.1 Fully Bayes and Approximate Inference

Consider the linear regression model, $y = X\beta + \epsilon$, where $y$ is an $n$-dimensional vector of responses, $X$ is the $n \times p$ design matrix and $\epsilon$ is an $n$-dimensional vector of independent residuals which are normally distributed, $\mathcal{N}(0, \sigma^2 I_n)$, with variance $\sigma^2$.

We place the hierarchical prior given in Proposition 1 on each $\beta_j$, i.e. $\beta_j \sim \mathcal{N}(0, \sigma^2 \tau_j)$, $\tau_j \sim \mathcal{G}(a, \lambda_j)$, $\lambda_j \sim \mathcal{G}(b, \phi)$. $\phi$ is used as a global shrinkage parameter common to all $\beta_j$, and may be inferred using the data. Thus we follow the hierarchy by letting $\phi \sim \mathcal{G}(1/2, \omega)$, $\omega \sim \mathcal{G}(1/2, 1)$, which implies $\phi^{1/2} \sim C^+(0, 1)$; this is identical to what was used in [7] at this level of the hierarchy. However, we do not believe the choice of prior at this level of the hierarchy will have a huge impact on the results. Although treating $\phi$ as unknown may be reasonable, when there exists some prior knowledge it is appropriate to fix a $\phi$ value to reflect our prior belief about the underlying sparsity of the coefficient vector. This is natural as soon as one starts seeing $\phi$ as a parameter that governs the multiplicity adjustment, as discussed in [7]. Note also that here we form the dependence on the error variance at a lower level of the hierarchy, rather than forming it in the prior of $\phi$ as done in [7]. If we let $a = b = 1/2$, we will have formulated the hierarchical prior given in [7] in a completely conjugate manner. We also let $\sigma^{-2} \sim \mathcal{G}(c_0/2, d_0/2)$.

Under a normal likelihood, an efficient Gibbs sampler may be obtained, as the full conditional posteriors can be extracted:
$$\beta \mid y, X, \sigma^2, \tau_1, \ldots, \tau_p \sim \mathcal{N}(\beta^*, V_\beta), \qquad \sigma^{-2} \mid y, X, \beta, \tau_1, \ldots, \tau_p \sim \mathcal{G}(c^*, d^*),$$
$$\tau_j \mid \beta_j, \sigma^2, \lambda_j \sim \mathrm{GIG}(a - 1/2, \; 2\lambda_j, \; \beta_j^2/\sigma^2), \qquad \lambda_j \mid \tau_j, \phi \sim \mathcal{G}(a + b, \; \tau_j + \phi),$$
$$\phi \mid \lambda_1, \ldots, \lambda_p, \omega \sim \mathcal{G}\Big(pb + 1/2, \; \textstyle\sum_{j=1}^{p} \lambda_j + \omega\Big), \qquad \omega \mid \phi \sim \mathcal{G}(1, \phi + 1),$$
where $\beta^* = (X'X + T^{-1})^{-1} X'y$, $V_\beta = \sigma^2 (X'X + T^{-1})^{-1}$, $c^* = (n + p + c_0)/2$, $d^* = \{(y - X\beta)'(y - X\beta) + \beta' T^{-1} \beta + d_0\}/2$, and $T = \mathrm{diag}(\tau_1, \ldots, \tau_p)$.

As an alternative to MCMC and Laplace approximations [23], a lower bound on the marginal likelihood may be obtained via variational methods [17], yielding approximate posterior distributions on the model parameters. Using a similar approach to [5, 1], the approximate marginal posterior distributions of the parameters are given by
$$\beta \sim \mathcal{N}(\beta^*, V_\beta), \qquad \sigma^{-2} \sim \mathcal{G}(c^*, d^*), \qquad \tau_j \sim \mathrm{GIG}(a - 1/2, \; 2\langle\lambda_j\rangle, \; \langle\sigma^{-2}\rangle\langle\beta_j^2\rangle),$$
$$\lambda_j \sim \mathcal{G}(a + b, \; \langle\tau_j\rangle + \langle\phi\rangle), \qquad \phi \sim \mathcal{G}\Big(pb + 1/2, \; \langle\omega\rangle + \textstyle\sum_{j=1}^{p}\langle\lambda_j\rangle\Big), \qquad \omega \sim \mathcal{G}(1, \langle\phi\rangle + 1),$$
where $\beta^* = \langle\beta\rangle = (X'X + T^{-1})^{-1} X'y$, $V_\beta = \langle\sigma^{-2}\rangle^{-1} (X'X + T^{-1})^{-1}$, $T^{-1} = \mathrm{diag}(\langle\tau_1^{-1}\rangle, \ldots, \langle\tau_p^{-1}\rangle)$, $c^* = (n + p + c_0)/2$, $d^* = \big(y'y - 2y'X\langle\beta\rangle + \sum_{i=1}^{n} x_i'\langle\beta\beta'\rangle x_i + \sum_{j=1}^{p} \langle\tau_j^{-1}\rangle\langle\beta_j^2\rangle + d_0\big)/2$, $\langle\beta\beta'\rangle = V_\beta + \langle\beta\rangle\langle\beta\rangle'$, $\langle\sigma^{-2}\rangle = c^*/d^*$, $\langle\lambda_j\rangle = (a+b)/(\langle\tau_j\rangle + \langle\phi\rangle)$, $\langle\phi\rangle = (pb + 1/2)/(\langle\omega\rangle + \sum_{j=1}^{p}\langle\lambda_j\rangle)$, $\langle\omega\rangle = 1/(\langle\phi\rangle + 1)$, and
$$\langle\tau_j\rangle = \frac{(\langle\sigma^{-2}\rangle\langle\beta_j^2\rangle)^{1/2} \, K_{a+1/2}\big((2\langle\lambda_j\rangle\langle\sigma^{-2}\rangle\langle\beta_j^2\rangle)^{1/2}\big)}{(2\langle\lambda_j\rangle)^{1/2} \, K_{a-1/2}\big((2\langle\lambda_j\rangle\langle\sigma^{-2}\rangle\langle\beta_j^2\rangle)^{1/2}\big)},$$
$$\langle\tau_j^{-1}\rangle = \frac{(2\langle\lambda_j\rangle)^{1/2} \, K_{3/2-a}\big((2\langle\lambda_j\rangle\langle\sigma^{-2}\rangle\langle\beta_j^2\rangle)^{1/2}\big)}{(\langle\sigma^{-2}\rangle\langle\beta_j^2\rangle)^{1/2} \, K_{1/2-a}\big((2\langle\lambda_j\rangle\langle\sigma^{-2}\rangle\langle\beta_j^2\rangle)^{1/2}\big)}.$$

This procedure consists of initializing the moments and iterating through them until some convergence criterion is reached. The deterministic nature of these approximations makes them attractive as a quick alternative to MCMC. The conjugate modeling approach we have taken allows for a very straightforward implementation of Strawderman-Berger and horseshoe priors, or, more generally, TPB normal scale mixture priors, in regression models, without the need for a more sophisticated sampling scheme. This may ultimately attract more audiences towards the use of these more flexible and carefully defined normal scale mixture priors.
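A minimal Gibbs sampler built on these full conditionals (our sketch, not the authors' code; it assumes a recent SciPy, whose geninvgauss handles the GIG draws, and uses $c_0 = d_0 = 0$):

```python
import numpy as np
from scipy.stats import geninvgauss

def tpbn_gibbs(y, X, a=0.5, b=0.5, n_iter=2000, seed=0):
    """Gibbs sampler for the TPBN regression hierarchy (a sketch, not tuned)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    tau, lam = np.ones(p), np.ones(p)
    phi, omega, sig2 = 1.0, 1.0, 1.0
    XtX, Xty = X.T @ X, X.T @ y
    draws = []
    for _ in range(n_iter):
        # beta | . ~ N(beta*, V) with V = sig2 * (X'X + T^{-1})^{-1}
        Vinv = XtX + np.diag(1.0 / tau)
        V = np.linalg.inv(Vinv)
        beta = rng.multivariate_normal(V @ Xty, sig2 * V)
        # sigma^{-2} | . ~ G(c*, d*), here with c0 = d0 = 0
        resid = y - X @ beta
        d_star = 0.5 * (resid @ resid + np.sum(beta ** 2 / tau))
        sig2 = 1.0 / rng.gamma((n + p) / 2.0, 1.0 / d_star)
        # tau_j | . ~ GIG(a - 1/2, psi = 2 lam_j, chi = beta_j^2 / sig2)
        chi = beta ** 2 / sig2 + 1e-12      # guard against exact zeros
        psi = 2.0 * lam
        tau = geninvgauss.rvs(a - 0.5, np.sqrt(psi * chi),
                              scale=np.sqrt(chi / psi), random_state=rng)
        # lam_j | . ~ G(a + b, tau_j + phi), then the global phi and omega
        lam = rng.gamma(a + b, 1.0 / (tau + phi))
        phi = rng.gamma(p * b + 0.5, 1.0 / (np.sum(lam) + omega))
        omega = rng.gamma(1.0, 1.0 / (phi + 1.0))
        draws.append(beta)
    return np.array(draws)
```

SciPy's geninvgauss has density proportional to $x^{\lambda-1}\exp\{-c(x + 1/x)/2\}$, so the general GIG($\lambda$, $\psi$, $\chi$) draw is obtained with $c = \sqrt{\psi\chi}$ and scale $\sqrt{\chi/\psi}$, as above.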
4.2 Sparse Maximum a Posteriori Estimation

Although not our main focus, many readers are interested in sparse solutions, hence we give the following brief discussion. Given $a$, $b$ and $\phi$, maximum a posteriori (MAP) estimation is rather straightforward via a simple expectation-maximization (EM) procedure. This is accomplished in a similar manner to [8], by obtaining the joint MAP estimates of the error variance and the regression coefficients, having taken the expectation with respect to the conditional posterior distribution of $\tau_j^{-1}$ using the second hierarchy given in Proposition 1. The $k$th expectation step then consists of calculating
$$\langle\tau_j^{-1}\rangle^{(k)} = \frac{\int_0^\infty \tau_j^{a-1/2} (1 + \tau_j/\phi)^{-(a+b)} \exp\{-\beta_j^{2(k-1)}/(2\sigma^{2(k-1)}\tau_j)\} \, d\tau_j^{-1}}{\int_0^\infty \tau_j^{a+1/2} (1 + \tau_j/\phi)^{-(a+b)} \exp\{-\beta_j^{2(k-1)}/(2\sigma^{2(k-1)}\tau_j)\} \, d\tau_j^{-1}}, \qquad (8)$$
where $\beta_j^{(k-1)}$ and $\sigma^{2(k-1)}$ denote the modal estimates of the $j$th component of $\beta$ and the error variance $\sigma^2$ at iteration $(k-1)$. The solution to (8) may be expressed in terms of special functions for changing values of $a$, $b$ and $\phi$. $b < 1$ is a good choice, as it will keep the tails of the marginal density on $\beta_j$ heavy. A careful choice of $a$, on the other hand, is essential for sparse estimation. The admissible values of $a$ for sparse estimation are apparent from the representation in Definition 2: for any $a > 1$, $\pi(\rho_j = 1) = 0$, i.e. $\beta_j$ may never be shrunk exactly to zero. Hence for sparse estimation, it is essential that $0 < a \le 1$. Figures 2(a) and (b) give the prior densities on $\rho_j$ for $b = 1/2$, $\phi = 1$ and $a \in \{1/2, 1, 3/2\}$, and the resulting marginal prior densities on $\theta_j$. These marginal densities are given by
$$\pi(\theta_j) = \begin{cases} \dfrac{1}{(2\pi^3)^{1/2}} \, e^{\theta_j^2/2} \, \Gamma(0, \theta_j^2/2), & a = 1/2, \\[2mm] \dfrac{1}{\sqrt{2\pi}} - \dfrac{|\theta_j|}{2} \, e^{\theta_j^2/2} + \dfrac{\theta_j}{2} \, e^{\theta_j^2/2} \, \mathrm{Erf}(\theta_j/\sqrt{2}), & a = 1, \\[2mm] \dfrac{\sqrt{2}}{\pi^{3/2}} \left\{1 - \dfrac{\theta_j^2}{2} \, e^{\theta_j^2/2} \, \Gamma(0, \theta_j^2/2)\right\}, & a = 3/2, \end{cases}$$
where $\mathrm{Erf}(\cdot)$ denotes the error function and $\Gamma(s, z) = \int_z^\infty t^{s-1} e^{-t} \, dt$ is the incomplete gamma function. Figure 2 clearly illustrates that while all three cases have very similar tail behavior, their behavior around the origin differs drastically.

Figure 2: Prior densities of (a) $\rho_j$ and (b) $\theta_j$ for $a = 1/2$ (solid), $a = 1$ (dashed) and $a = 3/2$ (long dash).
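Closed forms for (8) involve special functions, but for a sketch the conditional expectation can be evaluated by direct quadrature over $\tau_j$ (ours; equivalent to (8) after a change of variables, and numerically delicate for very small $\beta_j$):

```python
import numpy as np
from scipy.integrate import quad

def e_inv_tau(beta_j, sig2, a, b, phi):
    """E[tau_j^{-1} | beta_j, sig2] for the EM step (8), by quadrature."""
    c = beta_j ** 2 / (2.0 * sig2)
    kern = lambda t: t ** (a - 1.5) * (1.0 + t / phi) ** (-(a + b)) * np.exp(-c / t)
    num, _ = quad(lambda t: kern(t) / t, 0.0, np.inf)
    den, _ = quad(kern, 0.0, np.inf)
    return num / den

# Larger |beta_j| gives a smaller E[1/tau_j], i.e. weaker shrinkage in the M-step.
print(e_inv_tau(0.1, 1.0, 1.0, 0.5, 1.0), e_inv_tau(3.0, 1.0, 1.0, 0.5, 1.0))
```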
It is worth mentioning that we attain a clearly superior performance compared to the lasso, particularly in the second case, despite the fact that the estimator resulting from the variational Bayes procedure is not a thresholding rule. Note that b = 1 choice leads to much better performance under Case 2 than Case 1. This is due to the fact that Case 2 involves a much sparser underlying setup on average than Case 1 and that the lighter tails attained by setting b = 1 leads to stronger shrinkage. To give a high dimensional example, we also generate a data set from the model yi = x0i ? ? + i , for i = 1, . . . , 100, where ? ? is a 10000-dimensional very sparse vector with 10 randomly chosen components set to be 3, i ? N (0, 32 ) and xij ? N (0, 1) for j = 1, . . . , p. This ? ? choice leads to a signal-to-noise ratios of 3.16. For the particular data set we generated, the randomly chosen components of ? ? to be non-zero were indexed by 1263, 2199, 2421, 4809, 5530, 7483, 7638, 7741, 7891 and 8187. We set (a, b, ?) = (1, 1/2, 10?4 ) which implies that a priori P(?j > 0.5) = 0.99 placing much more density in the neighborhood of ?j = 1 (total shrinkage). This choice is due to the fact that n/p = 0.01 and to roughly reflect that we do not want any more than 100 predictors in the resulting model. Hence ? is used, a priori, to limit the number of predictors in the model in relation to the sample size. Also note that with a = 1, the conditional posterior distribution of ?j?1 is reduced to an inverse Gaussian. Since we are adjusting the global shrinkage parameter, ?, a priori, and it is chosen such that P(?j > 0.5) = 0.99, whether a = 1/2 or a = 1 should not matter. We first run the Gibbs sampler for 100000 iterations (2.4 hours on a computer with a 2.8 GHz CPU and 12 Gb of RAM using Matlab), discard the first 20000, thin the rest by picking every 5th sample to obtain the posteriors of the parameters. We observed that the chain converged by the 10000th iteration. For comparison purposes, we also ran the variational Bayes procedure using the values from the converged chain as the initial points (80 seconds). Figure 4 gives the posterior means attained by sampling and the variational approximation. The estimates corresponding to the zero elements of 7 0.7 1.2 0.6 1.1 0.5 1.0 0.4 0.9 (.5,.5,C+) (1,.5,C+) (.5,.5,1) (1,.5,1) (.5,1,C+) (1,1,C+) (.5,1,1) (1,1,1) (.5,.5,C+) (1,.5,C+) (.5,.5,1) (1,.5,1) (.5,1,C+) (1,1,C+) (.5,1,1) (1,1,1) (a) (b) Figure 3: Relative ME at different (a, b, ?) values for (a) Case 1 and (b) Case 2. 3 1263 4809 2199 2421 7483 7638 7741 7891 8187 5530 ? 2 1 0 0 1000 2000 3000 4000 5000 Variable # 6000 7000 8000 9000 10000 Figure 4: Posterior mean of ? by sampling (square) and by approximate inference (circle). ? ? are plotted with smaller shapes to prevent clutter. We see that in both cases the procedure is able to pick up the larger signals and shrink a significantly large portion of the rest towards zero. The approximate inference results are in accordance with the results from the Gibbs sampler. It should be noted that using a good informed guess on ?, rather than treating it as an unknown in this high dimensional setting, improves the performance drastically. 6 Discussion We conclude that the proposed hierarchical prior formulation constitutes a useful encompassing framework in understanding the behavior of different scale mixtures of normals and connecting them under a broader family of hierarchical priors. 
6 Discussion

We conclude that the proposed hierarchical prior formulation constitutes a useful encompassing framework for understanding the behavior of different scale mixtures of normals and for connecting them under a broader family of hierarchical priors. While ℓ1 regularization, namely the lasso, arising from a double exponential prior in the Bayesian framework, yields certain computational advantages, it demonstrates much inferior estimation performance relative to the more carefully formulated scale mixtures of normals. The proposed equivalence of the hierarchies in Proposition 1 makes computation much easier for the TPB scale mixtures of normals. As for the choice of hyper-parameters, we recommend a ∈ (0, 1] and b ∈ (0, 1); in particular, (a, b) = {(1/2, 1/2), (1, 1/2)}. These choices guarantee that the resulting prior has a kink at zero, which is essential for sparse estimation, and lead to tails heavy enough to avoid unnecessary bias in large signals (recall that the choice b = 1/2 yields Cauchy-like tails). In problems where oracle knowledge on sparsity exists, or when p ≫ n, we recommend that φ be fixed at a reasonable quantity to reflect an appropriate sparsity constraint, as mentioned in Section 5.

Acknowledgments

This work was supported by Award Number R01ES017436 from the National Institute of Environmental Health Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Environmental Health Sciences or the National Institutes of Health.

References

[1] A. Armagan. Variational bridge regression. JMLR: W&CP, 5:17-24, 2009.
[2] A. Armagan, D. B. Dunson, and J. Lee. Generalized double Pareto shrinkage. arXiv:1104.0861v2, 2011.
[3] C. Armero and M. J. Bayarri. Prior assessments for prediction in queues. The Statistician, 43(1):139-153, 1994.
[4] J. Berger. A robust generalized Bayes estimator and confidence region for a multivariate normal mean. The Annals of Statistics, 8(4):716-761, 1980.
[5] C. M. Bishop and M. E. Tipping. Variational relevance vector machines. In UAI '00: Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 46-53, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.
[6] C. M. Carvalho, N. G. Polson, and J. G. Scott. Handling sparsity via the horseshoe. JMLR: W&CP, 5, 2009.
[7] C. M. Carvalho, N. G. Polson, and J. G. Scott. The horseshoe estimator for sparse signals. Biometrika, 97(2):465-480, 2010.
[8] M. A. T. Figueiredo. Adaptive sparseness for supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25:1150-1159, 2003.
[9] E. I. George and R. E. McCulloch. Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88, 1993.
[10] M. Gordy. A generalization of generalized beta distributions. Finance and Economics Discussion Series 1998-18, Board of Governors of the Federal Reserve System (U.S.), 1998.
[11] J. E. Griffin and P. J. Brown. Bayesian adaptive lassos with non-convex penalization. Technical report, 2007.
[12] J. E. Griffin and P. J. Brown. Inference with normal-gamma prior distributions in regression problems. Bayesian Analysis, 5(1):171-188, 2010.
[13] C. Hans. Bayesian lasso regression. Biometrika, 96:835-845, 2009.
[14] C. J. Hoggart, J. C. Whittaker, M. De Iorio, and D. J. Balding. Simultaneous analysis of all SNPs in genome-wide and re-sequencing association studies. PLoS Genetics, 4(7), 2008.
[15] H. Ishwaran and J. S. Rao. Spike and slab variable selection: frequentist and Bayesian strategies. The Annals of Statistics, 33(2):730-773, 2005.
[16] I. M. Johnstone and B. W. Silverman. Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences. Annals of Statistics, 32(4):1594-1649, 2004.
[17] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. MIT Press, Cambridge, MA, USA, 1999.
[18] T. J. Mitchell and J. J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023-1032, 1988.
[19] T. Park and G. Casella. The Bayesian lasso. Journal of the American Statistical Association, 103:681-686, 2008.
[20] N. G. Polson and J. G. Scott. Alternative global-local shrinkage rules using hypergeometric-beta mixtures. Discussion Paper 2009-14, Department of Statistical Science, Duke University, 2009.
[21] W. E. Strawderman. Proper Bayes minimax estimators of the multivariate normal mean. The Annals of Mathematical Statistics, 42(1):385-388, 1971.
[22] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267-288, 1996.
[23] L. Tierney and J. B. Kadane. Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81(393):82-86, 1986.
[24] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 2001.
Locomotion in a Lower Vertebrate: Studies of the Cellular Basis of Rhythmogenesis and Oscillator Coupling

James T. Buchanan
Department of Biology
Marquette University
Milwaukee, WI 53233

Abstract

To test whether the known connectivities of neurons in the lamprey spinal cord are sufficient to account for locomotor rhythmogenesis, a "connectionist" neural network simulation was done using identical cells connected according to experimentally established patterns. It was demonstrated that the network oscillates in a stable manner with the same phase relationships among the neurons as observed in the lamprey. The model was then used to explore coupling between identical oscillators. It was concluded that the neurons can have a dual role as rhythm generators and as coordinators between oscillators to produce the phase relations observed among segmental oscillators during swimming.

1 INTRODUCTION

One approach to analyzing neurobiological systems is to use simpler preparations that are amenable to techniques which can investigate the cellular, synaptic, and network levels of organization involved in the generation of behavior. This approach has yielded significant progress in the analysis of rhythm pattern generators in several invertebrate preparations (e.g., the stomatogastric ganglion of lobster, Selverston et al., 1983). We have been carrying out similar types of studies of locomotor rhythm generation in a vertebrate preparation, the lamprey spinal cord, which offers many of the same technical advantages of invertebrate nervous systems. To aid our understanding of how identified lamprey interneurons might participate in rhythmogenesis and in the coupling of oscillators, we have used neural network models.

2 FICTIVE SWIMMING

The neuronal correlate of swimming can be induced in the isolated lamprey spinal cord by exposure to glutamate, which is considered to be the principal endogenous excitatory neurotransmitter. As in the intact swimming lamprey, this "fictive" swimming is characterized by periodic bursts of motoneuron action potentials in the ventral roots, and these bursts alternate between sides of the spinal cord and propagate in a head-to-tail direction during forward swimming (Cohen and Wallen, 1980; Wallen and Williams, 1984). Thus, the cellular mechanisms for generating the basic swimming pattern reside within the spinal cord, as has been demonstrated for many other vertebrates (Grillner, 1981).

Figure 1: Lamprey spinal interneurons. A, drawings of three types of interneurons after intracellular dye injections. B, inhibitory and excitatory postsynaptic potentials and the effects of selective antagonists. C, firing frequency of the first, second, and last spike intervals during a 400 ms current injection.

Figure 2: Connectivity and activity patterns. Top: synaptic connectivity among the interneurons and motoneurons (MN).
Bottom: histograms summarizing the activity of cells recorded intracellularly during fictive swimming. Timing of activity of neurons is shown relative to the onset of the ipsilateral ventral root burst.

The swimming rhythm generator is thought to consist of a chain of coupled oscillators distributed throughout the length of the spinal cord. The isolated spinal cord can be cut into pieces as small as two or three segments in length from any head-to-tail level and still exhibit alternating ventral root bursting upon application of glutamate. The intrinsic swimming frequency in each of these pieces of spinal cord can differ by as much as two-fold, and no consistent relationship between intrinsic frequency and the head-to-tail level from which the piece originated has been observed (Cohen, 1986). Thus, coupling among the oscillators must provide some "buffering capacity" to cope with these intrinsic frequency differences. Another feature of the coupling is the constancy of phase lag: over a wide range of swimming cycle periods, the delay of ventral root burst onsets between segments is a constant fraction of the cycle period (Wallen and Williams, 1984). Since the cycle period in swimming lamprey can vary over a ten-fold range, axonal conduction time is probably not a factor in the delay between segments.

3 SPINAL INTERNEURONS

In recent years, many classes of spinal neurons have been characterized using a variety of neurobiological techniques, particularly intracellular recording of membrane potential (Rovainen, 1974; Buchanan, 1982; Buchanan et al., 1989). Several of these classes of neurons are active during fictive swimming. These include the lateral interneurons (LIN), cells with axons projecting contralaterally and caudally (CC), and the excitatory interneurons (EIN). The LINs are large neurons with an ipsilaterally and caudally projecting inhibitory axon (Fig. 1A,B). The CC interneurons are medium-sized inhibitory cells (Fig. 1A). The EINs are small interneurons with ipsilaterally and either caudally or rostrally projecting axons (Fig. 1A,B,C). The axons of all these cell types project at least five segments and interact with neurons in multiple segments.

The neurons have similar resting and firing properties. They are indistinguishable in their resting potentials, their thresholds, and their action potential amplitudes, durations, and after-spike potentials. Their main differences are size-related parameters such as input resistance and membrane time constant. They fire action potentials throughout the duration of long, depolarizing current pulses, showing some adaptation (a declining frequency with successive action potentials). The plots of spike frequency vs. input current for these various cell types are generally monotonic, with a tendency to saturate at higher levels of input current (Fig. 1C) (Buchanan, 1991).

The synaptic connectivities of these cells have been established with simultaneous intracellular recording of pre- and post-synaptic neurons, and the results are summarized in Fig. 2 along with their activity patterns during fictive swimming (a programmatic encoding of this scheme is sketched below). All of the cells exhibit oscillating membrane potentials, with depolarizing peaks which tend to occur during the ventral root burst and repolarizing troughs which occur about one-half cycle later (Buchanan and Cohen, 1982). These oscillations appear to be due in large part to two phases of synaptic input: an excitatory depolarizing phase and an inhibitory repolarizing phase (Kahn, 1982; Russell and Wallen, 1983).
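The wiring just described (EINs exciting ipsilateral cells, CCs inhibiting contralateral cells, LINs inhibiting ipsilateral CCs) can be tabulated programmatically. A minimal sketch follows; the restriction to one unit per cell class per side, and the assumption that CCs inhibit all four contralateral cell types, are ours, since Fig. 2 itself carries the full pattern.

```python
SIDES, CELLS = ("L", "R"), ("EIN", "LIN", "CC", "MN")

# (presynaptic cell, postsynaptic cell, sign): +1 excitatory, -1 inhibitory
connections = []
for side in SIDES:
    other = "R" if side == "L" else "L"
    for post in ("LIN", "CC", "MN"):
        connections.append((f"{side}_EIN", f"{side}_{post}", +1))  # ipsilateral excitation
    connections.append((f"{side}_LIN", f"{side}_CC", -1))          # feedforward inhibition
    for post in CELLS:
        connections.append((f"{side}_CC", f"{other}_{post}", -1))  # crossed inhibition

# List the synaptic inputs converging on each cell:
for cell in [f"{s}_{c}" for s in SIDES for c in CELLS]:
    inputs = [f"{pre}({'+' if sign > 0 else '-'})"
              for pre, post, sign in connections if post == cell]
    print(f"{cell:5s} <- {', '.join(inputs)}")
```

The reciprocal CC-CC inhibition across the midline and the LIN burst-terminating pathway visible in this table are exactly the ingredients identified below as candidates for rhythmogenesis.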
The excitatory phase of motoneurons comes from EINs and the inhibitory phase from CCs. However, these interneurons interact not only with motoneurons but with other interneurons as well, so the possibility exists that these interneurons provide the synaptic drive for all neurons of the network, not just motoneurons. Additionally, it is possible that rhythmicity itself originates from the pattern of synaptic connectivity, because the circuit has a basic alternating network of reciprocal inhibition between CC interneurons on opposite sides of the spinal cord. Reciprocal inhibition as an oscillatory network needs some form of burst termination, and this could be provided by the feedforward inhibition of ipsilateral CC interneurons by the LINs. This inhibition could also account for the early peak observed in many CC interneurons during fictive swimming (Fig. 2).

4 NEURAL NETWORK MODEL

The ability of the network of Fig. 2 to generate the basic oscillatory pattern of fictive swimming was tested using a "connectionist" neural network simulation (Buchanan, 1992). All of the cells of the neural network had identical S-shaped input-output curves and differed only in their excitatory levels and their synaptic connectivity, which was set according to the scheme of Fig. 2 (a minimal sketch of such a simulation is given after Fig. 4 below). If the excitation of the CCs was made larger than that of the LINs, the network would oscillate (Fig. 3). These oscillations began fairly promptly and continued for at least thousands of cycles. The phase relations among the units were similar to those in the lamprey: cells on opposite sides of the spinal cord were anti-phasic, while most cells on the same side of the cord were co-active. Significantly, both in the model and in the lamprey, the CCs were phase advanced, presumably due to their inhibition by the LINs.

Figure 3: Activity of the neural network model for the lamprey locomotor circuit.

4.1 COUPLING

The neural network model of the lamprey swimming oscillator was further used to explore how the coupling among locomotor oscillators might be achieved. Two identical oscillator networks were coupled using the various pairs of cells in one network connected to pairs of cells in the second network. All nine pairs of possible connections were tested, since all of the interneurons interact with neurons in multiple segments. The coupling was evaluated by several criteria based on observations of lamprey swimming: 1) the stability of the phase difference between oscillators and the rate of achieving the steady state, 2) the ability of the coupling to tolerate intrinsic frequency differences between oscillators, and 3) the constancy of the phase lag over a wide range of oscillator frequencies.

Figure 4: Coupling between two identical oscillators. A, the connectivity. B, steady-state coupling within a single cycle. C, constancy of phase lag over a range of oscillator periods. D, adding LIN→CC coupling from oscillator a to b reverses the phase, simulating backward swimming.
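To illustrate the kind of simulation described in Section 4, here is a minimal sketch of an eight-unit network (EIN, LIN, CC and MN on each side) of identical S-shaped units wired with the sign pattern tabulated earlier. All numerical values (weights, gain, tonic drives, time constant) are our assumptions, not those of Buchanan (1992); as the text requires, the tonic excitation of the CCs is set larger than that of the LINs.

```python
import numpy as np

# Unit order: [EIN, LIN, CC, MN] for the left side, then the right side.
W = np.zeros((8, 8))  # W[post, pre]: signed connection weight (assumed magnitudes)
for L, R in ((0, 4), (4, 0)):
    W[[L + 1, L + 2, L + 3], L] = 1.0       # EIN excites ipsilateral LIN, CC, MN
    W[L + 2, L + 1] = -2.0                  # LIN inhibits ipsilateral CC (burst termination)
    W[[R, R + 1, R + 2, R + 3], L + 2] = -2.0  # CC inhibits all contralateral cells

tonic = np.tile([0.6, 0.2, 0.5, 0.0], 2)    # excitatory levels; CC drive > LIN drive

def f(x):
    """Identical S-shaped input-output curve for every cell (assumed gain)."""
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.25)))

rng = np.random.default_rng(0)
y = rng.uniform(0.0, 0.1, 8)   # small random start to break left/right symmetry
dt, tau = 0.05, 1.0
trace = []
for _ in range(4000):
    y = y + (dt / tau) * (f(W @ y + tonic) - y)  # leaky relaxation toward f(input)
    trace.append(y.copy())
trace = np.array(trace)

# If the network settles into the fictive-swimming pattern, the left and right
# motoneuron outputs (indices 3 and 7) should alternate in anti-phase, giving
# a strongly negative correlation after the transient:
print(np.corrcoef(trace[2000:, 3], trace[2000:, 7])[0, 1])
```

With these assumed constants, burst termination arises as the text suggests: the active side's EIN slowly recruits the ipsilateral LIN, which shuts off the ipsilateral CC and releases the opposite side. Tuning the drives and gain changes the period, and poor choices can abolish the oscillation altogether.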
Each of the nine pairs of coupled interneurons between oscillators was capable of producing stable phase locking, although some coupling connections operated over a much wider range of synaptic weights than others. The steady-state phase difference between the oscillators and the rate of reaching it were also dependent on the synaptic weight of the coupling connections. The direction of the phase difference, that is, whether the postsynaptic oscillator was lagging or leading, depended both on the type of postsynaptic cell and on the sign of the coupling input to it. If the postsynaptic cell was one which speeds the network (LIN or EIN), then its excitation by the coupling connection produced a lead of the postsynaptic network, and its inhibition produced a lag. The opposite pattern held for the CCs, which slow the network. An example of a coupling scheme that satisfied several criteria for lamprey-like coupling is shown in Fig. 4. In this case (Fig. 4A), there was bidirectional, symmetric coupling of the EINs in the two oscillators. This gave the network the ability to tolerate intrinsic frequency differences between the oscillators (buffering capacity). To provide a phase lag of oscillator b, EINs were connected to LINs bidirectionally but with greater weight in one direction (b to a). Such coupling reached a steady state within a single cycle (Fig. 4B), and the phase difference was maintained at the same value over a range of cycle periods (Fig. 4C).

4.2 BACKWARD SWIMMING

It has been shown recently that there is rhythmic presynaptic inhibition of interneuronal axons in the lamprey spinal cord (Alford et al., 1990). This type of cycle-by-cycle modulation of synaptic strength could account for shifts in phase coupling in the lamprey, such as occur when the animal switches to brief bouts of backward swimming. One mechanism for backward swimming might be the inhibitory connection of LINs onto CCs. The LINs have axons which descend up to 50 segments (one-half body length). In the neural network model, this descending inhibition of CC interneurons promotes backward swimming, i.e. a phase lead of the postsynaptic oscillators. Thus, presynaptic inhibition of these connections in non-local segments would allow forward swimming, while removal of this presynaptic inhibition would initiate backward swimming (Fig. 4D).

5 CONCLUSIONS

The modeling described here demonstrates that the identified interneurons in the lamprey spinal cord may be multi-functional. They are known to contribute to the synaptic input to motoneurons during fictive swimming, and thus to the shaping of the final motor output, but they may also function as components of the rhythm-generating network itself. Finally, by virtue of their multi-segmental connections, they may have the additional role of providing the coupling signals among oscillators. Further experimental work will be required to determine which of these connections are actually used in the lamprey spinal cord for these functions.

References

S. Alford, J. Christenson, & S. Grillner. (1990) Presynaptic GABAA and GABAB receptor-mediated phasic modulation in axons of spinal motor interneurons. Eur. J. Neurosci., 3:107-117.

J.T. Buchanan. (1982) Identification of interneurons with contralateral, caudal axons in the lamprey spinal cord: synaptic interactions and morphology. J. Neurophysiol., 47:961-975.

J.T. Buchanan. (1991) Electrophysiological properties of lamprey spinal neurons. Soc. Neurosci. Abstr., 17:1581.
J.T. Buchanan. (1992) Neural network simulations of coupled locomotor oscillators in the lamprey spinal cord. Biol. Cybern., 74: in press.

J.T. Buchanan & A.H. Cohen. (1982) Activities of identified interneurons, motoneurons, and muscle fibers during fictive swimming in the lamprey and effects of reticulospinal and dorsal cell stimulation. J. Neurophysiol., 47:948-960.

J.T. Buchanan, S. Grillner, S. Cullheim, & M. Risling. (1989) Identification of excitatory interneurons contributing to generation of locomotion in lamprey: structure, pharmacology, and function. J. Neurophysiol., 62:59-69.

A.H. Cohen. (1986) The intersegmental coordinating system of the lamprey: experimental and theoretical studies. In S. Grillner, P.S.G. Stein, D.G. Stuart, H. Forssberg, R.M. Herman (eds.), Neurobiology of Vertebrate Locomotion, 371-382. London: Macmillan.

A.H. Cohen & P. Wallen. (1980) The neuronal correlate of locomotion in fish: "fictive swimming" induced in an in vitro preparation of the lamprey spinal cord. Exp. Brain Res., 41:11-18.

S. Grillner. (1981) Control of locomotion in bipeds, tetrapods, and fish. In V.B. Brooks (ed.), Handbook of Physiology, Sect. 1, The Nervous System, Vol. II, Motor Control, 1179-1236. Maryland: Waverly Press.

J.A. Kahn. (1982) Patterns of synaptic inhibition in motoneurons and interneurons during fictive swimming in the lamprey, as revealed by Cl- injections. J. Comp. Neurol., 147:189-194.

C.M. Rovainen. (1974) Synaptic interactions of identified nerve cells in the spinal cord of the sea lamprey. J. Comp. Neurol., 154:189-204.

D.F. Russell & P. Wallen. (1983) On the control of myotomal motoneurones during "fictive swimming" in the lamprey spinal cord in vitro. Acta Physiol. Scand., 117:161-170.

A.I. Selverston, J.P. Miller, & M. Wadepuhl. (1983) Cooperative mechanisms for the production of rhythmic movements. Symp. Soc. Exp. Biol., 37:55-88.

P. Wallen & T.L. Williams. (1984) Fictive locomotion in the lamprey spinal cord in vitro compared with swimming in the intact and spinal animal. J. Physiol., 64:862-871.